
An essential step in the instructional design process, and in ADDIE specifically, is the evaluation process that results in the improvement of the course, leading to an ongoing cycle of analysis, design, development, implementation, and evaluation. As a designer, one of your roles may be to guide instructors in implementing a means of evaluation to identify needed improvements to their courses. The majority of this chapter is written from the perspective of the instructor rather than the designer, and describes how they can implement formative and summative evaluations into their courses to improve the instruction and, ultimately, the learning outcomes and learning experiences of their students.

Learning Objectives

  • Identify ways of collecting data and indirect feedback from peers and students about online teaching effectiveness before, during, and after the course.
  • Describe how to use this feedback to make changes to your online course, or course environment, both during the course and for future offerings.

Why is Evaluation Important?

For the instructor, evaluation is an important component of the tenure and promotion process, where the instructor must provide evidence that their teaching has been successful. For the instructional designer, it provides evidence that the course design and instruction have been conducted appropriately, resulting in effective learning outcomes. New tools and new approaches to teaching are constantly becoming available. They provide the opportunity to experiment a little to see if the results are better, and when we do that, we need to evaluate the impact of using a new tool or course design. As mentioned in the preceding section of this text, the main reason is that teaching is like golf: we strive for perfection but can never achieve it. It is always possible to improve instruction and learning, and one of the best ways of doing that is through a systematic analysis of past experience.

Teaching Effectiveness

Teaching effectiveness describes instructors’ ability to affect student success. It is usually defined according to several factors, such as how well instructors organize courses, how well they know the course material, how clearly they communicate with students, how frequently they provide timely feedback, and other criteria. In the classroom, effectiveness sometimes depends on the instructor’s enthusiasm or disposition. During fully online and blended learning courses, students often need more structure and support to succeed because their course activities usually require them to take greater responsibility for their own learning success. Therefore, many of the criteria take on even more importance when evaluating online teaching effectiveness. Online teaching is often held to higher standards than classroom teaching, and sometimes these standards have nothing to do with the teacher’s ability. For example, a technological breakdown can have a negative impact on students’ evaluation of an instructor’s work, though the instructor is rarely responsible for the technical failure.

There are many ways to evaluate teaching effectiveness in either the physical or virtual environments. Getting pointers and advice before the term begins can save you from making revisions later. Formative feedback, collected during an ongoing course, improves that specific course. Summative feedback, collected after a course ends, improves the next iterations. Feedback that applies to the instructor’s process can also improve other courses.

Few people are born with an innate ability to teach effectively online. If you have not taught online or participated in the design and development of a course before, it is a good idea to take a workshop or to work with someone to plan or create the online environment. Ask a peer to let you review an online course to see what you like or do not like about how it is constructed, how the instructor(s) provide feedback, how students are assessed, and so on. If you are inheriting an online course from someone else, try to get feedback about what has already been done. Before your course begins, ask a peer to review how appropriate the learning objectives are for the topics, as you might do for a face-to-face course.

Seek additional people who might provide comprehensive feedback, such as staff in a faculty development center or an academic technology unit. You might also try to find a fellow teacher who has supplemented face-to-face instruction, taught a hybrid course, or taught a fully online course. Even if this person works in a different department or unit, it is helpful to share your online teaching experiences with someone who has gone through the process.

If this is your first time teaching an online course, or using online components in your face-to-face or hybrid course, you do not have to use every online tool or strategy. Instead, choose one or two strategies based on your learning objectives. Writing personal teaching goals is one more practice you can try as you prepare the online environment and the materials and activities to go in it. Creating an online teaching journal allows you to track your thoughts and actions over time, and including personal teaching goals among the first entries will get you off to a good beginning.

Identifying Student Expectations and Abilities

Involve students in the teaching and learning process from the start. Students who take part in this process often become more engaged in the course itself. Let them know what you will be doing and how you will be doing it throughout the term. Tell them if you are new to online teaching. At the beginning of each term, ask students to tell you what they expect from the course, beyond the learning objectives in the syllabus. Revisit these student expectations later, when creating and using mid-semester evaluation surveys.

To prepare for collecting indirect feedback throughout the course, create a benchmark by asking students to perform certain activities at the beginning. For example, ask students to take a small quiz, define common vocabulary, or complete other minor tasks. In the first week of the course, this benchmark activity need not be worth any points, though you can assign a small number of points to motivate students to complete it. Later in the term, you can ask the students to perform the same activity to see how well they are meeting the course objectives or how well they are learning certain material, as sketched below.
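If the benchmark and its later repeat produce numeric scores, even a very small script can summarize the change. The following is a minimal sketch only; the file name and column names (benchmark.csv, week1_score, week8_score) are hypothetical placeholders for whatever your quiz tool exports.

```python
# Minimal sketch: compare benchmark quiz scores from week 1 with the repeat later in the term.
# File and column names are illustrative assumptions, not from any particular tool.
import csv
from statistics import mean

with open("benchmark.csv", newline="") as f:
    rows = list(csv.DictReader(f))

week1 = [float(r["week1_score"]) for r in rows]
week8 = [float(r["week8_score"]) for r in rows]

print(f"Average at benchmark (week 1): {mean(week1):.1f}")
print(f"Average on repeat (week 8):    {mean(week8):.1f}")

# Flag students whose repeat score did not improve, so you can offer extra support.
no_gain = [r["student"] for r in rows if float(r["week8_score"]) <= float(r["week1_score"])]
print("Students showing no improvement:", ", ".join(no_gain) or "none")
```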

You can collect formative feedback for a number of reasons: to check how things are going at a certain point; to evaluate the effectiveness of a specific assignment or resource; or to gauge student attitudes. The frequency with which instructors obtain feedback can range from once per session to once in the middle of the term. Direct methods to collect formative feedback include, but are not limited to, the following:

Peer Review and Self-Evaluation

As important as student engagement can be, student evaluations by themselves are not sufficient. Solicit peer review of specific resources, activities, or assessment strategies, your course structure, your communication strategies, or anything else about which you might have concerns. If you cannot find anyone in your school, department, or college who is also teaching online, you can ask school or district administrators, academic technology staff members, or faculty development center staff members to identify prospective peer mentors for this type of feedback. In some cases, the staff members themselves may be able to help you as well.

Another strategy is to create benchmarks for yourself and take time each week to see how you are doing. For example, if you set a goal to answer a certain number of discussion threads in a particular forum, keep track of how many replies you submit, and make adjustments. If you want to return all students’ written assignments in a certain amount of time, note how many you were able to complete within your self-imposed deadline. This will help you create more realistic expectations for yourself for future assignments.

Online Suggestion Box

Online suggestion boxes are unstructured activities that capture voluntary comments at irregular intervals throughout an entire term. You can use email or a threaded discussion forum for this activity. If you use a discussion forum, let students know if their contributions will be graded or non-graded. In some Learning Management System (LMS) solutions, you can allow anonymous comments. Tell students that you will allow anonymous comments as long as they remain constructive. You could make it a portion of a participation grade to enter a certain number of suggestions throughout the term. To focus their comments, give a list of items about which you want feedback, such as amount of respect shown to students and their ideas, variety of avenues to reach learning objectives, amount of feedback provided, relevance of coursework to the world, communication practices, or willingness to make changes based on student feedback. If it is a hybrid or face-to-face course, bring the suggestions back to the classroom and announce them in front of the class, so that students know their ideas have been heard and are being addressed.

Polling

There are various online polling tools that allow you to get small amounts of feedback in a short time. Some of these polling tools are built into LMS solutions, such as Moodle’s Choice module, allowing instructors to ask single questions related to the material, a course reading, or instructional practice.

Focus Group

A focus group (a small group of students) provides feedback to improve the design and delivery of a course. Asking a focus group to join you once a month, either physically (e.g., office hours) or virtually (e.g., chat, discussion forum), can provide valuable opportunities to improve a course. These could be the same students for the entire term or a new group of students each time. During your meeting, ask them specific questions to determine information about learning objectives, resources and how they are organized, online activities, assessment strategies, amount of feedback, or other aspects of your teaching that you want to improve.

Tip
The students are more likely to respond honestly if their comments are anonymous.

Mid-semester Evaluation Survey

If you would prefer a larger-scale approach than a focus group, try a mid-semester survey: a questionnaire, given to every student enrolled in the course at the mid-term mark, that provides valuable feedback identifying areas of the course needing improvement. If your survey software allows it, choose a Likert-scale option plus a long-answer option for each question. That way you can collect quantitative data (numbers that quickly tell you what students like or do not like) and qualitative data (written comments that tell you how to improve different parts of your class).
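Once the responses are exported, summarizing them takes only a few lines. The sketch below is illustrative and not tied to any particular survey tool; the file name midterm_survey.csv, the Likert column names, and the comments column are all assumed placeholders.

```python
# Minimal sketch: summarize mid-semester survey results exported as CSV.
# Each Likert item is assumed to be a column of ratings from 1 to 5;
# long-answer comments are assumed to be in a "comments" column.
import csv
from statistics import mean

LIKERT_ITEMS = ["clear_objectives", "timely_feedback", "workload_reasonable"]  # illustrative names

with open("midterm_survey.csv", newline="") as f:
    responses = list(csv.DictReader(f))

# Quantitative data: the average rating per item makes problem areas stand out at a glance.
for item in LIKERT_ITEMS:
    ratings = [int(r[item]) for r in responses if r[item]]
    print(f"{item}: mean {mean(ratings):.2f} (n={len(ratings)})")

# Qualitative data: collect the written comments for manual review and coding.
comments = [r["comments"].strip() for r in responses if r.get("comments", "").strip()]
print(f"\n{len(comments)} written comments to review:")
for c in comments:
    print("-", c)
```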

One-sentence Summary

The one-sentence summary is another classroom assessment technique that can be adapted to an online environment. Designed to elicit higher-level thinking, a one-sentence summary demonstrates whether or not students are able to synthesize a process or concept. Students answer seven questions separately: “Who? Does What? To Whom (or What)? When? Where? How? and Why?” Then they put those answers together into one sentence. Angelo and Cross (1993) describe this exercise in their book about classroom assessment techniques. It is fairly easy to use this technique online: you can set up a discussion forum to collect the student entries, and the online environment also makes it fairly easy to engage students in a peer review process and to provide timely feedback. When looking at the results of the students’ summaries, you can identify areas where large numbers of students did not demonstrate an understanding of the topic or concept. The most common problem area for students revolves around the question “Why?”

Student-generated test questions

Ask students to create three to five test questions each. Tell them that you will use a certain number of those questions on the actual test. By doing this, you see which course content the students think is important compared with the content you think they should focus on. You can revise your presentations to address areas that students did not cover in their questions. If there are enough good student questions, you can also use some for test review exercises.

Responding to Formative Feedback With Immediate Minor Changes

Collecting the student feedback is just the beginning. You do not have to, and should not, wait until the end of the term to start introducing changes as a result of what you learn from the students. For instance, students may use the different instruments to ask you to be more flexible in your teaching approach, to maintain a good attitude towards students and their ideas, to use more appropriate assessment methods, or to add more real-world application to the content. To continue engaging students in the process, go over the formative feedback results with them and solicit suggestions for changes. To close student performance gaps identified by indirect feedback methods, you can provide extra resources (e.g., websites, articles, or additional attention during face-to-face lectures or online recorded lectures), extra activities (e.g., self-assessment quizzes, discussion forums, wikis), or both.

Online Survey

Similar to the formative feedback surveys, you can use a closing survey to find out how students feel about specific aspects of your online teaching or their overall experience. There are numerous survey tools available; some are stand-alone online survey tools, and some are integrated into learning management systems.

Official School Evaluation

You can use the official evaluation provided by the school or campus. Students are very familiar with this evaluation, so ask them to take it seriously. If you are teaching an online course, check if your campus or school has a way to distribute the evaluation form. If not, copy the questions and conduct it yourself using one of the techniques described above. In some cases, instructors can add questions to gather data about specific teaching practices. Use this opportunity to learn how students feel overall about the experience; how they feel about specific content, activities, or assessment strategies; or how they feel about your teaching. If the official course evaluation is conducted online, then you will be able to code the qualitative comments to find common student likes and dislikes.

Once you have the summative feedback, you can let it sit for a while, or immediately begin revising the online course for the next iteration. Changes might include being more flexible in the teaching approach, having a better attitude towards students and their ideas, using more appropriate assessment methods, adding more real-world applications, and so on. Since the new students most likely did not see the previous version of the course, you are free to do anything, from making minor changes here and there to completely starting over. Usually it is somewhere between the two.

Maintain your sanity by keeping the short-term solutions on a small scale. Unless it is based on feedback given before the course, or right after it begins, it is a good idea to wait until after the course is over to institute a major revision. If there is a good reason to make a major change, communicate clearly and often with your students about what is going to happen and how it might affect them. You now have a wide array of possibilities to help you evaluate and improve your online teaching effectiveness. It can be tough to go through the online teaching process, to hear or read student criticism, and to make adjustments to your carefully planned course. If you can laugh at yourself and remember that the students are on your side, then you are on your way to teaching effectively online.

What to Evaluate: Summative

At some point it will be necessary to redesign most courses and programs, so it is important to know whether the redesigned courses are more effective than the previous versions.

One way of evaluating new courses is to see how they compare with the older courses, for instance:

  • completion rates will be at least as good if not better for the new version of the course(s)
  • grades or measures of learning will be at least as good if not better for the new version.

The first two criteria are relatively easily measured in quantitative terms. We should be aiming for completion rates of at least 85 percent, meaning that of every 100 students who start the course, at least 85 complete it by passing the end-of-course assessment (unfortunately, many current courses fail to achieve this rate, but if we value good teaching, we should be trying to bring as many students as possible to the set standard).

The second criterion is to compare the grades. We would expect at least as many As and Bs in our new version as in the old classroom version, while maintaining the same (hopefully high) standards or higher.
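Both criteria reduce to simple arithmetic once the numbers are in hand. The sketch below uses invented figures purely to illustrate the comparison; it is not data from the text.

```python
# Minimal sketch: the two quantitative criteria above, completion rate and grade
# distribution, compared between the old and new versions of a course.
from collections import Counter

def completion_rate(started: int, passed: int) -> float:
    """Percentage of students who started the course and passed the final assessment."""
    return 100.0 * passed / started

# Hypothetical figures only.
print(f"Old version: {completion_rate(100, 78):.0f}% completion")
print(f"New version: {completion_rate(100, 87):.0f}% completion (target: at least 85%)")

old_grades = ["A", "B", "B", "C", "C", "D", "F"]   # hypothetical grade lists
new_grades = ["A", "A", "B", "B", "B", "C", "C"]

for label, grades in [("Old", old_grades), ("New", new_grades)]:
    counts = Counter(grades)
    a_and_b = counts["A"] + counts["B"]
    print(f"{label} version: {dict(counts)}, A/B count = {a_and_b}")
```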

However, to be valid, the evaluation would also need to define the knowledge and skills within a course that meet the needs of a digital age, and then measure how effectively the teaching developed them. Thus a third criterion would be:

  • the new design(s) will lead to new and different learning outcomes that are more relevant to the needs of a digital age.

This third criterion is more difficult, because it suggests a change in the intended learning goals for courses or programs. This might include assessing students’ communication skills with new media, or their ability to find, evaluate, analyze and apply information appropriately within the subject domain (knowledge management), which have not previously been (adequately) assessed in the classroom version. This requires a qualitative judgement as to which learning goals are most important, and this may require endorsement or support from a departmental curriculum committee or even an external accreditation body.

What to Evaluate: Formative

However, even if we measure the course by these three criteria, we will not necessarily know what worked and what didn’t in the course. We need to look more closely at factors that may have influenced students’ ability to learn. Some of the questions to which you may want to get answers are as follows:

  • Were the learning outcomes or goals clear to students?
  • What learning outcomes did most students struggle with?
  • Was the teaching material clear and well structured?
  • Were the learning materials and tools students needed easily accessible and available 24/7?
  • Which topics generated good discussion, and which didn’t?
  • Did students draw appropriately on the course materials in their discussion forums or assignments?
  • Did students find their own appropriate sources and use them well in discussions, assignments and other student activities?
  • Which student activities worked well, and which worked badly? Why?
  • Which of the supplied learning materials did students make the most and the least use of?
  • Did the assignments adequately assess the knowledge and skills the course was aiming to teach?
  • Were the students overloaded with work?
  • Was it too much work for me as an instructor?
  • If so, what could I do to better manage my workload (or the students’) without losing quality?
  • How satisfied were the students with the course?
  • How satisfied am I with the course?

Here are some ways that these questions can be answered without creating a huge amount of extra work.

How to Evaluate Factors Contributing to or Inhibiting Learning

There is a range of resources you can draw on to do this, far more in fact than for evaluating traditional face-to-face courses, because online learning leaves a traceable digital trail of evidence:

  • student grades;
  • individual student participation rates in online activities, such as self-assessment questions, discussion forums, podcasts;
  • qualitative analysis of the discussion forums, for instance the quality and range of comments, indicating the level or depth of engagement or thinking;
  • student e-portfolios, assignments and exam answers;
  • student questionnaires;
  • focus groups.

However, before starting, it is useful to draw up a list of questions as in the previous section, and then look at which sources are most likely to provide answers to those questions.

At the end of a course, look at the student grades and identify which students did well and which struggled. This depends, of course, on the number of students in a class; in a large class, sample by grades. Then go back to the beginning of the course and track their online participation as far as possible (learning analytics make this much easier, although it can also be done manually if a learning management system is used). I find that some factors are student-specific (e.g. a gregarious student who communicates with everyone) and some are course-specific, for example, related to learning goals or the way I have explained or presented content. This qualitative approach will often suggest changes to the content or the way I interacted with students for the next version of the course. I may also decide, next time, to manage more carefully students who ‘hog’ the conversation.
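Where the LMS lets you export activity data, even a rough cross-tabulation of participation against grades can point to where to look more closely. The sketch below assumes a hypothetical export file activity.csv with illustrative column names; it is a starting point for this kind of review, not a tool named in the text.

```python
# Minimal sketch: compare participation between higher- and lower-achieving students.
# Assumes a hypothetical LMS export "activity.csv" with columns:
# student, final_grade (0-100), forum_posts, quiz_attempts  (names are illustrative).
import csv
from statistics import mean

with open("activity.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Split the class into higher- and lower-achieving halves by final grade ...
rows.sort(key=lambda r: float(r["final_grade"]), reverse=True)
half = len(rows) // 2
top, bottom = rows[:half], rows[half:]

# ... and compare their average participation, to see whether low participation
# tracks with struggling students (a prompt for closer manual review, not proof).
for label, group in [("Higher-achieving", top), ("Lower-achieving", bottom)]:
    posts = mean(float(r["forum_posts"]) for r in group)
    quizzes = mean(float(r["quiz_attempts"]) for r in group)
    print(f"{label}: avg forum posts {posts:.1f}, avg quiz attempts {quizzes:.1f}")
```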

Many institutions have a ‘standard’ student reporting system at the end of each course. These are often of little use for evaluating courses with an online component, because the questions asked need to be adapted to the mode of delivery. However, because such questionnaires are used for cross-course comparisons, the people who manage them are often reluctant to offer a different version for online teaching. Secondly, because these questionnaires are usually completed voluntarily by students after the course has ended, completion rates are notoriously low (less than 20 per cent). Results from such low response rates are usually worthless or, at best, highly misleading: students who have dropped out of the course rarely even look at the questionnaire, so the responses tend to be heavily biased towards successful students. Yet it is the students who struggled or dropped out that you most need to hear from.

Small focus groups work better than student questionnaires, and for this I prefer either face-to-face meetings or synchronous tools. Deliberately approach seven or eight specific students covering the full range of achievement, from drop-out to A, and conduct a one-hour discussion around specific questions about the course. If a selected student does not want to participate, try to find another in the same category. If you can find the time, two or three such focus groups will provide more reliable feedback than just one.

Innovate

Spend quite a bit of time at the end of the first offering of a redesigned course evaluating it and making changes for the next version, usually working with a trusted instructional designer. After that, concentrate mainly on ensuring completion rates and grades are at the standard aimed for. In the third or subsequent offerings, look at ways to improve the course in response to new external factors, such as new software (for instance, an e-portfolio package) or new processes (for instance, student-generated content using mobile phones or cameras, or collecting project-related data). This keeps the course ‘fresh’ and interesting. However, you may need to limit yourself to one substantive change at a time, partly for workload reasons but also because this way it is easier to measure the impact of the change.

It is important to remember that the aim is to enable students to learn effectively. Start with approaches that are known to work, then gradually add and evaluate new tools and new approaches to learning as you become more experienced.

Lastly, if you do make an interesting innovation in your course, make sure you properly evaluate it, then share these findings with colleagues and help them either include the innovation within their own course, or help them make the innovation even better through their own modifications. That way we can all learn from each other.

Key Terms

  • Teaching Effectiveness describes instructors’ ability to affect student success.
  • Formative Feedback, collected during an ongoing course, improves that specific course.
  • Summative Feedback, collected after a course ends, improves the next iterations. Feedback that applies to the instructor’s process can also improve other courses.
  • Online Suggestion Boxes are unstructured activities that capture voluntary comments at irregular intervals throughout an entire term.
  • A Focus Group (a small group of students) provides feedback to improve upon the design and delivery of a course.
  • A Mid-Semester Survey is a questionnaire that surveys every student enrolled in a course at the mid-term mark and provides valuable feedback that identifies areas needing improvement in a course.
  • A One-Sentence Summary is a classroom assessment technique, adaptable to an online environment, in which students synthesize a process or concept into a single sentence.

Key Takeaways

  • Evaluation is important for many reasons, among them collecting evidence that the instructor was effective and that the course design and instruction resulted in successful learning outcomes for students. In addition, new tools and new approaches to teaching are constantly becoming available. Both formative and summative evaluations provide a means to determine what is not working and what might be needed to ultimately improve the instruction and, thus, the learning outcomes.
  • Consider key questions when conducting a formative evaluation of a course, including the factors contributing to or inhibiting learning.
  • Become familiar with several ways to conduct formative evaluations of a course while the course is active.

Exercises

  1. Conduct an evaluation of your course (or that of a peer). What changes, if any, would you make as a result?

 

OER Derivative Licenses and Attributions

CC LICENSED CONTENT, ORIGINAL

Experiential Learning in Instructional Design and Technology, Chapter 3.3 Evaluation Essentials. Provided by: the authors under an Attribution 4.0 International (CC BY 4.0) license.

This chapter contains an adaptation of Teaching in a Digital Age by Bates, A. W., and is used under a CC-BY-NC 4.0 International license.

This chapter also contains an adaptation from Education for a Digital World: Advice, Guidelines and Effective Practice from Around the Globe by BCcampus and the Commonwealth of Learning, and is used under a CC-BY-SA 3.0 International license.


License


Experiential Learning in Instructional Design and Technology Copyright © 2018 by jhill5; Joshua Hill; and Linda Jordan is licensed under a Creative Commons Attribution 4.0 International License, except where otherwise noted.
