How to evaluate innovative online teaching methods
Published on Sep 21, 2020.
As an educator, you have an opportunity to pilot and evaluate innovative online teaching methods that improve your students’ learning experience. This is especially true now that COVID-19 has made remote teaching and learning common.
A recent article, From Modules to MOOCs: Application of the Six-Step Approach to Online Curriculum Development for Medical Education, provides a framework for developing new online curricula in medical education.
In it, the authors outline six steps for building new curricula:
Problem identification and general needs assessment.
Targeted needs assessment.
Goals and objectives.
Educational strategies.
Implementation.
Evaluation and feedback.
In this article, we’ll share how you can evaluate new online teaching methods using the ten-task evaluation process outlined in the manuscript above. Read on to learn about this system.
Identify evaluation users
In this step, you need to identify who you want to complete your evaluation. Students? Faculty? Staff? The tech support team? Perhaps even a historical cohort of students who took the course before your pilot and never used the innovation?
Consider who you need to solicit data from in order to determine the success or failure of your tested innovation at this stage, and ask those users to take part in the evaluation.
Articulate user needs
Why are you implementing or creating this innovation in the first place and what user needs do you want to address? When identifying the user needs you aim to focus on, you can look to your learning objectives as a guide.
Prioritize evaluation questions
At this stage, you need to identify the absolutely essential “must have” or “cannot be excluded” questions you simply need to ask your evaluators. While you might feel tempted to add dozens of questions, remember that you need to balance the comprehensiveness of your evaluation with potential respondent fatigue.
Ask too many questions, and respondents may avoid completing your evaluation. Ask too few questions, and you won’t have enough data to determine an innovation’s effectiveness.
Choose evaluation designs
If you didn’t have time to design an evaluation before your learners completed the innovation, then you can’t run a pre/post test. You might instead consider a retrospective pre-test: at the end, ask your learners to rate both what they know now and, in hindsight, what they knew at the beginning.
Other questions you can ask that will help you figure out your evaluation design include:
Do you want to have a historical control group?
Do you want to ask students before you expose them to the innovation what they think they might need?
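If you do end up with matched pre and post ratings, whether from a true pre-test or a retrospective pre-test, a paired comparison is a common way to summarize the change. A minimal sketch in Python, assuming 5-point Likert ratings collected from the same learners at both timepoints (the function name and all data are hypothetical):

```python
from math import sqrt
from statistics import mean, stdev

def paired_change(pre, post):
    """Summarize change in matched Likert ratings (same learners, two timepoints)."""
    assert len(pre) == len(post), "ratings must be matched per learner"
    diffs = [after - before for before, after in zip(pre, post)]
    d_mean = mean(diffs)
    # Paired t statistic: mean difference divided by its standard error.
    t = d_mean / (stdev(diffs) / sqrt(len(diffs)))
    return {"mean_change": d_mean, "t_stat": round(t, 2)}

# Hypothetical retrospective pre-test data: at the end of the module, each
# learner rates what they knew "then" (pre) and what they know "now" (post).
pre = [2, 1, 3, 2, 2, 1, 3, 2]
post = [4, 3, 4, 3, 4, 3, 5, 4]
print(paired_change(pre, post))
```

A statistician colleague can advise on whether a paired t-test or a non-parametric alternative is appropriate for your ordinal Likert data; the sketch only illustrates why matched timepoints make the comparison possible.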
Select evaluation methods
Similar to task three above, you need to balance the completeness of your evaluation against what participants will reasonably complete. As your evaluation becomes more exhaustive, it will be harder to complete.
When selecting your evaluation methods, consider whether you want to conduct surveys, focus groups, or both. As well, determine whether you want to include all students in the evaluation or only a subset of them. Finally, decide on the timepoints of your evaluation: will you administer it at the beginning, middle, or end of your curricular innovation, or at all three?
Choose or construct evaluation instruments
The first question you should ask when choosing or constructing instruments for your evaluation is: “Is there an existing, previously validated evaluation instrument that I can use (with permission)?”
If not, the second question you should ask is: “Can I modify an existing instrument to more closely align with my use case (e.g. change ‘medical student’ in the question stem to ‘nursing student’)?”
Finally, if you decide to create a new survey instrument yourself, you will need to decide how many items to include; whether to use open-ended questions, Likert-style questions, checkboxes, or a combination of them; how many anchor points your Likert-style questions will have; and so on.
Ideally, every scholar would use previously validated survey tools. In reality, many educators create their own surveys, and those who do should design them deliberately.
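One way to think deliberately about a home-grown instrument is to write each item down with its format and anchors before building anything in a survey tool. A minimal sketch, assuming a mixed-format instrument for an online module (the class name, item wording, and anchors are all hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class SurveyItem:
    """One evaluation question; kind is 'likert', 'checkbox', or 'open'."""
    prompt: str
    kind: str
    anchors: list = field(default_factory=list)  # labels for Likert or checkbox options

# Hypothetical mixed-format instrument for an online-module evaluation.
instrument = [
    SurveyItem("The module's objectives were clear.", "likert",
               ["Strongly disagree", "Disagree", "Neutral", "Agree", "Strongly agree"]),
    SurveyItem("Which features did you use?", "checkbox",
               ["Videos", "Quizzes", "Discussion board"]),
    SurveyItem("What would you change about the module?", "open"),
]

# Counting items per format makes respondent burden visible at a glance.
print({item.kind: sum(1 for i in instrument if i.kind == item.kind) for item in instrument})
```

Laying the instrument out this way makes it easy to count items per format and to spot an evaluation that has drifted toward respondent fatigue.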
Address ethical concerns
When designing a curricular innovation, you must consider the ethical implications of how you evaluate it. Do you want participants to see each other’s answers (focus groups, for example, allow collaboration across respondents)? Should you know who is answering (i.e. will responses be anonymous)? If you intend to include the results in any future scholarly output (manuscript, poster, conference submission, etc.), did you, or will you, apply for IRB approval?
There’s a fine line between educational improvement (which does not require IRB approval) and work that advances the science of learning (which does). If you assume everything you do is continual educational improvement, you may not be meeting the highest ethical standard, because you are not letting the IRB committee determine whether your work qualifies as research.
Collect data
Because you are evaluating an online teaching method, incorporating an online evaluation strategy should be relatively straightforward. That said, you should plan this step as early as when you first design the curricular innovation.
Why? Evaluation is easy to overlook while designing the innovation itself, and it can be hard to get all pilot learners to complete an evaluation after the fact.
Analyze data
This step is the same as for any other teaching innovation. If you don’t have statistical expertise, enlist statistician colleagues to help analyze your evaluation data.
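Even when a statistician runs the formal analysis, a quick descriptive summary of each item helps you sanity-check the data before handing it over. A minimal Python sketch, assuming 5-point Likert responses to a single item (the function name and all responses are hypothetical):

```python
from collections import Counter
from statistics import mean

def summarize_item(ratings, scale_max=5):
    """Descriptive summary for one Likert-style item."""
    counts = Counter(ratings)
    return {
        "n": len(ratings),
        "mean": round(mean(ratings), 2),
        # Show every anchor point, including ones nobody chose.
        "distribution": {point: counts.get(point, 0) for point in range(1, scale_max + 1)},
    }

# Hypothetical responses to "The online module improved my understanding."
responses = [4, 5, 3, 4, 4, 5, 2, 4]
print(summarize_item(responses))
```

A skewed distribution or an item everyone answered identically is worth flagging to your statistical collaborators before any inferential analysis.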
Report results
You owe it to your curricular innovations to showcase them to the world of fellow health professional educator-innovators. If you don’t share what worked and what didn’t, how can we truly advance the science of learning as fast as possible?
Remember that there are lots of ways to disseminate your education scholarship: posters and presentations at local, regional, national and international conferences, as well as manuscripts to education-centric journals (e.g. MedEdPORTAL, MedEdPublish, etc.).
As you evaluate innovative online teaching methods, look for unexpected outcomes and what you can learn from them. How did your innovation affect your teaching and your students’ learning in ways you did not anticipate? You may be able to draw important conclusions from these unanticipated results.