Unlocking the Secrets of Success: Effectively Evaluating Your Course or Program

Published on Oct 31, 2023. Updated on Nov 6, 2023.
In today's Osmosis blog, Dr. Sean Tackett explains how to evaluate a course or program: how to design an effective evaluation framework and how to avoid missing important issues.
We all have blind spots. Recently, while working with colleagues on an article about "blind spots" in medical education [1], I was reminded of the parable of the blind men and the elephant: each man touches a different part of the elephant and mistakes it for a different object entirely. Only by combining their perspectives can they understand the elephant before them.
Evaluating programs is like trying to "see the elephant," only more challenging: the elephant is not just large and hard to grasp, it's also moving. We're trying to understand not only what we're observing but also where it's heading.
It takes many hands, coordinated well, to conduct a thorough program evaluation. There are countless frameworks for doing so, with a movement toward thinking of programs as complex systems [2]. To begin to handle this complexity, it helps to keep some guiding principles in mind:
- Find your "why." The starting point for any program evaluation is understanding the goals or outcomes the program is expected to achieve, often the learning objectives in educational programs.
- Stuff happens. It's not enough to determine to what extent program outcomes were achieved; systematically seeking out the unexpected, whether pleasant surprises or negative unintended consequences, is key.
- No two minds think alike. Everyone will have a different experience in an educational program: some may feel they won while others feel they lost, and someone with an advantage in one respect might have a disadvantage in another.
- You'll never really see the whole elephant. We would love to have infinite information, but we will inevitably be blind to some elements of the program. Prioritizing the use of resources is critical.
Starting Your Program Evaluation
Let's walk through a stepwise approach that can help health professionals get started with program evaluation and avoid missing important issues. The steps below are based on the ten evaluation tasks in Curriculum Development for Medical Education: A Six-Step Approach [3].
Tasks 1-2: Identify users and uses of the evaluation
Who are the stakeholders? Evaluators and administrators typically want to know whether something is working, and so do the learners. There may be others to consider, such as regulatory bodies (e.g., accreditors) and the patients, caregivers, and communities the program ultimately serves. Each will have their own interests in what evaluation data you gather [4].
Task 3: Identify resources
How much time, expertise, and other support do you have? Can you advocate for more? Ultimately, your evaluation design should be prioritized based on the available resources, and knowing what you have upfront will keep your design feasible.
Task 4: Identify evaluation questions
Once you have a clearer understanding of the purpose(s) of the evaluation (i.e., its users and uses) and what's possible with the available resources, you are ready to consider specifically what questions your evaluation should answer. As mentioned above, you'll want to make sure that evaluation questions align with program objectives (e.g., Have learners achieved the expected level of performance?) and ask about the unexpected (e.g., What else happened during the program?). It's often helpful to think through the program using a logic model, considering its inputs, processes, and outputs, as well as the outcomes and impact that lie further downstream from the program.
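To make the logic-model idea concrete, here's a minimal sketch in Python. Everything in it (the hypothetical suturing workshop, its components, and the questions) is invented for illustration, not a standard schema; the point is simply that pairing each stage of a logic model with at least one evaluation question makes gaps easy to spot.

```python
# A sketch of one way to keep evaluation questions aligned with a logic
# model. The program and all its components are invented for illustration.
logic_model = {
    "inputs": "faculty time, simulation lab, suture kits",
    "processes": "four hands-on sessions with peer feedback",
    "outputs": "20 residents complete the workshop",
    "outcomes": "suturing checklist scores improve at 3 months",
    "impact": "fewer wound complications on the service",
}

evaluation_questions = {
    "outputs": "How many learners finished, and who dropped out?",
    "outcomes": "Did learners reach the expected performance level?",
    "impact": "What else happened that we did not anticipate?",
}

# Flag stages of the model that no evaluation question covers yet,
# a simple guard against evaluation blind spots.
for stage, description in logic_model.items():
    question = evaluation_questions.get(stage, "NO QUESTION YET (blind spot?)")
    print(f"{stage:>9} ({description}): {question}")
```

Run as written, this flags "inputs" and "processes" as uncovered, prompting questions such as whether the resources and sessions were delivered as planned.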
Tasks 5-6: Choose evaluation designs and measurement methods
Designs and measures should align with each question. For example, pre-post quantitative measures (e.g., knowledge exams) can be effective for evaluating performance improvement. Quantitative measures also lend themselves to comparative designs (e.g., including control groups) that can determine whether learning improved as a result of the program. Qualitative methods are typically required to capture the emergent aspects of a program, with data collected after some experience in the program or after it has ended. Going back to the users and uses may prompt you to think beyond simply assessing learners and to consider other stakeholders. Whose perspectives, besides the learners', should you measure? What do other healthcare team members and patients think?
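As a concrete illustration of the pre-post quantitative design mentioned above, here's a minimal sketch in Python. The exam scores are invented, and the paired t-test is just one reasonable analysis choice; a real evaluation would also check the test's assumptions, report effect sizes, and ideally include a comparison group.

```python
# A minimal pre-post comparison on knowledge exam scores (0-100 scale).
# Scores are invented for illustration; real data would come from your
# program's assessments, with one pre and one post score per learner.
from scipy import stats

pre_scores = [62, 70, 55, 68, 74, 60, 66, 71]    # before the program
post_scores = [75, 78, 63, 80, 85, 70, 72, 79]   # same learners, after

# Paired t-test: did the same learners score higher after the program?
result = stats.ttest_rel(post_scores, pre_scores)
mean_gain = sum(a - b for a, b in zip(post_scores, pre_scores)) / len(pre_scores)

print(f"Mean gain: {mean_gain:.1f} points (p = {result.pvalue:.3f})")
```

Note that a statistically significant gain still doesn't tell you why scores improved or whether they would have improved anyway; that's where the comparative and qualitative designs above come in.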
Task 7: Address ethical concerns
Formal ethical approval is needed for anything done systematically for scholarly or publication purposes, and most settings have laws and regulations governing data collection and confidentiality. Even when formal approval isn't required, ethics still apply: be aware of sensitive issues, and don't collect more information than you need.
Tasks 8-10: Collect and analyze data and report results
By now, you're ready to gather and analyze data aligned with your chosen designs and methods. Consider a variety of ways to report your findings so that they meet the needs of all interested stakeholders. Seeking stakeholder input when interpreting your findings may be the best way to determine whether what you did "worked" from the perspectives of everyone involved.
When you've gone through the ten tasks, you'll have a sense of whether what you're doing is working and should have ideas about what changes need to be made. No curriculum or course is perfect, after all.
Even if you're not developing a comprehensive program evaluation from scratch, the principles described here still apply. For example, it's common to want to make a small change, such as adopting a new teaching technology, and then understand its effects. In this case, you still want to start by clarifying the purposes of implementing the technology: Was the intention to improve learning efficiency? Enhance learners' convenience and enjoyment? Increase access for more learners? Once these purposes are clear, you can start at Task 1 and think through who the various stakeholders are and what evidence would show them that the innovation has value.
You might also incorporate frameworks specific to the changes you are making. For example, frameworks for educational technology adoption in medical education recommend paying attention to technological features (e.g., relative advantage, ease of initial adoption, availability), individual factors (e.g., attitudes toward change, capabilities, pedagogical beliefs, control), and contextual factors (e.g., disciplinary and institutional culture) [5]. Or you could explore other fields for ideas: the social studies of science and technology offer a variety of theories to guide evaluation, depending on whether the interest is an individual's adoption of a technology, the activity in which the technology is being implemented, or multiple dimensions [6].
Even if you don't have time to scour the literature across healthcare, education, and beyond, sticking with the basic principles described here should serve you well. Always evaluate against the expected outcomes. Systematically look for the unexpected (for better or worse). Remember that every individual's experience is unique and worth trying to capture. And take some comfort in the reality that you will always be leaving something out – be resourceful and efficient, but don't let perfect be the enemy of good.
Finally, remember that every course, curriculum, or program is a work in progress. The key questions are not "Did it work?" or "Was I successful?" but rather "For whom did it work, and how?" and "What can we do to make it work better?"
About the Author
Sean Tackett, MD, MPH, is a General Internist at Johns Hopkins Bayview Medical Center and Director of Research at Osmosis by Elsevier. His career aspiration is to contribute to improvements in the quality of health professional education internationally. Dr. Tackett completed his residency and a fellowship in Internal Medicine at Johns Hopkins. He was awarded his MD from the University of Pittsburgh School of Medicine and graduated magna cum laude from Notre Dame University with a BS in Biochemistry.
References and Resources
- Tackett S, Steinert Y, Whitehead CR, Reed DA, Wright SM. Blind spots in medical education: how can we envision new possibilities? Perspectives on Medical Education. 2022 Dec;11(6):365-70.
- Rojas D, Grierson L, Mylopoulos M, Trbovich P, Bagli D, Brydges R. How can systems engineering inform the methods of program evaluation in health professions education? Medical Education. 2018 Apr;52(4):364-75.
- Thomas PA, Kern DE, Hughes MT, Tackett SA, Chen BY, editors. Curriculum development for medical education: a six-step approach. Johns Hopkins University Press; 2022.
- Norcini J, Anderson MB, Bollela V, Burch V, Costa MJ, Duvivier R, Hays R, Palacios Mackay MF, Roberts T, Swanson D. 2018 Consensus framework for good assessment. Medical Teacher. 2018 Nov;40(11):1102-9.
- Grainger R, Liu Q, Geertshuis S. Learning technologies: A medium for the transformation of medical education? Medical Education. 2021 Jan;55(1):23-9.
- Rangel JC, Humphrey-Murto S. Social Studies of Science and Technology: New ways to illuminate challenges in training for health information technologies utilisation. Medical Education. 2023 Aug 9.