Innovations in health care require evaluation to determine impact and success. But what is the difference between conducting an evaluation and researching a program? At Olive View-UCLA Medical Center, we learned this distinction is important to consider.
This is the eighth blog in a series highlighting how the Los Angeles County Department of Health Services is helping patients better navigate their care. Read part 1, part 2, part 3, part 4, part 5, part 6, and part 7.
Olive View-UCLA Medical Center recently launched ProACT (Prospective Action in Care Transitions), an initiative to improve care transitions through automatically triggered emails when a patient-centered medical home (PCMH) patient visits the emergency department (ED) or urgent care clinic. As we investigate whether ProACT is effective, are we conducting an evaluation of ProACT, or doing research on it? Research and evaluation are similar in several ways:
- Both use the same toolbox:
  - Methods of data collection
  - Types of analysis
- Both employ similar skill sets:
  - Asking questions
  - Identifying needs that certain interventions can fill
However, research and evaluation differ in these important ways:
Purpose
Research is conducted to generate knowledge or contribute to the growth of a theory. Evaluation is conducted to provide information to those who have a stake in whatever is being evaluated (e.g., for performance improvement).
Questions and Answers
Researchers pose questions and articulate them as hypotheses to reach conclusions. Evaluators ask key questions, the answers to which allow them to make judgments about a program's performance.
Setting the Agenda
Research traditionally has principal investigators or primary investigators with the final say on studies’ focus and direction. An evaluation’s focus is set by a specific group, or groups, of stakeholders. The lead evaluator works with stakeholders to set the agenda.
Generalizability
Good research seeks to maximize generalizability. Researchers love when theories are re-validated across various populations and environments. Good evaluations, by contrast, are specific. Evaluation work is highly context driven; the questions, data, and interventions for one program at one point in time are designed for that context alone. While some elements may be useful for other, similar evaluations, generalizability typically isn't the goal.
Criteria for “Good” Research and Evaluation
Research aims for internal AND external validity: the findings of good research should be replicable across time, population, and context. A good evaluation, by contrast, is held to these standards:
- Accuracy: data collection and interpretation are correct
- Utility: the evaluation process and findings are useful to stakeholders
- Feasibility: the evaluation process and recommendations are feasible for stakeholders (i.e., you don’t propose a $2 million solution to a program with a $50,000 annual budget)
- Propriety: the evaluation is conducted in an ethical and responsible manner
Disciplinary Makeup
Research is traditionally single-disciplinary (e.g., cardiologists only). Evaluations tend to be multi- or interdisciplinary (e.g., cardiologists, nurses, and social workers).