In our last guest blog, we discussed differences between research and evaluation. Now, we outline various evaluation approaches in hopes of encouraging you to adopt one or several for your own projects.

These approaches to program evaluation can be used in full or can be “borrowed from” when evaluating health care programs and interventions. A brief description and some helpful resources should help you determine the best “fit” of an evaluation approach for your program. In general, there is no right or wrong approach to conducting an evaluation; you are simply encouraged to think about it systematically.

Olive View-UCLA Medical Center uses a program called ProACT (Prospective Action in Care Transitions) to improve care transitions through automatically triggered emails to patient-centered medical home (PCMH) Care Managers when a PCMH patient visits our emergency department (ED) or urgent care (UC). Read more in our previous blog posts about this topic.
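For readers curious about the mechanics, the core of a trigger like ProACT’s can be sketched in a few lines. The sketch below is purely illustrative and is not ProACT’s actual implementation: the event fields, the `pcmh_care_manager_for` lookup, the email addresses, and the local mail relay are all assumptions made for the example.

```python
# Hypothetical sketch of an automatically triggered care-transition alert.
# NOT ProACT's actual implementation; event fields, the care-manager lookup,
# and the mail settings are illustrative assumptions.
import smtplib
from email.message import EmailMessage

def pcmh_care_manager_for(patient_id):
    """Assumed lookup: return a care manager's email address if the patient
    is empaneled in the patient-centered medical home, else None."""
    registry = {"12345": "care.manager@example.org"}  # placeholder registry
    return registry.get(patient_id)

def on_registration_event(event):
    """Fires whenever a patient registers; alerts only for ED/UC visits."""
    if event["department"] not in ("ED", "UC"):
        return  # only emergency department and urgent care visits matter here
    manager = pcmh_care_manager_for(event["patient_id"])
    if manager is None:
        return  # patient is not in the PCMH, so no alert is sent
    msg = EmailMessage()
    msg["To"] = manager
    msg["From"] = "proact-alerts@example.org"  # assumed sender address
    msg["Subject"] = f"PCMH patient seen in {event['department']}"
    msg.set_content(
        f"Patient {event['patient_id']} registered in {event['department']} "
        f"at {event['timestamp']}. Please follow up on the care transition."
    )
    with smtplib.SMTP("localhost") as smtp:  # assumed local mail relay
        smtp.send_message(msg)
```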

As we evaluate ProACT, asking questions that help us judge how well the program works, we’ve come across several approaches to conducting an evaluation.


Common Evaluation Approaches

Empowerment Evaluation, originated by David Fetterman, aims to increase the likelihood of program success by providing program stakeholders with tools for assessing the planning, implementation, and self-evaluation of their program. It attempts to mainstream evaluation as part of the planning and management of the program or organization.

This type of evaluation promotes inclusion of all stakeholders’ views and uses a combination of traditional research methods, case studies, and crowdsourcing. It is designed around three steps:

  1. Establish the mission or purpose.
  2. Review the current state.
  3. Plan for the future.

Try this method if you like these types of research: Community-Based Participatory Research or Bottom-Up Research. For more information, check out David Fetterman’s work and eevaluation.blogspot.com.

Utilization-Focused Evaluation focuses on the intended use of an evaluation by its intended users. This type of evaluation asks whose values will frame the evaluation, which results in the clear identification and involvement of primary intended users. Michael Quinn Patton, the originator of Utilization-Focused Evaluation, believes that involving primary intended users results in high-quality, useful evaluations.

Try this approach if you like Best Practices Research or Patient Centered Outcomes Research. For more information, look into betterevaluation.org and check out this evaluation checklist.

Deliberative Democratic Evaluation uses concepts of democracy to arrive at justifiable conclusions and strives to remain unbiased by

  • considering all relevant interests, values, and perspectives;
  • engaging in extended dialogue with major stakeholders; and
  • promoting extensive deliberation about the conclusions.

This evaluation approach, pioneered by Ernest R. House, typically uses traditional research methods with feedback from stakeholders at every step of the evaluation. Evaluators using this model are encouraged to err on the side of inclusivity when holding discussions and voting on ideas.

Try Deliberative Democratic Evaluation if you like Community-Based Participatory Research or Comparative-Effectiveness Research. For more information, look into this evaluation checklist and betterevaluation.org.

Responsive Evaluations are conducted by internal evaluation staff. The approach’s founder, Robert E. Stake, asserts that while external evaluations are likely to be impartial, internal evaluations and self-evaluations do organizations the most good. Using the case study as its primary tool, this approach leans heavily on personal experience and draws attention to program activities, uniqueness, and culture more than most methods do. It treats the evaluation process as a search for, and documentation of, program quality.

Responsive evaluators inquire, negotiate, and select a few issues around which to organize the evaluation. The evaluator looks for outcomes and program impact, trouble, and coping behavior, gathering data to tailor reports that give a feel for the program. In this type of evaluation, it is acceptable to present conflicting views of the program based on different stakeholder experiences, because there are multiple ways of valuing a program.

Try this approach if you like Using Qualitative Data, Using Case Studies, or Patient Centered Outcomes Research. For more information, look into research from the University of Illinois and Robert E. Stake.


Feedback

Are you familiar with any of these types of evaluation? Have you applied one or more in your work? Or do you perhaps use another method? Please share with the essential hospital community in the comments below!