
    The article proposes three evaluation utility metrics to assist evaluators in evaluating the quality of their evaluation. After an overview of reflective practice in evaluation, the different ways in which evaluators can hold themselves accountable are discussed. It is argued that reflective practice requires evaluators to go beyond evaluation quality (i.e., technical quality and methodological rigor) when assessing evaluation practice to include an evaluation of evaluation utility (i.e., specific actions taken in response to evaluation recommendations). Three Evaluation Utility Metrics (EUMs) are proposed to evaluate utility: whether recommendations are considered (EUMc), adopted (EUMa), and (if adopted) level of influence of recommendations (EUMli). The authors then reflect on their experience in using the EUMs, noting the importance of managing expectations through negotiation to ensure EUM data is collected and the need to consider contextual nuances (e.g., adoption and influence of recommendations are influenced by multiple factors beyond the control of the evaluators). Recommendations for increasing EUM rates by paying attention to the frequency and timing of recommendations are also shared. Results of implementing these EUMs in a real-world evaluation provide evidence of their potential value: practice tips led to an EUMc = 100% and EUMa > 80%. Methods for considering and applying all three EUMs together to facilitate practice improvement are also discussed.
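The two rate-based metrics described above (EUMc and EUMa) are simple proportions over the set of recommendations issued. As a rough sketch only: the article does not provide an implementation, so the record structure, field names, and the example data below are illustrative assumptions, not the authors' method.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record of one evaluation recommendation; field names are
# illustrative assumptions, not taken from the article.
@dataclass
class Recommendation:
    considered: bool                  # was the recommendation considered?
    adopted: bool = False             # was it adopted?
    influence: Optional[int] = None   # level of influence if adopted (assumed 1-3 scale)

def eum_c(recs: list[Recommendation]) -> float:
    """EUMc: percentage of recommendations that were considered."""
    return 100 * sum(r.considered for r in recs) / len(recs)

def eum_a(recs: list[Recommendation]) -> float:
    """EUMa: percentage of recommendations that were adopted."""
    return 100 * sum(r.adopted for r in recs) / len(recs)

# Illustrative data: 5 recommendations, all considered, 4 adopted.
recs = [
    Recommendation(considered=True, adopted=True, influence=2),
    Recommendation(considered=True, adopted=True, influence=3),
    Recommendation(considered=True, adopted=True, influence=1),
    Recommendation(considered=True, adopted=True, influence=2),
    Recommendation(considered=True, adopted=False),
]
print(eum_c(recs))  # 100.0
print(eum_a(recs))  # 80.0
```

EUMli, the level of influence, would then be summarized only over the adopted subset (e.g., the `influence` values above), since it is defined conditionally on adoption.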

    Citation

    Ralph Renger, Jessica Renger, Richard N. Van Eck, Marc D. Basson, Jirina Renger. Evaluation Utility Metrics (EUMs) in Reflective Practice. The Canadian Journal of Program Evaluation / La Revue canadienne d'évaluation de programme. 2022 Jun;37(1):142-154.


    PMID: 35979063
