
    Machine learning (ML) has captured the attention of many clinicians who may not have formal training in this area but are nonetheless increasingly exposed to ML literature relevant to their clinical specialties. ML papers that follow an outcomes-based research format can be assessed using clinical research appraisal frameworks such as PICO (Population, Intervention, Comparison, Outcome). However, the PICO framework strains when applied to ML papers that create new ML models, which are akin to diagnostic tests. There is a need for a new framework to help assess such papers. We propose a new framework to help clinicians systematically read and evaluate medical ML papers whose aim is to create a new ML model: ML-PICO (Machine Learning, Population, Identification, Crosscheck, Outcomes). We describe how the ML-PICO framework can be applied toward appraising literature describing ML models for health care.

    The relevance of ML to practitioners of clinical medicine is steadily increasing with a growing body of literature. Therefore, it is increasingly important for clinicians to be familiar with how to assess and best utilize these tools. In this paper we have described a practical framework for reading ML papers that create a new ML model (or diagnostic test): ML-PICO. We hope that clinicians can use it to better evaluate the quality and utility of ML papers. © Thieme. All rights reserved.

    Citation

    Xinran Liu, James Anstey, Ron Li, Chethan Sarabu, Reiri Sono, Atul J Butte. Rethinking PICO in the Machine Learning Era: ML-PICO. Applied Clinical Informatics. 2021 Mar;12(2):407-416.



    PMID: 34010977
