August 22, 2022 | Erin Bluvas, email@example.com
Members of the South Carolina SmartState Center for Effectiveness Research in Orthopaedics (CERortho) have completed a study assessing the ability of machine-learning approaches to produce patient-specific evidence on the effectiveness of early surgery as a treatment for shoulder fractures. They published their findings in BMC Medical Research Methodology.
“Policy makers want patients to have personalized evidence when clinicians are making treatment decisions,” says health services policy and management (HSPM) professor and SmartState Endowed Chair of CERortho John Brooks, who led the study. “Randomized control trials are the gold standard for generating evidence, but it’s difficult to produce personalized evidence for many patients. In contrast, observational data provide the perspective of real-world practice and a diversity of patients well beyond those evaluated in randomized control trials. Machine-learning algorithms have been proposed to analyze large observational databases to develop patient-specific evidence, but the properties of these algorithms have not been fully assessed using real-world clinical scenarios.”
In recent years, scientists like Brooks have begun conducting comparative effectiveness research using large observational databases. This big data strategy offers an alternative, practice-based way to develop personalized evidence.
With funding from the UofSC Big Data Health Science Center, which is led by health promotion, education, and behavior professor Xiaoming Li and HSPM assistant professor Bankole Olatosi, the researchers employed a novel machine-learning method, the Instrumental Variable Causal Forest Algorithm, to estimate patient-specific early surgery effects (both positive and adverse) for more than 72,000 Medicare beneficiaries. The team assessed whether the method's personalized treatment effect estimates remained consistent when its key algorithm parameters were varied.
The researchers found that treatment effect estimates for individual patients varied significantly with the parameters of the algorithm. In contrast, patients could be grouped using these estimates in a manner that was robust to those parameters. Groups of older, frailer patients with more comorbidities and lower healthcare utilization were less likely to benefit from, and more likely to be harmed by, higher rates of early surgery.
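The contrast above — unstable individual estimates but stable group membership — can be illustrated with a small simulation. The sketch below is hypothetical and uses simulated data with a simple two-model (T-learner) forest estimator from scikit-learn, not the instrumental-variable causal forest or the Medicare data from the study; it only shows how one might compare effect estimates and quantile-based patient groupings across two hyperparameter settings.

```python
# Hypothetical sketch (simulated data; not the CERortho analysis).
# Estimate per-patient treatment effects with a simple T-learner forest,
# vary one hyperparameter, and compare individual estimates vs. the
# stability of quartile-based patient groupings.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))            # simulated patient covariates
tau = 1.0 + X[:, 0]                    # true heterogeneous treatment effect
T = rng.integers(0, 2, size=n)         # treatment indicator (e.g., early surgery)
y = X[:, 1] + tau * T + rng.normal(scale=0.5, size=n)  # simulated outcome

def estimate_effects(min_leaf):
    """T-learner: fit separate outcome models for treated and untreated,
    then take the difference in predictions as the per-patient effect."""
    m1 = RandomForestRegressor(min_samples_leaf=min_leaf, random_state=0)
    m0 = RandomForestRegressor(min_samples_leaf=min_leaf, random_state=0)
    m1.fit(X[T == 1], y[T == 1])
    m0.fit(X[T == 0], y[T == 0])
    return m1.predict(X) - m0.predict(X)

# Vary a key algorithm parameter, analogous to the study's sensitivity checks
est_a = estimate_effects(min_leaf=5)
est_b = estimate_effects(min_leaf=50)

# Individual estimates shift between settings; group membership (by
# effect-size quartile) can be compared directly across the two runs
groups_a = np.digitize(est_a, np.quantile(est_a, [0.25, 0.5, 0.75]))
groups_b = np.digitize(est_b, np.quantile(est_b, [0.25, 0.5, 0.75]))
agreement = (groups_a == groups_b).mean()
print(f"quartile-group agreement across settings: {agreement:.2f}")
```

A design note: grouping by quantiles of the estimated effect, as sketched here, is one common way to turn noisy individual estimates into policy-relevant strata; the study's finding is that such groupings can be robust even when the individual estimates underneath them are not.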
“This study suggests that machine-learning methods do not provide consistent treatment effect estimates for individual patients, but these methods can use the data to group patients based on treatment effectiveness, which can be useful for targeting policy interventions,” Brooks says. “Moreover, it demonstrates that big data and machine learning have tremendous potential for focusing treatment effectiveness evaluation in healthcare.”