"Artificial intelligence (AI) and machine learning (ML) are used in healthcare to combat unsustainable spending and produce better outcomes with limited resources," says Ben Tuck in a recent article on the healthcare data blog ClosedLoop.ai. The article stresses the importance of keeping algorithmic bias in check, and goes on to offer four steps to address it.
Machine learning systems, particularly neural networks whose inner workings are essentially impossible to fully grasp, may come to rely on data that reflects cultural biases: racism, sexism, homophobia, ageism, and all of the other stereotyping structures written across our languages, interests, parenting, and habits, whether we can precisely identify them (or openly admit them) or not.
Tuck's post identifies two general causes, or types, of algorithmic bias: subgroup invalidity and label choice bias.
Subgroup Invalidity Bias
Subgroup invalidity arises when an AI cannot accurately model the behavior of certain subgroups because it was trained on a homogeneous population. Tuck offers the example of a study of pulse oximeter algorithms that demonstrated bias resulting from training on non-diverse data. The study found that "Black patients had nearly three times the frequency of occult hypoxemia that was not detected by pulse oximetry as white patients." The potential for adverse health outcomes is obvious.
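A simple stratified evaluation can surface this kind of subgroup invalidity. The sketch below is purely illustrative and not drawn from the study: the groups, the alarm threshold, and the reading offset are invented. A fixed threshold tuned for one population misses far more true events in a group whose device readings run systematically high.

```python
import random

random.seed(1)

THRESHOLD = 90  # alarm if the measured reading falls below this (hypothetical)

def measured(true_value, group):
    # Hypothetical assumption: the device over-reads for group B,
    # so dangerously low true values can appear normal.
    offset = 0 if group == "A" else 4
    return true_value + offset + random.gauss(0, 1)

def miss_rate(group, n=20_000):
    """Fraction of true events the alarm fails to catch, per subgroup."""
    misses = events = 0
    for _ in range(n):
        true_value = random.uniform(80, 100)
        if true_value < THRESHOLD:  # a true event the alarm should catch
            events += 1
            if measured(true_value, group) >= THRESHOLD:
                misses += 1
    return misses / events

# Evaluating overall accuracy would hide the disparity; stratifying
# by subgroup exposes it.
print(f"group A miss rate: {miss_rate('A'):.2%}")
print(f"group B miss rate: {miss_rate('B'):.2%}")
```

The point of the sketch is the evaluation step: a model (or device algorithm) validated only in aggregate can look acceptable while failing one subgroup badly, so performance should be measured within each subgroup of interest.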
Label Choice Bias
Label choice bias is harder to detect. It occurs when a model is trained on a proxy variable, a stand-in used when the target metric itself is unavailable. Using cost data to predict the need for future healthcare resources is an example: because Black people experience discrimination that results in their receiving less care than the White population, cost metrics derived largely from White consumers' episodes are applied as though they represented everyone's need. An argument can be made that minorities receiving less acute care when it is needed may actually bias the model in exactly the opposite direction; the very existence of that argument is a strong reason to improve how such models are built, by including race thoughtfully both in the source investigations and in the model's computations.
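The mechanics of a proxy label can be sketched with synthetic data. Everything here is hypothetical (the group labels, the access-disparity factor, and the numbers are invented, not taken from Tuck's article): two groups have identical underlying need, but one group's observed cost is suppressed by reduced access to care, so any model trained to predict cost will rank that group as lower-need.

```python
import random

random.seed(0)

def simulate(group, n=10_000):
    """Generate (true_need, observed_cost) pairs for a group."""
    records = []
    for _ in range(n):
        need = random.gauss(50, 10)            # true clinical need (unobserved in practice)
        access = 1.0 if group == "A" else 0.7  # assumed access disparity for group B
        cost = need * access                   # cost is the proxy label the model sees
        records.append((need, cost))
    return records

def mean(xs):
    return sum(xs) / len(xs)

a, b = simulate("A"), simulate("B")
need_a, cost_a = mean([r[0] for r in a]), mean([r[1] for r in a])
need_b, cost_b = mean([r[0] for r in b]), mean([r[1] for r in b])

# True need is the same in both groups, but the proxy label is not:
# a cost-trained model would systematically under-serve group B.
print(f"group A: mean need {need_a:.1f}, mean observed cost {cost_a:.1f}")
print(f"group B: mean need {need_b:.1f}, mean observed cost {cost_b:.1f}")
```

The harm is invisible inside the training data: the model fits its labels well, and the bias only shows up when predictions are compared against the true target (need) rather than the proxy (cost).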
Limiting bias and making the models useful is possible, Tuck says: "Organizations are taking major steps to ensure AI/ML is unbiased, fair, and explainable." He points to a playbook developed by the Booth School of Business at the University of Chicago, a guide for healthcare organizations and policy makers on catching, quantifying, and reducing bias. Read Ben Tuck's article for steps that can be taken, and review the Algorithmic Bias Playbook for more on how to define, measure, and mitigate bias in AI/ML algorithms.
CarePrecise is a supplier of authoritative healthcare provider data and insights used across the healthcare community.