How can health systems ensure their machine learning practices are ethical?
Machine learning has the potential to completely transform the way healthcare is delivered, but unlocking those new approaches can come with risks.
Ethical questions should be asked during the design and implementation of machine learning models to ensure models are developed to maximize benefit and avoid potential harm. Machine learning relies on access to historical data, often containing personal information, and frequently available in lower quantity and quality than would be ideal.
How does one protect privacy, account for inherent bias, ensure that the right people benefit, and explain complex models? These are the ethical challenges faced in the development of this capability.
Clinicians are ethical bastions
“Our healthcare providers hold to a strong moral and ethical code,” said Kevin G. Ross, CEO of Auckland, New Zealand-based Precision Driven Health, an award-winning, multimillion-dollar public-private research partnership applying data science to make precision health a reality.
“As some of the most trusted roles in society, clinicians hold a place of honor that both they and their patients rely upon and reinforce through their interactions,” said Ross.
“As with any tool that is introduced into patient care, machine learning should be evaluated on its benefits and risks to patient and provider,” he said. “Ethics describes our value system, and machine learning means using computational power to build models and make decisions on our behalf. As gatekeepers for patient care decisions, clinicians will not adopt or recommend machine learning unless it aligns with their values and builds upon their trusted foundation.”
What makes machine learning particularly challenging is the evolutionary nature of algorithms, Ross noted. Whereas a new device or drug can usually be evaluated along a relatively well-established path of clinical trials, a machine learning algorithm may perform quite differently today from yesterday, and give quite different results for different people and contexts, said Ross.
“When we allow machine learning to contribute to decision-making, we are introducing an element of real-time research that doesn’t easily replicate the rigor of our traditional research evaluation studies,” he explained. “Therefore we must, from the very conceptual design stage, think about the ethical implications of our new technologies.”
Stopping to think things through
The most important processes involve thinking through what could happen when a model is deployed, with people from a range of perspectives. It’s very easy to get lost in the science of building great models and completely miss both the opportunities and the risks that the models create, Ross said.
“Two of the most important processes are a traditional peer review, where someone who understands the data science looks closely at the model and its assumptions, and a risk assessment with the help of a nontechnical person,” he said.
“Asking a consumer, clinician or planner how they expect a model to be used may identify completely unexpected uses. Could a model designed to accelerate care indirectly penalize one group of people? Could requiring additional personal data exclude the intended beneficiaries?
“Documenting what you believe could be the consequence of releasing a model – then monitoring what happens when you do – is an important practice that allows each model to continuously improve through its lifecycle,” he added.
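The practice Ross describes — documenting expected performance at release, then monitoring what actually happens — can be sketched in code. The following is a minimal, hypothetical illustration (the `ModelMonitor` class, its baseline figure, and its tolerance threshold are assumptions for the example, not part of any described system): it logs each live prediction against the eventual outcome and flags the model for review when live accuracy drifts below what was documented.

```python
# Hypothetical sketch: compare a deployed model's live accuracy against the
# accuracy documented at release, and flag it for review when it drifts.
from dataclasses import dataclass, field


@dataclass
class ModelMonitor:
    baseline_accuracy: float      # accuracy documented when the model was released
    tolerance: float = 0.05      # acceptable drop before triggering a review
    records: list = field(default_factory=list)

    def log(self, prediction, outcome):
        # Record whether the prediction matched the eventual real-world outcome.
        self.records.append(prediction == outcome)

    def live_accuracy(self):
        return sum(self.records) / len(self.records)

    def needs_review(self):
        # True when observed performance falls outside the documented range.
        return self.live_accuracy() < self.baseline_accuracy - self.tolerance


monitor = ModelMonitor(baseline_accuracy=0.90)
for pred, actual in [(1, 1), (0, 1), (1, 1), (0, 0), (1, 0)]:
    monitor.log(pred, actual)

print(monitor.live_accuracy())   # 0.6
print(monitor.needs_review())    # True — well below the documented 0.90
```

In a real deployment the logged records would come from a clinical data pipeline and the review would feed back into retraining, but the shape of the loop — document, observe, compare — is the same.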
Automating current practice
The easiest thing to do with machine learning, Ross explained, is to automate current practice.
“Our techniques are designed and measured on their ability to replicate the past,” said Ross. “But what if the past isn’t ideal? Are we more efficiently making poor decisions? What happens when a model encounters a new combination? People intuitively learn, and relate an unusual or new case to what they already know.
“Machines could do the same, or they could make assertions without sufficient relevant information,” he added. “This means by nature that minorities, who are generally poorly represented in past data and experience poorer outcomes, will almost certainly benefit less from machine learning, and may experience more harm. Our modelling techniques and processes must be designed to handle these challenges and constantly improve on the past.”
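One concrete way to act on Ross’s warning is to break a model’s performance out by subgroup, so that poor results on an underrepresented group are visible rather than averaged away in a single headline metric. The sketch below is a hypothetical illustration (the group labels and records are invented for the example); fairness toolkits offer richer versions of this idea.

```python
# Hypothetical sketch: compute accuracy per subgroup so that a model that
# performs well overall but poorly on a minority group is caught early.
from collections import defaultdict


def accuracy_by_group(records):
    """records: iterable of (group, prediction, outcome) tuples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for group, prediction, outcome in records:
        totals[group] += 1
        hits[group] += int(prediction == outcome)
    return {group: hits[group] / totals[group] for group in totals}


# Invented example data: the model looks fine on the majority group
# but fails on the smaller, underrepresented one.
records = [
    ("majority", 1, 1), ("majority", 0, 0),
    ("majority", 1, 1), ("majority", 1, 0),
    ("minority", 0, 1), ("minority", 1, 0),
]

print(accuracy_by_group(records))
# {'majority': 0.75, 'minority': 0.0}
```

The overall accuracy here (3 of 6) would hide the fact that the model is wrong on every minority-group case — exactly the failure mode Ross cautions against.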
Ross will offer more detail during his HIMSS21 session, Ethical Machine Learning. It’s scheduled for August 10, from 11:30 a.m. to 12:30 p.m. in Venetian San Polo 3404.