AI ethical principles already in place at Sanford Health

World Health Organization issues first guidance on artificial intelligence in health care

The World Health Organization recently published its first guidance on the use of artificial intelligence in health care, and Sanford Health works diligently to comply with all six principles outlined in the document.

International experts appointed by the WHO spent two years in consultation to create the 150-page report, titled “Ethics and Governance of Artificial Intelligence for Health.” It states that AI already helps to quickly diagnose disease, assists with clinical care, strengthens research and drug development, and supports public health interventions such as outbreak responses. But it also cautions against overestimating AI’s benefits in health and warns of the unethical gathering and use of health data, biases encoded in algorithms, and risks to patient safety, cybersecurity and the environment.

“According to the six points that the WHO brings out, we’re right in line with where they’re at,” said Doug Nowak, vice president of data analytics at Sanford Health. “And keep in mind, they’re talking about artificial intelligence. And where we sit today is really focusing on augmented intelligence. Many of the tools that we create are developed in coordination with providers or other experts, with their input and guidance, to help inform the decision-making process when working with individual patients.”

True artificial intelligence would require approval from the Food and Drug Administration, Nowak said. Sanford Health will soon get to the point of creating artificial intelligence, but it’s not quite there yet, he said.

AI oversight committee

When a provider has an idea for an AI tool, or an opportunity arises in a non-clinical area, a request is filed with Nowak’s data team. If feasible, the proposal goes to an AI oversight committee made up of Sanford Health staffers from the legal department, operations, research, physician leadership, innovations, population health, and the institutional review board (IRB). If approved, the requestor is assigned a data scientist who designs the AI tool in consultation with the subject matter expert.

“We always broach that subject of ethics to make sure we’re using the information properly and that, where required, we get approvals from patients to use the data,” Nowak said. “We’re not just off creating things on whims. We’re creating tools that meet the strategy of Sanford Health and the needs of our patients.”

Sanford Health’s uses of AI in electronic medical records include spotting potential behavioral health issues, breast cancer risk, sleep apnea and diabetes, as well as helping in the health system’s response to the COVID-19 pandemic.

The physician could do the same thing as the algorithm, but it would take a lot longer.

“What this allows the provider to do is to spend more time facing the patient and less time studying all of the data elements in the (electronic medical record). Let the computer do that work. We’re not replacing a provider with our tools, we’re allowing them to use these tools to help direct where there may be issues with a patient,” Nowak said. “AI can be a little spooky. But in the world of augmented intelligence, it’s meant to help direct providers, eliminate routine processes, provide efficiencies or the sky’s the limit with other possibilities.”

6 AI principles

Here are the six principles the WHO provides for the regulation and governance of AI in health:

  1. Protecting human autonomy: In the context of health care, this means that humans should remain in control of health care systems and medical decisions; privacy and confidentiality should be protected, and patients must give valid informed consent through appropriate legal frameworks for data protection.
  2. Promoting human well-being and safety and the public interest: The designers of AI technologies should satisfy regulatory requirements for safety, accuracy and efficacy for well-defined use cases or indications. Measures of quality control in practice and quality improvement in the use of AI must be available.
  3. Ensuring transparency, explainability and intelligibility: Transparency requires that sufficient information be published or documented before the design or deployment of an AI technology. Such information must be easily accessible and facilitate meaningful public consultation and debate on how the technology is designed and how it should or should not be used.
  4. Fostering responsibility and accountability: Although AI technologies perform specific tasks, it is the responsibility of stakeholders to ensure that they are used under appropriate conditions and by appropriately trained people. Effective mechanisms should be available for questioning and for redress for individuals and groups that are adversely affected by decisions based on algorithms.
  5. Ensuring inclusiveness and equity: Inclusiveness requires that AI for health be designed to encourage the widest possible equitable use and access, irrespective of age, sex, gender, income, race, ethnicity, sexual orientation, ability or other characteristics protected under human rights codes.
  6. Promoting AI that is responsive and sustainable: Designers, developers and users should continuously and transparently assess AI applications during actual use to determine whether AI responds adequately and appropriately to expectations and requirements. AI systems should also be designed to minimize their environmental consequences and increase energy efficiency. Governments and companies should address anticipated disruptions in the workplace, including training for health care workers to adapt to the use of AI systems and potential job losses due to use of automated systems.

“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology it can also be misused and cause harm,” said Tedros Adhanom Ghebreyesus, WHO director-general. “This important new report provides a valuable guide for countries on how to maximize the benefits of AI, while minimizing its risks and avoiding its pitfalls.”
