What are the ethical considerations for AI?


May 31, 2022 - Artificial intelligence (AI) refers to the ability of computers, or machines run by a computer, to perform tasks commonly performed by humans. This means that AI includes not only the ability to recognize and analyze data, but also to "infer" or "predict" what that data actually means in certain contexts. Not surprisingly, the use of AI in health care has become the subject of many research studies aimed at ensuring that these tools and technologies are safe to use on animals and human beings. But given that AI calls for computers and machines to practically "act," "react," and process data like humans, the potential ethical issues are numerous.

While AI relies on machines, math, and technology to work, there is still human intervention, which can present issues. Moreover, like any digital health tool, AI models can be flawed, presenting risks to patient safety. These issues can stem from a variety of factors, including problems with the data used to develop the algorithm, the choices that developers make in building and training the model, and how the AI-enabled program is eventually deployed, all of which are functions of human intervention.

One way to manage patient safety risks is to include a Data and Safety Monitoring Board (DSMB) on studies reviewing AI in patient care or diagnosis. Allowing an independent committee of data and clinical experts to review the data collected in an AI study helps ensure not only the integrity of the data itself, but also the overall safety of study participants.


Another approach to this type of cutting-edge AI research is to appoint a clinical monitor whose job is to review each participant's clinical records with an independent eye, ensuring that the technology is not harming participants or inaccurately recording their status over the course of the study.

Finally, post-study peer review is another approach to ensuring the accuracy and completeness of the data collected as part of research on an AI tool/technology.

Any or all of these methods should at least be considered, depending on the risk level of the research study and the patient interventions contemplated.

The World Health Organization released a guidance document outlining six key principles for the ethical use of artificial intelligence in health:

(1) Protecting autonomy;

(2) Promoting human well-being, safety, and the public interest;

(3) Ensuring transparency, explainability, and intelligibility;

(4) Fostering responsiveness and accountability;

(5) Ensuring inclusiveness and equity; and

(6) Promoting AI that is responsible and sustainable.

Since AI in health care is still relatively new, we have only begun to understand its impact on research, health care and human intervention.

An overarching issue in the research, and ultimately the use, of AI in diagnosis and treatment is informed consent, which has its roots in the Belmont Report, identifying the basic ethical principles and guidelines that address ethical issues arising from the conduct of research with human subjects, and the Declaration of Helsinki, a statement outlining the ethical principles for medical research involving human subjects. These principles have resulted in the federal Common Rule (45 CFR Part 46) and the corresponding FDA regulations at 21 CFR Parts 50 and 56.

An important ethical issue to consider in this context is the extent to which clinicians have a responsibility to educate the patient about the complexities of AI, including the kinds of data inputs and the possibility of biases or other shortcomings in the data being used. Under what circumstances must a clinician notify the patient that AI is being used at all? Although existing laws on this issue do not address AI specifically, the rules regarding informed consent should nevertheless apply to the deployment of AI.

To realize the potential of AI, developers need to ensure two key things: (1) the reliability and validity of the data sets; and (2) transparency. First, on data quality: the better the testing and training, the better the AI will perform.
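
By way of illustration only, the sketch below shows the kind of held-out validation this implies: a model is scored on data it never saw during training, so its reported performance better reflects how it may behave on new patients. The data set and model are hypothetical placeholders built with scikit-learn, not anything described in this article.

```python
# Illustrative sketch: validating a hypothetical diagnostic model on
# held-out data so reported performance reflects unseen patients.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder data standing in for a de-identified patient data set.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out 20% of records; the model never sees them during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Report performance on the held-out records only.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```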

Second, transparency is critical. While in an ideal world all data and algorithms would be publicly available, there are legitimate concerns about protecting investment and intellectual property, and about not increasing cybersecurity risk. Third-party auditing may represent a possible solution. Moreover, AI developers should be sufficiently transparent about the kind of data used and any shortcomings of the software.

U.S. regulatory agencies are also weighing in on ethical concerns related to transparency, as evidenced in the Federal Trade Commission's April 2021 guidance, "Aiming for truth, fairness, and equity in your company's use of AI" (explored more specifically in our earlier article for Thomson Reuters). Transparency creates trust among stakeholders, particularly clinicians and patients, which is key to a successful implementation of AI in clinical practice.

AI has the capability not only to improve health care in high-income settings, but also to democratize expertise, "globalize" health care, and bring it even to remote areas. Bias and discrimination are two further issues that need to be addressed with AI. Developers should consider the risk of bias when deciding (1) which technologies and procedures they want to use to train the algorithms; and (2) which data sets (including consideration of their quality and diversity) they want to use for the programming.
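
To make the bias point concrete, here is a minimal, purely illustrative sketch of one auditing technique developers might use: comparing a model's accuracy across demographic subgroups in an evaluation set. The column names and values below are hypothetical.

```python
# Illustrative sketch: per-subgroup accuracy audit on hypothetical results.
import pandas as pd

# Placeholder evaluation output: true label, model prediction, subgroup.
results = pd.DataFrame({
    "y_true": [1, 0, 1, 1, 0, 0, 1, 0],
    "y_pred": [1, 0, 0, 1, 0, 1, 1, 0],
    "group":  ["A", "A", "A", "B", "B", "B", "B", "A"],
})

# Accuracy within each subgroup; a large gap between groups is a red flag
# that the training data or model may disadvantage one population.
correct = results["y_true"] == results["y_pred"]
print(correct.groupby(results["group"]).mean())
```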

A remaining problem is that many algorithms are inherently sophisticated and nontransparent. As we have seen in the enforcement context, some companies developing software will resist disclosure and claim trade secrecy. It may therefore be left to nongovernmental organizations to collect the data and show the inherent biases. AI developers seeking to avoid these criticisms are advised to engage in internal or external auditing or monitoring and to report the results in a transparent manner.

In cases where AI needs to be relied upon, such as for the diagnosis or treatment of disease, vast amounts of data, and thus more data sharing, will be necessary. This gives rise to potential privacy and security issues governed by the Health Insurance Portability and Accountability Act (HIPAA), requiring developers to implement the proper policies, procedures, and technical security measures under both the HIPAA Privacy Rule and the Security Rule. As more data is added to AI systems, the possibility of a data breach or reidentification of deidentified data sets also increases, particularly as increasingly sophisticated AI allows for the linking of data.
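
As a purely illustrative sketch (the field names are hypothetical, and real de-identification under HIPAA's Safe Harbor method covers 18 categories of identifiers plus limits on dates and geography), the snippet below shows the basic mechanics of stripping direct identifiers before records enter a shared data set:

```python
# Illustrative sketch: removing direct identifiers from a record before it
# enters a shared data set, in the spirit of HIPAA Safe Harbor de-identification.
# Field names are hypothetical; a real program must address all 18 identifier
# categories and guard against reidentification by linkage.
DIRECT_IDENTIFIERS = {
    "name", "street_address", "phone", "email", "ssn",
    "medical_record_number", "health_plan_id",
}

def deidentify(record: dict) -> dict:
    """Drop direct identifiers and coarsen the birth date to a year."""
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "birth_date" in cleaned:
        cleaned["birth_year"] = cleaned.pop("birth_date")[:4]
    return cleaned

record = {"name": "Jane Doe", "birth_date": "1984-07-02", "dx_code": "E11.9"}
print(deidentify(record))  # {'dx_code': 'E11.9', 'birth_year': '1984'}
```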

Beyond the question of what is collected, it is imperative to protect patients against impacts outside the physician-patient relationship that might deleteriously affect them, such as effects on health or other insurance premiums, job opportunities, or even personal relationships. Implementing such protections will require strong antidiscrimination laws similar to the regimes in place for genetic privacy. Some AI health applications also raise new issues, such as those that share patient data not only with the doctor but also with family members and friends, who are unlikely to have similar obligations.

Moreover, under what circumstances do patients have a right to withdraw their data when that data has already been analyzed in aggregate form? Developers will need to keep accurate records that allow them to unwind their aggregated data if necessary.
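
One way to make such records "unwindable" is to retain each record's contribution to the aggregate rather than only the final statistic. The sketch below is illustrative only, with hypothetical patient identifiers and values:

```python
# Illustrative sketch: keeping record-level provenance so an aggregate
# statistic can be recomputed after a patient withdraws consent.
contributions = {
    "patient-001": 142.0,
    "patient-002": 155.5,
    "patient-003": 148.2,
}

def aggregate_mean(values: dict) -> float:
    """Recompute the aggregate from the surviving record-level data."""
    return sum(values.values()) / len(values)

print("before withdrawal:", aggregate_mean(contributions))

# A patient withdraws: remove their record and recompute the aggregate.
contributions.pop("patient-002")
print("after withdrawal:", aggregate_mean(contributions))
```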

There is benefit to swiftly integrating AI technology into the health care system, as AI presents an opportunity to improve the efficiency of health care delivery and the quality of patient care. But there is much work to do in order to lay down the proper ethical foundation for using AI technology safely and effectively in health care.

Linda A. Malek, a partner at the firm, contributed to this article.


Opinions expressed are those of the author. They do not reflect the views of Reuters News, which, under the Trust Principles, is committed to integrity, independence, and freedom from bias. Westlaw Today is owned by Thomson Reuters and operates independently of Reuters News.

F. Lisa Murtha

F. Lisa Murtha is a partner in the Healthcare practice of Moses & Singer LLP and has experience as a healthcare and research attorney, a Chief Compliance Officer, and a consultant, serving provider and life sciences organizations on research and health care legal and regulatory matters. She can be reached at .

Pralika Jain

Pralika Jain is an associate in Moses & Singer's health care practice. She counsels companies at all stages of the life cycle, from formation to investment to exit, building and protecting IP, and providing strategic advice to founders and investors in connection with complex transactions. As counsel to companies and institutions in the health care and technology industries, she regularly advises on matters related to privacy and data laws across jurisdictions. She can be reached at .

Kiyong Song

Kiyong Song is an associate in Moses & Singer's Healthcare and Privacy & Cybersecurity practice groups. He counsels clients in the fintech, health care, and health tech space on regulatory and compliance issues relating to privacy and security of data under U.S. and European laws, clinical research, and medical devices. He can be reached at .
