Aditi Sinha & Tanish Shah

Healthcare Meets AI: Balancing Opportunity and Liability


By combining cutting-edge technologies such as machine learning (ML), deep learning, natural language processing (NLP), and computer vision, artificial intelligence (AI) is revolutionizing the healthcare industry. Across significant healthcare contexts, these technologies have been improving operational efficiency, treatment customization, and diagnostic accuracy. They are anticipated to improve clinical triage, optimize therapeutic interventions, and streamline clinical workflows. Furthermore, the combination of AI and ML can promote the creation of new pharmacotherapies and genetic interpretations, as well as more effective data collection and processing.


The implementation of these AI systems is not without difficulties, though. Before effective deployment, concerns about data provenance and ownership, the reliability of input data, the interpretation of test results, patient privacy, and liability arising from possible data breaches must be addressed.


As we delve into the complexities of AI liability in the healthcare sector in this article, it is crucial to explore both the transformative potential of AI and the legal and ethical considerations that accompany its use. Ultimately, the balance between innovation and responsibility will define the future of AI in medicine.


USE OF AI IN TRANSFORMING HEALTHCARE

Adopters of AI in the healthcare industry stand to gain a great deal from it. Their primary goal, broadly, has been to gather precise and pertinent information about patients and those entering treatment; because of this, AI works well in the data-rich healthcare industry. Moreover, AI has numerous applications in this industry, such as enhancing diagnostics, personalizing treatment plans, and improving operational efficiency. This integration of AI technologies is reshaping how medical professionals diagnose, treat, and monitor patients, ultimately leading to better health outcomes.

SOME OF THE KEY APPLICATIONS OF AI IN THE HEALTHCARE INDUSTRY ARE:

1. Medical Imaging Analysis:

AI systems are used extensively in medical image analysis, including X-ray, MRI, and CT scans. These systems are often faster and more accurate than humans at detecting anomalies. AI systems, for example, can use imaging data to detect early-stage tumors or other disorders, allowing for prompt treatment. Notable instances include the application of AI to the identification of brain tumors and the detection of diabetic retinopathy.
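To make this concrete, below is a minimal, hypothetical sketch of the general shape of such an imaging classifier, written in Python with PyTorch. The architecture, class name, and input size are illustrative assumptions and do not represent any particular clinical product.

```python
# Hypothetical sketch: a tiny CNN that scores a scan as "normal" vs "anomaly".
# Real diagnostic systems are far larger and clinically validated; this only
# illustrates the general shape of an imaging classifier.
import torch
import torch.nn as nn

class ScanClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale scan input
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Linear(32 * 56 * 56, 2)  # two classes: normal / anomaly

    def forward(self, x):
        x = self.features(x)
        return self.head(x.flatten(1))

model = ScanClassifier().eval()
scan = torch.randn(1, 1, 224, 224)  # stand-in for a preprocessed scan
with torch.no_grad():
    probs = torch.softmax(model(scan), dim=1)
print(f"P(anomaly) = {probs[0, 1].item():.3f}")  # a flag for radiologist review
```

In a deployed system, the model's output would serve as a prompt for clinician review rather than a final diagnosis.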


2. Predictive Analytics:

AI algorithms forecast health risks and outcomes by analyzing enormous volumes of patient data. These systems can recognize patterns in electronic health records (EHRs) and flag individuals at high risk for diseases such as diabetes or heart disease, making proactive care management possible. For instance, AI-driven notifications have been shown to greatly enhance patient outcomes by helping ensure prompt medical treatment.
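As an illustration of the underlying idea, here is a hedged sketch of a risk model over EHR-style features using Python and scikit-learn; the features, data, and alert threshold are invented for this example and are not drawn from any real system.

```python
# Hypothetical sketch: a simple risk model over EHR-style features.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: age, BMI, systolic BP, HbA1c -- toy stand-ins for EHR fields.
X = np.array([
    [45, 24.0, 118, 5.2],
    [62, 31.5, 150, 7.8],
    [38, 22.1, 110, 5.0],
    [70, 29.8, 160, 8.4],
    [55, 27.3, 135, 6.1],
    [49, 33.0, 145, 7.1],
])
y = np.array([0, 1, 0, 1, 0, 1])  # 1 = later developed the condition

model = LogisticRegression().fit(X, y)

# Flag high-risk patients so clinicians can intervene proactively.
new_patient = np.array([[58, 30.2, 148, 7.5]])
risk = model.predict_proba(new_patient)[0, 1]
if risk > 0.5:  # illustrative cutoff; real systems calibrate this carefully
    print(f"High-risk alert: predicted risk {risk:.2f}")
```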

3. Remote Patient Monitoring:

As telehealth grows, AI makes it easier to remotely monitor patients' vital signs and health status. Using this technology, medical professionals can track patients' symptoms in real time, which lowers the need for hospital stays and enhances access to care. For example, the HealthSuite digital platform from Philips employs AI for continuous monitoring, aiding in early diagnosis and intervention.
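A heavily simplified sketch of the alerting logic behind such monitoring might look like the following; the rolling-baseline rule, thresholds, and readings are assumptions for illustration, not how any specific platform works.

```python
# Hypothetical sketch: streaming vital-sign check against a rolling baseline.
from collections import deque
from statistics import mean, stdev

window = deque(maxlen=20)  # rolling window of recent heart-rate readings

def check_reading(bpm: float) -> str | None:
    """Return an alert if the reading deviates sharply from the baseline."""
    if len(window) >= 5:
        baseline, spread = mean(window), stdev(window)
        if abs(bpm - baseline) > 3 * spread + 5:  # toy anomaly rule
            # Do not add the anomalous reading to the baseline window.
            return f"ALERT: heart rate {bpm} deviates from baseline {baseline:.0f}"
    window.append(bpm)
    return None

for reading in [72, 74, 71, 73, 75, 72, 74, 118]:  # last value simulates an event
    alert = check_reading(reading)
    if alert:
        print(alert)  # in practice this would notify the care team
```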

4. Personalised Medicine:

AI makes it possible to tailor treatment regimens to each patient's unique genetic and lifestyle information. This approach increases patient satisfaction and the efficacy of therapies. AI platforms such as H2O.ai analyze health data to forecast outcomes and adjust interventions accordingly.
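To illustrate the concept only (not any vendor's method), here is a toy Python sketch in which a baseline dose is adjusted using a hypothetical genetic marker and body weight; real pharmacogenomic dosing follows validated clinical guidelines, not logic this simple.

```python
# Hypothetical sketch: adjusting a treatment plan from patient-specific data.
# The marker categories and dose rules below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Patient:
    metabolizer_status: str  # e.g., from a genetic panel: "poor" / "normal" / "rapid"
    weight_kg: float

def recommend_dose(patient: Patient, base_mg_per_kg: float = 1.0) -> float:
    """Scale a baseline dose using a genetic marker and body weight."""
    factor = {"poor": 0.5, "normal": 1.0, "rapid": 1.5}[patient.metabolizer_status]
    return round(base_mg_per_kg * patient.weight_kg * factor, 1)

print(recommend_dose(Patient("poor", 70.0)))   # 35.0 mg: reduced for poor metabolizers
print(recommend_dose(Patient("rapid", 70.0)))  # 105.0 mg: scaled up for rapid clearance
```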

5. Operational Efficiency:

By automating operations like appointment scheduling, record management, and billing, AI improves administrative efficiency in healthcare facilities. With this decreased administrative workload, healthcare workers can concentrate more on patient care.

AI has been revolutionizing healthcare by enhancing diagnostic accuracy, personalizing treatment plans, and improving patient care through predictive analytics and telemedicine. It streamlines administrative tasks, accelerates drug discovery, and reduces operational costs, allowing healthcare professionals to focus more on patient outcomes. 

While AI offers significant benefits, addressing data privacy and ethical concerns is crucial for its successful integration into healthcare systems. 


CHALLENGES IN USING AI IN THE HEALTHCARE INDUSTRY


1. Data Privacy and Security Risks

AI systems frequently process and generate large volumes of sensitive patient data, which raises questions regarding data security and privacy. The possibility of data breaches, unauthorized access, and exploitation of private health information seriously jeopardizes patient confidentiality and confidence in the healthcare system.

2. Bias and Fairness Concerns

AI algorithms can inadvertently perpetuate existing biases if the training data is not representative of diverse populations. For example, studies have shown that algorithms predicting healthcare needs based on cost can disadvantage Black patients, who may incur lower healthcare costs despite having greater health needs compared to White patients. This can lead to unequal treatment, misdiagnosis, or underdiagnosis of certain demographic groups, exacerbating health disparities.
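One way such bias can be surfaced is a subgroup audit that compares error rates across patient groups. The Python sketch below, using invented data, computes the false-negative rate per group with pandas; a large gap between groups would signal the kind of disparity described above.

```python
# Hypothetical sketch: auditing a model's error rate across patient groups.
# Group labels, predictions, and outcomes are invented for illustration only.
import pandas as pd

audit = pd.DataFrame({
    "group":     ["A", "A", "A", "B", "B", "B", "B", "A"],
    "predicted": [1, 0, 1, 0, 0, 1, 0, 0],
    "actual":    [1, 0, 1, 1, 0, 1, 1, 0],
})

# False-negative rate per group: missed high-need patients are the key harm
# described above, so a gap between groups signals potential bias.
positives = audit[audit["actual"] == 1]
fnr = (positives["predicted"] == 0).groupby(positives["group"]).mean()
print(fnr)  # a large disparity warrants reweighting or retraining
```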

3. Regulatory and Legal Challenges

Navigating the complex regulatory frameworks governing AI in healthcare is challenging. Emerging regulations require compliance with standards for safety, efficacy, and ethical use of AI technologies, which can vary significantly across jurisdictions. This complexity can hinder the timely implementation of beneficial AI solutions. 

4. Reliability and Accountability Concerns

Determining accountability in the event of an error or adverse outcome linked to AI recommendations is a significant challenge. Questions arise about who is responsible—developers, healthcare providers, or the algorithms themselves—when AI-generated decisions lead to negative patient outcomes. In cases where AI contributes to patient harm, establishing a direct causal link between the AI's actions and the resultant injury can be challenging. This is exacerbated by the "black box" nature of many AI algorithms, which obscures understanding of how decisions are made.

5. Overreliance on AI Recommendations

There is a risk that healthcare professionals may become overly reliant on AI-generated recommendations, potentially diminishing their critical thinking and clinical judgment. This overreliance could lead to complacency in decision-making processes.

6. Data Quality Issues

The effectiveness of AI algorithms heavily depends on the quality of the input data. Incomplete or inaccurate data can lead to flawed predictions and recommendations, ultimately compromising patient care.

7. Cybersecurity Risks

AI systems are vulnerable to cybersecurity threats such as ransomware attacks, malware infections, data breaches, and privacy violations. Protecting these systems from cyber threats is essential to maintain patient safety and trust.

While AI has the potential to transform healthcare positively, addressing these challenges is a complex undertaking, and the question of who is to blame when something goes wrong remains unsettled. In such a scenario, determining liability for AI-enabled medical devices is of utmost importance.



LIABILITY IN CASE OF MISHAPS

The integration of AI-enabled medical devices and robots in healthcare raises significant legal and ethical questions, particularly concerning the attribution of liability for injuries and fatalities resulting from these AI technologies. This section of the article explores various aspects of liability related to AI in healthcare, analyzes potential risk indicators for determining accountability, and examines emerging trends in this evolving landscape.


Product Liability: 

Product liability commonly applies both to the producers of completed products and to the manufacturers of parts that are integrated into a final product. Furthermore, in most jurisdictions, it may also apply to importers, distributors, and retailers of goods. However, the problem of accountability in the context of AI-enabled medical devices and systems remains complex, as it is unclear whether product liability legislation will apply to algorithms and whether an AI robot should be treated as hardware or software.


In India, product liability falls under the ambit of the Consumer Protection Act, 2019. Section 2(33) of the Act defines "product" as any article or goods or substance or raw material or any extended cycle of such product, which may be in gaseous, liquid, or solid state possessing intrinsic value which is capable of delivery either as wholly assembled or as a component part and is produced for introduction to trade or commerce, but does not include human tissues, blood, blood products and organs.


Moreover, Section 2(34) of the same Act defines "product liability" as the responsibility of a product manufacturer or product seller, of any product or service, to compensate for any harm caused to a consumer by such defective product manufactured or sold or by a deficiency in services relating thereto.


As per Section 84 of the same Act, the liability of a product manufacturer can be established based on manufacturing defects, design defects, and inadequate instructions and warnings. The primary problem here is whether the design and coding of an AI algorithm should be considered a design defect.


What further complicates the matter is that AI medical devices may function autonomously based on their own experience, committing errors independently of the people who manufacture, market, or utilize them.


An alternate approach to product liability can be found under Section 27 of the Drugs and Cosmetics Act, 1940, wherein the manufacturer of a medical device can be penalized if the device, when used in the diagnosis, treatment, mitigation, or prevention of any disease or disorder, is likely to cause the patient's death or such harm to the body as would amount to grievous hurt as defined in Section 116 of the Bharatiya Nyaya Sanhita, 2023.


In Lowe v. Cerner Health Services, the plaintiff sued Cerner over an electronic health record (EHR) system after a patient suffered a medical incident during post-surgical monitoring. The case centered on a pulse oximetry order entered by a doctor into the Soarian Clinicals system, which was configured by Virginia Hospital Center (VHC) to default continuous monitoring orders to 10:00 AM the next day, resulting in the patient not receiving overnight monitoring. 

The Court relied on Stokes v. Geismar, which held that, to prevail on either a negligent-design or a failure-to-warn theory, the plaintiff must demonstrate with reasonable certainty that the defendant caused the plaintiff's injuries. It further held that, where there is more than one possible cause of the accident, the plaintiff must also demonstrate with "reasonable certainty" that the defendants caused the injury.


Thus, the Supreme Court of Virginia has a "clearly-expressed view that [negligence-based] products liability actions may take one of three forms": Virginia law recognizes only claims for negligent assembly or manufacture, negligent design, and failure to warn. The courts have therefore made clear that there is no "theory of general negligence in products liability."

The court therefore granted summary judgment to Cerner, finding that: (1) the alleged system "defects" were primarily due to VHC's configuration choices, not the product itself, (2) the system met applicable government certification standards, (3) the plaintiff's experts failed to prove the system was unreasonably dangerous or violated industry standards, and (4) multiple potential causes existed for the patient’s injury beyond the EHR system. Critically, the court determined that the plaintiff could not establish with reasonable certainty that Cerner's product caused the harm, especially given that VHC had full control over system configuration and had made multiple changes to the system over time.

Negligence (Fault-Based) Liability:

Negligence is considered to be the primary cause of action in medical malpractice cases; it has been laid out as a potential approach for dealing with harm caused by AI-enabled medical equipment and surgical procedures. As AI systems become more autonomous and capable of making independent decisions, the likelihood of negligence-based liability will increase.


In Sampson v. HeartWise Health Systems Corporation, physicians followed the output of a cardiac health screening software program that wrongly classified a young adult with a family history of congenital heart defects as "normal." The patient died weeks later. The court dismissed the ordinary negligence claim because the developer's licensing agreement gave the physicians final decision-making authority and because the software developer was not a "healthcare provider" under state law. Cases like this point to a future where developer liability will vary depending on private contracts and jurisdiction.


So far, courts have usually held humans responsible for the harm produced by AI-enabled medical devices, even when those humans were unaware of the underlying processes. This is partly because AI lacks legal personhood, which prevents AI systems from being held directly accountable for any harm caused. There is also a limited body of case law addressing injuries caused by AI in healthcare, leading to uncertainty about how courts will interpret liability in future cases. This lack of precedent can make healthcare providers hesitant to adopt innovative AI solutions for fear of litigation.


However, given the breakthroughs in AI technology we are presently witnessing, we may no longer be able to ignore the prospect that fully autonomous medical robots may one day outperform humans and even function without any human supervision. At that point, AI-enabled medical devices must be recognized as more than mere aiding tools, and their legal position should be carefully reconsidered.


DIFFICULTIES FACED IN DETERMINING AI'S LIABILITY

Drawing on the potential liability claims and use cases discussed above, the authors have identified a few challenges in determining liability when AI is used in healthcare:


Lack of Specific Design Defect: 

In real-world scenarios, Plaintiffs sometimes struggle to prove software-related injury because it is difficult to establish specific design defects. Simply establishing an error is inadequate; plaintiffs must also identify the precise cause and method of the error, which can be particularly difficult with AI models due to their complex decision-making processes.


Algorithmic Bias and Patient Variability: 

AI algorithms may have biases that disproportionately affect specific patient populations, depending on the datasets they are trained on. Because an algorithm's performance varies across population groups, it is difficult to determine whether a doctor should have anticipated flawed or unreliable results for a particular patient.


Determining Accountability: 

As AI systems become more autonomous, it becomes increasingly difficult to ascertain who is responsible for adverse outcomes. This ambiguity complicates the application of traditional liability frameworks, as it is unclear whether liability should fall on the healthcare providers, the AI developers, or the institutions using the AI tools.


Principal-Agent Relationship: 

In medicine, the principal-agent relationship traditionally makes professionals answerable for the tools and subordinates acting under their direction. Extending this relationship to include AI systems raises concerns that healthcare professionals may be held liable for decisions made by AI, potentially deterring them from utilizing these technologies effectively.


CONCLUSION

The integration of AI in healthcare is a double-edged sword, offering both significant advantages and notable challenges. There is no doubt that AI has tremendous capability to transform the health sector, including but not limited to enhancing diagnostic accuracy, streamlining administrative tasks, and enabling personalized treatment plans, ultimately improving patient care and operational efficiency. However, one cannot overlook the challenges posed by the use of AI in the health sector, such as high implementation costs, data privacy concerns, potential biases in AI algorithms, and the risk of overreliance on technology at the expense of human judgment. To address these challenges, it is crucial to establish clear legal frameworks. Existing liability frameworks may not be adequate to address the unique risks posed by AI technologies in healthcare, and there is a call for specific policies that consider the nuances of AI use while promoting its safe and effective integration into clinical practice. Balancing innovation with responsibility will be essential to harnessing the full potential of AI while mitigating its risks in the healthcare sector.


References:

1. Lowe v. Cerner Health Servs., Civil Action No. 1:19cv625, 23 (E.D. Va. Nov. 20, 2020)
2. Stokes v. L. Geismar, S.A., 815 F. Supp. 904 (E.D. Va. 1993)
3. Alicia Marie Sampson, as administratrix of the Estate of Joshua Aaron Sampson, deceased v. HeartWise Health Systems Corporation; HeartWise Clinic, LLC; Isaac Health & Prevention Partners, LLC; William A. Nixon, M.D.; and Jeffrey A. Saylor, M.D. (Appeal from Marshall Circuit Court: CV-17-900377)


