
Legal dangers in medicine due to the use of ChatGPT: a lawyer's warning.



Introduction


Artificial intelligence (AI) has advanced rapidly in recent years, and applications such as ChatGPT have proven useful in a wide range of fields, including medicine. Despite warnings about the potential risks of AI and the convincingly written errors it can generate, some people have ignored these concerns and used these tools irresponsibly. In this article, we will explore the dangers associated with the misuse of ChatGPT and other AI tools in medicine, examining risks such as diagnostic errors and improper use of the tool itself.


Diagnostic errors and the role of AI


One of the main risks associated with the irresponsible use of AI in medicine is the possibility of diagnostic errors. Although tools like ChatGPT can provide useful information drawn from their broad training data, they are not designed to replace the clinical judgment and experience of medical professionals. These tools cannot weigh all the relevant factors and specific nuances of an individual case, which increases the risk of diagnostic error when their output is relied on exclusively, without the clinical evidence that emerges from a good doctor-patient relationship and a dynamic clinical interview.


For example, a patient seeking medical guidance online could obtain an incorrect diagnosis from ChatGPT or another AI, which has no access to their complete medical history and cannot perform a physical examination. This could lead to inadequate treatment, delays in reaching the correct diagnosis, and possible health complications.


Errors in the use of the tool


Another danger related to the misuse of AI in medicine is the risk of user error. Although ChatGPT and other AI tools can be easy to use, users may not fully understand their limitations or may misinterpret the information provided.

An example of this could be a patient seeking information about the dosage of a medication. If the patient does not give the AI sufficient detail, such as their age, weight, medical history, and other relevant factors, the AI could generate an inaccurate or even dangerous response. This could result in an overdose, an unwanted drug interaction, or serious side effects.

In addition, healthcare professionals themselves risk relying too heavily on AI. If a doctor acts on information provided by ChatGPT without checking it against their own clinical knowledge and experience, they could make errors in diagnosis, treatment, or patient care.





Photo: Gilberto Objío Subero. Twitter @Objiogilberto


Misuse of AI in medicine also raises legal and ethical questions.


Legal responsibility


Who is responsible if a patient suffers harm due to a diagnostic or treatment error based on erroneous information provided by an AI? These situations could lead to medical malpractice lawsuits and complicate the assignment of responsibility between the patient, the medical professional, and the AI developer.


Given the intrinsic complexity of attributing responsibility when a patient is harmed by a diagnostic or treatment error based on inaccurate information provided by an artificial intelligence system, it is important to note that responsibility could fall on various parties involved in the medical process.


Such responsibility could be attributed, for example:

  • To the healthcare provider, who has the obligation to make informed and evidence-based decisions when diagnosing and treating patients;

  • To the developers and providers of AI systems, whose work could be compromised by programming errors, lack of database updates, or inadequate training of the AI;

  • To the institutions and hospitals, for not conducting a thorough evaluation of the technology prior to its implementation or for not providing the required training and support for the effective and safe use of AI by healthcare professionals;

  • And even to the patient themselves, if they provided inaccurate or incomplete information about their medical history or symptoms, which could have led to an inadequate diagnosis or treatment based on that flawed input.

It is also important to emphasize that the regulations and laws governing responsibility for the use of AI in healthcare vary between countries and jurisdictions. As AI technology continues to advance and its adoption in the medical field becomes more widespread, laws and regulations can be expected to adapt and evolve to address the ethical and legal questions that arise around the attribution of responsibility.




In conclusion, responsibility for harm to patients resulting from diagnostic or therapeutic errors based on incorrect information provided by AI may be shared among medical professionals, AI developers and providers, institutions and hospitals, and even patients themselves. The specific responsibility in each case will depend on the particular circumstances and the legal provisions applicable in each jurisdiction.


In short, the irresponsible use of ChatGPT and other AI tools in medicine can have serious, even dangerous, consequences. To mitigate these risks, it is crucial that both patients and medical professionals understand the limitations of these tools and treat the information they provide with caution and discernment.


AI has the potential to revolutionize medicine and improve healthcare worldwide, but it should not replace the clinical judgment and experience of medical professionals. To ensure these tools are used safely and effectively, users must follow the recommendations and warnings provided by AI developers and always consult medical professionals before making important decisions about their health. In this way, we can harness the potential of these innovative technologies without risking the safety and well-being of patients.


Gilberto Objío Subero

Lawyer specializing in Civil Liability and Medical Law.


