Software as a Medical Device and AI: how far can healthcare be automated?
6th Episode of the series "Legal Dilemmas of AI"
Authors
Among these innovations, Software as a Medical Device (SaMD) stands out: an internationally recognized term for software that, autonomously or semi-autonomously, performs medical functions such as the prevention, diagnosis, monitoring, or treatment of diseases.
Increasingly common examples of SaMD include AI systems capable of analyzing imaging exams and suggesting diagnoses, algorithms that estimate the risk of serious events such as stroke from the patient's clinical history, or even integration with wearables and Internet of Medical Things (IoMT) devices, which allow continuous health monitoring.
Unlike merely administrative or operational support software, SaMD has the potential to directly influence clinical decisions and, consequently, impact patient health and safety. In the context of artificial intelligence, SaMD often uses machine learning or deep learning techniques, learning from large volumes of clinical data.
These solutions promise relevant gains in efficiency, diagnostic accuracy, and expanded access to healthcare. Among their main benefits are faster diagnoses, support for clinical decisions, and predictive capacity.
In the opinion of André Chidichimo de França, Legal Director of Compliance, Privacy and Data Protection, Corporate Governance, and Risk Management at Odontoprev, a leading dental plan company in Brazil, the main impacts of AI solutions in support of diagnosis and clinical decision-making lie precisely in gains in consistency, scale, and clinical focus.
André also points out that AI can reduce human biases, especially in repetitive analyses or those involving large volumes of data, and improve the quality of decisions by providing quantitative evidence that complements the technical judgment of experts. As a support to diagnosis, especially in imaging exams, it contributes to reducing errors and detecting patterns that could otherwise go unnoticed.
However, these solutions also raise complex legal and ethical issues, especially regarding professional liability for possible medical errors, healthcare professional autonomy, and the proper integration of these tools into clinical practice.
More than specific challenges, these questions reveal a legal dilemma still taking shape about the limits of automation in health and about who, after all, should be responsible for decisions increasingly mediated by intelligent systems.
It is no wonder that Bill No. 2,338/2023, which regulates the use of AI in Brazil, classifies as high risk those AI systems applied to health that assist in diagnoses or medical procedures when there is a relevant risk to people's physical or mental integrity, subjecting them to a stricter legal regime.
Who is responsible for medical error?
One of the most sensitive points in the use of SaMD is the attribution of liability when an error in professional conduct occurs and/or when healthcare delivery fails. The central question is: who is responsible for damages resulting from a (mis)diagnosis made with the use of AI?
Lucas Morelli, Legal Manager of dr. consulta, a health management platform that offers medical consultations, exams, and procedures at affordable prices, raises a concern:
"From the point of view of civil liability, this is a big problem. When we talk about a doctor using AI as a diagnostic tool, we are actually inserting another locus of responsibility, with possible imputation. The doctor is responsible for the diagnosis and for the information he sees; he is ultimately responsible for that information until it reaches the patient, the final consumer. So this is extremely important, because, without such care, civil liability can end up being directed at the AI, holding responsible a company that often may not even be able to bear that responsibility, that may not have a designated technical officer in the medical field, and that is not subject to specific regulations."
As a rule, the healthcare professional remains directly responsible for patient care. Even when using clinical decision support tools, healthcare professionals are expected to exercise their professional judgment, critically evaluating the suggestions provided by the software. Thus, for example, if a doctor automatically and uncritically adheres to the AI's suggestion, ignoring relevant clinical signs or established protocols, the doctor may be liable for medical error. The same logic applies to any healthcare professional.
On the other hand, requiring the healthcare professional to fully understand the inner workings of complex algorithms may be unrealistic. A tension arises between the professional duty of care and the practical limits of technical understanding, especially in the face of AI systems classified as black boxes, whose internal decision-making processes are obscure, opaque, or difficult to understand, even for their creators.
Developers and suppliers of SaMD, however, may be held liable when damage results from software failures, such as programming errors, biased or insufficient databases, or even the absence of adequate clinical validation of the system's parameters and limitations.
Regarding this concern, Lucas Morelli highlights:
"The great concern today for those who develop AI is knowing how to set ethical and anti-discriminatory guidelines, so as to ensure a non-discriminatory algorithm and the quality of the data input into the AI. The biggest problem is not the AI hallucinating on top of the data it receives, but the quality of the data that is fed into it. This is the great complexity when we talk about diagnosis and the use of AI for clinical decision-making."
In the opinion of André Chidichimo de França:
"We consider transparency to be the main pillar to build trust in clinical and care contexts, as it allows professionals and institutions to assess risks, validate results, and take responsibility safely. Systems based on machine learning or connected to the internet need to be fully auditable and transparent about the processing of health data, reinforcing trust, legal compliance and ethical use of technology in care."
In the same sense, Lucas Morelli reinforces that supplier transparency is a decisive factor, certainly the main one, in generating confidence in the use of these solutions. The information must be auditable, making it possible to understand the quality of the data, its origin, whether or not it is anonymized, whether and how the company's own data is also being shared, and, of course, how the decision is made about which information the AI uses and how the algorithm reasoned to reach that result. All of this is essential.

Lucas also points out: "The doctor needs to know whether the AI's suggestion of a possible diagnosis for a patient in a specific condition comes from statistics or, in fact, from some specific condition of that patient."

Risks to medical and healthcare professional autonomy
The introduction of SaMD into clinical practice also raises concerns about the autonomy of doctors and of healthcare professionals in general. Highly sophisticated systems, especially those with high success rates, can induce excessive confidence in their recommendations, reducing the space for individual clinical judgment.
This phenomenon, sometimes called "automation bias", occurs when professionals tend to accept automated decisions even when there are indications of error. In the long term, there is a risk of deterioration of professionals' critical capacity, as they start to act more as validators of the machine's decisions than as central agents of care.
AI does not replace human clinical judgment; rather, it complements and enhances it. It is with this objective that it should be applied, as trust in the healthcare chain is only maintained when there is transparency, ethics, and a critical stance on the impact of technology on clinical decisions.
Although artificial intelligence can reveal patterns imperceptible to the human eye, the final decision should rest with the healthcare professional, who integrates into the clinical analysis contextual, subjective, and historical factors that remain inaccessible to the technology's full understanding.
In the experience of André Chidichimo de França:
"In practice, AI should not issue diagnoses or replace the necessary analysis and clinical conclusion of the professional, but it allows the identification of risks, atypical patterns, and anomalous behaviors. The use of AI is valuable when it acts as a support for decision-making and not as a substitute for the analysis and clinical performance of professionals. Nothing, however, prevents AI from being an instrument (and a very important one) for decision-making, not replacing or overriding the decision of professionals. When applied automatically, artificial intelligence can amplify important risks, such as clinical decontextualization (when data is read without considering the patient's reality), algorithmic injustices (especially if the model is poorly calibrated), and poorly explainable decisions, difficult to sustain from both a clinical and legal point of view. Thus, the final decision, including the validation of the result presented by the AI and its clinical application, must always remain with the professional, who must have the autonomy and judgment necessary to lead the patient to the most appropriate outcome."
In the opinion of Lucas Morelli, Legal Manager of dr. consulta:
"AI's output is an automated output, which works on the basis of statistics and not on the basis of humanized treatment tailored to that patient. You have to be careful, because AI is a great compilation of statistics, a great probability calculator, but it does not make a diagnosis per se. There is information that only the doctor will know how to interpret correctly."
Sectoral regulation
In addition to Bill No. 2,338/2023, which expressly mentions AI systems applied to health that assist in diagnoses or medical procedures, professional councils have been moving towards creating specific sectoral regulation.
The Federal Council of Medicine (CFM), for example, recognized the urgency of the topic and, recently, on February 11, 2026, published CFM Resolution No. 2,454/2026, which regulates the use of artificial intelligence in medicine.
The objective is to ensure that these tools are always applied for the benefit of patients, in a safe, transparent, isonomic, and ethical manner, while encouraging innovation and efficiency of services. In addition, among the principles guiding the resolution, respect for the autonomy of medical professionals and institutions stands out.
A specific transparency requirement in the Resolution is the doctor's duty to inform the patient, in a clear and accessible way, when AI models, systems, or applications are used as relevant support in their care, diagnosis, or treatment, and to record in the patient's medical record the use of AI systems as support for medical decision-making.
Regarding the doctor's liability for this use, the Resolution provides that doctors have the right to be protected against undue liability for failures attributable exclusively to AI systems, provided that the diligent, critical, and ethical use of these tools is proven. In the field of ethical-professional responsibility, doctors remain fully responsible for the medical acts they perform through the use of AI models, systems, and applications.
As Lucas Morelli points out: "In the medical sector, it has already been accepted that the doctor is the bridge and can only use AI as a tool. This is an important point."
On the other hand, the Federal Nursing Council (COFEN), the Federal Dentistry Council (CFO), and other Federal Councils in the health sector have already made public statements on the use of AI, but there are still no formal regulations in place.

In André Chidichimo's perception, the Brazilian regulatory scenario for the use of artificial intelligence in health is still maturing. Although there have been important advances, relevant challenges remain to be overcome to ensure safety, ethics, and clarity of responsibilities.
The institutional position of the sector's Federal Councils indicates that the use of AI must observe the ethical principles already consolidated in medical practice and reinforces the importance of caution in the adoption of SaMD solutions.

This understanding dialogues with the international principles established by the World Health Organization (WHO) for the ethical use of artificial intelligence in health, which can be summarized as follows:
- Human autonomy, ensuring that medical decisions remain under human control, with protection of privacy, confidentiality and informed consent.
- Safety, well-being, and public interest, requiring validation, accuracy, and quality control of AI systems.
- Transparency and explainability, with accessible documentation on the operation, limits, and purposes of the algorithms.
- Responsibility and accountability, holding human and institutional agents accountable for the use of technology and any damage caused.
- Inclusion and equity, preventing discriminatory biases and expanding equitable access to health solutions.
- Responsiveness and sustainability, with continuous evaluation, mitigation of environmental impacts, and attention to the effects on health work.
Still, the WHO report warns against overestimating the health benefits of artificial intelligence, especially when this comes at the expense of investments and strategies essential to achieving universal health coverage.
In Lucas Morelli's view, the Brazilian regulatory scenario focuses too much on the patient's relationship with AI and fails to address how AI should be trained and developed, neglecting a very important part: the parameters necessary for the safe creation of such systems. This generates insecurity in the market, especially in health, regarding the application of new methodologies, particularly in the relationship between the company developing the AI and the health company.
Pathways to responsible integration
To mitigate risks and balance innovation with legal and ethical certainty, some measures are essential:
- Regulatory clarity, with well-defined criteria for validating, monitoring, and updating SaMD systems.
- Transparency and explainability, as far as possible, about the operation, limitations and biases of algorithms.
- Adequate training of health professionals, focused on the critical and responsible use of AI.
- Contractual definition of responsibilities, especially between health institutions and technology suppliers.
- Patient-centricity, ensuring that automated decisions do not replace individualized care.
Conclusion
The advancement of Software as a Medical Device, especially when based on artificial intelligence, shifts the traditional boundaries of medical law and health regulation. By increasingly influencing diagnoses, prognoses, and clinical decisions, these systems come to occupy an intermediate space between technical tool and decision-making agent, without the legal system having clearly redefined the limits of this role.
In this scenario, the incorporation of AI into healthcare practice does not eliminate the centrality of the professional, but neither does it fully preserve the classic model of individual clinical decision-making. The attribution of responsibility, professional autonomy and trust in the doctor-patient relationship come to depend on systems whose internal processes are mostly opaque, probabilistic and dynamic.