Predictive analytics and GDPR: the importance of the right to information in the clinical field - Digital Agenda

The spread of artificial intelligence in health care makes diagnosis and clinical decisions more effective, but raises complex legal questions about decision-making. Between the GDPR, the AI Act and informed consent, the duty to explain algorithmic decisions emerges as a key issue for lawfulness, liability and patient safety.

The increasing use of artificial intelligence systems in contemporary medicine is driving a structural change in the way healthcare is delivered, shifting the care paradigm towards a more personalized medicine centred on the individual rather than on the pathology.

Predictive diagnostic systems, including clinical decision support systems (CDSS), can predict the onset of pathologies, assess the risk of complications or propose individualized treatment paths. They aim to improve the efficiency and effectiveness of services to patients while at the same time increasing the organizational efficiency of healthcare units.

However, the use of such systems raises complex legal problems, especially when clinical decisions are supported, or partially determined, by algorithmic models that are not fully intelligible. We are in fact confronted with a structural paradox for the law: algorithmic systems of increasing capability but decreasing intelligibility and explainability.


Practical examples

For example, deep learning models used for cancer diagnosis or risk prediction rely on deep neural network architectures whose internal decision logic is opaque even to their own developers, a phenomenon known as the "black box".

Beyond the purely technical issue, which mainly concerns the safety and security of "smart" systems, the central question is legal and regulatory: what information must be provided to the patient when clinical decisions are supported by intelligent systems? Who is legally bound to provide it? And how is liability distributed along the operational chain involving healthcare facilities, medical staff and software suppliers?

A first answer, pending an in-depth ruling by the Supreme Court on the matter, emerges clearly from a coordinated reading of Regulation (EU) 2016/679 (GDPR), Regulation (EU) 2024/1689 (AI Act), Law No. 219/2017 on informed consent and Law No. 24/2017 (Gelli-Bianco): algorithmic explanation in the health field is not an optional transparency measure, but a condition for the lawfulness of data processing, a precondition for the validity of informed consent and a basis for the correct attribution of professional liability.

The legal basis for processing health data in predictive systems

To deliver their analyses, predictive diagnostic systems need to process large amounts of clinical data from multiple sources, such as pathology registries, connected medical devices, telemedicine platforms, hospital records and electronic health records.

This processing falls squarely within the scope of Article 9 GDPR: the processing of special categories of data, including health data, is prohibited as a rule and permitted only where one of the specific exemptions applies.

The exemptions relevant to the health sector are mainly diagnosis, care or treatment (paragraph 2, letter h), public interest in the area of public health (letter i) and scientific research (letter j).

This system of exemptions, pending the forthcoming European Union legislation intended to harmonise the field, must be coordinated with Legislative Decree 196/2003, as amended by Legislative Decree 101/2018, and with the measures of the Italian Data Protection Authority (Garante), including the Guidelines on the Electronic Health Record.

A particularly important issue, which is unfortunately often underestimated in practice, concerns the obligation to prepare a data protection impact assessment (DPIA) in accordance with Article 35 of the GDPR.

In fact, the use of predictive systems combines the processing of special categories of data, the use of new technologies, large-scale profiling and the production of significant effects on individuals: all factors that qualify the activity as high-risk.

From this point of view, the list of processing operations subject to a DPIA, approved by the Italian Data Protection Authority, confirms this classification; it follows that the absence of a specific impact assessment for a predictive analytics system may constitute an autonomous violation of the rules on the protection of personal data, subject to sanctions even in the absence of proven damage to data subjects.
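The screening logic described above can be sketched in code. This is a minimal, illustrative sketch (not a legal tool): the class, field names and the "two or more criteria" threshold are assumptions chosen for the example, loosely mirroring the Article 35 risk factors listed in the text.

```python
from dataclasses import dataclass

@dataclass
class ProcessingActivity:
    """Simplified description of a processing activity for DPIA screening."""
    special_category_data: bool   # Art. 9 data, e.g. health data
    new_technology: bool          # novel AI/ML techniques
    large_scale_profiling: bool   # systematic evaluation of personal aspects
    significant_effects: bool     # legal or similarly significant effects

def dpia_required(activity: ProcessingActivity) -> bool:
    """Rough Art. 35 GDPR screening: a DPIA is needed when processing is
    'likely to result in a high risk'. As a conservative rule of thumb,
    meeting two or more of the risk criteria triggers the assessment."""
    criteria = [
        activity.special_category_data,
        activity.new_technology,
        activity.large_scale_profiling,
        activity.significant_effects,
    ]
    return sum(criteria) >= 2

# A predictive diagnostic system typically meets all four criteria:
cdss = ProcessingActivity(True, True, True, True)
print(dpia_required(cdss))  # True
```

In practice the legal assessment is qualitative, not a checklist; the point of the sketch is only that a predictive clinical system accumulates several high-risk factors at once.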

Algorithmic explanation in healthcare and automated clinical decision-making

The regulatory basis of the right to explanation is found in Article 22 GDPR, which recognizes the data subject's right not to be subject to decisions producing legal effects, or similarly significantly affecting him or her, based solely on automated processing.

In healthcare, this situation occurs far more often than facilities tend to assume.

A diagnostic algorithm that guides the course of treatment, a risk score that determines the priority of surgical access, an automated triage system that sorts patients in the emergency room: these are all situations that clearly fall, at least potentially, within the scope of the rule in question.

Healthcare institutions tend to believe that they fall outside Article 22 because a physician validates the algorithmic output. However, in OQ v. Land Hessen (C-634/21, the SCHUFA case) the Court of Justice of the European Union held that the term "decision" includes measures that significantly affect the data subject, and that human intervention must not be merely formal: it implies a concrete capacity to understand, evaluate and depart from the algorithmic output.

Uncritical endorsement of the results produced by the system does not constitute oversight in the legal sense of the term: it amounts to automation disguised as human responsibility, with a consequent aggravation of liability.

The "right to explanation"

It should be noted that no specific provision of the GDPR establishes a "right to explanation": this right is derived, through systematic interpretation, from the combined provisions of Articles 13, 14, 15 and 22 and Recitals 60, 63 and 71.

Article 13(2)(f) and Article 14(2)(g) require the controller to provide data subjects with meaningful information about the logic involved in automated processing already at the time of data collection. Article 15 guarantees the data subject access to the same information at any later time.

Recital 71 spells out its minimum content: the logic of the processing, its significance, the envisaged consequences, and the right to obtain human intervention and to contest the decision.

The European Data Protection Board (EDPB) has clarified that the information provided must be meaningful (substantial and understandable), but that this requirement does not extend to disclosure of the source code, which would in any case be of little use to the data subject.

The data controller must communicate the objectives of the system, its general performance criteria, the implications of the diagnostic recommendation for the data subject, and the role of human intervention in the decision-making process.
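The disclosure items just listed can be pictured as a structured record. The sketch below is purely illustrative, not a legal template: the class name, field names and completeness check are assumptions made for the example, loosely tracking Articles 13(2)(f), 14(2)(g) and 15(1)(h) GDPR and Recital 71.

```python
from dataclasses import dataclass, asdict

@dataclass
class AlgorithmicDisclosure:
    """Illustrative record of the minimum transparency content owed
    to data subjects about an automated clinical decision system."""
    system_purpose: str          # objectives of the system
    logic_summary: str           # general logic, in plain language
    performance_criteria: str    # general accuracy/validation criteria
    consequences: str            # significance of the output for the patient
    human_oversight: str         # role of the clinician in the decision

    def is_complete(self) -> bool:
        """Information is 'meaningful' only if every item is filled in."""
        return all(value.strip() for value in asdict(self).values())

disclosure = AlgorithmicDisclosure(
    system_purpose="Early detection of cancer risk",
    logic_summary="Statistical model trained on prior imaging outcomes",
    performance_criteria="Validated sensitivity/specificity on clinical data",
    consequences="A high-risk score may lead to further diagnostic tests",
    human_oversight="A radiologist reviews and may override every output",
)
print(disclosure.is_complete())  # True
```

A record like this could double as documentation for accountability purposes: if any field cannot be filled in, the transparency obligation is, by definition, not being met.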

The position of the Italian Data Protection Authority

The Garante, consistent with the principles outlined above, has repeatedly traced algorithmic transparency back to the principles of lawfulness, fairness and transparency set out in Article 5 GDPR and to the accountability principle in Article 24 GDPR: it is therefore an integral part of the compliance framework, not an additional element.

Confirming this approach, in the Opinion delivered in Case C-203/22, concerning the use of automated credit-scoring systems, the Advocate General of the Court of Justice of the European Union reiterated that the data subject's right of access to information on automated processing, under Articles 15 and 22 of Regulation (EU) 2016/679, cannot be circumvented merely by invoking the trade or industrial secrets of the system provider.

Although a balance must be struck with the protection of intellectual property and trade secrets, those needs cannot lead to a significant compression of the right to transparency: the data subject must in any case be guaranteed the opportunity to know the main criteria, the relevant parameters and the general logic of the automated decision in a clear and understandable way, without needing to obtain the algorithm or the source code itself.

Algorithmic explanation in healthcare, informed consent and medical liability

Explainability, opacity and medical liability: the regulatory framework

The much-debated topic of algorithmic explainability in healthcare must be analyzed across three simultaneous and overlapping fronts of legal risk.

The first, as outlined above, is the violation of the GDPR due to insufficient transparency of information.

The second concerns violations of Law No. 219/2017 on informed consent, which requires healthcare professionals to provide patients with complete, up-to-date and understandable information about the proposed treatment: a doctor using "opaque" diagnostic algorithms can hardly meet this requirement, because he or she cannot explain what he or she does not understand. Consent given in these terms may be formally valid but substantively ineffective, and therefore open to challenge in litigation.

Professional responsibility

The third concerns professional liability under Law No. 24/2017 (Gelli-Bianco): faced with a medical decision based on the output of an inexplicable algorithm, the allocation of liability between the doctor, the healthcare facility and the software provider remains an area largely unexplored by case law, but it is governed by a framework that does not exempt the provider from responsibility for its system.

These three dimensions of risk do not merely coexist: they overlap and compound, producing a combined legal exposure that cannot be managed with isolated tools.

The entry into force of Regulation (EU) 2024/1689 on artificial intelligence (AI Act) introduces an additional layer of regulation that does not replace but complements the GDPR framework, with specific obligations for systems classified as high-risk.

Clinical decision support systems and predictive diagnostic systems fall within Annex III of the Regulation, with further obligations regarding detailed technical documentation, logging to ensure traceability, qualified human oversight, transparency and explainability requirements, and a fundamental rights impact assessment prior to deployment (Article 27).

The principle of ethics by design – which recalls and strengthens privacy by design under Article 25 GDPR – requires that explainability, traceability and verifiability be built into the design of the system from the outset, not added afterwards as a legal remedy.

Algorithmic governance and accountability in health facilities

Responding to the regulatory framework described above requires a systematic, necessarily proactive and interdisciplinary approach.

An essential starting point is drawing up a complete inventory of the algorithmic systems in use, with a risk classification under Annex III of the AI Act and the GDPR: without this mapping, it is impossible to assess the facility's overall regulatory exposure.
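An inventory entry of this kind could be modelled as follows. This is a minimal sketch under stated assumptions: the record fields, the single "influences patient decisions" trigger and the coarse classification rule are illustrative simplifications, not the actual Annex III test, which requires a case-by-case legal analysis.

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    HIGH = "high-risk (AI Act, Annex III)"
    LIMITED = "limited risk (case-by-case review needed)"

@dataclass
class AISystemRecord:
    """One entry in the facility's inventory of algorithmic systems."""
    name: str
    supplier: str
    clinical_use: str
    influences_patient_decisions: bool  # does it steer diagnosis/treatment?

def classify(record: AISystemRecord) -> RiskClass:
    """Coarse screening rule: systems that influence diagnosis or
    treatment decisions are flagged as high-risk; everything else is
    parked for individual legal review rather than cleared outright."""
    if record.influences_patient_decisions:
        return RiskClass.HIGH
    return RiskClass.LIMITED

triage = AISystemRecord(
    name="ER triage scorer",
    supplier="ACME Health",          # hypothetical supplier
    clinical_use="emergency triage",
    influences_patient_decisions=True,
)
print(classify(triage).value)  # high-risk (AI Act, Annex III)
```

The value of such a register is less in the classification rule itself than in forcing every deployed system to be named, attributed to a supplier and assessed at all.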

The second pillar is conducting specific DPIAs for each AI clinical decision support system, including an assessment of the model's logic, of the training datasets – with particular attention to the risk of discriminatory bias – and of the human oversight measures.

At the contractual level, the relationship with AI system providers must be governed by clauses that guarantee the data controller effective control over the solutions used: in particular, the right of access to technical documentation for compliance purposes, the obligation of prior notification of changes to models or training data, and guarantees enabling the audit of automated processes, in accordance with Articles 24, 28 and 32 of Regulation (EU) 2016/679 and the principles of accountability and privacy by design.

The need to establish a technical-ethical committee

In this scenario, the creation of a technical-ethical committee for the governance of AI in medicine – bringing together multidisciplinary skills, including doctors, lawyers, IT engineers and patient representatives – can coordinate procurement evaluations, organize impact assessment activities, monitor the evolution of the systems in use and ensure that policies and algorithmic documentation are continuously kept up to date.

Within this structure, the Data Protection Officer (DPO) serves as a qualified technical interlocutor and can participate in the decision-making process in an advisory and supervisory capacity, in accordance with Articles 38 and 39 GDPR, helping to ensure compliance for high-risk processing and the correct integration of technical and legal requirements.

The "right to explanation" in healthcare thus marks the real difference between a facility that merely adopts digital tools and one that assumes full legal, clinical and organizational responsibility for them.

The regulatory convergence of Regulation (EU) 2016/679, Regulation (EU) 2024/1689 (AI Act), Law No. 219/2017 on informed consent and Law No. 24/2017 on the safety of care does not produce redundancy, but creates a multi-level system of guarantees protecting a single underlying value: the patient's right to know and to control the decisions that affect his or her health.

In this context, algorithmic explanation is not a matter of mere technical transparency, but a necessary condition for the validity of informed consent, the lawfulness of personal data processing, the safety of care and the correct allocation of professional and organizational responsibilities.

Healthcare organizations therefore need to integrate data protection, risk management and clinical accountability, developing governance models that prevent technological innovation from weakening patient protection.

The transition from digitized healthcare to truly accountable healthcare does not mean renouncing predictive tools, but placing them within an effective accountability system, in which artificial intelligence supports clinical judgment without replacing its intelligibility, and in which every algorithm-assisted decision remains explainable, verifiable, contestable and subject to legal control.
