Key message
  • AI integration in rheumatology faces unique ethical and regulatory challenges arising from the complexity of longitudinal patient data.

Dear Editor, The integration of artificial intelligence (AI) into rheumatology practice raises important ethical considerations that warrant careful attention [1–3]. The chronic and complex nature of rheumatologic conditions presents unique challenges for AI implementation that deserve focused discussion.

Rheumatology stands apart from other medical specialties due to the longitudinal accumulation of patient data over years or decades of disease management. Patients with conditions like rheumatoid arthritis, systemic lupus erythematosus, and other autoimmune diseases generate vast amounts of data through regular monitoring of disease activity, medication responses and periodic flares. This longitudinal data presents both opportunities and ethical challenges for AI applications.

The European Union (EU) AI Act [4] stands as the first comprehensive legislation specifically addressing AI systems in healthcare and other domains. While other jurisdictions rely on policy frameworks and guidelines, this legislation establishes binding requirements for AI development and deployment. This distinction is particularly relevant for rheumatology, where AI applications could involve complex decision support systems managing sensitive patient data over extended periods.

Disease flare prediction exemplifies the complex interplay between AI capabilities and regulatory requirements in rheumatology. Under the EU AI Act, such predictive systems would be classified as ‘high-risk’ AI because they influence medical decisions and patient outcomes [5, 6]. This classification brings significant regulatory obligations, including requirements for robust risk management systems, high-quality training data and detailed technical documentation. For flare prediction specifically, these requirements are crucial given the serious consequences of prediction errors. Early prediction could revolutionize pre-emptive treatment, but a false-positive prediction might lead to unnecessary treatment intensification with potential adverse effects, whereas a missed prediction could result in preventable organ damage. The high-risk designation means these systems must maintain rigorous logging capabilities and enable human oversight of predictions, allowing rheumatologists to understand and potentially override AI recommendations when clinically appropriate.
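To make the logging and oversight obligations concrete, the following is a minimal sketch of how a flare-prediction tool might pair every model output with a documented clinician decision. All class, field and identifier names here are hypothetical illustrations, not terms defined by the EU AI Act or by any existing product:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class PredictionRecord:
    """One audit entry: an AI flare-risk estimate plus the clinician's response."""
    patient_id: str
    flare_risk: float                      # hypothetical model output in [0, 1]
    timestamp: str
    clinician_decision: Optional[str] = None
    override_reason: Optional[str] = None


class OversightLog:
    """Retains every prediction and every human decision, illustrating the
    logging and effective-human-oversight duties for high-risk systems."""

    def __init__(self):
        self.records = []

    def log_prediction(self, patient_id: str, flare_risk: float) -> PredictionRecord:
        rec = PredictionRecord(patient_id, flare_risk,
                               datetime.now(timezone.utc).isoformat())
        self.records.append(rec)
        return rec

    def record_decision(self, rec: PredictionRecord, accept: bool, reason: str = ""):
        # The rheumatologist remains the final decision-maker; an override
        # must be possible and must itself be documented.
        rec.clinician_decision = "accepted" if accept else "overridden"
        rec.override_reason = reason or None


log = OversightLog()
rec = log.log_prediction("patient-001", flare_risk=0.82)
log.record_decision(rec, accept=False,
                    reason="Recent labs and clinical picture do not support imminent flare")
```

The point of the sketch is structural: neither the prediction nor the clinician's override is discarded, so both remain available for the audits and post-deployment monitoring the Act requires.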

The regulatory landscape for medical AI varies significantly across jurisdictions. The EU AI Act establishes a risk-based regulatory approach with binding legal requirements. In contrast, the US Food and Drug Administration implements a product-based classification system through regulatory guidelines, focusing on safety and effectiveness verification for AI-driven medical devices.

The UK has adopted a distinctive sector-specific approach, combining targeted oversight through existing regulatory bodies like the Medicines and Healthcare products Regulatory Agency with new coordination mechanisms through the Digital Regulation Cooperation Forum. This is supported by the AI Safety Institute and upcoming AI Bill, maintaining flexibility while implementing mandatory requirements for high-risk systems.

The implications of AI regulation for rheumatology extend beyond flare prediction. Any AI system used for medical diagnosis, patient triage or treatment planning falls under the high-risk category and must meet stringent requirements for accuracy, robustness and cybersecurity (Table 1). Healthcare providers deploying these systems must implement quality management systems, ensure ongoing monitoring and maintain detailed documentation of the AI system’s development and validation.

Table 1.

Ethical and safety considerations for AI systems in rheumatology under the EU AI Act.

Challenge | Potential risks | Ethical aspects (EU AI Act) | Safety aspects (EU AI Act)
Bias in training data | Discriminatory outcomes due to underrepresented populations | Ensure diversity and inclusivity in training datasets | Robust dataset quality checks and representation
Prediction errors (false positives/negatives) | False alarms or missed disease flares impacting patient care | Mitigate harm by minimizing prediction errors | Rigorous testing to avoid critical prediction errors
Data privacy and security | Breach of sensitive personal and genetic health data | Adhere to the General Data Protection Regulation and protect patient confidentiality | Edge computing to ensure local data processing and security
Transparency and explainability | Black-box algorithms leading to lack of trust and usability | Provide clear and understandable AI system outputs | Transparency in model decision-making processes
Human oversight | Overreliance on AI without sufficient human intervention | Enable effective human control and accountability | Implement mechanisms for clinician overrides
Compliance with EU AI Act requirements | Non-compliance with mandatory high-risk system regulations | Follow strict documentation, validation and risk management | Regular audits and adherence to regulatory standards
Lifecycle monitoring and documentation | Decreased system performance or safety post-deployment | Maintain transparency and accountability throughout the AI lifecycle | Post-deployment monitoring and periodic revalidation of AI systems

Data security takes on heightened importance in rheumatology given the sensitive nature of genetic information and autoimmune histories. The EU AI Act mandates specific data governance practices for high-risk AI systems, including requirements for training data quality and privacy protection [1]. For rheumatology, where the chronic nature of diseases means breaches could expose decades of personal health information, these requirements are particularly relevant. One promising approach to address these concerns is the implementation of locally run algorithms within hospital systems. Edge computing and local large language models that operate entirely within a healthcare institution’s infrastructure can process sensitive patient data without exposing it to external servers, potentially offering a path to compliance with both the AI Act’s requirements and data protection regulations [2].
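As an illustration of the locally run approach described above, the sketch below shows two simple safeguards an institution might combine: an allow-list that refuses to send patient data to any inference endpoint outside the hospital's own infrastructure, and keyed pseudonymization so longitudinal records can be linked without exposing identity. The host names and the helper functions are hypothetical examples, not a prescribed or existing implementation:

```python
import hashlib

# Hypothetical hosts inside the institution's own infrastructure
ALLOWED_HOSTS = {"localhost", "inference.internal.hospital"}


def assert_local(endpoint_host: str) -> None:
    """Refuse any inference endpoint outside the institution, so sensitive
    data are never exposed to external servers."""
    if endpoint_host not in ALLOWED_HOSTS:
        raise PermissionError(
            f"Refusing to send patient data to external host: {endpoint_host}")


def pseudonymize(patient_id: str, site_secret: bytes) -> str:
    """Keyed hash: the same patient always maps to the same token within a
    site, allowing decades of records to be linked without storing the
    identifier itself alongside the data used for model training."""
    return hashlib.sha256(site_secret + patient_id.encode()).hexdigest()[:16]
```

Real deployments would need far more (access control, encryption at rest, audit trails), but the pattern conveys the core idea: compliance is easier to demonstrate when sensitive data never leave the institution's boundary by construction.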

A crucial element that requires further attention is health equity. AI applications in rheumatology risk exacerbating existing disparities in disease detection, treatment access and patient outcomes if training data are not adequately representative. Many rheumatic diseases present differently across ethnic groups, yet AI training datasets may not reflect this diversity. Without robust safeguards, AI-driven tools risk reinforcing systemic biases rather than mitigating them. Addressing this concern requires greater transparency in dataset composition, regulatory mandates for inclusivity and ongoing performance monitoring to assess AI-driven disparities in healthcare outcomes [8].

The doctor–patient relationship in rheumatology, built over years of managing chronic disease, must not be undermined by AI implementation. Patient perspectives remain central to successful AI integration. Early engagement through patient panels in AI development can help identify concerns and priorities. Surveys indicate that while patients welcome AI’s potential to enhance care, they worry about privacy and the ‘dehumanization’ of medical care. The EU AI Act’s emphasis on human oversight aligns with this concern, requiring that high-risk AI systems be designed to be effectively overseen by humans [6]. While AI can process vast amounts of longitudinal data, the rheumatologist’s accumulated knowledge of individual patient patterns and responses remains invaluable. Clear frameworks for integrating AI insights with clinical expertise are essential and must satisfy both regulatory requirements and clinical needs.

AI developers face complex compliance requirements, including regular audits, transparency in algorithm development and robust documentation of validation processes. This extends to ongoing monitoring and updates of AI systems post-deployment. The question of liability remains critical—when AI contributes to clinical decisions, clear frameworks are needed to delineate responsibility between developers, healthcare providers and clinicians. Insurance policies must evolve to cover AI-specific risks while maintaining affordability of implementation.

Looking ahead, the emergence of the EU AI Act as the first comprehensive AI legislation creates an opportunity for global harmonization of AI governance in healthcare. International collaborations between regulatory agencies, professional societies (such as EULAR and ACR) and AI developers could facilitate the development of unified ethical frameworks. The World Health Organization continues to develop guidelines and technical standards that bridge different regulatory approaches, potentially facilitating greater international alignment [7].

For rheumatology, responsible adoption means developing systems that can process complex longitudinal data while maintaining privacy, predict disease trajectories while allowing for clinical judgment and enhance rather than replace the doctor–patient relationship. The unique challenges of rheumatology—from the variability of disease manifestations to the complexity of treatment decisions—make it an important testing ground for responsible AI implementation in medicine.

Data availability

Data are available upon reasonable request by any qualified researchers who engage in rigorous, independent scientific research and will be provided following review and approval of a research proposal and Statistical Analysis Plan (SAP) and execution of a Data Sharing Agreement (DSA). All data relevant to the study are included in the article.

Authors’ contributions

Conceptualization: V.V., L.G., E.B.; Methodology: V.V.; Supervision: F.I.; Writing – original draft: V.V., L.G., E.B.; Writing – review & editing: V.V., F.I., L.G., S.M.

Funding

No specific funding was received from any funding bodies in the public, commercial or not-for-profit sectors to carry out the work described in this manuscript.

Disclosure statement: The authors have declared no conflicts of interest.

References

1. Aboy M, Minssen T, Vayena E. Navigating the EU AI Act: implications for regulated digital medical products. NPJ Digit Med 2024;7:237.

2. Venerito V, Bilgin E, Iannone F, Kiraz S. AI am a rheumatologist: a practical primer to large language models for rheumatologists. Rheumatology (Oxford) 2023;62:3256–60.

3. Venerito V, Gupta L. Large language models: rheumatologists’ newest colleagues? Nat Rev Rheumatol 2024;20:75–6.

4. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance). 2024. http://data.europa.eu/eli/reg/2024/1689/oj/eng (6 February 2025, date last accessed).

5. Gilbert S. The EU passes the AI Act and its implications for digital medicine are unclear. NPJ Digit Med 2024;7:135.

6. Freyer O, Wiest IC, Kather JN, Gilbert S. A future role for health applications of large language models depends on regulators enforcing safety standards. Lancet Digit Health 2024;6:e662–72.

7. Kuziemsky CE, Chrimes D, Minshall S, Mannerow M, Lau F. AI quality standards in health care: rapid umbrella review. J Med Internet Res 2024;26:e54705.

8. Ho CWL, Caals K. How the EU AI act seeks to establish an epistemic environment of trust. Asian Bioeth Rev 2024;16:345–72.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
