Algorithmic audit of Laura Robot

Since 2016, the Laura Robot has analyzed more than 8.6 million visits in 40 clinical and hospital centers across several Brazilian states. Eticas audited version 1.0 of the Laura application, created in 2017.
The main objective of the Laura system is to provide early warning of clinical deterioration that could lead to death, with the aim of reducing mortality and hospital service costs. It is an Artificial Intelligence system that classifies a patient's risk of clinical deterioration after analyzing the indicators from the patient's last five vital-sign collections.
Laura’s algorithmic audit was focused on exploring possible risks of algorithmic bias or discrimination.
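To make the system's input-output contract concrete, here is a minimal sketch of a classifier that takes a patient's last five vital-sign collections and returns a risk class. The field names, thresholds, and scoring logic are all illustrative assumptions, not Laura's actual model or schema.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical vital-sign collection; the fields and normal ranges
# below are illustrative, not Laura's actual schema.
@dataclass
class VitalSigns:
    heart_rate: float    # beats per minute
    resp_rate: float     # breaths per minute
    systolic_bp: float   # mmHg
    temperature: float   # degrees Celsius

def classify_risk(last_five: list[VitalSigns]) -> str:
    """Toy stand-in for a deterioration classifier: score the last
    five vital-sign collections and map the score to a risk class."""
    assert len(last_five) == 5, "the system analyzes the last five collections"
    # Illustrative scoring: count how many averaged vitals are out of range.
    score = 0
    if mean(v.heart_rate for v in last_five) > 100:
        score += 1
    if mean(v.resp_rate for v in last_five) > 22:
        score += 1
    if mean(v.systolic_bp for v in last_five) < 90:
        score += 1
    if mean(v.temperature for v in last_five) > 38.0:
        score += 1
    return ["low", "moderate", "high", "high", "critical"][score]
```

The point of the sketch is the interface, not the rule: the audited system consumes a fixed window of five vital-sign collections and emits a risk classification that staff then act on.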

The audit was conducted in four phases:

Preliminary study: we gathered basic information about the system and the needs detected for its development and implementation.

Mapping of the situation: we examined whether the system met a list of basic requirements for being audited, and whether the parties responsible for its design, development and implementation were willing to provide the information necessary to carry out the audit.

Analysis plan: several meetings and information exchanges were held between Eticas Consulting and Laura representatives to define and agree with the client on the terms (how and what for) and the estimated deadlines (when) for the audit.

Analysis and final report: including the conclusions and recommendations resulting from the audit.

What did we find after conducting the audit?

The system interface is clear and consistent.
There is good acceptance of the system by users and they consider that it helps to improve clinical performance.
The use of the system has enabled further digitization of hospital services and the generation of knowledge about clinical performance.
Knowledge about the scope and limitations of the algorithmic model (e.g., accuracy by group) is not being communicated comprehensively to the staff.
Low false-negative rates are observed: the Laura system rarely underestimates the risk of clinical deterioration.
The system predicts death at a higher rate than it is observed, a conservative behavior that minimizes the risk of false negatives. This is accentuated in males aged 60 years and older.
Correct death predictions are lower for females than for males.
Underestimation of the risk of clinical deterioration is low overall, but it is present in patients between 18 and 59 years of age, and especially in women.

Recommendations from qualitative analysis:

Conduct inter-hospital surveys to establish limitations in relation to the intelligibility, clarity and consistency variables, and incorporate the results of these surveys into staff training and technology design.
Test the frequency of use of the system in different hospitals and clinical care areas.
Conduct regular validations on the impact of the Laura system on the quality of clinical data in the digital hospital record.
Incorporate in a systematic and understandable way the information on the functioning of the algorithmic model, detailing its objectives, data, methodology, description of the algorithm and the parameters of performance and error evaluation, for a general public. This information should be communicated to patients as part of the hospital’s privacy policy.
Review the retention period for personal data and follow the principle of data minimization, as well as conduct data protection training for Laura system members and hospital staff.
Assess the security of the system and include mechanisms for authentication, logging and tracking of access to the system, with monitoring of possible unauthorized access.
Review the pseudonymization policy to ensure the highest level of confidentiality.

Recommendations from quantitative analysis:

Monitor groups with few patients and exclude those with very few, as they cannot be robustly modeled.
Since the system tends to under-protect females between 18 and 39 years of age, alert system administrators to this characteristic; that is, warn hospital staff that the system underestimates the risk for this group.
Seek further calibration of the model, particularly around the value used as the cut-off point, and contrast its effect across the intersecting groups (age and sex) analyzed.
Ensure the necessary preparation of workers who interact with the model during the alignment process.
Explain the objective of the model to hospital workers and patients, making it clear that it is not an autonomous decision-making system but an objective support for decision-making.
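The calibration recommendation above can be sketched as a threshold sweep evaluated per intersecting group rather than on the whole population. Everything here is an illustrative assumption: the scores, labels, groups, and candidate cut-offs are hypothetical, not the audit's data.

```python
def fnr_at_threshold(scores, labels, threshold):
    """False-negative rate when deterioration is predicted for
    scores >= threshold. `scores` and `labels` are parallel lists;
    labels are True for patients who actually deteriorated."""
    positives = [s for s, y in zip(scores, labels) if y]
    if not positives:
        return 0.0
    return sum(s < threshold for s in positives) / len(positives)

def sweep_cutoff(data_by_group, thresholds):
    """For each candidate cut-off, report the false-negative rate per
    (sex, age) group, so a new threshold is checked against every
    intersecting group instead of only the overall population."""
    return {
        t: {group: fnr_at_threshold(scores, labels, t)
            for group, (scores, labels) in data_by_group.items()}
        for t in thresholds
    }
```

A cut-off chosen on aggregate performance can look acceptable overall while leaving one intersecting group (for example, younger women) with a markedly higher miss rate; sweeping per group makes that trade-off visible before the threshold is changed.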