Advata Research
We are a group of clinicians, research scientists, engineers, and product leaders working to expand machine learning (ML) applications in healthcare. We explore diverse elements of AI to build toward secure, explainable, fair, and responsible outcomes.
We work closely with our product and engineering leaders to integrate and expand our product capabilities through applications of responsible AI, recommendation systems, phenotyping, imaging, time series, and sequence models in healthcare.

fairMLHealth: Measuring Bias in Healthcare ML is Critical for Fair and Equitable Outcomes
September 02, 2021
Advata has released fairMLHealth, an open source tool with tutorials and videos to support fair and equitable design and outcomes in healthcare ML
#fairmlhealth #responsibleai

Pillars of Responsible AI in Healthcare
June 23, 2021
How Advata is meeting the demands of more trustworthy and accountable AI using six distinct pillars: explainability, fairness, robustness, privacy, security, and transparency
#responsibleai #pillars


Pillars of Responsible AI
Meeting the demands of more trustworthy & accountable AI

Explainability
What factors are driving the model’s prediction?
Suppose a model predicts that a patient has a high risk of dying within the next three months, but the physician disagrees with this assessment. In this case, the physician would need to know why the model made this prediction in order to take informed action.
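
As a minimal sketch of this kind of factor attribution, a feature-attribution library such as SHAP can rank the inputs behind a single patient's prediction. The model, data, and feature names below are synthetic stand-ins, not Advata's production pipeline:

```python
# Minimal sketch: attribute one patient's predicted risk to input features
# using SHAP. All data and feature names here are synthetic illustrations.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["age", "creatinine", "num_admissions", "ejection_fraction"]
X = rng.normal(size=(500, len(features)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic 3-month mortality label

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer returns one additive contribution per feature per patient.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])
for name, value in zip(features, contributions[0]):
    print(f"{name:>20}: {value:+.3f}")  # positive values push predicted risk up
```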

Fairness
Does the model perform similarly on vulnerable groups?
What if the model has substantially lower predictive performance for minority or vulnerable patients? Fair ML models are needed to ensure equal treatment of various populations.
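
A minimal sketch of what such a check involves, assuming a fitted model's risk scores and a group label per patient: compute the same metrics separately for each group and compare. The data below are synthetic; the fairMLHealth tool mentioned above automates richer comparisons of this kind.

```python
# Minimal sketch: compare predictive performance across patient groups.
# Scores, labels, and group assignments here are synthetic.
import numpy as np
from sklearn.metrics import recall_score, roc_auc_score

def group_report(y_true, y_score, groups, threshold=0.5):
    """Print AUC and recall separately for each patient group."""
    y_pred = (y_score >= threshold).astype(int)
    for g in np.unique(groups):
        m = groups == g
        print(f"group={g}: n={m.sum()}, "
              f"AUC={roc_auc_score(y_true[m], y_score[m]):.3f}, "
              f"recall={recall_score(y_true[m], y_pred[m]):.3f}")

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)
y_score = np.clip(0.3 * y_true + rng.normal(0.4, 0.2, 1000), 0, 1)
groups = rng.choice(["A", "B"], 1000)  # e.g. a demographic attribute
group_report(y_true, y_score, groups)
```

Large gaps between the rows are the signal: a model whose recall is much lower for one group may systematically under-serve it.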

Robustness
Has the model been developed with sufficient data?
If a model is built on data from a New York population, how will it fare for the African American population in Alabama? Data quality may differ, and insufficient data may have been collected. Such a model would need to be tested for robustness across cohorts.
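
One simple form such testing can take, sketched below with synthetic cohorts: train on one population, then evaluate on a demographically shifted one and watch for a performance drop. The two cohorts stand in for, say, the New York and Alabama populations:

```python
# Minimal sketch: robustness check via cross-cohort evaluation.
# Both cohorts are synthetic; in the shifted cohort, different features
# drive the outcome, mimicking a population the model never saw.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

def make_cohort(weights, n=800):
    X = rng.normal(size=(n, 5))
    y = (X @ weights > 0).astype(int)
    return X, y

X_train, y_train = make_cohort(np.array([1.0, 1.0, 0.0, 0.0, 0.0]))
X_shift, y_shift = make_cohort(np.array([0.3, 0.3, 1.0, 0.0, 0.0]))

model = LogisticRegression().fit(X_train, y_train)
print("in-cohort AUC:   ",
      round(roc_auc_score(y_train, model.predict_proba(X_train)[:, 1]), 3))
print("cross-cohort AUC:",
      round(roc_auc_score(y_shift, model.predict_proba(X_shift)[:, 1]), 3))
```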

Privacy
Is the model output protected along with other patient data?
AI models can be used to make inferences about a patient that may be harmful if disclosed publicly. Model inversion would allow a malicious entity to infer the values of sensitive attributes, such as a rare disease or disability, which could in turn be used by other parties to discriminate against the patient.

Security
Are all data sources monitored and secure?
Data for some models may come from multiple sources. If a malicious actor gains access to one of those sources, they can influence how the model is trained, resulting in incorrect predictions and potential harm. Thus, the security of AI systems is paramount.

Transparency
Are the AI system and its infrastructure transparent?
Suppose a patient is sent to hospice care but survives for more than a year. In this case, the AI system, its pipeline, and its infrastructure should be auditable. Accountability ensures that systems are responsible and can be improved in the future.
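
As a sketch of what auditability can mean at the prediction level, each model output can be logged with enough provenance (model version, input fingerprint, timestamp) to reconstruct and review it later. Field names and values here are hypothetical, not a description of Advata's infrastructure:

```python
# Minimal sketch: an audit-log entry tying a prediction to its inputs
# and model version. All field names and values are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version, features, prediction):
    """Build one auditable record for a single model prediction."""
    payload = json.dumps(features, sort_keys=True)
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(payload.encode()).hexdigest(),
        "prediction": prediction,
    }

entry = audit_record("hospice-risk-v1.2", {"age": 82, "num_admissions": 4}, 0.91)
print(json.dumps(entry, indent=2))
```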
Our Research
Presented at leading forums around the globe
