MLTest for model fairness — Lakera releases fairness assessments for computer vision.
This article was originally posted on our company website. Lakera’s developer platform enables ML teams to ship fail-safe computer vision models.
Evaluating a machine learning model requires going beyond standard model tests and metrics such as accuracy, precision, and recall. To deploy responsible and aligned ML models, we need to understand whether the models we have built exhibit any biases during operation.
Bias can manifest in numerous ways throughout a model’s lifecycle, from data collection and annotation to the features that a model uses for prediction. It is therefore essential to test for both data bias and model bias throughout development and operation.
This becomes even more important as we increasingly rely on pre-trained models, which may have been trained on biased datasets and may encode discriminatory behavior that surfaces during operation (yes, our favorite ImageNet is also affected).
What is a fair ML model?
We asked ChatGPT how it would describe AI fairness:
AI fairness refers to the idea that artificial intelligence (AI) systems should not discriminate against certain groups of people based on characteristics such as race, gender, age, or other factors. This means that when an AI system is used to make decisions that affect people’s lives, such as deciding who gets a job or who is eligible for a loan, it should treat all individuals equally and not make decisions based on biased or prejudiced assumptions. Ensuring AI fairness is important because biased AI systems can perpetuate and even amplify existing inequalities in society.
We think this is actually a pretty good description — let’s stick with it for now.
In the context of computer vision, fairness is discussed most often for healthcare applications, face recognition systems, and the EHS sector. But even models that do not operate on data from these domains can suffer from severe biases in operation. If we are not aware of these biases, models may underperform significantly and erode the trust of end users. Given such risks, it is well worth making fairness assessments a part of your ML testing processes.
With our recent release, MLTest lets you easily include a fairness assessment of your computer vision algorithm as part of your existing pipelines. Learn more here.
Implementing model fairness.
For good reason, governments, regulatory organizations, and standardization groups have released a plethora of content on what fairness is and how to evaluate it. Current regulations and proposals such as the EU Artificial Intelligence Act (EU AI Act) are not short of demands when it comes to ethics, fairness, and bias in ML models. Once these regulatory guidelines come into effect, teams around the world will have to provide detailed fairness assessments of their AI systems.
While the requirements for fairness seem relatively clear at this stage, we have found that there is still a large gap between defining what fairness is and actually implementing it for specific applications.
As a concrete example, ISO 24027, currently the leading international standard on AI bias, defines a good set of fairness metrics and what they look like for a generic binary classification case. But how do you take the proposed metrics from binary classification to image-based object detection? What about multi-class image segmentation?
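To make the binary case concrete, here is a minimal sketch of how a demographic parity gap can be computed for a generic binary classifier. This is purely illustrative: the function and the toy data are hypothetical and not part of MLTest.

import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    groups = np.unique(group)
    assert len(groups) == 2, "this toy example only compares two groups"
    rate_a = y_pred[group == groups[0]].mean()
    rate_b = y_pred[group == groups[1]].mean()
    return abs(rate_a - rate_b)

# Toy data: binary predictions and the group each sample belongs to.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["E_1", "E_1", "E_1", "E_1", "E_2", "E_2", "E_2", "E_2"])
print(demographic_parity_gap(y_pred, group))  # 0.75 vs. 0.25 -> 0.5

For object detection, even this simple metric raises questions that generic guidance does not answer: what counts as a positive prediction per object, and at which confidence and IoU thresholds?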
In other words, it’s actually quite tricky to evaluate model fairness for your specific models and use cases, especially when it comes to computer vision.
Lakera brings fairness to ML testing.
Knowing the struggles of assessing fairness for real-world computer vision systems, we accepted the challenge. We digested the most recent regulatory proposals, turned to the latest literature on ML fairness, and combined our regulatory experience with standards such as ISO 24027 to come up with a concrete implementation of model fairness in MLTest. The result is an implementation of state-of-the-art fairness metrics for computer vision models, and we are excited to release it to our users!
With this new update, users can easily add fairness tests to their MLTest pipelines with a single line of code:
options.add_model_fairness_test(metric="DemographicParity",
                                label="pedestrian",
                                dimension="ethnicity",
                                dimension_values=["E_1", "E_2"])
Our documentation includes more examples and a list of available metrics. You can now evaluate your object detection and image classification models on fairness dimensions such as demographic parity, equality of opportunity, equalized odds, and predictive equality.
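As a rough guide to what these notions mean, the sketch below compares standard confusion-matrix rates across two groups. It is an illustrative summary of the common definitions, not MLTest’s internal implementation, and all names in it are hypothetical.

def group_rates(y_true, y_pred):
    """Per-group rates for binary labels: positive-prediction rate, TPR, FPR."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return {
        "positive_rate": (tp + fp) / len(y_true),     # demographic parity compares this
        "tpr": tp / (tp + fn) if (tp + fn) else 0.0,  # equality of opportunity compares this
        "fpr": fp / (fp + tn) if (fp + tn) else 0.0,  # predictive equality compares this
    }

# Equalized odds asks for both the TPR and the FPR to match across groups,
# e.g. compare group_rates(...) for the samples with dimension value "E_1"
# against those with "E_2".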
Together with MLTest’s insights into model failure clusters, this lets you easily include a fairness assessment of your computer vision models as part of your existing pipelines. You can check today whether your computer vision models comply with ISO 24027 and other regulatory requirements.
The (im)possibility of fairness.
Building fair AI systems is often discussed as an important objective. Common questions are “how do we avoid bias?” and “how do we build systems that are fair?”
The truth is that it may be impossible to build fully fair ML models: some notions of fairness are fundamentally incompatible with each other, and we may not be able to eliminate bias in the first place.
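A small, made-up numerical example shows why: if the underlying rate of positive outcomes (the base rate) differs between groups, even a perfect classifier that satisfies equalized odds cannot satisfy demographic parity.

# Hypothetical base rates of the positive class in two groups.
base_rate = {"group_a": 0.5, "group_b": 0.1}

# A perfect classifier predicts exactly the true labels, so TPR = 1.0 and
# FPR = 0.0 in both groups and equalized odds is satisfied.
positive_prediction_rate = dict(base_rate)

# But the positive-prediction rates differ (0.5 vs. 0.1), so demographic
# parity is violated by a gap of 0.4. One criterion cannot be satisfied
# without breaking the other as long as the base rates differ.
gap = abs(positive_prediction_rate["group_a"] - positive_prediction_rate["group_b"])
print(gap)  # 0.4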
So the focus should be on surfacing model biases. Known model biases can be taken into account during decision-making. Unknown biases cannot. If we have transparency around model biases, we can put downstream mitigation strategies in place.
Where to go from here?
You can make Lakera’s fairness assessment a part of your everyday workflows and automatically check whether your models encode any biases. This reduces the risk that your models behave in undesired ways during operation and minimizes future compliance risks.
You can get access to MLTest here. If you have any questions or would like to continue the discussion around ML fairness, reach out to David at david@lakera.ai.