What it's about:
If you're building AI and aiming to reduce bias, you first need to be clear about what "fair" actually means for your system.
Here's the thing: there isn't one single definition of fairness. Equal opportunity? Demographic parity? Equal accuracy? Procedural fairness? These aren't just academic concepts; they're genuinely different approaches, and optimising for one can work against another.
This board helps you explore these four definitions, understand the trade-offs, and make a deliberate choice that fits your context.
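To make the trade-off concrete, here is a minimal sketch in Python, using made-up data and group labels purely for illustration: the same set of predictions satisfies demographic parity (equal selection rates across groups) while violating equal opportunity (equal true-positive rates).

```python
# Illustrative sketch with hypothetical data. Each record is
# (group, actual outcome, model prediction), where 1 = positive
# (e.g. loan approved). Groups "A" and "B" are invented.
records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 1), ("B", 0, 0), ("B", 0, 0),
]

def selection_rate(group):
    """Demographic parity compares this: the share of a group predicted positive."""
    preds = [pred for g, _, pred in records if g == group]
    return sum(preds) / len(preds)

def true_positive_rate(group):
    """Equal opportunity compares this: the share of truly positive cases predicted positive."""
    hits = [pred for g, actual, pred in records if g == group and actual == 1]
    return sum(hits) / len(hits)

for group in ("A", "B"):
    print(f"Group {group}: selection rate {selection_rate(group):.2f}, "
          f"true positive rate {true_positive_rate(group):.2f}")

# Output:
# Group A: selection rate 0.40, true positive rate 1.00
# Group B: selection rate 0.40, true positive rate 0.50
```

Both groups are selected at the same rate, so demographic parity holds; yet qualified members of group B are approved only half as often as qualified members of group A. Fixing that (approving B's second qualified applicant) would raise B's selection rate to 0.60 and break parity. That is the kind of tension the board asks you to confront.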
What it helps you achieve:
A clear, documented fairness definition that your whole team understands and agrees on. No more vague commitments to "fair AI" without knowing what that actually means in practice.
You'll walk away with an explicit decision and the rationale behind it, which you can then use to guide design, testing and refinement.
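For instance, the documented decision can be encoded as a simple machine-checkable record. The sketch below is hypothetical, assuming a team that chose equal opportunity with an agreed tolerance; every field name and threshold is illustrative, not something the board prescribes.

```python
# Hypothetical record of a team's fairness decision; the field names,
# threshold, and rationale are illustrative placeholders.
FAIRNESS_DECISION = {
    "definition": "equal_opportunity",
    "rationale": "False negatives deny qualified applicants; we prioritise "
                 "equal true-positive rates across groups.",
    "max_tpr_gap": 0.05,  # largest tolerated gap, agreed in the session
}

def meets_equal_opportunity(tpr_by_group):
    """Check that the widest true-positive-rate gap stays within tolerance."""
    rates = list(tpr_by_group.values())
    return max(rates) - min(rates) <= FAIRNESS_DECISION["max_tpr_gap"]

# Usable directly in a test suite:
assert meets_equal_opportunity({"A": 0.91, "B": 0.88})      # gap 0.03: passes
assert not meets_equal_opportunity({"A": 0.91, "B": 0.80})  # gap 0.11: fails
```

Writing the decision down in this form keeps later iterations honest: a retrained model either satisfies your team's agreed definition or visibly fails the test.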
Who benefits most:
Product managers, data scientists, AI ethics leads, and cross-functional teams building AI systems, particularly in high-stakes areas like recruitment, lending, or healthcare where getting fairness wrong has real consequences for real people.
How to use it:
Set aside 45 minutes for your first session. Work through each fairness definition with your team, discuss the trade-offs honestly, and document your choice.
Good news: this becomes your foundation for all future AI projects; after that, you'll only need 15 minutes to review and confirm it still applies.
Part of the AI Bias Mitigation Workshop series (Boards 1 to 4).