What it's about:
Not all bias risks are equal. Some stakeholder groups face a higher likelihood of bias because they're underrepresented in your data or fall into edge cases. Others face more severe consequences if bias occurs, perhaps because the stakes are high, the harm is hard to reverse, or they're already vulnerable.
This board helps you assess both dimensions for every stakeholder group, so you know where to focus your mitigation efforts.
You'll evaluate each group against:
Likelihood of bias: How well are they represented in your training data? Do they fall into edge case patterns? Is there historical bias baked into your data sources?
Severity of harm: What's at stake for them? Can they recover if something goes wrong? Are they already in a vulnerable position?
What it helps you achieve:
A risk heat map that plots every stakeholder group by likelihood and severity, placing each group in one of four zones: Critical, High, Medium, or Low.
You'll walk away with a clear, prioritised list showing exactly which groups need intervention first, so you're not trying to fix everything at once.
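If you want to mirror the board in a script or spreadsheet, a minimal Python sketch of the scoring and zoning logic might look like the one below. The 1-to-5 scales, the multiplicative risk score, the zone thresholds, and the example group names are illustrative assumptions, not something the workshop prescribes.

```python
from dataclasses import dataclass

# Illustrative sketch only: the 1-5 scales and zone thresholds are assumptions,
# not part of the workshop board itself.

@dataclass
class StakeholderGroup:
    name: str
    likelihood: int  # 1 (well represented, few edge cases) to 5 (high bias risk)
    severity: int    # 1 (minor, reversible harm) to 5 (severe, hard to reverse)

def risk_zone(group: StakeholderGroup) -> str:
    """Map a group's likelihood and severity scores to a heat-map zone."""
    score = group.likelihood * group.severity
    if score >= 16:
        return "Critical"
    if score >= 9:
        return "High"
    if score >= 4:
        return "Medium"
    return "Low"

def prioritise(groups: list[StakeholderGroup]) -> list[tuple[str, str]]:
    """Return groups ordered from highest to lowest combined risk."""
    ranked = sorted(groups, key=lambda g: g.likelihood * g.severity, reverse=True)
    return [(g.name, risk_zone(g)) for g in ranked]

if __name__ == "__main__":
    # Hypothetical stakeholder groups with example scores.
    groups = [
        StakeholderGroup("Non-native speakers", likelihood=4, severity=3),
        StakeholderGroup("Elderly users", likelihood=5, severity=4),
        StakeholderGroup("Internal staff", likelihood=2, severity=2),
    ]
    for name, zone in prioritise(groups):
        print(f"{name}: {zone}")
```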
Who benefits most:
Product managers, data scientists, risk and compliance teams, and anyone responsible for ensuring AI systems don't cause disproportionate harm. Essential for teams operating in regulated or high-stakes domains.
How to use it:
Allow 60 minutes. Work through each stakeholder group from Board 2, score its likelihood and severity, and plot it on the heat map.
This board is project-specific, so you'll repeat it for each new AI initiative, but the framework stays the same.
Part of the AI Bias Mitigation Workshop series (Boards 3 to 4).