What it's about:
Before you can assess bias risk, you need to know who your AI actually affects. And it's rarely just your obvious users. Your stakeholders typically fall into four groups:
Direct users: the people actively using your AI system
Indirect users: people affected by AI decisions without directly interacting with the system
Internal teams: product managers, key decision-makers, QA testers, auditors, legal and compliance
External parties: regulators, advocacy groups, community organisations, media
Some of these groups are vulnerable. Some are edge cases your training data barely represents. This board helps you map all of them systematically, so nobody gets overlooked.
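If you keep the board digitally, one option is a structured record per stakeholder group so the map stays queryable. The Python sketch below is purely illustrative; the StakeholderGroup fields and the example entries are assumptions, not part of the workshop method.

```python
# A minimal sketch of one way to record the board digitally.
# All field names and example values are illustrative.
from dataclasses import dataclass

@dataclass
class StakeholderGroup:
    name: str
    category: str          # "direct", "indirect", "internal", or "external"
    vulnerable: bool       # flagged as a vulnerable population?
    in_training_data: str  # "well represented", "sparse", or "absent"
    notes: str = ""

stakeholders = [
    StakeholderGroup("loan applicants", "direct", vulnerable=False,
                     in_training_data="well represented"),
    StakeholderGroup("applicants' dependants", "indirect", vulnerable=True,
                     in_training_data="absent",
                     notes="affected by decisions but never appear in the data"),
]

# e.g. list every vulnerable group the training data barely covers
flagged = [s.name for s in stakeholders
           if s.vulnerable and s.in_training_data != "well represented"]
print(flagged)
```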
What it helps you achieve:
A comprehensive stakeholder map that identifies every group who interacts with or is affected by your AI.
You'll flag vulnerable populations, spot gaps in your training data representation, and categorise stakeholders by role: users, decision-makers, affected parties, and oversight bodies.
This map becomes the foundation for assessing where bias risk is highest.
Who benefits most:
Product managers, data scientists, UX researchers, and cross-functional teams building AI systems. Particularly valuable for teams working in areas where decisions affect people's opportunities, finances, or wellbeing.
How to use it:
Allow 60 minutes for your first session.
Brainstorm stakeholder groups across all four categories
Identify who's vulnerable or underrepresented
Compare each group's prominence in your stakeholder landscape with how well it appears in your training data, and document the gaps (see the sketch after these steps)
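If your training data can be labelled by stakeholder group, the comparison can be as simple as share versus share. The sketch below is a hypothetical Python illustration; the group names, expected shares, and the representation_gaps helper are all assumptions, not part of the board.

```python
# A minimal sketch of the representation check, assuming each training
# record can be labelled with a stakeholder group. Group names and
# population shares are hypothetical placeholders.
from collections import Counter

def representation_gaps(record_groups, expected_shares):
    """Compare each group's share of the training data with its
    expected share of the stakeholder landscape."""
    counts = Counter(record_groups)
    total = sum(counts.values())
    gaps = {}
    for group, expected in expected_shares.items():
        observed = counts.get(group, 0) / total
        gaps[group] = {
            "expected": expected,
            "observed": round(observed, 3),
            "gap": round(observed - expected, 3),  # negative = underrepresented
        }
    return gaps

# Hypothetical example: a group label per training record, and each
# group's rough share of the affected population.
records = ["urban_renter"] * 80 + ["rural_owner"] * 15 + ["recent_migrant"] * 5
expected = {"urban_renter": 0.55, "rural_owner": 0.30, "recent_migrant": 0.15}

for group, stats in representation_gaps(records, expected).items():
    print(group, stats)
```

A group with a large negative gap, like the recent migrants in this made-up example, is exactly the kind of underrepresented stakeholder the board is designed to surface.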
Good news: like Board 1, this map is reusable; future projects need only 20 minutes to review it and add any new groups.
Part of the AI Bias Mitigation Workshop series (Boards 1 to 4).