Bias In, Bias Out: Ensuring AI Doesn’t Leave Diversity Behind
02 Apr 2025
Tech Academy
DAS25
AI-powered decision-making is shaping the future of business, but it is also creating invisible barriers to progression. Hiring, promotions, client onboarding, and financial decision-making are all increasingly driven by algorithms - yet those algorithms are often trained on data that reflects existing inequalities. This session will explore how firms can ensure AI helps, rather than hinders, efforts to create a diverse and inclusive industry.
1. Intro
- "The Invisible Ceiling" – how AI can create hidden barriers
- Quick poll: Who here is using AI-driven tools in recruitment, payroll, client onboarding, or risk assessment?
2. The Risks: When AI Reinforces the Status Quo
- AI in hiring and promotions - how algorithms can favour “typical” candidates over diverse talent
- AI in client selection - risk assessment tools that disadvantage certain groups
- AI in performance evaluations - biased algorithms reinforcing existing power structures
- Real-world examples: Amazon’s AI-driven hiring tool that disproportionately filtered out female candidates, and lenders whose algorithms disproportionately declined Black applicants
3. The Fix: How to Ensure AI Breaks Barriers, Not Builds Them
- The role of humans - AI can’t be left unchecked
- Setting clear ethical AI principles in your firm
- Training AI models on diverse, representative data
- Establishing regular bias audits to ensure AI-driven decisions remain fair (a minimal audit sketch follows this list)
- Transparency in AI decision-making - processes should be visible and accountable
- The role of leadership in holding AI (and your teams) accountable
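For firms wondering what a "regular bias audit" looks like in practice, below is a minimal sketch, assuming a hypothetical decision log with one record per candidate: a protected-group label and whether the AI-driven tool selected them. The function name, field names, and the four-fifths threshold mentioned in the comments are illustrative assumptions, not part of any specific tool or legal standard.

```python
# Minimal bias-audit sketch: compare selection rates across groups in a
# hypothetical decision log and compute the disparate impact ratio.
from collections import defaultdict

def disparate_impact_ratio(records, group_key="group", outcome_key="selected"):
    """Return the selection rate per group and the ratio of the lowest
    rate to the highest (1.0 means perfectly equal selection rates)."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for rec in records:
        totals[rec[group_key]] += 1
        if rec[outcome_key]:
            selected[rec[group_key]] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

# Example audit run on hypothetical hiring-tool decisions.
decisions = [
    {"group": "A", "selected": True},
    {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True},
    {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]
rates, ratio = disparate_impact_ratio(decisions)
print(rates)   # selection rate per group
print(ratio)   # flag for human review if well below ~0.8 (four-fifths heuristic)
```

In practice an audit like this would run on real decision logs at a regular cadence, cover multiple protected characteristics, and route flagged ratios to a human reviewer rather than triggering automated action.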
4. Wrap-Up & Q&A
- Summary of key points
- Practical takeaways for firms using AI in hiring, risk assessment, and operations
- Open Q&A