Agile Mumbai 19-20 Sep 2025 | DEIB Meets Agility: Organizational Policies for Bias-free and Trusted AI by Kiran Gadad and Pranjal Pandya
This document focuses on how organizations can integrate Diversity, Equity, Inclusion and Belonging (DEIB) principles with agile practices to design policies that minimize bias, build trust, and ensure responsible adoption of AI.
1.
Disclaimer by Speaker
1. Views, thoughts, and opinions expressed in the session and presentation, collectively referred to as “the content”, belong solely to me in my personal capacity, and not necessarily to my employer / organization / client, etc.
2. “The content” is based on my learning and experience, as well as knowledge gathered through material available publicly on the internet.
3. I do not endorse or promote any organization, committee, product or person through this session.
4. I have agreed to the Code of Conduct, Privacy Policy and Speaker Engagement Policy as referred to in the Speaker Application Form submitted by me or on my behalf on the Agile Mumbai Conference website (www.agilemumbai.com).
Kiran Gadad and Pranjal Pandya (19-9-25)
Other Examples
• A tech giant's hiring algorithm, trained on past resumes, systematically penalized resumes containing keywords associated with gender.
• People of a particular race or colour were not recognized by camera detection systems.
• Bias in criminal justice risk assessments.
• Bias in loan approvals.
• Bias in AI image generators.
Product Manager (PM)
• Create and maintain datasets.
• PM should have data sense.
• Create apps to get structured data / quality data.
• Should drive thoughtful creation of apps.
• PM should be the voice in data governance.
• Validate data with personas.
• Consider the demographics of implementation.
9.
Categories of Biases Identified in AI:
• Algorithmic Bias
• Confirmation Bias
• Exclusion Bias
• Human/Cognitive Bias
10.
Training AI on DEIB: Four Key Approaches
• Organization
• Governance and Policy
• Tools
• Trainings
Establishing Governance and Policy Frameworks
Transparent Data Sharing
Regulating Data Labeling
Collaboration and Reusability
Risk Mitigation and Monitoring
13.
Utilizing Standards and Tools for Ethical AI
Standards for Ethical AI
Tools to Mitigate Bias
Auditing and Accountability
Global Best Practices
14.
Training Teams on DEI and AI Bias
Importance of DEI Training
Specialized AI Bias Courses
Continuous Ethical Learning
Building Inclusive AI Culture
15.
Instructions (memory game)
• Observe the picture for 30 seconds.
• No taking pictures. No snaps.
• Note down individually, without discussing, how many items you remember (2 mins).
• Discuss with your pair partner and see how many items were the same across your observations (2 mins).
16.
Link courtesy: https://c8.alamy.com/comp/TTY4G0/large-set-of-different-objects-illustration-TTY4G0.jpg
17.
Outcomes:
• Did any pair get the same number for all three boxes?
• Acknowledge that observation skills, critical thinking, etc. differ for each individual.
• Biases are embedded in each one of us.
• A helping hand and an extra eye will always help us deliver a better product.
18.
Scrum Master
• Encourage pairing across boundaries (to develop and review the code).
• Have a diverse team (building and giving inputs on technology).
• Provide diverse experiences to learn from.
• Promote awareness and education.
• Encourage team members to run prompts on code to check for AI bias (see the sketch after this list).
• Help teams raise flags if they see biases (psychological safety).
• Work with the PM to see whether the datasets cover all customer segments.
• Datasets should consider larger populations, not just accuracy.
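As one possible illustration of the "run prompts on code" point above, here is a minimal sketch; the prompt wording and the ask_llm helper are hypothetical placeholders for whichever coding assistant or LLM API the team already uses.

```python
# Hypothetical sketch: a reusable prompt a Scrum Master could encourage the team
# to run against changed code before check-in. The ask_llm() helper is a
# placeholder for whichever approved LLM / coding-assistant API the team uses.
BIAS_REVIEW_PROMPT = """\
Review the following code for fairness and bias risks:
1. Does it use protected attributes (gender, race, age, religion, disability)
   directly or via obvious proxies (e.g. postal code, first name)?
2. Could any hard-coded threshold, default, or label disadvantage a user group?
3. Are error messages, examples, and test data inclusive of all customer segments?
Reply with a list of findings and a severity (low/medium/high) for each.

CODE:
{code}
"""

def ask_llm(prompt: str) -> str:
    """Placeholder: call the team's approved LLM here and return its reply."""
    raise NotImplementedError("wire this up to your organization's approved model")

def bias_review(path: str) -> str:
    """Read a source file and ask the LLM to review it with the bias prompt."""
    with open(path, encoding="utf-8") as f:
        source = f.read()
    return ask_llm(BIAS_REVIEW_PROMPT.format(code=source))

# Example (runs once ask_llm is implemented):
# print(bias_review("loan_scoring.py"))
```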
AI Fairness Manifesto
Our Commitment
• We pledge to design, build, and deploy AI systems that are trustworthy, fair, transparent, and inclusive, serving all users equitably.
• Diversify your team and hear them out.
• Address bias in datasets.
• Use inclusive design and inclusive agile practices.
• By encouraging human-centered agile practices, we can create a robust framework for inclusive AI development.
Education & Advocacy
• Every team member is responsible for understanding bias risks and championing inclusive innovation.
• Inclusive innovation is an ongoing commitment. Educate yourself and keep educating.
• Your goal is to embed DEI into every sprint so that your product actively mitigates algorithmic bias across its lifecycle.
*Remember: we are all responsible for creating “Responsible AI”.
#3 Involving society in a big way… It is improving speed… the other side is to also take measurable steps… to avoid biases… the brand should not affect it.. product value should show up.. the team's efforts should be shown.. society's perception of John Deere… it should not go wrong…
DeepSeek: crack a joke about the Chinese president…
Arunachal, part of India?
#4 It is contagious; we can't control the spread unless it is understood.
It can travel across boundaries.
PM Modi's reference to the president signing with right-handed people.
Nodding means different things in the US and India.
Amazon… women, words to be removed.. gender-specific…
Blacks… ethnicity and colour..
Make it more generalized…
#6 According to Nasscom, 26% are engineers… around 15% are engineers.
#8 Create apps to get structured data / quality data (data classification and data entry would help us get clean data, so that other PMs can use the same data to get insights); this helps master data management.
Create and maintain datasets (at the product level); data should be owned by the PM.
PM should have data sense (understanding can come from capturing data points through journey mapping).
Thoughtful creation of apps (with an essence of UI/UX and a customer-centric approach) will help us get clean data and index/store it.
PM should be the voice in data governance.
Make use of AI to validate data with personas and the demographics of implementation (a minimal coverage-check sketch follows below).
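The following is a minimal sketch, not from the talk itself, of how a PM or team might check whether a dataset covers the customer segments its personas call for; the column names, expected segments, and 5% threshold are all illustrative assumptions.

```python
# Hypothetical sketch: check whether a product dataset covers the customer
# segments defined in the team's personas. Column names ("gender", "region",
# "age_band") and the 5% minimum-share threshold are illustrative assumptions.
import pandas as pd

# Segments the personas say the product must serve (illustrative).
EXPECTED_SEGMENTS = {
    "gender": {"female", "male", "non-binary"},
    "region": {"north", "south", "east", "west"},
    "age_band": {"18-25", "26-40", "41-60", "60+"},
}
MIN_SHARE = 0.05  # flag any expected segment below 5% of rows

def coverage_report(df: pd.DataFrame) -> list[str]:
    """Return human-readable warnings about missing or under-represented segments."""
    warnings = []
    for column, expected in EXPECTED_SEGMENTS.items():
        shares = df[column].value_counts(normalize=True)
        for segment in sorted(expected):
            share = shares.get(segment, 0.0)
            if share == 0.0:
                warnings.append(f"{column}={segment} is missing from the dataset")
            elif share < MIN_SHARE:
                warnings.append(f"{column}={segment} is only {share:.1%} of rows")
    return warnings

if __name__ == "__main__":
    sample = pd.DataFrame({
        "gender": ["female", "male", "male", "male", "female"],
        "region": ["north", "north", "south", "south", "north"],
        "age_band": ["26-40", "26-40", "18-25", "41-60", "26-40"],
    })
    for w in coverage_report(sample):
        print("WARNING:", w)
```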
#10 Organization: block certain features to avoid sharing company data, for example block public code sharing from GitHub Copilot, and instead promote feature and code reusability from open-source code via GitHub Copilot.
Avoid using AI in legal matters (this could create biased output where heightened accuracy is needed).
Governance & policies: make a common platform to share and audit the data models developed, regulate data labelling, promote reusability and team collaboration, implement risk mitigation components, and monitor the value generated by AI integration. For example, the Privacy and AI (NIST/RISK) by Design Assessment (PANDA) process.
Testing & tools: validate against ISO 25010 standards, the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems, the AI Ethics Guidelines Global Inventory by the AI Ethics Lab, and IBM AI Fairness 360.
Audit AI with architects, AI auditors and external tools.
Trainings: DEI training for teams, AI bias courses for the developer team.
#11
Organizational strategies play a crucial role in embedding Diversity, Equity, and Inclusion (DEI) principles into AI development. One key approach is to block certain features that may inadvertently share sensitive company data. For example, restricting public code sharing from platforms like GitHub Copilot helps prevent unintended data exposure. Instead, organizations can promote the use of vetted open-source code repositories to encourage safe and ethical code reuse. Another important strategy is to avoid deploying AI in legal matters where heightened accuracy is essential. AI systems may produce biased or misleading outputs in such contexts, potentially leading to serious consequences. By clearly defining boundaries for AI usage and promoting responsible development practices, organizations can ensure that DEI principles are upheld throughout the AI lifecycle.
#12
Effective governance and policy frameworks are essential for promoting DEI in AI systems. Organizations should create a common platform for sharing and auditing data models to ensure transparency and accountability. Regulating data labeling practices is another critical step, as it helps maintain consistency and fairness in training datasets. Promoting reusability and team collaboration fosters a culture of shared responsibility and ethical development. Implementing risk mitigation components and monitoring the value generated by AI integration are also vital. For instance, the Privacy and AI by Design Assessment (PANDA) process provides a structured approach to evaluating privacy risks and ensuring ethical AI deployment. By establishing robust governance mechanisms, organizations can systematically address DEI challenges and build trustworthy AI systems.
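To make the idea of a shared platform for auditing data models a little more concrete, here is a minimal sketch, offered as an assumption rather than a prescribed format, of the kind of record such a registry might keep for each model; every field name and value below is hypothetical.

```python
# Hypothetical sketch of an audit record for a shared data-model registry.
# All field names, values, and the registry itself are illustrative assumptions,
# not part of any specific governance standard named in this deck.
from dataclasses import dataclass
from datetime import date

@dataclass
class DataModelAuditRecord:
    model_name: str
    owner: str                          # accountable PM / team
    training_data_sources: list[str]    # where the data came from
    labeling_process: str               # how labels were produced and reviewed
    protected_attributes: list[str]     # attributes checked for bias
    known_risks: list[str]              # open risks and mitigations
    last_fairness_audit: date           # when bias metrics were last reviewed
    reusable: bool = True               # whether other teams may reuse it
    notes: str = ""

# Example entry another team could discover and review before reuse.
record = DataModelAuditRecord(
    model_name="loan-approval-scorer-v2",
    owner="lending-product-team",
    training_data_sources=["crm_export_2024", "public_census_sample"],
    labeling_process="dual-annotator labeling with weekly adjudication",
    protected_attributes=["gender", "age_band", "region"],
    known_risks=["under-representation of rural applicants"],
    last_fairness_audit=date(2025, 8, 15),
)
print(record)
```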
#13
Testing and tools are critical components in ensuring that AI systems align with DEI principles. Organizations should validate their AI systems against established standards such as ISO 25010, which focuses on software quality. The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems provides comprehensive guidelines for ethical AI development. Additionally, resources like the AI Ethics Guidelines Global Inventory by the AI Ethics Lab offer valuable insights into global best practices. Tools such as IBM AI Fairness 360 can be used to detect and mitigate bias in AI models. Regular audits conducted by architects, AI auditors, and external tools help maintain accountability and ensure that ethical standards are consistently applied. By leveraging these resources, organizations can build AI systems that are fair, transparent, and inclusive.
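As a hedged illustration of the tools mentioned above, the sketch below uses IBM AI Fairness 360 to compute two common group-fairness metrics on a toy hiring dataset; the toy data and the choice of gender as the protected attribute are assumptions made for the example.

```python
# Minimal sketch using IBM AI Fairness 360 (pip install aif360) to measure
# group fairness on a toy hiring dataset. The data, the "gender" protected
# attribute, and the 0/1 encoding are illustrative assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: hired 1 = favorable outcome; gender 1 = privileged, 0 = unprivileged.
df = pd.DataFrame({
    "gender":    [1, 1, 1, 1, 0, 0, 0, 0],
    "years_exp": [5, 3, 6, 2, 5, 3, 6, 2],
    "hired":     [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["gender"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"gender": 1}],
    unprivileged_groups=[{"gender": 0}],
)

# Disparate impact close to 1.0 and statistical parity difference close to 0.0
# suggest similar favorable-outcome rates across the two groups.
print("Disparate impact:             ", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

A common rule of thumb (the four-fifths rule) treats a disparate impact below 0.8 as a signal worth investigating, while a statistical parity difference near zero indicates similar favorable-outcome rates across groups.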
#14
Training is a foundational element in promoting DEI within AI development teams. Providing DEI training to all team members helps raise awareness about the importance of inclusivity and ethical practices. Specialized courses on AI biases are particularly beneficial for developers, as they equip them with the knowledge to identify and mitigate bias in algorithms and datasets. Encouraging continuous learning on ethical AI practices ensures that teams stay updated with the latest standards and methodologies. These training programs foster a culture of responsibility and empathy, enabling teams to build AI systems that reflect diverse perspectives and serve all users equitably. By investing in education and skill development, organizations can empower their workforce to champion DEI in every aspect of AI development.
#18 Policies… ensure DoR and DoD are followed… prompt engineering.. product testing by the SM…
#19 QR code… feedback on the sessions.. a kind of gamification..
With vibe coding increasing, and frameworks like FAAFO (Fast, Ambitious, with Autonomy, Fun, with Optionality), I believe it is going to bring more hallucination into systems.
You should be careful if two agents are interacting with each other and there is data exchange.
Educate your team on bias.
Have HR / DEI champions review your product.
Have prompts to check AI fairness and to remove bias before checking in the code.
We are all responsible for creating “Responsible AI”.