1
Developing Responsible
Generative AI Solutions
HOW?
Actionable Recommendations
Across 9 Framework Dimensions
https://lfaidata.foundation/blog/2025/03/19/responsible-generative-ai-framework-rgaf-version-0-9-now-available/
By LF AI & DataMarch 19, 2025
Haluk Demirkan
ISSIP Lunch & Learn, September 24, 2025
The Responsible AI Workstream (part of the
Generative AI Commons) has completed the
Responsible Generative AI Framework (RGAF)
2
3
Why Should We Care About Responsible
GenAI?
• Generative AI can accelerate
innovation - but also create
harm.
• Recent failures cost
companies billions &
damaged trust.
• How do we build AI that is
safe, ethical and sustainable?
• Let’s explore the Responsible
GenAI Framework (RGAF).
💡 Audience Question: What could go wrong if we ignore responsible AI practices?
4
What? and How?
• What is RGAF?
– The Responsible AI Workstream (part of the Generative AI Commons) has
completed the Responsible Generative AI Framework (RGAF) on March 19,
2025.
– The purpose of the Responsible GenAI Framework (RGAF) promotes that GenAI
is designed, applied and used in ways that are ethical, fair, and beneficial to
society at large in a proven and universally accepted manner. It also involves
adhering to regulations and guidelines, engaging in ethical decision-making
processes, and ensuring that AI systems benefit humanity as a whole
• How?
– Partnered with 25 global experts; completed extensive research about FAILED
Gen AI Solutions; reviewed AI frameworks from ~20 countries (including EU AI
Act Framework, NIST AI Framework, Singapore AI Strategy and China’s AI
Development Plan, and 13 companies (such as Chat GPT, Amazon, Google,
Facebook, Microsoft, IBM, Deloitte and others); and worked almost 12 months
to develop this framework.
In the following section, I will present an overview of 9 RGAI Framework
Dimensions, and example Tools & Techniques.
5
6
Human-Centered &
Aligned
• Human-centered AI learns from human
input, interaction, collaboration where most
algorithms come from human-based systems
like brain-theory and social sciences.
• Implement continuous alignment metrics
with human values.
• Understand
• Respect
Goal: Ensure AI aligns with human values and needs.
How? Examples:
1. Value Sensitive Design (VSD) → Methodology for embedding human values into system design.
2. Participatory Design Workshops → Involve diverse stakeholders early in design.
3. User Feedback Loops & A/B Testing → Continuous alignment with human expectations.
4. RLHF (Reinforcement Learning with Human Feedback): Tools like TRL (Transformer Reinforcement
Learning) (from Hugging Face) for alignment training.
5. Label Studio or Snorkel AI → For creating human-labeled datasets.
6. Prolific / Amazon MTurk → To gather diverse human feedback responsibly.
7
• Ensure systems support
diverse users and abilities.
• Invest in open-source AI
tools to lower entry barriers.
• Expand digital infrastructure
for equitable access.
💡 Audience Question: Who should be accountable when AI makes mistakes?
Goal: Make AI usable and beneficial to all.
How? Examples:
1. W3C Accessibility Guidelines (WCAG) → Standards for accessible interfaces.
2. Inclusive Datasets → Use multilingual/multimodal datasets representing diverse groups.
3. Open-Source AI Platforms (e.g., Hugging Face Transformers) → Lower cost & increase
adoption.
8
• Reliable AI systems consistently
perform their intended
functions accurately, safely, and
dependably, even amidst
challenging or unpredictable
environments.
• Build safeguards against
harmful outputs.
• Continuously test against real-
world scenarios & benchmarks.
Goal: Ensure AI works under diverse real-world
How? Examples:
1. Adversarial Training → Improve model resilience against attacks.
2. MLCommons Benchmarks → Standardized safety and reliability benchmarking.
3. Red-Teaming & Stress Testing → Test model behavior under malicious inputs.
4. IBM AI Fairness 360 (AIF360)
5. Microsoft Responsible AI Toolbox
6. NIST AI Risk Management Framework (AI RMF)
9
• AI systems should be transparent in their
operations and decisions, providing
explanations that are understandable to
users and stakeholders.
• Publish model cards, data cards,
documentation.
• Open source lifecycle where possible to
enhance trust.
💡 Audience Question: How do we balance innovation speed with responsibility?
Goal: Make AI decisions interpretable to humans.
Examples:
1. LIME (Local Interpretable Model-agnostic Explanations) → Explains single predictions.
2. SHAP (SHapley Additive Explanations) → Feature importance & global interpretability.
3. Model Cards & Data Cards → Documentation on purpose, limitations, and training data.
4. InterpretML: Microsoft’s library for interpretable ML.
5. Elicit / ExplainLikeI’m5 AI Tools: For generating human-readable explanations.
10
• Define clear accountability roles and
responsibilities.
• Accountability involves three major
items: 1) who is accountable, to
whom, 2) accountable for what, and
3) how to be accountable & rectify.
• Maintain audit trails and error
rectification mechanisms.
• Provide feedback loops for users to
report issues.
Goal: Establish responsibility and correction pathways.
How? Examples:
1. Model Documentation Frameworks (e.g., IBM AI FactSheets, Google Model Cards,
Amazon Service Cards).
2. Audit Trails & Lineage Tracking → Tools like MLflow for experiment/version tracking.
3. User Feedback Loops & Redress (Grievance) Mechanisms → User-facing reporting
channels.
11
• Security for AI: the
processes of ensuring the
security of data and
systems from various
threats, ensuring their
reliable and safe operation
.
• Encrypt sensitive data &
monitor adversarial
threats.
• Adopt privacy-by-design
principles in development.
Goal: Protect sensitive data and defend against misuse.
How? Examples:
1. Differential Privacy Libraries (TensorFlow Privacy, PyTorch Opacus) → Protect sensitive
data.
2. Adversarial Prompt Defenses → Input sanitization and prompt injection filters.
3. Federated Learning (TensorFlow Federated, PySyft, Flower, FedML) → Train while keeping
data decentralized.
4. Robustness Testing (Adversarial ML tools like CleverHans, Foolbox) → Ensure resilience.
• Privacy for AI: AI systems
(including intelligent agents
) should protect the privacy
of individuals and provide
means to mitigate the
ethical and legal
implications of collecting,
storing, and using personal
data.
• Apply differential privacy &
federated learning
techniques.
12
• Compliant AI systems adhere to
legal, ethical, and regulatory
standards.
• Controllable: defines the ability
to manage and direct AI
systems effectively, ensuring
that they behave as intended
and can be corrected or shut
down if necessary.
💡 Audience Question: Which dimension do you think is hardest to implement?
Goal: Ensure compliance with laws & ability to intervene.
How? Examples:
1. Compliance Cards & Automated Policy Engines → Map to GDPR, EU AI Act, etc.
2. Human Override Mechanisms (kill switches, configuration knobs).
3. Risk Assessment Tools (e.g., NIST AI Risk Management Framework evaluation).
4. NIST AI Risk Management Framework (RMF) → For structured governance.
5. OECD AI Principles → International responsible AI guidelines.
6. Ethics review boards / internal governance checklists → Organization-specific controls.
13
• Ethical AI entails the
development and use of AI
systems in ways that align with
moral principles and societal
values. Avoiding biases that can
lead to unfair treatment of
individuals or groups.
• Ensure diverse data and team
representation.
• Embed ethical review boards in
development lifecycle.
Goal: Ensure fairness, equity, and beneficence.
How? Examples:
1. AI Fairness 360 (IBM) → Bias detection and mitigation toolkit.
2. Fairlearn (Microsoft) → Auditing fairness across groups.
3. Ethics Checklists (e.g., Deon, Ethical OS) → Embed ethics in ML pipelines.
4. Google’s What-If Tool: Interactive bias and interpretability exploration.
14
• AI development and deployment should be
environmentally sustainable, minimizing
negative impacts on the environment.
• Optimize models for efficiency (PEFT, continual
learning).
• Run training on renewable-powered
infrastructure.
• Publish environmental footprint and offset plans.
Goal: Minimize AI’s carbon and energy footprint.
How? Examples:
1. CodeCarbon → Track carbon emissions of ML workloads.
2. Parameter-Efficient Fine-Tuning (PEFT) → Reduce compute for retraining.
3. Energy-Efficient Hardware (NVIDIA A100, Graphcore, cloud carbon-aware
schedulers).
15
Like Driving into the Fog...
HOW?
16
The Typical Gen
AI Deployment
Process with
Sample Tools
17
we need to
expect the unexpected
18
how to assess the risk of AI systems
https://aws.amazon.com/blogs/machine-learning/learn-how-to-assess-risk-of-ai-systems/
https://www.nist.gov/itl/ai-risk-management-framework
• To realize the full potential of generative AI, however, it’s important to carefully
reflect on any potential risks.
• Establishing mechanisms to assess and manage risk is an important process for
AI practitioners (for example, ISO 42001, ISO 23894, and NIST RMF) and
legislation (such as EU AI Act).
According to the NIST Risk Management Framework (NIST RMF)
19
21
22
Responsible AI builds trust with customers, but also with regulators and partners
responsible practices make scaling easier and safer
it’s building on solid foundations instead of shaky ground
Trust
translates into adoption, contracts, and market share
prevents costly failures — because cleaning up after a reputational crisis or lawsuit
helps us comply with regulations like the EU AI Act and NIST standards
24
https://mlq.ai/media/quarterly_decks/v0.1_State_of_AI_in_Business_2025_Report.pdf
25
responsible AI isn’t a cost center
It’s like cybersecurity was 20
years ago — it’s becoming table
stakes.
Companies that embrace
responsibility will move faster,
safer, and with more trust.
Companies that don’t will spend
more time cleaning up crises
than innovating.
The bottom line is this:
Which side do we want to be on?”
27
My current research areas
1. Digital Twin
2. Development of Comprehensive Measurable Risk Assessment Process
1. Early Design Phase (inherent risk)
2. Pre-Launch Phase (residual risk)
3. Post-Launch Phase
3. Collaborative Intelligence = People + AI + Process
4. Developing Metrics & Measures to Evaluate LLM Solutions
1.Lexical Overlap Metrics
1.BLEU (Bilingual Evaluation Understudy)
2.ROUGE (Recall-Oriented Understudy for Gisting Evaluation)
2.Embedding Based Metrics
1.BERTScore (Bidirectional Encoder Representations from Transformers)
2.Word Mover’s Distance (WMD)
3.Task-Specific or Learned Metrics
1.COMET (Supervised metric trained on human-annotated translation quality scores)
2.BLEURT (Fine-tuned BERT model on a regression task to predict human judgments
4.LLM-in-the-Loop Metrics (LLM-as-a-Judge (e.g., GPT-4/Claude scoring))
5.Human-in-the-Loop (Human Preference Score)
And many more…
28
Know where to find the information
and how to use it - That's the secret
of success.
Albert Einstein
Thank you!
haluk@uw.edu
 http://www.linkedin.com/in/halukdemirkan

20250924 20250924 Haluk_Demirkan Responsible_GenAI_Framework ISSIPPresentation_09242025.pptx

  • 1.
    1 Developing Responsible Generative AISolutions HOW? Actionable Recommendations Across 9 Framework Dimensions https://lfaidata.foundation/blog/2025/03/19/responsible-generative-ai-framework-rgaf-version-0-9-now-available/ By LF AI & DataMarch 19, 2025 Haluk Demirkan ISSIP Lunch & Learn, September 24, 2025 The Responsible AI Workstream (part of the Generative AI Commons) has completed the Responsible Generative AI Framework (RGAF)
  • 2.
  • 3.
    3 Why Should WeCare About Responsible GenAI? • Generative AI can accelerate innovation - but also create harm. • Recent failures cost companies billions & damaged trust. • How do we build AI that is safe, ethical and sustainable? • Let’s explore the Responsible GenAI Framework (RGAF). 💡 Audience Question: What could go wrong if we ignore responsible AI practices?
  • 4.
    4 What? and How? •What is RGAF? – The Responsible AI Workstream (part of the Generative AI Commons) has completed the Responsible Generative AI Framework (RGAF) on March 19, 2025. – The purpose of the Responsible GenAI Framework (RGAF) promotes that GenAI is designed, applied and used in ways that are ethical, fair, and beneficial to society at large in a proven and universally accepted manner. It also involves adhering to regulations and guidelines, engaging in ethical decision-making processes, and ensuring that AI systems benefit humanity as a whole • How? – Partnered with 25 global experts; completed extensive research about FAILED Gen AI Solutions; reviewed AI frameworks from ~20 countries (including EU AI Act Framework, NIST AI Framework, Singapore AI Strategy and China’s AI Development Plan, and 13 companies (such as Chat GPT, Amazon, Google, Facebook, Microsoft, IBM, Deloitte and others); and worked almost 12 months to develop this framework. In the following section, I will present an overview of 9 RGAI Framework Dimensions, and example Tools & Techniques.
  • 5.
  • 6.
    6 Human-Centered & Aligned • Human-centeredAI learns from human input, interaction, collaboration where most algorithms come from human-based systems like brain-theory and social sciences. • Implement continuous alignment metrics with human values. • Understand • Respect Goal: Ensure AI aligns with human values and needs. How? Examples: 1. Value Sensitive Design (VSD) → Methodology for embedding human values into system design. 2. Participatory Design Workshops → Involve diverse stakeholders early in design. 3. User Feedback Loops & A/B Testing → Continuous alignment with human expectations. 4. RLHF (Reinforcement Learning with Human Feedback): Tools like TRL (Transformer Reinforcement Learning) (from Hugging Face) for alignment training. 5. Label Studio or Snorkel AI → For creating human-labeled datasets. 6. Prolific / Amazon MTurk → To gather diverse human feedback responsibly.
  • 7.
    7 • Ensure systemssupport diverse users and abilities. • Invest in open-source AI tools to lower entry barriers. • Expand digital infrastructure for equitable access. 💡 Audience Question: Who should be accountable when AI makes mistakes? Goal: Make AI usable and beneficial to all. How? Examples: 1. W3C Accessibility Guidelines (WCAG) → Standards for accessible interfaces. 2. Inclusive Datasets → Use multilingual/multimodal datasets representing diverse groups. 3. Open-Source AI Platforms (e.g., Hugging Face Transformers) → Lower cost & increase adoption.
  • 8.
    8 • Reliable AIsystems consistently perform their intended functions accurately, safely, and dependably, even amidst challenging or unpredictable environments. • Build safeguards against harmful outputs. • Continuously test against real- world scenarios & benchmarks. Goal: Ensure AI works under diverse real-world How? Examples: 1. Adversarial Training → Improve model resilience against attacks. 2. MLCommons Benchmarks → Standardized safety and reliability benchmarking. 3. Red-Teaming & Stress Testing → Test model behavior under malicious inputs. 4. IBM AI Fairness 360 (AIF360) 5. Microsoft Responsible AI Toolbox 6. NIST AI Risk Management Framework (AI RMF)
  • 9.
    9 • AI systemsshould be transparent in their operations and decisions, providing explanations that are understandable to users and stakeholders. • Publish model cards, data cards, documentation. • Open source lifecycle where possible to enhance trust. 💡 Audience Question: How do we balance innovation speed with responsibility? Goal: Make AI decisions interpretable to humans. Examples: 1. LIME (Local Interpretable Model-agnostic Explanations) → Explains single predictions. 2. SHAP (SHapley Additive Explanations) → Feature importance & global interpretability. 3. Model Cards & Data Cards → Documentation on purpose, limitations, and training data. 4. InterpretML: Microsoft’s library for interpretable ML. 5. Elicit / ExplainLikeI’m5 AI Tools: For generating human-readable explanations.
  • 10.
    10 • Define clearaccountability roles and responsibilities. • Accountability involves three major items: 1) who is accountable, to whom, 2) accountable for what, and 3) how to be accountable & rectify. • Maintain audit trails and error rectification mechanisms. • Provide feedback loops for users to report issues. Goal: Establish responsibility and correction pathways. How? Examples: 1. Model Documentation Frameworks (e.g., IBM AI FactSheets, Google Model Cards, Amazon Service Cards). 2. Audit Trails & Lineage Tracking → Tools like MLflow for experiment/version tracking. 3. User Feedback Loops & Redress (Grievance) Mechanisms → User-facing reporting channels.
  • 11.
    11 • Security forAI: the processes of ensuring the security of data and systems from various threats, ensuring their reliable and safe operation . • Encrypt sensitive data & monitor adversarial threats. • Adopt privacy-by-design principles in development. Goal: Protect sensitive data and defend against misuse. How? Examples: 1. Differential Privacy Libraries (TensorFlow Privacy, PyTorch Opacus) → Protect sensitive data. 2. Adversarial Prompt Defenses → Input sanitization and prompt injection filters. 3. Federated Learning (TensorFlow Federated, PySyft, Flower, FedML) → Train while keeping data decentralized. 4. Robustness Testing (Adversarial ML tools like CleverHans, Foolbox) → Ensure resilience. • Privacy for AI: AI systems (including intelligent agents ) should protect the privacy of individuals and provide means to mitigate the ethical and legal implications of collecting, storing, and using personal data. • Apply differential privacy & federated learning techniques.
  • 12.
    12 • Compliant AIsystems adhere to legal, ethical, and regulatory standards. • Controllable: defines the ability to manage and direct AI systems effectively, ensuring that they behave as intended and can be corrected or shut down if necessary. 💡 Audience Question: Which dimension do you think is hardest to implement? Goal: Ensure compliance with laws & ability to intervene. How? Examples: 1. Compliance Cards & Automated Policy Engines → Map to GDPR, EU AI Act, etc. 2. Human Override Mechanisms (kill switches, configuration knobs). 3. Risk Assessment Tools (e.g., NIST AI Risk Management Framework evaluation). 4. NIST AI Risk Management Framework (RMF) → For structured governance. 5. OECD AI Principles → International responsible AI guidelines. 6. Ethics review boards / internal governance checklists → Organization-specific controls.
  • 13.
    13 • Ethical AIentails the development and use of AI systems in ways that align with moral principles and societal values. Avoiding biases that can lead to unfair treatment of individuals or groups. • Ensure diverse data and team representation. • Embed ethical review boards in development lifecycle. Goal: Ensure fairness, equity, and beneficence. How? Examples: 1. AI Fairness 360 (IBM) → Bias detection and mitigation toolkit. 2. Fairlearn (Microsoft) → Auditing fairness across groups. 3. Ethics Checklists (e.g., Deon, Ethical OS) → Embed ethics in ML pipelines. 4. Google’s What-If Tool: Interactive bias and interpretability exploration.
  • 14.
    14 • AI developmentand deployment should be environmentally sustainable, minimizing negative impacts on the environment. • Optimize models for efficiency (PEFT, continual learning). • Run training on renewable-powered infrastructure. • Publish environmental footprint and offset plans. Goal: Minimize AI’s carbon and energy footprint. How? Examples: 1. CodeCarbon → Track carbon emissions of ML workloads. 2. Parameter-Efficient Fine-Tuning (PEFT) → Reduce compute for retraining. 3. Energy-Efficient Hardware (NVIDIA A100, Graphcore, cloud carbon-aware schedulers).
  • 15.
    15 Like Driving intothe Fog... HOW?
  • 16.
    16 The Typical Gen AIDeployment Process with Sample Tools
  • 17.
    17 we need to expectthe unexpected
  • 18.
    18 how to assessthe risk of AI systems https://aws.amazon.com/blogs/machine-learning/learn-how-to-assess-risk-of-ai-systems/ https://www.nist.gov/itl/ai-risk-management-framework • To realize the full potential of generative AI, however, it’s important to carefully reflect on any potential risks. • Establishing mechanisms to assess and manage risk is an important process for AI practitioners (for example, ISO 42001, ISO 23894, and NIST RMF) and legislation (such as EU AI Act). According to the NIST Risk Management Framework (NIST RMF)
  • 19.
  • 20.
  • 21.
    22 Responsible AI buildstrust with customers, but also with regulators and partners responsible practices make scaling easier and safer it’s building on solid foundations instead of shaky ground Trust translates into adoption, contracts, and market share prevents costly failures — because cleaning up after a reputational crisis or lawsuit helps us comply with regulations like the EU AI Act and NIST standards
  • 22.
  • 23.
    25 responsible AI isn’ta cost center It’s like cybersecurity was 20 years ago — it’s becoming table stakes. Companies that embrace responsibility will move faster, safer, and with more trust. Companies that don’t will spend more time cleaning up crises than innovating. The bottom line is this: Which side do we want to be on?”
  • 24.
    27 My current researchareas 1. Digital Twin 2. Development of Comprehensive Measurable Risk Assessment Process 1. Early Design Phase (inherent risk) 2. Pre-Launch Phase (residual risk) 3. Post-Launch Phase 3. Collaborative Intelligence = People + AI + Process 4. Developing Metrics & Measures to Evaluate LLM Solutions 1.Lexical Overlap Metrics 1.BLEU (Bilingual Evaluation Understudy) 2.ROUGE (Recall-Oriented Understudy for Gisting Evaluation) 2.Embedding Based Metrics 1.BERTScore (Bidirectional Encoder Representations from Transformers) 2.Word Mover’s Distance (WMD) 3.Task-Specific or Learned Metrics 1.COMET (Supervised metric trained on human-annotated translation quality scores) 2.BLEURT (Fine-tuned BERT model on a regression task to predict human judgments 4.LLM-in-the-Loop Metrics (LLM-as-a-Judge (e.g., GPT-4/Claude scoring)) 5.Human-in-the-Loop (Human Preference Score) And many more…
  • 25.
    28 Know where tofind the information and how to use it - That's the secret of success. Albert Einstein Thank you! [email protected]  http://www.linkedin.com/in/halukdemirkan

Editor's Notes

  • #1 Welcome to this session on Responsible GenAI. Today we’ll cover why responsibility matters and dive into 9 actionable dimensions. --- 20250924 Haluk_Demirkan Responsible_GenAI_Framework Title: Responsible Generative AI Framework Date/Time: Wednesday September 24, 12noon ET/9am PT/18:00 CET Speaker: Haluk Demirkan (https://www.linkedin.com/in/halukdemirkan/) ISSIP Series: Lunch & Learn with ISSIP Ambassadors Moderator: Christine Ouyang (https://www.linkedin.com/in/christine-ouyang/) Description: As a member on the Technical Steering Committee for the Responsible AI Workstream under the Generative AI Commons at the Linux Foundation for Data & AI, Professor Haluk Demirkan led the publication of the Responsible Generative AI Framework (RGAF), released on March 19, 2025. He will give an overview of RGAF which is designed to guide implementers and consumers of open-source generative AI projects through the complexities of responsible AI. See Paper: https://lfaidata.foundation/blog/2025/03/19/responsible-generative-ai-framework-rgaf-version-0-9-now-available/ See Event: https://issip.org/event/lunch-and-learn-responsible-generative-ai-framework/ Event links: Christine_Ouyang_Post: https://www.linkedin.com/posts/christine-ouyang_lunch-and-learn-responsible-generative-activity-7376645757304721411-Z0Bs Michele_Carroll_Post: https://www.linkedin.com/feed/update/urn:li:activity:7375910933669298176 LinkedIn Company: https://www.linkedin.com/posts/international-society-of-service-innovation-professionals-issip-_lunch-and-learn-responsible-generative-activity-7375908828737048577-FfPB LinkedIn Group: https://www.linkedin.com/posts/international-society-of-service-innovation-professionals-issip-_lunch-and-learn-responsible-generative-activity-7374120193175990272-uj4N Registration: https://docs.google.com/forms/d/e/1FAIpQLSeqDPUjotROB14vLGQjVOBScT2ORvfD8PPjHFM5GphsERNGQg/viewform?usp=header Recording: TBD Slides: TBD Summary: TBD ---
  • #2 Welcome everyone. Today, I’ll share a few cases where GenAI or LLM solutions failed in the media. These failures highlight why responsible AI development is critical.
  • #3 Hook the audience with recent failures, ask them to reflect on why trust is hard to build and easy to lose.
  • #6 This slide introduces the 'Human-Centered & Aligned' dimension. Explain why it matters, highlight what to develop and how, and link to practical examples.
  • #7 This slide introduces the 'Accessible & Inclusive' dimension. Explain why it matters, highlight what to develop and how, and link to practical examples.
  • #8 This slide introduces the 'Robust, Reliable & Safe' dimension. Explain why it matters, highlight what to develop and how, and link to practical examples.
  • #9 This slide introduces the 'Transparent & Explainable' dimension. Explain why it matters, highlight what to develop and how, and link to practical examples.
  • #10 This slide introduces the 'Accountable & Rectifiable' dimension. Explain why it matters, highlight what to develop and how, and link to practical examples.
  • #11 This slide introduces the 'Private & Secure' dimension. Explain why it matters, highlight what to develop and how, and link to practical examples.
  • #12 This slide introduces the 'Compliant & Controllable' dimension. Explain why it matters, highlight what to develop and how, and link to practical examples.
  • #13 This slide introduces the 'Ethical & Fair' dimension. Explain why it matters, highlight what to develop and how, and link to practical examples.
  • #14 This slide introduces the 'Environmentally Sustainable' dimension. Explain why it matters, highlight what to develop and how, and link to practical examples.
  • #20 Summarize the importance of the RGAF. Invite participants to share reflections: Which dimensions resonate most with their work? These cases teach us that deploying AI without responsibility can cause harm, misinformation, reputational loss, and even financial damage. Responsible AI means building safeguards, verifying accuracy, and ensuring trustworthiness—so that innovation is safe and sustainable. Script for Delivery “I know some of you may be thinking — all of this sounds good, but isn’t it just going to slow us down and cost more? Let’s step back. Why should we invest in responsible AI?” (gesture to the ✅ side of the slide) “First, the benefits are clear. Responsible AI builds trust — not just with customers, but also with regulators and partners. That trust translates into adoption, contracts, and market share. It also prevents costly failures — because cleaning up after a reputational crisis or lawsuit is much more expensive than preventing one. It helps us comply with regulations like the EU AI Act and NIST standards, which aren’t optional. And long term, responsible practices make scaling easier and safer — it’s like building on solid foundations instead of shaky ground.” (pause, then point to ⚠️ side of the slide) “Now imagine the opposite. If we skip responsibility: we face penalties, bans, or even product recalls. We risk losing customer trust, and that’s nearly impossible to rebuild. Fixing AI systems after they’ve gone wrong is always more expensive than doing it right upfront. Plus, unsafe AI opens the door to bias, deepfakes, and security vulnerabilities — things that can do real harm to people, and to our reputation. And most importantly, adoption will slow down, because society pushes back on unsafe AI.” (pause, then bring it home) “The bottom line is this: responsible AI isn’t a cost center. It’s like cybersecurity was 20 years ago — it’s becoming table stakes. Companies that embrace responsibility will move faster, safer, and with more trust. Companies that don’t will spend more time cleaning up crises than innovating. Which side do we want to be on?”
  • #21 “I know some of you may be thinking — all of this sounds good, but isn’t it just going to slow us down and cost more? Let’s step back. Why should we invest in responsible AI?”
  • #26 Deliver this as your final message. Pause after reading the quote to let it sink in. Explain the bridge metaphor: Responsible AI is the bridge that connects technological capability with societal trust. Encourage the audience to be the builders of that bridge in their own projects. “As we’ve seen, AI is powerful — but raw power alone isn’t enough. The real question isn’t just what AI can do, but what it should do. Responsible AI is the bridge between innovation and trust — the foundation that lets society embrace what we build. (pause, look around the room) My challenge to you is this: don’t just be AI developers. Be bridge builders. Make sure the systems you create don’t just work — but work responsibly, for people, for society, and for the future.”