AI Risk and Cybersecurity Course

The AI Risk and Cybersecurity Course, taught by Taimur Ijlal, focuses on AI governance, cybersecurity risks, and the implementation of security controls throughout the machine learning lifecycle, targeting professionals in risk management, cybersecurity, and AI. No prior technical knowledge is required, and participants will create a threat model for an AI-based application as a practical project. Key learning objectives include understanding AI and ML risks, building governance frameworks, and utilizing tools like ChatGPT to enhance security processes.
1. Course Overview

 Instructor: Taimur Ijlal (Cloud Security Executive, Speaker, and Trainer with 20+ years of experience)

 Focus Areas:

o AI governance and cybersecurity.

o Securing AI systems, which is often overlooked.

 Target Audience:

o Risk management professionals.

o Cybersecurity experts.

o AI professionals (data scientists, ML engineers).

o Anyone interested in AI risks.

 No Prerequisites Required:

o No prior AI, ML, or programming knowledge needed.

 Project:

o Create a threat model for an AI-based application (e.g., self-driving vehicles).

2. Key Learning Objectives

 AI and ML Risks:

o Understand the risks introduced by AI and ML systems.

 AI Governance Framework:

o Learn how to build and implement governance structures for AI risk management.

 Cybersecurity Risks in AI:

o Identify and mitigate unique cybersecurity risks in AI systems.

 Security Controls in ML Lifecycle:

o Implement controls at each phase of the ML lifecycle.

 Use of ChatGPT in Cybersecurity:


o Enhance security processes with ChatGPT.

3. Topics Covered

A. Introduction to AI and Its Impact

 Growing Market:

o AI market expected to reach $126 billion by 2025.

 AI’s Influence on Society:

o Affects jobs, industries, and daily life.

 AI Adoption Risks:

o New and unique security risks due to rapid AI integration.

B. Machine Learning Overview

 Why ML Matters:

o ML is the most popular AI subfield.

o Most AI-related risks originate from ML models.

 Basic Concepts Covered:

o Supervised, unsupervised, and reinforcement learning.

o Model training and evaluation.

C. AI Governance and Risk Management

 Importance of AI Governance:

o Ensures responsible and ethical AI usage.

o Reduces bias, errors, and vulnerabilities.

 Creating an AI Governance Framework:

o Identify risks.

o Define policies and controls.

o Monitor and audit AI applications.

D. Cybersecurity Risks in AI

 Unique Risks of AI Systems:

o Data poisoning.

o Model inversion attacks.


o Adversarial attacks.

 Cybersecurity Framework for AI:

o Custom-built for AI systems.

o Protects against AI-specific threats.

E. Practical Application – Threat Modeling

 Objective:

o Apply course concepts to create a threat model for an AI application.

 Example:

o Self-driving vehicles.

o Risk and threat assessment.

o Documenting potential vulnerabilities and mitigation strategies.

4. Instructor’s Background

 Experience:

o 20+ years in InfoSec and IT risk management.

o Worked in the UK and UAE.

 Achievements:

o CISO of the Year, CISO Top 30, and CISO Top 50 awards.

o Published books on AI security and cloud computing (#1 new release on Amazon).

 YouTube Channel:

o Cloud Security Guy – covers AI risks, cloud security, and career advice.

5. Course Highlights

 Gartner's Insight:

o AI poses new trust, risk, and security challenges that traditional controls cannot address.
 AWS Generative AI Security Scoping Matrix:

o Recently added content on securing AWS-based generative AI applications.

 No Prior Technical Knowledge Required:

o Accessible to beginners and non-technical professionals.

1. Introduction to AI

 Definition of AI: Coined by John McCarthy at the Dartmouth Conference in 1956. Defined as the science and engineering of making intelligent machines.

 Purpose of AI: To perform tasks traditionally done by humans, such as speech and visual recognition, decision-making, and language processing.

 Importance:

o AI handles massive data volumes beyond human capacity.

o It is the foundation of future computer learning and complex decision-making.

2. Everyday Applications of AI

 Chatbots: Basic AI systems handling customer queries, routing to humans if unresolved.

 Recommendation Systems: Netflix uses machine learning to suggest content.

 Hate Speech Detection: Platforms like Facebook and Twitter rely on AI for content moderation.

 Voice Assistants: Alexa and Siri learn and improve through continuous interaction and feedback.

3. AI as the Fourth Industrial Revolution

 Previous Revolutions:

o 1st: Steam power transformed agriculture and manufacturing.

o 2nd: Automation and mass production through assembly lines.

o 3rd: Digital revolution with the rise of the internet and cloud computing.

 4th Industrial Revolution: AI automates decision-making and takes on complex tasks, marking a massive technological shift.

4. Factors Driving AI's Growth

 Increased Computing Power: Cloud computing enables large-scale data processing.

 Data Availability: Zettabytes of data and reduced storage costs allow AI to learn and improve.

5. Course Overview

 Key Learnings:

o AI and machine learning risks.

o Building an AI governance framework.

o Cybersecurity risks of AI systems.

o Implementing security controls in the ML lifecycle.

o Using ChatGPT to enhance security processes.

 Target Audience:

o Risk management professionals.

o Cybersecurity experts.

o AI professionals seeking risk management knowledge.

 Instructor: Taimur Ijlal, a cybersecurity leader with 20+ years of experience, published author, and award-winning instructor.

6. Course Requirements

 No prior AI or machine learning knowledge required.

 No programming or data science experience needed.

Machine Learning Overview:

1. Introduction to Machine Learning

 Machine learning (ML) is the engine behind AI, allowing machines to learn from data without explicit programming.

 It is a key AI subfield widely used across sectors such as government and infrastructure.

 Unlike traditional programming, ML uses data to create predictive models.
2. How Machine Learning Works

 Traditional Programming:

o Takes input → processes through a hard-coded algorithm → produces output.

 Machine Learning Process:

o Takes large volumes of training data.

o Uses an algorithm to create a model.

o The model makes predictions on new data.

o Based on prediction accuracy, the model is refined through retraining.

3. Key Difference Between ML and Traditional Programming

 In traditional programming:

o Input → Algorithm → Output.

 In ML:

o Input + Output → Algorithm → Model.

o The model uses previous data to make future predictions.

 ML models become more accurate over time through continuous learning (see the sketch after this list).

4. Course Overview

 Course Title: Artificial Intelligence Risk and Cyber Security.

 Instructor: Taimur Ijlal, a cybersecurity leader with over 20 years of experience.

 Topics Covered:

o AI and ML risks.

o Creating AI governance frameworks.

o Cybersecurity risks in AI systems.

o Implementing security controls in the ML lifecycle.

o Using ChatGPT for cybersecurity enhancement.

 Requirements:

o No prior AI, ML, or programming knowledge needed.


 Target Audience:

o Risk management professionals.

o Cybersecurity experts.

o AI professionals interested in risk management.

5. Course Details

 Duration: 1.5 hours.

 Rating: 4.5 out of 5 (4,971 ratings).

 Students Enrolled: 15,721.

 Last Updated: January 2025.

 Language: English with auto-captions.

6. Instructor Credentials

 Taimur Ijlal is an award-winning cybersecurity leader.

 Recognized with industry awards like CISO of the Year and Most Outstanding Security Team.

 Published in ISACA Journal and CIO Magazine Middle East.

 Author of books on AI Security and Cloud Computing.

7. Course Objectives

 Understand AI and ML risks.

 Build AI governance frameworks.

 Manage cybersecurity risks.

 Apply security controls throughout the ML lifecycle.

 Use ChatGPT for security process enhancement.

Need for AI Governance:

1. Why AI Needs Governance and Risk Assessment

 AI introduces new types of risks not present in traditional systems.

 AI is a disruptive technology requiring unique risk mitigation strategies.

2. Key AI Risks
 Biases in AI models: AI systems can reflect and amplify biases present in the training data.

 Security vulnerabilities: AI systems can be compromised, leading to malicious outcomes.

 Privacy risks:

o Facial recognition: Governments and companies use AI-powered facial recognition, raising privacy concerns.

o Deepfakes: AI-generated fake videos (e.g., realistic fakes of celebrities) blur the line between real and fake content.

 Autonomous machines:

o Fully automated systems without human intervention can raise concerns about loss of control.

 Job disruption:

o AI automation will replace mundane human jobs but also create new opportunities.

 Existential risks:

o Concerns about AI becoming sentient and potentially taking over (e.g., Terminator-like scenarios).

3. Real-World Examples of AI Risks

 Microsoft Tay Bot (2016):

o AI Twitter bot designed to learn through interaction.

o Users fed it racist and offensive content, making it generate inappropriate responses.

o Microsoft had to shut it down within 24 hours and issue an apology.

 Autonomous Weapons:

o AI-powered weapons can select and engage targets without human intervention.

o Risks of misuse and manipulation by bad actors.

o Over 30,000 AI and robotics researchers signed an open letter in 2015 opposing autonomous weapon deployment.
4. Importance of AI Governance and Regulations

 AI governance frameworks are essential to prevent misuse and manage risks.

 Lack of regulation could lead to an AI arms race with serious consequences.

 Implementing cybersecurity measures at all phases of AI and ML model lifecycles is critical.

5. Course Details

 Instructor: Taimur Ijlal, award-winning cybersecurity leader.

 Learning objectives:

o Understand AI and ML risks.

o Create AI governance frameworks.

o Identify and mitigate cybersecurity risks in AI systems.

o Implement security controls throughout the ML lifecycle.

o Use ChatGPT to enhance security processes.

 Target Audience:

o Risk management professionals.

o Cybersecurity experts.

o AI professionals seeking risk-related knowledge.

 No prior AI or programming knowledge required.

Learn how to govern and secure Artificial Intelligence and Machine Learning systems:

AI Prejudices and Biases

1. Introduction to AI Bias

 AI models can be biased against gender, age, or race if the training data is flawed.

 Bias occurs due to human prejudices (conscious or unconscious) that are reflected in the data used for model training.
 Biased AI models can lead to incorrect decisions, impacting health, wealth, and lives.

2. Real-life Example: Healthcare Algorithm Bias

 A healthcare algorithm used for 200 million people in the US was biased against Black people.

 The algorithm prioritized white individuals for medical care, even when Black individuals needed it more.

 The bias arose from improper data representation, denying necessary care to many Black patients.

3. Case Study: COMPAS Algorithm Bias

 COMPAS: Correctional Offender Management Profiling for Alternative Sanctions.

 Used in US courts to predict recidivism (likelihood of reoffending).

 Issue:

o People of color were nearly twice as likely to be labeled as high-risk compared to white individuals.

o White offenders with severe criminal histories were labeled low-risk, while people of color with minor offenses were labeled high-risk.

 Example:

o Brisha Borden: Committed minor petty theft → Rated as high-risk.

o Prater: Armed robbery convict → Rated as low-risk.

o Reality:

 Brisha did not reoffend.

 Prater committed another armed robbery and received an eight-year sentence.

 This highlights how AI perpetuates unfairness through biased training data.
4. Risks of Biased AI

 AI systems reflect real-world biases, leading to discriminatory outcomes.

 Flawed AI can:

o Wrongly classify individuals.

o Impact healthcare, criminal justice, and financial sectors.

o Reinforce societal inequalities.

5. Conclusion

 AI bias poses real-world risks.

 In the next section, the course will cover measures and controls to mitigate AI risks and prevent biases.

AI Regulations:

AI Governance and Regulations

1. Introduction to AI Governance

 This section covers how to create a governance framework for AI systems.

 A governance framework ensures AI systems are compliant, transparent, and accountable.

 Regulations and standards form the foundation of this framework.

2. Importance of AI Regulations

 AI regulations are necessary to prevent misuse, bias, and unfair decisions.

 Without regulations, companies may prioritize profits over ethical practices.

 Regulations enforce accountability and protect human rights by setting minimum standards.

 Governments use AI in decisions that significantly impact health, life, and justice, making regulation essential.
3. Current Regulatory Landscape

 As of now, no specific legislation exclusively regulates AI.

 Existing regulations such as data protection and consumer protection laws are applied to AI.

 Countries like China and the US have introduced strategies and principles for AI regulation.

 The White House issued 10 principles for AI regulation.

4. The EU AI Act: A Landmark Regulation

 The European Union (EU) proposed the first concrete AI regulation in April 2021.

 This regulation is expected to set global standards, similar to GDPR for data privacy.

 It uses a risk-based approach rather than a blanket ban on AI.

 AI systems are categorized by risk levels (see the sketch after this list):

o Unacceptable risk: Banned (e.g., AI used for social scoring).

o High risk: Subject to strict compliance and monitoring (e.g., healthcare, law enforcement).

o Low risk: Requires transparency but fewer controls.

 The regulation has extraterritorial scope, applying to AI systems outside the EU if their outputs are used in the EU.

5. High-Risk AI Systems and Conformity Assessment

 High-risk systems include:

o Transport systems: AI that impacts public safety.

o Educational systems: AI that affects access to education (e.g., exam scoring).

o Healthcare systems: AI used in surgeries or patient screening.

o Employment systems: AI for recruitment decisions.

o Law enforcement and migration systems: AI affecting legal status or credit scoring.
 Conformity assessment is required for high-risk systems.

o It is an audit-like evaluation of the AI system's technical documentation, quality, and compliance.

o Successful assessments lead to certification by the EU.

o Biometric systems and other sensitive applications require third-party, independent assessments.

AI Governance Framework:

AI Governance Framework – Key Points

1. Introduction to AI Governance Framework

 AI Regulation Lag: Comprehensive AI regulations are emerging but will take time to be fully enforceable.

 Proactive Approach: Companies cannot wait and must implement governance frameworks to mitigate AI risks.

2. Core Components of the AI Governance Framework

 AI Policy:

o Sets the tone for AI control within the organization.

o Defines general principles and control measures.

 AI Committee:

o Comprises members from data, technology, security, and risk management teams.

o Oversees AI solutions and makes go/no-go decisions on AI initiatives.

 AI Risk Management Framework:

o Identifies and mitigates AI-related risks, including:

 Cybersecurity threats

 Bias and integrity issues

 Principles of AI Trustworthiness:

o Ensures AI systems adhere to four trust principles:

 Integrity: Protection against algorithm or data tampering.
 Explainability: Transparency in AI decision-making (avoiding black-box models).

 Fairness: AI decisions should be unbiased and reflect diverse populations.

 Resilience: Robustness to attacks and recovery capabilities.

3. Reference Model for Governance

 Singapore’s Model AI Governance Framework (2019):

o Released as a template with implementable guidance.

o Provides practical steps for companies to adopt governance principles.

o Prioritizes human-centric AI over profit-driven motives.

o Principles: Explainability, transparency, fairness, and human-centricity.

4. Importance of Trust in AI Systems

 Trust Risks:

o Lack of trust can damage customer and company reputation.

o Potential for major fines or legal issues due to biases or lack of transparency.

 Mitigating Trust Risks:

o Integrity: Prevent data or model tampering.

o Explainability: Clear visibility into AI decision-making processes.

o Fairness: Avoid biases through diverse, representative training data.

o Resilience: Ensure AI models are technically robust and secure against attacks.

5. Conclusion

 The AI governance framework ensures AI systems are safe, reliable, and trustworthy.

 It is applicable across all sectors and AI technologies.


 Next, the course will cover technical security risks in AI applications.

NIST and AI Governance:

1. Introduction to NIST AI RMF

 NIST (National Institute of Standards and Technology) developed the AI RMF (AI Risk Management Framework) as a tool to help organizations design, develop, deploy, and manage AI systems responsibly.

 It provides a comprehensive guide for managing AI-related risks, covering a wide range of topics.

 Although voluntary, it is becoming a de facto industry standard due to its credibility and collaborative development process involving public and private sector input.

2. Characteristics of Trustworthy AI Systems

 Valid and Reliable: AI systems should be accurate, generalize to real-world scenarios, and meet performance requirements.

 Safe: Systems must not pose harm to users or the environment.

 Secure and Resilient: Designed to withstand cyberattacks and recover quickly from breaches.

 Accountable and Transparent: AI decisions should be explainable and auditable.

 Privacy-Enhanced: Systems should respect user privacy and comply with data protection laws.

 Fair and Unbiased: Prevents discriminatory or unjust outcomes.

 Interdependent Balance: These characteristics must be balanced. For example, a system could be secure but lack transparency.

3. Core Functions of the AI RMF

The framework consists of four core functions for AI risk management:

 Govern:

o Applied throughout the AI lifecycle.

o Establishes risk management policies and a culture of accountability.

 Map:
o Identifies and understands the AI system, its environment, and context-specific risks.

 Measure:

o Assesses AI system performance, trustworthiness, and the effectiveness of risk controls.

 Manage:

o Takes action to address identified risks.

o Involves making deployment decisions and ongoing monitoring.

4. NIST AI RMF Playbook

 NIST offers a Playbook as a companion tool.

 It provides specific actions and documentation for each core function.

 Users can filter actions by development or deployment phases and AI impact assessments.

 This allows for flexible, checklist-like usage of the framework (a toy sketch follows below).
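
A toy sketch of that checklist-like usage (my own illustration; the actual Playbook is a web resource, and the action strings below are invented for the example): actions are tagged by core function and lifecycle phase, then filtered.

```python
from dataclasses import dataclass

@dataclass
class Action:
    function: str   # Govern / Map / Measure / Manage
    phase: str      # development / deployment
    text: str

ACTIONS = [
    Action("Govern", "development", "Define AI risk management policy and ownership"),
    Action("Map", "development", "Document the system's context and intended use"),
    Action("Measure", "deployment", "Track model performance against trust criteria"),
    Action("Manage", "deployment", "Monitor incidents and re-assess risk after changes"),
]

def checklist(phase: str) -> list[str]:
    # Filter actions to the phase being worked on, Playbook-style.
    return [f"[{a.function}] {a.text}" for a in ACTIONS if a.phase == phase]

for item in checklist("deployment"):
    print(item)
```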

5. Importance of AI RMF

 AI RMF promotes trustworthiness, transparency, and security in AI systems.

 It is expected to be widely adopted globally as it evolves with AI advancements.

 The document serves as a living resource, updated with industry feedback and evolving standards.

Cybersecurity Risks:

1. Introduction to Cybersecurity Risks in AI

 AI introduces three types of security risks:

o Unintentional risks: Errors or flaws in AI systems that cause security vulnerabilities.

o Malicious usage: Cybercriminals exploiting AI for attacks.

o Compromised AI: AI systems themselves being hacked or manipulated.
 Traditional cybersecurity methods are insufficient for AI systems due to their unique vulnerabilities.

2. Unique AI Risks

 AI-specific vulnerabilities:

o AI systems rely heavily on data analysis rather than fixed rules, making them prone to data manipulation.

o Model behavior can be altered by corrupting the training data.

 Growing AI adoption expands the threat landscape:

o Lowers the cost of attacks.

o Introduces new threats (e.g., AI-powered DDoS attacks).

o Evolves existing threats.

3. Categories of AI Risks

 Risks not unique to AI:

o Underlying infrastructure security: Standard issues like data security, system configuration, and internet access.

o Data transportation security: Protecting data in transit.

o Lack of AI-specific knowledge: Security professionals often lack expertise in AI-specific risks.

 Risks unique to AI:

o Data poisoning: Injecting corrupted data into the training set to influence model decisions.

o Model poisoning: Injecting backdoors into pre-trained models.

o Backdoor exploitation: Hiding malicious code in commercial or open-source AI models.

4. Lifecycle of AI Security Risks

 Training Phase:

o Data poisoning: Attackers pollute the training data.

o Model contamination: Malicious backdoors in pre-trained models.

 Production Phase:
o Data breaches: Leakage of sensitive production data, which is often handled by data scientists without security training.

 Deployment Phase:

o Model evasion: Slight pixel modifications to inputs can fool image recognition models.

o Model extraction: Repeated querying of the model allows attackers to recreate its logic and steal IP (see the sketch after this list).

o Model compromise: Exploiting vulnerabilities in the software hosting the AI model.

5. Mitigation Strategies

 Implementing a cybersecurity framework:

o Secure the entire ML lifecycle.

o Apply traditional and AI-specific security controls.

 Awareness and training:

o Bridge the gap between cybersecurity and AI professionals.

 Regular testing and validation:

o Frequent model testing and audits to detect evasion attempts and backdoors.

AI-Based Risks:

1. Introduction and Context

 The module expands on previous discussions about AI-specific attacks.

 Unlike traditional cybersecurity attacks, AI-based attacks target the model, data, and decision-making processes.

2. Types of AI Attacks

a) Data Poisoning

 Definition: Involves corrupting the training data to influence the model's behavior.

 How it works:
o AI models often rely on open-source or commercially available data for training.

o Attackers inject malicious data during the training phase, compromising the model's accuracy.

o Example: For self-driving cars, poisoning the dataset could cause the car to misclassify stop signs, leading to accidents.

 Tactic: Attackers wait until the company gains trust in the data source before launching the attack (a label-flipping sketch follows below).

b) Model Evasion

 Definition: Submitting adversarial inputs (slightly modified data) to fool the model.

 How it works:

o Minor, unnoticeable changes (e.g., pixel-level modifications) cause the model to misclassify the input.

o Example: Slightly altering the color of a stop sign could reduce the model’s confidence, making it unrecognizable.

 Impact:

o Cars may ignore stop signs, leading to accidents.

o Critical systems could misinterpret inputs, causing incorrect decisions (a gradient-based evasion sketch follows below).

c) Membership Inference (Data Extraction)

 Definition: Attackers trick the model into revealing sensitive data from its training set.

 How it works:

o Models deployed with public APIs (e.g., in healthcare or banking) can be queried by attackers.

o Attackers reverse-engineer the model to extract training data.

o Example: By querying the model with specific names or identifiers, an attacker could confirm if a person is on a patient list or reveal confidential details.

 Impact:

o Leakage of PII (Personally Identifiable Information).


o Financial data (e.g., credit card information) could be exposed (a confidence-threshold sketch follows below).
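
A toy membership-inference sketch (my own illustration): an overfit model is more confident on examples it was trained on, so an attacker can guess training-set membership by thresholding the model's reported confidence. Random labels are used deliberately so the model can only memorize, which exaggerates the leak.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 10))
y = rng.integers(0, 2, size=400)           # random labels: memorization only

X_members, y_members = X[:200], y[:200]    # training set ("members")
X_outsiders = X[200:]                      # never seen by the model

# bootstrap=False makes every tree see every member, maximizing memorization.
model = RandomForestClassifier(n_estimators=100, bootstrap=False,
                               random_state=0).fit(X_members, y_members)

conf_members = model.predict_proba(X_members).max(axis=1)
conf_outsiders = model.predict_proba(X_outsiders).max(axis=1)

# Members draw near-certain predictions; outsiders mostly do not.
threshold = 0.9
print("members flagged:   ", (conf_members > threshold).mean())
print("outsiders flagged: ", (conf_outsiders > threshold).mean())
```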

3. Real-Life Example: Self-Driving Cars

 Self-driving vehicles rely on machine learning for decision-making.

 They recognize objects like stop signs, pedestrians, and road markings.

 Potential attack scenarios:

o Data poisoning: Compromised datasets could cause the car to ignore stop signs or misidentify objects.

o Model evasion: Slight pixel changes could trick the car into misclassifying images.

 Consequences:

o Malicious attacks could cause accidents or fatalities.

o Attackers could blackmail companies by threatening such manipulations.

4. Impact and Consequences

 These AI-specific risks are more dangerous than conventional attacks due to:

o Life-threatening outcomes (e.g., self-driving cars failing).

o Exposure of sensitive data.

o Financial and reputational damage to organizations.

Cybersecurity Framework:

Understanding AI-Specific Risks:

 No one-size-fits-all strategy: Security controls need to be tailored for AI and ML systems.

 AI-related regulations and laws: Ensure compliance with regulations like GDPR when dealing with AI systems.
 Inventory management: Maintain an inventory of all AI systems to track and protect them effectively.

Establishing Security Baseline:

 Create a baseline for AI and ML security based on risk assessments.

 Update existing security processes to include AI-specific controls.

 Include AI security checks in penetration testing to detect vulnerabilities (e.g., supply chain attacks).

Educating Cybersecurity and Data Science Teams:

 Raise awareness among cybersecurity professionals and data scientists about AI-related threats.

 Address the current knowledge gap in AI security practices.

Common AI Security Threats:

 Data Poisoning: Attackers inject corrupted data to mislead the model’s decision-making.

 Model Poisoning: Inserting malicious code/backdoors into pre-trained models (supply chain vulnerability).

 Data Leakage: Unauthorized access to data during training or fine-tuning phases.

 Model Evasion: Slight modifications to input data (e.g., pixel changes in images) can fool the model.

 Model Inference Attacks: Attackers reconstruct data or reverse-engineer the model through repeated queries.

Security Controls and Measures:

 Data integrity checks: Ensure data authenticity and monitor for unauthorized modifications (see the hashing sketch after this list).

 Use trusted models: Avoid using pre-trained models from unverified sources.

 Secure pipelines: Protect data pipelines against unauthorized access during training and inference.

 Adversarial testing: Introduce adversarial examples during testing to assess model robustness.

 Limit data exposure: Reduce the amount of information exposed by the model to prevent inference attacks.
Risk Assessment and Verification:

 Perform regular risk assessments on AI models.

 Verify the model’s integrity and the security of its components.

 Use vendor due diligence when sourcing third-party models or components.

 Sanitize output data to avoid accidental leakage of sensitive information.

Framework Implementation:

 Establish AI governance and cybersecurity frameworks based on risk levels.

 Develop risk evaluation spreadsheets with security controls.

 Ensure security reviews and audits are conducted regularly.

AI Security Testing:

1. Introduction to AI Security Testing

 Objective: Learn how to conduct security testing for AI systems to identify vulnerabilities before production deployment.

 Traditional vs. AI Security: While traditional security testing (penetration testing, vulnerability scanning) is standard, AI security testing is still evolving.

 MITRE ATT&CK Framework: A widely recognized framework used in traditional penetration testing.

 MITRE ATLAS:

o Adversarial Threat Landscape for Artificial-Intelligence Systems.

o Based on real-world AI attacks and academic research.

o Helps cybersecurity teams test AI systems with familiar tactics and techniques.

2. Security Testing Approaches

 Adversarial Samples:

o Introduce malicious inputs during regular security testing.


o Check for vulnerabilities like model evasion.

 Red Team/Blue Team Testing:

o Red Team: Simulates attackers, testing AI security controls through techniques like inference, evasion, and data poisoning.

o Blue Team: Monitors and defends against attacks, verifying the effectiveness of AI security measures.

3. Tools for AI Security Testing

 CounterFit (by Microsoft):

o Open-source tool for automating AI security testing.

o Assesses risks like model evasion and inference vulnerabilities.

o Suitable for penetration testing.

 Adversarial Robustness Toolbox (ART):

o Python library for machine learning security.

o Supports popular ML frameworks.

o Tests and defends against evasion, poisoning, extraction, and inference attacks (an illustrative snippet follows below).
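
A minimal sketch of using ART to craft evasion samples against a scikit-learn classifier, following the library's documented wrap-the-model-then-attack pattern (exact class and argument names may differ across ART versions; the data here is synthetic):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)).astype(np.float32)
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

# Wrap the trained model so ART attacks can drive it.
classifier = SklearnClassifier(model=model)

# Fast Gradient Method: one of the evasion attacks ART ships with.
attack = FastGradientMethod(estimator=classifier, eps=0.5)
X_adv = attack.generate(x=X)

print(f"accuracy on clean inputs:       {model.score(X, y):.2f}")
print(f"accuracy on adversarial inputs: {model.score(X_adv, y):.2f}")
```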
