Harnessing AI and ML to Optimize Your Security Compliance Program
Table of Contents
• The Benefits and Risks of AI for InfoSec Teams
• What Do We Do Now? Applying This Knowledge to Evaluating AI Solutions
• About the Authors
• About AuditBoard
At AuditBoard, the Information Security and Compliance teams are stewards of our customers’ data. We care deeply about robust security and compliance practices and go to great lengths to incorporate them into our day-to-day jobs — down to the internal solutions we use to facilitate and drive our business. AuditBoard’s mission is to help our customers manage and reduce risk in their organizations; the last thing we want is to introduce technology that has a detrimental effect on this goal. To that end, this discussion aims to help drive awareness among first-line operations and risk and compliance professionals around the use of different AI solutions in the context of both their potential benefits and risks to the business.
While Artificial Intelligence has existed since its first program, Logic Theorist, was developed in 1956, AI applications truly went “mainstream” in 2022 with the release of ChatGPT, OpenAI’s chatbot that introduced everyday users to the ingenuity and benefits of generative AI. With the barriers to entry for technology providers effectively lowered, the impact has been widespread. Today, there are countless AI-powered solutions claiming they can help you streamline and optimize processes within your information security function. However, AI is a double-edged sword; for every potential benefit it can provide to your business, it also brings new risks.

In this guide, we will discuss the most compelling areas where InfoSec professionals can use AI to optimize and create efficiencies in their security and compliance programs. This discussion includes Machine Learning — a subset of AI that uses algorithms to train on data and make recommendations or predictions — and Generative AI — a subset of Machine Learning that generates content learned from the data it is trained on. We will also explore the risks these solutions pose to the business and best practices and considerations for mitigating those risks when evaluating these solutions.
PART ONE
The Benefits and Risks of AI for InfoSec Teams
There are several areas where businesses are successfully leveraging AI and ML in their security and compliance programs. Specifically, generative AI solutions and ML models have shown the most promise in driving efficiencies and optimizations for information security teams. We describe these in more detail below:
Generative AI solutions, where AI can make content completion suggestions when a user is typing in the interface, are being used to automate activities involving writing. The following are examples of areas where GenAI can be helpful in InfoSec programs:
• Compliance activities, such as mapping frameworks and writing control requirements.
• Assurance activities, such as writing and mapping questions, controls, and evidence.
• Risk management activities, such as mapping risks, threats, and controls.
• Governance activities, such as narrative and report writing.
• Issue/exception creation activities used across GRC functions.

Machine Learning models, which use algorithms trained on available data to emulate logical decision-making, are being used for activities involving intrusion detection and identifying malicious behavior. The following are examples of areas where these models can be helpful in InfoSec programs:
• Email, spam, and phishing monitoring
• Detections engineering
• Endpoint-based malware detection
• Conditional-based access policies
• Code analysis/suggestions
• Suggesting mapping between controls and framework requirements
• Uncovering duplicate issues in your GRC environment
• Audit or assessment evidence re-use
• Anomaly detection in controls testing

There is increasing evidence that leveraging AI and ML solutions in these capacities may yield positive results for InfoSec teams. AI has been widely referred to as a “force multiplier” and “technology catalyst” with the potential to help information security teams address challenges like rising cyberattacks, new and evolving compliance requirements, persistent skills gaps, and talent retention difficulties. A recent McKinsey and Company report identified 63 generative AI use cases spanning 16 business functions predicted to generate $2.6 trillion to $4.4 trillion in annual operational benefits across industries (notably, software engineering for corporate IT was one of four functions identified where AI would drive the most value).

Conversely, AI’s biggest threats to the business involve compromised intellectual property (see recent high-profile examples of data breaches here, here, and here) and false positives and negatives that can impact business operations (see recent high-profile examples here and here).
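To make the control-to-framework mapping idea concrete, here is a toy sketch that stands in for the far richer models commercial GenAI products use, relying only on simple text similarity from Python’s standard library. All control and requirement text, IDs, and the threshold are invented for illustration; this is not how any particular vendor’s product works.

```python
# Illustrative sketch: suggest mappings between internal controls and
# framework requirements via text similarity. The IDs, control text, and
# threshold below are hypothetical examples, not real framework content.
from difflib import SequenceMatcher

def suggest_mappings(controls, requirements, threshold=0.35):
    """Return (control_id, requirement_id, score) tuples above threshold."""
    suggestions = []
    for cid, ctext in controls.items():
        for rid, rtext in requirements.items():
            score = SequenceMatcher(None, ctext.lower(), rtext.lower()).ratio()
            if score >= threshold:
                suggestions.append((cid, rid, round(score, 2)))
    # Highest-confidence suggestions first, for human-in-the-loop review
    return sorted(suggestions, key=lambda s: -s[2])

controls = {"AC-01": "User access is reviewed quarterly by system owners"}
requirements = {
    "REQ-9": "Access rights shall be reviewed at regular intervals",
    "REQ-2": "Backups shall be encrypted at rest",
}
for cid, rid, score in suggest_mappings(controls, requirements):
    print(cid, rid, score)
```

Note that the output is a ranked list of suggestions, not final mappings: as the tradeoffs discussed later emphasize, a human reviewer should accept or reject each proposed relationship.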
WEIGHING THE BENEFITS AGAINST THE RISKS

If you are a CISO reading this, you are likely bombarded on a regular basis with vendors trying to sell you AI-powered solutions. This begs the question: how can InfoSec leaders responsibly evaluate these options? On the one hand, CISOs cannot afford to reject AI completely; there are far too many obvious benefits that AI can bring to your team and function.

What is the greatest risk of AI adoption to the organization?
• 33% Personal information disclosure (data privacy)
• 30% Improper decision-making due to poor output/data quality
• 19% Expanded threat surface due to new classes of technical vulnerabilities
• 12% Intellectual property leakage
• 6% Manipulation, bias, or other ethical misuse
Source: AuditBoard, September 2024 flash poll of information security and compliance professionals

28% of teams have utilized AI to enhance or improve their InfoSec program this year, while 17% have plans to do so in 2025.
Source: AuditBoard, September 2024 flash poll of 1,265 InfoSec compliance professionals

On the other hand, there are risks that come with adopting any technology solution: loss or disclosure of your business’s intellectual property, not meeting your ROI within the anticipated time, and the impacts of false positives and false negatives. In short, purchasing an AI solution always involves tradeoffs. To decide whether those tradeoffs make sense for your business, you must calculate all the potential benefits, risks, and “what can go wrongs” of integrating an AI solution into your security processes.

Therefore, today’s InfoSec teams must approach AI and ML responsibly, using them to drive efficiencies and optimizations in their programs without exposing the business to additional risks.

THIRD-PARTY RISK: INFORMATION LEAKAGE

An important caveat is that most companies are not planning to build their own AI models. Businesses are more likely to leverage free or commercial licenses for AI solutions like ChatGPT, or procure products with AI as an embedded feature. In many cases, businesses may inadvertently provide the AI provider with proprietary and sensitive information, effectively exposing the business to any risk that the vendor may leak, misuse, or abuse that data.

Take, for example, an InfoSec team that has identified email-based social engineering as a key threat and is considering purchasing an AI-enabled email, spam, and phishing monitoring solution to combat it. The organization will grant the solution access to the business’s email messages (which may include training) in exchange for better detection of malicious emails.

Example questions the team can ask during their risk assessment:
1. How many malicious emails does our business receive?
2. What is the impact of these malicious emails? How much damage in dollars does a compromised account amount to?
3. What is the efficacy of the AI solution we are considering? What is its batting average — how many false positives and negatives are there?
4. What risks/tradeoffs are we most concerned about? (e.g., spam is inconvenient but won’t end the world, whereas marking a legitimate email as spam could cause damage to the business)
5. What are the chances the vendor misuses or abuses the access granted to the solution or the underlying data (your company’s historical emails and the emails it receives in real time)?
6. What are the “what could go wrongs” (WCGWs)?
   • Marking a legitimate email as spam, which could harm our business (false positive)
   • Marking a dangerous email as safe (false negative)
   • Misuse/abuse/selling/leaking of data

Only after performing such a risk assessment can the InfoSec team make an informed decision — to purchase or not to purchase — based on their business’s risk appetite.
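The dollar questions in the email-filtering risk assessment can be reduced to simple expected-cost arithmetic. The sketch below is a back-of-the-envelope comparison only; every volume, error rate, and dollar figure is invented for illustration, and a real team would substitute its own estimates.

```python
# Back-of-the-envelope tradeoff math for the email-filtering example.
# All volumes, rates, and dollar figures are hypothetical placeholders.

def expected_annual_cost(malicious_volume, legit_volume,
                         fn_rate, fp_rate,
                         cost_per_compromise, cost_per_blocked_legit):
    """Expected yearly loss: missed threats plus wrongly blocked legit mail."""
    missed = malicious_volume * fn_rate * cost_per_compromise
    blocked = legit_volume * fp_rate * cost_per_blocked_legit
    return missed + blocked

# Status quo: no AI filter, so every malicious email gets through.
baseline = expected_annual_cost(2_000, 500_000, 1.0, 0.0, 1_500, 200)
# Candidate AI solution: catches 98% of threats, blocks 0.1% of legit mail.
with_ai = expected_annual_cost(2_000, 500_000, 0.02, 0.001, 1_500, 200)

print(f"baseline: ${baseline:,.0f}  with AI: ${with_ai:,.0f}")
# → baseline: $3,000,000  with AI: $160,000
```

Even this crude model surfaces the tradeoff the questions above probe: the false-positive term can dominate if legitimate mail volume is high, which is why question 4 matters as much as raw detection efficacy.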
AREAS TO IMPLEMENT AI AND ML INTO YOUR INFOSEC PROGRAM

In addition to AI-powered phishing, spam, and email monitoring, the following are other opportune areas for utilizing AI and ML to drive efficiencies in information security processes, presented in the context of their potential benefits and tradeoffs (i.e., risks to the business).

COMPLIANCE, ASSURANCE, RISK MANAGEMENT MAPPING

Benefits: Generative AI can automate mapping compliance, assurance, and risk management frameworks, requirements, risks, controls, etc. GenAI is also useful in expediting the process of drafting control requirements, descriptions, narratives, and reports, saving InfoSec teams time.

Tradeoffs:
• AI may not produce the best or most innovative conclusions.
• Users may not understand the technology’s limitations and may accept low-quality or inaccurate GenAI recommendations without scrutiny.
• (Both of these examples illustrate why you want a “human in the loop” to review and accept recommendations before they are accepted into the final work product.)

Example WCGWs:
• GenAI creates inappropriate relationships between requirements, which could result in an overstatement of how compliant your organization is. This may not be discovered until a third-party audit challenges the decisions made by the AI.
• The AI’s recommendations reflect what it sees as most common in the training data; consequently, you may get the most common answer rather than the best answer.

THIRD-PARTY ASSESSMENT REVIEWS

Benefits: AI reduces the time and effort spent responding to third-party questionnaires. Standardizing responses also helps eliminate human error and author bias while improving consistency and accuracy.

Tradeoffs:
• If someone asks the AI a unique question that has never been asked before or requires nuance (in a business, risk, or technical sense), the AI can’t exercise discernment in answering those questions.
• AI tries to give the best-sounding answer, which may not always equate to the most truthful answer.
• (Both of these examples illustrate the necessity of having a human in the loop to exercise discernment.)

Example WCGWs:
• If a competitor sends an RFP and the AI over-discloses sensitive information, they can learn everything about your security environment.
• If an AI-generated answer is accepted without proper review and amounts to a material misstatement, you risk contract termination, damages, and even being found guilty of fraud.

DETECTIONS ENGINEERING

Benefits: ML, leveraging pattern-based detections, can detect and block malicious traffic to a web application. AI can process much larger data sets faster and more thoroughly than a human reviewing for anomalies. AI is also better than human users at recognizing typical and unusual patterns in the data and finding statistical outliers that need to be investigated.

Tradeoffs:
• Supporting and maintaining massive logs of data describing activity requires substantial processing power. The cost of storage and computing is very high, although efforts are being made to lower costs.
• Models take time to train and get better over time as they learn. Therefore, you must invest adequate time to properly train and fine-tune them before they begin to generate value for you.

Example WCGWs:
• Blocking legitimate traffic to websites from customers.
• Misuse/abuse/selling/leaking of website traffic by the third-party vendor.
• Alert fatigue, which occurs when AI generates alerts that aren’t real issues (false positives), and those in charge of responding start tuning them out.
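The statistical-outlier idea behind detections engineering can be sketched with a basic z-score check: flag any observation far from the mean in standard-deviation terms. Production detection pipelines use far more sophisticated models; the traffic numbers and threshold below are invented for illustration.

```python
# Minimal sketch of statistical outlier detection: flag values whose
# z-score (distance from the mean in standard deviations) is too large.
from statistics import mean, stdev

def find_outliers(samples, z_threshold=3.0):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [(i, x) for i, x in enumerate(samples)
            if abs(x - mu) / sigma > z_threshold]

# Hypothetical hourly request counts to a web app; one hour spikes.
requests_per_hour = [1020, 980, 1005, 990, 1010, 9800, 1000, 995]
print(find_outliers(requests_per_hour, z_threshold=2.0))
# → [(5, 9800)]
```

The threshold choice is exactly the alert-fatigue tradeoff noted above: a lower threshold catches more genuine anomalies but generates more false positives for responders to triage.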
ENDPOINT-BASED MALWARE DETECTION

Benefits: An anomaly-based or behavior-based AI malware detection solution can be trained to identify and block malicious behavior designed to evade signature-based detection.

Tradeoffs:
• Both signature-based and anomaly-based solutions often need administrator access to the endpoint to function.
• A compromised malware solution has the potential to lock the entire machine.
• There are reliability and performance penalties to running this type of detection on an endpoint.

Example WCGWs:
• Overconfidence around this control can hurt you if you experience a 0-day, novel threat that the AI has never seen before.
• An overzealous or poorly designed malware system can slow down or introduce a reliability risk into the system it is monitoring.
• False positives (blocking a legitimate application) could cause a disruption to the business, such as locking employees out of machines.

CONDITIONAL-BASED ACCESS POLICIES

Benefits: ML and AI can be used to help identify cases of location change to help protect the business against hackers and threats. This makes it generally challenging for an individual to be on a network or connect to a system from two different locations in the world.

Tradeoffs:
• False positives, where the AI solution rejects access even if someone has the right credentials.
• Requires a properly staffed IT team with the training and capacity to deal with false positives.
• The difficulty of creating uniform patterns on which to base access decisions.

Example WCGWs:
• Misconfiguration of certain business requirements and exclusion groups (e.g., road warriors) may lead to an important employee getting locked out of their laptop ahead of an important meeting — leading to significant business impacts, such as loss of a deal.
• The downtime an employee experiences if their account is locked, and the negative impacts on employee morale and productivity.

AI-ASSISTED CODE ANALYSIS

Benefits: AI solutions can analyze code, which can be particularly helpful if misconfigurations exist. This enables junior and less skilled developers to be more efficient at triaging and remediating software vulnerabilities.

Tradeoffs:
• Requires someone to review and validate automated code testing manually.
• Time and effort vs. risk: if there is a large volume of findings, engineering teams may not have the time or capacity to review them.

Example WCGWs:
• An attacker could discover the flaw before you do.
• Overreliance or confidence in the quality of AI output without skilled human review; AI suggests a bad parameter that causes an outage.
• Time spent reviewing findings takes engineers away from higher-value activities, such as building products.
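The location-change signal behind conditional access is often called an “impossible travel” check: if two logins imply a physically implausible ground speed, flag the session. The sketch below is a toy version; the coordinates, timestamps, and speed threshold are illustrative, not a production policy.

```python
# Toy "impossible travel" check: flag a login pair whose implied speed
# exceeds what a commercial flight could cover. Illustrative values only.
from math import radians, sin, cos, asin, sqrt

def km_between(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine), in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(login_a, login_b, max_kmh=900):
    """Each login is (lat, lon, hours). True if implied speed > max_kmh."""
    dist = km_between(login_a[0], login_a[1], login_b[0], login_b[1])
    hours = abs(login_b[2] - login_a[2])
    if hours == 0:
        return dist > 0
    return dist / hours > max_kmh

la = (34.05, -118.24, 0.0)    # Los Angeles at T+0h
london = (51.51, -0.13, 2.0)  # London two hours later
print(impossible_travel(la, london))  # prints True (~8,750 km in 2 hours)
```

The road-warrior WCGW above maps directly onto this logic: a legitimate frequent flyer or a VPN egress change can trip the same rule, which is why exclusion groups and human-reviewable appeals matter.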
PART TWO
What Do We Do Now?
Applying This Knowledge to Evaluating AI Solutions
The overall attitude of InfoSec professionals toward AI is reasonably optimistic when its use is coupled with strong risk management practices.

Do you believe that, with a strong risk management foundation, AI can be responsibly leveraged to drive cybersecurity resilience in the organization?
• Yes: 90%
• No: 10%
Source: AuditBoard, September 2024 flash poll of 1,335 information security, compliance, and risk professionals

This is why, as businesses explore ways to integrate AI into their teams and processes, there is a strong incentive from boards and executive leadership to establish the proper guardrails — i.e., governance, controls, and processes — from the start. A June 2024 AuditBoard flash poll of 3,500 InfoSec, compliance, and risk professionals found that over 50% of those surveyed have implemented or are actively developing a dedicated strategy for managing AI-related risks.

RECOGNIZED RESOURCES TO DESIGN INTERNAL GOVERNANCE AND DUE DILIGENCE PROCESSES

The EU AI Act: The EU AI Act applies to all parties involved in the development, distribution, and usage of AI. Its requirements include a model inventory, risk classification, and risk assessment, and are aimed at protecting user safety and fundamental rights. Penalties for non-compliance range from 1% to 7% of the business’s annual turnover. Because the EU typically leads in setting global data privacy and policy standards (like GDPR), it is reasonable to expect that the EU AI Act may influence U.S. AI regulations in the future.

NIST AI Framework: Pursuant to the National Artificial Intelligence Initiative Act, the National Institute of Standards and Technology (NIST) published the AI Risk Management Framework (AI RMF) in 2023 to help guide — through the four functions of Govern, Map, Measure, and Manage — the responsible development and deployment of AI systems. Since then, several companion publications have been published that offer further guidance, including:
• The NIST AI RMF Playbook: a companion piece to the AI framework that includes recommended actions, references, and guidance for achieving the outcomes of the framework.
• NIST-AI-600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile: developed in support of President Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, this publication outlines the risks unique to or exacerbated by generative AI and suggested actions to manage GAI risks.

ISO/IEC 42001: Published in 2023, ISO 42001 is the world’s first standard on AI management systems that addresses the unique challenges of AI, including ethics, transparency, and continuous learning. Per ISO, the standard “sets out a structured way to manage risks and opportunities associated with AI, balancing innovation with governance.”
MODEL, DATA OWNERSHIP & RETENTION, WCGW CHECKLIST FOR EVALUATING AI & ML SOLUTIONS

Security professionals with plans to integrate AI or ML solutions in their processes might consider the following points of entry and engagement for applying the resources noted above:
• Procurement & TPRM
• Recurring control or evidence reviews
• Periodic testing with first-line operations and risk and compliance teams

In addition, the following is a list of sample questions InfoSec teams can use to build risk ratings for vetting AI solutions:
• What are the WCGWs?
• Have we evaluated the impact and likelihood of things going wrong?
• What data do we have to give the vendor to train the AI?
• What is the value/sensitivity of that data?
• What is the efficacy of the AI we get in return? Is it accurate, high-quality output that can be used to make decisions?
• What level of risk does that allow you to mitigate?
• What are the most concerning outcomes or tradeoffs?
• Do we have contractual measures with a third party or internal guidance on usage for the given solution?
• What is the vendor’s stance on security? What security certifications do they have?
• Does the risk fall in or outside of our risk appetite?

Selecting an AI vendor that supports your InfoSec initiatives while also aligning with your organization’s security policies can be tricky. The following is a checklist of best practices to consider when vetting potential AI vendors:
□ Vendor due diligence on data privacy and security (to evaluate data handling, security, and development practices)
□ Vendor due diligence on solution development (to evaluate how the vendor designs its solutions and the relationship between the vendor and the provider of the model)
□ Vendor due diligence on data usage & rights (to evaluate the impact of inclusion in training and the ownership of inputs and outputs)
□ Solution/model due diligence (to evaluate specifics of the model, risk of hallucination, safeguards designed or required, and whether outputs are explicitly accepted by the end user)
□ Propose mitigations by enabling safe AI options
□ Train users on appropriate AI usage
□ Familiarity with existing AI frameworks and standards such as ISO/IEC 42001 and NIST AI RMF

THE WAY FORWARD: UNLOCKING THE BENEFITS OF AI FOR INFOSEC TEAMS WITH STRONG RISK MANAGEMENT

AI and ML solutions represent an exciting opportunity to accelerate the efficiency and efficacy of InfoSec compliance activities, from enhancing threat detection and automating compliance tasks to accelerating incident response. While these technologies can empower InfoSec teams in many ways, balancing their potential benefits with a cautious approach to risk management is essential to protecting the business. To do so, InfoSec leaders must establish robust governance, processes, and controls around AI usage from the start. By navigating these complexities carefully, InfoSec professionals can harness AI’s potential to optimize their security and compliance programs and set an example of good AI governance for the rest of their organizations.
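To make the risk-rating idea concrete, one simple approach is to score each factor from the sample questions on a 1 (low) to 5 (high) scale and compare a weighted total against the organization’s risk appetite. The factors, weights, scores, and appetite threshold below are all invented for illustration; each organization would define its own.

```python
# Illustrative vendor risk scorecard: weighted 1-5 factor scores,
# normalized to 0-100 and compared to a hypothetical appetite threshold.

def vendor_risk_score(factors, weights):
    """Weighted average of 1-5 factor scores, normalized to 0-100."""
    total_weight = sum(weights[k] for k in factors)
    weighted = sum(factors[k] * weights[k] for k in factors)
    return round(100 * weighted / (5 * total_weight))

# Hypothetical weighting: data sensitivity and WCGW impact matter most.
weights = {"data_sensitivity": 3, "wcgw_impact": 3,
           "vendor_security_posture": 2, "contractual_controls": 1}
# Hypothetical scores for one candidate AI solution.
candidate = {"data_sensitivity": 4, "wcgw_impact": 3,
             "vendor_security_posture": 2, "contractual_controls": 2}

score = vendor_risk_score(candidate, weights)
print(score, "within appetite" if score <= 60 else "exceeds appetite")
# → 60 within appetite
```

The value of a scorecard like this is less the number itself than the conversation it forces: every factor score must be defended with evidence from the due diligence checklist above.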
About the Authors

Richard Marcus, Chief Information Security Officer, AuditBoard
Richard Marcus, CISA, CRISC, CISM, TPECS, is the CISO at AuditBoard, where he is focused on product, infrastructure, and corporate IT security, as well as leading the charge on AuditBoard’s own internal compliance initiatives. In this capacity, he has become an AuditBoard product power user, leveraging the platform’s robust feature set to satisfy compliance, risk assessment, and audit use cases.

John Volles, Director of Information Security Compliance, AuditBoard
John Volles, CISA, CISM, is a Director of Information Security Compliance responsible for managing AuditBoard’s compliance, risk, and privacy obligations as well as helping customers understand AuditBoard’s security posture and position. John joined AuditBoard from EY, where he reviewed and implemented client compliance programs and supporting technologies.

Mary Tarchinski-Krzoska, Market Advisor, AuditBoard
Mary Tarchinski-Krzoska, CISA, is a Market Advisor at AuditBoard. Mary began her career at EY before transitioning to a risk and compliance focus at A-LIGN, and brings 9 years of global experience including SOC, HIPAA, and ISO compliance audits, consulting on business continuity and disaster recovery processes, and facilitating risk assessments.
About AuditBoard

AuditBoard is the leading cloud-based platform transforming audit, risk, compliance, and ESG management. Nearly 50% of the Fortune 500 leverage AuditBoard to move their businesses forward with greater clarity and agility. AuditBoard is top-rated by customers on G2, Capterra, and Gartner Peer Insights, and was recently ranked for the fifth year in a row as one of the fastest-growing technology companies in North America by Deloitte. To learn more, visit: AuditBoard.com.
Copyright © 2024 AuditBoard