Inside the Mind of a CISO
Resilience in an AI-accelerated world

The CISO’s challenge: Measuring security outcomes
“Security without true adversarial testing is just an illusion”
The dos and don’ts for your next board deck
Ask a hacker: 5 vulnerabilities to watch out for
Table of Contents

03  LETTER FROM THE EDITOR: An Introduction from our CI&SO, Nick McKenzie
04  EXECUTIVE SUMMARY
06  ARTICLE: The Vulnerability Intelligence Report
12  INFOGRAPHIC: Ask a Hacker: Vulnerabilities to Watch Out For
15  CISO EXCELLENCE STORY: Hacking the NFL (Tomás Maldonado)
18  ARTICLE: The CISO’s Challenge: Measuring Security Outcomes
22  INFOGRAPHIC: The Dos and Don’ts of a Great Board Deck
23  ARTICLE: Building a Board Deck: A Guide for CISOs
28  THOUGHT PIECE: From Simulation to Strength: A CISO’s Guide to Red Teaming
34  CISO EXCELLENCE STORY: Securing a Leading AI Supercomputer (Dan Maslin, Monash University)
36  HACKER THOUGHT PIECE: Will AI Replace Security Research?
40  ARTICLE: From Assets to Action: Operationalizing Attack Surface Intelligence
43  CONCLUSION
© 2025 Bugcrowd Inc. All Rights Reserved. Reproduction and distribution of this publication in any form without prior written permission is forbidden.
LETTER FROM THE EDITOR

An introduction from our CI&SO, Nick McKenzie

As security leaders, we are in the thick of so much change. AI is everywhere, and frankly, many of us are sick of talking about it. How do we separate the wheat from the chaff?

We are in a high-stakes innovation race, but with every AI advance, the security landscape becomes exponentially more complex. Attackers are exploiting this complexity, but still targeting foundational layers like hardware and APIs. No single CISO can win this race alone. To thrive, we must move beyond isolated efforts and cultivate a collective resilience of collaboration—pooling our knowledge of the hacker community to outpace emerging threats together.

This community-driven approach is the only way to stay ahead—defeating attackers as one unified force. That’s why, in this report, we are sharing a range of insights for CISOs, from vulnerability hot spots to watch out for all the way through to strategies to confidently communicate with board members and justify investments.

Let’s look back on my original question about knowing where to anchor ourselves when it comes to AI. The reality is that we are at an inflection point. AI has fully taken over conversations about offensive security, so where do human testers and analysts end and where does AI begin? I guarantee that the moment you figure out how to answer that question, the goalposts will move again.

The key to success is understanding the role of humans, the role of AI, and the fact that the balance between the two will change over time. We can’t get lost in the buzzwords. While others race to introduce flashy AI workflows and copilots, it is my belief that now is the time for sensible decision-making and adopting AI models where they make sense and provide true value.

Ultimately, CISO confidence in an AI-accelerated world comes from continuous, community-powered testing, augmented by AI that translates risk for the board. This is what results in true security resilience. The challenges we face are daunting, but they’re not insurmountable when we work together. By tapping into a larger collection of knowledge, we can successfully lead our teams through these chaotic times—a fundamental truth this report highlights.

“The CISO position may be at the top of security leadership, but it’s strengthened most by the collective intelligence of our community.”

As you dive into this report, I encourage you to view each insight not as isolated information, but as part of a larger community of knowledge. Take what resonates, share what works, and continue building the collaborative spirit that will define the future of cybersecurity leadership. ■
EXECUTIVE SUMMARY

Vulnerability trends
The trends, patterns, and themes we’re seeing from hundreds of thousands of vulnerabilities submitted through the Bugcrowd Platform.

10%↑  increase in API vulnerabilities
88%↑  increase in hardware vulnerabilities
40%↑  increase in broken access control vulnerabilities
36%↑  increase in broken access control critical vulnerabilities
42%↑  increase in sensitive data exposure critical vulnerabilities
32%↑  increase in average payouts for critical vulnerabilities
EXECUTIVE SUMMARY

Recommendations
This digital magazine is made up of 10 articles, all examining different aspects of the CISO experience right now, whether you’re a first-time CISO, a seasoned vet, or even an aspiring security leader. It’s jam-packed with information, but for those in a hurry, here are a few highlights paired with actionable tips.

THE TOPIC: The rise of API and hardware vulnerabilities
THE TL;DR: Last year, Bugcrowd saw an 88% increase in hardware vulnerabilities and a 10% increase in API vulnerabilities. 81% of researchers and hackers cite that they’ve encountered a new hardware vulnerability they had never seen before in the past 12 months. Prioritizing API and hardware testing ensures we proactively protect our systems and hardware so that CISOs can be more resilient and deliver secure experiences to users downstream.
WHAT TO DO NEXT: → Consider adding APIs and hardware to the scope of your offensive security testing programs.

THE TOPIC: The need to operationalize attack surface intelligence
THE TL;DR: As the damage from cybercrimes increases rapidly, CISOs can’t afford to wait weeks or months to act on their attack surface intelligence. To help CISOs truly reduce risk, security teams must integrate EASM intelligence into their offensive testing platforms so that there’s a direct path from discovery to remediation.
WHAT TO DO NEXT: → Adopt an integrated approach to attack surface intelligence to demonstrate measurable improvements in security efficiency and faster remediation cycles. This enables you to prove the value and outcomes of a security program to external stakeholders.

THE TOPIC: The gift of objective feedback
THE TL;DR: Getting objective perspectives on where and why you are vulnerable is crucial for any CISO looking to build a stronger security program. The most mature organizations don’t just value objective feedback, they prioritize it. CISOs must go beyond annual pen tests that only provide a snapshot of their security posture. They must invest in continuous testing that incentivizes expert feedback. A big part of this is fostering a culture where learning more about your attack surface, discovering the unknown, or being “beaten” by a red team is not seen as failure but as opportunity.
WHAT TO DO NEXT: → Leverage hackers, pentesters, and red teamers for offensive security testing to get a true understanding of where you’re vulnerable.
ARTICLE

The Vulnerability Intelligence Report

Every day, hackers in the Bugcrowd community submit hundreds of vulnerabilities via our Platform. These vulnerabilities range in criticality, target type, and submission category.

We analyzed hundreds of thousands of proprietary data points and vulnerabilities collected from across thousands of public and private engagements from January 1, 2024, to December 31, 2024. Our goal is to provide security teams with the most up-to-date information on vulnerability trends to help them make educated decisions about their own risk and threat profiles.

Number of vulnerabilities
This graph shows the number of vulnerabilities over the past three years (2022–2024).

TRENDS
Over the past three years, the number of vulnerabilities found has stayed relatively consistent (a change of about 1.3%).

WHY?
The number of vulnerabilities is balanced by long-time engagements from more security-mature organizations (which have a lower volume of vulnerabilities) and newer engagements that often have a higher volume of vulnerabilities.

Number of critical vulnerabilities
This graph shows the number of critical vulnerabilities over the past three years (2022–2024); the year-over-year changes shown are -7% and -19%.

TRENDS
The number of critical vulnerabilities has gone down slightly year over year.

WHY?
Many new customers find that the first year of their engagement yields a high number of P1s. Over time, the number of P1s decreases—which isn’t a bad thing! This is simply a signal of a program becoming more secure.

ASK A CISO
One CISO working for a long-time Bugcrowd customer shared his thoughts on this situation: “It’s a win-win situation—either the Crowd finds something we didn’t see, in which case we can fix it, or they don’t find anything, which validates our efforts.”

Number of critical vulnerabilities by target type
This graph shows the number of critical vulnerabilities (P1) by target type over the past three years (API: down about 25%; website: down about 30%).

TRENDS
The number of critical vulnerabilities in API targets decreased by about 25%. Critical vulnerabilities in website targets decreased by 30%. There was a slight increase in critical vulnerabilities for Android, hardware, iOS, and network targets.

WHY?
The decrease in critical vulnerabilities in API and website targets is an incredibly encouraging stat. This points to customer API infrastructure becoming more secure. It tells us that developers are doing a good job at fixing bugs and securing their code. Keeping in mind that website target numbers indicate overall trends, we know that it is becoming harder to find P1s, giving customers assurance that they are becoming more secure.

Number of vulnerabilities by target type
This graph shows the number of vulnerabilities by target type over the past three years (API: up about 10%; network: 2x; hardware: up 88%; website: consistent).

TRENDS
Over the past three years, API vulnerabilities increased by almost 10%. We also saw an 88% increase in hardware vulnerabilities. The number of network vulnerabilities doubled, and the number of website vulnerabilities stayed consistent.

WHY?
As a rule of thumb, it’s helpful to use website target numbers as the ground truth when analyzing target data. This is because most engagements include websites in their scope. Because website vulnerabilities stayed consistent, we can look at changes in other targets for additional insights. The increase in API and hardware vulnerabilities aligns with what we’re seeing in the market—hardware is having a resurgence and API security is more important than ever. These numbers also tell us that organizations are diversifying their scope on their engagements. Many engagements start with a more limited scope, with website targets as the primary focus. As teams see the value in working with hackers, they will often expand their scope to include additional targets like IoT, network, and hardware.

ASK A CISO: API testing
Dan Ford, CISO, ClassDojo

“APIs are the foundation of our platform, enabling key services and handling sensitive data. Because they directly expose business logic and functionality, they are a natural focus for attackers. Prioritizing API testing ensures that we are proactively protecting our systems and delivering secure experiences to our users.

APIs can expand the attack surface available to malicious actors, so securing them is critical. By combining internal testing with a comprehensive suite of unit tests, alongside live testing through our bug bounty program, we validate our defenses, catch subtle issues early, and maintain strong security as our platform evolves.

Partnering with Bugcrowd gives us access to diverse, skilled researchers who uncover vulnerabilities traditional testing might miss. Their insights strengthen our defenses, help us identify gaps, and continuously refine our internal security processes based on real-world attacker perspectives.”

Payouts for vulnerabilities
This graph shows the average payouts for vulnerabilities over the past three years (average: about -1.3%; median: -11%; 90th percentile: -4%).

TRENDS
Over the past three years, the average, median, and 90th percentile of payouts have remained relatively consistent.

WHY?
Even in challenging times where budgets are being cut down and layoffs are common, security teams are maintaining their investments in crowdsourced security.

Payouts for critical vulnerabilities
This graph shows the average payouts for critical vulnerabilities (P1) over the past three years.

TRENDS
The average payouts for critical vulnerabilities increased by 32% in 2024. Median and 90th percentile critical vulnerability payouts remained the same.

WHY?
Notice how the graph above showed that overall vulnerability payouts remained relatively consistent while average critical vulnerability payouts increased each year? This points to organizations emphasizing critical vulnerability payouts. They are paying more for P1 vulnerabilities and balancing that by paying less for P3, P4, and P5 vulnerabilities.

Number of vulnerabilities by VRT category
This graph shows the top 10 most commonly reported VRT categories:

1. Broken access control
2. Cross-site scripting (XSS)
3. Server security misconfiguration
4. Sensitive data exposure
5. Broken authentication and session management
6. Other
7. Server-side injection
8. Unvalidated redirects and forwards
9. Cross-site request forgery
10. Application-level denial of service (DoS)

TRENDS
The VRT category that saw the largest increase in vulnerabilities was broken access control (40% increase). Other categories that saw increases over a smaller volume of submissions include cryptographic weakness and network security misconfiguration. The only VRT category that saw a statistically significant decrease in vulnerabilities submitted was application-level denial of service (DoS). The categories that saw decreases over a smaller volume of submissions include automotive security misconfiguration, client-side injection, insecure data storage, and mobile security misconfiguration.

WHY?
It was no surprise to see that broken access control vulnerabilities increased so much in 2024. This is a common category that many hackers gravitate toward. Many hackers prefer finding a niche set of skills and going all in on the VRT categories that align with that skill set, and broken access control vulnerabilities are certainly popular with hackers who employ this style.

We saw about an 11% decrease in application-level denial of service (DoS) vulnerability submissions. This VRT category is often out of scope in engagements, so hackers are less likely to test these applications, fearing legal consequences.

ASK A HACKER: The increase in broken access control vulnerabilities
DK999

“When looking at this data, it’s important to remember that apps are getting more and more complex. Given the increase in features and integrations, access controls are becoming harder to manage. Most broken access control issues are trivial to exploit, yet they carry a huge impact.

Any app goes through multiple development cycles with numerous code changes, keeping the attack surface dynamic. Developers are under pressure to release features quickly, meaning security often takes a backseat. Proper access control implementation can be time-consuming. With AI-assisted coding becoming common, we can expect the percentage of broken access control vulnerabilities to increase.

Between app complexity, rapid development, and the new AI adoption cycle, security is being neglected early on. I believe this is why we’re seeing this vulnerability type surge.”

Number of critical vulnerabilities by VRT category
This graph shows the top 5 most commonly reported VRT categories for P1s:

1. Server security misconfiguration
2. Server-side injection
3. Broken access control
4. Sensitive data exposure
5. Broken authentication and session management

Sensitive data exposure P1s increased by 42%.

TRENDS
The number of broken access control P1s increased by 36% in 2024. This category joined the top three in 2024. The top three categories of P1s rewarded in 2023 were broken authentication and session management, sensitive data exposure, and server-side injection.

WHY?
The increase in sensitive data exposure vulnerabilities is a key finding because it tells us that more personally identifiable information (PII) is being exposed to the world. PII includes items like names, addresses, emails, and social security numbers.

Unfortunately, in the process of code development, data exposure is still an afterthought. Given the number of P1s we’re seeing in this category, we can assume that the PII being exposed is unencrypted. In the wrong hands, this type of data can lead to catastrophic consequences for customers and an organization’s reputation.

Luckily, many hackers specialize in reconnaissance work in this specific category. There are thousands of hackers on the Bugcrowd Platform who consider sensitive data exposure vulnerabilities to be their bread and butter. They help organizations find these P1 vulnerabilities before threat actors do.

ASK A HACKER: The increase in sensitive data exposure vulnerabilities
InsiderPhD

“While many vulnerabilities are highly technical, complex, and mind-blowing, sensitive data exposure is much more, well, boring. But these can be cringe-inducing ‘how did I not know about this’ issues with serious consequences. From leaked credit card numbers to leaked employee tokens in GitHub repos, regulatory compliance compels organizations to take steps to secure sensitive data so that a data breach doesn’t end up happening.

Attackers can end up living rent-free inside your systems, as data might be stolen for years before a breach is noticed. While you think everything is fine, attackers might be having a field day stealing intellectual property and PII on your customers or employees! And they don’t just stop at breaching your systems; they’ll extort you, dumping the data publicly when you refuse to pay.”
INFOGRAPHIC

Ask a Hacker: Vulnerabilities to Watch Out For

We asked expert hackers on the Bugcrowd Platform to break down the top five most commonly reported VRT categories for critical (P1) vulnerabilities last year. These insights can help you understand the impact of some of the most common vulnerability types.

1. Server security misconfiguration
The potential impact of server security misconfiguration vulnerabilities
Masonhck357

“Server security misconfigurations remain one of the most common and dangerous weaknesses in modern environments. Misconfigured authentication, caching, or access controls can turn low-severity issues into critical breaches. Through my own work, I’ve located admin panels left exposed via default credentials, granting unrestricted system access. I’ve also uncovered a low-level rate limiting flaw on cached URLs containing sensitive documents protected by OTP codes. By combining predictable caching behavior with the rate limiting weakness, I was able to bypass the OTP requirement entirely and escalate the issue to a critical vulnerability. These cases show how small oversights can create significant risk when chained together.”
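The caching behavior described above is something teams can check for on their own assets. As a hedged illustration (this sketch is not from the report, and the endpoint URLs are hypothetical placeholders), the Python snippet below fetches a few sensitive URLs and flags responses that a shared cache could legitimately store; this is the kind of predictable caching that, combined with weak rate limiting, can undermine OTP protections.

```python
# Minimal, illustrative check (not from the report): flag sensitive endpoints whose
# responses lack cache-hardening headers. The URLs below are hypothetical placeholders.
import requests

SENSITIVE_ENDPOINTS = [
    "https://example.com/account/documents",  # e.g., an OTP-protected document download
    "https://example.com/admin/export",
]

def cache_findings(url: str) -> list[str]:
    """Return caching-related findings for one endpoint's response headers."""
    findings = []
    resp = requests.get(url, timeout=10)
    cache_control = resp.headers.get("Cache-Control", "").lower()
    if "no-store" not in cache_control:
        findings.append(f"{url}: response is cacheable (Cache-Control: {cache_control or 'absent'})")
    if "Set-Cookie" in resp.headers and "private" not in cache_control:
        findings.append(f"{url}: session cookie set without Cache-Control: private")
    return findings

if __name__ == "__main__":
    for endpoint in SENSITIVE_ENDPOINTS:
        for finding in cache_findings(endpoint):
            print(finding)
```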


2. Broken access control
The potential impact of broken access control vulnerabilities
DK999

“Broken access control vulnerabilities should be a priority for every security team for three main reasons: ease of exploitation, likelihood of exploitation, and compliance requirements. Most of these vulnerabilities are easy to exploit, even for novice attackers. Threat actors are actively targeting these flaws to breach companies and leak data. Standards like the GDPR and HIPAA mandate strong access control. Failure to address these issues can result in significant fines and penalties.

From my experience, these vulnerabilities often leak critical information like PII, healthcare data, confidential system information, and internal documents. They are absolutely necessary to address.”
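To make “easy to exploit” concrete, here is a minimal sketch of the most common shape of broken access control, an insecure direct object reference (IDOR). It is illustrative only and not taken from the report; the framework choice, routes, data, and the current-user attribute are all hypothetical assumptions. The first handler trusts a client-supplied ID, so any authenticated user can read any record; the second adds the missing ownership check.

```python
# Illustrative sketch of an IDOR-style broken access control flaw and its fix.
# Routes, data, and g.current_user_id are hypothetical assumptions, not the report's code.
from flask import Flask, abort, g, jsonify

app = Flask(__name__)

INVOICES = {
    101: {"owner_id": 1, "total": 420},
    102: {"owner_id": 2, "total": 99},
}

@app.route("/v1/invoices/<int:invoice_id>")
def get_invoice_vulnerable(invoice_id: int):
    # Vulnerable: no authorization check, so any caller can enumerate invoice IDs
    # and read other customers' records.
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        abort(404)
    return jsonify(invoice)

@app.route("/v2/invoices/<int:invoice_id>")
def get_invoice_fixed(invoice_id: int):
    # Fixed: the record must belong to the caller. g.current_user_id is assumed to be
    # populated by an authentication layer that is not shown here.
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        abort(404)
    if invoice["owner_id"] != g.current_user_id:
        abort(403)  # authenticated but not authorized
    return jsonify(invoice)
```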

3. Broken authentication and session management
The potential impact of broken authentication and session management vulnerabilities
Aituglo

“Broken authentication and session management bugs are common vulnerabilities that often go unnoticed, with a very critical business impact. Even if you invest in a great firewall and EDR, keeping a completely clean dashboard, these bugs can run in the background, meaning someone can impersonate a legitimate user account without the security team getting any alerts.

This can be devastating from a business risk perspective. From a compliance and regulatory perspective, it can trigger GDPR or CCPA penalties because it commonly concerns sensitive customer data. There is also a downstream impact. Attackers can chain together these vulnerabilities, leading to more advanced attacks.”


4. Sensitive data exposure
The potential impact of sensitive data exposure vulnerabilities
Brig

“The potential impact of sensitive data exposure can be a legal, financial, and reputational nightmare. Often, the sensitive data that is exposed—user names, addresses, IDs, and mobile numbers—is just part of an attack, and the attacker is pivoting their way deep into and across your network.

By the time you find the breach, attackers have often already been working their way through your network for months. Meanwhile, they’ve already sold the data to the highest bidder, and now you and your organization are being targeted with phishing emails tweaked in just the right way to get you to engage.

These vulnerabilities are an obvious priority for security teams and CISOs. They must be identified and fixed quickly.”

5. Server-side injection
The potential impact of server-side injection vulnerabilities
Anon Hunter

“Whenever we type something into a website, a search box, a login form, or even a comment box, it sends that information to a server in the form of parameters and their values to process. Normally, a server should treat our input as plain, harmless text. But with server-side injections, attackers can send specially crafted text, called a payload, that the server will follow as if it were a legitimate command or instruction, treating the input as part of its own code.

This could lead to the attacker stealing all of your data, locking you out of your own system, demanding a ransom, or selling your stolen information on the dark web.”
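As a hedged illustration of the “payload” idea (not drawn from the report; the table and data are hypothetical), the snippet below builds one query by string concatenation, which lets input such as nobody' OR '1'='1 change the query’s meaning, and then runs the parameterized version, which keeps the same input as plain data.

```python
# Illustrative sketch of a server-side (SQL) injection and the parameterized fix.
# Table, columns, and data are hypothetical examples, not taken from the report.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice-secret'), ('bob', 'bob-secret')")

user_input = "nobody' OR '1'='1"  # attacker-controlled "payload"

# Vulnerable: the input is concatenated into the SQL text, so the quotes in the
# payload rewrite the query and every row comes back.
leaked = conn.execute(
    f"SELECT username, secret FROM users WHERE username = '{user_input}'"
).fetchall()
print("vulnerable query returned:", leaked)

# Fixed: a parameterized query keeps the input as plain data, so nothing matches.
safe = conn.execute(
    "SELECT username, secret FROM users WHERE username = ?", (user_input,)
).fetchall()
print("parameterized query returned:", safe)
```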

CISO EXCELLENCE STORY

Hacking the NFL


Tomás Maldonado, National Football League (NFL)

Tomás Maldonado is a New York-based security leader and independent board director with over 25 years of experience across finance, media, manufacturing, and technology. He has been the CISO of the NFL for six years. As the largest and most popular sports league in North America, the NFL faces unique security challenges. The NFL is an organization of organizations—it is comprised of 32 clubs, each with unique operations, plus a league office, media properties, and global events. Tomás is in charge of securing this entire ecosystem.

We sat down with Tomás to learn more about his top priorities, his thoughts on AI governance, and his approach to proactive security.

TOP PRIORITIES

AI GOVERNANCE

AI INNOVATION AND TRANSFORMATION

SECURITY AND RISK MANAGEMENT

PROACTIVE AND OFFENSIVE SECURITY


What have been your top priorities since taking the helm as CISO of the NFL?

Our first priority has been to align security with the business’s objectives and risk appetite. Cybersecurity cannot sit in isolation; it must support the NFL’s mission and enable growth. We established a risk-based program mapped to standards like the NIST Cybersecurity Framework and made risk transparent to leadership so that they can make informed decisions about priorities and appetite.

The NFL isn’t just one organization—it’s an entire ecosystem. To secure it all, we built a unified framework that raises the baseline for every entity. Through consistent controls, shared playbooks, and regular assessments, we try to ensure no single point of weakness can impact the whole.

We have also invested heavily in culture and people. We don’t see employees as the weakest link—we see them as potential security advocates. By equipping them with training and awareness, we’ve created an extended line of defense where everyone plays a part.

Finally, resilience has been central to our approach. We’ve strengthened threat detection, incident response, and data protection, but we haven’t stopped there. We test these capabilities constantly through tabletop exercises and red team drills, ensuring that when the spotlight is on, security is seamless and the business can shine.

The issue of AI governance extends beyond tech into the realms of compliance, operations, and brand reputation. How are you approaching and prioritizing AI governance?

AI governance can’t live in a silo, so building an AI governance council that includes security, compliance, legal, and business leaders is necessary. Every AI use case should be reviewed for compliance, privacy, bias, and security concerns before it launches. We also monitor emerging regulations and translate those requirements into controls.

From an operational standpoint, I treat AI like any other critical system. This means securing data, testing models for manipulation, monitoring outputs for anomalies, and preparing incident response playbooks for AI-specific scenarios. I operate on a “security by design” principle, so innovation never outpaces safeguards.

The part I emphasize most is trust and brand integrity. We’re entering an era where the line between real and fake is becoming increasingly blurred. Deepfakes and AI-generated content are a real risk to organizations. Companies investing in detection tools, validation processes for official communications, and crisis playbooks for AI-driven misinformation campaigns will be ahead of the curve. For me, AI governance is about protecting that fragile trust because once it’s lost, it’s incredibly difficult to win back.

In short, AI governance should be an extension of your security framework built on compliance, operational resilience, and brand protection; all of these elements must work in tandem.
How can CISOs effectively balance AI innovation and transformation with robust security and risk management?

For me, balance comes from embedding security from day one. Whenever a new AI initiative is proposed, my team runs risk assessments, applies guardrails, and ensures only the right data and systems are accessible. This way, we prevent later surprises.

But I don’t view security as a roadblock. I often say, “If security is not enabling the business, then what are we doing?” Security should accelerate innovation, not stop it.

We celebrate when teams launch secure products and not just fast ones because this sets the tone that secure innovation is the standard.

Culturally, we work hard to make cybersecurity a partner to innovation. When business leaders understand why we’re putting in guardrails, they become allies. Additionally, we highlight success stories where secure deployments allowed us to move faster or expand into new areas confidently.

Finally, we emphasize resilience. You can’t block every threat; this is unrealistic. But you can prepare. We monitor AI systems, we scan for new vulnerabilities, and if something goes wrong, we respond quickly and learn from it. It’s about embedding security into the DNA of innovation, so the organization can move forward safely and confidently.

How does proactive security and offensive security testing play a role in your overall security strategy?

Proactive testing is a cornerstone of our strategy. We don’t believe in waiting for an incident to occur—we simulate attacks, run red team operations, and drill relentlessly. We do so many tabletop exercises that when a real incident happens, we have a plan. That preparation builds the confidence and speed we need when they matter most.

It’s also about thinking like the adversary. I remind my team that unlike sports, cybersecurity has no rules—“We don’t play fair with adversaries.” This mindset drives us to simulate phishing, ransomware, and denial-of-service attacks against ourselves. If we can break our own defenses, we know where to shore them up.

For our key events, testing starts months in advance. We bring in partners to run scans, penetration tests, and tabletop drills. By event day, weaknesses we’ve found have been remediated, and security is invisible to customers and staff. The goal is to be boring from a cyber standpoint and exciting on the field.

Ultimately, proactive testing shapes what we do. It reinforces resilience because blocking every attack is impossible, but being prepared is.

It also helps validate our defenses, sharpen our responses, and keep our people vigilant. Offensive testing is how we stay one step ahead and ensure our defense is ready. ■
FEATURE STORY

The CISO’s Challenge: Measuring Security Outcomes
By Trey Ford

In Greek mythology, Sisyphus is condemned to roll an immense boulder up a hill for the rest of eternity. As the boulder approaches the top, it immediately rolls back down.

From the ELT and board’s perspective, CISOs can sometimes sound like Sisyphus when presenting our never-ending list of projects and asks. Every security program has a story full of milestones and gaps (from assessments, audits, best practices, customer requests, or some other source)—and there is always more to do and spend money on.

Budgetary constraints help us think critically and give us the opportunity to prioritize and innovate, though the roadmap and tradeoffs along the way are not always clear. When the board struggles to understand our vision, contextualize our risk investment strategy, or see how we measure success or failure, our boulder rolls back down the hill, requiring CISOs to start the process over again.
How we define “success” and “failure”

In reality, a security program can be a lot like our health and wellness journeys. Everyone is on their own path, and we are constantly having to navigate tradeoffs.

In my private life, I measure success in these areas by my ability to say yes to the things I care about—energy to say yes to family, capacity to be present and engage with friends, and ability to make time for sports and hobbies. Failure is when I don’t have the energy to balance my work, travel, and the things that matter to me outside of work. The difference between my personal goals and those of security programs is that the latter require that adversarial element to determine if we’re executing at a level we’re comfortable and confident in.

Furthermore, CISOs need to stretch a limited budget to balance people, process, and technology. The success of a program is measured in a handful of ways, but “an auditor approved” is the answer for so many.

However, can the lack of a breach be considered a silent metric of success? (Reminder: we cannot prove a negative.) When we define success as a lack of incidents, justifying a constant increase in security spending to our boards is nearly impossible. In practicality, security without true adversarial testing is almost an illusion, leaning heavily on the “maturity” of best practices without pragmatic validation. This means that diversified research and testing clearly validates success, or identifies points of failure (opportunities for improvement), directly justifying our asks.

The culture we’re building isn’t about running from failure—it is aimed at continuous improvement and honest and objective feedback on what needs focus or prioritization.

Creating a safe environment for this level of objectivity is what changes our frame of reference from “failure” to a “growth mindset.” This carries directly into program management and budgetary planning.

If the CISO community has learned anything through the zero-based budget cycles over the last couple of years, it might be that the assumed nonnegotiable or brinkmanship position of “We need to be doing all of these things” doesn’t easily stand up to scrutiny.
Adversarial testing: The path to objective measurement

NIST defines “resilience” as “the ability to maintain required capability in the face of adversity.” So how do we measure this?

Adversarial testing evaluates our defenses by applying the tactics, techniques, and procedures of real-world attackers, highlighting deficiencies in our programs that rise above our agreed-upon risk profiles. Adversarial testers, like red teamers or ethical hackers, test resilience and provide actionable insights, highlighting high-priority gaps to address with a sense of purpose.

One way adversarial testing helps with objective measurement is that it aids us in evaluating our technology investment stack. This area is notoriously difficult to be objective about—where are our people, process, and technology investments paying off or coming up short? We have a fear of asking how our technology investments are working, or even if they’re working at all. Vendor evaluations are time-consuming, and changes come with cost and can be emotionally charged, so it’s natural that there is an unwillingness to fire or rotate vendors/technologies. When leadership is confident in our objectivity in evaluating existing investments, we gain credibility.

When we engage in adversarial testing, we have the objective data to shine a light on our program and inform our decisions about what is and isn’t working.

Why CISOs need adversarial testing to understand success and failure

Adversarial testing forces us to ask the hard questions and gives us an unparalleled view into the outcomes of our security spend. For most companies, this is almost like a Christmas card you send your customers and auditors—a once-a-year snapshot of your program. There’s value, but moving beyond point-in-time assessments enables CISOs to confidently report program effectiveness.

By investing in adversarial testing, we quantify our security outcomes, identify gaps, and move beyond subjective assessments and maturity scores.

With the findings from adversarial testing, we can articulate and defend our asks to the risk committee and board, helping them make informed decisions about where we need to fund, where we need to defund, and what we need to adjust in the tech stack.
Bringing results to risk committees

From my perspective, the most successful, capable, and upwardly mobile CISOs operate in partnership with a risk committee. They regularly gather representatives from key leadership positions across an organization to sit down and evaluate the top risks to their business. These committees are an opportunity for businesses to look at their investments, assessments, audits, known technical deficiencies, and key concerns. In other words, CISOs use risk committees as an opportunity to align on difficult investment decisions associated with competing business risks.

In a time where zero-based budgeting is becoming the norm, CISOs are constantly asked to defend every dollar and make difficult choices about what to cut.

Budget cuts affect every aspect of security planning, strategy, and operations—all of which are part of a complex tapestry woven across a business in alignment with the risk committee. Every time CISOs are asked to defund projects, they need fresh acceptance from the risk committee so that leadership can calibrate on the tradeoffs. CISOs can use the results of adversarial testing to justify these tradeoffs to the risk committee and make educated decisions to address risks and gaps.

A push toward resilience

When everything we ask for is “mission critical,” we sound like Sisyphus, pushing our boulders up the hill over and over again. We must shift from incident prevention to measuring resilience. With the power of adversarial testing as a core component of our security programs, our asks are backed by evidence and we can tangibly demonstrate the value of our security investments.

Why does this resilience matter so much? Again, resilience is the ability to maintain required capability in the face of adversity. A strong security program means fewer disruptions to business, more effectively managed risk, and better processes to deal with incidents. We’re building programs strong enough to protect what matters while letting teams focus on what they love outside of work.

Resilience isn’t a destination but a series of daily choices and practices that become your way of operating.

When your security foundation is solid and continuously validated via adversarial testing, you’re creating space for innovation, growth, and the kind of work–life balance that lets you say yes to what matters most. ■
INFOGRAPHIC

The Dos and Don’ts of a Great Board Deck
Take your deck from “pulse check” to a story the board will fund

DO: Build upon previous decisions and discussions in a narrative approach.
DON’T: Treat your board presentation as a status report instead of an ongoing story.

DO: Build dashboards with meaningful, binary metrics that tell your story over time. Consistency is key!
DON’T: Regularly change the metrics you report on or report on metrics that require deep security expertise.

DO: Include a maturity score based on a maturity model or risk management framework.
DON’T: Only rely on maturity scores; combine them with additional frameworks or metrics to show efficacy.

DO: Use insights from adversarial testing to prove what is and isn’t working.
DON’T: Include asks with no justification.
ARTICLE

Building a Board Deck: A Guide for CISOs

Boards routinely approve significant growth investments but freeze when CISOs ask for budget to fund security initiatives. This disconnect isn’t a result of your presentation skills but of a lack of context. Specifically, most board members lack the technical context to understand security risks (or tradeoffs) and to evaluate your proposals against other initiatives, which makes it challenging to get their buy-in.

Your role as CISO is to bridge this gap by helping the board and executive team calibrate risk tolerance and make informed tradeoffs that align with organizational goals. This requires translating risk to help them understand what level of risk they’re comfortable accepting.

Let’s break it down further.

What is a board looking for?

The first step is to understand what a board is looking for. Every board is looking for clarity on these three questions:

What do I need to know?
The board wants a high-level understanding of the current state of the security system and the risks that keep you up at night. This also includes any critical data points or trends that you and the team are monitoring.

What do you need from me?
The board wants to know what you need them to do to prevent risks from materializing, whether it’s greenlighting a funding request or getting executive alignment on a strategic direction.

Why do I care?
The board needs to understand why the risks and trends matter for the business, whether it’s a threat to operations or a regulatory/compliance need.

ASK A CISO: Dan Maslin, CISO, Monash University

What advice do you have for CISOs looking to effectively communicate risk to their board?

Once you’ve passed the commonly used risk rating matrix of “likelihood vs. consequence,” you need to bring the reality to life. Go to the next level and develop threat-informed scenarios that are most likely to occur within the organization to make it real for your audience. Next, consider the key mitigations or controls for each scenario and rate the effectiveness of each.

For example, you might say, “Advanced threat actors leverage social engineering to manipulate staff into providing unauthorized access,” and your top three controls are “staff training,” “privileged access management,” and “email protection,” with an effectiveness rating for each.

Having a few of these scenarios—which are realistic because they are based on current intelligence—will bring risk to life and drive a good human-to-human discussion about what could happen and how risk can be mitigated.

Craft a narrative

Board members understand business stories better than security metrics. They want to see progression, learn from challenges, and understand how decisions play out over time. This is why the most effective CISO presentations are built around story arcs. Here’s a rundown of how to begin crafting your narrative.

1. Make each meeting a new chapter

Think of your board presentation as part of an ongoing story, not a status report. You want to build on previous decisions and show how they’re being addressed to create an ongoing story about the state of the organization’s security. This requires you to translate technical risks into a compelling business narrative that helps the board understand why the risks matter for the business, which builds mutual understanding and trust.

For example, you could start an audit storyline with: “We’ve got the audit coming up next month, and we’ve expanded our scope. We’ll likely see new action items because we’ve never thoroughly audited this.”

Then, in the next quarter, continue the arc: “Here’s what was accomplished. Here’s what we learned. Here’s what it means for the business. This is what we’re going to do about it.”

Even if you do your best to communicate your story, you might get conflicting feedback from the board. Don’t panic—it’s normal. Take what’s valuable from the feedback and keep moving forward.

2. Dashboards: A picture is worth a thousand words

To help your board buy your narrative, use dashboards to support your story with metrics. Focus on showing trend lines that demonstrate what’s working, improving, or failing over time. It’s best to use the same dashboard structure each quarter so that the board can quickly understand the data.

TIP: Don’t have all the data yet? Put red Xs in your presentation where those metrics would go. This builds transparency and trust—don’t hide what you don’t know. You can use this as an opportunity to ask for budget and resources to track them going forward.

ASK A CISO: Tomás Maldonado, CISO, NFL

What advice do you have for CISOs looking to effectively communicate risk to their board?

When I engage with the board, my priority is transparency and reducing complexity in my messaging. I raise issues candidly, explain why they matter, and secure support for the solutions we need. This means being willing to deliver difficult news but with solutions and a go-forward approach. As CISOs, we can’t sugarcoat; our responsibility is to escalate risks so leadership understands what’s at stake.

Treat security as a business function; don’t talk in terms of firewalls or CVEs. Instead, talk about how a risk could impact operations, revenue, or brand reputation. The reality is that no board member wants to see the organization in headlines for the wrong reasons, and they understand that cybersecurity protects both the business and the brand.

One of the most effective ways I get my message across is through storytelling. I’m a firm believer in “never let a good incident go to waste.” When a high-profile incident hits the news, we map it back to our own business. This makes the risk tangible. For example, I have shown how a ransomware attack could disrupt operations and erode customer trust.

Finally, it’s important to make board engagement routine. Don’t just show up in a crisis; provide consistent updates on threat trends, resilience, and preparedness. This cadence builds trust and positions cybersecurity as a standing business priority, not a one-off conversation.

So, my advice is this:

✓ Be transparent, even with bad news—transparency builds trust.
✓ Speak the board’s language—frame risks in language directors care about.
✓ Use real-world examples—make the risks relatable.
✓ Keep communication regular—there should be no surprises.

When you do this, the board begins to see cybersecurity as integral to the business, and they’re far more willing to support the investments you need.

3. Choosing the right metrics

To build the best dashboard, you need the right metrics to tell your story. Focus on binary metrics: simple yes/no answers to questions like “Do you have this coverage?” These work well because cyber insurance underwriters have learned they correlate to actual breach payouts. They can include the following (a brief sketch of how such metrics roll up into a dashboard row follows this list):

• End-of-life timelines and upgrade plans for software
• Coverage metrics (e.g., logging, EDR, system inventory completeness)
• SLA adherence by risk level (not total vulnerability count)
• Security baseline compliance
• Hygiene indicators (e.g., patch compliance rates, incident response training frequency, backup/recovery testing results)
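As referenced above, here is a minimal sketch of how binary metrics can roll up into a repeatable dashboard row. It is illustrative only, not from the article; the metric names, evidence sources, and values are hypothetical examples of the kinds of yes/no items listed.

```python
# Illustrative sketch: roll binary (yes/no) security metrics into a repeatable
# dashboard row. Metric names, evidence sources, and values are hypothetical.
from dataclasses import dataclass

@dataclass
class BinaryMetric:
    name: str
    passed: bool          # the yes/no answer the board sees
    evidence: str = ""    # where the answer comes from (audit, tool export, etc.)

def dashboard_row(quarter: str, metrics: list[BinaryMetric]) -> str:
    """Render one quarter as a fixed-format row so trends stay comparable over time."""
    passed = sum(m.passed for m in metrics)
    cells = "  ".join(f"{m.name}={'YES' if m.passed else 'NO'}" for m in metrics)
    return f"{quarter}: {passed}/{len(metrics)} controls in place | {cells}"

if __name__ == "__main__":
    q3 = [
        BinaryMetric("EDR coverage complete", True, "agent inventory export"),
        BinaryMetric("Logging on crown-jewel systems", False, "SIEM onboarding list"),
        BinaryMetric("Backup restore tested this quarter", True, "DR exercise report"),
    ]
    print(dashboard_row("Q3", q3))
```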

4. Come prepared with options

Once you have your narrative, present options for your top risks to help the board understand how they can help. Highlight the cost, timeline, and resources for each priority to make sure the proposals are clear.

Just like taking your vitamins the day before going to the doctor doesn’t improve your health, quick fixes just before the board meeting don’t show real security health. Instead, focus on consistent trends over time.

The finishing touches

You’ve built the narrative. Here are tips to ensure it lands effectively.

✓ Know your fundamentals: Make sure you have an in-depth understanding of your attack surface, data locations, and SLAs.

✓ Align with your executive team: Get consensus from leadership on your risk priorities and recommendations before your board meeting to present a united front.

✓ Calibrate on the board’s technical literacy: Use this knowledge to decide the right context level for each topic.

✓ Present with conviction: State your confidence and conviction levels honestly.

When it comes to boards, credibility is everything. If you’re not believable, you’re not safe. The best way to build credibility is to create a clear, compelling narrative that your board can understand, changing them from security skeptics into advocates. ■

SUCCESS FORMULA: Story Arc + Dashboards = Board Success
THOUGHT PIECE

FROM SIMULATION TO STRENGTH
A CISO’s Guide to Red Teaming
By Alistair G, Director of Red Team Operations, Bugcrowd

I’m often struck by the parallels between maintaining personal health and an organization’s cyber defenses. Regular checkups, stress tests, and immunizations help uncover hidden health issues before they become life-threatening—and in cybersecurity, red teaming plays a similar preventative role.

A red team exercise is a full-scope, real-world attack simulation that acts as the “diagnostic stress test” of an organization’s security immune system. Conducted by ethical hackers, it probes a company’s defenses (technology, people, and processes) in a controlled but adversarial manner. The goal isn’t mere compliance or checklist completion; it’s to proactively expose weaknesses, from unpatched systems to human errors, before a real attacker does. For a CISO, red teaming provides an unvarnished view of how their organization stands up to modern threats and where strategic reinforcements are needed.

Ask anything:
• I am new and I need budget. Can you show us our security holes?
• How good or bad are our defenses?
• Does my security strategy reduce risk?
• Is my organization ready and able to respond to an attack?
• How would a real threat target our company?
• How secure is this company we have just acquired?
The role of red teaming in cybersecurity strategy

From a CISO’s perspective, red teaming is not an isolated technical drill—it is a strategic tool that validates and strengthens an organization’s security posture. CISOs often employ red team exercises to see how their enterprise detection and response mechanisms hold up under a simulated crisis. Red teaming serves several critical functions in a mature security program:

Simulating real-world attacks to test defenses
A red team can improve security resilience by simulating the TTPs used by threat actors that organizations would realistically face. This “live fire drill” often uncovers hidden vulnerabilities or attack paths that routine scans or compliance audits miss.

Challenging assumptions and finding weak links
CISOs often have assumptions about what their security controls and staff can handle. Red teaming validates if existing security controls, policies, and procedures work as expected when under attack.

Validating detection and response (blue team effectiveness)
Red teaming demonstrates how well blue teams can detect and respond to stealthy and evasive attacks. A well-run red team engagement will produce concrete data on detection gaps, and a good internal control group can measure response times, which the CISO can use to drive improvements.

Identifying and prioritizing risks for reduction
Red teaming helps translate technical findings into business risk terms. Demonstrating the practical impact of certain vulnerabilities or process failures enables security leaders to prioritize what matters most.

Strengthening security programs proactively
Overall, red teaming embodies a shift from reactive security (waiting for incidents to occur) to proactive security. By uncovering weaknesses and prompting fixes, red teaming drives continuous improvement.

In many sectors, the value of red teaming has become so recognized that it’s mandated or strongly encouraged by regulators and industry standards. This regulatory push underscores a key point: from a boardroom’s perspective, red teaming is not just about finding holes—it’s about assuring stakeholders (regulators, customers, and the board) that an institution’s defenses work against high-end threats.
Common defensive controls and red team evasion techniques

Across all these industries, organizations deploy a range of defensive controls to protect their assets. A CISO’s mandate is to build a layered defense (people, process, and technology) such that if one layer fails, another will catch an attacker. I like to call this “the defensive onion” because the more layers an attacker cuts through, the more likely they are to cry. However, one lesson red teaming continually reinforces is that adversaries are adept at finding ways around even well-crafted controls. Understanding this cat-and-mouse dynamic is crucial for security leaders—it reveals which controls are truly resilient and which ones may provide a false sense of security if not complemented by others.

Email and endpoint hygiene vs. phishing
Red teams routinely craft convincing phishing emails, texts, and voice calls. They might register lookalike domains or exploit trusted services like calendar invites or Dropbox links. Even with increased user education, all it takes is one clever email at the right time to get a click.

Identity and access controls (passwords, MFA, and SSO)
One common evasion tactic is socially engineering users to unknowingly assist attackers. For example, the use of MFA fatigue attacks has been widespread: an attacker uses a stolen password and keeps spamming a user’s authenticator app with login approvals, hoping the user will eventually tap “allow” out of annoyance or confusion. Even a 1% success rate can be enough, but typically, I have seen successful exploitation between 10% and 30% of the time.

Endpoint security (antivirus, EDR, and XDR)
Red teams employ custom tooling and obfuscation so that malicious code does not match any known signatures and looks benign or unique to slip through the cracks of EDR agents. With enough skill, endpoint agents can be undermined, highlighting to a CISO that no single control is infallible.

Network and perimeter defenses (firewalls, WAFs, and segmentation)
With the shift to cloud and remote work, traditional perimeters have become more porous. Red teams take advantage of this by attacking cloud services directly or by abusing VPN and remote access solutions, reminding CISOs that rigorous external attack surface management and patching are still crucial.

Data protection and monitoring
Many firms encrypt data on disk and rely on access controls, assuming that even if an attacker gets in, they can’t easily access the most sensitive data without keys. Red teams sometimes reveal that encryption wasn’t covering everything.
THOUGHT PIECE A CISO’S GUIDE TO RED TEAMING

Beyond technical vulnerabilities: People and process


One of the most important insights a CISO gains from red teaming is that security is not
just a technical problem—it’s a human and organizational one. While vulnerability scanners
and patch management address software flaws, red team exercises often reveal that the
weakest links lie in human behavior and process deficiencies. A comprehensive red team
doesn’t just stop at hacking computers; it will probe the awareness and reactions of people,
as well as the robustness of processes (incident response, change management, physical
security procedures, etc.).

Red teaming goes beyond finding a misconfigured server or an open port


and uncovers systemic issues such as employees being phished, IT support or
helpdesk processes being tricked, or incident response playbooks failing under
pressure. Many red teams find that they can gather a lot of information just by
calling various departments and asking innocuous questions (pretexting as an
auditor, new employee, etc.), a tactic known as elicitation. This might reveal
internal lingo, names of key staff, or even details about what software
or security measures are in place—all useful intel for further attacks.

Red teaming also shines a light on process failures and organizational silos. In a red team debrief, the timeline of “here’s when we did X, here’s when/if it was noticed, and this is how the staff responded” is incredibly valuable. It might show that the on-call process on weekends is unclear, the SOC is too understaffed to investigate every alert, or the SOC did respond but the communication to the broader team failed. These are systemic issues in incident response and crisis management that a red team helps identify without the cost of a real incident.

Red teaming outcomes often highlight the need for organizational learning and adaptability. The most mature organizations foster a culture where being “beaten” by the red team is not a failure but an opportunity to improve, akin to how regular exercise breaks down muscle fibers only to allow them to rebuild stronger. To go with the health metaphor, small, controlled doses of stress (red team drills) build the resilience “muscle” of an organization.


Leveraging red team outcomes for resilience and executive decision-making

A red team engagement is only as valuable as what an organization does with its results. For a CISO, the true deliverable of red teaming is not the successful “attack” itself but the actionable insights that emerge to strengthen security strategy, justify investments, and inform stakeholders.

Let’s look at four areas where CISOs can benefit from the immediate impacts of red teaming:

Budgeting and investment
One of the most immediate impacts of a red team report is on budgeting and project prioritization. It provides concrete evidence of where an organization is exposed, often in a storytelling format (“We were able to steal the CEO’s credentials and access sensitive M&A data because control X failed”). This can be incredibly persuasive when making the case for investments. Red team findings can also affect the strategic direction of security programs. For instance, if time and again red teams show that phishing is the entry point, a CISO might decide to shift budget into more user-focused controls like advanced phishing training, new email filtering solutions, or perhaps moving more apps to SSO with phishing-resistant MFA. Thus, a red team acts as a feedback mechanism for whether previous investments are yielding results or if new ones are required.

Board and executive reporting
Boards of directors today are acutely aware of cyber risk. Many ask management, “How do we know we’re secure? Have we tested ourselves?” A red team exercise provides a narrative that a CISO can bring to their board to answer these questions credibly. This storytelling is powerful; it avoids jargon and instead uses a plot (“The attacker tried this, then this, we caught them here, but only after they had done that”). It gives the board a clear picture of risk in context, not just theoretically. Crucially, it also highlights improvements, which shows progress and accountability. Another board-level angle is using red team results to quantify potential impact reduction. Essentially, it’s demonstrating cyber risk management in practice: find the problems, fix them, and reduce the likelihood or impact of a breach. Over time, repeated red team exercises can show a trend line, which can be translated into a risk reduction story for leadership.

Driving SOC and blue team improvement


On a more operational level, red team findings are gold for the SOC and blue team.
Every detection missed is an opportunity to create a new detection rule or refine an alert.
Many SOCs will take the indicators of compromise (IoCs) from a red team activity (specific
file hashes, command line strings, C2 domains, etc.) and retroactively check if their tools
picked them up. If not, why? Perhaps the logs weren’t there or thresholds were too high.
They then improve those. Additionally, the exercise can be used to train a blue team in a
“lessons learned” way. Some organizations even do replays or purple team sessions after
the main covert red team is done. In effect, red teaming provides a continuous training loop
for the defense team under realistic conditions.
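To make that retro-check concrete, here is a minimal sketch of replaying red team IoCs against already-collected logs. The file names and event fields are hypothetical, and most SOCs would run this inside their SIEM rather than a script, but the logic is the same: did anything we already log match the red team's artifacts without ever raising an alert?

```python
import json

# Hypothetical inputs for illustration: a red team debrief exported as an IoC
# list, plus endpoint/proxy events already collected as JSON lines.
IOC_FILE = "redteam_iocs.json"      # e.g., {"hashes": [...], "domains": [...], "commands": [...]}
LOG_FILE = "endpoint_events.jsonl"  # one JSON event per line, with an "alerted" flag

def retro_hunt(ioc_path, log_path):
    """Return events that contain a red team IoC but never triggered an alert."""
    with open(ioc_path) as f:
        iocs = [i.lower() for values in json.load(f).values() for i in values]

    missed = []
    with open(log_path) as f:
        for line in f:
            event = json.loads(line)
            haystack = json.dumps(event).lower()
            if any(ioc in haystack for ioc in iocs) and not event.get("alerted", False):
                missed.append(event)
    return missed

if __name__ == "__main__":
    gaps = retro_hunt(IOC_FILE, LOG_FILE)
    print(f"{len(gaps)} red team artifacts were logged but never alerted on")
```

Each hit is a candidate for a new detection rule or a threshold adjustment, which is exactly the feedback loop described above.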


Strategy
Regular red teaming fosters strategic cyber resilience. Resilience isn’t just about preventing
attacks; it’s about ensuring that an organization can continue to operate and quickly recover
even if an attack succeeds. Red team findings inform not just how to prevent breaches
but how to limit damage and rebound from them. By incorporating red team scenarios into
broader risk scenarios, leadership can develop a more robust risk management strategy.
Another significant advantage is tracking improvement over time. A single red team
exercise gives a snapshot; doing them regularly gives a trend. A CISO can set targets like
“By next year’s red team exercise, we aim to detect them at least at the data exfiltration
stage, not after they have simulated customer data theft like this year.” Achieving this goal
would indicate improved resilience.

Conclusion
In the complex, ever-shifting cybersecurity landscape, CISOs constantly ask:
“Are we as prepared as we think we are?” Red teaming provides a profound
and practical way to answer that question. Through the lens of simulated
adversaries, it reveals the truth about an organization’s defenses, the robust
parts as well as the weak points, in a way no theoretical analysis can.
A CISO can leverage red teaming to test assumptions, sharpen detection and
response, and ultimately drive down risk in alignment with real-world threats.
These insights galvanize holistic fixes: better training, clearer processes,
and more resilient architectures.

Red team outcomes give tangible metrics and stories that drive home the
value of security initiatives. They help answer the tough questions from
CEOs and boards like “How do we know our security investments are
working?” by demonstrating improved detection times, fewer successful
attack paths, and tested response procedures. In budgeting discussions,
instead of relying on fear, uncertainty, and doubt, CISOs can point to red
team exercises to say, “This is where we were, this is where we are now,
and here’s where we need to get to next.”

For a CISO, red teaming is an indispensable tool for achieving and demonstrating cybersecurity excellence. With the insights gained from red
teaming, and the resulting enhancements in strategy, controls, and culture,
security leaders can sleep a bit more soundly at night, and assure their
stakeholders that the organization’s digital health is continuously monitored
and improving. In the ongoing battle against cyber threats, red teaming
ensures we are fighting fit and ready for whatever comes our way. ■

CISO EXCELLENCE STORY

Securing a Leading
AI Supercomputer
Dan Maslin, Monash University

Dan Maslin is an experienced technology executive based in Australia. For the past six years he has worked at Monash University, Australia’s largest university with around 90,000 students and 20,000 staff, where he is Group Chief Information Security Officer and Head of Infrastructure Strategy.

In 2025, Monash University announced its investment in building and operating an advanced AI supercomputer to transform AI-driven research. This supercomputer is the first of its kind in Australia to utilize the NVIDIA GB200 NVL72 platform and is expected to deliver unprecedented AI capability for research in areas from cancer detection to climate action.
We sat down with Dan to learn more about this amazing project,
AI governance, and his approach to proactive security.

Can you tell us about how you’re approaching security for this new AI supercomputer?

There are so many layers to this! To start, fortunately for me, the organization has a positive security culture and typically considers cyber, privacy, and sovereignty early on in projects. As CISO, I was brought into the project very early—more than 6 months before anything became public—and was on the evaluation panel for all parts of the project. I needed to be comfortable on everything from the data center where we’d host it through to the supplier of the hardware. We landed on an arrangement with CDC as a data center and NVIDIA and Dell hardware.

I was able to query security considerations for every aspect—from physical security at the place of hosting to software and hardware supply chain assurance, the vetting of staff, and all parties’ approach to vulnerability disclosure and inclusion in bug bounty programs. Yes, that was a question they needed to respond to!

The issue of AI governance extends beyond tech into realms of compliance, operations, and brand reputation.

How are you approaching and prioritizing AI governance?

For Monash, AI governance runs even deeper. Aside from the usual corporate environment considerations around AI in operations, we also have to consider the impacts of AI on both research and education, both of which are likely to be heavily impacted in the coming years. In early 2024, Monash established an Artificial Intelligence Steering Committee, with more than a dozen members representing every corner of the university. Reporting directly to the Vice-Chancellor (the equivalent of the CEO in a corporation), the Committee exists to create a clear understanding of the risks and strategic benefits of using AI for education, research, and operations, both in the short and long term, and it oversees and informs decision-making on the use of AI across the Monash Group into the future. Monash also has a publicly published AI Readiness Framework that is fairly comprehensive and considers the people, technology, and scaling aspects, and this is where governance is situated. It includes an organization-wide agreement on responsible use principles, internal policies, the risk management approach, and tracking of the evolving legal and regulatory landscape surrounding AI. So in short, AI governance is a product of organization-wide input, reporting into the most senior level of management.

How do proactive security and offensive security testing play a role in your overall security strategy?

Offensive security testing is absolutely at the core and one of the first principles we introduced when I joined five years ago. We can’t scale to continuously proactively test our environment with our internal resources—we need a crowd. We will never have the broad and expert skills internally to deeply test and provide effective assurance across everything, from mobile apps and building management systems to corporate IT and supercomputers; we need to leverage a variety of skills available within a crowd of ethical hackers to have confidence that we can know about a vulnerability first.

I’ve always said that we can’t manage what we don’t know about, so we’re better off prioritizing the scalability and continuous visibility of our environment.

Can you highlight an initiative from your team over the past year that exemplifies excellence, innovation, and resilience?

Our team created and runs the Cyber Security Student Incubation Program, which was set up to do three things: build a reliable talent pipeline for the internal cyber security team, give students meaningful paid experience while they study, and help produce job-ready graduates who don’t need to start from scratch in the industry. We recruit five students each year and give them part-time roles (usually 2–3 days a week for a year) paid at market rate and supported by structured training and mentoring. This isn’t unpaid work experience—they’re treated as part of the team.

We see it as win-win-win. We win because we get access to new intelligent talent about to enter the market, the students win because they get real-life paid work experience for a year, and the industry wins because it gets a Monash graduate with a degree and a full year of hands-on real-life work experience. ■

HACKER THOUGHT PIECE

Will AI Replace
Security
Research?
BY FRANCOIS GAUDREAULT aka P3t3r_R4bb1t

Hi. I’m Francois, also known as P3t3r_R4bb1t. I’m a cybersecurity leader with over 15 years of experience in information security, risk management, and ethical hacking.

I’ve served as the Senior Manager of Security and Enterprise Engineering at Wayfair, and I previously held key security roles at National Bank of Canada, Videotron, and GoSecure, where I led teams, managed multimillion-dollar budgets, and developed comprehensive security programs. As a top-ranked ethical hacker on Bugcrowd (#4 out of 100,000+ active hackers), I have identified over 1,700 valid vulnerabilities across public and private programs, including U.S. Federal Government systems, while also bringing my technical expertise and leadership skills to help organizations strengthen their cybersecurity posture through strategic risk management and offensive security initiatives.

Let’s jump right into the topic of this article. AI agents and automated validators have gained traction recently in the hacking and cybersecurity space. Some self-proclaimed enterprise solutions are starting to leverage vulnerability disclosure programs (VDPs) or even private bug bounty programs to train and demonstrate full automation capabilities in AI agents.

Given my experience as both a hacker and a security leader, I’d like to share my thoughts on how AI will impact the hacking and security research space, as well as how CISOs should be approaching their offensive security testing in this new landscape.

All images in this article have been created with AI ✨



Automation isn’t new

The concept of leveraging scripts, workflows, and automation is not new in the bug bounty world. These approaches are likely as old as the concept of bug bounty itself. Of course, the landscape has evolved quite a bit over the last 5 years, now requiring less and less human interaction. Bugs are captured by the continuous scanning of assets and pushed to queues using webhooks. Findings are validated either manually or automatically and even pushed to platforms using prewritten and heavily templated reports.

So, what does AI automation actually bring to the table? I would say it’s simply the following:

✓ Increased speed
✓ Larger asset coverage
✓ Drastically reduced complexity in tooling
✓ A basic level of thinking

In its current state, I do not believe AI has the ability to provide additional depth (i.e., critical findings related directly to business-specific contexts) or the capability to efficiently circumvent proactive controls like a web application firewall (WAF) or bot detection technologies. For instance, how would an AI react if companies were to start implementing bot prevention at scale (or more simply, just denying traffic based on the AI traffic signature) to reduce the AI’s reconnaissance capabilities? A human researcher can move around this limitation rather quickly.

AI is like a puppy
It needs training

The other key reason why I believe AI will not replace human hunters in the short term, or perhaps even the longer term, is the need for AI to be trained. Currently, that training has to come from humans proficient in prompt engineering. Today, AI systems train on public data and complementary datasets. You don’t know what you don’t know, and the same applies to AI. In other words, an AI agent doesn’t know what humans don’t tell it. Thus, I strongly believe humans will continue to have an edge and maintain some control on that front.

Such training requirements may also trigger unwanted opacity in the future of vulnerability disclosure and research. Nobody wants their job to be replaced by AI. Therefore, in an AI-dominated world where companies fight for competitive advantage, will ethical security researchers continue to disclose their vulnerabilities publicly, or will they keep these techniques or findings to themselves for an extended amount of time? If we push this thinking slightly further, will researchers sell their research to AI companies instead? Similarly, will product manufacturers or companies disclose the vulnerabilities in their assets, or will they use incredibly vague statements (some businesses are already experts at this!) in their disclosures?

These are crucial questions to ask ourselves, and I myself am puzzled. On my end, I do see a potential case where an AI-dominated market may encourage additional secrecy, persuading bug bounty researchers or even AI companies to keep their edge in a highly competitive space.


Cost and architecture

Another interesting angle that could generate additional discussion and research is AI automation costs and architecture.

I discussed this before, but I personally tend to hunt manually. I limit myself to the bare-minimum tooling and automation. This strategy obviously can’t scale to a larger scope and to multiple programs at the same time. This is an area where AI agents may drastically outpace researchers. But at what cost? And what do the architectures of these solutions look like?

While AI automation may revolutionize bug bounty research at scale, the economic reality reveals hidden costs that extend far beyond simple model usage fees. An AI system capable of meaningful vulnerability discovery across multiple programs requires sophisticated infrastructure orchestrating reconnaissance engines, specialized AI models, validation pipelines, evasion mechanisms, and continuous monitoring systems. Each component demands significant computational resources, storage capacity, and operational expertise to maintain effectiveness while avoiding detection by increasingly sophisticated bot-prevention systems.

Architectural complexity grows exponentially when you take into account the need for distributed scanning, real-time data processing, model retraining, and compliance monitoring across diverse program requirements.

Complementary rather than competitive

From a leading bug bounty researcher’s point of view, AI-based automation should be able to drastically speed up bug hunting processes, help with reconnaissance on large scopes, highlight interesting aspects of a target, help pinpoint low-hanging fruit, and even submit issues to programs automatically. AI excels at processing vast amounts of data quickly, identifying patterns across extensive attack surfaces, and performing repetitive tasks that would consume significant human effort and time.

However, I personally see AI automation as far more relevant to enterprise attack surface monitoring solutions. These organizations have complex digital footprints that can benefit from AI systems that continuously scan, catalog, and assess their assets for potential vulnerabilities in real time.


Why CISOs should care

CISOs should be concerned about the use of AI in cybersecurity primarily due to the significant increase in speed and efficiency it offers threat actors. While the fundamental nature of cyber threats hasn’t changed, AI’s automation capabilities mean that vulnerabilities, especially low-hanging fruit on a perimeter, can be discovered and exploited far more rapidly than ever before. This acceleration could allow attackers to quickly pull in zero-day exploits through systematic testing. Although AI may struggle with complex business logic flaws or tricky injection attacks, its ability to quickly find and leverage simpler vulnerabilities still poses a substantial risk that security leaders cannot ignore. Ultimately, it’s the unprecedented speed of both detection and exploitation that makes AI a critical concern for modern CISOs.

An unknown future

Using today’s technology, I do not see this level of automation going as deep into a system as a human researcher would, and AI likely won’t be able to find unique business-context vulnerabilities. Human researchers bring critical thinking, creativity, and contextual understanding that AI currently lacks. Researchers can identify logic flaws specific to business workflows, understand the nuanced implications of seemingly minor issues, and chain together multiple small vulnerabilities to yield significant security impacts. The most sophisticated vulnerabilities often require understanding not just technical implementation but also business logic, user behavior patterns, and organizational context. Only human intuition and experience can provide this level of understanding.

However, no one can really predict if a breakthrough will be made to significantly boost AI’s capabilities. As AI becomes more and more sophisticated and capable of contextual reasoning, this gap might narrow. With all that said, I remain confident that humans will continue to have a place of choice in the bug bounty (or even the cybersecurity) ecosystem, with the future likely showing a complementary relationship; AI will handle the breadth while humans will provide the depth and creative problem-solving that high-value, complex vulnerabilities demand. One thing is for sure: no one really knows how AI will effectively change the paradigm in the cybersecurity space. Only time will tell. ■



ARTICLE

From Assets to Action

Operationalizing Attack
Surface Intelligence
Managing today’s attack surface feels like a never-ending
game of whack-a-mole—just as you get a handle on the
current landscape, something changes, whether it’s a new asset,
attack vector, or vulnerability. As a result, security teams find
themselves constantly reacting rather than staying ahead, which
creates blind spots that attackers can exploit.

To proactively safeguard their assets, many organizations turn to external attack surface management (EASM) to improve visibility. However, these tools operate in isolation from offensive testing workflows, which usually have different logins and reporting structures. The result? Critical intelligence sits idly in the EASM tool, disconnected from remediation efforts.

To help CISOs truly reduce risk, security teams must integrate EASM intelligence into their offensive testing platforms so there’s a direct path from discovery to remediation.


The disconnected state of security tooling
As organizations scale, their attack surfaces become increasingly complex to manage. Development teams are constantly deploying new infrastructure, like cloud services, APIs, and proprietary LLMs, creating a dynamic environment that’s nearly impossible to track in real time. This is further exacerbated by the rise of third-party integrations and shadow IT, which expand attack surfaces unpredictably.

But visibility is just one part of the equation—CISOs must also be able to prioritize assets based on business risk, ensuring resources are focused where they matter most. This means having accurate, up-to-date intelligence on each asset: exposure status, environment, criticality, and any validated vulnerabilities.

Most security teams try to fill this gap themselves, using EASMs with some combination of spreadsheets, open-source tools, and internal systems. Each solution has its own login, workflow, and data model, creating a patchwork approach that leads to stale data, duplicated effort, and inconsistent context across tools—slowing down remediation and increasing exposure risk.

The case for integrating EASMs with offensive testing

To bridge the gap between discovery and action, security teams should integrate their EASMs with offensive testing workflows. This creates an automated pipeline where newly discovered intelligence is immediately prioritized and validated through offensive testing methods like bug bounties, red team engagements, and pen testing. The result: teams respond to threats as quickly as they emerge and continue to stay one step ahead of attackers.

For example, when an EASM identifies a new subdomain with an exposed admin panel, teams can immediately scope targeted testing through an integrated platform to determine if it’s exploitable and what data is at risk—fully leveraging their attack intelligence for swift remediation.

At a more strategic level, integrating these workflows fundamentally shifts security operations from reactive firefighting to intelligence-led decision-making. Security teams not only become more efficient in their daily tasks but can proactively and confidently prioritize vulnerabilities based on real-time, actionable insights—leading to smarter, faster, and more informed security strategies.
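As a rough illustration of what that pipeline can look like in practice, the sketch below wires a discovery webhook to a triage step that queues targeted testing. Everything here is hypothetical: the payload fields, the keyword triage, and the scope_targeted_test placeholder are assumptions made for the example, not any specific EASM vendor's or testing platform's API.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical sketch: payload shape, keywords, and the hand-off function are
# placeholders, not a real EASM or offensive testing platform API.
HIGH_RISK_HINTS = ("admin", "login", "vpn", "jenkins", "staging")

def scope_targeted_test(asset):
    """Placeholder hand-off to an offensive testing workflow
    (bug bounty scope update, pen test ticket, red team tasking, etc.)."""
    print(f"Queuing targeted testing for {asset.get('hostname')} "
          f"(criticality: {asset.get('criticality', 'unknown')})")

class EASMWebhook(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        asset = json.loads(self.rfile.read(length) or b"{}")
        # Simple triage: anything that looks like an exposed management surface
        # goes straight from discovery to validation.
        hostname = asset.get("hostname", "")
        if any(hint in hostname for hint in HIGH_RISK_HINTS):
            scope_targeted_test(asset)
        self.send_response(204)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EASMWebhook).serve_forever()
```

The point is not the plumbing but the contract: discovery events flow automatically into the same place where testing, prioritization, and remediation already happen.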


The bottom line

As the damage from cybercrimes increases rapidly, CISOs can’t afford to wait weeks or months to act on their attack surface intelligence.

By adopting this integrated approach, CISOs demonstrate measurable improvements in security efficiency and faster remediation cycles, which enable them to prove the value and outcomes of a security program to external stakeholders.

Asset View

Bugcrowd’s Asset View tool can help you build these systems inside the Bugcrowd platform.

LEARN MORE


Conclusion
If you’re a CISO, think back to what some of your early jobs in security looked like. Chances are that the space is now unrecognizable. Perhaps you remember receiving your patches in a folder filled with floppy discs, and phrases like “artificial intelligence” felt like they belonged in The Matrix, not the office.

And now here we are. We’re not just at the precipice of change in this new AI landscape; we’ve jumped. The question is, do you have a parachute that you can dependably deploy, allowing you to land safely?

At Bugcrowd, we’re doing a lot with AI, but we don’t believe it’s the silver bullet that can solve every CISO problem. As a leader in the offensive security testing space, it’s our responsibility to use critical judgment, embrace AI with caution, and most importantly, share our knowledge with the community.

In this edition of Inside the Mind of a CISO, we covered some of the biggest priorities and pain points for security leaders. As we wrap up, let’s look at three ways Bugcrowd can help CISOs achieve greater security resilience.

Three ways Bugcrowd can help

1. We give you the gift of objective feedback
You’ve likely heard the question, “What keeps you up at night as a CISO?” The answer is simple—it’s the unknown. Ultimately, we all need a way to objectively measure security outcomes. If you’re vulnerable, you would want to know about it. By partnering with Bugcrowd, CISOs can lean on the expertise of a global hacking community to help them find and fix vulnerabilities faster. The Crowd offers continuous testing from experts with a massive range of specialties and skill sets. When CISOs tap into the Crowd for their insights, they’re not just accessing increased security resiliency; they’re accessing peace of mind.

2. We orchestrate the balance between AI and the Crowd
CISOs shouldn’t be expected to keep up with every nuance of where the Crowd ends and AI begins—at this point, the goal posts are moving too quickly. Bugcrowd is here to cut through the complexity. Using our security expertise, we make sensible decisions about where the adoption of AI models makes sense and where human ingenuity is still king. For over a decade, Bugcrowd has helped organizations know the right levers to pull in their security programs at the right time to find and fix unknown vulnerabilities faster. AI is simply another powerful lever we pull for our customers, bringing the best outcomes possible.

3. We demonstrate true impact so that you can take informed actions
CISOs are in the business of putting out fires all day, every day. The noise is constant, and the “what ifs” never end. Bugcrowd can provide clear visibility into your attack surface, simplifying prioritization so you know where to focus first. We also give you the ability to take action right in the Platform. For those ready to take their security testing to the next level, they can kick off world-class red teaming engagements with Bugcrowd. Red teaming measures the true impact of a potential breach. For CISOs, red teaming provides an unvarnished view of how their organization stands up to modern threats and where strategic reinforcements are needed. ■
