Is Security Losing the Code War?

This is the question that arises from reading Checkmarx’s newly released Future of Application Security in the AI Era report.

Whichever way you lean, it leads to an inevitable conclusion: 2025 may go down as the year that changed AppSec; whether for better or worse remains to be seen.

And yes, at the risk of sounding predictable, it has something to do with AI.

The report’s central thesis is clear: AI is accelerating software development faster than security can react. But AI isn’t the cause of the problem; it’s the catalyst. The real issue is how organizations are choosing to absorb AI’s risk without adapting their governance.

Many are knowingly treating that risk as the cost of velocity. 

Drawing on insights from 1,500 AppSec leaders, CISOs, and developers globally, the report reveals a troubling trend: 

⚠️ 81% admit to knowingly shipping vulnerable code just to meet delivery deadlines. This isn’t a failure of awareness; it’s an accepted trade-off.

And that’s just one signal in a broader breakdown of AppSec oversight: 

  • 34% say over 60% of their codebase is now AI-generated. 
  • Only 18% have formal policies for governing AI tool usage. 
  • 20% admit AI tools are used without approval (Shadow AI). 
  • 98% experienced at least one security breach in the past year, a three-point increase over 2024’s already high 95%. And 27% reported four or more breaches, up sharply from 16% in 2024, pointing to compounding risk.


These numbers paint a stark picture: AppSec programs haven’t just fallen behind. In many cases, they’ve been sidelined to make room for AI-fueled velocity.

But AI-generated code isn’t only a risk multiplier. It’s also an independent risk producer and a new risk surface.

Earlier this month, reality delivered a reminder of how fast this risk is evolving.

A critical vulnerability in Cursor was exposed, allowing attackers to execute remote code simply through a crafted prompt. The flaw didn’t reside in code written by AI. It lived inside the tool itself. 

AI isn’t just expanding the threat landscape. It’s completely terraforming it.  


These aren’t theoretical risks. AI-powered development is already producing a new class of vulnerabilities that are pressure-testing organizations’ AppSec readiness: 

  • Prompt injection – malicious instructions embedded in prompts that cause AI to output insecure code. The recent Cursor vulnerability is a striking example, where a crafted prompt didn’t just produce insecure code, but compromised the tool itself, enabling remote code execution. 
  • Package hallucination – AI suggesting non-existent or malicious packages that developers then import. Attackers can exploit this by publishing lookalike malicious packages under those hallucinated names, creating an instant supply-chain compromise when developers install them (see the sketch after this list).
  • Data/model poisoning – corrupting AI training or fine-tuning data to embed vulnerabilities in generated code. Once poisoned, every developer relying on that model could unknowingly introduce the same exploitable flaw across multiple applications. 
  • Model jailbreaking – bypassing AI safety controls to generate unauthorized or insecure outputs. 
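
To make the package-hallucination risk concrete, here is a minimal sketch (ours, not the report’s) of a guardrail a team could run before installing an AI-suggested dependency: confirm the name actually resolves on the public PyPI index, and cross-check it against an internal allowlist. The VETTED set and the command-line shape are hypothetical illustrations; the PyPI JSON endpoint is real.

```python
import sys
import urllib.error
import urllib.request

# Hypothetical allowlist of packages your organization has already vetted.
VETTED = {"requests", "numpy", "pydantic"}

def exists_on_pypi(name: str) -> bool:
    """Return True if the package name resolves on the public PyPI index."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # e.g. 404: the name does not exist, a possible hallucination
        return False

def vet(name: str) -> str:
    if name in VETTED:
        return "vetted: approved for use"
    if not exists_on_pypi(name):
        return "does not exist on PyPI: likely hallucinated, do not install"
    return "exists but unvetted: review before installing"

if __name__ == "__main__":
    for pkg in sys.argv[1:]:
        print(f"{pkg}: {vet(pkg)}")
```

Note the limit of the check: a name that resolves on PyPI is no guarantee of safety, since attackers deliberately register lookalike packages under commonly hallucinated names. That is why the allowlist comes first, and mere existence only downgrades a suggestion to "review" rather than clearing it.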

Taken together, these threats underscore the shift: AI-powered development isn’t just accelerating existing vulnerabilities; it’s creating entirely new attack surfaces that many organizations aren’t yet prepared to defend.

Traditional AppSec controls were never designed for adversaries who can weaponize the development tools themselves. Closing that gap means rethinking governance, embedding security into the developer workflow, and scaling defenses at the same pace as AI-driven code creation. 

Beyond AI: The Broader Picture of AppSec Readiness 

While AI takes center stage, the report also uncovers broader structural gaps across the AppSec ecosystem: 

  • Tooling adoption remains inconsistent: Fewer than half of organizations use foundational tools like DAST, IaC scanning, or container security in active workflows. 
  • DevSecOps remains aspirational for many: Security and development often operate in silos, slowing remediation and weakening accountability. 
  • Security maturity is self-reported, but rarely operationalized: Many teams believe they’re above average, yet lack unified visibility, developer alignment, clear governance, or real-time mitigation strategies.

The takeaway? Awareness is high, but operational readiness hasn’t caught up. Addressing that requires more than tools. It demands better governance, cross-functional alignment, and embedded developer enablement.


Developer Recommendations for the AI Era   

Drawing on findings from the report, these five immediate actions can help secure AI-powered software development: 

  1. Integrate real-time SAST and dependency scanning directly into the IDE – Ensure vulnerabilities and policy violations are detected as code is written, including AI-generated snippets, to prevent risky code from ever reaching a commit (a minimal gate sketch follows this list).
  2. Maintain a vetted AI tool list and audit usage regularly – Implement governance to approve safe AI coding assistants and detect unauthorized “Shadow AI” tools through telemetry and usage monitoring. 
  3. Train developers on AI-specific threats – Cover risks such as prompt injection (as seen in the Cursor exploit), package hallucination with supply-chain implications, and data/model poisoning that can seed flaws across multiple projects. 
  4. Adopt ASPM (Application Security Posture Management) platforms – Consolidate SAST, SCA, IaC, and runtime insights into a single, code-to-cloud governance layer, enabling teams to prioritize and remediate vulnerabilities in context and at scale. 
  5. Leverage agentic AI for security automation – Deploy AI agents capable of autonomously detecting, triaging, and even remediating vulnerabilities in real time, keeping security velocity in lockstep with AI-accelerated development.  
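
To ground recommendation 1, here is a minimal sketch of a commit-time gate, assuming the open-source scanners semgrep and pip-audit are installed and on PATH. Both tools exist, but treat the exact flags as assumptions to verify against the versions you run, and swap in your organization’s approved scanners as needed.

```python
#!/usr/bin/env python3
"""Pre-commit gate: block the commit when security scanners report findings.

A sketch of recommendation 1, wired as a git pre-commit hook
(save as .git/hooks/pre-commit and mark it executable).
"""
import subprocess
import sys

# Each check: (label, command). Both tools are expected to exit non-zero
# when they find issues, which is what the gate keys on.
CHECKS = [
    # --error makes semgrep exit non-zero on findings (verify per version).
    ("static analysis", ["semgrep", "scan", "--config", "auto", "--error", "--quiet"]),
    # pip-audit exits non-zero when vulnerable dependencies are found.
    ("dependency audit", ["pip-audit", "-r", "requirements.txt"]),
]

def main() -> int:
    failed = []
    for label, cmd in CHECKS:
        print(f"running {label}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            failed.append(label)
    if failed:
        print(f"commit blocked, fix findings from: {', '.join(failed)}",
              file=sys.stderr)
        return 1
    print("all checks passed")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The same script can run in CI as a backstop, so code that bypasses local hooks is still caught before merge; the IDE-integrated scanning the report recommends then becomes the fast inner loop, with this gate as the enforcement layer.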

Get the Full Picture 

Download the full Future of Application Security in the AI Era report to benchmark where your organization stands—and what steps to take next. 


Thanks for checking in with The Monthly CheckUp. In our next edition, we’ll bring you more crisp insights from across the AppSec landscape. Until then, build fast, scan deep, and ship clean!

Check you later,

The Checkmarx Team
