Embrace the Vibe: Governing the Rise of Vibe Coding

Embrace the Vibe: Governing the Rise of Vibe Coding

Developers are nothing if not resourceful. If something makes their lives easier, they will absolutely, without a doubt, 100% of the time, will find a way to elegantly work around your firm ‘no’.  

And right now, that something is using AI to write their code.  

It’s not just a hunch. We’ve got the stats to back up this claim: our recent Future of AppSec Report found that one in five (20%) developers admit to using AI tools even where they’re banned. 

If local news were covering AppSec right now, it would play out as a full-blown moral panic piece: 

"Do you know what your developers are really up to after hours?  They’re not at football practice. They’re not at the library. They’re out there… getting high on AI.  The youngsters are calling this dangerous new fad - ‘Vibe Coding.’”  

Luckily, we’re not in the “scare you straight” business, and there’s no cause for alarm. 

But the stats surface the inevitable dilemma: You can try to fight against it… or you can be the cool parent and create a supportive environment for your developers to practice AI-coding safely. 

Because like it or not, AI-assisted coding is already happening. The only question is whether you’re governing it or letting it run wild in the shadows.  

That’s what we’re covering in our new report “Keeping Bad Vibes Out: AppSec in the Age of AI-Assisted Coding”. The report builds on the data from our recent Future of AppSec survey of 1,500 industry insiders – CISOs, AppSec and dev leaders – to provide a deep analysis of how AI adoption is affecting code security.  

 The New Reality of AI-Assisted Coding 

Everyone saw it coming, but still, the adoption wave is bigger than most enterprise dev and AppSec leaders realize: 

  • Only 63% of organizations officially allow AI-assisted coding 
  • 20% forbid it, but it’s happening anyway 
  • 54% of all code written in the past year was AI-generated 
  • 78% of organizations say more than 40% of their codebase is AI-written (up from 58% in 2024) 
  • 9% report that 80–100% of their code is now AI-generated 

In fact, it’s projected by Webhosting.Today that by 2030, 95% of code will be AI-generated. 

Article content

And the security debt is mounting accordingly: 

  • 96% of orgs knowingly shipped vulnerable code to meet deadlines 
  • 98% suffered at least one application-layer breach 
  • Teams using 80–100% AI-generated code shipped more vulnerabilities and suffered more breaches 

The message is clear: AI-assisted coding is mainstream. Shadow AI is everywhere. AppSec is straining to keep up, and we’re not doing it any favors by banning AI instead of facilitating it with clear policies, guardrails and education for safe practices. 

 

Article content
Article content

AI-powered software development has reached its inflexion point. Developer AI has progressed from code assistants suggesting solutions for developers to copy/paste, to fully integrated developer environments like Cursor and Base44 that translate natural language prompts into code, implement next-step edits, and carry out contextual bug fixes. 

The report highlights a fundamental shift in the developer role: 

  • Yesterday’s developer wrote every line, debugged deeply, and knew their codebase intimately. 
  • Today’s developer prompts, curates, and edits AI-generated suggestions. 

For senior developers, this shift is empowering. They use AI to offload grunt work, accelerate delivery, and spend more time architecting. 

For junior developers, though, there’s a risk of skills erosion. If they grow up curating instead of coding, they may never acquire deep expertise in codebases, security fundamentals, or complex debugging. 

And for organizations, that creates a double challenge: 

  1. Security blind spots as devs accept insecure AI output at face value. 
  2. Talent gaps as traditional coding skills atrophy in the workforce. 

How do you embrace the “vibe” without letting the whole system unravel? 

It takes more than policy memos and trust falls. To balance the productivity advantages of AI-assisted coding with the reality of heightened security risk, all three corners of the AppSec triangle—CISOs, AppSec leaders, and developers—need to align. Each has a role to play in governing AI adoption safely, effectively, and sustainably.  

Our new report breaks it down with specific, actionable recommendations for every stakeholder.   

Article content

Read the full report for more insights 

Devs will vibe-code, with or without you. Better make sure they do it with you, by building the framework to facilitate AI-assisted coding practices.

Read the full report to learn more fresh stats, insights, and recommendations every organization needs to vibe securely

Download the Report 

On the Same Note 

If developers are going to use AI to write code (and they are), security leaders need to understand what happens when those same AI tools are asked to review code. 

Our Checkmarx Zero team recently put Claude Code’s /security-review through its paces — and found it could be tricked into approving unsafe code. 

Article content

The takeaway: AI can assist, but it’s not a substitute for real AppSec rigor. 


The Monthly Check Up will be back next month with more insights. 

Until then—build fast, govern smart, and keep the bad vibes out. 

  Check you later, 

— The Checkmarx Team 

 

Dare Aimuyedo

Cybersecurity Engineer| l.T Support Engineer| Network & System Administrator| IT Operations Manager| Cloud Support Engineer| Virtual Assistant| Application Security Analyst| Purple Team| DevSecOps Engineer

4w

Informative and insightful!

To view or add a comment, sign in

Explore content categories