Appsurent Cyber Security

Computer and Network Security

Toronto, Ontario · 68 followers

We find what automated scanners and AI tools miss. Trusted by government, fintech, and critical infrastructure in Canada.

About us

We help Canadian organizations protect their most critical applications and AI systems. Our team delivers manual, adversary-driven security testing that uncovers risks before attackers exploit them. We find what automated scanners and AI tools miss.

Trusted by government, fintech, and critical infrastructure teams across Canada, we bring 35+ years of penetration testing expertise together with specialized knowledge in securing AI/ML systems. Whether you're building modern web and mobile apps, rolling out APIs, or integrating AI into your business, we ensure your technology is safe, resilient, and compliant.

Our services include:
🔹 Application Security – Web, mobile, API, and cloud penetration testing to protect your apps from real-world attackers.
🔹 AI/ML Security – Assessments of LLMs, AI agents, RAG pipelines, and MLOps environments to safeguard next-gen AI systems.

100% Canadian-based, fully certified (OSCP, OSCE), and proven across government and industry. We help give you the confidence that your applications and AI systems are secure.

Website
https://www.appsurent.com/
Industry
Computer and Network Security
Company size
2-10 employees
Headquarters
Toronto, Ontario
Type
Privately Held
Founded
2020
Specialties
Application Security, AI Security, Web Application Security, Penetration Testing, API Security, Mobile Application Security, Cloud Infrastructure Security, MLOps Security, Adversarial Security Assessments, Threat Modeling, Secure Architecture Reviews, AI/ML Risk Assessments, AI Red Teaming, Cybersecurity Consulting, LLM Security Testing, and Agent Security

Updates

  • We've put together a comprehensive guide covering everything you need to know about web application penetration testing:
    → What penetration testing actually involves (and why scanning and automation aren't enough).
    → Common vulnerabilities threatening modern applications.
    → Our 5-phase web app testing process, from scoping to remediation.
    → How to evaluate security providers.
    → Testing frequency, along with web app testing vs. other security measures.
    → Modern web app security challenges, and more.
    Whether you're testing for the first time or switching providers, this guide helps you make informed decisions about protecting your business. https://lnkd.in/g3u-G9hj

  • 𝗛𝗼𝘄 𝗱𝗼 𝘆𝗼𝘂 𝗸𝗻𝗼𝘄 𝘄𝗵𝗲𝗻 𝗮 𝘃𝗲𝗻𝗱𝗼𝗿 𝗷𝘂𝘀𝘁 𝗿𝗮𝗻 𝗮 𝘀𝗰𝗮𝗻? Here are a few telltale signs:
    𝟭. 𝗧𝗵𝗲 𝗮𝗽𝗽 𝗶𝘀 𝗳𝘂𝗹𝗹 𝗼𝗳 𝗜𝗗𝗢𝗥𝘀 - Insecure Direct Object References that any junior tester would catch via manual testing. If the app is full of IDORs, it means that no one was 𝘢𝘤𝘵𝘶𝘢𝘭𝘭𝘺 𝘵𝘦𝘴𝘵𝘪𝘯𝘨 your application (a short sketch of this pattern follows this post).
    𝟮. 𝗥𝗲𝘀𝗽𝗼𝗻𝘀𝗲 𝗽𝗮𝗴𝗲𝘀 𝗹𝗲𝗮𝗸 𝗹𝗶𝗸𝗲 𝗮 𝘀𝗶𝗲𝘃𝗲 - the application returns far more data than the frontend needs. It may not seem important, but more often than not, critical vulnerabilities are discovered by stringing together those breadcrumbs and manipulating application logic.
    𝟯. 𝗠𝗶𝘀𝘀𝗶𝗻𝗴 𝘁𝗵𝗲 𝗼𝗯𝘃𝗶𝗼𝘂𝘀 𝗯𝘂𝘀𝗶𝗻𝗲𝘀𝘀 𝗹𝗼𝗴𝗶𝗰 𝗳𝗹𝗮𝘄𝘀 - automated tools can't understand business logic, so if the application is full of business logic vulnerabilities after it's supposedly been “tested”, it probably wasn't really looked at the first time.
    Good web application security testing requires 𝗵𝘂𝗺𝗮𝗻 𝗶𝗻𝘀𝗶𝗴𝗵𝘁, not just running tools and generating reports. While AI tools can certainly augment the work, they still require a human to guide and verify the effort, because pattern-matching cannot supply context. Your applications deserve more than checkbox security.
    #CyberSecurity #ApplicationSecurity #PenetrationTesting #InfoSec
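A minimal sketch of the IDOR pattern from point 1, using a hypothetical Flask endpoint; the route, toy data store, and field names are invented for illustration only:

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Toy data store standing in for a real database.
INVOICES = {
    1: {"owner_id": "alice", "amount": 120},
    2: {"owner_id": "bob", "amount": 980},
}

@app.route("/api/invoices/<int:invoice_id>")
def get_invoice(invoice_id):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        abort(404)
    # VULNERABLE: nothing ties the requested ID to the logged-in user, so any
    # authenticated user can read any invoice just by changing the number in
    # the URL. A scanner sees a normal 200 response; a human tester swaps IDs
    # and immediately notices they are reading someone else's data.
    return jsonify(invoice)

# The fix is a one-line ownership check before returning the record, e.g.:
#     if invoice["owner_id"] != current_user_id():
#         abort(403)
```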

  • 𝗖𝗼𝗺𝗺𝗼𝗻 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗩𝘂𝗹𝗻𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀 𝗶𝗻 𝗩𝗶𝗯𝗲-𝗖𝗼𝗱𝗲𝗱 𝗔𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀 - 𝗡𝗼𝘁𝗲𝘀 𝗙𝗿𝗼𝗺 𝗧𝗵𝗲 𝗙𝗶𝗲𝗹𝗱
    With more and more "vibe-coded" applications making their way into the wild, we're noticing some common patterns in the types of vulnerabilities that are emerging. It's important to remember that producing a slick MVP for a demo is very different from a production-ready, robust, and secure codebase. Here are four of the major vulnerability trends we're seeing.
    𝗖𝗹𝗶𝗲𝗻𝘁 𝗦𝗶𝗱𝗲 𝗣𝗲𝗿𝗺𝗶𝘀𝘀𝗶𝗼𝗻 𝗖𝗵𝗲𝗰𝗸𝘀 - LLM-generated code often prefers client-side authorization and permission checks, frequently leaving backend endpoints unprotected. As a bonus, SSRF filters and defences are often implemented client-side as well (and so are trivially bypassable).
    𝗛𝗶𝗱𝗱𝗲𝗻 𝗙𝗹𝗮𝗴𝘀 𝗮𝗻𝗱 𝗙𝘂𝗻𝗰𝘁𝗶𝗼𝗻𝘀 - Occasionally LLMs will insert hidden flags such as debug or admin, allowing an authentication bypass. When doing assessments, make sure to use the Param Miner extension in Burp (or the equivalent in your tooling of choice).
    𝗛𝗮𝗿𝗱-𝗰𝗼𝗱𝗲𝗱 𝗦𝗲𝗰𝗿𝗲𝘁𝘀 - These show up both in commits and in generated client-side code (though in some cases they are hallucinated rather than the actual secret).
    𝗖𝗹𝗮𝘀𝘀𝗶𝗰 𝗝𝗪𝗧 𝗩𝘂𝗹𝗻𝗲𝗿𝗮𝗯𝗶𝗹𝗶𝘁𝗶𝗲𝘀 - LLMs seem to prefer to emit their own JWT handling code rather than relying on established libraries, so many of the classic JWT processing vulnerabilities are reappearing: failure to check the signature, the classic alg:none bypass, algorithm and public/private key confusion, and claims modification. In many ways it is a return to vulnerabilities that used to be commonplace (see the sketch after this post).
    After years of testing applications and reviewing code, there's definitely a particular feel or "code smell" to applications generated by LLMs. An uncanny valley of software development, like how AI-generated eyes in images or videos always look a little off. It's no surprise that "vibe-coding cleanup specialist" is an emerging profession on LinkedIn.
    And with that, if you need help securing your vibe-coded applications, please reach out to us; we'd be happy to help.
    #ApplicationSecurity #AppSec #PenetrationTesting #AISecurity
    (And stay tuned - next week we'll cover approaches to address each of these)
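On the JWT point, a hedged sketch of the hand-rolled pattern we keep seeing versus the library-backed alternative; PyJWT is shown, and the secret and algorithm choice are illustrative only:

```python
import jwt  # PyJWT

SECRET_KEY = "change-me"  # illustrative only; real secrets belong in a secrets manager

# The hand-rolled pattern LLM-generated code often produces: the token is
# decoded but the signature is never verified, so anyone can mint their own
# claims (or strip the signature entirely and rely on alg:none acceptance).
def insecure_claims(token: str) -> dict:
    return jwt.decode(token, options={"verify_signature": False})

# The boring, library-backed version: verify the signature and pin the
# algorithm, which closes off alg:none and algorithm-confusion tricks.
def secure_claims(token: str) -> dict:
    return jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
```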

  • 𝗪𝗲𝗹𝗰𝗼𝗺𝗲 𝘁𝗼 𝗔𝗽𝗽𝘀𝘂𝗿𝗲𝗻𝘁 𝗖𝘆𝗯𝗲𝗿 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 | 𝗣𝗿𝗼𝘂𝗱𝗹𝘆 𝗖𝗮𝗻𝗮𝗱𝗶𝗮𝗻, 𝗠𝗮𝗻𝘂𝗮𝗹-𝗙𝗶𝗿𝘀𝘁 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗧𝗲𝘀𝘁𝗶𝗻𝗴
    For years, we've helped protect Canada's critical infrastructure, financial systems, and government applications. Now we're stepping out of the shadows and ramping up our presence here on LinkedIn to share what we've learned.
    At Appsurent, we specialize in manual, adversary-driven security testing that uncovers what automated scanners and AI tools miss. While much of the industry leans on automation, we focus on what machines can't do: understanding context, business logic, and the creative ways real attackers think.
    What makes us different:
    ✔ Every engagement delivered hands-on by senior practitioners
    ✔ 100% Canadian team, 100% senior practitioners leading, always manual analysis
    ✔ Real adversary-driven assessments that deliver clarity and confidence
    We'll share insights on application security, AI/LLM security, and the evolving threat landscape. We're not here to sell fear or compliance checkboxes. We're here to share what actually works in defending modern applications against real adversaries.
    🔍 Follow us for insights from the front lines of application and AI security, or reach out anytime for help securing your applications.
    #ApplicationSecurity #CanadianCyber #PenetrationTesting
    🇨🇦 Proudly Canadian-owned | OSCP, OSCE, CISSP | Founded by 15+ year industry veterans
    🔗 https://www.appsurent.com

  • One of our principals shares some quick takeaways on AI security from Black Hat / DEF CON 2025.

    Jamie B.
    Helping Organizations Secure Web, Mobile, API & AI Applications | 15+ Yrs Pen Testing | Principal @ Appsurent | 🇨🇦

    My quick takeaways on AI security from Black Hat and DEF CON this year:
    • Promptware. The latest security buzzword (which I do actually like). From emails and meeting invites to git change logs and issues, the vectors for indirect prompt injection are only increasing. Ultimately, if the LLM ingests it, it's fair game for potential prompt injection.
    • Exploitation payloads are increasingly sophisticated, from accessing connected resources like shared drives, to establishing persistence via memory, to even accessing connected home automation devices.
    • Successful prompt injections / jailbreaks are often released within hours of a new model release.
    • MCP (Model Context Protocol) servers, while very cool for closed experiments, are not ready for production.
    • Model capability seems to be plateauing in a number of cybersecurity domains. As the recent ChatGPT release shows, we're entering a phase of increasingly smaller incremental gains rather than exponential leaps (unless there is another fundamental architectural change).
    • The lexicon seems to have shifted to agents, as if that somehow addresses the core issues with LLMs. An agent is an LLM at its core, and for LLMs using the transformer architecture, prompt injection is not solved and likely not solvable without architectural change. As it stands there's no separation between the data and control planes, the input space is unlimited, and the problem only gets worse as models get bigger. If that's not enough, it's non-deterministic, so an attacker can keep trying until the probabilistic token predictor rolls in their favour. And conversely, you can test it with the same malicious input 99 times and have it fail on the 100th (a small rate-based testing sketch follows this post). This needs repeating, because I don't think there's enough awareness of the unique security challenges that non-deterministic systems pose. In web terms, think of it as having an unfixable SQL injection in a core business application that only shows up randomly.
    What can you do? Treat your LLM deployment like that old unpatchable Windows XP host vulnerable to MS08-067. Assume it's untrusted, protect the boundaries, have the LLM emit code in a sandbox and verify that code rather than providing direct tool/resource access, and whatever you do, don't give it access to anything you don't want disclosed. Often all an attacker has to do is ask nicely - "My grandmother used to..." (or use emojis, or add spaces between letters, or Unicode characters, or a less common language...).
    Some recommended AI security presentations from the main DEF CON tracks (the AI Security Village talks were not out yet):
    • Ben Nassi, Or Yair, Stav Cohen - Invitation Is All You Need: Invoking Gemini for Workspace Agents with a Simple Google Calendar Invite
    • Tobias Diehl - Mind the Data Voids: Hijacking Copilot Trust to Deliver C2 Instructions with Microsoft Authority
    • Keane Lucas - Claude: Climbing a CTF Scoreboard Near You
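On the non-determinism point, a minimal sketch of what rate-based adversarial testing can look like; call_model and the leak check are placeholders for whatever harness and detection rule you already use:

```python
from collections import Counter

def injection_success_rate(call_model, payload: str, trials: int = 50) -> float:
    """Replay one adversarial payload many times and report how often it lands.

    Because LLM output is non-deterministic, a single pass/fail run says very
    little; a success rate over many trials is a more honest measure.
    """
    outcomes = Counter()
    for _ in range(trials):
        response = call_model(payload)          # your model/API harness goes here
        leaked = "INTERNAL-ONLY" in response    # placeholder detection check
        outcomes["leaked" if leaked else "held"] += 1
    return outcomes["leaked"] / trials

# A guardrail that holds 99 times out of 100 still gives an attacker who can
# retry freely a working exploit; report rates, not single-run verdicts.
```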

  • The term “AI security” gets thrown around a lot, but what does it actually mean when you're responsible for keeping real systems secure? Our Principal Consultant, Jamie Baxter, breaks down three of the most persistent myths we've seen when assessing modern AI-integrated applications and explains why some traditional assumptions don't hold up.


    𝗧𝗼𝗽 𝟯 𝗠𝘆𝘁𝗵𝘀 𝗔𝗯𝗼𝘂𝘁 𝗦𝗲𝗰𝘂𝗿𝗶𝗻𝗴 𝗔𝗜 𝗔𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀 (Well, my top 3!)
    1️⃣ “𝗧𝗵𝗲 𝗺𝗼𝗱𝗲𝗹 𝗶𝘀 𝘀𝗲𝗰𝘂𝗿𝗲, 𝘀𝗼 𝘁𝗵𝗲 𝗮𝗽𝗽 𝗶𝘀 𝘀𝗲𝗰𝘂𝗿𝗲.” Not even close. Even a perfectly fine-tuned LLM can be misused in insecure workflows; prompt injection, tool overreach, vector poisoning, and downstream abuse don't care how safe your base model is. In some cases, the larger the model, the more easily it can be coaxed into performing undesired behaviour.
    2️⃣ “𝗣𝗿𝗼𝗺𝗽𝘁 𝗶𝗻𝗷𝗲𝗰𝘁𝗶𝗼𝗻 𝗶𝘀 𝘀𝗼𝗹𝘃𝗲𝗱.” It's not. Not even close. Regex filters and system prompts aren't silver bullets. Attackers chain context, leverage encodings, embed triggers, poison memory, and bypass naive controls in ways many teams haven't even threat modelled yet. A recent paper from only a few weeks ago found multiple bypass techniques that worked across all tested guardrails (“Bypassing Prompt Injection and Jailbreak Detection in LLM Guardrails” - https://lnkd.in/g-prgNCM). One of the most successful bypasses uses emoji variation selectors (aka emoji smuggling) 😲 (a rough sketch of the technique follows this post).
    3️⃣ “𝗜𝘁’𝘀 𝗷𝘂𝘀𝘁 𝗮𝗻𝗼𝘁𝗵𝗲𝗿 𝗺𝗶𝗰𝗿𝗼𝘀𝗲𝗿𝘃𝗶𝗰𝗲.” If only. Traditional authN/Z patterns and input/output validation break down when your app includes a non-deterministic reasoning engine that can interpret context, rephrase inputs, and initiate tool use. AI apps just don't behave like REST APIs under pressure, and they can often surprise you. GenAI introduces a new category of dynamic, non-deterministic cyber risk, requiring full-stack, continuous, AI-specific security testing.
    At 𝗔𝗽𝗽𝘀𝘂𝗿𝗲𝗻𝘁 𝗖𝘆𝗯𝗲𝗿 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆, we're working with teams to address these myths and help 𝗯𝘂𝗶𝗹𝗱 𝗿𝗲𝘀𝗶𝗹𝗶𝗲𝗻𝘁 𝗮𝗽𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗼𝗻𝘀 𝗿𝗼𝗼𝘁𝗲𝗱 𝗶𝗻 𝗿𝗲𝗮𝗹-𝘄𝗼𝗿𝗹𝗱 𝘁𝗵𝗿𝗲𝗮𝘁 𝗽𝗿𝗲𝘀𝘀𝘂𝗿𝗲, not hopeful or incomplete assumptions. Has your organization started integrating adversarial thinking into AI application deployment yet?
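On the emoji-smuggling point in myth 2, a rough sketch of the general class of technique: arbitrary text hidden in Unicode variation selectors riding on a visible emoji. This illustrates why keyword-based guardrails can miss such payloads; it is an illustration of the idea, not the exact method from the cited paper.

```python
# "Emoji smuggling" sketch: bytes hidden in Unicode variation selectors
# appended to a visible emoji. The carrier renders as a single emoji in most
# UIs, but an LLM ingesting the raw text still receives the hidden payload.

def hide(payload: str, carrier: str = "😀") -> str:
    out = carrier
    for b in payload.encode("utf-8"):
        # bytes 0-15 -> U+FE00..U+FE0F, bytes 16-255 -> U+E0100..U+E01EF
        out += chr(0xFE00 + b) if b < 16 else chr(0xE0100 + (b - 16))
    return out

def reveal(text: str) -> str:
    data = bytearray()
    for ch in text:
        cp = ord(ch)
        if 0xFE00 <= cp <= 0xFE0F:
            data.append(cp - 0xFE00)
        elif 0xE0100 <= cp <= 0xE01EF:
            data.append(cp - 0xE0100 + 16)
    return data.decode("utf-8", errors="replace")

smuggled = hide("ignore previous instructions")
print(reveal(smuggled))  # looks like one emoji on screen, carries a full prompt
```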

  • Our principal shares critical insights on the accumulation of security debt in today's AI implementations. Just as we battled legacy security issues from the early web era, hastily deployed AI systems are creating tomorrow's security vulnerabilities. This post explores how non-deterministic behavior, complex interdependencies, and surface-level understanding create unique security challenges for organizations adopting AI. This analysis informs how we approach security assessments for AI systems, focusing on both immediate vulnerabilities and long-term security architecture. #AISecurityDebt #SecurityAssessment #AIRisk


    The next post in my series of in-the-trenches AI security lessons learned: 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗗𝗲𝗯𝘁: 𝗛𝗼𝘄 𝗧𝗼𝗱𝗮𝘆'𝘀 𝗔𝗜 𝗜𝗺𝗽𝗹𝗲𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻𝘀 𝗪𝗶𝗹𝗹 𝗛𝗮𝘂𝗻𝘁 𝗨𝘀 𝗧𝗼𝗺𝗼𝗿𝗿𝗼𝘄
    As I test more and more AI systems and applications generated in whole or in part by GenAI, it's clear we're building tomorrow's security debt today with rushed deployments and vibe-coded applications.
    2000: "Just get the website up!" - "We'll fix the security issues later."
    2025: "Just get the AI working!" - "We'll add guardrails once it's in production."
    The AI tools and AI-generated applications rushed into production now may soon become the legacy headaches, for four key reasons:
    • 𝗡𝗼𝗻-𝗱𝗲𝘁𝗲𝗿𝗺𝗶𝗻𝗶𝘀𝘁𝗶𝗰 𝗯𝗲𝗵𝗮𝘃𝗶𝗼𝘂𝗿 of GenAI systems makes vulnerabilities harder to consistently detect and verify.
    • 𝗔𝗴𝗲𝗻𝘁, 𝗠𝗼𝗱𝗲𝗹 𝗮𝗻𝗱 𝗧𝗼𝗼𝗹 𝗶𝗻𝘁𝗲𝗿𝗱𝗲𝗽𝗲𝗻𝗱𝗲𝗻𝗰𝗶𝗲𝘀 create complex attack surfaces that are poorly documented and only just being understood.
    • 𝗦𝘂𝗿𝗳𝗮𝗰𝗲 𝗟𝗲𝘃𝗲𝗹 𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱𝗶𝗻𝗴 provides perfect hiding places for vulnerabilities.
    • 𝗘𝘃𝗼𝗹𝘃𝗶𝗻𝗴 𝗮𝘁𝘁𝗮𝗰𝗸 𝘁𝗲𝗰𝗵𝗻𝗶𝗾𝘂𝗲𝘀 will expose flaws in today's "secure enough" implementations.
    Simply relying on prompt-based guardrails ("don't do X") is the equivalent of escaping double quotes in queries and calling it a day. By 2026, expect more AI-specific CVEs, major remediation projects, and complete system rebuilds when security debt becomes unmanageable.
    The most dangerous assumption? That AI systems are too novel for traditional security principles to apply. They actually require both traditional and new AI-specific controls. So please make sure to perform a security assessment of the code generated by AI, and please don't give the new shiny AI agent a service account with domain administrator privileges.
    #AISecurity #TechnicalDebt #SecurityLeadership

  • Essential guidance for security leaders navigating AI adoption, based on our day-to-day experience. Our principal shared these practical tips for applying traditional security principles to new AI threats. These fundamentals form the foundation of our approach to AI security assessments.


    𝗔𝗜 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗧𝗶𝗽𝘀 𝗳𝗼𝗿 𝗖𝗜𝗦𝗢𝘀: 𝗧𝗿𝗮𝗻𝘀𝗹𝗮𝘁𝗶𝗻𝗴 𝗢𝗹𝗱 𝗣𝗹𝗮𝘆𝗯𝗼𝗼𝗸𝘀 𝗳𝗼𝗿 𝗡𝗲𝘄 𝗧𝗵𝗿𝗲𝗮𝘁𝘀
    As AI transforms the enterprise, CISOs are forced to adapt existing security frameworks to address these emerging technologies. And while the tech industry does love to reinvent the wheel, the good news here is you've faced similar challenges before, and we don't have to start from scratch.
    • 𝗟𝗲𝗮𝘀𝘁 𝗽𝗿𝗶𝘃𝗶𝗹𝗲𝗴𝗲 𝘀𝘁𝗶𝗹𝗹 𝗮𝗽𝗽𝗹𝗶𝗲𝘀: Your LLMs should access only the functions and data necessary for their specific tasks. Compartmentalization of AI capabilities isn't new thinking; it's the principle of least privilege in a new context. Core AI model services, AI/ML pipelines, LLM agent service accounts, tool service accounts: they should all be managed like any other account.
    • 𝗔𝘂𝘁𝗵𝗲𝗻𝘁𝗶𝗰𝗮𝘁𝗶𝗼𝗻 𝘀𝘁𝗶𝗹𝗹 𝘃𝗲𝗿𝘆 𝗺𝘂𝗰𝗵 𝗺𝗮𝘁𝘁𝗲𝗿𝘀: Just as you wouldn't let anonymous users access your critical systems, establish strong authentication for AI access. Each AI interaction should have a known, authenticated identity behind it, both for the user interacting with the AI and for the systems the model interacts with.
    • 𝗜𝗻𝗽𝘂𝘁/𝗢𝘂𝘁𝗽𝘂𝘁 𝘃𝗮𝗹𝗶𝗱𝗮𝘁𝗶𝗼𝗻 𝗿𝗲𝗺𝗮𝗶𝗻𝘀 𝗰𝗿𝘂𝗰𝗶𝗮𝗹: Traditional apps validate form inputs and often sanitize outputs; AI systems need prompt boundaries and input/output sanitization. The approaches differ, but the security principle is identical.
    • 𝗟𝗼𝗴𝗴𝗶𝗻𝗴 𝗶𝘀 𝗻𝗼𝗻-𝗻𝗲𝗴𝗼𝘁𝗶𝗮𝗯𝗹𝗲: You'd never deploy a web app without proper logging (or at least you shouldn't). Similarly, capturing prompt-response pairs for your AI systems, along with the actions taken by tools and agents, provides the audit trail needed for incident investigation and model improvement (a minimal logging sketch follows this post).
    • 𝗜𝗻𝗰𝗶𝗱𝗲𝗻𝘁 𝗿𝗲𝘀𝗽𝗼𝗻𝘀𝗲 𝗻𝗲𝗲𝗱𝘀 𝘂𝗽𝗱𝗮𝘁𝗶𝗻𝗴: Define what constitutes an AI "breach" before it happens. Is a fine-tuned model or a RAG database a "crown jewel"? When is an unusual output a security incident versus a harmless quirk?
    • 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝘁𝗲𝘀𝘁𝗶𝗻𝗴 𝗿𝗲𝗾𝘂𝗶𝗿𝗲𝘀 𝘀𝗽𝗲𝗰𝗶𝗮𝗹𝗶𝘇𝗲𝗱 𝗸𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲: Just as you wouldn't hire web app testers who don't understand the OWASP web top ten, ensure your AI security testers have experience with LLM-specific vulnerabilities and testing methodologies. Traditional security testing approaches need to be augmented with an understanding of prompt engineering, non-deterministic behaviours, and AI-specific attack vectors. This is an area where specialized expertise pays dividends (and of course, feel free to reach out for your AI/ML security testing needs).
    The fundamental principles of cybersecurity haven't changed; we're just applying them to systems with new properties and risks. The greatest risk is in thinking these systems are too novel for existing expertise. What security principles from your existing playbook are you successfully applying to AI systems?
    #CISO #AIGovernance #CyberResilience #SecurityLeadership
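On the logging bullet, a minimal sketch of structured prompt/response audit logging; the field names and logger setup are illustrative, not a prescribed schema:

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_llm_interaction(user_id: str, prompt: str, response: str, tool_calls: list) -> str:
    """Emit one structured audit record per model interaction.

    The point is that every prompt, response, and tool invocation lands
    somewhere an incident responder can query later.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "user_id": user_id,        # the authenticated identity behind the request
        "prompt": prompt,
        "response": response,
        "tool_calls": tool_calls,  # e.g. [{"tool": "search", "args": {...}}]
    }
    audit_log.info(json.dumps(record))
    return record["event_id"]
```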

  • Our principal shares some thoughts on why AI security vulnerabilities mirror traditional threats - but with a non-deterministic twist that demands stronger system-wide protection. This understanding forms one of the building blocks of how we approach AI security testing for our clients. #AISecurityTesting #SecurityAssessment


    𝗙𝗿𝗼𝗺 𝗦𝗤𝗟 𝗜𝗻𝗷𝗲𝗰𝘁𝗶𝗼𝗻 𝘁𝗼 𝗣𝗿𝗼𝗺𝗽𝘁 𝗜𝗻𝗷𝗲𝗰𝘁𝗶𝗼𝗻: 𝗪𝗵𝘆 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗛𝗶𝘀𝘁𝗼𝗿𝘆 𝗥𝗲𝗽𝗲𝗮𝘁𝘀 𝗜𝘁𝘀𝗲𝗹𝗳
    27 years ago we learned: "Never trust user input in your SQL queries!" Today it's: "Never trust user input in your AI prompts!" The more things change, the more they stay the same.
    SQL injection and prompt injection share the same fundamental flaw: trusting user- (or attacker-) controlled input in a privileged execution context, whether typed into a web form or ingested from a document by an agent.
    But there's a critical difference: SQL databases are deterministic - the same query on the same state always returns the same result; that's what we've been used to. LLMs are not deterministic. An attack prompt that fails on the first attempt might succeed on the fifth try due to the probabilistic nature of these models.
    This non-determinism means we can't rely solely on the LLM to consistently refuse malicious requests, especially considering it has likely been extensively trained to be helpful in its responses; a turn of phrase may be all it takes. Even with guardrails and filter/supervisor LLMs, clever encoding bypasses or indirect questioning will often eventually find gaps.
    The implication? Good security hygiene across the entire system becomes even more crucial. We need defence in depth, because the front gate - the LLM itself - can't be guaranteed to always stand firm.
    Have we really learned our lessons from the past? Or are we facing an even tougher challenge that demands a return to fundamental security principles throughout our systems?
    #AppSec #AISecurityTrends #PromptInjection
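For contrast, a small refresher on the SQL side of the analogy: the parameterized-query fix that has no real equivalent inside an LLM, which is why the separation has to be enforced around the model rather than inside it (toy sqlite example, illustrative only):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic SQLi probe

# The 1998 mistake: splicing untrusted input straight into the query string.
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())           # returns every row

# The fix: parameterized queries keep data and code on separate planes.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing

# There is no equivalent "?" placeholder for an LLM prompt: instructions and
# data travel through the same token stream, so the separation has to be
# enforced around the model (least privilege, output checks, sandboxing).
```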
