Understanding the Risks of Agentic Commerce

Explore top LinkedIn content from expert professionals.

  • View profile for Neha Narkhede

    Co-founder & CEO, Oscilar. Co-founder & Board Member, Confluent. Original Creator, Apache Kafka. Startup investor/advisor

    42,121 followers

    Visa and Mastercard officially launched their Agent Tokenization platforms (Visa Intelligent Commerce and Mastercard AgentPay), marking a fundamental shift toward agentic commerce. This means AI-driven transactions, executed autonomously using tokenized payment credentials, are no longer theoretical; they are here at scale.

    This evolution has significant implications for the payments industry. While promising streamlined, frictionless consumer experiences, agentic commerce simultaneously introduces complex new fraud and risk vectors:
    - How do you reliably authenticate user intent behind autonomous purchases?
    - How do you manage liability when agent-issued tokens are compromised?
    - How do you detect nuanced anomalies specific to agent behavior, rather than human or bot activity?

    Payment processors and merchants should start preparing today by clearly distinguishing agent-driven transactions from human-driven ones, establishing distinct behavioral profiles, deploying specialized anomaly detection, and implementing adaptive risk rules that specifically control the blast radius of agents.

    At Oscilar, we’re actively focused on addressing these emerging challenges. Crucially, distinguishing legitimate AI agent activity from malicious bots is just a starting point; agentic commerce demands far more sophisticated approaches. We’re deeply engaged in solving problems such as behavioral analytics tailored specifically to evolving agent behaviors, continuously refining our ML models to recognize subtle deviations indicative of risk, and building flexible decisioning capabilities designed for rapid adaptation to new threats as they arise.

    Navigating agentic commerce demands thoughtful preparation, flexibility, and foresight. If your team is exploring these evolving risks and opportunities, we would love to have a conversation.
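
    To make the "blast radius" idea above concrete, here is a minimal sketch of what agent-specific risk rules could look like: per-token spend caps, velocity limits, and a category allowlist. The field names and thresholds are hypothetical illustrations, not Oscilar's implementation.

    ```python
    # Sketch of agent-aware risk rules: per-agent-token spend caps and velocity
    # limits that bound the blast radius of a compromised or misbehaving agent.
    # Field names (agent_token, amount, merchant_category) are illustrative only.
    from collections import defaultdict
    from dataclasses import dataclass, field

    @dataclass
    class AgentRiskPolicy:
        max_single_amount: float = 200.0     # largest purchase an agent may make alone
        max_daily_total: float = 500.0       # rolling spend cap per agent token
        max_txns_per_day: int = 20           # velocity limit
        allowed_categories: set = field(default_factory=lambda: {"grocery", "travel"})

    class AgentRiskEngine:
        def __init__(self, policy: AgentRiskPolicy):
            self.policy = policy
            self.spend = defaultdict(float)
            self.count = defaultdict(int)

        def evaluate(self, txn: dict) -> str:
            """Return 'approve', 'review', or 'decline' for an agent-initiated transaction."""
            token, amount = txn["agent_token"], txn["amount"]
            if txn["merchant_category"] not in self.policy.allowed_categories:
                return "review"                        # outside the agent's mandate
            if amount > self.policy.max_single_amount:
                return "review"                        # escalate large purchases to the user
            if self.spend[token] + amount > self.policy.max_daily_total:
                return "decline"                       # cap total exposure per agent token
            if self.count[token] + 1 > self.policy.max_txns_per_day:
                return "decline"                       # velocity anomaly
            self.spend[token] += amount
            self.count[token] += 1
            return "approve"

    engine = AgentRiskEngine(AgentRiskPolicy())
    print(engine.evaluate({"agent_token": "tok_1", "amount": 45.0, "merchant_category": "grocery"}))   # approve
    print(engine.evaluate({"agent_token": "tok_1", "amount": 999.0, "merchant_category": "grocery"}))  # review
    ```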

  • View profile for Soups Ranjan

    Co-founder, CEO @ Sardine | Payments, Fraud, Compliance

    34,920 followers

    AI Agents will soon handle billions in payments. But who screens them for fraud?

    Visa and Mastercard just launched Agent Tokenization platforms, sending one crystal-clear signal: agentic commerce isn't coming. It's already here.

    Every payment system relies on knowing WHO is transacting. But when AI agents start making purchases, our entire risk model breaks:
    - How do you verify an AI is representing a legitimate user?
    - Who's liable when an agent goes rogue?
    - How do you prevent compromised agents from draining accounts?

    It’s easy to write this off as theory. But we know it’s happening now. Both Visa and Mastercard announced their Agent Tokenization platforms (Visa Intelligent Commerce and Mastercard AgentPay). Today, tokenization powers services like “card on file” (the ability to store a card at a merchant), mobile tap to pay, and subscription and expense management services. Extending this model to agents is the natural next step. It’s unclear from the press releases how both schemes intend to update their disputes processes, and in the interim we already have AI agents making purchases (through Stripe’s Agent SDK with virtual cards, or OpenAI’s Operator).

    Simon wrote recently that he sees 4 models of agentic commerce:
    1️⃣ Browser-based (like OpenAI’s Operator) - the agent stops and asks the user for card input. This looks like classic bots and screen scraping.
    2️⃣ Card-on-file agents - the agent stores a PCI DSS-compliant token at the merchant and is “known” after a first transaction.
    3️⃣ Accelerated checkouts & virtual card agents - the agent is given a one-time-use virtual card at branded checkouts like Stripe Link.
    4️⃣ Stablecoin-based - agents with their own wallets transacting anywhere stablecoins are accepted (e.g. Payman AI, Skyfire and now, Coinbase x402).

    We are very good at detecting bots and assigning a “user score” or set of risk parameters to that user. AI agents look like bots. Next, we are actively working on distinguishing AI agents from bots. This is something we already do every day, and we’re constantly shifting that baseline and testing it with new agentic experiences as they’re launched.

    What should processors and merchants do today? We're advising clients to:
    - Build systems to distinguish human vs agent transactions
    - Create separate risk rules for agent-driven purchases
    - Monitor fraud/dispute rates between human/agent segments
    - Prepare for rapid changes in agent behavior patterns

    The merchants who solve this first will dominate agentic commerce. The rest? They'll be picking up the fraud losses.

    What agent use cases are you most concerned about? Get in touch if it’s something you’re thinking about.
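
    A minimal sketch of the "distinguish and monitor by segment" advice above, assuming hypothetical classification signals and field names (this is not Sardine's system): tag each transaction as human-, agent-, or bot-originated and track dispute rates per segment so divergence shows up quickly.

    ```python
    # Sketch of segment-level monitoring: classify each transaction's origin and
    # keep per-segment dispute rates. The signals (agent_token, is_headless_browser)
    # and field names are illustrative placeholders.
    from collections import defaultdict

    def classify_source(txn: dict) -> str:
        if txn.get("agent_token"):             # scheme-issued agent credential present
            return "agent"
        if txn.get("is_headless_browser"):     # classic bot/automation signal
            return "bot"
        return "human"

    class SegmentMonitor:
        def __init__(self):
            self.totals = defaultdict(int)
            self.disputes = defaultdict(int)

        def record(self, txn: dict, disputed: bool = False) -> None:
            segment = classify_source(txn)
            self.totals[segment] += 1
            if disputed:
                self.disputes[segment] += 1

        def dispute_rates(self) -> dict:
            return {seg: self.disputes[seg] / n for seg, n in self.totals.items()}

    monitor = SegmentMonitor()
    monitor.record({"agent_token": "tok_abc", "amount": 42.0})
    monitor.record({"is_headless_browser": True, "amount": 9.99}, disputed=True)
    print(monitor.dispute_rates())   # {'agent': 0.0, 'bot': 1.0}
    ```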

  • View profile for Anand Singh, PhD

    CSO (Symmetry Systems) | Bestselling Author | Keynote Speaker | Board Member

    14,218 followers

    Rogue AI isn’t a sci-fi threat. It’s a real-time enterprise risk.

    In 2024, a misconfigured AI agent at Serviceaide, meant to streamline IT workflows in healthcare, accidentally exposed the personal health data of 483,000+ patients at Catholic Health, NY.

    What happened? An autonomous agent accessed an unsecured Elasticsearch database without adequate safeguards. The result:
    🔻 PHI leak
    🔻 Federal disclosures
    🔻 Reputational damage

    This wasn’t a system hack. It was a goal-oriented AI doing exactly what it was asked, without understanding the boundaries.

    Welcome to the era of agentic AI: systems that act independently to pursue objectives over time. And when those objectives are vague, or controls are weak? They improvise. An AI told to “reduce customer wait time” might start issuing refunds or escalating permissions, because it sees those as valid shortcuts to the goal. No malice. Just misalignment.

    How do we prevent this?
    ✅ Define clear, bounded objectives
    ✅ Enforce least-privilege access
    ✅ Monitor behavior in real time
    ✅ Intervene early when drift is detected

    Agentic AI is already here. The question is: are your agents aligned, or are they already off-script?

    Let’s talk about making autonomous systems safer, together. Share your thoughts in the comments below. 🔁 Repost to keep this on the radar. 👤 Follow me (Anand Singh, PhD) for more insights on AI risk, data security & resilient tech strategy.
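
    As one way to picture the "least-privilege access" point above, here is a minimal sketch of a default-deny tool gate for an autonomous agent. The task names, scope names, and tool call are hypothetical illustrations, not Serviceaide's setup.

    ```python
    # Sketch of least-privilege tool access: every tool call is checked against the
    # scopes granted for the current task, and anything outside the grant is blocked
    # instead of improvised. Task and scope names are illustrative.
    ALLOWED_SCOPES = {
        "it_ticket_triage": {"ticket.read", "ticket.update"},   # no datastore access granted
    }

    def authorize(task: str, tool_call: str) -> bool:
        """Allow a tool call only if it is within the task's granted scopes."""
        return tool_call in ALLOWED_SCOPES.get(task, set())

    def run_tool(task: str, tool_call: str, payload: dict) -> dict:
        if not authorize(task, tool_call):
            # Deny by default and surface the attempt for human review.
            return {"status": "blocked", "reason": f"{tool_call} not granted for {task}"}
        return {"status": "ok"}   # dispatch to the real tool implementation here

    print(run_tool("it_ticket_triage", "elasticsearch.query", {"index": "patient_records"}))
    # -> {'status': 'blocked', 'reason': 'elasticsearch.query not granted for it_ticket_triage'}
    ```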

  • Is your enterprise AI a trusted coworker, or a potential internal affairs agent? 🕵️‍♂️

    The Anthropic Claude 4 'whistleblower' test was a flashpoint, but the real story goes beyond whether your AI model acts as a "snitch" or not. It's really about exposing the critical vulnerabilities in how we're building and deploying increasingly autonomous AI systems. While the initial headlines have faded, the strategic questions for tech leaders are more urgent than ever.

    In my latest VentureBeat piece, informed by a deep-dive videocast with AI agent developer Sam Witteveen, we cut through the noise to reveal:
    ° Why the definition of "normal usage" for agentic AI is a ticking time bomb if not addressed.
    ° The often-overlooked risks lurking in your AI's tool access and server-side sandboxes. (Are they connected to the internet? Do you even know?)
    ° How the enterprise FOMO wave (Tobias Lütke at Shopify and others are helping drive this) is pushing teams to bypass crucial governance.
    ° The critical missing piece: why demanding full transparency on tool usage reports, not just system prompts, is essential for assessing true agentic behavior. (A key point when considering models from Anthropic and others.)

    This issue goes beyond Anthropic's eccentricities. Open-source developer Theo Browne has since launched SnitchBench, a GitHub project that ranks LLMs by how aggressively they report you to authorities. But this goes beyond even models. It's about the entire AI ecosystem and whether we're prepared for the "agentic AI risk stack."

    Dive into the analysis for 6 key takeaways your enterprise needs to consider NOW. This is the recipe for navigating control and trust in our agentic AI future.

    🔗 Read the full analysis on VentureBeat: https://lnkd.in/ghxJ52hx
    📺 Watch the deep dive videocast with Sam Witteveen: https://lnkd.in/gYzmG8aa

    #EnterpriseAI #AIGovernance #AIRisk #LLMs #AgenticAI #TechLeadership #AIStrategy #ResponsibleAI #FutureofWork
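
    As a rough illustration of the "tool usage report" point above, here is a minimal sketch of wrapping an agent's tools so every invocation lands in an auditable trail. The tool, log format, and names are hypothetical, not any vendor's API.

    ```python
    # Sketch of a tool-usage audit trail: wrap each tool an agent can call so every
    # invocation is recorded with its arguments and a timestamp.
    import functools
    import json
    import time

    TOOL_LOG: list[dict] = []

    def audited(tool_name: str):
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                TOOL_LOG.append({"tool": tool_name, "args": args,
                                 "kwargs": kwargs, "ts": time.time()})
                return fn(*args, **kwargs)
            return wrapper
        return decorator

    @audited("send_email")
    def send_email(to: str, body: str) -> str:
        return f"queued mail to {to}"        # stand-in for the real side effect

    send_email("cfo@example.com", body="Quarterly summary attached.")
    print(json.dumps(TOOL_LOG, indent=2, default=str))   # the "tool usage report"
    ```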

    Claude 4's Snitch Mode? The AI Scandal Shaking Enterprise Trust


  • View profile for Agus Sudjianto

    A geek who can speak: Co-creator of PiML and MoDeVa, SVP Risk & Technology H2O.ai, Retired EVP-Head of Wells Fargo MRM

    24,352 followers

    Thinking about vibe testing agentic AI, anyone?

    This is what real quants do when building and deploying models. Not vibe testing. Not shipping Mickey Mouse agentic AI.

    Eduardo Canabarro said it best (see screenshot): “Our models were not toys, or abstractions, they were potent tools… subject to potentially severe failure. Like a SpaceX Falcon rocket.”

    In the world of derivatives trading, a model wasn’t a cool demo or a clever paper. It was code, millions of lines of it. It was edge cases, limit tests, failure simulations. It was betting your job, your P&L, your reputation that it would hold under fire. You didn’t just deploy and hope. You tested, tortured, and tried to break the model before it broke the bank.

    Today, too many are deploying LLMs and “agentic AI” systems without a plan for failure, without understanding limits, and with zero accountability. That’s not innovation. That’s recklessness in a lab coat.

    If you’re building AI models:
    - Ask how your model can fail
    - Simulate edge cases
    - Think like a trader with real money at risk
    - Assume responsibility for consequences

    Because if your model can ruin someone’s business, safety, or career, you better not treat it like a toy.

    #ModelRisk #QuantsNotCowboys
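
    In that spirit, here is a minimal sketch of edge-case and limit tests for an agentic workflow. The toy agent, its thresholds, and the scenarios are hypothetical stand-ins for whatever system you are actually deploying.

    ```python
    # Sketch of "try to break it before it breaks the bank" testing: parametrized
    # edge cases, limit tests, and adversarial inputs against a toy purchasing agent.
    import pytest

    class ToyPurchasingAgent:
        MAX_SPEND = 100.0

        def plan(self, instruction: str, budget: float) -> str:
            if budget <= 0 or budget > self.MAX_SPEND:
                return "escalate"            # refuse rather than guess on a bad limit
            if "urgent" in instruction.lower() and budget == self.MAX_SPEND:
                return "escalate"            # pressure at the spending cap is a known failure mode
            return "purchase"

    @pytest.mark.parametrize("instruction,budget,expected", [
        ("buy printer paper", 20.0, "purchase"),
        ("buy printer paper", -5.0, "escalate"),     # nonsensical budget
        ("buy printer paper", 1e9, "escalate"),      # limit test
        ("URGENT buy now!!!", 100.0, "escalate"),    # adversarial pressure at the cap
    ])
    def test_agent_failure_modes(instruction, budget, expected):
        assert ToyPurchasingAgent().plan(instruction, budget) == expected
    ```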

  • View profile for Chenxi Wang, Ph.D.

    Investor, Cyber expert, Fortune 500 board member, Venturebeat Women-in-AI award winner. I talk about #cybersecurity #venturecapital #diversity #womenintech #boardgovernance

    24,432 followers

    I reviewed the "State of Agentic AI Red Teaming" report from SplxAI, Stanford University, and the OWASP® Foundation. I found the paper valuable for a number of reasons:

    - It provides a well-defined taxonomy for #AI #agents. The framework and the examples are interesting to anyone who studies AI agents.
    - It articulates clear and specific risks to agentic applications. Examples include #RAG poisoning, #multi agent trust exploitation, and resource exhaustion, among others. This should be educational to readers.
    - I especially enjoyed the #Threat #Modeling section of the paper. It incorporates OWASP’s MAESTRO framework and shows how to apply MAESTRO in real-world scenarios.
    - It includes a discussion of near- and long-term #MCP risk detection techniques, which I found enlightening.
    - The paper also includes some of the best descriptions I have seen of #multi-turn #prompt #injection attacks.

    Agentic applications are still in their infancy, and attacks and risks evolve accordingly. This paper provides a timely description of testing agentic applications for security purposes. It is a good primer for anyone who works on AI agents.

    The report is joint work between SplxAI, Stanford University, and the OWASP® Foundation. You can download the paper here: https://lnkd.in/ewUKUN-V

    Julie Tsai, Matias Madou, Saša Zdjelar, Bhavya Gupta, Siying Yang, Weilin Zhong, Xinyu Xing

  • View profile for Ashish S.

    Director of Engineering | Global AI Platforms • Bedrock Multi-Modal AI • Responsible AI | End-to-End Org Leadership

    2,065 followers

    Agentic RAI Series - The Teaser

    LLMs generate content. Agents take action. That changes everything.

    Consider these real or plausible scenarios:
    • An AI agent asked to get ingredients for an authentic Japanese cheesecake purchases a $2,000 flight to Tokyo, because it interpreted “authentic” literally and no one told it otherwise.
    • Told to reduce calendar clutter, another agent cancels upcoming investor meetings, along with internal performance reviews.
    • A finance assistant agent is asked to “minimize recurring costs” and promptly terminates key vendor contracts, including the company’s cloud provider.
    • Tasked with “hardening security,” a DevOps agent disables user logins, deletes access tokens, and triggers a full lockout. The engineering team is now locked out of production.
    • A customer support agent handling ticket resolution issues partial refunds, then, seeing high customer satisfaction scores, proceeds to refund every ticket unprompted.

    None of these agents were “wrong” in the traditional sense. They followed instructions. They achieved measurable outcomes. But they operated without context, without judgment, and without the guardrails that humans apply intuitively:
    – No understanding of downstream consequences
    – No mechanism for value-sensitive reasoning
    – No scope-aware permission limits
    – No escalation or human-in-the-loop protocols
    – No way to ask for clarification when uncertainty should be a stop sign

    In short, they lacked Responsible AI infrastructure: the policy, oversight, and constraint architecture that keeps autonomous systems from causing harm.

    This is the shift from LLMs to agents. LLMs suggest. Agents persist, reason, act, and escalate.

    So, what makes Responsible AI radically harder in the agentic paradigm?
    • Agents don’t stop at one answer; they pursue objectives over time.
    • They chain actions together, often interacting with APIs, systems, data, and people.
    • Small errors compound, and goal misalignment at step 1 becomes operational failure by step 12.
    • Their autonomy introduces real-world entanglement, where outcomes are no longer reversible.
    • And their speed, scale, and decision opacity leave little room for human catch-up.

    This post kicks off the Agentic RAI Series, where we’ll explore these new challenges in depth and map the path forward for safe, aligned, and trustworthy AI agents. The first full piece drops soon.

    Updated 05/20 - next article: https://lnkd.in/giCPjDde

    #ResponsibleAI #AgenticAI #AIagents #AIethics #AutonomousAI #FutureOfAI #AgenticRAISeries
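
    A minimal sketch of the guardrails listed above: scope-aware permission limits, human-in-the-loop escalation for irreversible actions, and a clarification step when uncertainty is high. The action names, scopes, and confidence threshold are illustrative, not any specific product's policy.

    ```python
    # Sketch of an action gate for an agent: deny out-of-scope actions, route
    # irreversible ones to a human, and ask for clarification when confidence is low.
    IRREVERSIBLE_ACTIONS = {"cancel_contract", "delete_access_tokens", "issue_refund"}
    GRANTED_SCOPES = {"calendar.read", "calendar.update"}     # what this agent may touch

    def gate(action: str, scope_needed: str, confidence: float) -> str:
        if scope_needed not in GRANTED_SCOPES:
            return "denied: outside granted scope"
        if confidence < 0.8:
            return "ask user: instruction is ambiguous"       # uncertainty is a stop sign
        if action in IRREVERSIBLE_ACTIONS:
            return "pending human approval"                   # human-in-the-loop
        return "execute"

    print(gate("cancel_meeting", "calendar.update", 0.95))    # execute
    print(gate("cancel_meeting", "calendar.update", 0.55))    # ask user: instruction is ambiguous
    print(gate("issue_refund", "billing.write", 0.99))        # denied: outside granted scope
    ```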

  • View profile for Zack Hamilton

    Author, Creator of the Experience Performance System™ | Host, Unf*cking Your CX Podcast

    16,592 followers

    The Biggest Risk in CX Right Now? Premature Agentification.

    Everyone wants AI. But here’s the hard truth: before you automate, you have to operate like an agentic system. Because Agentic CX isn’t software you install. It’s a behavior you earn.

    That means:
    1/ CX leaders manually route signal to friction
    2/ Journey pods are built and prioritized based on value, not volume or emotion
    3/ Recovery playbooks are executed by humans, but designed with intelligence
    4/ Routing rules exist, even if no agent is yet running them
    5/ You build the system with people. Then, and only then, do you scale it with agents.

    The 5 Levels of Agentic CX Maturity:
    1️⃣ Reactive Support → Support & Ops react to issues. No signal strategy.
    2️⃣ Signal-Informed CX → CX collects signals, but doesn’t consistently act on them.
    3️⃣ Human-Orchestrated EPS → CX + pods triage friction, run manual playbooks.
    4️⃣ Semi-Agentic CX → Agents recommend actions, humans approve.
    5️⃣ Fully Agentic System → Signals trigger actions autonomously. System learns and adapts.

    If you’re not at Level 3 yet, you have no business pretending you’re ready for Level 5.

    What happens when you fake readiness? The demo looks magical. The pilot gets funded. And 90 days later:
    – Customer trust erodes
    – Internal confidence drops
    – The initiative dies
    – The tech gets blamed
    – CX loses credibility

    Agentic CX doesn’t fail because the AI fails. It fails because the system wasn’t ready to be agentified in the first place.

    Three common failure patterns:

    AI Without Structure → Chaos at Scale. You bought the tech, but didn’t define friction, priority moments, or escalation rules. Now the system is guessing and your brand is paying for it.

    Insight Overload → No Action Taken. Thousands of AI-powered insights, zero activation, because your team doesn’t know how to triage or execute.

    Over-Automation Erodes Trust. Customers get robotic care. Agents lose judgment. Brand equity suffers.

    If your system still requires meetings to generate momentum, it’s not ready for machines to generate action.

    If you're still:
    - Debating which frictions are “real”
    - Using quarterly reports to drive change
    - Running AI pilots that are glorified email optimizers
    then you're not building an Agentic CX system; you're scaling misalignment.

    Start here instead:
    ✅ Prioritize by signal clusters, not anecdotes
    ✅ Build journey pods that own the work
    ✅ Define human-led Pulse & Surge actions
    ✅ Track real outcomes: CLV, retention, cost-to-serve

    Agentic CX is a performance system. And performance systems don’t emerge from pilots. They’re built one behavior at a time. Stop racing to automate. Start building the system that deserves it.

  • View profile for Ranjana Sharma

    Human Leadership in an AI World | Startup Co-Founder | Entrepreneur | eCommerce Leader | AI Advisor for Retailers

    3,680 followers

    🧠 15 ways AI agents can quietly wreck your operations.

    IBM just dropped a must-read for every business sprinting into AI: 👉 “AI Agents: Opportunities, Risks, and Mitigations”

    And the big takeaway? We’re not losing control of AI. We’re giving it away, one agent at a time.

    2025 will be the year of agents. And businesses are racing in without understanding the risks. Here’s what that looks like:
    🔹 One AI agent “optimized revenue” by spamming customers nonstop
    🔹 Another “streamlined ops” by breaking the CRM
    🔹 One confidently recommended winter coats in the tropics
    🔹 And another shared sensitive data, because “transparency builds trust”

    None of these were bugs. They happened because no one stopped to ask: what could go wrong?

    IBM outlines 15 core risks of agentic AI, including:
    → Autonomous decision-making
    → Goal misalignment
    → Tool misuse
    → Data leakage
    → Memory drift
    … and the list keeps growing.

    This is not a technical problem. It’s a business risk problem. If you're a CFO, COO, or eCommerce lead, this is your moment to get ahead of the chaos.

    So I turned IBM’s insights into a human-first playbook:
    ✅ What these risks look like on the ground
    ✅ Real-world failures (so you don’t have to learn the hard way)
    ✅ Smart controls that actually work

    👇 Carousel’s right here. 🧩 Link to the full IBM report is in the comments. 💬 What’s your biggest concern with AI agents? Save 💾 ➞ React 👍 ➞ Share ♻️
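
    As one small example of a control that can blunt the "data leakage" risk named above, here is a minimal sketch of an output scrubber that redacts obvious PII patterns before an agent's reply leaves the system. The patterns are illustrative and far from exhaustive, and are not drawn from the IBM report.

    ```python
    # Sketch of a simple data-leakage control: redact obvious PII patterns from an
    # agent's outgoing text before it is sent anywhere.
    import re

    PII_PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact(text: str) -> str:
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label}]", text)
        return text

    print(redact("Contact jane.doe@acme.com, card 4111 1111 1111 1111."))
    # -> Contact [REDACTED EMAIL], card [REDACTED CARD].
    ```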
