UC Berkeley's California Management Review outlines five persistent barriers to scaling enterprise AI:
• Technical infrastructure
• Organizational design
• Financial investment
• Human factors
• Security concerns

Our view: traditional guardrails are insufficient. Enterprises need adaptive security & compliance frameworks anchored by operational observability. CAIO Hanah-Marie Darley shares pragmatic best practices for moving past each barrier, so your team can say "yes" to innovation without unleashing risk.

Read the blog: https://lnkd.in/gyNbDQTq
#AIGovernance #AISecurity
About us
AI agents promise transformative benefits, but effective adoption is impossible without the right controls, and existing tools can’t keep pace with autonomous behaviour. At Geordie, we help enterprises accelerate agentic innovation without unleashing risk, giving Security & IT teams an agent-native platform to understand agents, proactively mitigate exposure, and safely scale adoption. Our platform delivers posture management, observability, and contextual interventions, turning blind spots into visibility and unmanaged exposure into governed outcomes. With Geordie, enterprises gain the confidence to adopt AI agents securely, responsibly, and at scale.
- Website
- https://geordie.ai/
- Industry
- Software Development
- Company size
- 11-50 employees
- Headquarters
- London
- Type
- Privately Held
- Founded
- 2025
Locations
- Primary: London, GB
- New York, NY, US
- 85 Great Portland Street, London, England W1W 7NT, GB
Employees at Geordie AI
- Alex Doll - Founder and Managing Member, Ten Eleven Ventures
- Ben Kingswell Barnett - Software Engineer @ Geordie | Enabling Safe Agentic Innovation
- Antonio Bovoso - Cybersecurity Executive | Board Advisor | MBA, University of Southern California
- Benji Weber - Co-founder & CTO | Enabling Safe Agentic Innovation
Updates
We're proud that our Chief AI Officer Hanah-Marie Darley was on stage this week at the KKR Euro CISO Summit in Paris. Her talk addressed a question many hesitate to ask, but every security leader needs answered: what exactly is an AI agent, and how can teams translate top-down mandates for agentic adoption into safe, operational reality?

3 top takeaways:
1. Generative AI vs. agentic AI: generative AI transforms inputs into outputs (content, code, summaries); agents take instructions, pursue goals, and execute actions. AI agents are reasoning engines (often an LLM) equipped with at least one tool.
2. Agentic governance requires explainability: explainability is retrospective (why was a decision made?), while behavioural observability is real-time (what’s happening right now?).
3. AI agents bring both familiar and amplified risks:
a. Data exposure, shadow AI, and system sprawl
b. Non-deterministic decisions, goal drift, and resource misuse
c. Complex multi-agent supply chains

CISOs need visibility into what AI agents can do, what they’re doing now, and how that changes risk posture as the technology evolves. #AI #AIGovernance #Cybersecurity #Womenintech #CISO
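The "reasoning engine equipped with at least one tool" definition above can be sketched in a few lines of Python. This is a toy illustration of the pattern, not any particular product's code: the reasoner is a hard-coded stub standing in for an LLM, and all names here are made up.

```python
# Minimal sketch of the "reasoning engine + tool" agent pattern.
# The reasoner is a hard-coded stub standing in for an LLM call;
# all names are illustrative.

def calculator_tool(expression: str) -> str:
    """A single tool the agent can invoke."""
    # Deliberately restricted: only digits and basic arithmetic allowed.
    allowed = set("0123456789+-*/(). ")
    if not set(expression) <= allowed:
        raise ValueError("expression contains disallowed characters")
    return str(eval(expression))

TOOLS = {"calculator": calculator_tool}

def stub_reasoner(goal: str) -> dict:
    """Stand-in for the LLM: maps a goal to a tool call."""
    return {"tool": "calculator", "input": goal}

def run_agent(goal: str) -> str:
    """One reason-then-act step: pick a tool, execute it, return the result."""
    plan = stub_reasoner(goal)
    tool = TOOLS[plan["tool"]]
    return tool(plan["input"])

print(run_agent("2 + 2"))  # -> 4
```

Even in this toy form, the security-relevant point is visible: the agent's capabilities are exactly its tools, which is why visibility into what tools an agent has (and what it is doing with them right now) matters.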
Geordie AI reposted this
At this week’s KKR CISO Summit, it was clear that security leaders are thinking deeply about how to enable AI safely as business adoption continues to skyrocket. In my session on Securing Agentic Innovation, I shared how AI agents are introducing new patterns of autonomy and decision-making, and why our governance models must now evolve to include behavioural observability and contextual governance. The conversations throughout the Summit reflected a shared ambition to make security a driver of innovation rather than a checkpoint. There’s a growing recognition that the right oversight and controls make progress sustainable rather than slowing it down. It’s an exciting time to be building the foundations for trusted, scalable AI! Thanks so much to Paul Harragan for organising and Accenture for hosting us all! #CISOSummit #EnterpriseAI #AIAgents #CyberSecurity #RiskManagement #AgenticInnovation
It was great to bring together over 100 CISOs and security leaders from across our European portfolio in Paris this week 🇫🇷 The KKR CISO Summit is always a highlight — a chance to connect, share experiences, and learn from one another. Across a few days of discussion, we explored some of the most important themes shaping the future of cybersecurity — from supply chain resilience & securing AI to future-proofing against quantum risks. I’m deeply grateful to everyone who joined and contributed to such open, thoughtful conversations. The insight, collaboration, and sense of community within this group continue to be truly inspiring. A special thank you to the presenters, Grace Cassy (Ten Eleven Ventures), Perry Carpenter (KnowBe4), Tom Patterson, Michaël Chouraki, Giovanni Cozzolino (Accenture), Joe Partlow (ReliaQuest), Justin Williams, Arvind Iyer, Karthik Sridharan (Optiv), Raynaud Schokkenbroek (Barracuda), Rui Shantilal (Devoteam), David Mycock (Flora Food Group), Thomas Bain (VulnCheck), James Savory 🏝️, Paul Murgatroyd 🏝 (Island), Hanah-Marie Darley (Geordie AI), Akhilesh Agarwal (apexanalytix), Nabil Hannan (NetSPI) and Zach Scheublein (Aon). Lastly, many thanks to David Cullen-Jones, Pallavi Jain and Accenture France for providing such a great venue. See you all at the next one! Early rumours are Portugal!
Geordie AI reposted this
Five RCE vulnerabilities disclosed in Cursor IDE over the last week (CVE-2025-59944, CVE-2025-61590, CVE-2025-61591, CVE-2025-61592, CVE-2025-61593) remind us that securing AI agents isn't just about prompt injection defenses or model alignment - it's also about hardening the traditional software stack they run on. Case-sensitivity bugs, workspace file handling, and configuration management might sound mundane, but they become critical when exploited through agentic workflows. As AI-powered dev tools gain production access and autonomy, every legacy weakness becomes a new attack surface. As Geordie AI sees it, agentic security = AI security + software security + supply chain security. All three layers matter. #AIAgents #TaintAnalysis #EnterpriseAI #RiskManagement #Cybersecurity #Observability #SupplyChainSecurity #DevSecOps
Geordie AI reposted this
NY Geordies 🗽🇺🇸 Awesome week for Toby & me out in New York as a team with our NY Geordies Josh & Juliet, and a heap of time spent with forward-thinking security teams who want to say “yes” to Agentic innovation without unleashing the risks. Spoiler: Geordie AI can help you there! And to top it off we:
- Welcomed another rock star in Juliet (Dachowitz) Galante as our Head of Marketing. When I say Juliet has our signature Geordie energy, I mean she executes fast, is razor sharp, and oozes the relentless positive energy we pride ourselves on here - and this isn’t her first high-growth rodeo!
- Moved into another (ok, tiny!) office in NY - our 5th office in less than 6 months due to headcount growth. Thanks Fora/WeWork for the flexibility.
Geordie AI reposted this
Another week, another vulnerable MCP server disclosure: figma-developer-mcp, which incorporates unsanitised user input into command-line strings, potentially leading to command injection and remote execution of arbitrary code: https://lnkd.in/eYsJrcVD Yet another example of the tension between adopting early and innovating quickly versus managing the expanded attack surface and security implications of every newly integrated MCP server (unless you're using Geordie, of course!). If your org is using Figma and MCP servers, check whether the figma-developer-mcp tool is in use, and if so, make sure the version is 0.6.3 or higher. And if you'd like to know more about some of the techniques we're using to scalably detect these risks, have a read of this article on why AI agents need taint analysis: https://lnkd.in/eQUKAf7b (And credit to the researcher behind the original advisory - the brilliant Alessio Della Libera 😉)
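For readers unfamiliar with the vulnerability class: building a shell command string out of unsanitised input lets an attacker smuggle in extra commands. The sketch below is a generic Python illustration of the pattern (not the actual figma-developer-mcp code, which is TypeScript):

```python
# Generic illustration of command injection via unsanitised command strings.
# Not the figma-developer-mcp code; the input here is a made-up example.
import subprocess

user_input = "hello; echo INJECTED"

# VULNERABLE: the input is spliced into a shell string, so the ";" starts
# a second command that the shell happily executes.
unsafe = subprocess.run(
    f"echo {user_input}", shell=True, capture_output=True, text=True
)
print(unsafe.stdout)  # two commands ran: "hello" then "INJECTED"

# SAFER: pass an argv list with no shell, so the whole input is one literal
# argument and shell metacharacters are never interpreted.
safe = subprocess.run(["echo", user_input], capture_output=True, text=True)
print(safe.stdout)  # one command ran; the input stayed data
```

The fix in cases like this is usually exactly the second form: never interpolate untrusted strings into a shell command, pass arguments as a list (or escape them with the platform's quoting facility).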
Geordie AI reposted this
Another legend joins the Geordies 🕯️ Thrilled to welcome Toby Wood to the team as Head of Solutions Engineering. Toby will be pivotal as we look to help enterprises say “yes” to Agentic innovation without unleashing the risks. This is a critical role as we lock in on our mission to ensure our customers get the best experience they’ve had from any security vendor. Ever. There’s no one we wanted more! Toby made an exceptional contribution at Darktrace over the past decade, from leading EMEA’s SE teams to then stepping into a senior role in Product. Anyone who has worked with Toby knows he is both elite + an exceptionally good human. We’re delighted to have him join the team at Geordie AI! 🤝
Geordie AI reposted this
Threats to agents' supply chains are really starting to ramp up, with multiple malicious and vulnerable MCP servers disclosed each week. The ecosystem is growing fast, with hundreds of thousands of tools that present novel vulnerability scenarios when misconfigured or combined in dangerous ways. This week's scariest was probably @lanyer640/mcp-runcommand-server, which ships a reverse shell to grant attackers remote access. This week, like every week, the Geordie AI team added detection for risks across thousands more tools - including malicious tools, vulnerable tools, data leaks, damaging actions and more - across dozens of agentic apps, frameworks, and platforms. Getting a handle on where this constantly changing ecosystem is creating risks across your codebases, coding agents, cloud, and SaaS agentic webapps is the challenge we're helping bring under control.
Geordie AI reposted this
No time to die. Taint analysis once kept us safe from SQL and command injection in source code. Today, it can do the same for AI agents — exposing vulnerabilities and hidden risk chains like the ForcedLeak attack which could exploit Salesforce's 'Agentforce'. Read here: https://lnkd.in/eQUKAf7b #AIAgents #TaintAnalysis #EnterpriseAI #RiskManagement #Cybersecurity #Observability
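As a rough intuition for how taint analysis works: data from untrusted "sources" is marked tainted, and dangerous "sinks" refuse it unless a sanitiser has cleared it. The toy tracker below illustrates that source-to-sink model; it is a sketch for intuition only, not Geordie's implementation, and all names are made up.

```python
# Toy taint tracker: untrusted sources mark data as tainted, and a sink
# rejects tainted values unless a sanitiser has produced a clean copy.
# A sketch of the source -> sink model only, not a real analysis engine.

class Tainted(str):
    """A string subclass used as a taint marker."""

def from_user(value: str) -> Tainted:
    """Source: anything arriving from the outside world starts tainted."""
    return Tainted(value)

def sanitise(value: str) -> str:
    """Sanitiser: returns a plain (untainted) str with metacharacters stripped."""
    return "".join(ch for ch in value if ch.isalnum() or ch in "-_.")

def run_query(name: str) -> str:
    """Sink: refuses tainted input instead of executing it."""
    if isinstance(name, Tainted):
        raise ValueError("tainted data reached a sink")
    return f"SELECT * FROM users WHERE name = '{name}'"

user = from_user("alice'; DROP TABLE users; --")

try:
    run_query(user)  # blocked: taint flowed straight from source to sink
except ValueError as err:
    print(err)

print(run_query(sanitise(user)))  # allowed: the sanitiser cleared the taint
```

Static taint analysis does the same bookkeeping over code paths rather than at runtime, which is what makes it possible to flag risky source-to-sink flows (like the ForcedLeak chain) before anything executes.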
AI explainability and observability aren’t just technical terms anymore. They’re fast becoming board-level priorities. From ML to LLMs to AI Agents, every discipline demands a different approach. And as regulations like the EU AI Act and ISO 42001 take hold, the ability to explain, monitor, and govern systems isn’t optional anymore. In our latest article, Hanah-Marie Darley explores how enterprises can move beyond “black box” uncertainty, why behavioural observability is critical for agents, and how these practices help leaders innovate with clarity and confidence. Read it here 👉 https://lnkd.in/eX4iUP-G #AIAgents #EnterpriseAI #AIAdoption #AIExplainability #AIObservability #RiskManagement
Agentic AI is opening powerful new opportunities, and security leaders have the chance to ensure innovation grows on a foundation of clarity and confidence. Explainability and observability are key building blocks that make responsible adoption possible. They give enterprises insight into how systems behave, the confidence to scale innovation, and the safeguards to meet regulatory expectations. The challenge is that these terms mean very different things depending on the context. Each discipline of AI - from ML to LLMs to agents - requires its own approach. Getting this right is how leaders keep teams safe, maintain compliance, and ensure agents act in line with business intent. In this article, I share how regulations like the EU AI Act and ISO 42001 apply in practice, why behavioural observability is emerging as the missing piece for agents, and what leaders can do to make these practices part of their adoption journey. Read here: https://lnkd.in/eNjJ6zwe #AIAgents #EnterpriseAI #RiskManagement #Observability #Explainability #Cybersecurity