GenAI Integration in Enterprise Security

Explore top LinkedIn content from expert professionals.

In March 2025, Prompt Security dropped what might be the most consequential product extension in GenAI security since the category's inception: real-time, identity- and context-based #authorization controls. For enterprise #security teams staring down a tidal wave of AI-powered data flow, this wasn't just a feature; it was the missing circuit breaker. Prompt's new layer lets enterprises authorize, redact, or mask model responses dynamically, based on who's asking and what they're allowed to see. It's not about slowing AI down. It's about keeping secrets secret when GenAI is running at full throttle.

Headquartered commercially in New York City with an R&D hub in Tel Aviv, Prompt Security isn't a recycled DLP rebrand or an IAM bolt-on. It's a purpose-built GenAI protection layer, launched in 2023 by CEO Itamar Golan and CTO Lior Drihem, both ex-Check Point and Orca Security heavyweights. Golan's fingerprints are all over the OWASP® Foundation Top 10 for #LLM Applications. Drihem holds 25+ patents and knows runtime architecture like a surgeon knows arteries. Together, they're architecting the semantic #firewall AI actually needs.

Prompt's platform intercepts every prompt and model response with sub-millisecond latency. It inspects not just the user but the context, down to #token-level granularity. It masks unauthorized output mid-inference, without wrecking the flow. Plug it into Okta or Microsoft Entra ID and it doesn't just know who you are; it knows what you're allowed to see, say, and receive in real time. Not with a delay. Now.

Investors noticed early. Jump Capital, Hetz Ventures, Ridge Ventures, Okta, and F5 came in hot. Since its Series A last November, Prompt has quintupled revenue, doubled its customer base, and secured #Fortune500 names like Royal Caribbean Group, Elementor, and 10x Banking. Calcalist listed them among the "50 most promising Israeli startups of 2025." Gartner flagged them as a likely near-term M&A candidate. Inbound offers? Between $200M and $300M, reportedly from Zscaler, F5, and Check Point. No surprise there: Prompt is protecting data while it's in motion, not while it's parked.

This isn't some speculative play on future threats. The EU AI Act and U.S. #ExecutiveOrder14110 already demand data-in-use logging, access control, and redaction. Prompt's architecture bakes that in. And while Cisco, Palo Alto Networks, and Microsoft scramble to bolt AI controls onto legacy stacks, Prompt's thirty-person squad, many from Unit 8200, is already running policy-as-code pipelines through live inference.

Watch this team closely. Whether Prompt takes an exit or raises a rumored $50M to $100M Series B, one thing's clear: the next AI-cyber standard is being written in real time.

#AI #GenAI #Data #DataDriven #Security #CyberSecurity #Privacy #Enterprise #EnterpriseTech #EnterpriseAI #SaaS #Technology #Innovation #TechEcosystem #StartupEcosystem #TechNews
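Prompt Security has not published its internals, so here is only a minimal sketch of the pattern the post describes: identity-aware masking of model output before it reaches the user. Everything below is hypothetical; the detectors, group names, and policy table are invented for illustration, and a production system would use trained entity classifiers and IdP group claims rather than hard-coded regexes.

```python
# Hypothetical sketch of identity-aware response redaction. This illustrates
# the general pattern, not Prompt Security's actual implementation.
import re
from dataclasses import dataclass

@dataclass
class User:
    id: str
    groups: set[str]  # e.g. group claims resolved from an IdP such as Okta or Entra ID

# Naive regex detectors stand in for real entity classifiers.
DETECTORS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

# Example policy: which entity types each group may see unmasked.
ALLOWED = {
    "finance": {"credit_card"},
    "hr": {"ssn"},
}

def authorize_response(user: User, model_output: str) -> str:
    """Mask every detected entity the user's groups are not cleared to see."""
    cleared = set().union(*(ALLOWED.get(g, set()) for g in user.groups))
    for entity, pattern in DETECTORS.items():
        if entity not in cleared:
            model_output = pattern.sub(f"[REDACTED:{entity}]", model_output)
    return model_output

# Usage: an engineer with no clearance asks the model about a customer record.
masked = authorize_response(
    User(id="u1", groups={"engineering"}),
    "Customer SSN is 123-45-6789, card 4111 1111 1111 1111.",
)
print(masked)  # Customer SSN is [REDACTED:ssn], card [REDACTED:credit_card].
```

The design point is where the check sits: between the model and the user, so masking happens per response and per identity, not per dataset at rest.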
-
A benefit of conducting our Technology Road Map study each year is the ability to see which information security technology implementation plans come to fruition, since many factors can interrupt or delay start-of-year project plans, including newly identified pressing priorities. One technology for which plans to implement converted to usage jumps out in our 2025 study: security for generative AI.

This broad category covers security vendors' products that approach securing generative AI use and platforms through various means, including AI application assessment, assistance in model selection, monitoring, guardrails, usage controls, policy assistance, and runtime protections. This growth is pinned mainly to the general increase in use of #GenAI, and the space is concerned with protecting against or mitigating threats unique to AI solutions, such as model poisoning, prompt injection, jailbreaking, and related unintended model outcomes.

In 2024, 15% of security professionals responding to this survey noted they had GenAI security solutions in place, with another 23% conducting pilots or proof-of-concept exercises. In 2025, 30% of survey-takers note having GenAI security solutions in place. That comes in slightly lower than projected, but it represents a serious growth rate for these technologies in surveyed enterprises.

This attention on securing GenAI is also reflected in M&A for the companies in this space, with Robust Intelligence being picked up by Cisco, Palo Alto Networks announcing its intention to acquire Protect AI, and Snyk's recently announced acquisition of Invariant Labs to secure agentic AI.
-
🚨 AI Governance Isn't Optional Anymore: CISOs and Boards, Take Note

As AI systems become core to business operations, regulators are catching up fast, and CISOs are now squarely in the spotlight. Whether you're facing the EU AI Act, U.S. Executive Orders, or the new ISO/IEC 42001, here's what CISOs need to start doing today:

✅ Inventory all AI/ML systems – Know where AI is being used internally and by your vendors (a minimal inventory sketch follows this list).
✅ Establish AI governance – Form a cross-functional team and own the AI risk management policy.
✅ Secure the ML pipeline – Protect training data, defend against poisoning, and monitor model drift.
✅ Ensure transparency & explainability – Especially for high-risk systems (e.g., hiring, finance, health).
✅ Update third-party risk assessments – Require AI-specific controls, model documentation, and data handling practices.
✅ Control GenAI & Shadow AI – Set usage policies, monitor access, and prevent unintentional data leaks.
✅ Stay ahead of regulations – Track the EU AI Act, NIST AI RMF, ISO 42001, and others.

🔐 AI is no longer just a data science topic; it's a core risk domain under the CISO's scope. The question is: are you securing the models that are shaping your business decisions?

#AICompliance #CISO #CyberSecurity #AIRegulations #EUAIAct #NIST #ISO42001 #MLOpsSecurity #Governance #ThirdPartyRisk #GenAI #AIAccountability #SecurityLeadership
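To make the first checklist item concrete, here is a minimal sketch of what a structured AI/ML inventory record could look like. The field names are assumptions chosen for illustration, not a schema mandated by ISO/IEC 42001, NIST AI RMF, or the EU AI Act.

```python
# Illustrative sketch of an AI/ML system inventory record. Field names are
# assumptions for this example, not mandated by any standard or regulation.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str                                # accountable business owner
    use_case: str
    risk_tier: str                            # e.g. "minimal" | "limited" | "high", echoing EU AI Act framing
    vendor: str | None = None                 # None for in-house systems
    handles_personal_data: bool = False
    training_data_sources: list[str] = field(default_factory=list)
    last_reviewed: str = ""                   # ISO date of last governance review

inventory = [
    AISystemRecord(
        name="resume-screener",               # hypothetical example system
        owner="HR Ops",
        use_case="candidate shortlisting",
        risk_tier="high",                     # hiring is treated as high-risk under the EU AI Act
        vendor="ExampleVendor",               # hypothetical
        handles_personal_data=True,
        last_reviewed="2025-06-01",
    ),
]

# A governance team can then query the inventory, e.g. to flag high-risk
# systems whose last review predates the current review cycle:
overdue = [s.name for s in inventory
           if s.risk_tier == "high" and s.last_reviewed < "2025-01-01"]
print(overdue)  # -> [] (the example record was reviewed in June 2025)
```

Even a flat list like this gives the cross-functional governance team something to own, review, and report against.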
-
The AI Stack is the New Operating Model

If you haven't watched Andrej Karpathy's YC talk on this yet, please do so; you won't regret it: https://lnkd.in/emZa9iME If you are short on time, I summarized his key takeaways in the carousel below.

In short, GenAI won't just change software; it's redefining how enterprises operate. But that also means enterprises can't see AI as just one layer to plug in. It's a whole stack revamp.

I love the overview of the GenAI stack below from Andreas Horn, because it shows how each layer of the stack represents a new decision point: vendor selection, compliance risk, data security, IP control. For CIOs and COOs, the GenAI stack becomes the blueprint for how their next-generation workflows will run, and where cost and risk concentrate.

1. Cloud Hosting & Inference → AWS, Azure, GCP, NVIDIA
The backbone of every GenAI system, delivering the scalable compute and infrastructure needed to train and run models efficiently at production scale.

2. Foundation Models → GPT, Claude, Gemini, Mistral, DeepSeek
The core intelligence layer: powerful pre-trained models capable of reasoning, generating, and adapting across diverse tasks and domains.

3. Frameworks → LangChain, HuggingFace, FastAPI
The orchestration layer, providing tools for developers to design structured workflows, chains, and agentic behavior on top of large models.

4. Vector DBs & Orchestration → Pinecone, Weaviate, Milvus, LlamaIndex
Serve as memory and retrieval engines, connecting unstructured data to AI systems and enabling capabilities like RAG and long-term agent context.

5. Fine-Tuning → Weights & Biases, HuggingFace, OctoML
Tooling and processes that tailor foundation models to specific domains, use cases, or proprietary data, improving accuracy and relevance.

6. Embeddings & Labeling → Cohere, ScaleAI, JinaAI, Nomic
Turn raw inputs into machine-readable signals, powering semantic search, similarity detection, and labeled datasets for supervised learning.

7. Synthetic Data → Gretel, Tonic AI, Mostly
Generate high-fidelity, privacy-preserving datasets, especially valuable when real-world data is scarce, regulated, or sensitive.

8. Model Supervision → WhyLabs, Fiddler, Helicone
Enable real-time visibility into model behavior through monitoring, debugging, and tracing to ensure performance, reliability, and transparency.

9. Model Safety → LLM Guard, Arthur AI, Garak
Provide output controls and safeguards, enforcing ethical boundaries, compliance policies, and trust standards for safe AI deployment.

📷: ByteByteGo, Andrej Karpathy

If you enjoy these insights and news, sign up for my newsletter: https://lnkd.in/eiv5a3tT

Did you find this helpful?
♻️ Repost this to inform your network
🔔 Follow me for AI insights and career advice: https://lnkd.in/eVXiMwT6
🔖 Subscribe to my newsletter: https://lnkd.in/eiv5a3tT
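One way to make the layer boundaries concrete is a toy sketch of how layers 2 through 4 compose into a retrieval-augmented generation (RAG) flow. Every client below is a generic placeholder invented for illustration, not any vendor's real SDK; in practice you would swap in an actual model API and a vector database such as those named in the list above.

```python
# Minimal sketch of how stack layers 2-4 compose in a RAG flow.
# All components are self-contained placeholders, not vendor code.
from dataclasses import dataclass

def embed(text: str) -> list[float]:
    # Toy deterministic "embedding"; production systems call an embedding model (layer 6).
    return [((hash(text) >> shift) & 0xFF) / 255.0 for shift in range(0, 32, 8)]

@dataclass
class VectorDB:  # layer 4: memory & retrieval
    docs: list[tuple[list[float], str]]

    def search(self, query_vec: list[float], k: int = 3) -> list[str]:
        # Brute-force dot-product scoring; real vector DBs use ANN indexes.
        scored = sorted(self.docs,
                        key=lambda d: -sum(a * b for a, b in zip(d[0], query_vec)))
        return [text for _, text in scored[:k]]

def llm(prompt: str) -> str:  # layer 2: foundation model, stubbed out
    return f"[model answer grounded in: {prompt[:60]}...]"

def rag_answer(db: VectorDB, question: str) -> str:  # layer 3: orchestration
    context = "\n".join(db.search(embed(question)))
    return llm(f"Context:\n{context}\n\nQuestion: {question}")

# Usage: index three placeholder documents, then ask a question against them.
db = VectorDB([(embed(t), t) for t in ["Policy doc A", "Runbook B", "FAQ C"]])
print(rag_answer(db, "What does Policy doc A say?"))
```

The point of the sketch is the decision boundaries: each function corresponds to a layer where an enterprise picks a vendor, and each hand-off (embeddings, retrieved context, prompts) is a place where data security and IP control questions arise.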