I recently had the pleasure of joining Brian Sozzi on Yahoo! Finance. We talked about SailPoint’s Q2 FY26 results, AI agents, and our expanded Identity University. More on each below — and check out the interview for the full discussion. Earnings: We delivered another exceptionally strong quarter—beating on earnings and raising our full-year outlook—driven by rising demand for identity security as enterprises face new risks from AI and machine identities. AI agents: Agents are at the center of this shift. They bring major productivity gains but also new security challenges. That’s why we’re building solutions to govern AI agents across their full lifecycle—giving enterprises the visibility and control they need. Identity University: We expanded our training program to open access to the broader community and help close the cybersecurity skills gap. Because identity security isn’t just about technology; it’s also about ensuring people have the expertise to put it into practice. Full interview: https://lnkd.in/gZ49tjfC
SailPoint's Q2 FY26 results and AI agents on Yahoo! Finance
More Relevant Posts
-
A chameleon adapts to its environment to thrive. Your identity program must do the same — evolving with new threats, expanding access needs, and the rise of AI. SailPoint Horizons 2025 shows where leaders pull ahead: - Identity investments that pay off 10x or more - Governance for fast-growing AI + machine identities - Identity elevated as core enterprise infrastructure - Programs that mature as capability thresholds rise The future of identity security is adaptive and measurable. Explore the full research: https://slpnt.co/46CFiPl
To view or add a comment, sign in
-
-
You already have AI agents in your stack. But do you know which ones hold sensitive privileges or what they’ve been doing? A SailPoint survey found 82% of organizations use AI agents, yet only 44% have policies in place governing them. In the same survey, 96% said AI agents are a growing security risk, while 98% plan to expand their use in the next year. That gap between adoption and control is where threats fester. Agents move, multiply, access systems. Without identity discipline, you don’t just risk data; you risk compliance, reputation, and operational collapse. You need to treat every agent like a strong non-human identity. Assign each agent a named owner. Limit its permissions to only what’s strictly necessary. Monitor its behavior in real-time, with alerts on any privilege creep. At Kubiya.ai, we build control planes that map every agent, its access, and its actions, so you don’t wake up to a security firestorm. Are you confident you could find every AI agent with privileged access in your environment, if asked tomorrow?
To view or add a comment, sign in
-
-
‼️ Oh NO! I was shocked to learn that 60% of organizations have experienced data breaches or theft in non-production environments—a 11% increase from last year! ⚠️ These overlooked areas often contain sensitive data, creating serious security risks. Our 2025 State of Data Compliance and Security Report, I co-authored with Steve Karam and Ross Millenacker, is coming out next week, and it dives into why sensitive data volumes in non-prod is growing, the risks it pauses and the challenges to address them. 👀 We also explore confusion around using sensitive data in AI, the popularity of various masking solutions, the problem of compliance exceptions, and much more. Stay tuned for the full report next Tuesday! #DataSecurity #DataCompliance #NonProductionData
To view or add a comment, sign in
-
-
𝗧𝗵𝗲 𝗔𝗜 𝗪𝗼𝗿𝗺: 𝘄𝗵𝗲𝗻 𝗔𝗜 𝘁𝘂𝗿𝗻𝘀 𝗶𝗻𝘁𝗼 𝗮 𝘀𝗲𝗹𝗳-𝗿𝗲𝗽𝗹𝗶𝗰𝗮𝘁𝗶𝗻𝗴 𝗮𝘁𝘁𝗮𝗰𝗸𝗲𝗿. 𝗔𝗻 𝗔𝗜 𝘄𝗼𝗿𝗺 is a new class of 𝗺𝗮𝗹𝘄𝗮𝗿𝗲 that doesn’t exploit software bugs. Instead, it 𝗲𝗺𝗯𝗲𝗱𝘀 𝗺𝗮𝗹𝗶𝗰𝗶𝗼𝘂𝘀 𝗽𝗿𝗼𝗺𝗽𝘁𝘀 𝗶𝗻𝘁𝗼 𝗔𝗜 outputs, text or images, so that other 𝗔𝗜 𝘀𝘆𝘀𝘁𝗲𝗺𝘀 or 𝘂𝘀𝗲𝗿𝘀 𝘁𝗵𝗮𝘁 𝗽𝗿𝗼𝗰𝗲𝘀𝘀 𝘁𝗵𝗼𝘀𝗲 𝗼𝘂𝘁𝗽𝘂𝘁𝘀 𝘂𝗻𝗸𝗻𝗼𝘄𝗶𝗻𝗴𝗹𝘆 𝗲𝘅𝗲𝗰𝘂𝘁𝗲 𝗮𝗰𝘁𝗶𝗼𝗻𝘀 𝘁𝗵𝗮𝘁 𝘀𝗽𝗿𝗲𝗮𝗱 𝘁𝗵𝗲 𝘄𝗼𝗿𝗺 𝗼𝗿 𝗹𝗲𝗮𝗸 𝘀𝗲𝗻𝘀𝗶𝘁𝗶𝘃𝗲 𝗱𝗮𝘁𝗮. The proof-of-concept “𝗠𝗼𝗿𝗿𝗶𝘀 𝗜𝗜” is named after the 𝟭𝟵𝟴𝟴 𝗠𝗼𝗿𝗿𝗶𝘀 𝗪𝗼𝗿𝗺. It demonstrates how this concept could re-emerge in modern 𝗔𝗜 𝗲𝗰𝗼𝘀𝘆𝘀𝘁𝗲𝗺𝘀, 𝗲𝘅𝗽𝗹𝗼𝗶𝘁𝗶𝗻𝗴 𝗴𝗲𝗻𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗔𝗜 𝗮𝘀𝘀𝗶𝘀𝘁𝗮𝗻𝘁𝘀, 𝗥𝗔𝗚 𝗽𝗶𝗽𝗲𝗹𝗶𝗻𝗲𝘀, 𝗮𝗻𝗱 𝗰𝗼𝗻𝗻𝗲𝗰𝘁𝗲𝗱 𝘄𝗼𝗿𝗸𝗳𝗹𝗼𝘄𝘀 𝘁𝗼 𝗿𝗲𝗽𝗹𝗶𝗰𝗮𝘁𝗲 𝗮𝘂𝘁𝗼𝗻𝗼𝗺𝗼𝘂𝘀𝗹𝘆 inside an organization. #𝗛𝗼𝘄 𝗶𝘁 𝗮𝘁𝘁𝗮𝗰𝗸𝘀 𝗶𝗻𝘀𝗶𝗱𝗲 𝗮𝗻 𝗼𝗿𝗴𝗮𝗻𝗶𝘇𝗮𝘁𝗶𝗼𝗻: • Inserts adversarial prompts into AI outputs • Replicates automatically via AI assistants, chatbots, or RAG databases • Can exfiltrate sensitive data or send spam, all while blending with normal AI activity Awareness is your first line of defence, understand AI Worms before they understand your systems. ------------------------------------------------------------------------------- New here? Welcome to CyberAssure’s Cybersecurity Awareness Month! Throughout October, we’ll be sharing daily insights to keep you informed in the digital world. Join us for practical tips, interactive lessons from case scenarios, fascinating facts, infographics, and insightful articles—stay tuned for more valuable content and fresh perspectives daily! ------------------------------------------------------------------------------ #CyberSecurityAwarenessMonth #TheAIWorm #MorrisII #AIThreats #CyberAssure #CIO #CISO #DPO #CISOs #CIOs #DPOs #CROs
To view or add a comment, sign in
-
-
Our CEO, Rachael Greaves, sat down with Emma McGrattan and Ole Olesen-Bagneux to talk about why records and information management is more relevant (and more technical) than ever. In the episode, Rachael shares: ➡️ Why deletion can sometimes be the safest data lifecycle decision ➡️ What ethical AI means in practice, and why explainability matters ➡️ How Castlepoint uses rules-as-code to bring governance to even the messiest datasets ➡️ Her advice for anyone – especially women – looking to build a career in regtech, cyber or data governance It’s a chat that connects compliance, security, metadata and meaning, and highlights why clear rules are the foundation for AI. Tune in here 🎧 https://lnkd.in/g6nAeF84
To view or add a comment, sign in
-
-
AI in IAM: Is it just talk or the real future of identity security? IAM is not just about who can access what anymore. It now does much more. It is turning into a system that is active and smart, all because of AI. → AI helps look at risk in a new way. → Authentication that changes learns how users act. → Finding identity threats sees odd things as they happen. These things are more than just small fixes. They show a big change in how we think about identity security. AI lets IAM solutions do these things: → Handle identity life cycles without people doing it. → Make fraud finding better. → Make security better overall. Think about these AI features: → Behavioral biometrics looks at habits. → Machine learning finds access that is not normal. → Natural language helps users ask questions in a simple way. These steps forward are changing IAM into a system that learns on its own and acts before problems start. But there are problems too. → We must think about keeping data private. → We need to watch for bias in how things are done. → People are still very important. AI in IAM is not just talk. It is the real future of identity security. → Use AI IAM to keep ahead of threats. → Look at the newest AI features in IAM solutions. → Tell us what you think about AI in identity security. ---- If you found this useful, contact me for further discussion.
To view or add a comment, sign in
-
Data. Identity. Sovereignty. In the age of AI and automation, these aren’t just cybersecurity terms, they’re the last lines of defense for our humanity. We’ve entered a world where algorithms know more about us than our closest friends. Where our digital identities can be cloned in seconds. And where “data breaches” no longer just mean lost records — they mean lost selves. The scary part? Every swipe, click, search, and submission contributes to an identity that is co-owned by the platforms we use, the companies we work for, and the systems that train on us. So I’m bullish in saying, security in the age of AI is no longer about protecting systems. It’s about protecting sovereignty and reducing exposure. And that’s why CDOs, CIOs, and CISOs must unite like never before. 🙏🏼The CIO can’t just think infrastructure. 🙏🏼The CDO can’t just think data quality. 🙏🏼The CISO can’t just think threat response. They must unite and think together — because our data, identity, and intellectual property have merged into one digital ecosystem. And that’s why, we created the ☄️AI Security School ☄️- new kind of institution designed to equip leaders and organizations to protect what matters most: identity, IP, and data in an AI-driven world. Our mission is simple but non-negotiable: ✅ Build governance that scales with automation. ✅ Create security frameworks that evolve as AI evolves. ✅ Empower every function — not just IT — to understand what security means in this new era. Because this isn’t a “nice-to-have” it’s now an operational imperative. 💡 Recommendation 1: Audit your organization’s AI exposure — where models are learning from your data without permission. 💡 Recommendation 2: Establish joint security councils with CDOs, CIOs, and CISOs — AI risk doesn’t belong to one function anymore. 💡Recommendation 3: Enroll in our newly established AI Security School Protect your data. Protect your identity. Protect your sovereignty. Because in the end — security is what will determine whether AI scales humanity, or erases it.
To view or add a comment, sign in
-
𝗪𝗲𝗹𝗰𝗼𝗺𝗲 𝘁𝗼 𝗔𝗽𝗽𝘀𝘂𝗿𝗲𝗻𝘁 𝗖𝘆𝗯𝗲𝗿 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 | 𝗣𝗿𝗼𝘂𝗱𝗹𝘆 𝗖𝗮𝗻𝗮𝗱𝗶𝗮𝗻, 𝗠𝗮𝗻𝘂𝗮𝗹-𝗙𝗶𝗿𝘀𝘁 𝗦𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗧𝗲𝘀𝘁𝗶𝗻𝗴 For years, we’ve helped protect Canada’s critical infrastructure, financial systems, and government applications. Now we’re stepping out of the shadows and ramping up our presence here on LinkedIn to share what we’ve learned. At Appsurent, we specialize in manual, adversary-driven security testing that uncovers what automated scanners and AI tools miss. While much of the industry leans on automation, we focus on what machines can’t do: understanding context, business logic, and the creative ways real attackers think. What makes us different: ✔ Every engagement delivered hands-on by senior practitioners ✔ 100% Canadian team, 100% senior practitioners leading, always manual analysis ✔ Just real adversary-driven assessments that deliver clarity and confidence We’ll share insights on application security, AI/LLM security, and the evolving threat landscape. We’re not here to sell fear or compliance checkboxes. We’re here to share what actually works in defending modern applications against real adversaries. 🔍 Follow us for insights from the front lines of application and AI security or reach out anytime for help securing your applications. #ApplicationSecurity #CanadianCyber #PenetrationTesting — 🌐 Follow us for insights from the front lines of application and AI security. 🇨🇦 Proudly Canadian-owned | OSCP, OSCE, CISSP | Founded by 15+ year industry veterans 🔗 https://www.appsurent.com
To view or add a comment, sign in
-
The Critical Gap: Why Your AI Strategy is Creating a New Family Office Cyber Risk. As family offices increasingly adopt AI for everything from data analysis to portfolio optimization, a critical question remains: How are we securing these powerful tools? While recent reports from Goldman Sachs highlight the broad use of AI by family offices, they also underscore a significant, unaddressed need for a strong cybersecurity framework around this technology. The Gap: From AI Adoption to AI Security According to the latest Goldman Sachs Family Office Investment Insights Report, AI adoption is widespread, with family offices using it for: • Data analysis & insights (85%) • Research (82%) • Automation & productivity (70%) • Risk management (23%) While these numbers show a clear embrace of innovation, the report does not provide specifics on how these tools are being secured, creating a potential blind spot for operational risk. Securing Your AI: Essential Guardrails Securing AI is not just about locking down a server; it's about protecting the data that trains and fuels the models. To prevent data leakage, loss, and attacks, family offices should consider the following solutions and best practices: • Establish a Zero Trust Framework: Assume that no user or device can be trusted by default. This approach requires strict verification for anyone trying to access AI tools and the data they use. • Data Governance and Access Control: Implement strong data governance policies to classify and protect sensitive information. Use role-based access controls to ensure only authorized personnel can interact with AI models and their data sets. • Partner with Specialized Expertise: Given the unique risks, family offices are often turning to external cybersecurity firms and consultants specializing in AI and financial technology. These partners can help set up a robust security posture, from threat modeling to continuous monitoring. • Conduct Regular Audits: Regularly audit your AI systems for vulnerabilities. This includes checking for data input manipulation, model poisoning, and other emerging threats unique to machine learning models. • Implement Confidential Computing: For highly sensitive data, consider leveraging confidential computing environments. This technology encrypts data even while it's being processed, ensuring that even a compromised system cannot reveal the underlying information. Protecting your AI is a critical investment in preserving generational wealth. By being proactive and implementing a comprehensive security strategy, family offices can fully leverage AI’s benefits without exposing themselves to undue risk. #familyoffices #familycapital #wealthmanagers #familyoffice #sfo #familygovernance
To view or add a comment, sign in
-
-
🚨 𝗔𝗜-𝗣𝗼𝘄𝗲𝗿𝗲𝗱 𝗗𝗮𝘁𝗮 𝗥𝗲𝘀𝘂𝗿𝗿𝗲𝗰𝘁𝗶𝗼𝗻: 𝗖𝗮𝗻 𝗠𝗮𝗰𝗵𝗶𝗻𝗲𝘀 𝗥𝗲𝗯𝘂𝗶𝗹𝗱 𝗪𝗵𝗮𝘁’𝘀 𝗟𝗼𝘀𝘁? 🚨 Data loss has always been one of our worst digital nightmares—precious family photos gone forever, business-critical files erased in an instant, years of work vanishing with a single click. For decades, recovery has been a slow, uncertain gamble… until now. Enter 𝗔𝗜-𝗽𝗼𝘄𝗲𝗿𝗲𝗱 𝗱𝗮𝘁𝗮 𝗿𝗲𝘀𝘂𝗿𝗿𝗲𝗰𝘁𝗶𝗼𝗻—a game-changing leap where machines don’t just 𝘳𝘦𝘤𝘰𝘷𝘦𝘳 information, they intelligently 𝗿𝗲𝗰𝗼𝗻𝘀𝘁𝗿𝘂𝗰𝘁 𝘄𝗵𝗮𝘁’𝘀 𝗯𝗿𝗼𝗸𝗲𝗻. Imagine corrupted JPEGs restored with uncanny accuracy, damaged PDFs rebuilt piece by piece, and failing hard drives diagnosed by AI before they collapse. What once took weeks of manual effort can now happen in minutes. ⚡But this isn’t just about recovery—it’s about 𝗿𝗲𝘀𝗶𝗹𝗶𝗲𝗻𝗰𝗲. ✅ AI predicts failures before they happen. ✅ It recognizes unusual data patterns that signal malware or ransomware. ✅ It isolates compromised files to stop disasters in their tracks. In short, AI is rewriting the rules of digital safety—turning panic into preparedness, and loss into continuity. Of course, challenges remain. AI can “hallucinate” false data, security must be airtight, and ethical lines blur when we talk about resurrecting more than just files. But one truth is undeniable: 𝘁𝗵𝗲 𝗳𝘂𝘁𝘂𝗿𝗲 𝗼𝗳 𝗱𝗮𝘁𝗮 𝗿𝗲𝗰𝗼𝘃𝗲𝗿𝘆 𝗶𝘀𝗻’𝘁 𝗿𝗲𝗮𝗰𝘁𝗶𝘃𝗲—𝗶𝘁’𝘀 𝗶𝗻𝘁𝗲𝗹𝗹𝗶𝗴𝗲𝗻𝘁, 𝗽𝗿𝗲𝗱𝗶𝗰𝘁𝗶𝘃𝗲, 𝗮𝗻𝗱 𝘂𝗻𝘀𝘁𝗼𝗽𝗽𝗮𝗯𝗹𝗲. 🔒 Are you ready for the shift from recovery to 𝗿𝗲𝘀𝘂𝗿𝗿𝗲𝗰𝘁𝗶𝗼𝗻? Protect your business with the future of IT. Take help from 👉 https://lnkd.in/eNQ2czFT #DataProtection #Backup #CloudBackUp #Technology #CloudComputing #NetworkingSecurity #DigitalTransformation #InformationSecurity #CloudSecurity #CloudComputing #DigitalTransformation #Cybersecurity #InfoSec #ZeroTrust #CybersecurityAwarness #DataProtction
To view or add a comment, sign in
-
More from this author
Explore content categories
- Career
- Productivity
- Finance
- Soft Skills & Emotional Intelligence
- Project Management
- Education
- Technology
- Leadership
- Ecommerce
- User Experience
- Recruitment & HR
- Customer Experience
- Real Estate
- Marketing
- Sales
- Retail & Merchandising
- Science
- Supply Chain Management
- Future Of Work
- Consulting
- Writing
- Economics
- Artificial Intelligence
- Employee Experience
- Workplace Trends
- Fundraising
- Networking
- Corporate Social Responsibility
- Negotiation
- Communication
- Engineering
- Hospitality & Tourism
- Business Strategy
- Change Management
- Organizational Culture
- Design
- Innovation
- Event Planning
- Training & Development