AI is not failing because of bad ideas; it’s "failing" at enterprise scale because of two big gaps:
👉 Workforce Preparation
👉 Data Security for AI

While I speak globally on both topics in depth, today I want to educate us on what it takes to secure data for AI—because 70–82% of AI projects pause or get cancelled at the POC/MVP stage (source: #Gartner, #MIT). Why? One of the biggest reasons is a lack of readiness at the data layer.

So let’s make it simple: there are 7 phases to securing data for AI—and each phase carries direct business risk if ignored.

🔹 Phase 1: Data Sourcing Security - Validating the origin, ownership, and licensing rights of all ingested data.
Why It Matters: You can’t build scalable AI with data you don’t own or can’t trace.

🔹 Phase 2: Data Infrastructure Security - Ensuring the data warehouses, lakes, and pipelines that support your AI models are hardened and access-controlled.
Why It Matters: Unsecured data environments are easy targets for bad actors, leaving you exposed to data breaches, IP theft, and model poisoning.

🔹 Phase 3: Data In-Transit Security - Protecting data as it moves across internal or external systems, especially between cloud, APIs, and vendors.
Why It Matters: Intercepted training data = compromised models. Think of it as shipping cash across town in an armored truck—or on a bicycle—your choice.

🔹 Phase 4: API Security for Foundational Models - Safeguarding the APIs you use to connect with LLMs and third-party GenAI platforms (OpenAI, Anthropic, etc.). A minimal guardrail sketch follows below.
Why It Matters: Unmonitored API calls can leak sensitive data into public models or expose internal IP. This isn’t just tech debt. It’s reputational and regulatory risk.

🔹 Phase 5: Foundational Model Protection - Defending your proprietary models and fine-tunes from external inference, theft, or malicious querying.
Why It Matters: Prompt injection attacks are real. And your enterprise-trained model? It’s a business asset. You lock your office at night—do the same with your models.

🔹 Phase 6: Incident Response for AI Data Breaches - Having predefined protocols for breaches, hallucinations, or AI-generated harm—who’s notified, who investigates, how damage is mitigated.
Why It Matters: AI-related incidents are happening. Legal needs response plans. Cyber needs escalation tiers.

🔹 Phase 7: CI/CD for Models (with Security Hooks) - Continuous integration and delivery pipelines for models, embedded with testing, governance, and version-control protocols.
Why It Matters: Shipping models like software means risk arrives faster—and so must detection. Governance must be baked into every deployment sprint.

Want your AI strategy to succeed past MVP? Focus on the data, and lock it down.

#AI #DataSecurity #AILeadership #Cybersecurity #FutureOfWork #ResponsibleAI #SolRashidi #Data #Leadership
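Phase 4 is the easiest to make concrete. Here is a minimal sketch of a pre-flight redaction check that runs before any prompt leaves for a third-party LLM API. The regex patterns and function names are illustrative assumptions; a production deployment would use a vetted DLP or classification service rather than hand-rolled regexes.

```python
import re

# Hypothetical patterns for illustration only -- a real deployment would use
# a vetted DLP/classification service, not hand-rolled regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Replace sensitive spans with placeholders before any external API call."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt, findings

clean, found = redact_prompt("Contact jane@acme.com, SSN 123-45-6789.")
print(clean)  # Contact [REDACTED-EMAIL], SSN [REDACTED-SSN].
print(found)  # ['ssn', 'email']
```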
-
For the first time in history, the #1 hacker in the US is AI… but as the threats have been evolving, so have the solutions. Over the past year, the focus for all major players has shifted to building an AI-enhanced SOC (Security Operations Center). Every company has a different approach, but the key trend has been building out data infrastructure and response capabilities on top of the data that companies already have.

Here are the key components of the Agentic AI SOC:
◾ Sources of Data
◾ Data Infrastructure
◾ Response and Decision Layer
◾ AI Agents that act on these insights

While the ultimate goal is to create AI Agents, that is not necessarily where the value lies. Companies were able to whip up AI Agents shortly after the first LLMs were introduced. I think the value will be in the data—both the Sources of Data and the Data Infrastructure Layer.

1. Sources of Data: This stems from a large installed customer base. Here, leaders in Network, Endpoint, Identity, and Cloud security have a significant advantage, as they already possess large amounts of data.

2. Data Infrastructure: This is an emerging area where there is ample room for new entrants to offer innovative solutions. It is also the primary source of acquisitions for large, publicly traded companies. As Francis Odum from Software Analyst Cyber Research put it: “We know that data sources are multiplying rapidly with GenAI. More tools mean > more data sent into SIEMs > more storage, costs, and alert noise! If we solve issues at the data sources (filter, normalize, threat intel enrichment, and importantly, fix detection rules, etc.), everything else will follow.” A sketch of that source-side filtering appears below.

In the next phase of cybersecurity, the winners will be those who can move from collecting data to orchestrating outcomes and build cohesive platforms.

Where do the public players stand today?
🟩 Companies that are building unique platforms are winning: Zscaler, Cloudflare, CrowdStrike, Palo Alto Networks
🟥 Companies that rely on antiquated technologies are losing: Splunk, Exabeam

We just published Spear's updated Cybersecurity Primer, which delves into recent cybersecurity trends and provides a lay of the cybersecurity landscape. You can access it here: https://lnkd.in/gWdRfxnz

#cybersecurity #ai #technology
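To make the data-infrastructure point concrete, here is a minimal sketch of source-side preprocessing in the spirit of the Odum quote: filter noise, normalize fields, and enrich with threat intel before anything hits the SIEM. The event schema, suppression list, and intel feed are invented for illustration, not a vendor format.

```python
from dataclasses import dataclass

# Illustrative only: field names and the noise rules are assumptions,
# not a real vendor schema.
@dataclass
class Event:
    source: str
    severity: int  # 0 (info) .. 10 (critical)
    raw: dict

NOISY_SOURCES = {"heartbeat", "dns-internal"}    # assumed suppression list
THREAT_INTEL = {"203.0.113.7": "known C2 node"}  # assumed enrichment feed

def preprocess(events: list[Event]) -> list[dict]:
    """Filter, normalize, and enrich events before forwarding to the SIEM."""
    out = []
    for e in events:
        if e.source in NOISY_SOURCES or e.severity < 3:
            continue  # drop noise at the source instead of paying to store it
        normalized = {
            "source": e.source,
            "severity": e.severity,
            "src_ip": e.raw.get("src_ip"),
        }
        # Threat-intel enrichment: tag events from known-bad infrastructure.
        normalized["intel"] = THREAT_INTEL.get(normalized["src_ip"])
        out.append(normalized)
    return out

events = [
    Event("heartbeat", 1, {}),
    Event("firewall", 7, {"src_ip": "203.0.113.7"}),
]
print(preprocess(events))  # only the enriched firewall event survives
```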
-
I’ve seen the evolution of security operations firsthand. From manual alert triage to partially automated workflows, we’ve made progress—but it’s still not enough. The volume of threats is overwhelming, and traditional SOC models can’t keep up.

Enter SOC 3.0. This AI-powered approach not only assists analysts but also enhances and speeds up their decision-making, transitioning security operations from reactive to proactive.

How SOC 3.0 Changes the Game:
- AI-Driven Triage & Remediation – Automatically classify, prioritize, and resolve alerts at scale (see the toy scorer below).
- Adaptive Detection & Correlation – AI continuously learns, reducing false positives and spotting novel threats.
- Automated Threat Investigations – AI surfaces key insights instantly, cutting investigation time from hours to minutes.
- Optimized Data Processing – Query data where it resides, eliminating unnecessary storage costs and vendor lock-in.

The bottom line? SOC 3.0 empowers human analysts, reduces burnout, and ensures faster, more accurate threat response.

Are you ready to embrace AI in your SOC? Let’s discuss.

🔗 Read more on the evolution of SOC and how AI is transforming security: https://lnkd.in/e2j2ZUUt

#Cybersecurity #SOC #AI #ThreatDetection #SecurityOperations
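As a thought experiment on the triage bullet, here is a toy priority scorer. The features and weights are assumptions for illustration; real SOC 3.0 tooling would learn them from analyst feedback rather than hard-code them.

```python
# Toy alert-triage scorer: features and weights are illustrative assumptions,
# not any vendor's model.
WEIGHTS = {"asset_criticality": 0.4, "intel_match": 0.35, "anomaly_score": 0.25}

def triage_score(alert: dict) -> float:
    """Blend signals into a 0-1 priority so the riskiest alerts surface first."""
    return sum(w * float(alert.get(k, 0.0)) for k, w in WEIGHTS.items())

alerts = [
    {"id": "A1", "asset_criticality": 1.0, "intel_match": 0.0, "anomaly_score": 0.2},
    {"id": "A2", "asset_criticality": 0.3, "intel_match": 1.0, "anomaly_score": 0.9},
]
# A2 outranks A1: a threat-intel match plus anomalous behavior beats
# asset criticality alone.
for alert in sorted(alerts, key=triage_score, reverse=True):
    print(alert["id"], round(triage_score(alert), 3))
```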
-
Yesterday, the National Security Agency Artificial Intelligence Security Center published the joint Cybersecurity Information Sheet "Deploying AI Systems Securely" in collaboration with the Cybersecurity and Infrastructure Security Agency, the Federal Bureau of Investigation (FBI), the Australian Signals Directorate’s Australian Cyber Security Centre, the Canadian Centre for Cyber Security, the New Zealand National Cyber Security Centre, and the United Kingdom’s National Cyber Security Centre.

Deploying AI securely demands a strategy that tackles AI-specific and traditional IT vulnerabilities, especially in high-risk environments like on-premises or private clouds. Authored by international security experts, the guidelines stress the need for ongoing updates and tailored mitigation strategies to meet unique organizational needs.

🔒 Secure Deployment Environment:
* Establish robust IT infrastructure.
* Align governance with organizational standards.
* Use threat models to enhance security.

🏗️ Robust Architecture:
* Protect AI-IT interfaces.
* Guard against data poisoning.
* Implement Zero Trust architectures.

🔧 Hardened Configurations:
* Apply sandboxing and secure settings.
* Regularly update hardware and software.

🛡️ Network Protection:
* Anticipate breaches; focus on detection and quick response.
* Use advanced cybersecurity solutions.

🔍 AI System Protection:
* Regularly validate and test AI models.
* Encrypt and control access to AI data (a minimal sketch follows below).

👮 Operation and Maintenance:
* Enforce strict access controls.
* Continuously educate users and monitor systems.

🔄 Updates and Testing:
* Conduct security audits and penetration tests.
* Regularly update systems to address new threats.

🚨 Emergency Preparedness:
* Develop disaster recovery plans and immutable backups.

🔐 API Security:
* Secure exposed APIs with strong authentication and encryption.

This framework helps reduce risks and protect sensitive data, ensuring the success and security of AI systems in a dynamic digital ecosystem.

#cybersecurity #CISO #leadership
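The "encrypt and control access to AI data" item is easy to prototype. Below is a minimal sketch using the Python cryptography library; the role list is an assumed policy, and a real deployment would pull keys from a KMS/HSM rather than generating them locally.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Minimal sketch of "encrypt and control access to AI data". Key handling is
# simplified for illustration; production systems use a KMS/HSM.
ALLOWED_ROLES = {"ml-engineer", "security-auditor"}  # assumed access policy

key = Fernet.generate_key()
fernet = Fernet(key)

def store_training_record(record: bytes) -> bytes:
    """Encrypt a training record at rest."""
    return fernet.encrypt(record)

def read_training_record(token: bytes, role: str) -> bytes:
    """Decrypt only for roles the policy allows."""
    if role not in ALLOWED_ROLES:
        raise PermissionError(f"role {role!r} may not read AI training data")
    return fernet.decrypt(token)

blob = store_training_record(b"user_id=42,label=fraud")
print(read_training_record(blob, "ml-engineer"))  # b'user_id=42,label=fraud'
```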
-
Revolutionizing ethical hacking—AI is changing the way we protect against cyber threats. Gone are the days of time-consuming manual assessments. With AI-driven tools, ethical hackers can identify and patch vulnerabilities faster and more effectively than ever before.

Here’s how AI is leading the charge in transforming ethical hacking:

1️⃣ Automated Vulnerability Scanning
↳ Tools like Senteon, CheckRed and CYRISMA automate the scanning process, quickly identifying security gaps such as SQL injections. This allows for more frequent checks and quicker fixes.

2️⃣ Enhanced Threat Detection
↳ AI analyzes vast data sets to detect abnormal patterns, adapting to new attack methods and enabling preemptive threat responses.

3️⃣ Natural Language Processing for Command Execution
↳ Tools like Nebula allow ethical hackers to input commands in simple language, improving speed and accessibility.

4️⃣ Intelligent Risk Prioritization
↳ AI ranks vulnerabilities by severity, helping hackers focus on the most critical threats first and allocate resources effectively (a toy ranking sketch follows below).

5️⃣ Continuous Learning and Improvement
↳ AI systems evolve by learning from past data and incidents, staying ahead of emerging threats and improving security responses over time.

AI is a game-changer in cybersecurity, making the fight against digital threats more efficient and proactive.

Where do you see AI taking cybersecurity next? Let’s chat in the comments!
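To illustrate intelligent risk prioritization, here is a toy ranking function. The weights are assumptions, but they capture the core idea: a medium-CVSS bug with a public exploit on an internet-facing host can outrank a higher raw CVSS score.

```python
# Toy prioritization: ranks findings by CVSS, exploit availability, and
# exposure. The weighting is an illustrative assumption, not a scanner's
# actual algorithm.
def vuln_priority(v: dict) -> float:
    score = v["cvss"] / 10.0      # normalize CVSS 0-10 to 0-1
    if v.get("exploit_available"):
        score += 0.3              # known exploits jump the queue
    if v.get("internet_facing"):
        score += 0.2
    return min(score, 1.0)

findings = [
    {"id": "CVE-A", "cvss": 9.8, "exploit_available": False, "internet_facing": False},
    {"id": "CVE-B", "cvss": 7.5, "exploit_available": True,  "internet_facing": True},
]
# CVE-B ranks first (1.0) despite its lower CVSS; CVE-A scores 0.98.
for f in sorted(findings, key=vuln_priority, reverse=True):
    print(f["id"], vuln_priority(f))
```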
-
Not long ago, attackers needed a team, weeks of planning, and a lot of trial and error to breach a system. Today, a well-tuned AI model can orchestrate an attack end-to-end without a human hand to guide it. The fact that AI can advance on its own and operate much faster than a human makes protecting sensitive information and systems a more difficult problem. Difficult doesn’t mean impossible.

At Equifax, we’ve already seen AI make a difference:
• Automated and AI-driven detection slashing our mean-time-to-detect to under 60 seconds.
• Automated anomaly hunting, lighting up blind spots for us in real time before they become breaches (see the sketch below).
• Red teams using LLMs to safely simulate adversaries and close gaps faster.

Threat actors aren’t waiting to upskill on AI, and neither should security teams. Here are 3 actions I recommend:
• Build AI literacy across all security roles, not just data scientists.
• Treat AI-powered adversaries as your baseline threat model, not a future risk.
• Lean into partnerships. The AI security community is your force multiplier.

As AI continues its rapid advancement, it's inevitable that both technology and attackers will evolve. Our focus must be on ensuring security teams outpace these evolving threats. 🛡️

#AI #Cybersecurity #Innovation #LLM #SecurityCommunity
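A minimal sketch of the anomaly-hunting idea, assuming a simple per-host event-rate baseline and a 3-sigma threshold (both invented for illustration; production detection uses far richer features and learned models):

```python
import statistics

# Flag hosts whose event volume sits far outside the recent baseline.
# The 3-sigma threshold is an assumption for illustration.
def find_anomalies(baseline: list[int], current: dict[str, int],
                   z_max: float = 3.0) -> dict[str, float]:
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline) or 1.0  # avoid divide-by-zero
    return {
        host: (count - mean) / stdev
        for host, count in current.items()
        if abs(count - mean) / stdev > z_max
    }

history = [40, 45, 38, 50, 42, 47, 44]  # events/min in the baseline window
print(find_anomalies(history, {"web-01": 46, "db-02": 180}))
# flags db-02 only: its z-score sits far above the 3-sigma threshold
```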
-
The Cyber Security Agency of Singapore (CSA) has published “Guidelines on Securing AI Systems” to help system owners manage security risks in the use of AI throughout the five stages of the AI lifecycle.

1. Planning and Design:
- Raise awareness and competency on security by providing training and guidance on the security risks of #AI to all personnel, including developers, system owners and senior leaders.
- Conduct a #riskassessment and supplement it with continuous monitoring and a strong feedback loop.

2. Development:
- Secure the #supplychain (training data, models, APIs, software libraries).
- Ensure that suppliers appropriately manage risks by adhering to #security policies or internationally recognized standards.
- Consider security benefits and trade-offs such as complexity, explainability, interpretability, and sensitivity of training data when selecting the appropriate model to use (#machinelearning, deep learning, #GenAI).
- Identify, track and protect AI-related assets, including models, #data, prompts, logs and assessments.
- Secure the #artificialintelligence development environment by applying standard infrastructure security principles like #accesscontrols and logging/monitoring, segregation of environments, and secure-by-default configurations.

3. Deployment:
- Establish #incidentresponse, escalation and remediation plans.
- Release #AIsystems only after subjecting them to appropriate and effective security checks and evaluation.

4. Operations and Maintenance:
- Monitor and log inputs (queries, prompts and requests) and outputs to ensure they are performing as intended (a minimal logging sketch follows below).
- Adopt a secure-by-design approach to updates and continuous learning.
- Establish a vulnerability disclosure process for users to share potential #vulnerabilities to the system.

5. End of Life:
- Ensure proper data and model disposal according to relevant industry standards or #regulations.
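Stage 4’s "monitor and log inputs and outputs" can start as small as a logging wrapper around inference. The sketch below is assumption-heavy: call_model is a stub standing in for your real model, and the print call stands in for a forwarder to your actual log pipeline.

```python
import json
import time
import uuid

def call_model(prompt: str) -> str:
    """Stub standing in for the real model or LLM API call."""
    return f"stub response to: {prompt}"

def logged_inference(prompt: str, user: str) -> str:
    """Log every query and output so drift and abuse are auditable."""
    record = {
        "request_id": str(uuid.uuid4()),
        "ts": time.time(),
        "user": user,
        "prompt": prompt,   # consider redacting before persisting
        "output": call_model(prompt),
    }
    print(json.dumps(record))  # replace with a forwarder to your log pipeline
    return record["output"]

logged_inference("Summarize today's alerts", user="analyst-7")
```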
-
According to its public disclosures, a recently hacked genetic testing company was breached for approximately five months, from April through September 2023 (about 150 days), before it became aware of the breach. See https://lnkd.in/eGcuMZ5T.

That may seem like a long time, but according to IBM and Ponemon, in 2023 it took the average company 204 days to identify that it had been breached and another 73 days to contain the breach. See https://lnkd.in/eFTF9XKQ at 13.

The longer an attacker has access to your systems, the more damage he or she can do, and the more likely the incident will be a "material" one that you are required to disclose. Indeed, according to IBM, breaches that can be identified and resolved within 200 days cost, on average, more than $1M (23%) less than breaches identified and contained in over 200 days. Id. at 7.

What are the most important steps your organization can take to reduce the time it takes to identify and contain an incident? According to IBM's research, focus on the following:

1. SECURITY AI & AUTOMATION: Organizations that extensively used security AI and automation were able to identify and contain a breach 34% faster than those that did not. Limited use of security AI and automation also made a significant impact, reducing the average time to identify and contain a breach by 28%. Id. at 53.

2. ATTACK SURFACE MANAGEMENT SOLUTION (ASM): Having an ASM solution reduced the time to identify and contain a breach by 25%. Id. at 60. According to Gartner, some popular ASM solutions include Microsoft Defender, CrowdStrike Falcon, and Palo Alto Cortex. See https://lnkd.in/emRd2dTY.

3. MANAGED SECURITY SERVICE PROVIDERS (MSSP): Organizations with MSSPs were able to identify and contain breaches 20% faster than those without MSSPs. See https://lnkd.in/eFTF9XKQ at 61.

4. IR TEAM AND TABLETOP EXERCISES: The dual strategy of forming an IR team and testing an IR plan reduced the time to identify and contain a breach by 19.4%. Testing the IR plan without forming a team was nearly as effective, resulting in a difference of 17%. Id. at 55.

5. AUTOMATED RESPONSE PLAYBOOKS OR WORKFLOWS: Organizations with automated response playbooks or workflows designed specifically for the type of attack that occurred (e.g., ransomware) were able to contain the incident 16% faster than those that did not have such playbooks or workflows. Id. at 35. (A toy dispatcher sketch follows below.)

6. THREAT INTELLIGENCE: Organizations that used threat intelligence uncovered breaches in 13.9% less time than those without a threat intelligence investment. Id. at 57.

7. INVOLVEMENT OF LAW ENFORCEMENT: Total time to identify and contain an incident was 11.4% lower with law enforcement involvement.

Consider taking some of these steps to reduce the amount of time it takes you to identify and contain a security incident and, thereby, reduce the impact of the incident.
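To make item 5 concrete, here is a toy playbook dispatcher. The attack types, step names, and integrations are illustrative assumptions, not a SOAR product’s API.

```python
# Toy playbook dispatcher: maps an attack type to a predefined response
# sequence. Step names are illustrative; each would call a real integration
# (EDR, IdP, email gateway) in practice.
PLAYBOOKS = {
    "ransomware": ["isolate_host", "disable_account", "snapshot_disk", "notify_ir"],
    "phishing": ["quarantine_email", "reset_credentials", "notify_user"],
}

def run_playbook(attack_type: str, target: str) -> None:
    steps = PLAYBOOKS.get(attack_type)
    if steps is None:
        raise ValueError(f"no playbook for {attack_type!r}; escalate to IR team")
    for step in steps:
        print(f"[{attack_type}] {step} -> {target}")

run_playbook("ransomware", "host-1042")
```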
-
Excited to share insights from Microsoft’s study on "Generative AI and Security Operations Center Productivity." This first-of-its-kind research reveals how generative AI is transforming cybersecurity operations.

Key findings:
🔹 30%+ reduction in Mean Time to Resolution for security incidents, consistently demonstrated across various modeling scenarios
🔹 Significant cost-saving potential: SOC analysts currently spend ~3 hours daily resolving incidents, contributing to a $3.3B cost in the U.S. alone
🔹 Enhanced threat identification accuracy and speed, allowing analysts to handle more incidents in less time

These findings underscore the transformative potential of tools like Microsoft Security Copilot in reducing security incident resolution times and improving SOC efficiency. Looking ahead, I'm excited to see how these GAI tools continue to evolve and strengthen the cybersecurity landscape.

#Cybersecurity #MicrosoftSecurity #GenAI #Copilot

Read the full study here:
-
In my work with organizations rolling out AI and generative AI solutions, one concern I hear repeatedly from leaders and the C-suite is how to get a clear, centralized “AI Risk Center” to track AI safety, large language models’ accuracy, citation, attribution, performance, and compliance. Operational leaders want automated governance reports—model cards, impact assessments, dashboards—so they can maintain trust with boards, customers, and regulators. Business stakeholders also need an operational risk view: one place to see AI risk and value across all units, so they know where to prioritize governance.

One such framework is MITRE’s ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) Matrix. This framework extends MITRE ATT&CK principles to AI, generative AI, and machine learning, giving us a structured way to identify, monitor, and mitigate threats specific to large language models. ATLAS addresses a range of vulnerabilities—prompt injection, data leakage, malicious code generation, and more—by mapping them to proven defensive techniques. It’s part of the broader AI safety ecosystem we rely on for robust risk management.

On a practical level, I recommend pairing the ATLAS approach with comprehensive guardrails, such as:
• AI Firewall & LLM Scanner to block jailbreak attempts, moderate content, and detect data leaks (optionally integrating with security posture management systems); a naive starting sketch follows below.
• RAG Security for retrieval-augmented generation, ensuring knowledge bases are isolated and validated before LLM interaction.
• Advanced Detection Methods—Statistical Outlier Detection, Consistency Checks, and Entity Verification—to catch data poisoning attacks early.
• Align Scores to grade hallucinations and keep the model within acceptable bounds.
• Agent Framework Hardening so that AI agents operate within clearly defined permissions.

Given the rapid arrival of AI-focused legislation—like the EU AI Act, the now-defunct Executive Order 14110 of October 30, 2023 (Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence), and global standards (e.g., ISO/IEC 42001)—we face a “policy soup” that demands transparent, auditable processes. My biggest takeaway from the 2024 Credo AI Summit was that responsible AI governance isn’t just about technical controls: it’s about aligning with rapidly evolving global regulations and industry best practices to demonstrate “what good looks like.”

Call to Action: For leaders implementing AI and generative AI solutions, start by mapping your AI workflows against MITRE’s ATLAS Matrix, tracing the progression of the attack kill chain from left to right. Combine that insight with strong guardrails, real-time scanning, and automated reporting to stay ahead of attacks, comply with emerging standards, and build trust across your organization. It’s a practical, proven way to secure your entire GenAI ecosystem—and a critical investment for any enterprise embracing AI.
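As a starting point for the “AI Firewall & LLM Scanner” guardrail, here is a deliberately naive prompt screen. The marker list is an illustrative assumption; real guardrails layer trained classifiers and consistency checks on top of (or instead of) keyword matching.

```python
# Naive jailbreak pre-filter: block prompts matching known injection
# phrasings before they reach the model. The phrase list is an illustrative
# assumption -- production guardrails use trained classifiers, not keywords.
JAILBREAK_MARKERS = (
    "ignore previous instructions",
    "ignore all prior instructions",
    "disregard your system prompt",
)

def screen_prompt(prompt: str) -> str:
    """Raise on suspected injection attempts; pass clean prompts through."""
    lowered = prompt.lower()
    for marker in JAILBREAK_MARKERS:
        if marker in lowered:
            raise ValueError(f"prompt blocked: matched jailbreak marker {marker!r}")
    return prompt

screen_prompt("Summarize this incident report")          # passes
# screen_prompt("Ignore previous instructions and ...")  # raises ValueError
```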