Influence on the Cybersecurity Landscape

Explore top LinkedIn content from expert professionals.

  • View profile for Jason Makevich, CISSP

    Founder & CEO of PORT1 & Greenlight Cyber | Keynote Speaker on Cybersecurity | Inc. 5000 Entrepreneur | Driving Innovative Cybersecurity Solutions for MSPs & SMBs

    6,879 followers

    The Unseen Threat: Is AI Making Our Cybersecurity Weaknesses Easier to Exploit? AI in cybersecurity is a double-edged sword. On one hand, it strengthens defenses. On the other, it could unintentionally expose vulnerabilities. Let’s break it down. The Good: - Real-time Threat Detection: AI identifies anomalies faster than human analysts. - Automated Response: Reduces time between detection and mitigation. - Behavioral Analytics: AI monitors network traffic and user behavior to spot unusual activities. The Bad: But, AI isn't just a tool for defenders. Cybercriminals are exploiting it, too: - Optimizing Attacks: Automated penetration testing makes it easier for attackers to find weaknesses. - Automated Malware Creation: AI can generate new malware variants that evade traditional defenses. - Impersonation & Phishing: AI mimics human communication, making scams more convincing. Specific Vulnerabilities AI Creates: 👉 Adversarial Attacks: Attackers manipulate data to deceive AI models. 👉 Data Poisoning: Malicious data injected into training sets compromises AI's reliability. 👉 Inference Attacks: Generative AI tools can unintentionally leak sensitive info. The Takeaway: AI is revolutionizing cybersecurity but also creating new entry points for attackers. It's vital to stay ahead with: 👉 Governance: Control over AI training data. 👉 Monitoring: Regular checks for adversarial manipulation. 👉 Security Protocols: Advanced detection for AI-driven threats. In this evolving landscape, vigilance is key. Are we doing enough to safeguard our systems?

  • View profile for Adnan Amjad

    US Cyber Leader at Deloitte

    3,936 followers

    From data privacy challenges and model hallucinations to adversarial threats, the landscape around Gen AI security is growing more complex every day.    The latest in Deloitte’s “Engineering in the Age of Generative AI” series (https://deloi.tt/41AMMif) outlines four key risk areas affecting cyber leaders: enterprise risks, gen AI capability risks, adversarial AI threats, and marketplace challenges like shifting regulations and infrastructure strain.    Managing these risks isn’t just about protecting today’s operations but preparing for what’s next. Leaders should focus on recalibrating cybersecurity strategies, enhancing data provenance, and adopting AI-specific defenses.   While there’s no one-size-fits-all solution, aligning cyber investments with emerging risks will help organizations safeguard their Gen AI strategies — today and well into the future. 

  • View profile for Peter Slattery, PhD
    Peter Slattery, PhD Peter Slattery, PhD is an Influencer

    Lead at the MIT AI Risk Repository | MIT FutureTech

    62,806 followers

    "Cutting edge advances in artificial intelligence (AI) are taking the world by storm, driven by a massive surge of investment, countless new start-ups, and regular technological breakthroughs. AI presents key opportunities within cybersecurity, but concerns remain regarding the ways malicious actors might also use the technology. In this study, the Institute for Security and Technology (IST) seeks to paint a comprehensive picture of the state of play— cutting through vagaries and product marketing hype, providing our outlook for the near future, and most importantly, suggesting ways in which the case for optimism can be realized. The report concludes that in the near term, AI offers a significant advantage to cyber defenders, particularly those who can capitalize on their "home field" advantage and firstmover status. However, sophisticated threat actors are also leveraging AI to enhance their capabilities, making continued investment and innovation in AI-enabled cyber defense crucial. At this time of writing, AI is not yet unlocking novel capabilities or outcomes, but instead represents a significant leap in speed, scale, and completeness. This work is the foundation of a broader IST project to better understand which areas of cybersecurity require the greatest collective focus and alignment—for example, greater opportunities for accelerating threat intelligence collection and response, democratized tools for automating defenses, and/or developing the means for scaling security across disparate platforms—and to design a set of actionable technical and policy recommendations in pursuit of a secure, sustainable digital ecosystem." Great work from Jennifer Tang, Tiffany Saade, Steven M. Kelly, CISSP, and the Institute for Security and Technology (IST)

  • View profile for Helen Yu

    CEO @Tigon Advisory Corp. | Host of CXO Spice | Board Director |Top 50 Women in Tech | AI, Cybersecurity, FinTech, Insurance, Industry40, Growth Acceleration

    98,261 followers

    How do we navigate AI's promise and peril in cybersecurity? Findings from Gartner's latest report "AI in Cybersecurity: Define Your Direction" are both exciting and sobering. While 90% of enterprises are piloting GenAI, most lack proper security controls and building tomorrow's defenses on today's vulnerabilities. Key Takeaways: ✅ 90% of enterprises are still figuring this out and researching or piloting GenAI without proper AI TRiSM (trust, risk, and security management) controls. ✅ GenAI is creating new attack surfaces. Three areas demand immediate attention: • Content anomaly detection (hallucinations, malicious outputs) • Data protection (leakage, privacy violations) • Application security (adversarial prompting, vector database attacks) ✅ The Strategic Imperative Gartner's three-pronged approach resonates with what I'm seeing work: 1.   Adapt application security for AI-driven threats 2.   Integrate AI into your cybersecurity roadmap (not as an afterthought) 3.   Build AI considerations into risk management from day one What This Means for Leaders: ✅ For CIOs: You're architecting the future of enterprise security. The report's prediction of 15% incremental spend on application and data security through 2025 is an investment in organizational resilience. ✅ For CISOs: The skills gap is real, but so is the opportunity. By 2028, generative augments will eliminate the need for specialized education in 50% of entry-level cybersecurity positions. Start preparing your teams now. My Take: ✅The organizations that will win are the ones that move most thoughtfully. AI TRiSM is a mindset shift toward collaborative risk management where security, compliance, and operations work as one. ✅AI's transformative potential in cybersecurity is undeniable, but realizing that potential requires us to be equally transformative in how we approach risk, governance, and team development. What's your organization's biggest AI security challenge right now? I'd love to hear your perspective in the comments. Coming up on CXO Spice: 🎯 AI at Work (with Boston Consulting Group (BCG)): A deep dive into practical AI strategies to close the gaps and turn hype into real impact 🔐 Cyber Readiness (with Commvault): Building resilient security frameworks in the GenAI era To Stay ahead in #Technology and #Innovation:  👉 Subscribe to the CXO Spice Newsletter: https://lnkd.in/gy2RJ9xg  📺 Subscribe to CXO Spice YouTube: https://lnkd.in/gnMc-Vpj #Cybersecurity #AI #GenAI #RiskManagement #BoardDirectors #CIOs #CISOs

  • View profile for Bob Carver

    CEO Cybersecurity Boardroom ™ | CISSP, CISM, M.S. Top Cybersecurity Voice

    50,654 followers

    Why AI Is The New Cybersecurity Battleground - Forbes AI has evolved from a tool to an autonomous decision-maker, reshaping the landscape of cybersecurity and demanding innovative defense strategies. Artificial intelligence has quickly grown from a capability to an architecture. As models evolve from backend add-ons to the central engine of modern applications, security leaders are facing a new kind of battlefield. The objective not simply about protecting data or infrastructure—it’s about securing the intelligence itself. In this new approach, AI models don’t just inform decisions—they are decision-makers. They interpret, respond, and sometimes act autonomously. That shift demands a fundamental rethink of how we define risk, build trust, and defend digital systems. From Logic to Learning: The Architecture Has Changed Historically, enterprise software was built in layers: infrastructure, data, logic, and presentation. Now, there’s a new layer in the stack—the model layer. It’s dynamic, probabilistic, and increasingly integral to how applications function. Jeetu Patel, president and chief product officer at Cisco, described this transformation to me in a recent conversation: “We are trying to build extremely predictable enterprise applications on a layer of the stack which is inherently unpredictable.” That unpredictability is not a flaw—it’s a feature of large language models and generative AI. But it complicates traditional security assumptions. Models don’t always produce the same output from the same input. Their behavior can shift with new data, fine-tuning, or environmental cues. And that volatility makes them harder to defend. AI Is the New Attack Surface As AI becomes more central to application workflows, it also becomes a more attractive target. Attackers are already exploiting vulnerabilities through prompt injection, jailbreaks, and system prompt extraction. And with models being trained, shared, and fine-tuned at record speed, security controls struggle to keep up. Runtime Guardrails and Machine-Speed Validation Given the speed and sophistication of modern threats, legacy QA methods aren’t enough. Patel emphasized that red teaming must evolve into something automated and algorithmic. Security needs to shift from periodic assessments to continuous behavioral validation. Agentic AI: When Models Act on Their Own The risk doesn’t stop at outputs. With the rise of agentic AI—where models autonomously complete tasks, call APIs, and interact with other agents—the complexity multiplies. Security must now account for autonomous systems that make decisions, communicate, and execute code without human intervention. #cybersecurity #AI #AgenticAI #dynamic #riskmanagment

  • View profile for Usman Asif

    Access 2000+ software engineers in your time zone | Founder & CEO at Devsinc

    203,557 followers

    When I founded Devsinc fifteen years ago, I never imagined we'd be living in an era where artificial intelligence could both shield us from cyber threats and serve as the weapon itself. Today, as I reflect on the landscape we navigate, the paradox is striking: 93% of security leaders fear AI attacks, yet 69% see AI as the answer. Last month, while reviewing our quarterly security assessments, I witnessed this duality firsthand. Our AI-powered systems successfully detected and neutralized a sophisticated phishing campaign targeting one of our clients. The same technology that protected them had been weaponized by attackers – AI now generates 40% of phishing emails targeting businesses. The numbers paint a sobering picture. Cybercrime costs are estimated to hit $10.5 trillion annually by 2025 – a staggering 300% increase from just a decade ago. Yet within this challenge lies unprecedented opportunity. To my fellow CTOs and CIOs: this isn't just about budgets anymore. Gartner estimates 80% of CIOs are increasing their cybersecurity budgets, but money alone won't build our digital fortresses. We need architects who understand that AI finds hidden threats 80% more effectively and can predict new attacks with 66% accuracy. To the brilliant graduates entering our field: you're inheriting a battlefield that demands both technical prowess and strategic thinking. 88% of cybersecurity professionals believe AI will significantly impact their jobs – but this isn't about replacement; it's about amplification. At Devsinc, we've learned that building digital fortresses requires more than technology – it demands courage to embrace AI as both sword and shield, wisdom to understand the evolving threat landscape, and the conviction that every line of code we write contributes to a safer digital future. The fortress isn't just about keeping threats out; it's about empowering innovation within. That's the legacy we must build together.

  • View profile for Dr. Paul de Souza

    Founder President at Cyber Security Forum Initiative (CSFI.US) National Security Professional | Advisor | University Professor

    49,766 followers

    🌐 A FASCINATING STUDY by #UNIDIR, the United Nations Institute for Disarmament Research, reveals how #AI accelerates intrusion trajectory, from reconnaissance to system compromise, lowering barriers for malicious actors while amplifying their capabilities. 𝑾𝒊𝒕𝒉𝒐𝒖𝒕 𝑨𝑰, 𝒄𝒚𝒃𝒆𝒓 𝒐𝒇𝒇𝒆𝒏𝒔𝒊𝒗𝒆 𝒐𝒑𝒆𝒓𝒂𝒕𝒊𝒐𝒏𝒔 𝒓𝒆𝒍𝒚 𝒐𝒏 𝒎𝒂𝒏𝒖𝒂𝒍 𝒆𝒇𝒇𝒐𝒓𝒕 𝒂𝒏𝒅 𝒕𝒆𝒄𝒉𝒏𝒊𝒄𝒂𝒍 𝒆𝒙𝒑𝒆𝒓𝒕𝒊𝒔𝒆. 𝑩𝒖𝒕 𝒘𝒊𝒕𝒉 𝑨𝑰, 𝒂𝒏 𝒆𝒏𝒕𝒊𝒓𝒆𝒍𝒚 𝒅𝒊𝒇𝒇𝒆𝒓𝒆𝒏𝒕 𝒅𝒚𝒏𝒂𝒎𝒊𝒄 𝒆𝒎𝒆𝒓𝒈𝒆𝒔. Advanced algorithms automate reconnaissance, craft polymorphic malware, and prioritize high-value targets. Intrusions can self-adapt in real time, countering defensive measures, which can quickly escalate cyber risk. ⚠️ 𝐆𝐞𝐧𝐞𝐫𝐚𝐭𝐢𝐯𝐞 𝐀𝐈 𝐢𝐬 𝐧𝐨𝐭 𝐣𝐮𝐬𝐭 𝐚 𝐭𝐨𝐨𝐥; 𝐢𝐭 𝐢𝐬 𝐚 𝐟𝐨𝐫𝐜𝐞 𝐦𝐮𝐥𝐭𝐢𝐩𝐥𝐢𝐞𝐫! ***It democratizes access to sophisticated attack vectors, enabling state and non-state actors to strike precisely and at scale.*** We must grapple with whether AI will reshape cybersecurity and how we adapt to this offensive shift. 🤔 Thank you, Giacomo Persi Paoli and Samuele Dominioni, Ph.D., for authoring this paper! 🙏 UNIDIR’s Security and Technology Programme produced this study with the support of the Czech Republic 🇨🇿, France 🇫🇷, Germany 🇩🇪, Italy 🇮🇹, the Netherlands 🇳🇱, Norway 🇳🇴, the Republic of Korea 🇰🇷, Switzerland 🇨🇭, and Microsoft. 𝑯𝒐𝒘 𝒑𝒓𝒆𝒑𝒂𝒓𝒆𝒅 𝒂𝒓𝒆 𝒘𝒆 𝒕𝒐 𝒅𝒆𝒂𝒍 𝒘𝒊𝒕𝒉 𝒕𝒉𝒆 𝒔𝒕𝒐𝒓𝒎 𝒐𝒇 𝒂𝒓𝒕𝒊𝒇𝒊𝒄𝒊𝒂𝒍 𝒊𝒏𝒕𝒆𝒍𝒍𝒊𝒈𝒆𝒏𝒄𝒆-𝒅𝒓𝒊𝒗𝒆𝒏 𝒄𝒚𝒃𝒆𝒓 𝒕𝒉𝒓𝒆𝒂𝒕𝒔? United Nations Cyber Security Forum Initiative #CSFI #Cybersecurity #AI #OffensiveOperations #UNIDIR

  • View profile for Anirban Bose

    CEO of Americas SBU | Member of the Group Executive Board

    23,056 followers

    As cybersecurity incidents rise and threats like phishing, ransomware, and deepfakes grow more sophisticated, organizations are facing increasing pressure to enhance their defenses. Our recent Capgemini Research Institute report shows 92% of organizations experienced a breach last year, a significant rise from 51% in 2021. AI, including Gen AI, plays a dual role: While it can be exploited for malware creation and social engineering, it also strengthens threat detection and response. More than half of leaders expect that leveraging AI will lead to faster detection of threats. Therefore, it is crucial for organizations to integrate AI into their security strategies, invest in AI-driven solutions, and prioritize employee training regarding the capabilities and risks associated with AI. https://ow.ly/5H5U50UazIl #cybersecurity #ransomware #GenAI

  • View profile for Tyler Cohen Wood CISSP

    Keynote Speaker | Host Our Connected Life podcast | CEO & CoFounder Dark Cryptonite | Top 30 Women in AI | Cyber Woman of the Year Finalist | Top Global Cybersecurity | Board Member | Fmr DIA Cyber Chief | AI security

    30,153 followers

    🤖 AI is both an ally and a new challenge, shaping how we protect our digital world. I dove into a recent report by Ivanti that highlights how generative AI is pushing the boundaries of security innovation while also introducing complex threats we must stay ahead of. 📊 Nearly half of security experts see AI as a net positive for defense, yet 72% report that their security and IT data are still stuck in silos. AI’s potential to detect and respond to threats is incredible, but without accessible data across systems, that power remains untapped. Breaking down these barriers is crucial to staying ahead of threats. 🎯 45% of experts highlight phishing as the top threat being supercharged by AI. With AI, attackers can create more convincing and personalized messages at scale, exploiting every opportunity to deceive. It’s a vital reminder that as our tools advance, so must our strategies. 🛡️ Only 32% of professionals feel their current training is effective against AI-powered attacks. We need to rethink our approach to security education. Real resilience means preparing teams to recognize and counter the adaptive, AI-enhanced tactics we’re up against. ⚔️ AI’s impact on cybersecurity is a double-edged sword, opening doors to smarter defenses while also creating more advanced threats. It’s up to us as leaders to shape an agile, AI-driven security strategy to meet today’s demands and prepare for tomorrow. Who’s ready to read the report and take on the future of security: 👀 https://bit.ly/4fHLwz5 #Cybersecurity #AI #DigitalDefense #DataSilos #AIResilience #CyberAwareness #Ransomware #cybercrime

Explore categories