Strategies to Combat AI-Generated Fraud in Workplaces

Explore top LinkedIn content from expert professionals.

  • View profile for Brian Levine

    Cybersecurity & Data Privacy Leader • Founder & Executive Director of Former Gov • Speaker • Former DOJ Cybercrime Prosecutor • NYAG Regulator • Civil Litigator • Posts reflect my own views.

    14,447 followers

    It is becoming difficult to identify and prevent wire transfer fraud (WTF). Recently, a threat actor stole $25M by using deepfake AI to impersonate a CEO and other management on a video call. See https://lnkd.in/ermje-5j. In an even more challenging example, a small bank's ACTUAL long-time CEO was duped and directed his employees to make ten wire transfers totaling more than $47M. See https://lnkd.in/eh-Xqagv. If we can't trust a real-looking, real-sounding fake CEO, and we can't trust an ACTUAL CEO, how can we ever prevent WTF? Here are some tips:

    1. INDEPENDENT RESEARCH: At least one employee involved in an "unusual" wire transfer (i.e., unusual considering size, payee, payment method, situation, need for speed, new wire information, etc.) should independently research the transaction to confirm its validity. This employee should fill out pre-prepared worksheets to document that all of the steps below were taken. Such investigation might include:
       • Speaking directly with the person requesting the wire or the change in the wire to understand: (a) the purpose of the wire; (b) the origin of the request; and (c) how the request was made (e.g., by email). Always call that person directly using his or her known contact information. Also consider speaking directly with the originator of the request, if that is someone different than the requestor.
       • Independently looking up the payee (perhaps on a personal device, in case the network is infected) to understand what the payee does, whether the payment makes sense, and whether there are any reputational issues with the payee (e.g., check the BBB website, state AGs, or other sites).
       • Independently finding the true phone number of the payee, and calling the payee to verify that the wire transfer information is accurate.
       • Speaking directly with someone more senior than the requestor to confirm the transaction is legitimate. If the requestor is the CEO, and the transaction is significant enough, speak with someone on the board or outside counsel. In advance, create a contact list with the relevant approvers.
    2. DUAL CONTROL: At least two employees should approve every significant transfer. Ideally, there are technical controls (e.g., two separate MFA approvals) to ensure both employees have approved (see the sketch after this list).
    3. WRITTEN PROCEDURE: Your procedure should be documented and updated annually. Written validation logs should also be retained.
    4. TRAINING: Everyone involved should be trained on the procedure upon onboarding and at least annually.
    5. TABLETOP EXERCISES: This is another big one. Consider conducting "WTF tabletop exercises" at least annually. Test your procedure with challenging situations, such as a deepfake CEO or a real CEO who has been duped.
    6. ESCROW OPTIONS: For significant transactions, consider whether there are options to transfer the funds into an escrow or other safe account until you can fully validate the payee or the transaction.
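
    To make the dual-control idea in tip 2 concrete, here is a minimal sketch under stated assumptions: the WireRequest and Approval types, the $10,000 threshold, and the rule that the requester cannot count as an approver are illustrative choices, not part of the post above.

    ```python
    # Hypothetical dual-control release gate: a significant wire needs two
    # distinct, MFA-verified approvers, neither of whom is the requester.
    from dataclasses import dataclass, field

    @dataclass
    class Approval:
        employee_id: str
        mfa_verified: bool  # set only after a separate, successful MFA challenge

    @dataclass
    class WireRequest:
        requester_id: str
        amount_usd: float
        approvals: list[Approval] = field(default_factory=list)

    def may_release(req: WireRequest, dual_control_threshold_usd: float = 10_000) -> bool:
        """Block release of a significant wire until two distinct approvers sign off."""
        if req.amount_usd < dual_control_threshold_usd:
            return True  # below the threshold, the normal single-approval flow applies
        approvers = {a.employee_id for a in req.approvals
                     if a.mfa_verified and a.employee_id != req.requester_id}
        return len(approvers) >= 2

    # Example: a $47M request with only one valid approval stays blocked.
    req = WireRequest("ceo", 47_000_000, [Approval("controller", True)])
    print(may_release(req))  # False
    ```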

  • View profile for Tony Scott

    CEO Intrusion | ex-CIO VMWare, Microsoft, Disney, US Gov | I talk about Network Security

    12,919 followers

    Too many organizations still assume their cybersecurity tools are enough. But attackers are evolving faster than ever, and AI is making their tactics almost indistinguishable from legitimate business.

    Recently, I saw this firsthand. I received an email thread that, at first glance, looked like a routine internal conversation. The thread appeared to include my CFO, our attorney, and others in the organization discussing a problem and looping me in at the end to approve a money transfer. The content was convincing, the tone familiar, and the request plausible. But the entire thing was fabricated. AI had generated the emails, mimicking internal communication patterns and even the writing styles of my colleagues. The only giveaways were very subtle: slightly off email addresses and inconsistencies in phrasing that didn't quite fit. If I hadn't paid close attention, it would have been easy to miss. Had I approved the request, funds would have been sent to the wrong place.

    That's how sophisticated these attacks have become. This is the reality we face: technology alone can't protect you from every threat. Attackers are using AI to create believable, tailored scams that can fool even experienced leaders. The tools we rely on are necessary, but they are not sufficient.

    So what do we do? Examine everything closely. Don't just trust the surface details, even if a message looks like it's coming from inside your organization. Encourage your teams to trust their instincts: if something feels off, even if you can't immediately pinpoint why, take a closer look. Look at the email addresses, the language, and the context. Slow down and verify, especially when money or sensitive information is involved.

    The bottom line: the sophistication of attacks is increasing, and so must our vigilance. Technology is part of the answer, but judgment and attention to detail are just as critical. Train your teams to think this way. The stakes are only getting higher.

    Intrusion Shield can help prevent many attacks from happening, even when both people and existing technology fail. We'd be happy to show you how!
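
    One way to act on the "slightly off email addresses" giveaway described above is to flag sender domains that nearly match, but do not equal, a trusted internal domain. The sketch below is illustrative only (it is not Intrusion Shield or any product's logic); the example.com domain list and the 0.8 similarity threshold are assumptions.

    ```python
    # Hypothetical lookalike-domain check for inbound mail addresses.
    import difflib

    TRUSTED_DOMAINS = {"example.com"}  # replace with your organization's real domains

    def sender_domain_risk(address: str) -> str:
        domain = address.rsplit("@", 1)[-1].lower()
        if domain in TRUSTED_DOMAINS:
            return "trusted"
        for trusted in TRUSTED_DOMAINS:
            # Very similar to a trusted domain, but not identical: classic lookalike.
            if difflib.SequenceMatcher(None, domain, trusted).ratio() > 0.8:
                return f"suspicious lookalike of {trusted}"
        return "external"

    print(sender_domain_risk("cfo@example.com"))   # trusted
    print(sender_domain_risk("cfo@examp1e.com"))   # suspicious lookalike of example.com
    print(sender_domain_risk("ceo@partner.org"))   # external
    ```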

  • View profile for Tamas Kadar

    Co-Founder and CEO at SEON | Democratizing Fraud Prevention for Businesses Globally

    10,996 followers

    Continuing from my last post on the complexities of modern fraud, let's explore further challenges and strategies in this battle:

    🤖 Automation in Fraud: The Rise of Selenium & Bot Operations
    More advanced fraudsters use tools like Selenium or headless browsers to automate their entire operation, minimising the need for human intervention. This automation extends beyond account takeovers to include fraudulent registrations and checkouts. By feeding a list of stolen identity data points into their systems, combined with fresh IPs and spoofed device IDs, fraudsters can conduct mass attacks that are extremely challenging to detect. Some advanced solutions have developed smarter ways of detecting such bot operations.

    🔍 Going Beyond Surface-Level Checks: Digital Profile Analysis
    To combat sophisticated fraud attacks and bots, we need to delve deeper. Investigating the digital presence of a customer's phone number or email address provides valuable insights, such as how many digital and social profiles are connected to that one email or phone. Fraudsters typically can't replicate an in-depth digital footprint, so a lack of online presence can be a red flag.

    🛡️ Staying Ahead: Adopting the Latest in Fraud Prevention
    As automated bot attacks get more complex, the only way to stay ahead of them is by adopting advanced fraud prevention strategies. These strategies should focus not only on detecting and preventing account takeovers, which often result from the reuse of breached passwords, but also on identifying fraudulent registrations and checkouts. By understanding and counteracting the mix of fresh IPs and sophisticated device spoofing used by fraudsters, we can significantly reduce the effectiveness of their automated processes.

    🔑 Bonus Tip: Protecting Against Account Takeovers
    Approximately 90% of account takeover attempts can be prevented by forcing users with breached passwords to change them. Implementing 2FA further enhances security for such customers. For example, Cloudflare's k-anonymity model for checking breached passwords: https://lnkd.in/dMWCqpCj

    Experienced fraudsters operate like a business, aiming for maximum gain with minimal effort. They rely on automation and scalability to reduce their manual efforts. As fraud fighters, we must continually adapt by embracing the latest tools and methods. This proactive approach is how we all stay ahead in the constantly evolving battle against fraud.

    #FraudPrevention #BotAttacks #CyberSecurity #MachineLearning #IPspoofing #DigitalFootprint
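
    The k-anonymity model mentioned in the bonus tip above can be sketched in a few lines against the public Pwned Passwords range API (api.pwnedpasswords.com). This is an illustrative sketch, not SEON's implementation: only the first five characters of the password's SHA-1 hash are sent, so the remote service never sees the password or even its full hash. It assumes the `requests` package is installed.

    ```python
    # Minimal k-anonymity breached-password check via the Pwned Passwords range API.
    import hashlib
    import requests

    def password_breach_count(password: str) -> int:
        sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = sha1[:5], sha1[5:]
        # Only the 5-character hash prefix leaves your system.
        resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
        resp.raise_for_status()
        # Response lines look like "HASH_SUFFIX:COUNT"; match our suffix locally.
        for line in resp.text.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    # Example policy: force a reset (and prompt 2FA enrollment) if the password is breached.
    if password_breach_count("password123") > 0:
        print("Breached password: require a reset and prompt for 2FA enrollment.")
    ```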

  • View profile for Michael L. Woodson

    Cybersecurity Executive | CISO | Application Security & Risk Strategist | AI Governance | Identity & Data Resilience | Board Advisor

    10,670 followers

    As Chief Information Security Officers (CISOs), we're entrusted with safeguarding our organizations in the ever-evolving digital landscape. Today, a new frontier beckons: generative AI. This powerful technology has incredible potential but presents unique challenges for risk management and governance.

    Generative AI: A Double-Edged Sword
    Generative AI can create content, from text to images, with astounding accuracy. While this fuels innovation, it also fuels cyber threats:
    • Deepfakes: Convincing AI-generated deepfakes can deceive even the most discerning eye.
    • Advanced Phishing: Cybercriminals use AI to craft sophisticated, personalized phishing attacks.
    • AI-Generated Malware: New strains of malware are born from AI algorithms.

    Balancing Act: We must find the equilibrium between security and leveraging AI for legitimate purposes.
    Ethics and Privacy: Ethical considerations in AI governance are paramount.

    Our Way Forward:
    • Advanced Defense: Implement cutting-edge threat detection to combat AI-generated threats.
    • Education: Invest in the education and training of our teams to tackle AI challenges effectively.
    • Ethical Guidelines: Develop ethical guidelines to navigate AI use responsibly.
    • Collaboration: Join hands with peers and AI ethics communities to share insights and strategies.
    • Regulatory Adherence: Stay informed and compliant with evolving AI regulations and data privacy standards.

    As CISOs, we rise to the occasion, adapting to the ever-changing digital landscape. Generative AI governance is our new frontier, and together, we'll navigate its challenges, ensuring a secure and ethical digital future.

    #CISO #Cybersecurity #GenerativeAI #AIrisks #Ethics #Privacy #Deepfakes #Phishing #AIinBusiness

  • View profile for ✨Sallie Newton, CISSP, CSSLP, GISP

    Certified Information System Security Professional | GRC, Policy & Procedures, Training & Awareness, SDLC Requirements, Risk Management and Gap Assessments.

    9,969 followers

    The AI revolution is upon us, and its impact on cybersecurity will be profound. 🌐💻

    🛡️ The Good: Turbocharging Defenses
    According to IBM, AI and automated monitoring tools have significantly accelerated breach detection and containment. Organizations leveraging these technologies experience shorter breach life cycles, potentially saving millions. Yet only 40% of organizations actively use security AI. Combining automation with vulnerability disclosure programs and ethical hacking can supercharge cybersecurity.

    🚫 The Bad: Novice to Threat Actor
    LLMs offer benefits, but they can't replace professionals. Overestimating their capabilities can lead to misuse, introducing new attack surfaces. In one case, a lawyer used ChatGPT to draft a legal brief with fabricated citations, leading to court sanctions. In cybersecurity, inexperienced programmers may deploy flawed code generated by LLMs, putting security at risk.

    ⚠️ The Ugly: AI Bots Spreading Malware
    Proof-of-concept malware like BlackMamba is a disturbing reality. It can evade cybersecurity products by synthesizing malicious code at runtime. Cybercriminals are likely exploring similar methods.

    So, what can organizations do?
    1. Rethink employee training to incorporate responsible AI use.
    2. Consider the sophistication of AI-driven social engineering.
    3. Test AI implementations rigorously for vulnerabilities.
    4. Establish strict code review processes, especially for LLM-generated code.
    5. Have mechanisms to identify vulnerabilities in existing systems.

    The AI age brings incredible opportunities, but also risks. Responsible adoption and a vigilant approach to cybersecurity are our best defenses. Let's embrace this new era wisely. 🔐🤖

    Source: InfoWorld 🔗 In Comments

    #AI #Cybersecurity #ChatGPT

  • View profile for Stephanie Goutos

    Attorney | AI & Legal Tech Leader | Head of Practice Innovation @ Gunderson Dettmer | Building the Future of Law | Fueled by Coffee, Humor & Being Told It Can’t Be Done 🔥

    14,521 followers

    Imagine this: It's Tuesday morning, and you're kicking back with your #Starbucks coffee, prepping to hop on a Zoom with your CFO. You get on the call, see your CFO and six of your colleagues, and jump into the agenda. Without missing a beat, your CFO directs you to wire $25 million for what he describes as an "urgent, discreet investment in a groundbreaking company that will redefine our industry." The details are sparse, shrouded in confidentiality clauses and the promise of a strategic partnership that will put your company ahead of the curve.

    You are hesitant - why didn't he mention this before? But your colleague Steve is on the call, making jokes about his weekend antics, and you can see your other colleagues nodding in agreement. Feeling reassured by the presence of your team and swept up in the urgency conveyed by your CFO, you proceed with the wire transfer, sending over $25.6 million, in what turns out to be a total #scam.

    The fallout is immediate and devastating. Questions arise about due diligence, verification processes, and why the usual checks and balances were bypassed for such a significant financial decision.

    Unfortunately, this scenario is based on actual events. A finance worker at a multinational firm was recently tricked into paying out over $25 million to fraudsters using #deepfake technology to pose as the company's #CFO and colleagues. Hong Kong police stated that all of the other people in the video conference were, in fact, fake. The case is one of several recent episodes in which fraudsters are believed to have used #deepfake #technology to modify publicly available video and other footage to cheat people out of money.

    The Lesson:
    ❌ Don't take everything at face value - train your employees! A single weak link can lead to disastrous consequences. Always verify financial requests through trusted communication channels. Stay informed. Keep your team updated on the latest digital fraud tactics and make sure they know the capabilities of new #AI technology.

    #artificialintelligence #cybersecurity #fraudprevention #employeetraining #datasecurity #awareness #riskmanagement

    Source: https://lnkd.in/gd8ZhMWJ
