How to Protect Your Organization From Deepfake Scams

Explore top LinkedIn content from expert professionals.

  • View profile for Brian Levine

    Cybersecurity & Data Privacy Leader • Founder & Executive Director of Former Gov • Speaker • Former DOJ Cybercrime Prosecutor • NYAG Regulator • Civil Litigator • Posts reflect my own views.

    14,446 followers

It is becoming difficult to identify and prevent wire transfer fraud (WTF). Recently, a threat actor was able to steal $25M by using deepfake AI to impersonate a CEO and other management on a video call. See https://lnkd.in/ermje-5j. In an even more challenging example, a small bank's ACTUAL long-time CEO was duped, and caused his employees to make ten wire transfers totaling more than $47M. See https://lnkd.in/eh-Xqagv. If we can't trust a real-looking, real-sounding fake CEO, and we can't trust an ACTUAL CEO, how can we ever prevent WTF? Here are some tips:

    1. INDEPENDENT RESEARCH: At least one employee involved in an "unusual" wire transfer (i.e., unusual considering size, payee, payment method, situation, need for speed, new wire information, etc.) should independently research the transaction to confirm its validity. This employee should fill out pre-prepared worksheets to document that all of the steps below were taken. Such investigation might include:
    • Speaking directly with the person requesting the wire, or the change in the wire, to understand: (a) the purpose of the wire; (b) the origin of the request; and (c) how the request was made (e.g., by email). Always call that person directly using his or her known contact information. Also consider speaking directly with the originator of the request, if that is someone other than the requestor.
    • Independently looking up the payee (perhaps on a personal device, in case the network is infected) to understand what the payee does, whether the payment makes sense, and whether there are any reputational issues with the payee (e.g., check the BBB website, state AGs, or other sites).
    • Independently finding the true phone number of the payee, and calling the payee to verify that the wire transfer information is accurate.
    • Speaking directly with someone more senior than the requestor to confirm the transaction is legitimate. If the requestor is the CEO, and the transaction is significant enough, speak with someone on the board or with outside counsel. In advance, create a contact list with the relevant approvers.

    2. DUAL CONTROL: At least two employees should approve every significant transfer. Ideally, there are technical controls (e.g., two separate MFA approvals) to ensure both employees have approved. (A minimal sketch of this control follows after this list.)

    3. WRITTEN PROCEDURE: Your procedure should be documented and updated annually. Written validation logs should also be retained.

    4. TRAINING: Everyone involved should be trained on the procedure upon onboarding and at least annually.

    5. TABLETOP EXERCISES: This is another big one. Consider conducting "WTF tabletop exercises" at least annually. Test your procedure with challenging situations, such as a deepfake CEO or a real CEO who has been duped.

    6. ESCROW OPTIONS: For significant transactions, consider whether there are options to transfer the funds into an escrow or other safe account until you can fully validate the payee or the transaction.
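    To make the dual-control point concrete, here is a minimal sketch of how a payment workflow might enforce two distinct approvers before releasing a significant transfer. The names and the threshold are illustrative assumptions, not a real payment API; in practice each approval would also sit behind its own MFA challenge.

    ```python
    from dataclasses import dataclass, field

    SIGNIFICANT_AMOUNT_USD = 10_000  # illustrative threshold; set per your policy


    @dataclass
    class WireTransfer:
        payee: str
        amount_usd: float
        approvals: set[str] = field(default_factory=set)


    def approve(transfer: WireTransfer, approver: str) -> None:
        """Record an approval. In a real system this call would sit behind
        the approver's own authenticated, MFA-protected session."""
        transfer.approvals.add(approver)


    def may_release(transfer: WireTransfer) -> bool:
        """Significant transfers require at least two *distinct* approvers."""
        if transfer.amount_usd < SIGNIFICANT_AMOUNT_USD:
            return len(transfer.approvals) >= 1
        return len(transfer.approvals) >= 2


    wire = WireTransfer(payee="Acme Supplies Ltd", amount_usd=250_000)
    approve(wire, "analyst_a")
    assert not may_release(wire)   # one approver is not enough
    approve(wire, "controller_b")
    assert may_release(wire)       # two distinct employees have signed off
    ```

    Because the approvals are a set keyed by identity, the same employee approving twice cannot satisfy the control; that is the property worth testing in your own system.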

  • View profile for Jeremy Tunis

    “Urgent Care” for Public Affairs, PR, Crisis, Content. Deep experience with BH/SUD hospitals, MedTech, other scrutinized sectors. Jewish nonprofit leader. Alum: UHS, Amazon, Burson, Edelman. Former LinkedIn Top Voice.

    14,983 followers

AI PR Nightmares Part 2: When AI Clones Voices, Faces, and Authority.

    What Happened: Last week, a sophisticated AI-driven impersonation targeted White House Chief of Staff Susie Wiles. An unknown actor, using advanced AI-generated voice cloning, began contacting high-profile Republicans and business leaders, posing as Wiles. The impersonator requested sensitive information, including lists of potential presidential pardon candidates, and even cash transfers. The messages were convincing enough that some recipients engaged before realizing the deception. Wiles' personal cellphone contacts were reportedly compromised, giving the impersonator access to a network of influential individuals.

    This incident underscores a serious and growing threat: AI-generated deepfakes are becoming increasingly realistic and accessible, enabling malicious actors to impersonate individuals with frightening accuracy. From cloned voices to authentic-looking fabricated videos, the potential for misuse spans politics, finance, and far beyond. And it needs your attention now.

    🔍 The Implications for PR and Issues Management: As AI-generated impersonations become more prevalent, organizations must proactively address the associated risks as part of their ongoing crisis planning. Here are key considerations:

    1. Implement New Verification Protocols: Establish multi-factor authentication for communications, especially those involving sensitive requests. Encourage stakeholders to verify unusual requests through secondary channels (see the sketch after this list).
    2. Educate Constituents: Conduct training sessions to raise awareness about deepfake technologies and the signs of AI-generated impersonations. An informed network is a critical defense.
    3. Develop a Deepfakes Crisis Plan: Prepare for potential deepfake incidents with a clear action plan, including communication strategies to address stakeholders and the public promptly.
    4. Monitor Digital Channels: Utilize your monitoring tools to detect unauthorized use of your organization's or executives' likenesses online. Early detection and action can mitigate damage.
    5. Collaborate with Authorities: In the event of an impersonation, work closely with law enforcement and cybersecurity experts to investigate and respond effectively.

    The rise of AI-driven impersonations is not a distant threat; it is a current reality, and it is only going to get worse as the tech becomes more sophisticated. If you want to think and talk more about how to prepare for this and other AI-related PR and issues-management topics, follow along here with my series, or DM me if I can help your organization prepare or respond.
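    As one hedged illustration of what a verification protocol can look like in practice, the sketch below triages inbound requests and refuses to act on sensitive ones until they have been confirmed over a second, independently sourced channel. The request fields and keyword list are assumptions for illustration, not a standard.

    ```python
    # Hypothetical triage rule: anything touching money, credentials, or
    # confidential data must be confirmed out-of-band before anyone acts.
    SENSITIVE_KEYWORDS = {"wire", "transfer", "gift card", "password",
                          "pardon", "confidential", "urgent"}


    def needs_secondary_verification(message: str, requests_money: bool) -> bool:
        text = message.lower()
        return requests_money or any(k in text for k in SENSITIVE_KEYWORDS)


    def handle_request(message: str, requests_money: bool,
                       confirmed_out_of_band: bool) -> str:
        if not needs_secondary_verification(message, requests_money):
            return "proceed"
        if confirmed_out_of_band:
            return "proceed"
        # Do NOT reply on the channel the request arrived on; call a number
        # you already have on file for the purported sender.
        return "hold: verify via a known-good secondary channel first"


    print(handle_request("Please wire funds today, it's urgent", True, False))
    ```

    The keyword list will never be complete; the point of the sketch is the control flow, i.e., that a sensitive request cannot reach "proceed" without the out-of-band confirmation flag.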

  • View profile for Jennifer Ewbank

    Board Director | Strategic Advisor | Keynote Speaker on AI, Cyber, and Leadership | Former CIA Deputy Director | Champion of Innovation, Security, and Freedom in the Digital Age

    14,641 followers

The FBI recently issued a stark warning: AI-generated voice deepfakes are now being used in highly targeted vishing attacks against senior officials and executives. Cybercriminals are combining deepfake audio with smishing (SMS phishing) to convincingly impersonate trusted contacts, tricking victims into sharing sensitive information or transferring funds.

    This isn't science fiction. It is happening today. Recent high-profile breaches, such as the Marks & Spencer ransomware attack via a third-party contractor, show how AI-powered social engineering is outpacing traditional defenses. Attackers no longer need to rely on generic phishing emails; they can craft personalized, real-time audio messages that sound just like your colleagues or leaders.

    How can you protect yourself and your organization?
    - Pause Before You Act: If you receive an urgent call or message (even if the voice sounds familiar), take a moment to verify the request through a separate communication channel.
    - Don't Trust Caller ID Alone: Attackers can spoof phone numbers and voices. Always confirm sensitive requests, especially those involving money or credentials.
    - Educate and Train: Regularly update your team on the latest social engineering tactics. If your organization is highly targeted, simulated phishing and vishing exercises can help build a culture of skepticism and vigilance.
    - Use Multi-Factor Authentication (MFA): Even if attackers gain some information, MFA adds an extra layer of protection (a minimal TOTP sketch follows below).
    - Report Suspicious Activity: Encourage a "see something, say something" culture. Quick reporting can prevent a single incident from escalating into a major breach.

    AI is transforming the cyber threat landscape. Staying informed, alert, and proactive is our best defense.

    #Cybersecurity #AI #Deepfakes #SocialEngineering #Vishing #Infosec #Leadership #SecurityAwareness
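    On the MFA point, here is a minimal sketch of time-based one-time passwords (TOTP), the mechanism behind most authenticator apps, using the third-party pyotp library. It is illustrative only; a real deployment would use your identity provider's enrollment flow rather than ad-hoc secrets.

    ```python
    import pyotp  # pip install pyotp

    # Enrollment: generate a per-user secret once and store it server-side;
    # the user loads the same secret into their authenticator app.
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)

    # Later, a sensitive action requires the current 6-digit code.
    code = totp.now()            # what the user reads from their app
    print("User enters:", code)

    # Server-side check; valid_window=1 tolerates slight clock drift.
    assert totp.verify(code, valid_window=1)
    ```

    Because the code changes every 30 seconds and never travels with the original request, a cloned voice alone cannot satisfy the check.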

  • View profile for Jason Rebholz

    I help companies secure AI | CISO, AI Advisor, Speaker, Mentor

    30,198 followers

There’s more to the $25 million deepfake story than what you see in the headlines. I pulled the original story to get the full scoop. Here are the steps the scammer took:

    1. The scammers sent a phishing email to up to three finance employees in mid-January, saying a “secret transaction” had to be done.
    2. One of the finance employees fell for the phishing email. This led to the scammers inviting the finance employee to a video conference. The video conference included what appeared to be the company CFO, other staff, and some unknown outsiders. This was the deep fake technology at work, mimicking employees' faces and voices.
    3. On the group video conference, the scammers asked the finance employee to do a self-introduction but never interacted with them. This limited the likelihood of getting caught. Instead, the scammers just gave orders from a script and moved on to the next phase of the attack.
    4. The scammers followed up with the victim via instant messaging, emails, and one-on-one video calls using deep fakes.
    5. The finance employee then made 15 transfers totaling $25.6 million USD.

    As you can see, deep fakes were a key tool for the attacker, but persistence was critical here too. The scammers did not let up and did all that they could to apply pressure on the individual to transfer the funds.

    So, what do businesses do about mitigating this type of attack in the age of deep fakes?
    - Always report suspicious phishing emails to your security team. In this context, the other phished employees could have been an early warning that something weird was happening.
    - Trust your gut. The finance employee reported a “moment of doubt” but ultimately went forward with the transfer after the video call and persistence. If something doesn’t feel right, slow down and verify.
    - Lean into out-of-band authentication for verification. Use a known-good method of contact with the individual to verify the legitimacy of a transaction (a minimal sketch follows below).
    - Explore technology-driven identity verification platforms for high-dollar wire transfers. This can help reduce the chance of human error.

    And one of the best pieces of advice I saw was from Nate Lee yesterday, who called out building a culture where your employees are empowered to verify transaction requests. Nate said the following: “The CEO/CFO and everyone with power to transfer money needs to be aligned on and communicate the above. You want to ensure the person doing the transfer doesn't feel that by asking for additional validation that they're pushing back against or acting in a way that signals they don't trust the leader.”

    Stay safe (and real) out there.

    ------------------------------

    📝 Interested in leveling up your security knowledge? Sign up for my weekly newsletter using the blog link at the top of this post.
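    As a hedged illustration of out-of-band authentication, the sketch below resolves the callback number from an internal directory you maintain in advance, and deliberately ignores any contact details supplied in the request itself, since an attacker controls those. The directory entries are hypothetical.

    ```python
    # Known-good contact directory, maintained internally and in advance.
    # Never verify a request using a number or link the request itself provides.
    KNOWN_GOOD_CONTACTS = {
        "cfo@example.com": "+1-555-0100",   # hypothetical entries
        "ceo@example.com": "+1-555-0101",
    }


    def out_of_band_number(claimed_sender: str) -> str:
        number = KNOWN_GOOD_CONTACTS.get(claimed_sender.lower())
        if number is None:
            raise LookupError(
                f"No known-good contact for {claimed_sender!r}; "
                "escalate to security before acting."
            )
        return number


    # The phishing email says "call me at +1-555-9999 to confirm" -- ignore it.
    print("Verify by calling:", out_of_band_number("cfo@example.com"))
    ```

    The design choice that matters is the failure mode: an unknown sender raises an error and forces escalation instead of silently falling back to whatever the message supplied.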

  • View profile for Shawnee Delaney

    CEO, Vaillance Group | Keynote Speaker and Expert on Cybersecurity, Insider Threat & Counterintelligence

    33,662 followers

It’s not paranoia if they really are out to get you. And guess what? They are.

    While you’re busy worrying about VPNs and password policies, scammers are sliding into your employees’ DMs with sweet nothings, fake job offers, and “just one click” crypto deals.

    Welcome to the trifecta of human-targeted scams:
    - Romance
    - Recruitment
    - Financial fraud

    They don’t need root access if they’ve already got your heart, your résumé, or your retirement account. Are you protecting your people? Not just their inboxes. Them.

    Here’s what you’re up against:
    ❗ Deepfake-enabled fraud: $200M lost in just one quarter of 2025
    ❗ AI-generated crypto scams: $4.6B stolen in 2024, up 24%
    ❗ Over 50% of leaders admit: no employee training on deepfakes
    ❗ 61% of execs: zero protocols for addressing AI-generated threats

    Companies spend millions locking down endpoints, then leave their employees to get catfished by a deepfake on Tinder. But here’s the good news: you’re not powerless. You just have to stop pretending a phishing test is a strategy (please).

    Here’s how to actually reduce risk:
    ✔️ Make your training real. Include romance bait, fake recruiters, and deepfake voicemails. If your simulations don’t mirror reality, it’s not training, it’s theater.
    ✔️ Train managers to notice when something’s off. Isolation. Sudden secrecy. Financial stress. These aren’t just HR problems; they’re prime conditions for social engineering.
    ✔️ Build a culture where it’s safe to ask, “Is this sketchy?” If your people feel dumb for asking, they’ll stop asking, and that’s how scams slip through.
    ✔️ Partner with HR. Online exploitation, financial manipulation, digital coercion: these are wellness issues and security issues. Treat them that way.
    ✔️ Empower families, not just employees. Scams often hit home first. Make your materials so good they want to send them to their group chat. Bonus: they’ll bring those healthy habits right back to work.

    When you protect the human, not just the hardware, you don’t just lower risk. You build trust. And for the record? Paranoia gets a bad rap. Sometimes it’s just pattern recognition.

    #Cybersecurity #HumanRisk #AIThreats #Deepfake #RomanceScams #AI #RecruitmentFraud #InsiderThreat #Leadership #DigitalWellness #SpycraftForWork

  • Last quarter, a multinational firm nearly wired $1.2 million to a cybercriminal. Why? Because their CEO “sent a video” authorizing it. The voice matched. The gestures were perfect. The tone? Convincing enough to override protocol.

    Only one sharp-eyed assistant noticed the lip sync was slightly off. It was a deepfake, built using public video interviews, social media clips, and off-the-shelf GenAI tools.

    The real damage?
    → 72 hours of internal chaos
    → A global PR scare they never wanted hitting the press
    → And a complete rebuild of their executive comms protocol

    Most companies are racing to use GenAI for sales, marketing, and training… But very few are asking: “What’s the attack surface we’re creating?”
    ☑ Public-facing execs?
    ☑ Long-form video content online?
    ☑ AI-powered customer service agents?

    Here’s what most companies are doing now:
    - Focusing on AI creation tools without validation layers
    - Allowing execs to be overly visible without deepfake monitoring
    - Assuming “awareness” is a substitute for response strategy

    But here’s the shift smart companies are making:
    → Embedding video integrity checks in workflows (one building block is sketched below)
    → Training staff on synthetic media indicators
    → Partnering with cybersecurity leads before publishing AI content

    #GenAI is a superpower. But without #governance, it becomes your enemy in disguise. Ask yourself: What would it cost you if someone impersonated your founder on camera? What security guardrails can you implement to protect your organization?
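    Real deepfake detection requires specialized tooling, but one simple, hedged building block for “video integrity checks” is provenance: hash every video your organization officially publishes, then check whether a clip being attributed to you matches a known-good hash. The directory and file names below are hypothetical.

    ```python
    import hashlib
    from pathlib import Path


    def sha256_of(path: Path) -> str:
        """Stream the file so large videos don't need to fit in memory."""
        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()


    # Hashes of officially published executive videos, recorded at publish
    # time. "published/" is a hypothetical directory of your released assets.
    official_hashes = {sha256_of(p) for p in Path("published").glob("*.mp4")}


    def is_known_official(clip: Path) -> bool:
        # A miss does not prove a deepfake (any re-encode changes the hash),
        # but a hit proves the bytes are exactly what you published.
        return sha256_of(clip) in official_hashes


    # Example: is_known_official(Path("inbound/suspicious_ceo_clip.mp4"))
    ```

    This only authenticates exact copies of your own assets; anything circulating as “new” footage of an executive still needs human and forensic review.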

  • View profile for Connor Swalm

    Helping MSPs humanize security awareness 🚀

    3,933 followers

One phone call could have prevented a multinational firm from losing $25.6M. Here's the story – it involves an AI tool anybody with an internet connection and a few bucks can use⤵

    An employee of the business got an email, supposedly from the organization's CFO, asking him to pay out $25.6M. They were initially suspicious that it was a phishing email… but sent the money after confirming on a Zoom call with the CFO and other colleagues he knew.

    The twist: Every other person on the Zoom was a deepfake generated by scammers.

    It might sound like a crazy story, but it's one we're going to hear more often, as long as cybersecurity practices lag behind publicly available AI. A premium subscription to Deepfakes Web costs $19/month. And the material scammers use to pull hoaxes like this is free – 62% of the world's population uses social media, which is full of:
    ✔️ Your voice
    ✔️ Your image
    ✔️ Videos of you

    But if that sounds apocalyptically scary, there's no need to panic – two straightforward cybersecurity practices could have prevented this easily:

    1. Monthly training
    Anyone who can control the movement of money in or out of accounts needs to be trained *monthly* on how to follow your security process. Don't just have them review the policy and sign off. Have them physically go through it in front of you. They need to be able to follow it in their sleep.

    2. Identity verification
    Integrate as many third-party forms of identity verification as you can stand – then double-check them before *and* during money transfers. A couple of ways to do this:

    → One-time passcode notifications
    Send an OTP code to the person asking for a money transfer and have them read it to you from their email or authenticator live on the call (a minimal sketch follows below).

    → Push notifications
    Have a security administrator ask them to verify their identity via push notification.

    I can't guarantee that these 2 steps would've sunk this scam… But the scammers would have needed:
    - Access to the work phone of whoever they were impersonating (so the phone *and* its passcode)
    - The password to that person's authenticator, or access to their email
    - At their fingertips, the moment the push notification was sent

    In short: it's possible, but not probable.

    It's overwhelming to think we can't trust what's in front of our eyes anymore. But my hope is that stories like this will empower people to level up their cybersecurity game. The best practices that will keep us safe are the same as ever – educated people and simple, secure processes. But the scams are getting more sophisticated. Make sure you're ready.

    P.S. Maybe you're wondering: "Is my company too small for me to worry about this stuff?" Answer: If more than one person is involved in receiving and sending funds to anyone for any reason at your company… it's good to start implementing these security practices now.
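    Here is a minimal sketch of that one-time-passcode idea using only the Python standard library: the verifier generates a short-lived code, delivers it over a separate channel (email or an authenticator push, assumed and stubbed out here), and the requester reads it back live on the call.

    ```python
    import secrets
    import time

    CODE_TTL_SECONDS = 120  # code expires quickly; illustrative value


    def issue_code() -> tuple[str, float]:
        """Generate a random 6-digit code and record when it was issued."""
        code = f"{secrets.randbelow(1_000_000):06d}"
        return code, time.monotonic()


    def verify_code(expected: str, issued_at: float, spoken: str) -> bool:
        """Compare what was read back on the call, constant-time, with expiry."""
        fresh = (time.monotonic() - issued_at) <= CODE_TTL_SECONDS
        return fresh and secrets.compare_digest(expected, spoken.strip())


    code, issued_at = issue_code()
    # Assumption: the code is delivered out-of-band (email, authenticator
    # push); the requester then reads it aloud on the live verification call.
    print(verify_code(code, issued_at, code))       # True if read back in time
    print(verify_code(code, issued_at, "000000"))   # wrong code -> False
    ```

    The expiry and the out-of-band delivery are the point: a deepfaked caller who never received the code has nothing valid to read back, and a stolen code goes stale in minutes.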
