"Disinformation campaigns aimed at undermining electoral integrity are expected to play an ever larger role in elections due to the increased availability of generative artificial intelligence (AI) tools that can produce high-quality synthetic text, audio, images and videos and their potential for targeted personalization. As these campaigns become more sophisticated and manipulative, the foreseeable consequence is further erosion of trust in institutions and heightened disintegration of civic integrity, jeopardizing a host of human rights, including electoral rights and the right to freedom of thought. These developments are occurring at a time when the companies that create the fabric of digital society should be investing heavily in, but instead are dismantling, the “integrity” or “trust and safety” teams that counter these threats. Policy makers must hold AI companies liable for the harms caused or facilitated by their products that could have been reasonably foreseen. They should act quickly to ban using AI to impersonate real people or organizations, and require the use of watermarking or other provenance tools to allow people to differentiate between AI-generated and authentic content." By David Evan Harris and Aaron Shull of the Centre for International Governance Innovation (CIGI).
The Effects of Deepfakes on Democratic Processes
-
Earlier this year, I made a confident prediction: deep fakes and AI would not have nearly as big an effect on this election as the doomsayers were portending. My reasoning seemed sound. Doomsayers have often been wrong in predicting the destruction new media would cause to democracy, civil order, and truth. But I was only partially right.

A deep fake or AI-manipulated image hasn’t meaningfully moved the needle in this election, but the haunting specter of this technology threatens to. Former President Trump has been dismissing visuals he dislikes as deep fakes or AI manipulation, and with a deeply media-illiterate public that is prone to confirmation bias, he’s kind of getting away with it. As Vice President Harris has drawn larger crowds, Trump has started to claim that the crowds aren’t really there at all. “She AI-ed it,” Trump wrote on Truth Social, speaking of the crowds at an event VP Harris held in Detroit outside an airplane hangar in August.

This strategy isn’t new. Trump has been using the “it’s just AI” defense since December 2023, when the Lincoln Project aired an ad against him. His recent claims about crowd sizes, however, seem to be gaining more steam than previous lies. Last Friday, Trump even declared that a photo he had once acknowledged as real (one from 1987 showing him with journalist E. Jean Carroll, whom he was later found liable for sexually abusing and defaming) was now an AI fabrication.

Law professors Robert Chesney and Danielle Citron coined a term for this phenomenon in a 2018 paper for the California Law Review. They call it the “liar’s dividend.” “Imagine a situation in which an accusation is supported by genuine video or audio evidence,” the professors wrote six years ago. “As the public becomes more aware of the idea that video and audio can be convincingly faked, some will try to escape accountability for their actions by denouncing authentic video and audio as deep fakes.”
Of course, contesting the crowd size of a rally four months ahead of election day might seem like small potatoes, but it seems to me that this could be fairly strategic. Donald Trump is pilot-testing the efficacy of the liar’s dividend, and, at least within his own base, it seems to be working. At last night’s presidential debate, Trump once again asserted that he was the real winner of the 2020 election. He also planted a seed for having election results certified by the legislature, insisting that’s what should have happened four years ago. We already know that, even without the specter of deep fakes and despite losing an election, Trump can incite his supporters to violence (see: January 6, 2021). The question we ought to be asking ourselves is this: if Trump can play this sleight of hand with his followers today, what will he convince them of in the future?
-
Big AI development! California’s AG’s office has released its first legal advisory on how existing state laws apply to AI, and it’s full of gems! It gives explicit answers to many elements of the long-running debate about whether we need new laws for AI or whether existing laws already apply. It specifically mentions two new laws that I helped pass last year with the California Initiative for Technology and Democracy (CITED), AB 2655 (Berman) & AB 2839 (Pellerin), and one that I publicly supported and encouraged Governor Gavin Newsom to sign (SB 942 - Becker).

Some highlights: “...it may be unlawful under CA's Unfair Competition Law to:
• Use AI to foster or advance deception... the creation of deepfakes, chatbots, and voice clones that appear to represent people, events, and utterances that never existed or occurred would likely be deceptive. Likewise, in many contexts it would likely be deceptive to fail to disclose that AI has been used to create a piece of media.
• Use AI to create and knowingly use another person’s name, voice, signature, photograph, or likeness without that person’s prior consent…
• Use AI to impersonate a real person for purposes of harming, intimidating, threatening, or defrauding…
• Use AI to impersonate a government official…
“Businesses may also be liable for supplying AI products when they know, or should have known, that AI will be used to violate the law…"

Specifically on election disinfo, the AG says: “CA law prohibits the use of undeclared chatbots with the intent to mislead a person about its artificial identity in order to incentivize a purchase or influence a vote… It is also impermissible to use AI to impersonate a candidate for elected office… and to use AI to distribute... materially deceptive audio or visual media…

“...in Election and Campaign Materials:
• AB 2355 (Carrillo) requires any campaign ads generated... using AI to include the... disclosure: “Ad generated or substantially altered using artificial intelligence.”
• AB 2655 (Berman) requires that large online platforms... develop and implement procedures using state-of-the-art techniques to identify and remove certain materially deceptive election-related content—deepfakes—during specified periods before and after elections in CA. It also requires certain additional content be labeled as manipulated, inauthentic, fake, or false... must provide an easy mechanism for CA users to report the prohibited materials…”

On watermarking/provenance: "SB 942... places obligations on AI developers... to make free and accessible tools to detect whether specified content was generated by generative AI systems.”

On liability: “...CA laws—including tort, public nuisance, environmental and business regulation, and criminal law—apply equally to AI systems and to conduct and business activities that involve the use of AI...”

Big thanks to State of California Attorney General Rob Bonta and his dedicated team for pulling this together! #AI #California
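To make the SB 942 idea concrete: the law contemplates developers shipping free tools that can check whether a given piece of content carries valid provenance from their generative systems. Below is a deliberately simplified, hypothetical sketch of that pattern using an HMAC-signed manifest. Real provenance schemes (such as C2PA Content Credentials) use signed manifests with certificate chains, not a shared secret; the key, manifest fields, and function names here are all illustrative assumptions, not any vendor's actual API.

```python
# Toy provenance check in the spirit of SB 942-style detection tools.
# All names and the shared-secret design are hypothetical/illustrative.
import hashlib
import hmac

PROVIDER_KEY = b"demo-signing-key"  # hypothetical secret held by the AI provider


def sign_manifest(content: bytes, generator: str) -> dict:
    """Provider side: attach a provenance manifest to AI-generated content."""
    digest = hashlib.sha256(content).hexdigest()
    tag = hmac.new(PROVIDER_KEY,
                   digest.encode() + generator.encode(),
                   hashlib.sha256).hexdigest()
    return {"sha256": digest, "generator": generator, "tag": tag}


def verify_manifest(content: bytes, manifest: dict) -> bool:
    """Detection-tool side: does the manifest genuinely match this content?"""
    digest = hashlib.sha256(content).hexdigest()
    if digest != manifest["sha256"]:
        return False  # content was altered after signing
    expected = hmac.new(PROVIDER_KEY,
                        digest.encode() + manifest["generator"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["tag"])


img = b"...synthetic image bytes..."
manifest = sign_manifest(img, "example-image-model")
print(verify_manifest(img, manifest))         # True: provenance intact
print(verify_manifest(img + b"x", manifest))  # False: content was tampered with
```

The sketch also shows the core limitation the post's liability discussion hints at: provenance only proves what was signed, so stripped or never-attached manifests still leave detection to other means.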
-
AI PR Nightmares Part 2: When AI Clones Voices, Faces, and Authority.

What Happened: Last week, a sophisticated AI-driven impersonation targeted White House Chief of Staff Susie Wiles. An unknown actor, using advanced AI-generated voice cloning, began contacting high-profile Republicans and business leaders, posing as Wiles. The impersonator requested sensitive information, including lists of potential presidential pardon candidates, and even cash transfers. The messages were convincing enough that some recipients engaged before realizing the deception. Wiles’ personal cellphone contacts were reportedly compromised, giving the impersonator access to a network of influential individuals.

This incident underscores a growing threat: AI-generated deepfakes are becoming increasingly realistic and accessible, enabling malicious actors to impersonate individuals with frightening accuracy. From cloned voices to authentic-looking fabricated videos, the potential for misuse spans politics, finance, and far beyond. And it needs your attention now.

🔍 The Implications for PR and Issues Management: As AI-generated impersonations become more prevalent, organizations must proactively address the associated risks as part of their ongoing crisis planning. Here are key considerations:
1. Implement New Verification Protocols: Establish multi-factor authentication for communications, especially those involving sensitive requests. Encourage stakeholders to verify unusual requests through secondary channels.
2. Educate Constituents: Conduct training sessions to raise awareness about deepfake technologies and the signs of AI-generated impersonations. An informed network is a critical defense.
3. Develop a Deepfakes Crisis Plan: Prepare for potential deepfake incidents with a clear action plan, including communication strategies to address stakeholders and the public promptly.
4. Monitor Digital Channels: Use your monitoring tools to detect unauthorized use of your organization’s or executives’ likenesses online. Early detection and action can mitigate damage.
5. Collaborate with Authorities: In the event of an impersonation, work closely with law enforcement and cybersecurity experts to investigate and respond effectively.

The rise of AI-driven impersonations is not a distant threat; it’s a current reality, and it will only get worse as the tech becomes more sophisticated. If you want to think and talk more about how to prepare for this and other AI-related PR and issues-management topics, follow along with my series, or DM me if I can help your organization prepare or respond.
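The first recommendation above, verifying sensitive requests through a secondary channel, can be sketched in code. This is a minimal illustrative example, not a production protocol: the class, request IDs, and flow are hypothetical. The idea is that a request arriving on one channel (say, a phone call in a cloned voice) is never acted on until a one-time code, sent to a pre-registered second channel, comes back within a time window.

```python
# Minimal sketch of out-of-band verification for sensitive requests.
# Names and flow are hypothetical; a real deployment would add logging,
# rate limiting, and a vetted channel registry.
import secrets
import time

CHALLENGE_TTL = 300  # seconds a challenge stays valid


class OutOfBandVerifier:
    def __init__(self):
        self._pending = {}  # request_id -> (code, issued_at)

    def issue_challenge(self, request_id: str) -> str:
        """Generate a one-time code to send via the requester's
        pre-registered second channel (SMS, known email, etc.)."""
        code = secrets.token_hex(4)
        self._pending[request_id] = (code, time.time())
        return code

    def confirm(self, request_id: str, code: str) -> bool:
        """Act on the request only if the code comes back correct,
        in time, and exactly once."""
        entry = self._pending.pop(request_id, None)  # single-use
        if entry is None:
            return False
        expected, issued = entry
        if time.time() - issued > CHALLENGE_TTL:
            return False
        return secrets.compare_digest(expected, code)


verifier = OutOfBandVerifier()
code = verifier.issue_challenge("wire-transfer-4417")
print(verifier.confirm("wire-transfer-4417", code))  # True: verified out of band
print(verifier.confirm("wire-transfer-4417", code))  # False: code is single-use
```

The design choice worth noting is that the attacker would need to compromise both channels at once; a cloned voice on the phone alone is no longer sufficient to authorize the request.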
-
We can't be surprised by this, and content moderation will only go so far when trying to mitigate this kind of disinformation. This will complicate complying with (and auditing for) the Digital Services Act.

"Days before a pivotal national election in Slovakia last month, a seemingly damning audio clip began circulating widely on social media. A voice that sounded like the country’s Progressive party leader, Michal Šimečka, described a scheme to rig the vote, in part by bribing members of the country’s marginalized Roma population."

"Rapid advances in artificial intelligence have made it easy to generate believable audio, allowing anyone from foreign actors to music fans to copy somebody’s voice — leading to a flood of faked content on the web, sowing discord, confusion and anger."

"On Thursday, a bipartisan group of senators announced a draft bill, called the No Fakes Act, that would penalize people for producing or distributing an AI-generated replica of someone in an audiovisual or voice recording without their consent."

"Social media companies also find it difficult to moderate AI-generated audio because human fact-checkers often have trouble spotting fakes. Meanwhile, few software companies have guardrails to prevent illicit use."

"In countries where social media platforms may essentially stand in for the internet, there isn’t a robust network of fact-checkers operating to ensure people know a viral sound clip is a fake, making these foreign-language deepfakes particularly harmful."

#disinformation #deepfake #aiethics Ryan Carrier, FHCA, Manon van Rietschoten, Dr. Benjamin Lange, Maurizio Donvito, Mark Cankett https://lnkd.in/daRx25sf