🎮 AI in Gaming Just Hit a Major Ethical Milestone 💥

In a huge moment for both AI ethics and creative labor, video game voice actors and performers have voted to ratify a new contract that formally protects their likenesses and voices from AI misuse. This might not make tech headlines like GPT-5 or Gemini, but it's just as impactful, because it sets a precedent for how AI and human identity can (and must) co-exist.

🧾 What's in the New Contract?
✅ Consent Required: Studios must now get explicit actor approval before using AI to replicate their voice, movements, or likeness.
✅ Fair Compensation: If AI-generated content uses an actor's data, they must be paid, just as if they had performed it.
✅ Transparency Clauses: Studios must disclose when and how AI is being used to generate or modify performances.

🎤 Why This Is a Big Deal
Protecting Creative Identity: Performers are more than their data. They're artists. Without regulation, studios could easily train AI to mimic them, without credit or compensation.
AI Has a Memory: Once an actor's voice is in a model's training data, it can be cloned indefinitely. Without safeguards, this leads to a world where actors are "recast" by machines, forever.
A New Industry Standard? This contract could inspire other unions, from Hollywood to advertising, to adopt similar AI protection clauses. This isn't just about gaming. It's about the future of work in every creative field.

🎯 The Broader AI Conversation
As AI advances in speech synthesis, motion capture, and digital avatars, we must ask:
📌 What is the boundary between simulation and exploitation?
📌 How do we protect originality in the age of infinite reproduction?
📌 Where's the line between inspiration and impersonation?
This vote represents the first serious attempt to answer those questions at scale.

💡 My Take
I'm a strong believer in AI's potential to augment creativity, but not to replace the human essence behind it. If AI is trained on someone's voice, face, or performance, they should:
✅ Know about it
✅ Agree to it
✅ Be paid for it
This isn't just ethical; it's sustainable. Trust and transparency will drive long-term adoption of AI in industries like gaming, film, and media. The best future for AI and humans is one of co-creation, not conflict.

📣 So let's give credit where it's due:
👏 To the performers who stood up
👏 To the unions that negotiated
👏 And to the studios that listened
This is a win for human dignity in a digital world.

#AIinGaming #VoiceAI #CreativeRights #AIEthics #GenerativeAI #DeepfakePolicy #FutureOfWork #GamingIndustry #DigitalLabor #AIFairness #SyntheticMedia #TechAndHumanity 🗣️🎮🤝
Legal Protections for Creators Against AI Misuse
Explore top LinkedIn content from expert professionals.
-
A new bill was just introduced in the House that addresses #generativeai and #deepfakes from the standpoint of #rightofpublicity, in light of the rash of celebrity impersonations, revenge porn/nonconsensual intimate images, and, presumably, the upcoming election. It's essentially a proposed federal publicity right, but limited to the context of digital deepfakes, and is called the No AI FRAUD Act (No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act). I want to know whose job it is to name these bills. The acronyms folks on the Hill come up with are just bonkers. But I digress.

High points (#tldr):
1. Would apply to any person, living or dead.
2. Includes protection against unauthorized use of name, voice, and likeness, with likeness broadly defined to also include "face, likeness, or other distinguishing characteristic".
3. Rights are defined as #intellectualproperty rights as opposed to privacy rights (there is an ongoing debate as to where publicity rights fall; this bill plants the flag in the property camp).
4. Rights are descendible and transferable.
5. Authorized use requires the subject to be 18 and represented by counsel (an interesting twist) or covered by a collective bargaining agreement.
6. Includes both direct and contributory liability provisions.
7. Damages include the greater of actual damages or $50,000 per violation, plus profits of the violator above actual damages (similar to #copyright actual damages), for digital cloning; and the greater of actual damages or $5,000 per violation (plus profits) for digital voice replicas.
8. Punitive damages and attorney's fees are available.
9. Disclaimers don't get you off the hook.
10. There are #firstamendment carve-outs that consider commerciality, expressive purpose, and competition/adverse market effects.
11. Limitations on liability where harm is "negligible" (interesting side note: the definition of harm includes emotional harm) and where uses are transformative or constitute commentary on matters of public concern.
12. Various types of intimate or sexual images will lead to per se harm.
13. Four-year statute of limitations.

It is an ambitious bill and certainly includes provisions I can see being challenged. Who knows if this will go anywhere, but #congress at least has important #ai issues on its radar. #golfclap https://lnkd.in/eDS23Q_Z
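The damages provision described in point 7 is essentially a small piece of arithmetic, and a sketch can make it concrete. The snippet below is an illustrative Python rendering of the post's summary only; the function name and example figures are hypothetical, the statutory amounts are taken from the summary above, and none of this is legal advice:

```python
def estimated_damages(actual, violator_profits, violations, voice_replica=False):
    """Illustrative sketch of the damages formula as summarized in the post."""
    # Statutory floor per the summary: $50,000 per violation for digital
    # cloning, $5,000 per violation for digital voice replicas.
    statutory = (5_000 if voice_replica else 50_000) * violations
    # "Greater of actual damages or the statutory amount" ...
    base = max(actual, statutory)
    # ... plus the violator's profits above actual damages.
    return base + max(violator_profits - actual, 0)

# Hypothetical example: two digital-cloning violations, $10k in actual
# damages, $40k in violator profits.
print(estimated_damages(10_000, 40_000, 2))  # 100,000 statutory + 30,000 profits
```

Punitive damages and attorney's fees (point 8) would sit on top of this and are not modeled here.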
-
Sens. Christopher Coons (D-Del.), Marsha Blackburn (R-Tenn.), Amy Klobuchar (D-Minn.), and Thom Tillis (R-N.C.) released a discussion draft that seeks to respond to widespread concerns surrounding AI-generated content and intellectual property, reports Oma Seddiq for Bloomberg Law.

The draft legislation would hold companies and users legally accountable for the damages caused by an unauthorized AI-generated replication of an individual. It would also hold platforms liable if they knowingly hosted such unauthorized content.

The bill prohibits:
- The production of a digital replica without the consent of the applicable individual or rights holder.
- The publication, distribution, or transmission of, or otherwise making available to the public, an unauthorized digital replica, if the person engaging in that activity knows that the digital replica was not authorized by the applicable individual or rights holder.

The draft provides a private right of action, with damages being the greater of (i) $5,000 per violation or (ii) any damages suffered by the injured party as a result of the violation, plus punitive damages in the case of a willful violation.

#dataprivacy #dataprotection #AIgovernance #voiceprints #privacyFOMO
Draft bill: https://aboutbgov.com/baXc https://lnkd.in/dsUuqVQu
-
Last week, Tennessee formally enacted the Ensuring Likeness Voice and Image Security Act (brilliantly named the ELVIS Act), which grants an individual property rights over their physical likeness and voice. The legislation passed both chambers of the legislature unanimously, as it was widely seen as an effort to protect artists and the recording industry against audio deepfakes and other kinds of exploitation via AI.

Artists like Billie Eilish, Katy Perry, Nicki Minaj, Jon Batiste, and others have submitted letters to AI developers asking them to stop using AI to devalue human artists. According to Axios, the letter addresses artists' concerns such as replicating their voices, using their work to train AI models without compensation, and diluting the royalty pools paid out to them.

It's no surprise that this bill moved so quickly through the Tennessee legislature given Nashville's prominence in the music industry, which contributes nearly $10bn a year to the local economy. This bill is an early example of a 'protectionist AI' law that seeks to shield specific industries or interest groups from disruption by AI. Last year's SAG-AFTRA strike in Hollywood prominently featured the use of AI, and the final agreement defined ground rules for AI 'replicas' of union actors. We expect to see additional lobbying efforts by interest groups favoring similar protectionist industry laws across the US states as well as internationally.

Trustible's Thought Bubble: This scenario could result in a highly fragmented regulatory landscape, with some jurisdictions aiming to promote AI development and others prioritizing the protection of their established industries.

Other top insights in our recent newsletter include:
🏛 Federal agencies (NTIA, Treasury, OMB) opine on AI trust
🛡 The 3 lines of defense for AI governance
🤑 SEC cracks down on AI washing & fraud
🤖 Can machines unlearn?
Read our latest newsletter here -> https://lnkd.in/dJXeuNX9
-
Just had a call with a potential client who is launching a business that will need to engage a number of independent contractors to create content. As I normally do, I cautioned her that it is imperative that she have an agreement with the content creators to make sure they assign their rights in the content to her venture as part of the engagement. At least in the US, unless the contract specifies otherwise, the work of an independent contractor will likely not be considered "work for hire," and the work product will be viewed as only licensed to the company hiring the contractor.

The twist in this conversation was that I also told her the contract should address the use of AI in creating the content. Although the law is still evolving, the use of AI in content creation can give rise to issues of content ownership and potential infringement. The agreement should contain a prohibition on, or limitations on, the use of AI, and, prior to engagement, the company should have a good understanding of whether and how the contractor uses AI as part of the creative process. Similarly, when purchasing or licensing pre-existing content, the process should include diligence on AI use in the creative process.

For those familiar with software licensing, this is similar to the diligence and contract language related to the use of open source software. However, AI raises several unique challenges not present with open source software (unless, of course, AI was used in the creation of the open source software 😀). #ai #artificialintelligence
-
The U.S. Copyright Office has provided essential guidance regarding the registration of works containing material generated by artificial intelligence (AI). With more artists thinking about using AI as part of their creative process, this is a critical document not only for music lawyers but also for music managers who are helping their clients navigate the use of AI in music. Here are the key takeaways from the Copyright Office's policy statement (the full paper is attached below for those who are interested):

🎵 Human Authorship Requirement: Works generated exclusively by AI without human involvement do not qualify for copyright protection, as "original works of authorship" must be human-created.
🎵 Significant Human Contribution: AI-generated content that is significantly modified, arranged, or selected by a human artist may be eligible for copyright protection, but only for the human-authored parts of the work.
🎵 AI as a Tool: While AI is acknowledged as a valuable tool in the creative process, using AI does not confer authorship. The extent of creative control a human exercises over the work's output is the key factor in determining copyright eligibility.
🎵 Registration of Works with AI-generated Material: Applicants must disclose the use of AI-generated content in their copyright applications, distinguishing between human-created aspects and AI-generated content.
🎵 Correcting Prior Submissions: If a work containing AI-generated content has already been submitted without appropriate disclosure, it should be corrected to ensure the registration remains valid.
🎵 Consequences of Non-disclosure: Applicants who fail to disclose AI-generated content could face cancellation of their registration, or the registration could be disregarded in court during an infringement action.
🎵 Ongoing Monitoring: The Copyright Office continues to monitor developments in AI and copyright law, indicating the possibility of future guidance and adjustments to the policy.
#musicindustry #musicbusiness #musicpublishing #copyrightlaw
-
🚨 New U.S. Senate Bill Targets AI, Data Privacy, and Copyright Law 🚨

On July 21st, Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) introduced the AI Accountability and Personal Data Protection Act. This bill, if enacted, could reshape how companies (or even individuals) handle any kind of personal data or copyrightable material that is collected, used, processed, sold, or otherwise exploited.

Three initial thoughts on reading the bill:
🥘 1. This regulation mixes together rights of action under privacy and copyright law.
😵💫 2. The text refers to the "generation" of copyrightable works by an individual, which is an interesting term. I think "creation" would be more appropriate given the nuances around human creation vs. machine generation in the age of gen AI.
🧾 3. For copyright owners, registration isn't necessary to bring a valid claim under this bill. That's a HUGE shift from current copyright law, which requires a valid registration (ideally obtained prior to the acts of infringement) to bring suit in federal court and take advantage of statutory damages and attorneys' fees recoupment.

Here's a breakdown of the attached bill...

🔍 What it's trying to do:
👉🏻 Create a federal right to sue for unauthorized use of personal data or copyrightable works, including data used to train, or generated by, AI.
👉🏻 Require explicit, informed consent before collecting or sharing personal data.
👉🏻 Prohibit forced arbitration and class-action waivers for violations.
👉🏻 Apply to AI-generated content that imitates or derives from an individual's data.

💡 Why it matters:
This bill introduces significant compliance and litigation risks for data-driven and AI platforms. Consent mechanisms, third-party disclosures, and AI training practices may all need to evolve. It will also be interesting to see how a bill like this fits into any future government plans for fast-paced AI innovation initiatives.
#AI #Privacy #copyright #data #regulation #DataProtection #GenerativeAI #TechPolicy #LegalTech #Compliance #IP
-
The bipartisan "No Fakes Act" of 2023 seeks to standardize rules around the use of digital replicas of faces, names, and voices. The bill, sponsored by Sens. Chris Coons, Marsha Blackburn, Amy Klobuchar, and Thom Tillis, aims to prevent the unauthorized production of such replicas, with exceptions for news, public affairs, sports broadcasts, documentaries, biographical works, parodies, satire, and criticism. These rights persist throughout an individual's lifetime and extend 70 years posthumously for their estate.

Some states (like New York) have specific regulations concerning digital replicas, and California has a bill pending, but the "No Fakes Act" proposes a federal approach. The Recording Industry Association of America (RIAA) supports the bill, emphasizing the distinction between AI as a creative tool and potential infringement. The Human Artistry Campaign echoes this sentiment, pointing out the risks of AI misappropriating copyrighted material and artist likenesses.

If there is one bit of IP or AI regulation that I think everyone can agree upon, it is protection against the use of a person's name, image, video, or voice without their consent (and compensation, where appropriate). Perhaps one day we'll add "AI essence" to this list. It's science fiction for now, but my kids already asked me if I'm building an AI chat version of myself that can speak to future generations. Would an AI model of me be eligible for IP protection?

'The "No Fakes Act": Addressing AI Replicas in Entertainment' #ai #artificialintelligence #GenerativeAI
-
Today the Generative AI Copyright Disclosure Act was introduced by Adam Schiff, and it's a great step towards fairer data practices in gen AI.

- AI companies will have to disclose to the Copyright Office "a sufficiently detailed summary of any copyrighted works used" to train their models
- Disclosure is required 30 days before model release
- Disclosure is required every time the training data changes significantly
- The requirement also applies to previously released models
- There will be a public database of these disclosures
- There are fines for failure to comply

Companies hiding training data sources is the main reason you don't see even more copyright lawsuits against gen AI companies. Requiring data transparency from gen AI companies will level the playing field for creators and rights holders who want to use copyright law to defend themselves against exploitation. The public database is particularly important: it means anyone should be able to see whether their copyrighted work has been used by a generative AI model.
-
Dr. Saju Skaria's Weekly Reflections: 20/2023
Intellectual Property (IP) Issues and Generative AI

The use of generative AI has taken the industry by storm. I thought it imperative to touch on this critical issue, i.e., intellectual property, before we close the discussions on AI. AI technology, leveraging data lakes and query snippets to uncover patterns and relationships, is helping immensely in creative industries. However, we have a critical issue that the legal fraternity is trying to address: copyright infringement, ownership of AI-generated work, and the use of unlicensed content in training data.

Trained AI tools can replicate copies of original work (for example, paintings or photographs), which is copyright infringement. A further challenge is that users might create copies of the original that are not sufficiently transformative, thereby undermining the credibility of the original work. These unauthorized derivatives can attract significant penalties, and the courts are already dealing with such issues.

There is significant debate around the "fair use doctrine," which allows use of copyrighted material without the owner's permission "for purposes such as criticism (including satire), comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research," and for transformative use of the copyrighted material in a manner for which it was not intended.

A word of caution for companies on how to use generative AI and leverage content: it's a tightrope walk. Even accidentally using copyrighted content, directly or unintentionally, without adequate protection can result in significant penalties.

How can we reduce the risk of getting caught in an IP violation? Here are a few recommended steps:
1. AI developers (individuals/organizations) must ensure that they comply with the law regarding acquiring data to train their models.
2. Creators, both individual content creators and brands that create content, should take steps to examine risks to their intellectual property portfolios and protect them.
3. Businesses should evaluate their transaction terms to write protections into contracts. As a starting point, they should demand terms of service from generative AI platforms that confirm proper licensure of the training data that feeds their AI.

Finally, with appropriate protection, businesses can build portfolios of works and branded materials, meta-tag them, and train their generative AI platforms to produce authorized, proprietary (paid-up or royalty-bearing) goods as sources of instant revenue streams. I welcome your thoughts and views on the topic.

#AI #Leadership Bharat Amin, NACD.DC ML Kabir Sandeep (Sandy) M. Krishnan CA Randhir Mazumdar Dr. Swati Karve, PhD Psychology Ashish Saxena Shiny Skaria