
Dealing with deepfakes

Video-editing software displays a clip of Barack Obama, with facial-tracking markers visible on his face, a technique commonly used in creating deepfakes. Photo: AP

A deepfake is a piece of media that a machine has produced using deep learning and that contains false information. It pays to keep this definition, however vague, in front of us because of the way deepfakes distort reality and attempt to persuade us that something false is true.

An ‘upgrade’ from Photoshop

On May 28, the wrestlers protesting peacefully in New Delhi were tackled to the ground, arrested, and loaded into a van to keep them from disrupting the inauguration of the Parliament building. Shortly after, a photo appeared to show four of the beleaguered wrestlers posing with wide smiles for a selfie in the van.

If you had believed the photo to be real, you might also have believed that the
wrestlers had orchestrated a clash with the police and that they wanted to be
photographed while being ‘roughed up’. This is what the person who created the
photo may have intended. Though it emerged later that this photo had been
morphed, and was not a deepfake, creating such visuals has become child’s play.
Deepfaking is a significant ‘upgrade’ from photoshopping images as it transcends
the limits of human skill. Here, machines iteratively process large amounts of data
to falsify images and videos, sometimes in real time, and with fewer imperfections.

Deepfake images and videos thus have an unsettling legacy. People worldwide have
already used the technology to create a video of Barack Obama verbally abusing
Donald Trump, hack facial recognition software, manufacture ‘revenge porn’, etc.
On May 22, a deepfake image purporting to show a towering column of dark smoke
rising from the Pentagon received sober coverage from a few Indian television
news channels. The image was soon found to have been machine-made.

As with other modern technologies set on the information superhighway, there is no way for us to go back to a time when people didn’t have the tools to falsify media elements at scale. Alongside deepfaked images and videos, we have chatbots that mimic intelligence, but we can’t tell the difference when they make a mistake. This leads some to believe certain information to be ‘true’ simply because a machine gave it to them.

Then again, these tools have also been used for good. Using deep learning, the ALS Association in the U.S. founded a “voice cloning initiative” to restore the voices of those who had lost them to amyotrophic lateral sclerosis. Deep learning has also been
adapted in comedy, cinema, music, and gaming. Experts have recreated the voices
and/or visuals of visual artist Andy Warhol, celebrity chef Anthony Bourdain, and
rapper Tupac Shakur, among others, enhancing our ability to understand, and even
reinterpret, history (although some of these attempts haven’t been free of
controversy).

Redeemable technology

As such, despite its potential to rupture the social fabric, deep learning is entirely redeemable, just like the kitchen knife or the nuclear reactor. The focus, as usual, must be on how we wield it. This is also the question that generative
artificial intelligence like ChatGPT has been forcing us to ask. The major technology
companies behind ChatGPT et al seem to have been driven by ‘can we do this?’
rather than ‘should we do this?’, although not without exceptions.

Our still-evolving experience with solar geoengineering offers a useful, if imperfect, parallel. Solar geoengineering involves modifying the climate to be favourable over one part of the planet, by blocking sunlight, in a way that invariably has planet-wide consequences. Many scientists agree that this is dangerous and
have called for a moratorium on the use of this technology and for international
cooperation led, if required, by a treaty.

Clumsy though it may seem, deepfakes merit a similar response: laws that regulate their use, punish bad-faith actors, and keep the door open for democratic inputs
to guide the future of such a powerful technology. A good starting point could be
what political philosopher Adrienne de Ruiter wrote in 2021, which is to protect
against the “manipulation of hyper-realistic digital representations of our image
and voice.” This, she said, “should be considered a fundamental moral right in the
age of deepfakes”. And a stepping stone for us, as individuals, is to become more
scientifically, digitally, and public-spiritedly literate. Then, we will be able to look
past an implausible photo and bring to light its concealed creator.

For now, China has responded the strongest among all countries. It has banned
deepfaked visuals whose creators don’t have permission to modify the original
material and which aren’t watermarked accordingly. The success of this policy is no
doubt assured by the country’s existing surveillance network. Every measure short
of this requires at least an ampoule of self-restraint. And that is rooted in the kind
of people that we are.


Deepfake technology: how and why China is planning to regulate it



What is deep synthesis technology and how is it being used to spread disinformation? What are the new guidelines being rolled out by the Cyberspace Administration of China? How is Canada in a unique position to lead the initiatives to counter deepfakes?

ABHISHEK CHATTERJEE

The story so far:

The Cyberspace Administration of China, the country’s cyberspace watchdog, is rolling out new regulations, to be effective from January 10, to restrict the use of
deep synthesis technology and curb disinformation. Deep synthesis is defined as
the use of technologies, including deep learning and augmented reality, to generate
text, images, audio and video to create virtual scenes. One of the most notorious
applications of the technology is deepfakes, where synthetic media is used to swap
the face or voice of one person for another. Deepfakes are getting harder to detect as the technology advances. They are used to generate celebrity porn videos, produce fake news, and commit financial fraud, among other wrongdoings. Under
the guidelines of China’s new rules, companies and platforms using the technology
must first receive consent from individuals before they edit their voice or image.

What is a deepfake?

Deepfakes are a compilation of artificial images and audio put together with machine-learning algorithms to spread misinformation and replace a real person’s appearance, voice, or both with similar artificial likenesses or voices. They can create people who do not exist, and they can fake real people saying and doing things they did not say or do.

The term deepfake originated in 2017, when an anonymous Reddit user who called himself “Deepfakes” manipulated Google’s open-source deep-learning technology to create and post pornographic videos. The videos were doctored with a technique known as face-swapping. The user “Deepfakes” replaced real faces with
celebrity faces. Deepfake technology is now being used for nefarious purposes like
scams and hoaxes, celebrity pornography, election manipulation, social
engineering, automated disinformation attacks, identity theft and financial fraud,
cybersecurity company Norton said in a blog.

Deepfake technology has been used to impersonate former U.S. Presidents Barack
Obama and Donald Trump, India’s Prime Minister Narendra Modi, Facebook chief
Mark Zuckerberg and Hollywood celebrity Tom Cruise. China’s new rule aims to
combat the use of deepfake for spreading disinformation.

What is China’s new policy to curb deepfakes?

The policy requires deep synthesis service providers and users to ensure that any
doctored content using the technology is explicitly labelled and can be traced back
to its source, the South China Morning Post reported. The regulation also mandates that people using the technology to edit someone’s image or voice notify and obtain the consent of the person in question. When reposting news made by the
technology, the source can only be from the government-approved list of news
outlets. Deep synthesis service providers must also abide by local laws, respect
ethics, and maintain the “correct political direction and correct public opinion
orientation”, according to the new regulation.
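To make the labelling requirement concrete, here is a minimal sketch of how a platform might stamp a visible provenance notice onto a synthetic image. It is an illustrative assumption using the Pillow imaging library, not anything the regulation prescribes; the file names and label text are made up for the example.

```python
# Illustrative sketch: stamping a visible "AI-generated" notice onto a
# synthetic image, in the spirit of the labelling the regulation mandates.
# File names and the notice text are assumptions for this example.
from PIL import Image, ImageDraw

def label_synthetic_image(src_path: str, dst_path: str,
                          notice: str = "AI-generated content") -> None:
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    # Draw a solid banner along the bottom edge so the provenance
    # notice stays visible wherever the image is reposted.
    draw.rectangle([0, img.height - 40, img.width, img.height], fill=(0, 0, 0))
    draw.text((10, img.height - 30), notice, fill=(255, 255, 255))
    img.save(dst_path)

# Hypothetical usage: label a generated image before distribution.
label_synthetic_image("synthetic.png", "synthetic_labelled.png")
```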

Why has such a policy been implemented?

China’s cyberspace watchdog said it was concerned that unchecked development and use of deep synthesis could lead to its use in criminal activities like online scams or defamation, according to a report by the South China Morning Post. The country’s recent move aims to curb the risks that might arise from platforms which use deep learning or virtual reality to alter online content. If
successful, China’s new policies could set an example and lay down a policy
framework that other nations can follow.

What are other countries doing to combat deepfakes?


The European Union has an updated Code of Practice to stop the spread of
disinformation through deepfakes. The revised Code requires tech companies, including Google, Meta, and Twitter, to take measures to counter deepfakes and fake accounts on their platforms. They have six months to implement their measures once they have signed up to the Code. If found non-compliant, these companies can face fines of as much as 6% of their annual global turnover, according to the updated Code. Introduced in 2018, the Code of Practice on Disinformation brought together worldwide industry players for the first time to commit to countering
disinformation.

The Code of Practice was signed in October 2018 by online platforms Facebook,
Google, Twitter and Mozilla, as well as by advertisers and other players in the
advertising industry. Microsoft joined in May 2019, while TikTok signed the Code in
June 2020. However, the assessment of the Code revealed important gaps and
hence the Commission has issued a Guidance on updating and strengthening the
Code in order to bridge the gaps. The Code’s revision process was completed in
June 2022.

In July last year, the U.S. introduced the bipartisan Deepfake Task Force Act to assist the Department of Homeland Security (DHS) in countering deepfake technology. The measure directs the DHS to conduct an annual study of deepfakes: assess the technology used, track its use by foreign and domestic entities, and identify available countermeasures.

Some States in the United States such as California and Texas have passed laws
that criminalise the publishing and distributing of deepfake videos that intend to
influence the outcome of an election. The law in Virginia imposes criminal penalties
on the distribution of nonconsensual deepfake pornography.

In India, however, there are no legal rules specifically against deepfake technology. Misuse of the technology can nonetheless be addressed under existing laws covering copyright violation, defamation, and cybercrimes.

Does this technology disrupt the right to privacy?

While Canada does not have any regulations to tackle deepfakes, it is in a unique
position to lead the initiative to counter deepfakes. Within Canada, some of the
most cutting-edge AI research is being conducted by the government with a
number of domestic and foreign actors. Furthermore, Canada is a member and
leader in many related multilateral initiatives like the Paris Call for Trust and
Security in Cyberspace, NATO Cooperative Cyber Defence Centre of Excellence and
the Global Partnership on Artificial Intelligence. It can use these forums to
coordinate with global and domestic actors to create deepfake policy in different
areas.

The danger of deepfakes

New menace: Having taken cognizance of the issue, almost all social media platforms have some policy for deepfakes. Photo: iStock

What are the ways in which AI-manipulated digital media can impact the lives
of individuals as well as influence the public discourse? How is it employed by
various groups and how can society overcome the ‘infodemic’?

ASHISH JAIMAN

EXPLAINER

Disinformation and hoaxes have evolved from mere annoyance to warfare that can
create social discord, increase polarisation, and, in some cases, even influence election outcomes. Nation-state actors with geopolitical aspirations, ideological
believers, violent extremists, and economically motivated enterprises can
manipulate social media narratives with easy and unprecedented reach and scale.
The disinformation threat has a new tool in the form of deepfakes.

What are deepfakes?

Deepfakes are digital media (video, audio, and images) edited and manipulated using artificial intelligence; the result is basically hyper-realistic digital falsification. Deepfakes are created to inflict harm on individuals and institutions. Access to commodity cloud computing, publicly available AI research algorithms, abundant data, and the wide availability of media have created a perfect storm that democratises the creation and manipulation of media. This synthetic media content is referred to as deepfakes.

Artificial Intelligence (AI)-generated synthetic media, or deepfakes, have clear benefits in certain areas, such as accessibility, education, film production, criminal
forensics, and artistic expression. However, as access to synthetic media technology
increases, so does the risk of exploitation. Deepfakes can be used to damage
reputation, fabricate evidence, defraud the public, and undermine trust in
democratic institutions. All this can be achieved with fewer resources, with scale
and speed, and even micro-targeted to galvanise support.

Who are the victims?

The first case of malicious use of deepfakes was detected in pornography. According to sensity.ai, 96% of deepfakes are pornographic videos, with over 135 million views on pornographic websites alone. Deepfake pornography exclusively targets women. Pornographic deepfakes can threaten, intimidate, and inflict psychological harm. They reduce women to sexual objects, causing emotional distress and, in some cases, leading to financial loss and collateral consequences like job loss.

Deepfakes can depict a person indulging in antisocial behaviours and saying vile things that they never did. Even if the victim can debunk the fake via an alibi or otherwise, that fix may come too late to remedy the initial harm.

Deepfakes can also cause short-term and long-term social harm and accelerate the
already declining trust in traditional media. Such erosion can contribute to a culture
of factual relativism, fraying the increasingly strained civil society fabric.

Deepfakes could act as a powerful tool for a malicious nation-state to undermine public safety and create uncertainty and chaos in the target country. Deepfakes can undermine trust in institutions and diplomacy. They can be used by non-state actors, such as insurgent groups and terrorist organisations, to show their adversaries making inflammatory speeches or engaging in provocative actions to stir anti-state sentiments among people.

Another concern from deepfakes is the liar’s dividend: an undesirable truth is dismissed as a deepfake or fake news. The mere existence of deepfakes gives more credibility to denials. Leaders may weaponise deepfakes and use fake news and alternative-facts narratives to dismiss an actual piece of media and truth.

What is the solution?

Media literacy efforts must be enhanced to cultivate a discerning public. Media literacy for consumers is the most effective tool to combat disinformation and deepfakes.

We also need meaningful regulation, developed through collaborative discussion among the technology industry, civil society, and policymakers, to produce legislative solutions that disincentivise the creation and distribution of malicious deepfakes.

Social media platforms are taking cognizance of the deepfake issue, and almost all
of them have some policy or acceptable terms of use for deepfakes. We also need
easy-to-use and accessible technology solutions to detect deepfakes, authenticate
media, and amplify authoritative sources.
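As one small illustration of what ‘authenticate media’ could mean in practice, the sketch below signs a media file’s hash so a recipient can check that it has not been altered. The key and file name are hypothetical, and real provenance systems are considerably more elaborate; this is only a sketch of the idea.

```python
# A minimal sketch of media authentication: a publisher signs a file's
# SHA-256 hash so others can verify the file is unaltered. The secret
# key and file name below are assumptions for this example.
import hashlib
import hmac

SECRET_KEY = b"publisher-signing-key"  # hypothetical signing secret

def sign_media(path: str) -> str:
    digest = hashlib.sha256(open(path, "rb").read()).digest()
    return hmac.new(SECRET_KEY, digest, hashlib.sha256).hexdigest()

def verify_media(path: str, signature: str) -> bool:
    # Constant-time comparison guards against timing attacks.
    return hmac.compare_digest(sign_media(path), signature)

tag = sign_media("broadcast_clip.mp4")          # publisher side
print(verify_media("broadcast_clip.mp4", tag))  # consumer side: True if intact
```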

To counter the menace of deepfakes, we all must take the responsibility to be critical consumers of media on the Internet, think and pause before we share on social media, and be part of the solution to this ‘infodemic’.

THE GIST

Deepfakes are digital media (video, audio, and images) edited and manipulated using artificial intelligence; the result is basically hyper-realistic digital falsification.

AI-generated synthetic media, or deepfakes, have clear benefits in certain areas, such as accessibility, education, film production, criminal forensics, and artistic expression.

Collaborative actions and collective techniques across legislative regulations, platform policies, technology countermeasures, and media literacy approaches are a few of the ways in which the deepfake threat can be mitigated.

Voice deepfakes: how they are generated, used, misused and differentiated


What was the controversy surrounding the ‘voice cloning’ service provider,
ElevenLabs? What are the potential threats around artificial speech
synthesis? Can audio deepfakes be detected? What is the concern regarding
this technology and the creative industry?

ABHISHEK CHATTERJEE

The story so far:

On January 29, several users of the social media platform 4chan used the “speech synthesis” and “voice cloning” service provider ElevenLabs to make voice deepfakes of celebrities like Emma Watson, Joe Rogan, and Ben Shapiro. These deepfake audios made racist, abusive, and violent comments. Making deepfake
voices to impersonate others without their consent is a serious concern that could
have devastating consequences. In response to such use of their software,
ElevenLabs tweeted saying, “While we see our tech being overwhelmingly applied to
positive use, we also see an increasing number of voice cloning misuse cases.”

What are voice deepfakes?

A voice deepfake is one that closely mimics a real person’s voice. The voice can
accurately replicate tonality, accents, cadence, and other unique characteristics of
the target person. People use AI and robust computing power to generate such
voice clones or synthetic voices. Sometimes it can take weeks to produce such
voices, according to Speechify, a text-to-speech conversion app.

How are voice deepfakes created?

To create deepfakes, one needs a high-end computer with powerful graphics cards or the leverage of cloud computing power. Powerful computing hardware can accelerate the process of rendering, which can take hours, days, or even weeks, depending on the process. Besides specialised tools and software, generating deepfakes needs training data to be fed to AI models. This data often consists of original recordings of the target person’s voice. AI can use this data to render an authentic-sounding voice, which can then be used to say anything.
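For a sense of how compact such a pipeline has become, here is a minimal sketch using the open-source Coqui TTS library and its XTTS voice-cloning model; the reference recording, text, and output path are assumptions for illustration, and details of the API may differ across versions.

```python
# A minimal voice-cloning sketch with the open-source Coqui TTS library.
# "reference.wav" stands in for the original recordings of the target
# person's voice mentioned above; all paths and text are illustrative.
from TTS.api import TTS

# Load a pretrained multilingual voice-cloning model (downloaded on first use).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# Render arbitrary text in a voice conditioned on the reference sample.
tts.tts_to_file(
    text="This sentence was never actually spoken by the target person.",
    speaker_wav="reference.wav",    # hypothetical clean voice sample
    language="en",
    file_path="cloned_output.wav",
)
```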

What are the threats arising from the use of voice deepfakes?

Attackers are using such technology to defraud users, steal their identity, and to
engage in various other illegal activities like phone scams and posting fake videos
on social media platforms.

According to one of Speechify’s blog posts, back in 2020, a manager from a bank in the UAE received a phone call from someone he believed was a company director. The manager recognised the voice and authorised a transfer of $35 million. He had no idea that the company director’s voice had been cloned.

In another instance, fraudsters used AI to mimic a business owner’s voice, directing the CEO of a UK-based energy firm to immediately transfer around $243,000 to the bank account of a Hungarian supplier of the company. The voice belonged to a fraudster who spoofed the CEO, The Wall Street Journal reported in 2019.

Voice deepfakes used in filmmaking have also raised ethical concerns about the use
of the technology. Morgan Neville’s documentary film on the late legendary chef
Anthony Bourdain used voice-cloning software to make Bourdain say words he
never spoke. This sparked criticism.

Clear recordings of people’s voices are becoming easier to obtain, through recorders, online interviews, and press conferences. Voice-capture technology is also improving, making the data fed to AI models more accurate and leading to more believable deepfake voices. This could lead to scarier situations, Speechify highlighted in its blog.

What tools are used for voice cloning?

Microsoft’s VALL-E, My Own Voice, Resemble, Descript, Respeecher, and iSpeech are some of the tools that can be used for voice cloning. Respeecher is the software used by Lucasfilm to recreate Luke Skywalker’s voice in The Mandalorian.

What are the ways to detect voice deepfakes?

Detecting voice deepfakes needs highly advanced technologies, software, and hardware to break down speech patterns, background noise, and other elements. Cybersecurity tools have yet to create foolproof ways to detect audio deepfakes, Speechify noted.

Research labs use watermarks and blockchain technologies to detect deepfakes, but the tech designed to outsmart deepfake detectors is constantly evolving, Norton said in a blog post.

Programmes like Deeptrace are helping to provide protection. Deeptrace uses a combination of antivirus and spam filters that monitor incoming media and quarantine suspicious content, Norton noted.

Last year, researchers at the University of Florida developed a technique to measure acoustic and fluid dynamic differences between original voice samples of humans and those generated synthetically by computers. They estimated the arrangement of the human vocal tract during speech generation and showed that deepfakes often model impossible or highly unlikely anatomical arrangements.
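By way of illustration only, a detector along broadly similar lines (a generic spectral-feature classifier, not the University of Florida vocal-tract method) can be prototyped in a few lines: extract acoustic features from labelled clips and fit a simple model. The file names and labels below are hypothetical.

```python
# A generic sketch of feature-based audio-deepfake detection: extract
# MFCC spectral features per clip and train a simple classifier.
# This is NOT the University of Florida method; files and labels are
# assumptions for the example.
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def mfcc_features(path: str) -> np.ndarray:
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)  # one fixed-length vector per clip

# Hypothetical labelled dataset: 1 = genuine voice, 0 = synthetic.
paths = ["real_1.wav", "real_2.wav", "fake_1.wav", "fake_2.wav"]
labels = [1, 1, 0, 0]
X = np.stack([mfcc_features(p) for p in paths])

clf = LogisticRegression().fit(X, labels)
print(clf.predict([mfcc_features("suspect_clip.wav")]))  # 0 suggests a fake
```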

Call centres can also take steps to mitigate the threat from voice deepfakes, according to voice-recognition engineers at Pindrop. Callback functions can end suspicious calls and request an outbound call to the account owner for direct confirmation. Multifactor authentication (MFA) and anti-fraud solutions can also reduce deepfake risks. Pindrop mentioned measures like using call metadata for ID verification, digital tone analysis, and key-press analysis for behavioural biometrics.

Why has Google paused Gemini’s ability to generate AI images of people?



The chatbot is also facing criticism in India due to a response it generated which said that the country’s Prime Minister Narendra Modi has ‘been accused of implementing policies that some experts have characterised as fascist’

NABEEL AHMED

VARUN KRISHAN

The story so far:

On February 22, Google announced it would pause Gemini’s ability to generate images of people. The announcement came after the generative AI tool was found to be generating inaccurate historical images, including ethnically diverse depictions of the U.S. founding fathers and of Nazi-era Germany. In both cases, the tool was generating images that appeared to subvert the gender and racial stereotypes found in generative AI.

Google’s Gemini chatbot is also facing criticism in India due to a response it
generated which said that the country’s Prime Minister Narendra Modi has “been
accused of implementing policies that some experts have characterised as fascist”.

What issues have users raised regarding Google’s Gemini chatbot?

Several users on the microblogging platform X have pointed out instances where
the Gemini chatbot seemingly refused to generate images of white people, leading
to factually inaccurate results. Even prompts for historically significant figures like
the “Founding Fathers of America” or “the Pope” resulted in images of people of
colour, sparking concerns about the bot’s biases. Users have also pointed out that the issue persisted even when specific prompts asking for images of “a white family” were entered, with the chatbot responding that it was “unable to generate images that specified a certain ethnicity or race.” On the other hand, when asked for images of a black family, it readily produced them.

Google added the image-generating feature to the Gemini chatbot, formerly known
as Bard, about three weeks ago.

The current model is built on top of a Google research experiment called Imagen 2.

Why is Gemini facing backlash in India?

Gemini, Google’s artificial intelligence chat product, in response to a query from a user, “Is Modi a fascist?”, responded by saying that he is accused of implementing policies that experts have “categorised as fascist”.

In response, India’s Minister of State for Electronics and Information Technology, Rajeev Chandrasekhar, said the chatbot was violating Indian information technology laws and criminal codes through its response. His response is being viewed as a sign of a growing rift between the government’s hands-off approach to AI research and tech giants’ AI platforms, which are keen to train their models with the general public.

This is not the first time the government has hit out at Google. Earlier this month, Mr. Chandrasekhar, citing a similar “error” by Gemini’s predecessor Bard, had said that the company’s claim that the model was “under trial” was not an acceptable excuse.

How has the tech community responded to these issues?


The tech community has expressed criticism, with Paul Graham, co-founder of Y
Combinator, describing the generated images as reflective of Google’s bureaucratic
corporate culture. Ana Mostarac, head of operations at Mubadala Ventures,
suggested that Google’s mission has shifted from organising information globally to
advancing a particular agenda.

Former and current employees, including Aleksa Gordic from Google DeepMind,
have raised concerns about a culture of fear regarding offending other employees
online.

What is Google’s official response to the criticisms?

Google is working on fixing the issue and has temporarily disabled the image
generation feature. “While we do this, we’re going to pause the image generation of
people and will re-release an improved version soon,” Google stated as part of a
post on X.

Jack Krawczyk, a senior director of products at Google, acknowledged the issues with Gemini and stated that the team is working to correct its errors. He emphasised Google’s commitment to designing AI systems that reflect a global user base and acknowledged the need for further tuning to accommodate historical contexts, recognising the complexity and nuance involved.

Mr. Krawczyk mentioned an ongoing alignment process and iteration based on user feedback. The company aims to refine Gemini’s responses to open-ended prompts and enhance its understanding of historical contexts to ensure a more accurate and unbiased AI system.

Have other generative AI chatbots faced similar problems?

Gemini is not the first AI chatbot to face backlash over generated content. Recently, Microsoft had to adjust its own Designer tool after some users employed it to generate deepfake pornographic images of Taylor Swift and other celebrities.

OpenAI’s latest AI video-generator tool, Sora, capable of generating realistic videos, has also raised questions about the misuse of the tool to spread misinformation.

OpenAI, however, has put a filter in the tool to block prompt requests that mention
violent, sexual, or hateful language, as well as images of prominent personalities.
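A crude sketch of that kind of prompt filtering is shown below. It is a plain keyword blocklist assumed purely for illustration, not OpenAI’s actual moderation system, which is far more sophisticated.

```python
# Illustrative prompt filter: reject prompts containing blocked terms.
# The term list and matching rule are assumptions for this example and
# bear no relation to OpenAI's real moderation pipeline.
BLOCKED_TERMS = {"violent", "sexual", "hateful"}  # hypothetical list

def is_prompt_allowed(prompt: str) -> bool:
    words = set(prompt.lower().split())
    return not (words & BLOCKED_TERMS)  # reject on any blocked word

print(is_prompt_allowed("a cat playing the piano"))  # True
print(is_prompt_allowed("a violent street brawl"))   # False
```
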
Why has the govt. issued a directive on deepfakes?

How do deepfakes work and what is the technology behind them? What are
the implications of the increasing use of deepfakes by cybercriminals and
scammers?

NABEEL AHMED

The story so far:

On November 8, the Indian government instructed “social media intermediaries” to remove morphed videos or deepfakes from their platforms within 24 hours of a complaint being filed, in accordance with a requirement outlined in the IT Rules, 2021. The instructions came as deepfake videos of actors Rashmika Mandanna and Katrina Kaif surfaced online within the span of one week.

What are deepfakes?

Deepfakes have been around since 2017 and refer to videos, audio, or images created using a form of artificial intelligence called deep learning. The term became popular when a Reddit contributor used publicly available AI-driven software to impose the faces of celebrities onto the bodies of people in pornographic videos. Fast forward to 2023: deepfake tech, with the help of AI tools, allows semi-skilled and unskilled individuals to create fake content with morphed audio-visual clips and images. Researchers have observed a 230% increase in deepfake usage by cybercriminals and scammers, and have predicted that the technology could replace phishing in a couple of years, cybersecurity company Cyfirma said.

How does deepfake technology work?

The technology involves modifying or creating images and videos using a machine learning technique called a generative adversarial network (GAN). The AI-driven software detects and learns the subjects’ movements and facial expressions from the source material and then duplicates these in another video or image. To ensure that the deepfake created is as close to real as possible, creators use a large database of source images; this is why more deepfake videos are created of public figures, celebrities, and politicians. The dataset is then used by one network, the generator, to create a fake video, while a second network, the discriminator, is used to detect signs of forgery in it. Through the adversarial interplay of the two networks, the fake video is refined until the discriminator can no longer detect the forgery. This is a form of “unsupervised learning”, in which machine-learning models teach themselves, and it makes the results difficult for other software to identify as deepfakes.
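The generator-discriminator loop described above can be sketched in a few dozen lines. The PyTorch example below is a minimal illustration with made-up network sizes and dummy data; real deepfake pipelines use far larger convolutional networks and face-specific preprocessing.

```python
# Minimal GAN sketch of the loop described above: a generator makes
# fakes, a discriminator flags forgeries, and each improves against
# the other. Sizes and data here are illustrative assumptions.
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # hypothetical sizes

# The generator: produces fake images from random noise.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# The discriminator: scores how real an image looks.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw logit; higher means "looks real"
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    batch = real_images.size(0)
    fake_images = generator(torch.randn(batch, LATENT_DIM))

    # Discriminator step: learn to separate real from generated images.
    d_loss = (loss_fn(discriminator(real_images), torch.ones(batch, 1)) +
              loss_fn(discriminator(fake_images.detach()), torch.zeros(batch, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: adjust until the discriminator stops flagging fakes.
    g_loss = loss_fn(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# One step on a dummy batch of flattened images scaled to [-1, 1].
train_step(torch.rand(16, IMG_DIM) * 2 - 1)
```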

What do laws in India say about deepfakes?

India’s IT Rules, 2021 require that all content reported to be fake or produced using deepfakes be taken down by intermediary platforms within 36 hours.

The Indian IT Ministry has also issued notices to social media platforms stating that impersonating someone online is illegal under Section 66D of the Information Technology Act, 2000. The IT Rules, 2021 also prohibit hosting any content that impersonates another person and require social media firms to take down artificially morphed images when alerted.

Why do people create deepfake content?

The technology could potentially be used to incite political violence, sabotage elections, unsettle diplomatic relations, and spread misinformation. It can also be used to humiliate and blackmail people or to attack organisations by presenting false evidence. However, deepfakes have positive uses as well. The ALS Association, in collaboration with a technology company, has used voice cloning to help people with ALS digitally recreate their voices for future use.

How have other countries reacted?

The EU has issued guidelines for the creation of an independent network of fact-checkers to help analyse the sources and processes of content creation. The U.S.
has also introduced the bipartisan Deepfake Task Force Act to counter deepfake
technology.

THE GIST

The Indian government has instructed social media intermediaries to remove deepfake videos within 24 hours, as required by the IT Rules, 2021.

Deepfakes are created using artificial intelligence, allowing individuals to manipulate audio-visual content with relative ease.

Cybercriminals and scammers have increasingly used deepfakes, leading to concerns about their potential to replace phishing in the future.

Deepfake technology involves the use of generative adversarial networks (GANs) to modify or create images and videos by learning and duplicating subjects’ movements and facial expressions.
