
Deepfake Technology and Current Legal Status of It

Min Liu1, Xijin Zhang2*


1 People's Public Security University of China, Beijing, China
2 Key Laboratory of Police Internet of Things Application, Ministry of Public Security, People's Republic of China
[email protected], [email protected]

Abstract
Deepfake refers to artificial intelligence technology that splices individual sounds, facial expressions, and body movements into false content with the help of neural network technology. It makes it possible to tamper with or generate highly realistic audio and video content that is so difficult to identify that observers fail to distinguish it with the naked eye. The abuse of deepfake technology will therefore accelerate humanity's entry into the "post-truth era" and cause a series of social risks, endangering legitimate personal rights and interests, social and public security, and even national security. This paper provides an overview of the main algorithm models behind deepfakes, the autoencoder and the Generative Adversarial Network, and then points out the existing risks of deepfake technology and its legal regulation.

Keywords: Deepfake, Autoencoder, Generative Adversarial Network, Legal Regulation

1 INTRODUCTION

"Deepfake" is a portmanteau of "deep learning" and "fake" [2]. The technique is based on deep learning models that can learn independently, especially Generative Adversarial Networks. It first came to public attention in 2017, when a Reddit user named "deepfaker" posted a deepfake video that swapped a female celebrity's face onto the heroine of a pornographic video. The user's name, "deepfaker", was then adopted as the name of the technology [15]. Since then, similar pornographic deepfake videos have gone viral, with many celebrities and even ordinary members of the public becoming victims. Moreover, deepfake videos of politicians such as Trump and Obama have also emerged, seriously endangering national image and diplomatic security.

The rapid development of the technology keeps lowering the threshold for using deepfakes: the tools are popularized at low cost and are easily accessible to amateurs, so that any individual may become a malicious user, or a victim, of deepfake technology.

This paper begins with an introduction to the technology used to create deepfakes. We then discuss the current harms of deepfakes. Finally, we explore the current legal and policy status of deepfakes and offer prospects for the regulation of deepfake technology.

2 TECHNOLOGY FOUNDATIONS

Before the emergence of deepfake technology, forgery was usually achieved by splicing videos and images. Splicing is also a process of covering: certain objects are covered or spliced over by removing, duplicating, shifting, or deleting content [18]. Unlike image and video splicing, deepfake technology originated from the Convolutional Neural Network, one of the representative algorithms of deep learning. Early video and image forgery mainly depended on the autoencoder network.

2.1 Autoencoder

An autoencoder is an artificial neural network architecture divided into two parts, an encoder and a decoder [16]. The encoder compresses a face image by extracting facial features, transforming the image into a vector in a latent space, while the decoder reconstructs the original face from the features extracted by the encoder, making its output as close as possible to the encoder's input.

Swapping faces between a source and a target image requires two encoder-decoder pairs: the parameters for the two sets of input images are shared between the two encoders, and different decoders reconstruct the respective images during decoding [22]. The specific operation process is shown in Figure 1.
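The shared-encoder, two-decoder scheme described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: untrained random linear layers acting on toy-sized flattened images, with dimensions, weight matrices, and function names of our own choosing. It is not a working face-swap model.

```python
import numpy as np

rng = np.random.default_rng(0)

IMG_DIM = 64 * 64   # flattened grayscale "face" (toy size, illustrative)
LATENT_DIM = 128    # size of the latent feature vector

# Shared encoder weights: both identities are encoded with the same
# parameters, so the latent space captures features common to A and B.
W_enc = rng.normal(0, 0.01, (LATENT_DIM, IMG_DIM))

# One decoder per identity: decoder A reconstructs faces of person A,
# decoder B reconstructs faces of person B.
W_dec_a = rng.normal(0, 0.01, (IMG_DIM, LATENT_DIM))
W_dec_b = rng.normal(0, 0.01, (IMG_DIM, LATENT_DIM))

def encode(image):
    # Compress the image into a latent feature vector.
    return np.tanh(W_enc @ image)

def decode(latent, W_dec):
    # Reconstruct an image from latent features with a given decoder.
    return W_dec @ latent

def face_swap(image_a):
    # The swap step: encode face A, then decode with B's decoder, so the
    # output keeps A's pose/expression (the latent) rendered as identity B.
    return decode(encode(image_a), W_dec_b)

face_a = rng.random(IMG_DIM)   # stand-in for a real source frame
swapped = face_swap(face_a)
print(swapped.shape)           # (4096,) -- same shape as the input image
```

Because the encoder is shared, the latent vector for a frame of person A lives in the same feature space that decoder B would (in a real system) have been trained on, which is what makes the exchange of decoders meaningful.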

© The Author(s) 2023


B. Fox et al. (Eds.): IC-ICAIE 2022, AHCS 9, pp. 1308-1314, 2023.
https://doi.org/10.2991/978-94-6463-040-4_194

Figure 1. Deepfake generation based on Autoencoder

Figure 2. Deepfake generation based on GAN

First, the two encoders extract facial features from the source image and the target image respectively. Then, two different decoders reconstruct the facial expressions. Finally, the decoders of the source and target images are exchanged: the feature set of face A is connected to decoder B to generate the fake image. The newly generated target image carries the facial features of source image A while maintaining the facial expression and characteristic attributes of target image B.

However, the autoencoder network must deliberately approximate the probability distribution of the real sample data to improve the fidelity of the deepfake, which results in insufficient generalization performance and limited fidelity of the generated output.

2.2 Generative Adversarial Network

GAN technology, the underlying model of deepfakes, was proposed by Ian J. Goodfellow et al. in the 2014 paper Generative Adversarial Networks [13]. Its core idea comes from the two-person zero-sum game in game theory. Traditional deep learning is basically a single-direction process, but a GAN introduces an "adversarial" mechanism that relies on repeated creation and detection of data inside the algorithm. A GAN consists of two sets of deep neural networks that learn against each other dynamically: a generator and a discriminator. The generator, based on deep learning of the statistical patterns in a data set, produces convincing forged images or videos. The discriminator judges the authenticity of the simulated samples against real images, sends the discrimination results back, and indicates what should be corrected; the generator then takes its turn, refining the output and eliminating errors. The two are trained in an iterative process, as shown in Figure 2.

The objective function of GAN training is given by equation (1):

min_G max_D V(D, G) = E_{x~P(x)}[log D(x)] + E_{z~P(z)}[log(1 - D(G(z)))]    (1)

Here G is the generator, D is the discriminator, and E denotes the expectation over the corresponding distribution. The generator maps input data z drawn from a random distribution to a sample, recorded as x = G(z); the discriminator then maps x to a score D(x), and the objective function is solved by taking expectations.

The generator and discriminator are trained in a min-max manner [13]. The discriminator's output ranges from 0, representing a fake sample, to 1, representing an authentic one. G wants V(D, G) to be as small as possible, while D wants it to be as large as possible, forming a game between the two [21]. G tries to drive D(G(z)) close to 1 so that its forged output is judged authentic; if D(G(z)) is close to 0, G has failed to fool D. After repeated rounds of creation and detection, once the discriminator can no longer distinguish the generator's outputs from the original data set, meaning the generated data and the real data share the same distribution, the forgery process comes to an end, yielding an incredibly realistic video that can deceive the eyes of most people.

3 HARMS OF DEEPFAKES

As deepfake technology becomes increasingly accessible to non-professionals, the number of fake audio and video products has surged. According to the 2018 report "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation" [3], there is a risk of malicious use of AI, and deepfake technology is one such risk.
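Before turning to specific harms, the min-max objective in equation (1) can be made concrete with a short numerical sketch. It evaluates the empirical value function for a batch of discriminator scores; the function `gan_value` and the example score values are our own illustration, not code from any deepfake system.

```python
import numpy as np

def gan_value(d_real, d_fake):
    """Empirical estimate of the GAN value function in equation (1):
    V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))],
    where d_real holds D's scores on real samples and d_fake its scores
    on generated samples, all strictly inside (0, 1)."""
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    return np.log(d_real).mean() + np.log(1.0 - d_fake).mean()

# A confident, correct discriminator: real -> close to 1, fake -> close to 0,
# so V(D, G) is close to its maximum of 0 (D is winning the game).
v_strong_d = gan_value([0.99, 0.98], [0.01, 0.02])

# A fully fooled discriminator outputs 0.5 everywhere: the equilibrium
# reached when generated data matches the real distribution.
v_equilibrium = gan_value([0.5, 0.5], [0.5, 0.5])   # -2*log(2), about -1.386

print(v_strong_d, v_equilibrium)
```

D's updates push V(D, G) upward while G's updates push it downward; training converges when D can do no better than 0.5 on every sample, at which point V(D, G) = -2 log 2.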

3.1 The Impact on Individuals


Data from The State of Deepfakes: Landscape, Threat and Impact [1] shows that ninety-six percent of deepfake

videos were pornographic, and the victims were almost exclusively women. The first use of GANs was to create deepfake pornographic videos, especially revenge porn and celebrity deepfakes. These pornographic deepfakes cause substantial injuries to women: not only workplace discrimination and emotional and reputational harm, but also sexual exploitation and even death and rape threats.

3.2 The Impact on Society

Deepfake technology will further blur the boundary between truth and illusion, causing a crisis of trust in the whole society. In March 2021, the Hongkou District People's Procuratorate of Shanghai Municipality in China prosecuted a large case of false Value Added Tax invoicing. The criminal suspects forged action videos, including nodding, shaking the head, blinking, and opening the mouth, through technical processing of other people's high-definition profile pictures and ID card information, in order to crack face recognition systems and falsely issue ordinary VAT invoices.

Deepfake technology could also be used to spread misinformation, sow division, and create social unrest. In 2018, for instance, more than twenty people across India were violently killed because of rumors circulating on WhatsApp about kidnappings of young children and other crimes [6].

Deepfake technology also poses threats to judicial practice and the legal system. Artificial intelligence technology is increasingly used in the courts; if detection technology cannot keep pace with deepfake technology, it may cause misjudgment of cases, seriously affecting judicial justice and the interests of victims.

3.3 The Impact on Nations

There have been many concerns about political deepfake videos interfering with elections. Such videos were used to target Joe Biden in the 2020 United States election [9]. Deepfakes can also disrupt diplomatic relationships: the diplomatic crisis in the Middle East is considered to be related to false information events [14].

Many fake videos of politicians and national leaders have been circulating on social media. Although such videos are now mostly laughed off as entertainment, as deepfake technology matures these videos will become more and more realistic and harder for ordinary people to distinguish, so the damage to political figures will become more serious.

4 EXISTING LEGISLATION ON DEEPFAKES

For technological risks, the usual logic is to "defeat magic with magic", that is, with technology. However, technology often progresses faster than it can be countered, so it is urgent and necessary to regulate deepfake technology by means other than technology. On March 7, 2020, a symposium, "When Seeing Isn't Believing: Deepfakes and the Law", was held in New York, focusing on the legal and regulatory response to deepfakes [20].

4.1 The United States

The United States was the first country to respond to artificial intelligence technology. In December 2018, the Malicious Deep Fake Prohibition Act of 2018 [17] was introduced in the U.S. Congress; it was the first bill to define the deepfake. The DEEPFAKES Accountability Act was introduced in June 2019 [12]. However, it has been challenged and opposed by the public for its vague definitions and a potential conflict with the First Amendment to the United States Constitution [11]. In the same year, Congress proposed the Deepfake Report Act of 2019 [5], requiring the U.S. Department of Homeland Security to regularly issue evaluation reports on deepfake technology.

In addition, some states have responded quickly to the improper use of deepfakes, especially regarding pornographic videos and political elections.
Table 1. Legislation of the United States

Jurisdiction | Regulations | Time | Content
Federal | Malicious Deep Fake Prohibition Act | December 2018 | set up reporting systems
Federal | DEEPFAKES Accountability Act of 2019 | June 2019 | label the altered media
Federal | Deepfake Report Act of 2019 | June 2019 | issue reports on deepfake technology
Virginia | Unlawful Dissemination or Sale of Images of Another Person | July 2019 | nonconsensual deepfake pornography
Texas | Tex. SB 751 | September 2019 | elections
California | Calif. AB-602 | February 2019 | nonconsensual deepfake pornography
California | Calif. AB-730 | October 2019 | elections
California | Calif. AB-1280 | September 2021 | elections and nonconsensual deepfake pornography
Washington | SB 6280 Act | March 2020 | face recognition
New York | N.Y. A08155, S0587-B | November 2020 | nonconsensual pornography, "digital replica" and commercial exploitation
Massachusetts | An Act to Protect Against Deep Fakes Used to Facilitate Criminal or Tortious Conduct | January 2019 | establish liability for facilitating "criminal or tortious conduct"

4.2 The European Union

The EU has not issued special legislation on deepfakes but has adopted a series of regulations and programs to incorporate them into its regulatory framework, limiting the application of deepfakes through disinformation governance, personal information protection, and artificial intelligence regulation.

In April 2018, the European Commission published a lengthy communication entitled Tackling Online Disinformation: a European Approach, putting forward principles to prevent information publishers from illegally manipulating public opinion [8]. In May 2018, the European Union formally implemented the General Data Protection Regulation. The regulation sets strict rules on the use of deep synthesis technology, protecting personal data, such as images of citizens, that may be used for deepfakes [19]. In June 2018, the European Council adopted the EU Code of Practice on Disinformation, actively promoting self-regulation of the industry and consciously restricting and controlling illegal deepfake content [7].

Table 2. Legislation of the European Union

Regulations | Time | Content
General Data Protection Regulation | May 2018 | personal data
Tackling online disinformation: a European Approach | April 2018 | illegally manipulated public opinion
Code of Practice on Disinformation | June 2018 | advocate self-regulation of platforms
Ethics Guidelines for Trustworthy Artificial Intelligence | April 2019 | privacy and data management

4.3 China

China likewise has no special legislation on deepfakes, but it standardizes and restricts the creation, release, and dissemination of deepfake information from the perspective of protecting citizens' rights of portrait and reputation and safeguarding national security and social security. Moreover, its legal regulations focus on the obligation of labelling.

Unfortunately, there are no punitive provisions for violations of the labelling obligation, which makes the provisions more declaratory than practical and results in an absence of effective legal protection.

Table 3. Legislation of China

Regulations | Time | Content
Data Security Management Measures (draft) | May 2019 | the obligation of labelling
Regulations on the Administration of Online Audio and Video Information Services | January 2020 | the obligation of labelling
Network Information Content Ecological Governance Regulation | March 2020 | the obligation of labelling
The Civil Code of the People's Republic of China | January 2021 | personal rights

5 CONCLUSION

Today, video has been a relatively reliable source of information, but once deepfakes become more widespread, the value of any video, whether true or false, inevitably falls, because there is no reliable way to determine whether a video is forged.

The law is only a kind of passive, after-the-fact relief. Although it can constrain the dissemination of false information in certain fields and specific scenarios, it cannot undo harm already caused, and the credibility of social media often weakens gradually as such incidents unfold.

Therefore, prior prevention and in-process control are particularly crucial. The most critical links are the creators, platforms, and audiences: what should be done is to delimit the application boundaries of new technology through ethical norms, guide its development through industry self-discipline, and strengthen the public's education in critical thinking.

More importantly, such regulatory responses will not be efficient without significant technical expertise, so we need both lawyers and technologists to tackle this problem [4]. As deepfake technology matures, the corresponding detection technology will also advance. It will be a never-ending race, which Doermann compared to a "cat-and-mouse game" [10].

ACKNOWLEDGMENTS

The work described in this paper was supported by the Key Laboratory of Police Internet of Things Application, Ministry of Public Security, People's Republic of China.

REFERENCES

[1] Ajder H. Deepfake Threat Intelligence: A statistics snapshot from June 2020 [J]. 2020.

[2] Brandon J. Terrifying high-tech porn: creepy 'deepfake' videos are on the rise [J]. Fox News, 2018.

[3] Brundage M, Avin S, Clark J, et al. The malicious use of artificial intelligence: Forecasting, prevention, and mitigation [J]. arXiv preprint arXiv:1802.07228, 2018.

[4] Chesney R, Citron D K. 21st century-style truth decay: Deep fakes and the challenge for privacy, free expression, and national security [J]. Md. L. Rev., 2018, 78: 882.

[5] Deepfakes Report Act of 2019, https://www.congress.gov/bill/116th-congress/house-bill/3600/.

[6] Donie O'Sullivan. House Intel chair sounds alarm in Congress' first hearing on deepfake videos [EB/OL]. https://edition.cnn.com/2019/06/13/tech/deepfake-congress-hearing/index.html.

[7] European Commission. EU code of practice on disinformation [EB/OL]. https://www.hadopi.fr/sites/default/files/sites/default/files/ckeditor_files/1CodeofPracticeonDisinformation.pdf.

[8] European Commission. Tackling Online Disinformation: A European Approach [J]. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, COM/2018/236 final, 2018.

[9] 'FBI Chief Calls Capitol Attack Domestic Terrorism and Rejects Trump's Fraud Claims', The Guardian. https://www.theguardian.com/us-news/2021/jun/10/capitol-attackfbi-christopher-wray-congress.

[10] Hao K. Deepfakes have got Congress panicking. This is what it needs to do [J]. MIT Technology Review, 2019.

[11] Hayley Tsukayama, India McKinney, Jamie Williams. Congress Should Not Rush to Regulate Deepfakes [EB/OL]. https://www.eff.org/deeplinks/2019/06/congress-shouldnot-rush-regulate-deepfakes.

[12] H.R. 3230 - Defending Each and Every Person from False Appearances by Keeping Exploitation Subject to Accountability Act of 2019, http://www.congress.gov/bill/116th-congress/house-bill/3230.

[13] Ian Goodfellow, et al. Generative adversarial nets [C]. Advances in Neural Information Processing Systems, 2014: 2672-2680.

[14] Krishnadev Calamur. Did Russian Hackers Target Qatar? The Atlantic [EB/OL]. https://www.theatlantic.com/news/archive/2017/06/Qatar-russian-hacker-fake-news/529359/.

[15] Leo Kelion. Deepfake Porn Videos Deleted from Internet by Gfycat [EB/OL]. http://www.bbc.com/news/technology-42905185.

[16] Ramadhani K N, Munir R. A Comparative Study of Deepfake Video Detection Method [C]// 2020 3rd International Conference on Information and Communications Technology (ICOIACT). IEEE, 2020: 394-399.

[17] S. 3805, 115th Cong. (2018).

[18] Thakur R, Rohilla R. Copy-move forgery detection using residuals and convolutional neural network framework: a novel approach [C]// 2019 2nd International Conference on Power Energy, Environment and Intelligent Control (PEEIC). IEEE, 2019: 561-564.

[19] Voigt P, Von dem Bussche A. The EU General Data Protection Regulation (GDPR): A Practical Guide [M]. 1st ed. Cham: Springer International Publishing, 2017.

[20] Yamaoka-Enkerlin A. Disrupting disinformation: Deepfakes and the Law [J]. NYU J. Legis. & Pub. Pol'y, 2020, 22: 725.

[21] Yuxuan Bao, Tianliang Lu, Yanhui Du. Overview of Deepfake Video Detection Technology [J]. Computer Science, 2020, 47(09): 283-292.

[22] Yuzhi Zhang, Ruifang Wang, Liang Zhu, et al. The Review of Generation and Detection Techniques for Deepfakes [J]. Journal of Information Security Research, 2022, 8(03): 258-269.

Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International
License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution
and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a
link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter’s Creative Commons license, unless indicated
otherwise in a credit line to the material. If material is not included in the chapter’s Creative Commons license and your intended use
is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright
holder.
