
2023 International Conference on Platform Technology and Service (PlatCon)

Cybersecurity Issues in Generative AI


Subin Oh
Dept. of AI Convergence Network
Ajou University
Suwon, Republic of Korea
[email protected]

Taeshik Shon
Dept. of Cyber Security
Ajou University
Suwon, Republic of Korea
[email protected]

979-8-3503-0599-9/23/$31.00 ©2023 IEEE | DOI: 10.1109/PLATCON60102.2023.10255179

Abstract—Generative AI technology is being applied in various fields. However, the advancement of these technologies also raises cybersecurity issues. In fact, there are cases of cyber attacks using Generative AI, and their number is increasing. Therefore, this paper analyzes the potential cybersecurity issues associated with Generative AI. First, we looked at the fields where Generative AI is used. Representatively, Generative AI is being used for text, image, video, audio, and code. Based on these five fields, cybersecurity issues that may occur in each field were analyzed. Finally, we discuss the obligations necessary for the future development and use of Generative AI.

Keywords—Generative AI, Generative Models, Cybersecurity

I. INTRODUCTION

Generative AI is a field of artificial intelligence that encompasses AI models capable of generating or modifying new data from given input. With the emergence of conversational AI models such as ChatGPT, Generative AI has made significant advances in interactive and personalized content creation. According to a survey conducted by Gartner in May 2023, 45% of more than 2,500 executives said that the attention around ChatGPT had prompted them to increase their investments in AI. Furthermore, 89% of executives stated that their organizations are utilizing Generative AI [1].

However, while there have been extensive discussions on the potential changes brought about by Generative AI [2-4], insufficient attention has been given to its cybersecurity risks. As the use of Generative AI continues to increase, concerns about cybersecurity have also grown. According to Dark Reading, which covers cybersecurity news, more than 30,000 open-source projects on GitHub were using GPT-3.5 as of June 2023, and an investigation found that the security level of the most popular of these projects was largely at a dangerous level [5]. In fact, there has been a case in which paid subscribers' information was leaked through ChatGPT [6]. Therefore, this paper analyzes the cybersecurity issues that may arise from Generative AI and discusses directions for its safe use.

II. BACKGROUND

The AI models used to implement Generative AI are referred to as Generative Models. By using appropriate Generative Models in various fields, creative outputs can be obtained. Below are representative application fields of Generative AI and the corresponding Generative Models in each field.

• Text – A technology that enables AI models to generate text or responses based on learned content, used for tasks such as document summarization, interactive chatbots, and automatic sentence generation. It can also assist in creative processes like writing or marketing. Examples of Text Generative Models include GPT-4, LaMDA, LLaMA, and BLOOM.

• Image – A technology that combines computer vision and AI techniques to generate or transform images. It is used for creating artistic works, image style transfer, and image restoration. It can also be utilized for data augmentation, data correction in data-scarce situations, and sample generation. Examples of Image Generative Models include Imagen, DALL-E, and Stable Diffusion.

• Video – A technology implemented using deep learning and computer vision techniques to generate or transform videos. It is used for generating video art, special video effects, and virtual reality content. It can also be used for video data generation, video style transfer, and video restoration. Examples of Video Generative Models include the Gen series and Make-A-Video.

• Audio – A technology that utilizes deep learning to generate music, speech, and audio content. It is used for music composition assistance, speech synthesis, and voice transformation, and can also assist in music style transfer and audio augmentation. Examples of Audio Generative Models include MusicLM.

• Code – A technology that uses machine learning and natural language processing techniques to generate new code or transform existing code. It is used for code autocompletion, bug-fixing assistance, and education, and can also aid developers in productive coding. An example of a Code Generative Model is Codex.

As such, Generative AI is being provided in various fields. Bing Image Creator creates custom images based on text input. InVideo automatically creates a video from entered text and a selected template, without requiring any technical knowledge. Soundraw automatically creates a song by selecting the genre, instrument, and mood; according to the user's needs, the AI suggests dozens of alternatives. ChatGPT is a conversational artificial intelligence service based on

prompts. When a user enters a dialogue through a prompt, a response is generated. GitHub Copilot X is based on the GPT model and helps developers write code more efficiently. Table 1 shows examples of services organized by type. Fig. 1 shows examples of the fields in which Generative AI and Generative Models are used.

Fig. 1. Examples of Generative AI Models and Services [7]

III. RELATED WORKS

Research is being conducted to improve existing Generative Models for better results in various application fields. Generative Adversarial Networks (GANs), an AI algorithm in which a generative model and a discriminative model compete, were first introduced by Ian Goodfellow et al. [8]. Studies have utilized GANs to enhance image generation performance [9-12]. In addition, studies using diffusion have recently been conducted. Diffusion is a probabilistic modeling technique used for tasks such as data generation and restoration, and there are studies on new models using it [13-15].

Furthermore, research has also approached Generative Models from a security perspective. Hacker, Philipp, et al. address important issues related to the regulation of large generative AI models (LGAIMs); they associate these new generative models with credible existing AI regulation, explore legal measures appropriate to the features of these models, and present strategies and obligations to ensure reliable AI development and use [16]. Dutta, Indira Kalyan, et al. recognize GANs as a promising means in the field of security, explaining the potential of GANs in addressing security issues and highlighting key challenges. They also present defense strategies and tools to counter security threats such as deepfakes using GANs [17].

IV. CYBERSECURITY ISSUES

In this chapter, new cybersecurity risks that can arise from the various types of Generative AI introduced above are analyzed. It also looks at possible security threats at the data level related to Generative AI.

A. Text-Based Generative AI

Text-Based Generative AI, such as ChatGPT, can be exploited for cyber attacks. In fact, there are cases where ChatGPT was used for activities such as writing malicious code, generating scripts for illegal web marketplaces, and jailbreaking its security restrictions [18]. A case of malware written using ChatGPT is shown in Fig. 2. To date, Text-Based Generative AI has been used mainly to generate malicious code. However, unlike Code-Based Generative AI, ChatGPT is likely to produce attacks that are more sophisticated and difficult to detect, in that it converses with the attacker while completing the code.

Fig. 2. A cybercriminal shows malware made using ChatGPT [18]

B. Media-Based Generative AI

Media-Based Generative AI encompasses Image, Video, and Audio Generative AI. These media technologies have the potential to be used in phishing attacks. Attackers can create deepfakes through Video-Based Generative AI, cloned human voices through Audio-Based Generative AI, and fake images through Image-Based Generative AI. Attackers can use these to perform identity theft, fraud, and threats through false information. In fact, as the creation of deepfakes became easier, it was announced that fraud using deepfakes is increasing in the UK and continental Europe [19].

C. Code-Based Generative AI

Even inexperienced attackers can write advanced hacking code using the code auto-generation function. Code-Based Generative AI can be used to hide malware within applications and bypass existing security tools. As such, there is a possibility that the barriers preventing those with little
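The adversarial setup behind GANs, in which a generative model and a discriminative model compete, can be illustrated with a deliberately tiny sketch. This is not the architecture of any model cited above; it is a hypothetical one-dimensional example in which a two-parameter linear generator tries to imitate samples from N(3, 0.5) against a logistic-regression discriminator, with the gradients written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: the distribution the generator must learn to imitate.
def sample_real(n):
    return rng.normal(3.0, 0.5, n)

# Generator: an affine map g(z) = t0 + t1 * z applied to noise z ~ N(0, 1).
theta = np.array([0.0, 1.0])
# Discriminator: logistic regression D(x) = sigmoid(w * x + b).
w, b = 0.1, 0.0

lr_d, lr_g, batch = 0.05, 0.05, 64
for step in range(2000):
    z = rng.normal(size=batch)
    fake = theta[0] + theta[1] * z
    real = sample_real(batch)

    # Discriminator step: label real samples 1, fake samples 0, and take one
    # gradient step on the cross-entropy; for logistic regression the gradient
    # is (D(x) - y) * x for w and (D(x) - y) for b.
    x = np.concatenate([real, fake])
    y = np.concatenate([np.ones(batch), np.zeros(batch)])
    d = sigmoid(w * x + b)
    w -= lr_d * np.mean((d - y) * x)
    b -= lr_d * np.mean(d - y)

    # Generator step: descend the non-saturating loss -log D(g(z)).
    # d/dg [-log D(g)] = -(1 - D(g)) * w, then chain through g(z) = t0 + t1*z.
    d_fake = sigmoid(w * fake + b)
    grad_g = -(1.0 - d_fake) * w
    theta[0] -= lr_g * np.mean(grad_g)
    theta[1] -= lr_g * np.mean(grad_g * z)

fake_mean = theta[0]  # mean of g(z), since E[z] = 0
print(f"generator mean after training: {fake_mean:.2f} (real mean is 3.0)")
```

At equilibrium the discriminator can no longer separate real from generated samples (D ≈ 0.5 everywhere), which is exactly the competition formalized in [8]; real GANs simply replace both players with deep networks trained by backpropagation.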

coding experience from attempting cyber attacks will be lowered.

D. Other Issues

In addition to the Generative AI types mentioned above, other cybersecurity issues exist. Attackers can combine different types of Generative AI to perform more complex attacks. For example, attackers can generate fake media (such as deepfakes or fake images) through Media-Based Generative AI and, through Text-Based Generative AI, create convincing emails that look as if they were written by a specific real person. As such, Media-Based Generative AI, which is mainly used for identity theft and fraud, can be combined with other types of the technology to enable attacks that are more sophisticated and difficult to detect. Furthermore, Generative AI training data comes in various types, such as text and images, and its sources are also diverse. Therefore, it is very likely that an attacker can access and manipulate the training data at some source point. In addition, some Generative AI continuously learns from user-entered data. In this case, personal or sensitive information input by a user may be learned and exposed to other users. Attackers can use this information to commit crimes such as account hijacking.

V. DISCUSSION

Generative AI technology has made innovative advances in recent years, but it has become a major concern in terms of security and ethics. Therefore, it is necessary to respond to the security threats of Generative AI through technical security measures and ethical responsibilities.

Basically, it is necessary to check Generative AI for security vulnerabilities: update Generative Models regularly, check for security flaws, fix bugs, and apply other security optimizations. Security fixes identified during the update process should be shared with users to prevent cybersecurity threats and keep the service available in a safe environment. Additionally, security teams can monitor existing code and network vulnerabilities to establish a security control process that responds to security threats in real time. In this process, data governance or security tools can be used to respond more efficiently to cyber threats; expanding investments in data loss prevention (DLP), cloud-native application protection platform (CNAPP), and extended detection and response (XDR) tools can also help.

Since generative AI tools are simple to use, and to misuse, training of organizational members at the company level is also essential. When using Generative AI, it is important to specify regulations on AI utilization in workflows so that employees know which data may be entered and comply with those rules. In addition, training employees in basic cybersecurity awareness, so that they can identify phishing attempts and other cyberattack vectors, helps prevent cyber threats. It would be more effective to build simulated attacks or scenario environments into the curriculum by referring to actual cases and to use them for education.

VI. CONCLUSION

Although the various advantages that can be obtained from the effective use of Generative AI are acknowledged, the proliferation of these technologies causes new security issues. Therefore, in this paper, cybersecurity risks arising from the development of Generative AI technology were investigated and analyzed. There are various Generative Models used to implement Generative AI, each trained and utilized for its purpose. We looked at a total of five types of Generative AI, including Text-Based Generative AI such as ChatGPT. Based on this, possible security risks were analyzed and countermeasures were discussed. In the real cases analyzed, most attackers used Generative AI for their own attacks. As real cases exist and various types of cybersecurity risks are predicted, we must prepare strong cybersecurity measures for Generative AI technologies. Experts and governance bodies in each field should prepare security measures and regulations at the technical level. It is also necessary to prepare education and regulations for corporate members, and individuals should also strive to understand Generative AI technology and foster ethical awareness.

REFERENCES

[1] "Gartner Poll Finds 45% of Executives Say ChatGPT Has Prompted an Increase in AI Investment." Gartner, 3 May 2023, www.gartner.com/en/newsroom/press-releases/2023-05-03-gartner-poll-finds-45-percent-of-executives-say-chatgpt-has-prompted-an-increase-in-ai-investment. Accessed 2023.
[2] Deloitte AI Institute. A New Frontier in Artificial Intelligence: Implications of Generative AI for Businesses. Deloitte, 2023.
[3] KPMG International. Generative AI Models — the Risks and Potential Rewards in Business. KPMG, 2023.
[4] "5 generative AI takeaways for CEOs." PwC, 25 May 2023, https://www.pwc.com/us/en/tech-effect/ai-analytics/generative-ai-takeaways.html. Accessed 2023.
[5] "Open Source LLM Projects Likely Insecure, Risky to Use." Dark Reading, 28 June 2023, https://www.darkreading.com/tech-trends/open-source-llm-project-insecure-risky-use. Accessed 2023.
[6] "[Exclusive] ChatGPT User Payment Information Leak... Investigation into User Damage in South Korea." MBC News, 4 Apr 2023, https://imnews.imbc.com/replay/2023/nwdesk/article/6470787_36199.html. Accessed 2023.
[7] "AI art has brought about the butterfly effect." Brunch Story, 25 Oct 2022, https://brunch.co.kr/@capitaledge/24. Accessed 2023.
[8] Goodfellow, Ian J., et al. "Generative adversarial networks." arXiv preprint arXiv:1406.2661 (2014).
[9] Radford, Alec, Luke Metz, and Soumith Chintala. "Unsupervised representation learning with deep convolutional generative adversarial networks." arXiv preprint arXiv:1511.06434 (2015).
[10] Mirza, Mehdi, and Simon Osindero. "Conditional generative adversarial nets." arXiv preprint arXiv:1411.1784 (2014).
[11] Zhu, Jun-Yan, et al. "Unpaired image-to-image translation using cycle-consistent adversarial networks." Proceedings of the IEEE International Conference on Computer Vision. 2017.
[12] Karras, Tero, et al. "Progressive growing of GANs for improved quality, stability, and variation." arXiv preprint arXiv:1710.10196 (2017).
[13] Lehtinen, Jaakko, et al. "Noise2Noise: Learning image restoration without clean data." arXiv preprint arXiv:1803.04189 (2018).
[14] Song, Yang, and Stefano Ermon. "Improved techniques for training score-based generative models." Advances in Neural Information Processing Systems 33 (2020): 12438-12448.
[15] Zheng, Huangjie, et al. "Truncated diffusion probabilistic models." stat 1050 (2022): 7.

[16] Hacker, Philipp, Andreas Engel, and Marco Mauer. "Regulating ChatGPT and other large generative AI models." Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency. 2023.
[17] Dutta, Indira Kalyan, et al. "Generative adversarial networks in security: a survey." 2020 11th IEEE Annual Ubiquitous Computing, Electronics & Mobile Communication Conference (UEMCON). IEEE, 2020.
[18] "OPWNAI: Cybercriminals Starting to Use ChatGPT." Check Point, 6 Jan 2023, research.checkpoint.com/2023/opwnai-cybercriminals-starting-to-use-chatgpt/. Accessed 2023.
[19] "New digital fraud statistics: forced verification and deepfake cases multiply at alarming rates in the UK and continental Europe." Business Wire, 30 May 2023, www.businesswire.com/news/home/20230530005196/en/. Accessed 2023.
