
Towards Grounded Dialogue Generation in Video Game Environments

Nader Akoury, Ronan Salz, Mohit Iyyer


University of Massachusetts Amherst
[email protected], [email protected], [email protected]

Abstract

Video games provide rich interactive environments that have proven to be great testbeds for AI research in areas such as grounded language generation and reinforcement learning. In this preliminary work, we show that commercial video games can also be an excellent resource for interactive storytelling. We introduce a dataset extracted from the widely acclaimed computer role-playing game Disco Elysium: The Final Cut. With roughly 1.1M words of dialogue spread across a complex graph of possible utterances, the game provides a strong research foundation for interactive storytelling. Furthermore, nodes in the dialogue graph can express conditional logic that permanently alters the game state and thus affects reachability in the graph. These conditions are encoded in the form of Lua scripts written by the game's designers. To demonstrate the utility of the dataset, we cluster dialogue based on similarity and linearize all possible utterances for the next turn of dialogue into a Lua script containing a mix of natural language and game logic. We then mask out one utterance from the script and use a large language model to generate a plausible alternative. Analyses of these generations demonstrate the difficulty of this task and suggest future avenues for research which, if successful, have the potential to profoundly impact dialogue writing in video games.

Introduction

Interactive storytelling allows its consumers to actively guide a story as it unfolds. Broadly speaking, this type of storytelling takes on many forms, including tabletop role-playing games like Dungeons and Dragons (Callison-Burch et al. 2022), choose your own adventure books (Clark and Smith 2021), interactive fiction (Hausknecht et al. 2020), and narrative-driven video games. In this paper, we focus on the latter medium by collecting a dataset from the highly acclaimed video game Disco Elysium: The Final Cut[1] (Kurvitz et al. 2021) and applying large language models (LLMs) to generate dialogue conditioned on the game state.

As such, our work diverges from previous AI-driven interactive storytelling research. Historical approaches have explored encoding stories as graphs and deployed classical planning algorithms (Riedl and Young 2004) to simultaneously incorporate user actions while staying true to authorial intent (Mateas and Stern 2005). Similarly, while commercial video game environments have become increasingly popular testbeds for grounded language (Suhr et al. 2019) and reinforcement learning (Bellemare et al. 2012; Kempka et al. 2016), to the best of our knowledge no commercial video games have been utilized for storytelling research.

Although Disco Elysium is a single test case of the ideas presented in this paper, this task is important. We envision research using the Disco Elysium dataset to help produce tools that suggest dialogue options during game design, as well as in-game systems which dynamically react to player input. Additionally, the proposed approach is widely applicable to narrative-driven video games: numerous video games structure their dialogue in a manner similar to Disco Elysium, as it uses the Pixel Crushers Dialogue System[2], a common framework for video game dialogue. Thus any successful approach to improving dialogue in Disco Elysium can also be applied to those games. Future work could even endeavor to extract dialogue from more of these games to build larger, more diverse datasets.

Disco Elysium provides an interesting dataset for grounded language researchers because it is a large-scale dialogue-driven game: while players control a character in a virtual environment, the majority of a player's interaction takes the form of dialogue grounded in the current game state. We treat Disco Elysium as a test case for other dialogue-driven video games and use it to extract a dataset of dialogue paired with game state references. The extracted dialogue is encoded as a directed graph, with certain nodes acting as boolean gates defined by Lua scripts conditioned on the current state of the game. Additionally, the dataset includes explicit representations of actors, items, game variables, and individual conversations, each of which may contain annotations from the game designers describing their intent.

Given this rich dataset, we devise an approach that uses LLMs trained on a mix of code and natural language (Chen et al. 2021; Fried et al. 2022) to diversify the game dialogue while maintaining the designers' vision. We first cluster similar dialogue, then linearize the utterances and game state into a Lua script, and ask an LLM to predict a masked line of dialogue (Figure 1). We then conduct a preliminary set of experiments using two LLMs: a finetuned GPT-3 Curie (Brown et al. 2020) model and Codex (Chen et al. 2021) in a few-shot setup. We compare the quality of the generated dialogue using two metrics, BLEURT (Sellam, Das, and Parikh 2020) and a bag-of-words F1. While both metrics indicate Codex performs the task better despite the lack of fine-tuning, the low F1 scores indicate there is large room for improvement even when only predicting text clustered by similarity.

Copyright © 2023, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
[1] Currently rated the #1 PC video game of all time on Metacritic, see https://www.metacritic.com/browse/games/score/metascore/all/pc
[2] https://www.pixelcrushers.com/dialogue-system/
Figure 1: In this example taken from the intro dream sequence of Disco Elysium: The Final Cut, we first cluster dialogue nodes
by a similarity measure predicated on game state variables and spoken dialogue. Then, we linearize the next turn of dialogue in
the graph into a Lua script that contains game logic and dialogue. Finally, we <MASK> one utterance from the cluster and ask
an LLM trained on code and natural language to complete the masked dialogue.

Dataset

Disco Elysium: The Final Cut is a critically acclaimed dialogue-driven computer roleplaying game where the player takes on the role of a down-on-his-luck detective. There are many genre-defining characteristics of the game that contribute to its popularity, which also make it an excellent resource for interactive storytelling research. The majority of interactions in the game world are in the form of dialogue, which includes interactions with not only other characters but also inanimate objects in the game world (e.g., a ceiling fan and a bathroom mirror from the first scene of the game). Furthermore, the player character has twenty-four attributes that govern in-game skills and also act to convey internal thoughts which frequently interject commentary on the current situation. Finally, the sheer scope of the dialogue (Table 1), combined with the complexity of the choices and their subsequent consequences, makes this game an excellent case study for exploring interactive storytelling research within an existing commercial video game.

We begin by extracting a catalog of all top-level entities (actors, items, conversations, dialogue entries, and game state variables; see Table 1) from the PC version of Disco Elysium: The Final Cut using the open source tool AssetStudio.[3] The game, with over 1.1M words of dialogue across roughly 70K utterances (Table 1), is by most measures of creative fiction one of the longest works in the genre.[4]

Dialogue entries contain references to the actor who is speaking, along with any precondition, in the form of a boolean-valued Lua expression, required to speak the utterance (see the if statements in Figure 1b), e.g.,

    Variable["whirling.dreamone_brave"]

In addition, each dialogue entry may contain Lua statements which alter game state (see Figure 1b), e.g.,

    SetVariableValue("whirling.dreamone_champion_est", true)

Each top-level entity in the catalog may also contain annotations used internally by the game's designers to document their intent (see the Lua comments in Figure 1b), likely as a way to keep aspects of the complicated story clear during the design process, e.g.,

    No DISCO TIME if you haven't established you're a CHAMPION

These annotations provide a rich, natural source of context to better explain the vast array of top-level entities (Table 1) used throughout the game logic.

[3] https://github.com/Perfare/AssetStudio
[4] https://gamicus.fandom.com/wiki/List_of_longest_video_game_scripts#Video_games
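Taken together, a dialogue entry bundles a speaker, an utterance, an optional gating condition, and optional state-altering actions. The sketch below shows one plausible in-memory representation of such an entry; this is not the authors' extraction format, and the field names and actor value are illustrative assumptions, with the Lua fragments taken from the examples above.

    # A minimal sketch of a node in the extracted dialogue graph.
    # Field names are illustrative, not the authors' schema.
    from dataclasses import dataclass, field

    @dataclass
    class DialogueEntry:
        actor: str                   # reference to the speaking actor
        utterance: str               # the spoken dialogue text
        condition: str = ""          # boolean-valued Lua expression gating the node
        actions: list = field(default_factory=list)   # Lua statements run when spoken
        annotation: str = ""         # designer note documenting intent
        children: list = field(default_factory=list)  # outgoing edges in the graph

    # An entry mirroring the snippets above (the actor name is a placeholder):
    entry = DialogueEntry(
        actor="YOU",
        utterance="...",
        condition='Variable["whirling.dreamone_brave"]',
        actions=['SetVariableValue("whirling.dreamone_champion_est", true)'],
        annotation="No DISCO TIME if you haven't established you're a CHAMPION",
    )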
We note that one of the key challenges in using this dataset for research is that Disco Elysium represents a single overarching story. For this reason we take special care in splitting the dataset into train, valid, and test splits. Considering the high degree of interconnectedness across conversations in the game, it is not possible to create a dialogue split with a disjoint set of actors, items, and game state variables.

As a single variable can be referenced in multiple conversations, the game's dialogue graph implicitly forms a hypergraph, with hyperedges defined by Lua variables. Optimal partitioning of a hypergraph is known to be NP-hard (Papa and Markov 2007), and in smaller hypergraphs an exhaustive enumeration can often be more efficient in practice than specialized algorithms (Papa and Markov 2007). We attempt to generate a 90%/5%/5% train/valid/test split of the conversations in the dataset, while allowing a total ε = 1.5% variation from the desired splits, using branch and bound to enumerate all valid partitions of the conversations which satisfy the percentage constraints. Luckily, many distinct conversations are connected by one or more dialogue edges, such that a handful of connected components in the graph make up roughly 70% of the dialogue (and are thus required to be in the training set). For the remaining dialogue, we opt to minimize the overlap in game variables across splits, as they are crucial for representing conditional dialogue. A final split of 89.8%/5.4%/4.7% is achieved with minimal overlap in variables (Table 1).

(a) Dataset Splits

                Train     Valid   Test    Total
                89.8%     5.4%    4.7%
    Utterances  65316     4143    3237    72696
    Words       1001191   59877   52816   1113884
    Nodes       98442     6950    5092    110484
    Forks       17283     1544    964     19791
    Variables   99015     6989    5132    111136

(b) Variable Overlap            (c) Dataset Totals

    Train ∩ Valid   2897            Actors          424
    Train ∩ Test    2871            Items           259
    Valid ∩ Test     303            Conversations   610

Table 1: For the Disco Elysium dataset: (a) we split the data into training, validation, and test sets based on the number of dialogue words, while ensuring an approximately similar proportion of conditional dialogue forks and referenced Lua variables; (b) we take special care to minimize the number of referenced variable overlaps amongst the splits; (c) though we do not attempt to disentangle Actors and Items across splits.
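The branch-and-bound procedure described above can be illustrated with the following sketch, under assumed inputs: each connected component is reduced to a word count plus the set of Lua variables it references, partial assignments are pruned as soon as any split exceeds its word budget, and complete partitions are scored by cross-split variable overlap. The per-split use of the ε tolerance and the exact overlap objective are assumptions, as the paper does not spell them out.

    # A simplified branch-and-bound search over split assignments.
    from math import inf

    TARGETS = {"train": 0.90, "valid": 0.05, "test": 0.05}
    EPSILON = 0.015  # allowed variation from the desired proportions (assumed per split)

    def split_components(components, total_words):
        # components: list of (word_count, frozenset_of_referenced_variables)
        best_score, best_assignment = inf, None

        def overlap(variables):
            pairs = [("train", "valid"), ("train", "test"), ("valid", "test")]
            return sum(len(variables[a] & variables[b]) for a, b in pairs)

        def recurse(i, words, variables, assignment):
            nonlocal best_score, best_assignment
            # Bound: prune once any split exceeds its maximum word budget.
            if any(words[s] > (TARGETS[s] + EPSILON) * total_words for s in TARGETS):
                return
            if i == len(components):
                # Accept only partitions within the tolerance of every target.
                if all(words[s] >= (TARGETS[s] - EPSILON) * total_words for s in TARGETS):
                    score = overlap(variables)
                    if score < best_score:
                        best_score, best_assignment = score, assignment
                return
            count, refs = components[i]
            for s in TARGETS:  # Branch: try assigning component i to each split.
                recurse(i + 1,
                        {**words, s: words[s] + count},
                        {**variables, s: variables[s] | refs},
                        assignment + [s])

        recurse(0, dict.fromkeys(TARGETS, 0), {s: frozenset() for s in TARGETS}, [])
        return best_assignment, best_score

Copying the word counts and variable sets on each branch keeps the undo logic trivial at the cost of some allocations, which is acceptable at this scale.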
Evaluation

To probe the capabilities of current LLMs to succeed at the proposed task, we first cluster the dialogue nodes to discover utterances which are similar (Figure 1a) but may vary slightly based on the game state. Then, starting from a given dialogue node, we linearize all dialogue nodes potentially reachable in the next turn of conversation to form a Lua script representing all possible utterances and game-state-altering commands (Figure 1b). Finally, we mask one utterance from the dialogue cluster and have the LLM (GPT-3 Curie or Codex) generate a plausible alternative (Figure 1c).

Clustering

LLMs are often poor at generating relevant continuations for creative text (Akoury et al. 2020). For that reason, we design our preliminary experiments around clustering nodes in the graph by similar text that only varies slightly based on game state (see Figure 1), e.g.,

(a) "Stop! I don't want to hear anything more about this *sensation*. Take me back to the formless, disembodied nothing!"

(b) "Please, no! I changed my mind! Take me back to the formless, disembodied nothing!"

We experiment with a number of algorithms for clustering the game's dialogue, including the Levenshtein distance, the Jaccard index, and the Dice coefficient (equivalent to a bag-of-words F1). We also vary the features used for clustering by splitting words into characters, grouping by n-grams, and lowercasing. We conduct a manual inspection of the various approaches to clustering, including a hyperparameter sweep of the similarity threshold. This inspection indicated that clustering based solely on the dialogue utterances would either systematically miss semantically similar text or, when the similarity threshold was made more permissive, cluster dissimilar utterances.

To combat this tendency, we additionally tried clustering nodes by inspecting the associated Lua conditions. We first parse the Lua expression and extract identifiers (which often refer to functions) and string literals (which often refer to variables). We then split the literals into their constituent words (e.g., whirling.dreamone_brave becomes whirling, dreamone, brave) before running the above battery of clustering approaches. In the end, we find that clustering based on a combination of dialogue and Lua expressions produces the best results without the need for the extra feature engineering, relying only on the simple Dice coefficient with a threshold d ≥ 0.5.
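Under assumed tokenization details, this final rule can be sketched as follows: build a bag of lowercased word tokens from the utterance and its Lua condition (splitting on non-alphanumeric characters breaks a literal like whirling.dreamone_brave into whirling, dreamone, brave), then link two nodes whenever their Dice coefficient reaches 0.5. The greedy single-link grouping and the condition on the second node below are assumptions.

    # Dice-based clustering over dialogue plus Lua condition tokens.
    import re

    def features(utterance, lua_condition):
        # Lowercased word tokens from both the dialogue and the Lua expression.
        return set(re.findall(r"[a-z0-9]+", f"{utterance} {lua_condition}".lower()))

    def dice(a, b):
        # Dice coefficient over token sets, equivalent to a bag-of-words F1.
        return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

    def cluster(nodes, threshold=0.5):
        # Greedy single-link clustering: a node joins the first cluster that
        # already contains a sufficiently similar node.
        feats = [features(u, c) for u, c in nodes]
        clusters = []
        for i, f in enumerate(feats):
            for group in clusters:
                if any(dice(f, feats[j]) >= threshold for j in group):
                    group.append(i)
                    break
            else:
                clusters.append([i])
        return clusters

    nodes = [
        ("Stop! I don't want to hear anything more about this *sensation*. "
         "Take me back to the formless, disembodied nothing!",
         'Variable["whirling.dreamone_brave"]'),
        ("Please, no! I changed my mind! Take me back to the formless, "
         "disembodied nothing!",
         'not Variable["whirling.dreamone_brave"]'),  # illustrative condition
    ]
    print(cluster(nodes))  # -> [[0, 1]]

With the conditions shown, the shared Lua tokens raise the pair's similarity well above the threshold.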
    Model Class   Prompt Tokens   Model Type   OpenAI API Name
    Curie[5]      2048            Finetuned    curie
    Codex         8000            Few-Shot     code-davinci-002

Table 2: Details of the models used in our experiments. As OpenAI does not provide parameter counts or details on finetuning, we also provide the API name for the models to help reproducibility.

[5] Likely 6.7B parameters, see: https://blog.eleuther.ai/gpt3-model-sizes/
Linearization and Masking

Now that we have a set of clusters, we need to convert each cluster into a Lua script that can be fed to a language model. We do this by listing all the top-level entities (e.g., actors, conversations, and variables) referenced by the clustered nodes at the top of the script, along with any default values they may have. We additionally include any annotations by the game's designers in the form of Lua comments. Each node in the cluster is then visited in sequential order, and its Lua conditions, dialogue, and any actions associated with speaking the dialogue are included in the script. Lastly, we generate all variants of each script by enumerating every clustered utterance from the prompt, masking them out one by one, and asking the LLM to generate the masked dialogue as its completion (see Figure 1b-c). We use a prefix-suffix-mask (Donahue, Lee, and Liang 2020) ordering of the masked infilling examples rather than a suffix-prefix-mask (SPM) form, as SPM only achieves better performance when pretraining on billions of tokens (Bavarian, Jun, and Tezak 2022).
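The sketch below gives one plausible rendering of this linearization and masking; the script layout, the actor-call convention, and the <MASK> sentinel are stand-ins patterned on Figure 1b-c rather than the authors' exact format. It reuses the DialogueEntry fields from the earlier sketch.

    # Linearize a cluster into a Lua-style script, then mask each utterance.

    def linearize(cluster, entities):
        # entities: (name, default_value, designer_annotation) triples for every
        # top-level entity referenced by the cluster.
        lines = []
        for name, default, annotation in entities:
            if annotation:
                lines.append(f"-- {annotation}")  # designer notes become Lua comments
            lines.append(f'Variable["{name}"] = {default}')
        for node in cluster:
            lines.append(f"if {node.condition} then")
            # Rendering the spoken line as an actor call is an illustrative convention.
            lines.append(f'  {node.actor}("{node.utterance}")')
            lines.extend(f"  {action}" for action in node.actions)
            lines.append("end")
        return "\n".join(lines)

    def masked_variants(cluster, entities):
        # Mask each clustered utterance one by one; with the prefix-suffix-mask
        # ordering the model sees the script with a <MASK> hole and generates
        # the missing utterance as its completion.
        script = linearize(cluster, entities)
        for node in cluster:
            prompt = script.replace(f'"{node.utterance}"', "<MASK>", 1)
            yield prompt, node.utterance

Note that keying the replacement on the quoted utterance is fragile when the same string occurs twice; a real pipeline would presumably track node offsets within the script instead.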
Experiments

We conduct experiments using two LLMs: GPT-3 Curie and Codex (Table 2). GPT-3 Curie is a strong generation model for natural language (Brown et al. 2020), especially when finetuned on a downstream task, while Codex is an extremely capable few-shot LM for code (Chen et al. 2021). As our task contains elements of both natural language and code, it is important to assess the capabilities of each model paradigm.

Since the two models perform different tokenization[6] and support different context lengths, we filter the clusters, keeping only those that fit the smallest context length (2048 tokens) using the GPT-3 tokenizer. We then generate all the linearized scripts representing semantically related text for the next turn of dialogue. After filtering and generating masked variants of the clusters, we are left with 30,501 training examples and 2,668 validation examples.

We finetune Curie for 1 epoch, with a batch size of 32 examples and a learning rate 0.2× that of the pretrained model, and we weight the loss for the prompt tokens by 0.01. For the few-shot Codex model, we prefix each linearized Lua script with several samples from the validation set such that they take up nearly the full context window (we reserve 100 tokens of the context for generation). We also ensure there are no overlaps in dialogue between the few-shot examples and the script. Consequently, each Codex script has 7 few-shot examples on average.

[6] Codex uses a modified tokenizer that collapses whitespace, since runs of whitespace are common in code formatting.
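As a rough illustration of this packing procedure, the sketch below greedily prepends validation examples until the context budget is exhausted. The 8000-token window and the 100 reserved tokens come from the setup described above, while count_tokens is a hypothetical stand-in for the tokenizer and the example separator is an assumption.

    # Greedy few-shot prompt packing for the Codex context window.
    CONTEXT_WINDOW = 8000
    RESERVED_FOR_GENERATION = 100

    def pack_prompt(target_script, candidates, count_tokens):
        # candidates: (masked_script, gold_completion) pairs drawn from the
        # validation set, pre-filtered so none shares dialogue with the target.
        budget = CONTEXT_WINDOW - RESERVED_FOR_GENERATION - count_tokens(target_script)
        shots = []
        for script, completion in candidates:
            example = f"{script}\n{completion}\n\n"
            cost = count_tokens(example)
            if cost > budget:
                break  # context nearly full; on average ~7 examples fit
            shots.append(example)
            budget -= cost
        return "".join(shots) + target_script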
    Model   Examples   Tokens       BLEURT   F1
    Curie   2,668      3,041,299    41.9     25.6
    Codex   2,668      21,077,200   44.2     29.5

Table 3: Preliminary experiments over the validation set show that few-shot Codex outperforms a finetuned Curie model for generating context-aware dialogue.

    Model   Examples   Copied
    Curie   2,668      235
    Codex   2,668      455 (8)†

Table 4: We find that both Curie and Codex occasionally copy dialogue from the prompt, and in 8† instances Codex directly copies a completion from the few-shot examples.

We choose to measure the performance of the models on the validation set using a bag-of-words F1, as the clustered utterances have a large overlap with the masked text the model is tasked with infilling. In addition, we use BLEURT, which has proven to be robust for the semantic similarity of generated text (Karpinska et al. 2022). Both metrics favor Codex slightly, though given the low F1 score, it's clear the models have much room for improvement on this simplified form of our task. That is to say, naïvely applying our preliminary approach to all the dialogue in the game, not just to the subset of dialogue clustered via similarity, is even more likely to fail. We also posit that Codex likely outperforms Curie since it is a larger model that is explicitly trained on a large corpus of code, even though it uses a few-shot approach to inference.
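For reference, the bag-of-words F1 can be sketched as below, computed here over token multisets of the generated and gold utterances; the tokenization is an assumption. Over plain token sets this quantity reduces to the Dice coefficient used earlier for clustering.

    # Bag-of-words F1 between a generated utterance and its reference.
    from collections import Counter
    import re

    def bow_f1(generated, reference):
        gen = Counter(re.findall(r"[a-z0-9]+", generated.lower()))
        ref = Counter(re.findall(r"[a-z0-9]+", reference.lower()))
        common = sum((gen & ref).values())  # multiset intersection
        if common == 0:
            return 0.0
        precision = common / sum(gen.values())
        recall = common / sum(ref.values())
        return 2 * precision * recall / (precision + recall)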
Analysis

To better understand the performance difference between the two models, we also conduct a small analysis of each model's output. We find that both models tend to copy from the prompt (Table 4), but Codex does so nearly twice as often.

A qualitative inspection of the generations from the Codex model (our best performer) suggests the model may struggle to generate plausible completions due to a lack of historical context for the current conversation. Our script-based prompts do not include any previous dialogue utterances, relying instead only on the combination of dialogue that can be emitted next and the conditional game logic gating those options. It is also clear the models do not make effective use of the game designers' annotations to fill in the gaps. While these comments are likely a useful reference for the writers of the game, they may not contain enough context alone to guide generation. Considering Codex has a very long context window and performs better than a finetuned Curie (Table 3), future experiments could attempt to include previous turns of dialogue in the prompt to see if that improves generation quality.

Future Directions

While these preliminary experiments provide useful insights, automatic metrics of creative writing are known to poorly correlate with human judgements (Karpinska, Akoury, and Iyyer 2021). For that reason, we have already developed a webapp with an embedded Lua VM which recreates the in-game dialogue system while incorporating generated dialogue from LLMs. Having a test harness separate from the game environment allows for a controlled human evaluation protocol. We plan an interactive user study along with further modeling improvements.
References

Akoury, N.; Wang, S.; Whiting, J.; Hood, S.; Peng, N.; and Iyyer, M. 2020. STORIUM: A Dataset and Evaluation Platform for Machine-in-the-Loop Story Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 6470–6484. Online: Association for Computational Linguistics.

Bavarian, M.; Jun, H.; and Tezak, N. 2022. Efficient Training of Language Models to Fill in the Middle. arXiv:2207.14255 [cs].

Bellemare, M. G.; Naddaf, Y.; Veness, J.; and Bowling, M. 2012. The Arcade Learning Environment: An Evaluation Platform for General Agents. CoRR, abs/1207.4708.

Brown, T. B.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; Agarwal, S.; Herbert-Voss, A.; Krueger, G.; Henighan, T.; Child, R.; Ramesh, A.; Ziegler, D. M.; Wu, J.; Winter, C.; Hesse, C.; Chen, M.; Sigler, E.; Litwin, M.; Gray, S.; Chess, B.; Clark, J.; Berner, C.; McCandlish, S.; Radford, A.; Sutskever, I.; and Amodei, D. 2020. Language Models Are Few-Shot Learners. arXiv:2005.14165 [cs].

Callison-Burch, C.; Tomar, G. S.; Martin, L. J.; Ippolito, D.; Bailis, S.; and Reitter, D. 2022. Dungeons and Dragons as a Dialog Challenge for Artificial Intelligence. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP).

Chen, M.; Tworek, J.; Jun, H.; Yuan, Q.; Pinto, H. P. d. O.; Kaplan, J.; Edwards, H.; Burda, Y.; Joseph, N.; Brockman, G.; Ray, A.; Puri, R.; Krueger, G.; Petrov, M.; Khlaaf, H.; Sastry, G.; Mishkin, P.; Chan, B.; Gray, S.; Ryder, N.; Pavlov, M.; Power, A.; Kaiser, L.; Bavarian, M.; Winter, C.; Tillet, P.; Such, F. P.; Cummings, D.; Plappert, M.; Chantzis, F.; Barnes, E.; Herbert-Voss, A.; Guss, W. H.; Nichol, A.; Paino, A.; Tezak, N.; Tang, J.; Babuschkin, I.; Balaji, S.; Jain, S.; Saunders, W.; Hesse, C.; Carr, A. N.; Leike, J.; Achiam, J.; Misra, V.; Morikawa, E.; Radford, A.; Knight, M.; Brundage, M.; Murati, M.; Mayer, K.; Welinder, P.; McGrew, B.; Amodei, D.; McCandlish, S.; Sutskever, I.; and Zaremba, W. 2021. Evaluating Large Language Models Trained on Code. arXiv:2107.03374.

Clark, E.; and Smith, N. A. 2021. Choose Your Own Adventure: Paired Suggestions in Collaborative Writing for Evaluating Story Generation Models. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 3566–3575. Online: Association for Computational Linguistics.

Donahue, C.; Lee, M.; and Liang, P. 2020. Enabling Language Models to Fill in the Blanks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2492–2501. Online: Association for Computational Linguistics.

Fried, D.; Aghajanyan, A.; Lin, J.; Wang, S.; Wallace, E.; Shi, F.; Zhong, R.; Yih, W.-t.; Zettlemoyer, L.; and Lewis, M. 2022. InCoder: A Generative Model for Code Infilling and Synthesis. arXiv:2204.05999.

Hausknecht, M.; Ammanabrolu, P.; Côté, M.-A.; and Yuan, X. 2020. Interactive Fiction Games: A Colossal Adventure. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05): 7903–7910.

Karpinska, M.; Akoury, N.; and Iyyer, M. 2021. The Perils of Using Mechanical Turk to Evaluate Open-Ended Text Generation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 1265–1285. Online and Punta Cana, Dominican Republic: Association for Computational Linguistics.

Karpinska, M.; Raj, N.; Thai, K.; Song, Y.; Gupta, A.; and Iyyer, M. 2022. DEMETR: Diagnosing Evaluation Metrics for Translation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing.

Kempka, M.; Wydmuch, M.; Runc, G.; Toczek, J.; and Jaśkowski, W. 2016. ViZDoom: A Doom-based AI Research Platform for Visual Reinforcement Learning. 2016 IEEE Conference on Computational Intelligence and Games (CIG), 1–8.

Kurvitz, R.; Hindpere, H.; Tuulik, A.; De Cuir, C.; and Moskvina, O. 2021. Disco Elysium: The Final Cut. ZA/UM Studios.

Mateas, M.; and Stern, A. 2005. Demonstration: The Interactive Drama Façade. In Proceedings of the First AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE'05, 153–155. Marina del Rey, California: AAAI Press.

Papa, D. A.; and Markov, I. L. 2007. Hypergraph Partitioning and Clustering. In Handbook of Approximation Algorithms and Metaheuristics.

Riedl, M. O.; and Young, R. M. 2004. An Intent-Driven Planner for Multi-Agent Story Generation. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2004), 186–193.

Sellam, T.; Das, D.; and Parikh, A. 2020. BLEURT: Learning Robust Metrics for Text Generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 7881–7892. Online: Association for Computational Linguistics.

Suhr, A.; Yan, C.; Schluger, J.; Yu, S.; Khader, H.; Mouallem, M.; Zhang, I.; and Artzi, Y. 2019. Executing Instructions in Situated Collaborative Interactions. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2119–2130. Hong Kong, China: Association for Computational Linguistics.
