December 9, 2023

Scrolling through Twitter and lurking in artificial intelligence communities over the past few months, I’ve seen a lot of big claims. In the few days since OpenAI unveiled its GPT-4 model, those claims have only intensified — in thread after thread, people say that ChatGPT can develop games. An AI so advanced that it can program a game real people can play sounds like science fiction, or at least a far-off future. But game developers and enthusiasts already use AI technology all the time.

Developers and AI enthusiasts have used ChatGPT to recreate classic games like Pong, Tetris, and Snake. Developers have also used it to write code for original games and to generate story ideas and dialogue for text-based role-playing games. In a combination of these ideas, someone recently asked the AI model to turn the Game Boy Advance game Pokémon Emerald into a text adventure. Some AAA studios are also looking to integrate advanced AI as a tool for writing dialogue in games.

For some, ChatGPT’s impact seems big enough to drastically alter our world; others aren’t so sure, questioning the capability of this new technology beyond the surface-level dazzle. Chatbots and AI have been around for quite some time, and there’s no denying that OpenAI’s iteration is significant. But its real impact has yet to be seen, despite the near-constant chatter about AI on social media and in the news. Let’s take a step back and break down how the technology works in order to understand how ChatGPT works in turn, and how it’s currently being used — and could be used in the future — in video games.

ChatGPT screenshot, in which the AI model outputted the text: “Hi there! ChatGPT is a very smart computer program that can talk to you just like a person! It works by using something called “machine learning” which means it has learned a lot of things by looking at lots and lots of words and sentences. It uses all that information to understand what you’re saying and come up with answers to your questions.”

Image: OpenAI via Polygon

What is ChatGPT?

ChatGPT is an AI chatbot released by OpenAI in November 2022. It’s the user-friendly version of GPT, which its creators call a “large multimodal model.” GPT, now in its fourth iteration (called GPT-4), examines patterns within data and then outputs natural-sounding language or code in response; the “multimodal” part means that GPT-4 can understand both text and images. (Earlier versions could process only text.) ChatGPT is groundbreaking because it learns not only from its huge database of information, but also from human feedback. This allows it to “exhibit human-level performance on various professional and academic benchmarks,” according to its creator, OpenAI, which also admits GPT-4 is “less capable than humans in many real-world scenarios.”

You can access ChatGPT for free in a browser, but the new GPT-4 model is available only to paid subscribers at $20 a month. There’s a text input field at the bottom of the page. If you want ChatGPT to rewrite the ending of Game of Thrones and insert you into it, this input field is where you’d ask. Or, if you want it to explain a complex topic, you could enter that query there, too.

I asked ChatGPT how it works, and it produced this:

ChatGPT is a large language model that uses a deep neural network architecture called GPT-3.5 (Generative Pre-trained Transformer 3.5). The model is pre-trained on a large corpus of text data, which allows it to understand and generate natural language.

The training data consists of a diverse range of sources, including web pages, books, and articles. During training, the model learns to predict the next word in a sequence of words based on the preceding words. This process is known as language modeling.

Once the model is trained, it can be fine-tuned for a specific task, such as answering questions or generating text. During fine-tuning, the model is trained on a smaller dataset specific to the task, which allows it to learn the specific nuances of that task.

When a user interacts with ChatGPT, the model processes the user’s input and generates a response based on its understanding of the language and the task it has been trained for. The model is designed to generate responses that are coherent and relevant to the user’s input, and it can generate text in a variety of styles and tones.

Overall, ChatGPT works by leveraging the power of deep learning to understand and generate natural language, making it a powerful tool for a wide range of language-related tasks.
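The “language modeling” objective the bot describes — predict the next word from the preceding words — can be sketched in toy form. Here’s a minimal bigram counter in Python; this is my illustration, nothing like GPT’s actual transformer architecture, but it shares the same predict-the-next-word training objective:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then predict the most frequent successor. Real GPT models use a deep
# neural network trained on vast amounts of text, but the core objective
# -- predict the next token from the preceding ones -- is the same idea.
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" -- it follows "the" twice, more than any other word
```

A model like GPT does this over billions of parameters and trillions of words, and predicts from the entire preceding context rather than just one word — but the sketch shows why training on more text makes the predictions better.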

If you don’t understand an answer from ChatGPT or you want to tweak its response, you could enter the same question again once its first response has been generated. ChatGPT will “remember” the previous query. On my second attempt, I asked ChatGPT to explain itself to me like I was 5, in one paragraph. Here’s what it produced:

ChatGPT is a very smart computer program that can talk to you just like a person! It works by using something called “machine learning” which means it has learned a lot of things by looking at lots and lots of words and sentences. It uses all that information to understand what you’re saying and come up with answers to your questions. It’s kind of like how you learn things by listening to your parents and teachers, except ChatGPT learned by reading lots of books and articles. So when you ask ChatGPT a question, it tries to give you the best answer it can based on what it’s learned!

The caveat in getting answers from ChatGPT is that the AI can produce incorrect information, as well as what OpenAI described as “harmful instructions or biased content,” and it has limited knowledge of world events after 2021, due to the cutoff of the data it’s learned from.

If you wanted to create a video game using ChatGPT, you’d start the process in that very same text input box.

Can ChatGPT and AI really create a game?

ChatGPT can produce the code for a video game, but the video game can’t run in ChatGPT. For instance, AI enthusiast Ammaar Reshi used GPT-4 to generate code for a game of Snake. Specifically, he asked for the HTML, CSS, and JavaScript needed to make it run. The program was able to produce that, after which point Reshi needed to copy and paste the code into a program used to build and run software. Reshi said the game didn’t immediately work, but he was able to ask ChatGPT for tweaked code, which it provided alongside explanations of the changes.

He later used the model to create code for a version of the 1993 game SkyRoads. Similarly, AI aficionado Javi Lopez was able to produce code for a basic rendition of Doom.

ChatGPT is not necessarily producing the game itself, but rather the code required to make a game. These games have relatively simple code bases compared to modern video games, though they’re still games, of course. When recreating an existing game, the main caveat is that it must be something ChatGPT is familiar with.
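For a sense of what “code for a game” means at this scale, here’s a minimal sketch of the core Snake logic — my own illustration, not Reshi’s actual GPT-4 output — covering the rules any generated version has to get right: move the head, grow when eating, end the game on self-collision:

```python
def step(snake, direction, food):
    """Advance a grid-based Snake game by one tick.

    snake: list of (x, y) cells, head first; direction: (dx, dy) step;
    food: (x, y) cell. Returns (new_snake, ate_food), or None when the
    snake runs into itself (game over).

    Simplification: the check treats every current cell as occupied,
    including the tail cell that is about to vacate.
    """
    head_x, head_y = snake[0]
    dx, dy = direction
    new_head = (head_x + dx, head_y + dy)
    if new_head in snake:                # self-collision ends the game
        return None
    ate = new_head == food
    body = snake if ate else snake[:-1]  # grow only when eating
    return [new_head] + body, ate

# One tick: the snake moves right onto the food and grows by one cell.
snake, ate = step([(2, 0), (1, 0), (0, 0)], (1, 0), (3, 0))
print(snake, ate)  # [(3, 0), (2, 0), (1, 0), (0, 0)] True
```

A full playable version layers rendering, input handling, and a timer on top of a loop like this — which is roughly the division of labor in the HTML/CSS/JavaScript that Reshi asked GPT-4 to produce.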

And ChatGPT certainly appears to be familiar with a ton of games, including the aforementioned Pokémon Emerald. In that case, ChatGPT didn’t output code for Pokémon Emerald; it just pulled from its knowledge of the Pokémon game to walk through it, similar to a text adventure that responds to player prompts. Dan Dangond, who did the experiment, told Polygon, “The best analogy to what’s happening is that it’s like asking a friend to pretend to be Pokémon Emerald for you. That kind of experience is only as good as your friend’s memory of the game, and GPT-4 seems to know a surprising amount about the game.”

People have used ChatGPT to create original games, too. In particular, developers have used ChatGPT to produce dialogue; it’s how ChatGPT was integrated into a dating sim called Love in the Classroom, highlighted by Inverse in March. Others have described using the AI model for generating ideas or for advice on putting together a game. One modder used GPT AI to make speaking to NPCs in Mount & Blade 2: Bannerlord feel more realistic and responsive; the modder also created limitations so characters would “respect” their roles in AI conversations.

You can also choose to play text adventures within the ChatGPT interface by entering a starting prompt that lays out the world and its rules. It’s something like playing Dungeons & Dragons online, by yourself, and the Dungeon Master is an AI rather than a human being. Beamable CEO Jon Radoff used the program to create a prompt for a fantasy adventure, complete with standards for commands, inventory, and a map. He said it was able to “enforce rules and constraints” in a way that AI Dungeon, an AI program specifically made for dungeon-crawling text games, cannot.

Are any big studios actually using it?

AI is a huge part of video games. Non-player characters can be controlled by AI and react to player input; AI drives both companion characters (like Ellie in The Last of Us) and enemies (sticking with The Last of Us, the infected are controlled with AI). Some multiplayer games make extensive use of AI to make playing against a computer feel more like playing against another human. Procedural generation can also make use of AI; Ubisoft’s Watch Dogs: Legion and Minecraft both generate content that way.

If we’re talking about GPT-4 and ChatGPT specifically, though, they’re likely too new to have been integrated into any AAA game out right now. But some major studios intend to use similar tech. NetEase, for one, vowed in February to include a ChatGPT-like model in its Chinese MMO Nishuihan; players will be able to chat freely with characters in the game, according to the South China Morning Post. NetEase expects to use “conversational AI” in more games, too, to vary “new game tasks and content.”

More recently, Ubisoft unveiled a generative AI tool during a GDC 2023 talk in March, with an accompanying blog post. The tool is called Ghostwriter, and it’s intended to help video game writers, not replace them, Ubisoft said. “Ghostwriter effectively generates first drafts of barks — phrases or sounds made by NPCs during a triggered event — which gives scriptwriters more time to polish the narrative elsewhere,” Ubisoft’s Roxane Barth wrote, describing the tool created by Ubisoft R&D scientist Ben Swanson. Once a character has been created, Ghostwriter generates dialogue barks based on specific needs, and the writer then picks and edits the responses. Ubisoft didn’t say which current projects, if any, are using the tool, but said that Swanson is working to bring Ghostwriter into its production processes.

Is it a tool that game writers want? Ubisoft says so — it’s “the result of conversations with narrative designers who revealed a challenge, one that Ben [Swanson] identified could be solved with an AI tool,” according to the blog post. But there was an immediate and mixed response on social media following Swanson’s GDC talk. Some see the tool as a way to free up game writers for more creatively fulfilling work. To others, writing barks is creatively fulfilling: These are the lines players hear over and over in video games, and they deserve to be treated with that importance. Still others worry the tech will erase entry-level game jobs, since junior writers are sometimes tasked with writing barks.

Tools like ChatGPT and Ghostwriter will have an impact on the labor of making video games, but there’s no consensus just yet on what that impact will be.

Why are AI chat programs getting popular now?

AI chat programs have been popular for a long, long time. A program called Eliza, first released in 1966, is considered one of the first — if not the first — chatbot. It was programmed to act like a therapist, and it was pretty convincing at the time, according to CNN. (There’s even a video game inspired by it, also called Eliza, created by Zachtronics.)

Microsoft created an infamous Twitter AI chatbot called Tay, which will forever be remembered for how quickly it turned into a “racist asshole” (it only took one day). Tay was designed as a “machine learning project” that learned from human input, and humans helped turn it bad very quickly — an important lesson on the dangers of these projects. It’s a reminder, too, that AIs are created and trained by humans with biases, which means they’re going to reproduce these biases.

The GPT language models from OpenAI have been around since 2018, when the first model was released. OpenAI has continued to update the model, from GPT-2 in 2019 to GPT-3 in 2020. ChatGPT made waves because it made interacting with GPT-3 easier. It also impressed people immediately with its fluid language, which can, at times, sound quite natural and conversational, like talking to another person — even though the technology is essentially predicting likely sequences of words. GPT-4, released in March, further advanced ChatGPT’s capabilities.

To put it simply, ChatGPT is new, and a lot of people — even non-tech people — are finding and experimenting with GPT models for the first time. A bunch of other companies, like Google, are working on ChatGPT competitors, too.

Mount and Blade 2: Bannerlord. A knight looks onto a battlefield with soldiers charging forward.

Image: TaleWorlds Entertainment

Will AI ever replace game developers?

No. For one, ChatGPT cannot do anything without human input. It needs a person to tell it what to do, whether that’s generating ideas, code, dialogue, or anything else creative. And it’s not advanced enough to do much of that work without mistakes, either. Headlines suggesting AI will put engineers, developers, artists, or writers out of their jobs are vastly overstated fearmongering.

At this point, ChatGPT is in testing, and OpenAI lists its shortcomings right on the program. ChatGPT has a tendency to confidently make errors. As Jenna Burrell, the research director at Data & Society, put it in Poynter, ChatGPT can do a lot, but it isn’t able to do “research, fact-checking, or copyediting at a minimally adequate level.”

ChatGPT pulls from an existing set of data — albeit tons of varied data — and uses it to produce its output. It has certainly come up with some original things, but its output will always be based on something else. Human creativity works that way, too; our experiences and memories shape what we create. But humans can create without being told to. No one has yet built an AI that doesn’t need an initial prompt or guidance to make something new, and that’s why there are still limits to AI that humans don’t have.

