Will NPCs Kill Us All?
Our future is unknown. Is AI the key to our destruction or salvation? Image generated by author using DALL·E 3
Introduction
"Hello, my friend, stay awhile and listen!" Deckard Cain’s famous salutation has stuck around in my memory since I first heard it playing Diablo in the early 2000’s. Non-Playable Characters, like Deckard Cain who guides the player through Diablo’s grim narrative, populate the world of Sanctuary and bring it to life. They fulfill critical character roles of mentors, merchants, and enemies that you, playing as the protagonist, must interact with to drive the story forward. Interacting with NPCs in games like Diablo was my, and I suspect many others, first experience with Artificial Intelligence.
At first, NPC intelligence was very simplistic: think of a Goomba in Super Mario that could only walk side to side. They were little more than environmental obstacles. In contrast, modern NPCs have evolved branching storylines and sophisticated combat tactics that change based on how you choose to interact with them. As game developers have explored ways to improve NPC intelligence, we’ve been gifted with comical pathfinding errors that send characters aimlessly walking into walls and amusing dialogue glitches where characters repeat the same line endlessly. Although these bugs could be frustrating when they blocked a game objective, NPC errors have never resulted in anything insidious.
This evolution of AI in gaming is a microcosm of the broader AI journey. Simply stated, AI has undergone a radical transformation over the last few decades. The largest consumer-facing impacts have arrived relatively quietly, usually bundled with some other significant product, as AI has become increasingly integrated into our daily lives, from Alexa’s voice echoing in our living rooms to smart recommendations on Netflix. It has been anything but quiet, however, when AI has challenged humans and beaten them at their own games. Those notable milestones have briefly dominated global headlines over the years.
In 1997, Deep Blue bested chess grandmaster Garry Kasparov, sealing their rematch with a 19-move win in the final game. In 2011, Watson decimated top champions Brad Rutter and Ken Jennings on the trivia show "Jeopardy!". And in 2016, AlphaGo beat Lee Sedol in a five-game Go match, showing off AI's creative side with the famous move 37 that astonished human onlookers at the time. Each of these events captured the world’s attention for a day, then quickly faded back into nerd novelty.
Challenger Approaching: The Model that Changed the Game
Enter ChatGPT. On November 30th, 2022, OpenAI released their generative large language model to the public and completely changed the game. ChatGPT demonstrated remarkable adaptability and depth in the text it generated. It could impress whether discussing the theory of relativity, sharing Jamaican beef stew recipes, or indulging in small talk. It can now also write code, analyze data, recognize images, generate art, and do math. Within two months of its release, ChatGPT reached 100 million monthly active users, making it the fastest-growing consumer application in history; TikTok had previously taken nine months to reach that milestone, and Instagram two and a half years. ChatGPT stands apart from its newsworthy predecessors in two regards:
It’s accessible by anyone with an internet connection and an email address.
It engages in meaningful interactions, offering insights and information across a broad spectrum of topics, unlike its predecessors, whose skills were confined to specific games or data retrieval.
Although it keeps making headlines across the globe, ChatGPT is less of a technological leap forward than it seems. Trained on the text of the internet and asked to predict the next word, this class of model builds on capabilities that have been quietly developing behind the scenes for years, like spam detection, image recognition, and ad targeting, to name a few. OpenAI had even produced several other language models before packaging GPT-3.5 into a chat interface. But that was the decisive move, and ChatGPT became the largest “center stage” application of machine learning overnight.
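To make that "predict the next word" idea concrete, here is a minimal sketch of next-token prediction. It assumes the Hugging Face transformers library and the small, open GPT-2 model, which are stand-ins for illustration only, not what actually powers ChatGPT. The model scores every possible next token for a prompt, and we print the five most likely continuations.

```python
# A toy look at next-word prediction: the model assigns a probability to every
# possible next token, and text is generated by repeatedly picking from that
# distribution. Uses the small open GPT-2 model as an illustrative stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Hello, my friend, stay awhile and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The scores at the last position rank every candidate for the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}  p={prob.item():.3f}")
```

ChatGPT layers instruction tuning and human feedback on top of this basic objective, but the core loop is still predicting one token at a time.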
OpenAI’s runaway success has added fuel to the growing firestorm of AI investment. Google, Meta, and Microsoft all rushed to release their own consumer-accessible LLMs, and thousands of startups have appeared to explore AI applications across sectors, snatching up $50B in VC funding in 2023 alone. In healthcare, for instance, AI has the potential to assist in diagnosing conditions by analyzing vast amounts of medical literature. In education, it could offer tailored learning experiences, and in customer service, it's poised to revolutionize interactions by providing efficient, automated support. These applications aren't strictly theoretical, either; AI is already having a massive impact on new research. Materials scientists used Google DeepMind's tools to find 2.2 million new crystal structures, 381,000 of which look promising, saving an estimated 800 years of work. And yes, the most recent batch of LLMs is being tested for improving NPC intelligence too.
However, the news isn’t all good. LLMs have a tendency to hallucinate: they provide the most probable answer, but not necessarily the correct answer, or even a real one. This has led to embarrassing public incidents where people over-relied on generated content. The models also inherit the biases of their training data, which raises important ethical questions about their use. And reports are emerging of businesses replacing human jobs with AI.
But these negatives, while very significant, don’t threaten our extinction as the title of this post teases. That risk enters the picture when we contemplate where this technology is headed at its current rate of growth. Today we have Artificial Narrow Intelligence, which is designed to perform specific tasks or understand specific domains. The end game is Artificial General Intelligence (AGI), which could understand, learn, and apply its intelligence to solve any problem, much like a human being. That extends to improving itself, perhaps bootstrapping its way to superintelligence. This prospect has fostered a winner-take-all dynamic, pushing companies and countries alike into a race for AGI not unlike the nuclear arms race or the space race. The concern is that developers may forsake caution in the name of progress, potentially creating an uncontrollable superintelligence that has no love for its human creators.
Warnings about AI exceeding human intelligence and destroying us all have been around since the 1950s, but they’ve recently hit the mainstream news cycle, dramatically increasing their exposure. Online, the debate is deeply polarized, with figures like Marc Andreessen and Eliezer Yudkowsky becoming figureheads of the two sides, accelerationists and safetyists, respectively. As those two try to pull the Overton window in their direction, plenty of thinkers like Vitalik Buterin and Margaret Gould Stewart have taken up residence somewhere in the middle. At the moment, the case for concern appears to be winning, and the public seems to be souring on AI. According to a 2023 Pew Research Center survey, 52% of US adults are more concerned than excited about the increased use of artificial intelligence in daily life, a significant jump from 38% the year prior.
So Will NPCs Kill Us All? Let’s Find Out
As we stand on the brink of what seems like an AI renaissance, experts and thinkers have raised a crucial question: What does the future hold for this technology? Will AI be a guiding light, helping us navigate the unknown complexities that lie before humanity, much like Deckard Cain in Diablo? Or do we face a future where the very technology we developed to enlighten and aid us becomes the source of our extinction?
Ultimately, I’m trying to form my own opinion on whether we will be the engineers of our own destruction or whether this is all just a bunch of hype. I plan to explore the topic broadly, not just the existential risk, and look at it from a few different angles to round out my perspective:
What can we learn from history: How has AI developed up to this point? What were the impacts of other technological revolutions?
What are the opportunities: What are some of the ways that AI can be used to significantly benefit humanity? What is the opportunity cost of not pursuing this?
What are the risks: What are the downsides? How likely are they to occur and do any mitigations exist?
I’m not an expert, so I’ll be learning as I go. This plan will probably change the more I learn, and I might pick a few topics to dive into more deeply along the way if they seem particularly interesting. Let’s explore the question: Will NPCs kill us all?