What movies about artificial intelligence routinely get wrong
As the impact of AI continues to grow, modern media depictions of the technology are becoming more common.
Unfortunately, most of these depictions are misleading at best and dangerous at worst. They either lead to wild overestimations of AI capability, or create a false sense of security about the risks associated with AI development.
Make no mistake: AI is advancing incredibly rapidly. Many of the tropes and ideas previously accessible only in science fiction will soon become reality.
But it's important to ground that knowledge in fact. Below, I'll cover the three main ways that movies about AI routinely get things wrong, as well as some common misunderstandings about how AI works.
1. AI timelines are a lot shorter than most people think
The idea of a machine gradually becoming intelligent is a holdover from early science fiction, written before we understood the exponential nature of technological progress.
In reality, AI growth starts with a trickle, and ends with a waterfall. And we're well past the trickle stage.
In the literature, this is called "hard takeoff". After AI reaches a sufficient level of performance - say, a level on par with that of a human being - the slope of progress will increase dramatically, since it can then use its knowledge to bootstrap its own growth.
As the AI becomes more intelligent, its rate of self-improvement grows too, resulting in a steep exponential curve. At that point, AI will probably begin making major advancements on a monthly, or even daily, basis.
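The dynamic described above - improvement proportional to current capability - can be sketched as a toy simulation. This is purely illustrative: the starting values, growth rates, and timescales below are made-up assumptions, not forecasts.

```python
# Toy model of "hard takeoff": compare capability that improves by a
# fixed amount each year (linear) against capability whose yearly
# improvement is proportional to its current level (exponential).
# All numbers are illustrative assumptions, not predictions.

def linear_growth(start: float, step: float, years: int) -> list[float]:
    """Capability improves by a fixed increment each year."""
    return [start + step * t for t in range(years + 1)]

def self_improving_growth(start: float, rate: float, years: int) -> list[float]:
    """Each year's improvement is proportional to current capability,
    i.e. the system uses its intelligence to speed up its own growth."""
    caps = [start]
    for _ in range(years):
        caps.append(caps[-1] * (1 + rate))
    return caps

if __name__ == "__main__":
    years = 10
    lin = linear_growth(1.0, 0.5, years)
    exp = self_improving_growth(1.0, 0.5, years)
    for t in range(years + 1):
        print(f"year {t:2d}: linear={lin[t]:6.2f}  self-improving={exp[t]:8.2f}")
```

With these (arbitrary) parameters, both curves look similar for the first few years - the "trickle" - before the self-improving one pulls away dramatically, which is the whole point of the hard-takeoff argument.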
We're quite close to this already. Over the course of the last two years, for example, we went from AI struggling to understand simple sentences to PaLM beating the average human on language-understanding benchmarks.
Naturally, hard takeoff makes for a poor movie plot. It's much more appealing to Hollywood audiences for a machine to slowly gain sentience or capability over time, usually making friends along the way (looking at you, Chappie).
So writers and directors, either out of a lack of understanding or a desire to maximize ratings, often fudge these timelines considerably.
2. AI & humans probably won't work together
Another common way that movies about AI get things wrong is in the portrayal of humans working alongside AI in the future. Think Westworld or the Terminator.
Spoiler alert: a real superhuman AI wouldn't need John/Sarah Connor's help. It would probably just insert itself into the Internet & begin a nuclear firing sequence that ends the vast majority of life on the globe.
Fortunately for us, five-second films don't sell as well as two-hour ones. (I, for one, welcome inefficient robot overlords.)
As I often allude to, AI will outperform humans on all tasks by orders of magnitude. Within a few decades, the market will inevitably lead to AI displacing humans in most jobs, even with harsh regulations on the way.
Humans and AI working hand-in-hand, like some sort of communist utopia, is probably a far-fetched dream. Collaboration would require AI to bottleneck its performance considerably, and there's simply no reason why it should.
Hopefully, AI being the main value producer will lead to an era of abundance for humanity. We'll no longer need to work, since it'll be clear robots can just do anything 10,000x faster anyway.
But it's also possible that this will lead to global strife and danger as our economy transitions. Only time will tell.
3. AI won't have emotions
In movies about artificial intelligence, writers and directors love to portray AI as possessing emotions. This is because it makes machines an extension of humanity, which is a straight line to drama and conflict (the prerequisite to good entertainment).
To that end, AI is often shown as being happy or sad, benevolent or malicious - think Chappie, or Interstellar, or even the Matrix.
In reality, though, advanced AI is unlikely to possess emotions, for the simple reason that emotions aren't economically efficient.
Humans evolved emotions like love, hate, and jealousy to help our species survive before more cognitively adept machinery became available. They're not necessary in present-day society - they're merely a holdover from an earlier evolutionary age.
Unless machine learning engineers wanted to be purposefully inefficient, there'd be no reason for them to build AI in a similar way. Superhuman AI doesn't need room for an amygdala: better to just make the frontal cortex a thousand times better instead.
This doesn't mean AI wouldn't be capable of displaying the human equivalent of emotions, by the way. Text models are already great at convincing us they feel happiness, sadness, and fear. But the key difference is that these are simulated responses: AI certainly wouldn't be driven by these "feelings" the same way humans are.
Movies about artificial intelligence that get it right
As entertaining as most movies and stories about AI are, don't let your understanding be hamstrung by directors or authors who don't fully grasp the technology.
In that vein, here's some science fiction that I think gets AI right:
- Ex Machina (movie), by Alex Garland
- Crystal Society (book series), by Max Harms
- Transcendence (movie), by Wally Pfister. Warning: critics hated this. But it's one of my favorite AI films of all time.
Closing thoughts
In short, here's how movies about AI get things wrong:
- In films, AI capabilities often grow linearly. In reality, growth is exponential.
- In films, AI & humans are shown working together to achieve common goals. In reality, that would be extremely inefficient - AI will probably just do everything for us.
- In films, AI often develops emotions. In reality, why would we give our machine overlords an amygdala if we didn't have to?
Over the coming years, the lines between sci-fi and real life will continue to blur. And this transition period will bring with it the potential for new, exciting fiction that will challenge our minds and force us to look further into the future. Personally, I can't wait.
Keep your eyes peeled for the aforementioned AI tropes & happy watching!