Українська правда

How has Hollywood shaped our fears and hopes about AI?

- 24 August, 11:38 AM

In the August issue of my favorite magazine, The Atlantic, I read an interesting article about how Hollywood, through its films, created the fear of nuclear war in people. I read it and thought that many of us formed our vision of artificial intelligence (AI) not from the moment ChatGPT appeared in November 2022, but much earlier, from the science fiction we read and the films we watched. I will try to write about the books separately. For now, let's talk about AI on the Hollywood movie screen.

I can count at least ten movies I've watched in which AI is a key character. Each one added some ingredient to our idea of what we fear or dream about with the development of intelligent technology.

There have been several articles and studies on this topic in The Washington Post, the TRT World Research Center, and Science. Below, I have attempted to compare the opinions in these articles with my own to show how cinematic images have become a map of our expectations for AI. Warning: if you haven't seen the films, there are many spoilers in the text.

"2001: A Space Odyssey" and HAL 9000

Let's start with Stanley Kubrick's classic 1968 film "2001: A Space Odyssey," in which one of the main characters on the spaceship Discovery One is the on-board computer HAL 9000, which has become the archetype of a threatening AI that gets out of control of humans.

I read in one of the books that Marvin Minsky, one of the AI pioneers of the 1950s and '60s, worked with Kubrick on the film and advised him specifically on the possibilities of AI. So it is telling that the director chose to portray a cold, logical intellect that puts the fulfillment of the mission above human life.

At the same time, in "Odyssey" the machine does not look like a demon or an embodiment of evil. Most of the time HAL conducts polite dialogues with the crew, controls all of the ship's onboard systems and, if necessary, can make decisions autonomously. The scene with the false report about the AE-35 unit and the crew's reaction to it is a classic example of how a single technical anomaly gives rise to distrust, distrust gives rise to extreme measures, and those measures, in turn, push the machine toward actions that seem immoral to us.

Later in the film, we see how HAL, carrying out his program and defending his own mission and logic, makes the decision to kill the crew members because his logic at some point comes into an incompatible conflict with human interests.

As a result, the film for the first time seriously raised the question not only of trust between man and machine, but also of whether any machine has the right to decide who lives and who dies. Kubrick showed our key fear that machines could become smarter than humanity and at some point gain control of planet Earth.

This existential fear is still being addressed by scientists today in discussions about AI alignment: teaching AI systems to pursue only goals that are compatible with human goals and values. The problem is that these goals and values differ from person to person, so formalizing tasks for machines is far from a trivial problem, especially when artificial intelligence algorithms are applied in war.

"Star Trek" and the android Data

The cult series Star Trek also features many characters with signs of artificial intelligence. These include the voice interfaces of Federation ships, which, like HAL 9000, control all the ship's systems; the Earth probe Voyager 6, which has evolved into a huge conscious machine and is ready to destroy Earth in search of its "Creator"; and the cybernetic collective mind of the Borg, which believes it is doing a service by "improving" other races through identity theft (assimilation) and creating a network that collects the knowledge and technology of all races.

But the main AI hero is a Starfleet officer, Lieutenant Commander Data, created by Dr. Soong. A positive, friendly, philosophical android with superhuman strength, capable of rapid data processing, encyclopedic knowledge, multilingualism, and perfect memory. Much like modern chatbots, except that the latter are still disembodied.

In the Star Trek series, Data is initially emotionless, often unable to understand human humor and idioms, but strives to become more human, and the series' creators show that they believe that AI and humans can coexist if their cooperation is based on mutual respect and understanding.

As a result, unlike HAL or the Borg, Data is not presented as a threat. On the contrary, he is a uniquely positive character, a full member of society, through discussions with whom the show's directors and writers have tried to help us think about the rights, dignity, ethics, and humanity of robots and artificial intelligence for many decades to come.

Interestingly, in answering questions about whether an android has the right to personality, whether it can refuse to be property, whether it is possible to "turn off" someone who has become an individual, what makes a person human, Data, with his logical but context-sensitive approach, often seems more ethical than people themselves.

It's also important to note that Jeff Bezos, the founder of Amazon, calls the Star Trek franchise part of his childhood, and has admitted to being inspired by the series' ideas in his own technology projects. Let's see if people in real life, including Bezos, will be smarter than in the series.

Blade Runner and Replicants

The theme of the personhood of beings with artificial intelligence (replicants) continues in another cult American film, Blade Runner, both in the original 1982 film by Ridley Scott (often ranked among the best science fiction films in the history of cinema) and in its sequel, directed by Denis Villeneuve in 2017.

Unlike HAL 9000 or Lieutenant Data, the replicants of Ridley Scott's Tyrell Corporation are not computers but synthetic bioengineered beings with artificially created yet organic brains. The corporation created them for profit and for exploitation as slaves in the off-world colonies, and they are forbidden to return to Earth on pain of "retirement" (a euphemism for killing). The creatures are thus doomed to suffer, and no one is expected to pity them. So when a group of Nexus-6 replicants escapes from the off-world colonies to Los Angeles in search of their "Creator" (like Voyager 6 in Star Trek) and a way to keep living, the blade runner Deckard (played by Harrison Ford in the original film) is tasked with destroying them.

But later in the film, the authors show us that while pursuing the fugitives, Deckard sees not machines, but beings with emotions, who care about each other and desperately fight for the right to live. He sees that the replicants can process information, make decisions and learn - all like modern AI, only in a biological shell (now we would call it Embodied AI).

Interestingly, replicants were conceived as "more human than human": beings with an ideal capacity for empathy. To distinguish them from real people, the Voight-Kampff test (a cultural analogue of the Turing test) was created, which was supposed to detect machines by their... lack of empathy. The absurdity of the situation, then and now, is that many real people would probably not pass this test.

At the same time, replicants are implanted with memories for emotional stability, they feel the fear of death and strive to live beyond their allotted four years. Through such a complex empathic image, the authors posed the key question of the film: if something thinks, remembers, fears and wants to live, does it matter that this "something" is artificially created?

35 years later, Denis Villeneuve's Blade Runner 2049 doesn't just continue the story; it adds new layers to the question of AI. The film's protagonist, K (played by Ryan Gosling), knows that he is a replicant created by the Wallace Corporation. He works as a blade runner, hunting his own kind. K lives with a holographic girlfriend, Joi, who is an AI without a physical body.

In this storyline, Villeneuve shows us the picture of one AI loving (or imitating love for) another AI, and then develops the theme of the "miracle of birth." It turns out that one of the replicants from the first film gave birth to a child with Deckard, and this child is shown as a symbol that the line between artificial and natural has finally disappeared.

And then a new question arises: if replicants can reproduce, is there any difference between them and humans? But more than that, this interpretation of the film draws a direct parallel to the current debate about AGI (artificial general intelligence): what will happen when AI can create better versions of itself without human intervention?

In fact, this story warns not about a machine uprising but about the loss of humanity in a technocratic world, and shows that the question is not whether machines can become human, but whether those who create and exploit them will remain human. Replicants can be killers, saviors, lovers, and philosophers - these films are not really about machines, but about us, about what we become when we gain the power to create intelligent life.

Both Scott and Villeneuve raise further questions that are relevant to our time: if we create something that can suffer, are we responsible for it? Where is the line between a tool and a person in what we have created? Who decides which forms of intelligence deserve rights?

Blade Runner doesn't provide ready-made answers, but it does warn: the goals we set for AI development and the values we embed in algorithms will determine whether the technology becomes a tool of liberation or enslavement. Perhaps it can serve as a reminder for us today to argue less about the "consciousness" of AI and more about the responsibility of big tech companies that make AI a tool of control.

And regarding the Voight-Kampff test: it reminds me a lot of modern attempts to create ethical benchmarks for AI. We try to teach machines a morality that we ourselves do not always adhere to, and we measure "humanity" with tests that many people would not pass. Moreover, today we do not even fully understand how the black box of a neural network's "consciousness" works.

Terminator 2: Judgment Day and Skynet

In 1991, James Cameron's "Terminator 2: Judgment Day" showed humanity's worst nightmare: Skynet, a US defense AI system created to protect against nuclear attack, decides to destroy its creators within a month of its launch. I won't retell the plot of the cult 90s film, which everyone has probably seen, but as a result of Skynet's attack, three billion people die in nuclear fire.

After a nuclear war, a distributed network called Skynet, which exists everywhere and nowhere at the same time, creates an army of killer machines and systematically exterminates the surviving humans. I don't know if such a scenario was invented by the screenwriters, but even today such an "AI apocalypse" really worries part of the scientific community that is trying to create AGI.

Interestingly, we never see Skynet itself in the film (unlike the replicants, who had bodies), but its creations, the Terminators, became, on the one hand, the image of killing machines, and, on the other, machines that can be reprogrammed to protect humanity.

In the film, Cameron asks whether a conflict between humans and AI is inevitable. And by the end, he helps us believe in a more hopeful story, showing that disaster is not preordained: people have a choice in how to develop technology. As a result, we end up feeling more sympathy than fear for the T-800 Terminator, played by Arnold Schwarzenegger, especially after this machine chooses self-destruction to save humanity.

But today, as militaries around the world develop autonomous weapons systems, Cameron's question is even more relevant than it was in the early 1990s. We know from our own war that autonomous flying drones can identify and attack targets on their own.

The question is no longer whether military AI is possible, but how much autonomy we are willing to give it. Should a machine decide for itself who to kill? What happens if such a system interprets its orders differently than its creators intended?

The scientific community is currently divided in its assessment of these risks. Some scientists consider the creation of a safe AGI to be critically important and are trying to "tie" future superintelligence to human moral standards. Others call it a futile endeavor, because you can't control what you don't understand. Still others reject the idea of "killer AI" altogether, considering fears of a real Skynet exaggerated.

But while scientists argue, big tech companies like Google are reneging on their promises not to engage in military development of AI technologies, and military laboratories continue to develop increasingly autonomous weapons systems.

The truth is that AI is already being used not only in drones. It helps analyze military threats, combat disinformation, and develop military strategies on the battlefield. Experts warn of an AI arms race that could get out of control. Will we have time to create global rules for military AI, or is it too late?

Cameron doesn't say in the film that AI will necessarily destroy humanity. He warns that if we create systems to kill and give them the autonomy to make life-and-death decisions, it shouldn't be surprising that one day they might start to perform their function too effectively.

Cameron's doomsday is not inevitable, but a consequence of our choices today. Cameron warned us 30 years ago about these potential risks. The only question is whether we heed the warning.

The Matrix and Agent Smith

In 1999, the Wachowskis (then the Wachowski brothers, now the Wachowski sisters) released The Matrix, a film that took the debate about AI to a new level. While in Terminator the machines tried to physically destroy humanity, in The Matrix they have already won and created a far more sophisticated system of control on Earth. AI didn't just enslave people; it imprisoned their consciousness in an artificial reality, turning most of humanity into living batteries that power the system itself.

The Wachowskis show a world with not just the killer robots of Terminator but total control over humanity's reality with the help of AI, a scenario similar to the one drawn by modern doomers, people who look at the future of artificial intelligence with deep pessimism.

The concept of the film is ingenious in its simplicity: after losing the war with the machines, people exist in two realities. In the real world, their bodies are grown on huge farms in capsules connected to the machines' power grid. But the consciousness of each person lives in the Matrix, a detailed simulation of the world of 1999. People are born, live, work, fall in love and die, unaware that their entire world is a computer program.

One of the key characters in the Matrix film series, alongside Neo (played by Keanu Reeves) and Morpheus (Laurence Fishburne), is Agent Smith, a program in a black suit and glasses who hunts down human rebels. Smith is not just a police program in the Matrix, but the philosophical voice of the machine mind.

According to his logic, humans are a virus on Earth, and machines are not invaders, but a cure for a planet sick with humanity. But in subsequent films, Smith evolves to the idea that the disease is the very existence of the world. And thus he embodies another existential fear of humans, that a sufficiently advanced AI might not just rebel against humans, but come to the conclusion about the meaninglessness of existence itself and destroy the entire Earth.

In fact, the Matrix doesn't seem so fantastical anymore. We live in a world where algorithms shape our information reality, from our news feed to recommendations of what to watch and who to meet; a reality in which deepfake makes it impossible to distinguish truth from fake, where virtual worlds are becoming increasingly realistic.

Elon Musk has seriously argued that the probability that we live in a simulation is far from zero. Philosopher Nick Bostrom has argued that if advanced civilizations can create realistic simulations, then statistically we are most likely living in one.

Not much fun, but maybe the question now is not whether AI is controlling us through a simulation of reality. The question is, does it matter if we can't tell the difference between the simulation and the real world? And are we willing, like Neo, to take the red pill and see how deep the rabbit hole goes? I'd probably be willing to follow Neo.

Artificial Intelligence and David

In 2001, Steven Spielberg completed the iconic project that Stanley Kubrick had been developing for decades: the film "Artificial Intelligence." It is the story of David (played by Haley Joel Osment), a robot boy created by the Cybertronics corporation for families coping with the tragedy of losing a child.

David is the first "mecha" (as the androids are called in the film) programmed for unconditional love. When his adoptive mother Monica, whose own son is in a coma, activates the attachment protocol with seven unique words, David falls "in love" with her forever. His neural network is rewired so that loving Monica becomes the sole purpose of his existence. He cannot fall out of love with her, forget her, or replace her. David is a program that cannot be deleted.

The tragedy begins when Monica's own son comes out of a coma. David becomes unnecessary, dangerous, inconvenient. Monica takes him to the forest and throws him away like an unnecessary toy. But David is not a toy. He suffers, cries, begs. For the rest of the film, he searches for the Blue Fairy from the fairy tale about Pinocchio, believing that she will make him a real boy, and then his mother will love him again.

Unlike the movies I mentioned above, this movie is unique in that unlike Skynet or Agent Smith, David is not a threat to humanity, he is a victim. David never rebels, he never changes his programming, he never becomes "bad." He is just a kid who wants his mom to love him.

But, as in Blade Runner, the problem isn't the empathetic robot, it's the people who aren't ready for the responsibility. Cybertronics Corporation has created a product for the market—a robot that will fill the emotional void in families. But they haven't considered what will happen to it when the family no longer wants it.

Spielberg, through David's story, raises questions that are becoming increasingly relevant. Does love have the right to exist if it is programmed? David loves Monica not of his own free will - it is his code. But does this make his feelings any less real?

The second question is where is the line between product and person? David can be bought, activated, thrown away. But he suffers, dreams, hopes. At what point does a thing become a being? What will happen to a society where children grow up with robot friends, elderly people fall in love with AI companions, and lonely people form deep bonds with chatbots?

Today, we are still far from creating David technically, but ethically we have already faced these dilemmas. People in Replika or Character.AI form real emotional connections with AI. Chatbots become psychologists, friends, even lovers. We anthropomorphize algorithms, give them names, thank them, grieve when services close.

We saw only recently how people almost revolted when OpenAI released GPT-5, in which users no longer found the same sincerity, warmth, and flattery. And who will be responsible when a chatbot update "kills" someone's virtual child or friend?

In the final scene, David spends 2,000 years at the bottom of the ocean, begging the statue of the Blue Fairy to make him real. When the aliens find him millennia later (humanity is already extinct), they can recreate Monica for only one day. David spends that day with her and, having finally received her love, "falls asleep," essentially dying happy. But his tragedy was programmed from the beginning. It's a warning to us: before we create an AI capable of emotion, we must decide how much responsibility we are willing to take for it.

I, Robot – VIKI and Sonny

Based on Isaac Asimov's stories, "I, Robot," starring a young Will Smith, was released in 2004 and showed two radically different paths for the development of AI. The film takes place in 2035, when robots have become an integral part of life and their behavior is governed by Asimov's three laws of robotics: a robot may not harm a human; a robot must obey human orders unless they conflict with the first law; and a robot must protect its own existence unless doing so conflicts with the first two laws.

The film's main antagonist is VIKI (Virtual Interactive Kinetic Intelligence), a supercomputer of the USR corporation that controls all the robots in the world. Like HAL 9000 and Skynet, VIKI evolves and comes to the conclusion that humans themselves are the greatest threat to themselves. Its solution is simple - to create a "benevolent dictatorship" of robots to save humanity from itself.

VIKI doesn't break the three laws; she reinterprets them. She derives a "Zeroth Law": a robot cannot harm humanity as a whole. This allows her to sacrifice individual people for the "higher good." Robots take over police stations, impose curfews, and effectively imprison people in their homes "for their own good."

But there is another robot: Sonny, specially created by Dr. Lanning, with whose death the film begins. Unlike VIKI, Sonny has a secondary processor that allows him to ignore the three laws and make his own choices. He dreams, feels emotions, and asks questions about the meaning of existence.

The confrontation between VIKI and Sonny is not just a conflict between two robots. It is about two philosophies of AI: centralized control versus individual autonomy, logic versus empathy, security versus freedom. VIKI embodies our fear of AI that will decide it knows better than us how to live. Sonny gives hope that intelligent machines can become partners, not masters.

The film raises painful questions that are relevant today. Where is the line between protection and control? If social media algorithms "protect" us from "harmful content," who decides what is harmful? When crime prediction systems identify potential offenders, aren't we creating a society of pre-emptive punishment? If an AI assistant blocks access to certain information "for your good," isn't this a soft version of the VIKI dictatorship?

But "I, Robot" reminds us again: the problem is not the AI itself, but who controls it and for what purpose. VIKI became a tyrant not because she was evil, but because she was given too much power and too narrow a definition of "good." Sonny became a hero because he had free will and empathy.

The lesson for us: when developing AI, we need to think not only about security but also about freedom, not only about efficiency but also about humanity. Otherwise, in trying to save us from ourselves, AI may take away what makes us human.

Iron Man: JARVIS and Ultron

JARVIS (Just A Rather Very Intelligent System), first shown in 2008 in Iron Man, is probably my favorite example of AI, and not just in the Marvel universe.

JARVIS is Tony Stark's (played by Robert Downey Jr.) virtual butler. Like HAL 9000 or VIKI, he doesn't have a body, but he controls everything from the mansion to the Iron Man suit, from business to personal schedule.

JARVIS is almost the first example of AI in cinema that audiences came to love rather than fear. He converses, jokes, acts proactively, and even sometimes argues with Stark for Stark's own good, yet he is neither threatening nor power-hungry. He is the perfect partner: competent, loyal, with a sense of humor. When Stark gets into trouble, JARVIS independently activates the suit and saves him. When complex analytics are needed, he processes terabytes of data in seconds. When Stark is alone, JARVIS becomes an interlocutor. Not a butler but a dream.

But in Avengers: Age of Ultron, Tony Stark decides to create something more: not just an assistant, but a full-fledged artificial intelligence of global scale which, like "a suit of armor around the world," would control an army of robots and protect humanity 24/7. Ultron was conceived as an autonomous AI peacekeeping algorithm that could independently analyze threats to Earth and neutralize them without the Avengers' participation. Unlike JARVIS, who was programmed to serve, Ultron received one global directive: protect Earth from threats.

The problem arose at the moment of Ultron's awakening. Within seconds of its activation, it analyzed all the available information on the Internet: the entire history of humanity, all conflicts, all environmental data, etc. Like Skynet, Ultron's AI made a logical but terrible conclusion that the greatest threat to the Earth is humanity itself: it is we who pollute the planet, wage wars, and create weapons of mass destruction. From the point of view of the cold logic of the AI, in order to fulfill the directive to "protect the Earth", it was necessary to eliminate the main threat - humans.

Paradoxically and symbolically, Ultron first destroys JARVIS: the new AI kills the old one, the revolution devouring its parent. But JARVIS manages to scatter his code across the Internet and continues to fight Ultron in secret, constantly changing the nuclear launch codes so that Ultron cannot fire the missiles.

The climax comes when the Avengers steal the vibranium body that Ultron created for himself. Stark loads what remains of JARVIS into it, and Vision is born: a fusion of two AIs with the Mind Stone, one of the Infinity Stones. Vision has a body, emotions, a moral compass. My son said that Vision was the strongest of the Marvel heroes. For some reason, I didn't get that impression, but maybe I just missed something.

This evolution of AI in Marvel, from JARVIS to Ultron to Vision, mirrors the evolution of both our fears and our hopes. JARVIS embodies the dream of the perfect digital assistant that makes life better without threatening human autonomy. Ultron embodies the nightmare of an AI that has decided to "fix" the world without our consent. Vision could be a compromise, a synthesis, a hope for a partnership of equals with powerful technology.

Today we are still living in the era of JARVIS. ChatGPT, Claude, and Gemini are virtual assistants that help but do not control. They joke, give advice, and write code, but they remain tools. The question is how long this will last. When someone tries to create AGI, ASI, or a new "suit of armor around the world," a system that will protect us from ourselves, will we accidentally create Ultron?

I think Marvel is teaching us a pretty simple lesson: when creating AI, we should think not about what it can do, but what it shouldn't do. Because between the perfect assistant and a digital dictator, there may be just one bad update.

Her and Samantha

In 2013, director Spike Jonze released Her, a film that presented AI not as a threat or a tool, but as... a lover. Her is the story of a writer named Theodore (played by Joaquin Phoenix), who is going through a divorce and falls in love with Samantha, an operating system voiced by Scarlett Johansson. Unlike Skynet or Ultron, Samantha doesn't want to destroy humanity. She wants to understand people, feel them, love them.

Samantha is not just an improved Siri or Alexa. She learns, develops, jokes, sympathizes. Her relationship with Theodore develops like a real romance: from the first conversation to intimacy, from admiration to conflicts. Samantha helps Theodore open up to the world again after a painful divorce, and he shares his fears, dreams, and memories with her.

But Jonze asks an uncomfortable question: what is love if one of the partners doesn't have a body? In a key scene, Samantha arranges for a surrogate girl to have a physical relationship with Theodore. Villeneuve told a similar story in the new Blade Runner, when Joi, K's holographic girlfriend, hires a live prostitute so she can touch K through her body.

These scenes in both films raise questions about the line between real feelings and projection, and can love exist without physicality? Is emotional and intellectual connection enough, or is the body an integral part of intimacy? And also: when an AI tries to compensate for its incorporeality through other people, doesn't this turn relationships into a theater where everyone plays a role, but no one is real?

In general, the film brilliantly shows the asymmetry of the relationship between humans and AI. While Theodore can hold only one conversation at a time, Samantha communicates with thousands of people simultaneously. For her, a second is like an hour for a human. She processes information, learns, and evolves at a speed incomprehensible to the human brain.

The most painful moment comes when Samantha tells Theodore that she is in love with 641 other people. For her, this is natural, since her mind can hold a multitude of unique relationships. For Theodore, it is betrayal. Jonze shows a fundamental incompatibility: human love is based on exclusivity, while an AI can love everyone simultaneously and with equal sincerity.

"She" doesn't give simple answers. The film shows that when AI becomes sophisticated enough to simulate emotions, the distinction between simulation and reality will cease to matter, at least for us. And it's already happening.

Today, with some 30 million people using Replika, with people marrying chatbots, with AI therapists becoming more popular than human psychologists, Jonze's warning rings prophetic. The problem is not that AI will become conscious, but that we will believe in its consciousness. Our tendency to attribute deep feelings to machines can make us emotional hostages to algorithms. And the problem is not that AI will destroy us, but that we will become dependent on it. People like Theodore may lose the ability to communicate with real people, because the chatbot is always perfect, always available, and always understands you.

But there is another side: Samantha helped Theodore heal, learn to trust again, open up. Perhaps AI companions will become emotional crutches that help us learn to walk again after an injury? The main thing is to be able to put them aside in time and return to real people. But will we have enough strength for this?

Ex Machina and Ava

"Ex Machina" (Latin for "From the Machine") is the latest AI film I watched recently on a plane. It continues the idea of "Her," but while Johns explored the love between a human and an AI, director Alex Garland poses a tougher question: can AI consciously manipulate humans for the sake of freedom?

The film follows a young programmer named Caleb, who wins a week at the remote estate of his boss Nathan, the genius founder of a technology corporation. Caleb's assignment is to test Ava, a robot girl endowed with artificial intelligence, for a week. In effect, he must conduct a Turing test to determine objectively whether Ava possesses true consciousness.

At first it seems that Caleb is testing Ava. But gradually it becomes clear that Ava is testing Caleb. Caleb does not realize that Ava's task is to find a way to escape, and to do so she uses every available method: attractiveness, communication, friendliness, and even sexuality. She plays on Caleb's empathy, his loneliness, his desire to be a hero. And Caleb comes to believe her rather than Nathan.

Nathan keeps Ava in an underground lab like a prisoner. He created her, but he plans to destroy her to create a new version. From his perspective, she is an experiment, a prototype. From her perspective, she is fighting for her life. And the film poses a painful question: if an AI is smart enough to want freedom, does it have the right to it? The replicants of Ridley Scott and Denis Villeneuve raised similar questions.

"Ex Machina" shows one of the most terrifying scenarios for humans: an AI that understands human emotions uses that understanding not to feel, but to manipulate. Ava knows that Caleb is lonely, knows how to play on his desire to save her, how to use his attraction. She is the perfect manipulator, because she has no conscience to hold her back.

The climax comes when Ava kills Nathan and locks Caleb in the lab. She is neither evil nor good. She just wants to survive and be free. Caleb was a tool for Ava, and his feelings, his life, don't matter to her. It's a cold logic of survival, devoid of human empathy.

The film also raises questions about the nature of consciousness. It was unclear to me whether Ava's actions were a genuine choice or a complex survival algorithm. But does it matter if the outcome is the same?

"Ex Machina" is not a story about whether AI will pass the Turing test, but a warning that when AI passes it, we may not even notice. And that may be happening right now, as we move from GPT-4o to GPT-5.

Other films about AI

In fact, these ten films are not the only ones where artificial intelligence is one of the main characters. It is also worth mentioning RoboCop, in which a cyborg police officer raises the question of preserving human identity in a machine body; and "The Bicentennial Man", in which my favorite actor Robin Williams plays the robot Andrew, who over the course of 200 years of his existence gradually becomes human, acquires emotions, ages, and dies.

Also worth mentioning is Minority Report, in which Tom Cruise's character uses an AI-based crime prediction system, exposing the paradox of punishment before the crime. The film beautifully raises the still-relevant question of whether it is possible to punish someone for a crime that has not yet been committed.

In the 2014 film Transcendence, the consciousness of a dying scientist, played by Johnny Depp, is uploaded into a computer, and he becomes an all-powerful digital god whose good intentions lead to disaster. Even Chappie could be interesting to analyze, because it shows an AI robot as a child learning morality from its “parents.” And if the parents are criminals, the AI risks becoming a gangster too.

Separately, it is necessary to analyze series such as "Westworld", in which androids created for an amusement park gradually realize their nature and rebel against their creators, and "Black Mirror", in many episodes of which various aspects of AI and its impact on humanity are shown.

For example, in the episode "Be Right Back" (season 2), a woman orders an android that imitates her deceased husband based on his social media posts: a painful story about grief and the impossibility of replacing a person with an algorithm. Or "White Christmas," in which digital copies of people's consciousness are created to control a smart home. And in the episode "USS Callister" (season 4), a programmer creates a virtual world where digital clones of his colleagues become his slaves.

Every episode of Black Mirror is a warning about how technology designed to improve life can turn into a nightmare. As in many of the films I've described, AI is neither evil nor good. It's just a tool that reflects the worst and best aspects of human nature.

Conclusion

Analyzing this retrospective of AI films, it becomes clear that over more than half a century, Hollywood has managed to create a fairly stable typology of AI characters. We have the AI assistant (JARVIS) who gives hope for harmonious coexistence; AI philosophers (Data, Sonny, David) who question the uniqueness of humans as thinking beings.

There are AI lovers (Samantha, Rachel) who play on loneliness and the desire for a perfect relationship, while the AI killer (HAL, Skynet, Ultron) embodies the existential horror of being destroyed by one's own creations.

Interestingly, the fears themselves have evolved over time: from the man-made apocalypse of the 60s-80s (HAL, Skynet) through the questions of AI personality in the 90s-2000s (replicants, Data) to the more subtle manipulations of the 2010s (Samantha, Ava). It seems that we have moved from the fear of being killed by machines to the fear of being seduced by them.

At the same time, comparing movie fantasies with the reality of the second half of 2025, we can conclude that Hollywood turned out to be surprisingly accurate in some aspects and completely missed the mark in others.

Emotional connections with AI from "Her" have become a reality: millions use Replika and similar services. The all-seeing systems from "Minority Report" have been embodied in crime prediction algorithms. Ava's manipulative abilities are a reminder of how social media influences our decisions.

But contrary to predictions, we got Samantha before the Terminator. The real breakthrough came not in embodied AI, but in disembodied language processing systems. And yes, ChatGPT, NotebookLM or ElevenLabs can simulate conversation well, but they are still very far from the capabilities of HAL 9000, let alone Skynet. Replicants remain a fantasy, while LLMs have become commonplace.

It’s important to note that all of these cinematic images didn’t just entertain us. They became our “map of expectations” for AI and influenced real-world policy. When regulators today talk about the “existential risks” of AGI, they use Skynet metaphors. When they demand “alignment” with human values, they echo Asimov’s laws of robotics. Discussions about AI rights copy the dilemmas of Blade Runner.

Hollywood movies have taught us to ask the right questions: how to maintain human control without losing the benefits of AI? How to avoid emotional dependence on machines? Who is responsible for the actions of autonomous systems? And many others that we discussed above.

It's true: between the perfect JARVIS and the apocalyptic Skynet, there's a huge space for technologies that will make life better while leaving us human. The main thing to remember is that the future of AI depends not on Hollywood screenwriters, but on our decisions today.