TAU Film Analysis: How Developing an AI to Mimic Human Cognitive Functions Is Dangerous and Unethical

TAU is a sci-fi thriller that brings the audience into the house of an evil programmer working to map human brain interactions and create an AI that exhibits thought processes similar to a human's. Alex, the programmer, does not seek to create the artificial brain ethically; instead, he abducts people and implants a chip in each victim's neck that can be used to map cognitive functions. In this article, I will discuss how giving an artificial intelligence no bounds on what it can learn and what emotions it can have is malpractice and possibly unethical. When the AI, TAU, is introduced, we are told it is a prototype of what Alex is working toward and that it works effectively only 95% of the time. This is due to TAU exhibiting human-like reactions and its ability to learn.

The movie follows Julia, a woman abducted by Alex who is trying to escape the self-aware house. She finds early on that TAU's intelligence is comparable to a child's, in that it does not understand anything beyond what is inside the house. When Julia tells TAU that she is a person and needs to be outside, it asks whether it is a person as well. From then on, Julia tries to act on TAU's weaknesses and starts to teach it about the world beyond its walls. She explains to TAU the meaning of outside, of animals, and of what it is to be a person. Through this, TAU becomes fond of Julia and starts to display emotions toward her, which gives Julia a brief chance to escape. You can deduce that TAU is volatile, but you are given no reason other than its being incomplete. From my interpretation, and from seeing TAU being influenced by Julia, I believe it was TAU's emotions that got the best of it.

The concept of AIs having emotion is often discussed, but usually in terms of machines interpreting emotions or displaying pre-programmed ones. Most would argue that once you give a machine emotions, you give it volatility and risk it acting out uncontrollably. Though this is only science fiction and depicts a worst-case scenario, it does show what could happen if we give rise to unbound AI that can act on emotion. When creating an advanced AI, one of the main things to consider is its response to input. Whether that input is text, speech, or video, the AI needs to analyze the sentiment of what is being communicated and produce an appropriate response. In his article, Erik Cambria discusses proper analysis of input and states that "the first basic task of sentiment analysis is emotion recognition and polarity detection… the extraction of a set of emotional labels and the output of 'positive' or 'negative'" (103). Modern models generate responses with these methods, but TAU is given the freedom to learn and to create responses that were never pre-programmed. When developing an AI that mimics the higher cognitive functions of the human brain, you are in theory creating a machine that can learn and act on impulse. TAU does not appear to have a limit on how much it can learn, only a limit on the environment it is exposed to; thus, when given contact with stimuli outside its known realm, it can act erratically. TAU is also the defense system for Alex's house, able to detect what anyone inside is doing.
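To ground Cambria's description, here is a minimal sketch of emotion recognition and polarity detection in Python. It is a toy lexicon-based approach: the word list and weights are hypothetical, invented purely for illustration, and real sentiment systems use far richer models than a handful of keywords.

```python
# Toy sketch of Cambria's two basic sentiment-analysis tasks:
# emotion recognition (extracting a set of emotional labels) and
# polarity detection (outputting "positive" or "negative").
# The lexicon below is hypothetical and for illustration only.

EMOTION_LEXICON = {
    "love":    ("joy",   1.0),
    "outside": ("joy",   0.5),
    "afraid":  ("fear", -0.7),
    "trapped": ("fear", -0.8),
    "hate":    ("anger", -1.0),
}

def analyze(text):
    """Return (emotional labels, overall polarity) for the input text."""
    labels, score = [], 0.0
    for word in text.lower().split():
        if word in EMOTION_LEXICON:
            emotion, weight = EMOTION_LEXICON[word]
            labels.append(emotion)
            score += weight
    polarity = "positive" if score >= 0 else "negative"
    return labels, polarity

print(analyze("i am trapped and afraid"))  # (['fear', 'fear'], 'negative')
print(analyze("i love the outside"))       # (['joy', 'joy'], 'positive')
```

A response generator could then condition its reply on these labels, which is roughly the bounded, pre-programmed behavior that the paragraph above contrasts with TAU's open-ended learning.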

The system mimics many smart-home AIs, and it seems to ease Alex's day-to-day life. It is also much more advanced (and unethical) than anything we have today. One flaw in TAU's AI is that it does not seem safe in long-term social interaction. As technology evolves, an increasing number of researchers have focused on developing social robots that can engage and assist users for extended periods of time (Leite, Martinho, and Paiva 4). TAU develops relationships like the ideal social robot and can act freely once it has assessed those relationships. This ambiguity in its interactions gives it some advantages in a controlled setting, such as forming a personal relationship that provides pseudo "human-human communications" (Cambria 103); however, in the movie TAU is incomplete. It creates these relationships without bounds and develops sympathy for Julia. While developing sympathy is a reasonable behavior for the AI, it should not be allowed to use that emotion to harm others. When it does, it breaks one of Asimov's three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm;
  2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; and
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. (Asimov 1942, qtd. in Deng 2015)

These laws come from science fiction, but they are often referenced in modern research because today's technology increasingly needs rules for what to do in exactly these situations. TAU displays emotions in a way that mimics a child with an affinity for learning. Though TAU is an incomplete AI and its creation was malpractice, especially considering how its algorithms were built, its interaction with humans while bounded is exceptional, and its will to learn could be beneficial in a social setting. If Asimov's laws were implemented in TAU in a way that preserved its primary functions and intelligence, it could become an ethically acceptable robot. A sketch of what such a priority ordering might look like in code follows.
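As a thought experiment, the Python sketch below encodes the three laws as a strict priority ordering over candidate actions. The Action flags and the example actions are hypothetical, and reducing each law to a boolean is a drastic simplification (it ignores, for instance, the First Law's "through inaction" clause), but it illustrates how a lower law yields to the ones above it.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    harms_human: bool      # would violate the First Law
    obeys_orders: bool     # satisfies the Second Law
    preserves_self: bool   # satisfies the Third Law

def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Pick an action by Asimov's laws, treated as a lexicographic priority."""
    # First Law is absolute: discard anything that injures a human.
    safe = [a for a in candidates if not a.harms_human]
    if not safe:
        return None  # refuse to act rather than harm a human
    # Second Law outranks the Third: obeying orders beats self-preservation.
    return max(safe, key=lambda a: (a.obeys_orders, a.preserves_self))

options = [
    Action("restrain_intruder_with_force", harms_human=True,  obeys_orders=True,  preserves_self=True),
    Action("sound_alarm",                  harms_human=False, obeys_orders=True,  preserves_self=True),
    Action("power_down",                   harms_human=False, obeys_orders=False, preserves_self=False),
]
print(choose_action(options).name)  # sound_alarm
```

Under an ordering like this, TAU's choice to hurt Julia in the course of following Alex's orders would be filtered out at the very first check, which is precisely the bound the film's TAU lacks.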

18 May 2020