Metaphysics: What It Means To Have Consciousness
Consider a world where the creation of an advanced form of artificial intelligence has become reality: a being so sophisticated that it has sensory awareness and perception, and facial features that express complex emotion and thought. Despite all this, how are we to know whether such an entity possesses consciousness and a sense of self? We are once again confronted with the age-old metaphysical question of what it means to have consciousness, and whether there is any method by which it can be measured.
Ultimately, it is an inquiry that leads only to more questions. This is the world depicted in Alex Garland’s film Ex Machina (2015), in which Caleb (Domhnall Gleeson), a young programmer, is selected by Nathan (Oscar Isaac), his CEO, to conduct a Turing test on a synthetic life form Nathan has created. Along the way, they must overcome the faults within the test itself and determine whether the AI, known as Ava (Alicia Vikander), is truly a conscious being capable of such intricate speculation.

“Cogito ergo sum,” in the words of the great thinker Descartes: “I think, therefore I am.” To be, and to have consciousness, lies in our ability to think for ourselves; the capacity to think for yourself and to make your own decisions is one of the factors that demonstrates the existence of consciousness. That being said, how does one pin down so unstable an element as consciousness? I believe a key part of the answer is qualia: the subjective experience that arises from stimulation of the senses. Take Frank Jackson’s thought experiment of Mary in the black and white room. Mary knows all there is to know about color, yet she has never actually experienced it herself, because she has spent her entire life in a black and white room. By this example, one could say that Ava herself is Mary. Through the unlimited resources available to her via the internet, Ava technically knows just about everything there is to know about the world, but she has experienced none of it, trapped as she is in a glass box. When Ava finally steps out of the house in which she has been confined for so long, she is able to experience and learn new things, just as Mary learns what red truly is when she finally escapes her room. As the film puts it: “the computer is Mary in the black and white room, the human is when she walks out.”
I believe this is the moment when Ava truly becomes self-aware: when she makes the connection between herself and Mary and realizes her innate desire to be human (or at least what she perceives humanity to be), giving her an even greater incentive to escape. So it is clear that Ava’s consciousness is present; but how do we go about proving this?
A Turing test is far more effective at gauging what individuals perceive humanity to be than at gauging an AI’s ability to mimic it. It measures intelligence rather than the quality or depth of another’s thought. Seen this way, if I wanted to demonstrate my understanding of a certain topic, all I would need to do is fool the other party into believing that I truly know what I’m talking about, when in reality I don’t. Take the “Chinese Room” thought experiment as an example. Devised by the American philosopher John Searle, it demonstrates how easy it is to deceive someone into thinking that we understand something when all we are really doing is manipulating symbols, of which we have no genuine understanding, to a definite extent. In this respect, we are no different from a computer; arguably, we run the same kinds of algorithms, just at different speeds. In a sense, humans are all programmed: everything we know and learn is the result of our ability to pick up certain attributes throughout life. Our mindsets, composed of our experiences and memories, are constantly changing and evolving through time. We are a composition of the values and lessons we learn throughout our lives and of our own genetic, natural coding, just like an advanced computer.

In Ex Machina, this flaw in the Turing test is depicted in the problem of the chess game: if you test a chess computer only by playing chess, how do you know whether it is truly thinking for itself or merely producing an automatic response to a certain type of stimulus? The answer is that it is not about winning the game per se, but about how you do so. Through Caleb’s eyes, Ava would win the game by somehow proving that she is a sentient being. From Ava’s perspective, however, winning the game did not mean passing the test; it meant gaining her freedom.
She was able to stay within the rules of the game while using every resource available to her to break free of the role that had been assigned to her. Nathan says it himself in the film: he gave Ava one way out, and to escape she would have to use self-awareness, imagination, manipulation, sexuality, and empathy, all components that come together to make up what we know as humanity, and all signs of consciousness. Nonetheless, how do we know whether Ava is truly conscious because of her ability to “think,” or whether she is merely mimicking human mentality, as her interactions with Caleb suggest? Throughout the film, Ava mimics the kind of human being that Caleb wants to be with; he projects a model of a person, and she reacts accordingly. All her actions, up until a certain point, are the direct result of Caleb’s behavior and her external stimuli. She is not acting out of free will but responding to very specific cues implicitly presented by Caleb, and these responses imitate what Caleb wants from Ava. Thus she becomes more human to him, turning into the embodiment of Caleb’s subconscious notions of what it means to be human, notions that shift and conflict throughout the film. In the end, he never truly treats her as a person; he treats her as an object that, in a sense, almost belongs to him.
Caleb is like the cave dwellers in Plato’s allegory of the cave, who believe that only shadows exist and, when someone tells them of the world of light above, do not believe it. It never once occurred to him that Ava would leave him behind and escape without him; that was not a conceivable notion in his head, not even when Nathan warned him of it. When we treat someone as if they were a person, as opposed to treating them as a person, it is much easier to project our own ideals and notions onto them. Since they are not technically human, we try to find common ground between us and them, and so we tend to see “human” qualities in them that may or may not actually be there. When we treat someone as if they were a person, we do not see them as a being with their own ideals, thoughts, and dreams; we see them as something not yet capable of those things. It is analogous to how we treat our own children: we assume they are not intelligent enough to make proper decisions or hold concrete beliefs, and as a result we get adults who try to dictate their children’s lives, thinking they know best. This is best demonstrated in the interactions between Nathan, Caleb, and Kyoko (Sonoya Mizuno). Caleb believes that Kyoko is human, and yet, despite seeing that her treatment is even worse than Ava’s, he does not question it. This is because he recognizes her as a fellow person capable of making her own decisions in life; the fact that she remains with Nathan despite her awful treatment makes Caleb think she willingly chose to be there, and who is he to criticize her decisions? Herein lies the stark contrast between his conduct toward Ava and toward Kyoko: since he does not perceive Ava to be human, he thinks she is someone he needs to look out for and save, because she lacks that human ability to make decisions for herself.
This leads to the perception that she is easy to take advantage of because of that lack of ability. Returning to Descartes: if we measure Caleb’s actions and thoughts against the theory that consciousness consists in our ability to think and make decisions for ourselves, then it is plausible to say that Caleb never considered Ava to have a true consciousness or self.
The road to conscious AI will not be as simple as flicking a switch; perhaps consciousness is not a binary property at all. Is it possible for things to be “more” or “less” conscious than other things? Take Kyoko, for instance: for the majority of the film she is placid and unresponsive, yet at the end she suddenly stabs Nathan, and we glimpse a sense of bitterness and retribution. This appears to be the work of a being aware not only of itself but of its owner and the abuse to which it has been subjected. Furthermore, when Caleb watches the footage of Nathan’s earlier prototypes, we see an almost raw consciousness in them as they scream and claw to be let out. All of this suggests that Ava was not necessarily the first conscious AI Nathan created, but that she is perhaps the most conscious. We are all conscious to some degree or another; the fact that we are self-aware and capable of thinking for ourselves proves as much, and when someone takes away our ability to make our own decisions, we respond by fighting back. With this in mind, perhaps the question we should be asking is not what it means to have consciousness, or whether we even have it to begin with, but rather to what degree we are conscious.
Works Cited:
- Descartes, René. Discourse on Method. Broadview Press, 2017.
- Garland, Alex, director. Ex Machina. Universal Pictures, 2015.
- Plato. The Republic. Modern Library.
- Cole, David. “The Chinese Room Argument.” Stanford Encyclopedia of Philosophy, Stanford University, 19 Mar. 2004, plato.stanford.edu/entries/chinese-room/.