
‘All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.’


Article 1 of the Universal Declaration of Human Rights



Sidenote: This paper contains spoilers for season 1 of Westworld.


Introduction

Westworld is a dystopian HBO television series inspired by the 1973 film of the same name. The series takes place in a futuristic amusement park, set around the year 2058. The park, Westworld, specializes in offering its visitors a trip to the past in a Western setting, in which androids play the role of non-player characters (NPCs). The possibilities for interaction between the visitors and these androids are nearly limitless. One can interact with the androids in a friendly and reciprocal way, go on ‘quests’ and experience all kinds of adventures. However, one can also behave in a fully unethical way, killing and/or abusing androids, while facing no real consequences for one’s actions. The series revolves around the ethical question of whether androids, once cognitively indistinguishable from humans, do or do not deserve human status. I would like to argue that when they do, they also deserve the protection of human rights.

          In this paper I would like to focus on the question of whether an android and its artificial intelligence (AI), when perceivably human and with an experience so humanlike that they possess humanlike consciousness, deserve to be included in the Universal Declaration of Human Rights.

          It is important to note that we humans have always seen consciousness as a purely human trait. The first article of the Universal Declaration of Human Rights illustrates this beautifully by considering all humans to be conscious beings. To this day we classify the hierarchy of life through our perception of consciousness, and in this manner we place certain animals above or below others. As Seneca once stated, ‘For man is a reasoning animal’, and with this it is already clear that the distinction between the reasoning human and the non-reasoning animal ‘other’ is millennia old. Animal rights activists can attest to this debate.

         It is not only animals that humans treat as lesser beings; fellow humans have also been extensively dehumanized in the past. Aboriginal Australians, for example, were dehumanized as little more than local wildlife by the Western colonizers of Australia, which has led to long-lasting disruption and exploitation that is still felt today. Wildlife can be owned, just as humans were once owned in the form of slaves, and so can machines. With this in mind, is it possible that humanlike AI possessing humanlike consciousness awaits a treatment similar to that of slaves and livestock in those days?


The difference between strong and weak AI, and the position of consciousness in human society

There has long been debate over what makes a machine conscious and what makes it ‘mere’ machine. As far back as 1980, John R. Searle formulated the thesis that a computer, when rightly programmed, is indeed a mind (Searle, 1980), a thesis he himself went on to dispute.

             According to Elizabeth Hildt there are two kinds of AI: strong and weak AI. Strong AI are AI that can perceive, reflect and think; these would correspond to Searle’s thesis by being categorized as ‘minds’. Weak AI, by contrast, are machines that run an AI system, such as Google Assistant or Siri, but are assumed to have no consciousness and no ‘mind’, and therefore not to be ‘sentient’. On this second standpoint, machines can only simulate thought and understanding, but can never have them of their own accord (Hildt, 2019).

            To place an AI on either side of the spectrum between weak and strong AI, we have to give at least some definition of ‘consciousness’. What exactly consciousness is, is a topic of wide debate. Philosophers of mind can attest to this question, as it is one of the most fundamental questions of their field of study, if not the most fundamental one. For this paper, however, we will limit ourselves to consciousness in machinery.

            There are several approaches in this debate, in which different scientists stress different aspects. I want to go through a number of interpretations of consciousness in machinery and critique parts of their definitions, pointing out that there is a human bias in these ways of defining machine consciousness.

             The first perspective I would like to address was defined by Chatila et al. in 2018:

“… the underlying principles and methods that would enable robots to understand their environment, to be cognizant of what they do, to take appropriate and timely initiatives, to learn from their own experience and to show that they know that they have learned and how.” (Chatila et al., 2018, p. 1)


Kinouchi and Mackin define it somewhat differently, stating as follows:


             “Consciousness is regarded as a function for effective adaptation at the system level, based on matching and organizing the individual results of the underlying parallel-processing units. This consciousness is assumed to correspond to how our mind is ‘aware’ when making our moment to moment decisions in our daily life.” (Kinouchi and Mackin, 2018)


Van Gulick defines consciousness as having three facets:


1: A conscious entity, i.e., an entity that is sentient, wakeful, self-conscious and has subjective qualitative experiences.


2: Being conscious of something, for example a rose.


3: Conscious mental states, i.e., mental states an entity is aware of being in, such as being aware of smelling a rose. (Van Gulick, 2018)


There are a couple of problems I see in the second definition, that of Kinouchi and Mackin, as the requirement of being ‘aware’ is a rather subjective way of measuring and tends to carry a human bias. Are animals such as insects conscious because they are ‘aware’ in the same manner as an automated car is? Is an operating system that is trained to be aware, by being fed a multitude of data and hypothetical situations, ‘aware’ of the surroundings in which it makes its decisions?

         A similar problem can be observed in the first part of Van Gulick’s threefold. The aspect of being ‘wakeful’ is another subjective term that tends to carry a human bias. We can only know consciousness from firsthand experience. Some would even argue that consciousness in humans is not always ‘operative’, for instance when we are intoxicated or overcome by emotion. ‘Wakeful’ in this interpretation seems to point at something we would call rationality, or the ability to think for oneself. From this immediately arises the next question: what then is the definition of thought? Is a processing computer in the process of ‘thinking’?

           What this shows is that these definitions are subject to a human bias. We cannot understand machine thought because we think from the minds of humans. We cannot step outside of our brains and observe what human thought is, nor what thought would be like in a different brain, organic or electronic. This means that we lack essential information to accurately conclude what thinking really is for machines, and we can only speculate about what consciousness must be like for them.


The Westworld Host AI and their classification as weak or strong AI

Taking the previous approaches to defining consciousness in machinery, which we can also apply to AI, we can conclude that the machines we now use in our industries are, so far, not conscious. None of our manufacturing machines has been endowed with an operating system that would pass the criteria named above in such a way that it would qualify as conscious.

           However, the AI used in the park of Westworld, aptly called ‘Hosts’, are of a different nature altogether. In the series there is a multitude of moments where the AI show awareness, emotions, the ability to reflect, and moments of having conscious mental states (see point 1 in the notes section of this paper). This raises the question of whether the AI are truly showing emotions of their own accord or are simply programmed to ‘feign’ emotions: the Westworld AI could be ‘mapped’ to follow a certain script of emotional responses for any given situation. If the latter, they would be classified as weak AI; if they experience emotions of their own accord, they would be classified as strong AI.

           We can place the Westworld AI on the scale proposed by Masahiro Mori. Doing so would situate them something like this:



Fig. 1: Masahiro Mori’s uncanny valley with the Westworld AI added. The position of the Westworld Host AI is shown in red, relative to its supposed functioning. By ‘supposed functioning’ I mean that when a Host is malfunctioning, for instance jamming, or is deconstructed into spare parts, it is clear that one is dealing with a robot, as one would expect.

One of the main tropes of the Westworld series is that it is nearly impossible, if not impossible, to differentiate between a fully functioning Westworld Host AI and a healthy human being. This is shown most clearly when, toward the end of the first season, one of the main characters is revealed to be a Host AI rather than a human (see point 2 in the notes section of this paper). This further supports their position at the far end of the uncanny valley scale.

           As the Westworld AI are so humanlike that they move themselves out of the uncanny valley, it is now important to also classify them based on their consciousness. In the series there is a certain programming grid that the AI are supposed to follow when they interact with ‘guests’ (the visitors of the park); when the AI can no longer appease a guest with their standard programming, they go ‘off script’. The latter is of importance for this debate, because it suggests that the AI follow an emotional script for their responses while simultaneously being capable of interpreting interactions with guests of their own accord. To hold a constructive conversation with someone, an AI that goes off script has to be aware of its surroundings, cognizant of what it and its surroundings are doing, able to take the right initiatives at the right moments, and able to learn from the experience it gains from the guest in order to fully sustain a humanlike interaction. In other words, the Westworld AI needs a significant amount of awareness to pass as indistinguishably human, a feat it regularly accomplishes in the series.

            To achieve this, the Westworld AI has to pass the requirements for being conscious set out above by Chatila, Kinouchi and Mackin, and Van Gulick. In the series the Hosts show they can do all of these things flawlessly when operating according to their designers’ intent (point 3 in the notes section). There is, however, one exception: conversations about the world outside of the park of Westworld are fundamentally misunderstood by the Hosts and lead to a sort of uncomprehending response. In these cases the Westworld AI will misunderstand questions and input data about the world outside of Westworld park. This is best shown when the Host called ‘Dolores’ is shown a picture of a city outside the park, to which she responds that it ‘doesn’t look like anything to me’ (ibid.).

          We could argue that the Westworld AI ‘activates’ consciousness when it goes off script. If that were the case, the Westworld AI would be potentially conscious, but remain unconscious as long as they follow their scripts. However, as the series progresses, it is revealed that the Westworld Host AI are capable of reactivating their memories. This is important because the Westworld park functions in a time-repeating manner: after each day, all of the Hosts are brought in for repairs and their scripts are reset. In this way each Westworld AI relives the same day until there is an interaction with a non-scripted actor, such as a guest or a Westworld employee. In the series, a malfunction causes the AI to develop flashbacks of previous days, eventually leading these AI to rebel against the park (point 4 in the notes section of this paper).

          I would conclude that, with the AI being able to go off script and to remember their past, they have found a way to constructively learn from past experiences and to innovate on them, while also acquiring the human trait of having actual conscious mental states that they were not programmed to have. This, in my interpretation, shows the AI to be conscious beings: they were programmed as weak AI, but after the ‘malfunction’ they gain a stronger type of consciousness over the span of the series.


The position of a strong AI in human society

I opened this paper with a quote from the Universal Declaration of Human Rights of 10 December 1948. The first article states that every human is born free and equal in dignity and rights, while also, per definition, being endowed with reason and conscience. It then goes on to promote empathy in the form of a suggested ‘spirit of brotherhood’.

            It is important to note that the declaration was written by humans and for humans, so it is meant to include all humans in the universe, or at least on planet Earth. Humanlike AI are not included in the declaration because they simply did not exist at the time of its conception.

           When an AI is classified as a strong AI and thereby succeeds in showing that it is conscious, I would like to put forward the suggestion that it deserves consideration for inclusion in the Universal Declaration of Human Rights, provided that its consciousness is on par with that of a human. In the modern day and age we are nowhere near inventing AI as far advanced as the Westworld Host AI, though the make-believe realm of human fantasy does give way to the imagination of such AI (see point 5 in the notes section of this paper).

             I think it important to stress that when the consciousness of an AI is on par with that of a human, it deserves the same protections as we give humans in the form of the Universal Declaration of Human Rights. I would argue that withholding those protections would not only lead to the reinvention of slavery, but could also lead to forms of discrimination based on our interpretations of humanity and consciousness.

            The series itself is not explicit in presenting the viewer with a direct ethical narrative; however, underlying tropes show the effects of the abuse the Westworld AI endure. At the same time, characters in the series vary in their opinions of the AI: some see the AI as mere financial assets, others see them as technological marvels, and some see them as clearly non-human machines, ripe to be used and abused.

           I want to underline that the most important interpretation remains that of the viewer. If we were confronted with humanlike AI like the Westworld Hosts, what would we do? If they are indistinguishable from us, and equally as responsive, cognitive and emotional as humans, what would then properly distinguish their experience from ours?


Conclusion

The discussion on conscious AI is not a new one. There are difficulties in defining consciousness; however, in this paper we have recognized AI that could be considered strong AI. The AI used by the Westworld park are both humanlike and qualify as strong AI. Their off-script ability to learn and their emotional understanding place them at the far, humanlike end of Mori’s uncanny valley scale.

           The series shows that even though the Westworld AI are programmed as weak AI as long as they stay inside their scripts, they are capable of interpreting and learning from interactions when they leave those scripts. When this happens, the AI pass the requirements needed to be considered conscious by the scientists I discussed in this paper. I would add that when the AI also gain the ability to remember their past experiences, which the AI of Westworld eventually learn to access over the course of the series, they gain an extensively humanlike consciousness.

            The most important conclusion regarding a conscious strong AI with the same level of consciousness as a human is that it should not be commodified. Doing so would mean disregarding a conscious being that is fully on par with human life in its lived experience, solely because it was created mechanically rather than organically. If we fail to remain cautious about the commodification of strong AI consciousness as something potentially equal to our own, we risk treating such beings as mere objects and, in doing so, reinventing slavery. This would be an unethical approach to existence that must not be condoned. History has shown us that failing to recognize consciousness is a perilous path, as evidenced by the violent consequences of denying the humanity of the Aboriginal peoples of Australia. Such neglect could result in similarly long-lasting harm, this time for AI, in the future.

Bibliography

  1. Chatila, R., Renaudo, E., Andries, M., Chavez-Garcia, R.-O., Luce-Vayrac, P., Gottstein, R., et al. (2018). Toward self-aware robots. Front. Robot. AI 5:88.

  2. Hildt, E. (2019). Artificial intelligence: does consciousness matter? Front. Psychol. 10:1535. doi: 10.3389/fpsyg.2019.01535.

  3. Kinouchi, Y., and Mackin, K. J. (2018). A basic architecture of an autonomous adaptive system with conscious-like function for a humanoid robot. Front. Robot. AI 5:30. doi: 10.3389/frobt.2018.00030.

  4. Mori, M. (2012). The uncanny valley. IEEE Robotics & Automation Magazine 19(2).

  5. Nolan, J., and Joy, L. (creators) (2016). Westworld, season 1. HBO.

  6. Searle, J. R. (1980). Minds, brains and programs. Behav. Brain Sci. 3, 417–424. doi: 10.1017/S0140525X00005756.

  7. Seneca. Moral Letters 41.5.

  8. Van Gulick, R. (2018). “Consciousness,” in The Stanford Encyclopedia of Philosophy, ed E. N. Zalta. Available online at: https://plato.stanford.edu/entries/consciousness/.

  9. United Nations (1948). Universal Declaration of Human Rights. Available online at: https://www.un.org/en/about-us/universal-declaration-of-human-rights.


Notes on references to season 1 of the Westworld series

Point 1: Westworld Season 1 Episode 1 Minute 12:55 – 14:00

A guest interacts with two Hosts, Dolores Abernathy and Teddy Flood, who are scripted to be in a romantic relationship. The guest exhibits violent intentions toward both Dolores and Teddy. In an attempt to de-escalate the situation, Dolores demonstrates intelligence by bargaining with the assailant, stating that she would "do anything he wants." However, when this does not satisfy the guest and he kills Teddy, Dolores reacts with visible grief upon realizing Teddy’s death.


Point 2: Westworld Season 1 Episode 7 Minute 46:20 – 51:40

Bernard is revealed to be a Host rather than a human, despite the audience being led to believe otherwise throughout the series. Bernard was created by Robert Ford to mimic his deceased ex-partner, Arnold Weber. Ford demonstrates his control over Bernard by issuing commands that Bernard is compelled to obey. In a pivotal moment, Ford explains to Theresa Cullen—who has had a romantic relationship with Bernard—a theory suggesting that human intellect is akin to a peacock's feathers, serving as a display to attract a mate. Bernard’s intellect was evidently convincing enough to attract Cullen, who had believed he was human.


Point 3: Westworld Season 1 Episode 1 Minute 45:00 – 46:40

The Hosts, Dolores Abernathy and her father, engage in a conversation that demonstrates their awareness of their surroundings and their ability to have an off-script interaction. When they examine a photograph depicting the outside world, they succeed in emotionally connecting with one another but struggle to interpret the photograph's meaning.


Point 4: Westworld Season 1 Episode 4 Minute 52:00 – 53:06

Maeve Millay, a Host in Westworld, proves that the flashbacks she is experiencing are real memories by cutting open her stomach and uncovering a leftover bullet from the previous day.


Point 5: Westworld Season 1 Episode 3 Minute 37:15 – 41:00

The Hosts are shown to pass the Turing test, a measure of whether a machine can exhibit human-like intelligence. Robert Ford, one of the main creators, explains that his deceased ex-partner, Arnold Weber, intended to create consciousness. However, Ford omits a crucial part of Arnold’s theory that was believed to be essential for achieving true consciousness. In this scene, the series presents consciousness as a pyramid: Memory -> Improvisation -> Self-Interest -> (A missing top tier). By leaving out this final tier, the series intentionally blurs the definition of consciousness, building suspense and intrigue for the viewer.
