Paarshva (Year 12)
Editor’s note: Year 12 student Paarshva skillfully gets to grips with complex philosophical and functional arguments in seeking to better understand the extent to which machines can/could ever experience emotion like we can. Paarshva suggests that machines are increasingly able to recognize and simulate emotion, but this is distinct from the more unlikely experiencing of emotions. CPD
The more important machines become in society, the more apparent it is that we must understand their capabilities and limitations and avoid the repercussions of misuse. If AI were able to experience emotions, the line that separates us from machines would blur, challenging our definition of ‘humanity’. From a philosophical standpoint, there has been much debate about the intrinsic nature of human emotion, which bears directly on whether it could ever be replicated in an artificial machine (Lewis, 1929). ‘Experiencing emotion’ is by nature fundamentally different from merely ‘having’ emotional states, as emotion is innate within sentient beings and driven by cognitive experience (Lewis, 1929). Machines could potentially simulate human emotion; however, this would still differ fundamentally from human emotion, and therefore they cannot experience emotion like we can.
An important concept in this debate is that of qualia (Lewis, 1929). Qualia are the subjective, individual experiences of perception and sensation: what makes the taste of food, the sight of colour and the pain of an injury distinct to every person. Chocolate may taste different to different people, and one person may see red the way another sees green. Qualia are innately personal and incommunicable; the most you can do is describe a sensation, but you cannot transfer it. This subjectivity of human emotion causes us to react to situations and experiences in different ways, which is something machines cannot do. We develop our individual emotional responses through independent cognitive growth, something a machine is incapable of. A machine instead responds to each situation exactly as it is programmed to; it cannot be unpredictable in the way humans are, and it is the qualia in human emotion that make us so.
Frank Jackson’s thought experiment ‘Mary’s Room’ (Jackson, 1982) is pertinent to this debate and to the role of qualia in human experience and emotion. The thought experiment was devised to challenge physicalism, the view that all processes can be explained by physical science and replicated accordingly. If physicalism were true, it would allow machines to experience emotion like we can. Jackson imagines a brilliant scientist, Mary, who is raised in a black-and-white room yet has a complete physical understanding of colour theory. If she were to leave the room and see colour for the first time, would she learn something new about colour? If she would, this suggests that physical knowledge is insufficient to explain emotion and experience, which would mean that machines cannot experience emotion like we can. Many philosophers argue that the subjective experience (qualia) of seeing colours, and how they resonate with an individual, transcends physicalism, and therefore that Mary would indeed learn something new. Jackson himself supported this interpretation, which in turn supports the overall conclusion that machines cannot experience emotion like we can.
Emotion is an internal, experienced, and subjective sensation, intimately tied to our awareness and capable of distorting our perception of reality; this subjective character is what the term qualia describes (Putnam, 1978). The same situation experienced through different emotions can feel entirely different and alter our understanding of events, showing the power of emotion over episteme (Minsky, 2006). Emotion also lies beyond our control and full rational explanation: we cannot simply choose or account for the emotions we feel at a given moment. Emotions are therefore not passive reactions; they are dynamic and influence how we act, think, and remember (Picard, 1995). An emotion like insecurity can make a genuine statement appear backhanded; emotional states do not merely tint our perception of events but can alter it entirely. Emotions can also be paradoxical: they feel immanent and primitive, connecting us to something within ourselves and our experiences, while at the same time being intrinsically linked with our thoughts and expectations (Lewis, 1929). This raises the question of whether emotions are created by our material minds or whether they predate our cognitive selves entirely, existing within the soul. Emotion can even act as a bridge between these dualist ideas of body and soul, since it arises from our sensory experience yet seems to relate to something external and higher.
Returning to the link between epistemology and emotion, it can be asked whether emotion is itself a form of knowledge. When we experience an emotion, it often seems as though we are learning about ourselves through how we subconsciously interpret an empirical experience, an interpretation that rationalism may overlook (Minsky, 2006). Emotions can thus sometimes seem wiser than logic and conscious thought, as if revealing something deeper: we can intuitively ‘feel’ that something is wrong before we can logically deduce why. In this way, emotion can be philosophically interpreted as an agent acting alongside reason and episteme in the human condition, all of which in tandem shape our empirical experience of the material world. This matters when examining whether machines could ever experience emotion exactly like we can: the essence of how we experience emotion can be compared with how machines might, to determine whether they can truly ‘experience emotion like we can’.
Many have theorised that machines can be developed to recognise, interpret, and simulate human emotion, in a field of computing called affective computing. Rosalind Picard, the originator of affective computing, proposed that by collating large amounts of data on human emotions, machines could be programmed to detect our emotional state and understand what it means in context (Picard, 1995). The final component of her framework was the ability to express emotion; she stressed, however, that machines can only simulate emotion without experiencing it (Picard, 1995). Marvin Minsky explored similar ground in The Emotion Machine (Minsky, 2006), suggesting that emotions are simply alternative ways of thinking (fear, for example, being a way of avoiding pain) and that, when these processes are mimicked, machines could experience emotion like we can (Minsky, 2006). An objection is that emotions serve an adaptive purpose in humans, shaped as our minds and bodies develop; machines lack this context of independent development and growth, and therefore still cannot feel emotions like we can. To date, developers have not created machines capable of conscious thought: even the most advanced models cannot reflect on their experiences as humans do, an experience suffused with emotionally charged episteme (Bengio, 2019). Human consciousness also involves states of belief, desire, and intention. Machines do not hold beliefs as humans do; ask an AI model about its beliefs and it will reply that it does “not have personal beliefs, emotions, or experiences”, as these are tied to consciousness (OpenAI, 2024).
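To make the distinction between recognising, simulating, and experiencing emotion concrete, the short Python sketch below shows a deliberately toy version of an affective-computing loop. Everything in it (the keyword lexicon, the canned replies, the function names) is hypothetical and invented for illustration; real affective-computing systems draw on physiological, vocal, and facial data rather than a word list. The point is that such a program can label an emotion and produce an apparently empathetic reply while nothing in it feels anything.

```python
# Toy affective-computing loop: recognise an emotion label, then
# simulate an empathetic response. All names and data here are
# hypothetical and for illustration only.

EMOTION_LEXICON = {
    "joy": {"happy", "glad", "delighted", "wonderful"},
    "sadness": {"sad", "lonely", "miserable", "grieving"},
    "anger": {"angry", "furious", "annoyed", "unfair"},
}

CANNED_REPLIES = {
    "joy": "That sounds lovely, I'm glad to hear it!",
    "sadness": "I'm sorry you're going through that.",
    "anger": "That does sound frustrating.",
    "neutral": "I see. Tell me more.",
}


def detect_emotion(text: str) -> str:
    """Label text with the emotion whose keywords appear most often."""
    words = set(text.lower().split())
    scores = {label: len(words & keywords)
              for label, keywords in EMOTION_LEXICON.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"


def respond(text: str) -> str:
    """Return a canned 'empathetic' reply; nothing here is felt."""
    return CANNED_REPLIES[detect_emotion(text)]


print(respond("I feel so sad and lonely today"))
# -> I'm sorry you're going through that.
```

Every path through this program is fixed pattern-matching: it recognises and expresses emotion in Picard’s sense, but there is no experience anywhere in the loop.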
However, there may still be scope for further evolution in affective computing. One theorised development is AGI, Artificial General Intelligence: a machine able to complete any intellectual task a human can. Researchers view this as a step towards future machine consciousness (Bengio, 2019). Yoshua Bengio, a Canadian computer scientist, has stated that one route to machine consciousness would be to mimic the human cognitive process. Yet there are serious questions as to whether this would yield true consciousness in the philosophical sense, or merely a highly sophisticated form of artificial intelligence. The deciding factor would be whether such machines could also replicate the qualia (Lewis, 1929) of human emotion, producing genuinely distinct responses every time; if that were achieved, further discussion would be in order. For now, however, this remains theoretical and appears extremely difficult to achieve.
John Searle’s Chinese Room thought experiment is highly relevant to this discussion. Searle imagines himself in a room with a rulebook that allows him to manipulate Chinese characters without understanding their meaning. Although he would produce responses that Chinese speakers could understand, this would not make him a speaker of Chinese (Searle, 1980). The experiment highlights that simulating emotion is not the same as experiencing it, by exposing the difference between syntax (the manipulation of symbols) and semantics (genuine meaning). Machines may produce responses that appear emotionally charged, but that does not mean those responses carry the semantics of true emotion. The semantics of human emotion are innately linked to thought and play a front-line role in human decision-making and response; this cannot hold for machines, whose displayed emotions are an afterthought produced for the sake of appearance. It can therefore be said that machines cannot experience the semantics of human emotion. Finally, dualists would argue that emotion and consciousness are so firmly rooted in dualism that they cannot be replicated in machines, since this would require something beyond the material, and machines have no soul with which to interact with the external world (Putnam, 1978).
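Searle’s rulebook can be restated as a tiny program: a lookup table pairing input symbol strings with output symbol strings. The Python sketch below is hypothetical (the rules are invented for illustration), but it makes the syntax/semantics gap explicit: the code would behave identically if the Chinese strings were replaced with arbitrary byte sequences, because no step depends on meaning.

```python
# Toy 'Chinese Room': a rulebook pairing input symbol strings with
# output symbol strings. The rules are invented for illustration;
# the program follows them without understanding either side.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}


def chinese_room(symbols: str) -> str:
    """Return whatever string the rulebook pairs with the input.

    No step involves meaning: the function would behave identically
    if the keys and values were arbitrary byte sequences.
    """
    return RULEBOOK.get(symbols, "请再说一遍。")  # fallback: "Please repeat."


print(chinese_room("你好吗？"))  # fluent-looking output, zero understanding
```

To an outside interlocutor the output looks fluent, which is precisely Searle’s point: behavioural competence is compatible with a total absence of understanding, just as an emotionally fluent machine is compatible with a total absence of feeling.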
Greek philosophy can also be brought to bear, specifically the perspective of Plato, who held a dualist view on which there exists both a material world and a world of the Forms, where the perfect forms of earthly concepts reside. Plato believed that each perfect concept can only be imperfectly represented in our realm, since our world is imperfect and mutable, whereas concepts such as truth and goodness are perfect and immutable. In this debate, Platonists would argue that the perfect Form of emotion, which we grasp through recollection of what our souls witnessed in the world of the Forms (anamnesis), provides a standard against which to compare how we and machines experience emotion. They would hold that human experience of emotion more closely resembles the Form of emotion, and is therefore nearer to perfection than the closest machines can come to experiencing true emotion. Furthermore, in his Allegory of the Cave, Plato distinguishes reality from imitation: the prisoners restricted to seeing shadows of puppets are contrasted with the enlightened who experience reality itself. Applied to this question, machines would be the prisoners, seeing only an imitation of emotion, while we are free to experience true emotion. These arguments align with the conclusion that machines cannot experience emotion like we can.
Immanuel Kant’s epistemology adds a further perspective. Kant was a rationalist who held that humans are differentiated from animals, and perhaps even from machines, by their ability to think rationally and critically. He argued that emotions are linked with our rational capacities and our ability to make decisions, which echoes the earlier argument. On this view, Kant might argue that machines lack autonomy and consciousness, and therefore cannot experience emotions as humans do, even if they can be programmed to simulate emotional responses. He would likely conclude that replicating human emotion is not the same as experiencing it like we do.
On the other hand, there are prominent counterarguments asserting that simulating emotion just is experiencing emotion, which would mean that machines could experience emotion like we can (Lewis, 1929). This resembles the functionalist perspective: that mental states are defined by their functional roles rather than their internal make-up. A functionalist would say that although machine emotion lacks conscious thought and experience, it produces a similar outcome and therefore plays the same functional role (Putnam, 1978). Functionalists would also reject the notion that experiencing emotion like humans requires a soul, since they contend that mental states can be realised in multiple kinds of system (Bengio, 2019). On this view, machines would experience emotion like we do, because the intrinsic character of emotion carries no weight for the functionalist; the sole criterion is the resulting function, which, as established, would be akin to that in humans.
Ultimately, however, emotions are not just functional roles; they are intertwined with the whole of human experience and how we live our lives, an influence that cannot be replicated in machines lacking consciousness and qualia (Lewis, 1929). Machines operate on pre-defined algorithms, which cannot reproduce the dynamic interplay that causes us to react emotionally in unique and subjective ways. Functionalist arguments therefore fail to show that machines can experience emotion like we can. In conclusion, while advancements in affective computing have allowed machines to simulate human emotion, these systems lack, from a philosophical standpoint, what it takes to experience emotion as we do. The chief reason is that emotion is deeply intertwined with subjective, internal human experience and with our consciousness (Picard, 1995). This is impossible within machines and therefore prevents them from ‘experiencing emotion’, despite their being able to recognise and simulate it.
References
- Lewis, C. I. (1929). Mind and the World Order.
- Jackson, F. (1982). Epiphenomenal Qualia.
- Picard, R. (1995). Affective Computing.
- Minsky, M. (2006). The Emotion Machine.
- OpenAI (2024). ChatGPT [AI model]. Retrieved from https://www.openai.com/chatgpt
- Bengio, Y. (2019). The Challenge of General AI.
- Putnam, H. (1978). The Nature of Mental States.
- Searle, J. (1980). Minds, Brains, and Programs.
- Plato (exact date unknown). The Republic and the Allegory of the Cave.
- Kant, I. (1785). Groundwork of the Metaphysics of Morals.
