Michal – Year 12 Student
Editor’s Note: Year 12 student Michal writes here for the GSAL Science Magazine on the fascinating topic of robot rights. As Michal notes, “[a]rtificial intelligence raises serious questions about philosophical boundaries. While we may ask if sentient robots are conscious or deserving of rights, it forces us to pose basic questions like ‘What makes us human?’ and ‘What makes us deserving of rights?’. Regardless of what we think, the question may need to be resolved in the near future. What are we going to do if robots start demanding their own rights?” CPD
Imagine that you live in a world where your television anticipates what kind of shows you want to see. During the day, it scans the internet for exciting upcoming shows and informs you of them. Maybe it asks about your day and wants to chat about how you feel about the show you just finished watching. At what level would it become a person? At what point would you ask yourself whether your television has feelings? If it did, would unplugging it be murder? Would you still own it? Will we someday be forced to give our machines rights?
The Status of AI in our Society
AI is already all around you: it makes sure supermarket shelves are stocked with the ideal amount of produce, it serves you just the right internet ad, and it writes many of the articles you read on a daily basis. Currently, we look at virtual assistants like Alexa and laugh at their primitive, simulated emotions, but it is likely that, in the near future, you will be dealing with things that make it hard to distinguish between real and simulated humanity.
Are there any machines in existence that deserve rights?
Most likely, not yet. But if they do arrive, we are not prepared for it. Much of the philosophy of rights is unequipped to deal with the case of artificial intelligence. Most claims for rights, whether for humans or animals, are centred on the question of consciousness. The theory goes like this: a Homo sapiens is aware of their surroundings and of their own existence, so they deserve basic rights. That is what our judicial systems revolve around. Unfortunately, no one knows what consciousness is: some say it is something immaterial, while others say it is a state of matter, like gas or liquid.
Regardless of the precise definition, we have an intuitive knowledge of consciousness because we experience it. We are aware of ourselves and our surroundings and know what unconsciousness feels like.
Can we simulate consciousness?
Some neuroscientists believe that any sufficiently advanced system can generate consciousness. So if your television’s hardware were powerful enough, it might become self-aware. If it did, would it deserve rights? Well, not so fast. Would what we define as ‘rights’ even make sense to it? Consciousness entitles beings to rights because it gives them the ability to suffer: not only the ability to feel pain, but to be aware of it. Robots don’t suffer, and they probably won’t unless we program them to. Without pain or pleasure, there is no preference, and rights are meaningless.

Our human rights are deeply tied to our own programming. For example, we dislike pain because our brains evolved to keep us alive: to stop us from touching a hot fire or to make us run away from predators. So we came up with rights that protect us from infringements that cause pain. Even more abstract rights, like freedom, are rooted in the way our brains are wired to detect what is fair and unfair. Would a television that is unable to move mind being locked in a cage? Would it mind being dismantled if it had no fear of death? Would it mind being insulted if it had no need for self-esteem? But what if we programmed a robot to feel pain and emotions? To prefer justice over injustice, pleasure over pain, and to be aware of it? Would that make it sufficiently human?
The future of AI
Many technologists believe that an explosion of technology will occur when artificial intelligences can learn and create their own artificial intelligences, even smarter than themselves. At that point, the question of how robots are programmed will be largely out of our control. What if an artificial intelligence found it necessary to program in the ability to feel pain, just as evolutionary biology found it necessary in most living things? Would such robots deserve rights?
“I think; therefore I am.”
Maybe we should be less worried about the risks that super-intelligent robots pose to us and more worried about the danger we pose to them. Our whole human identity is based on the idea of human exceptionalism: that we are special, unique snowflakes, entitled to dominate the natural world. Even over the last two thousand years, this idea of dominance has been widely taught and treated as normal; Genesis, one of the books of the Bible, is a well-known example. Humans have a history of denying that other beings are capable of suffering as they do. In the midst of the scientific revolution, René Descartes argued that animals were mere automata, robots if you will. On that view, injuring a rabbit was no more morally troubling than punching a stuffed animal. Many of the greatest crimes against humanity were justified by their perpetrators on the grounds that the victims were more animal than civilised human; the most infamous example is the treatment of Jews during the Second World War.

Even more problematic is that we have an economic interest in denying robot rights. If we could coerce sentient AI, possibly through programmed torture, into doing as we please, the economic potential would be unlimited. We have done it before, after all: slavery was common practice for thousands of years. Violence has been used before to force humans into working, and we never had trouble finding ideological justifications. Slave owners argued that slavery benefited the slaves: it put a roof over their heads and made them civilised. Men who opposed women’s suffrage argued that it was in women’s own interest to leave the hard decisions to men. Farmers argue that looking after animals and feeding them justifies their early death for our dietary preferences. If robots become sentient, there will be no shortage of arguments from those who say that they should remain without rights, especially from those who stand to profit from it.
AI will force us to decide!
Artificial intelligence raises serious questions about philosophical boundaries. While we may ask if sentient robots are conscious or deserving of rights, it forces us to pose basic questions like ‘What makes us human?’ and ‘What makes us deserving of rights?’. Regardless of what we think, the question may need to be resolved in the near future. What are we going to do if robots start demanding their own rights? The fantasies of Detroit: Become Human could become our reality.