Rahul – Year 12 Student
Editor’s note: Year 12 student Rahul writes here in response to the fascinating philosophy question set for the New College of the Humanities essay competition, 2021. ‘Should robots have rights?’ – what do you think? CPD
“Man is a robot with defects.” – Emil Cioran, The Trouble With Being Born [1]
From behind pristine glass doors in Silicon Valley to the depths of agricultural Japan [2], the development of robots in recent decades has had an extensive effect on everybody’s standard of living, for better or for worse. Their existence has become so embedded and interwoven with ours that a relationship like never before has been established. In today’s day and age, man and robot are dependent on each other – and once you understand this reality, you will also realise that the question above is of the utmost significance. At the rate technology is developing, this inevitable question may evoke fear or consternation in some people, but answering it is paramount to preventing prospective moral and legal grey areas regarding property and the essence of humanity.
In this essay, I will tackle the foundations of this question, scrutinising it until we are satisfied that we know what its crucial words truly mean. I will then evaluate the arguments on either side and eventually reach the justified conclusion that robots should, in fact, not have rights.
The definition of a ‘robot’ fluctuates among roboticists, but a recurring element is ‘autonomy’ or ‘semi-autonomy’ – and it is here that the distinction between a machine and a robot is made. Essentially, robots are able to operate independently, sensing their environment and carrying out corresponding functions without explicit human control, whereas machines require some kind of human operation. ‘Semi-autonomous’ means they have a degree of, but not complete, self-government. This is a pivotal distinction to make because it whittles down our preconceived ideas of what robots may be. Many question whether robots have full autonomy at all – surely, if they need to be plugged in or programmed, they aren’t fully autonomous? Robots themselves have a plethora of practical uses, ranging from medicine, like the Da Vinci surgical robot [7], to the military and agriculture. It is also worth pointing out that not all robots are semi-autonomous: telerobots are operated wirelessly from a distance by humans, eradicating any modicum of their autonomy, yet they are still classed as robots. I use this example to demonstrate how volatile a definition can be, how unfeasible it is to confine these complex systems within a single term, and the irony in this too. These machines are the epitome of ‘man-made’, yet we still struggle to provide a universal definition of them.
The three constituent features of a robot that I think are most consistent are semi-autonomy, their status as products of complex man-made systems, and their ability to carry out work and jobs efficiently.
The Oxford Dictionary defines a right as ‘a moral or legal entitlement to have or do something’, but this insufficient definition rests on a maelstrom of ethical, legal and even religious disputes and ambiguities. One key definitional issue I would like to highlight is the distinction between natural rights (universal rights that are intrinsic to human life and are derived from ‘human nature or the edicts of God’ [3]) and legal rights (rights based on society’s customs or statutes). Whether robots have natural rights is effectively what this debate truly explores, because legal rights are a choice we as a society must make after a practical and ethical evaluation, whereas natural rights are intrinsic, and it is often debated who should possess them. It is also necessary to note that, whilst some may believe robots already hold intrinsic rights, such as copyright for software or trade rights for machines, these are statutory rights of the creator, not of the object [4]. The technology itself has had no rights bestowed upon it by God. A common assumption is that, for something to have rights, it must also have responsibilities and liabilities – things which robots cannot possess if they are only semi-autonomous. Furthermore, the definition does not confine a ‘right’ to animate beings, so we can extrapolate that rights can apply to the deceased and even to objects. A court in northern India gave the river Ganges the ‘status of a living institution’, meaning it would have the corresponding rights, duties and liberties of a human being (so B polluting the Ganges would be legally equivalent to B physically harming C) [6]. To the surprise of many, and as this example illustrates, human rights are merely one branch of legal rights in general.
The real question is this: whilst human rights are directly applicable to humans, can the other types of legal rights, such as contractual, equality and economic rights, apply solely to inanimate objects, or must they always trace back to some original individual?
In an article, Andrew Sherman predicts that ‘by the year 2025, robots…are predicted to perform half of all productive functions in the workplace’ [8]. This is an interesting prediction because, whilst robotic rights may seem absurd, it appears to highlight their benefits. For instance, if a manager fails to provide robots with a safe, regulated working environment, there is a great prospect of their being damaged. Whilst this may not seem a pressing issue (we can always purchase more robots), from an environmental perspective we would be wasting masses of resources and energy in manufacturing replacements, and fuelling the ongoing war against landfill and waste. Moreover, from a moral perspective, faulty products could pose a threat to users and have serious injurious effects on them. So it ultimately seems sensible to concede that robots deserve ‘workers’ rights’, especially since they will contribute to half of all productive functions in the workplace. However, delving deeper, we can see that the situation is not as elementary as the arguments for robotic rights make it seem. It is the intrinsic rights of nature we seek to protect with the environmental argument, and the individual human rights of the consumer we seek to protect with the moral argument. The disregard of working environments is therefore an offence subject to sanction, but only to protect the rights of these larger bodies, which we have already established have natural rights. Referring back to our definition of ‘rights’, it does not follow that the robots themselves deserve rights, because they are merely instrumental in satisfying the entitlements of others. They have no isolated moral or legal entitlements for the simple reason that they have no emotions and lack desires.
Assigning rights to robots may even seem immoral to some. The very act of giving them rights will have a multiplier effect that will reverberate through our justice system. More rights lead to more laws, which lead to more court cases. This places financial stress on the legal system, meaning we will either have to divert funds from other parts of our economy, such as healthcare and education, to the courts, or the overall quality of justice will diminish over time. Adopting an act utilitarian approach (which aims to maximise overall ‘goodness’ through actions) demonstrates the extent of the immorality in three ways:
- We derive no benefit from robots having isolated, individual rights;
- Robots themselves don’t count towards the ‘Hedonic Calculus’ (which is used to measure resultant ‘goodness’) due to their lack of sentience, so they also derive no benefit from robotic rights;
- Humans will collectively suffer from a bedlam of a justice system due to the immense mass of new law we would be injecting into it to protect the rights of robots.
Therefore an act utilitarian, like Jeremy Bentham, would argue that not only is it impractical to give robots rights, but it is also immoral, as it spirals into an overall decrease in utility for the community through injustice or the deprivation of healthcare, education and other vital parts of society.
It seems apparent that the crux of this argument boils down to two fundamental points. Firstly, robots do not have natural, intrinsic rights. Abrahamic religions would contend that this is because robots have no divine causation: they, like basketballs, are man-made objects in their truest essence. Evolutionists would proclaim that, whilst humans possess inherited DNA that theoretically stretches back to the earliest organisms, robots have no natural, inbuilt claim to existence. Secondly, robots cannot have individual legal rights. They have no capacity for emotions or desires and therefore have nothing to gain or lose in any given situation. In addition, as many would argue, they are only autonomous to a degree, hence it follows that robots have no entitlements or responsibilities. A similar example is the environment: we have laws to protect it, but not because of any intrinsic, isolated rights of the environment itself – rather because of our natural rights as humans and animals, which, by protecting the environment, we indirectly protect too.
After deconstructing the question to its essence and appraising various outlooks on either side of the argument, I firmly believe that not only do robots not qualify for rights, but it would also be immoral to give rights to them, as doing so would infringe on some of our intrinsic human rights too. However, as science advances and technology improves, we may start to replicate entire genomes, ‘playing God’ and beginning to construct ‘human beings’. It is here that the fundamental distinctions I have established between humans and robots begin to melt away. Then we must be prepared to navigate roaring seas of morality, ethics and philosophy in search of a new answer to this question.