[Column] 'Science Leaps: Law Stays Put'. By Justice Anand Venkatesh

Justice Anand Venkatesh
9 April 2020 7:15 AM GMT

This article is an attempt to explore how the burgeoning sophistication of Artificial Intelligence (AI) and robots, and their widespread deployment everywhere from homes and hospitals to public spaces and the battlefield, necessitates a rethink of a wide variety of philosophical and public-policy issues. Their uneasy interaction with existing legal regimes impels a revamp of our laws and policies. The situation is analogous to the one policymakers and lawmakers faced when the Internet virtually took over the whole world in the 1990s.

For the purpose of this article, a robot must be understood as a constructed system that displays both physical and mental agency but is not alive in the biological sense. The most common robot in the world at present is the iRobot Roomba, a small robot capable of autonomously vacuum-cleaning your house. It is fully autonomous in the sense that it needs no human assistance, making rational decisions as it scoots around the floor. In the military, ground-based robots such as the PackBot (iRobot) and the TALON (Foster-Miller) are becoming ubiquitous. These systems can replace human soldiers in highly risky operations such as disabling explosives, assaulting buildings and counter-insurgency operations. Autonomous warehouse robots have been put into operation which appear to have mental agency of their own, as they avoid colliding with each other and reconfigure the storage locations of items based on customer demand. Google has a fleet of self-driving cars, prototypes of which have already hit the roads in the United States of America. Robots are available as therapeutic aides and are being used to assist individuals with severe motor disabilities in their homes. These are mere illustrations; more and more robots will enter our daily lives in the coming decade. This surge of robots into the functioning of our day-to-day lives is bound to result in a plethora of legislative challenges.

As robots operating with sophisticated AI become more and more multipurpose, it will be trickier to imagine, a priori, how and where they may be used. In the extreme case (hopefully far in the future) of a robot fully capable of doing everything a human being can, there are few practical boundaries on what the robot cannot do. How does one legislate to legally rein in such a system? As robots become more autonomous, the question of where liability rests when something goes wrong will assume relevance. This is a labyrinthine issue. Is it the manufacturer, the programmer or the user (who might have given an inappropriate or misleading instruction) severally, or a combination of all three jointly? The sheer variety of applications and tasks that robots can execute will thus place enormous pressure on the existing legal system in a wide range of substantive areas, including tort, contract, consumer protection, privacy and penal law. The Internet, which emerged just 30 years ago, has virtually glued every human being into cyberspace, and it continues to throw up new challenges for law enforcement. We never imagined that development would happen at this startling pace. By the time a new law or an enforcement mechanism is brought into effect, there is further development, which makes it a struggle to keep pace with the challenges posed by cyberspace. Therefore, it is crucial that we do not remain in a fool's paradise, thinking that robots will not become more autonomous and enter the real world, which until now was the domain of human beings.

Given the benefits that robots might someday confer on humanity, should we, or can we, remain in control once they emerge superior to us in certain abilities? 'Jeopardy!' is a classic game show with a twist: the answers are given first and the contestants supply the questions. The two all-time champions of the show, Ken Jennings and Brad Rutter, were pitted against IBM's Watson, a deep question-answering supercomputer. Watson won hands down. With this IBM system, the human monopoly over natural language was obliterated, rendering Watson the world's go-to expert at Jeopardy!. Watson's success raises questions about the role humans will occupy once machines can perform a multitude of tasks traditionally delegated to human experts, and not just perform them but perform them more efficiently and effectively than humans.

Today, scientists can come up with algorithms that enable AI to understand facial expressions and act accordingly. What happens if a driverless car, which is claimed to be safer than a human-driven car, causes an accident and kills or grievously hurts someone? What happens if an autonomous robot misreads a facial expression, attacks a human being and causes death or injury? What happens if a malfunctioning AI mishandles a patient suffering from a disability while assisting him or her during treatment? What happens if robots are programmed to carry out terrorist activities in a building or locality? Who should be made liable? These are looming and persistent questions for which there exist no satisfactory answers.

There are other facets to this. Sex-bots are being created using a combination of existing AI technology, sensory perception capabilities, synthetic physiological responses and affective computing. Proponents of sex-bots predict that these robots will facilitate sexual interaction and provide companionship for human users. The idea of fabricating a woman for a man's purposes can be traced back to pre-biblical myths. This historically regressive and oppressive belief that women are created to serve men is brought to life in the designing and programming of female-like robots. Anthropomorphised robots that can interact with humans and learn from their environment have already started hitting the market. Aiko is a female robot built out of silicone, with an appearance and texture similar to human skin. Actroid-F technology has taken this further, with improved facial movements that allow the robot to detect and imitate human expressions. Sex-bots are physical and interactive manifestations of a woman programmed into submission. They do not have the capacity to decline, criticize or become dissatisfied with the user, unless they are programmed to do so. Documentaries like "The Mechanical Bride" and "My Sex Robot" reflect the deleterious effects of this invention on the attitude of men towards women. The use of sex-bots, and the potential creation of an industry that commoditizes the circumvention of female consent, may devalue female personhood, encourage misogynistic attitudes towards women and distort values about the role of women in society. Sex-bots that appear stereotypically female and act as sexual slaves to their owners send a very damaging message about womanhood. Consent strengthens personal autonomy, which is an integral facet of the right to life; it is an essential element of contract law, medical treatment and sexual interaction. Preserving human autonomy by ensuring the presence of consent is especially important in situations where a lack of consent could negatively affect an individual's physical and psychological integrity. How are we going to deal with this dangerous situation in the near future?

When it comes to liability, and the question of who is to be held liable for the acts of AI and robots, there are several ifs and buts. The legal liability of AI systems depends on at least three factors:

  1. Whether AI is a product or a service. This is ill-defined in law, and different commentators offer different views.
  2. If a criminal offence is being considered, what is the mens rea required? It seems unlikely that AI programs will contravene laws that require knowledge that a criminal act was being committed; but it is highly possible they might contravene laws for which 'a reasonable man would have known' that a course of action could lead to an offence, and it is almost certain that they could contravene strict liability offences.
  3. Whether the limitations of AI systems are communicated to the purchaser. Since AI systems have both general and specific limitations, legal cases on such issues may well turn on the specific wording of any warnings about those limitations.

There is also the question of who should be held liable. That will depend on which of the following models applies (perpetrator-by-another; natural-probable-consequence; or direct liability):

  • In a perpetrator-by-another offence, the person who instructs the AI system – either the user or the programmer – is likely to be found liable.
  • In a natural-or-probable-consequence offence, liability could fall on anyone who might have reasonably foreseen the product being used in the way it was; the programmer, the vendor (of a product), or the service provider. The user is less likely to be blamed unless the instructions that came with the product/service spell out the limitations of the system and the possible consequences of misuse in unusual detail.
  • AI programs may also be held liable for strict liability offences, in which case the programmer is likely to be found at fault.

However, in all cases where the programmer is deemed liable, there may be further debates: whether the fault lies with the programmer; the program designer; the expert who provided the knowledge; or the manager who appointed the inept expert, program designer or programmer?

Am I over-reacting to the situation? I believe I am not. Western countries, the major contributors to this development, are working at a feverish pace to simultaneously develop laws ensuring that legal liability is properly fixed in cases of adverse impacts caused by AI and robots. Much study is under way, and experts are churning out research to set down normative standards for an effective enforcement mechanism to fix liability. India reacted very slowly when it came to laws relating to cyber-space crimes, and even now we are grappling with them with very few experts in the field. The system is yet to catch up with the enormity of the situation, and we face new challenges daily in dealing with offences such as hacking, scamming, identity theft, ransomware, phishing, cyberstalking and child pornography.

We must learn from our mistakes and inadequacies and not lag behind in dealing with AI and robots, which will very soon enter our world on a day-to-day basis. It is high time we started educating ourselves about these developments and began a dialogue on how we are going to handle the situation, whether by bringing in appropriate amendments to existing laws or by enacting legislation that deals exclusively with this issue. Let us not remain lackadaisical, thinking this might not challenge us soon. It is only a matter of time, and we ought to prepare ourselves to face it.

Scientific developments are here to stay and cannot be stopped. The onus is upon us to prepare ourselves to counter their ill-effects. We now have the time to think about this. Why not use it to look at what is happening around the world and take appropriate measures to bring in laws to handle the monster that is already staring at us?

Views are personal only.

(Justice N. Anand Venkatesh is a Judge at Madras High Court)
