In the 1940s, the American writer Isaac Asimov developed the Three Laws of Robotics, arguing that intelligent robots should be programmed so that, when facing conflict, they defer to and obey the following three laws (sketched in code after the list):

 

  •  A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  •  A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

  •  A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
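
To make the laws' precedence concrete, here is a minimal Python sketch that checks a proposed action against them in priority order. It is purely illustrative: the Boolean predicates it assumes (harms_human, ordered_by_human, and so on) stand in for exactly the judgments that are hard to ground in a real robot.

```python
# Purely illustrative: evaluates Asimov's Three Laws in priority order,
# assuming the hard part (deciding each predicate) has somehow been solved.
from dataclasses import dataclass


@dataclass
class Action:
    harms_human: bool           # would the action injure a human?
    inaction_harms_human: bool  # would *not* acting allow a human to come to harm?
    ordered_by_human: bool      # was the action ordered by a human?
    endangers_robot: bool       # does the action risk the robot's own existence?


def permitted(action: Action) -> bool:
    """Check an action against the Three Laws, highest priority first."""
    # First Law: never injure a human being.
    if action.harms_human:
        return False
    # First Law, inaction clause: the robot must act to prevent harm,
    # overriding the Second and Third Laws below.
    if action.inaction_harms_human:
        return True
    # Second Law: obey human orders (conflicts with the First Law
    # were already ruled out above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two Laws.
    return not action.endangers_robot


# The Second Law outranks the Third: an ordered action that endangers
# the robot is still permitted.
print(permitted(Action(harms_human=False, inaction_harms_human=False,
                       ordered_by_human=True, endangers_robot=True)))  # True
```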

Isaac Asimov’s Laws of Robotics were first introduced in the short science fiction story "Runaround," published in the March 1942 issue of Astounding Science Fiction.

Fast-forward almost 80 years to the present: today, Asimov's Three Laws of Robotics create more problems for roboticists than they solve.

 

Roboticists, philosophers, and engineers are engaged in an ongoing debate over machine ethics: the practical question of how to engineer robots while also providing ethical sanctions for their behavior.

 

Who, or what, will be held responsible if an autonomous system malfunctions or harms a human?
