As robots become smarter and more widespread, autonomous machines such as driverless cars are bound to end up making life-or-death decisions in unpredictable situations, thereby assuming, or at least appearing to assume, moral agency. Weapons systems currently have human operators "in the loop", but as they become more advanced it will be possible to shift to "on the loop" operation, with machines carrying out orders autonomously.
As that happens, they will face ethical dilemmas. Should a drone fire on a house where a target is known to be hiding, but which may also be sheltering civilians? Should a driverless car swerve to avoid pedestrians if that means hitting other vehicles or endangering its occupants? Should a robot involved in disaster recovery tell people the truth about what is happening if that risks causing a panic? Such questions have led to the emergence of the field of "machine ethics", which aims to give machines the ability to make such choices appropriately, in other words, to distinguish right from wrong.

2.0 Background of case study

Bart Matthews, a robot operator, was killed by an assembly-line robot at Cybernetics, Inc. in Silicon Heights. An investigation into the cause of the accident led the authorities to conclude that Randy Samuels, a programmer at Silicon Techtronics Inc., was responsible: the software module he wrote caused the erratic and violent robot behaviour that resulted in Bart Matthews' death. The victim was crushed to death when the robot he was operating malfunctioned and began waving its arm violently, throwing him against a wall and crushing his skull. According to the indictment, Samuels wrote the particular piece of program code responsible for the robot malfunction. The project physicist provided hand-written formulae, each describing the motion of the robot arm in one direction: east-west, north-south and up-down. Bill Park, a professor of physics at Silicon Valley University, confirmed that these equations could be used to describe the motion of a robot arm, and examined the program code written by Samuels to determine whether …
Asimov introduced the rules in his 1942 short story "Runaround", although they had been foreshadowed in a few earlier stories (Norman, 2015). The Three Laws, quoted from the fictional "Handbook of Robotics, 56th Edition, 2058 A.D.", are as follows:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These form an organizing principle and unifying theme for Asimov's robot-based fiction, appearing in his Robot series, the stories linked to it, and his Lucky Starr series of young-adult fiction (Norman, 2015). The Laws are incorporated into almost all of the positronic robots appearing in his fiction and cannot be bypassed, being intended as a safety feature. Many of Asimov's robot-focused stories involve robots behaving in strange and counterintuitive ways as an unintended consequence of how the robot applies the Three Laws to the situation in which it finds itself (Norman, 2015). Other authors working in Asimov's fictional universe have adopted them, and references, often parodies, appear throughout science fiction as well as in other genres.
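The three laws also define a strict order of precedence: the First Law overrides the Second, which in turn overrides the Third. As a purely illustrative aside, the short Python sketch below shows how such a precedence might be encoded as a simple rule check; the Outcome fields and the permitted function are hypothetical names invented for this illustration and do not come from Asimov's stories or from the killer-robot case.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Predicted consequences of a candidate action (hypothetical fields for illustration)."""
    harms_human: bool           # the action would injure a human being
    inaction_harms_human: bool  # refusing the action would let a human come to harm
    ordered_by_human: bool      # a human has ordered this action
    destroys_robot: bool        # the action would destroy the robot itself

def permitted(outcome: Outcome) -> bool:
    """Apply the Three Laws in strict priority order."""
    # First Law: never injure a human, and do not allow harm through inaction.
    if outcome.harms_human:
        return False
    if outcome.inaction_harms_human:
        return True  # the First Law compels action regardless of the lower laws
    # Second Law: obey human orders (already known not to conflict with the First Law).
    if outcome.ordered_by_human:
        return True
    # Third Law: protect the robot's own existence when the higher laws are silent.
    return not outcome.destroys_robot

# Example: a human orders an action that would injure someone; the First Law wins.
print(permitted(Outcome(harms_human=True, inaction_harms_human=False,
                        ordered_by_human=True, destroys_robot=False)))  # False
```

The point of the sketch is only that each lower law is consulted after the higher ones have been satisfied, which is exactly the structure that produces the counterintuitive robot behaviour Asimov's stories explore.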