Machines that look like people, fall in love, and wreck worlds may be on their way, suggest Wallach (Center for Bioethics, Yale Univ.) and Allen (history & philosophy of science, Indiana Univ.). Realistically, however, the problem today lies with computer programs that already act autonomously: playing a part in electrical blackouts, blocking credit cards, driving subway trains, and guiding military vehicles. The authors carefully examine how morality is conceptualized. On the face of it, robots cannot be moral agents, because intelligent machines run on a combination of fixed programs and randomizing devices that generate new data from which their programs can produce novel behavior. Wallach and Allen do not pretend that any robot we know of can have full moral agency; they see the problem instead as one of balancing goals and risks and keeping both within limits that people, after rational reflection, can accept. Robots can do this balancing, they argue, and it is time to get on with it. Every library should have this book.