Akratic robots and the computational logic thereof

Title: Akratic robots and the computational logic thereof
Publication Type: Conference Paper
Year of Publication: 2014
Authors: Bringsjord, S., Govindarajulu, N. S., Thero, D., Si, M.
Conference Name: Ethics in Science, Technology and Engineering, 2014 IEEE International Symposium on
Date Published: 23-24 May 2014
Publication Language: eng
Keywords: akrasia, akratic, American, cognition, computational, computer, disease, educational, enemy, ethics, knowledge, robots, slips, substrates
Abstract: Alas, there are akratic persons. We know this from the human case, and our knowledge is nothing new, since for instance Plato analyzed rather long ago a phenomenon all human persons, at one point or another, experience: (1) Jones knows that he ought not to - say - drink to the point of passing out, (2) earnestly desires that he not imbibe to this point, but (3) nonetheless (in the pleasant, seductive company of his fun and hard-drinking buddies) slips into a series of decisions to have highball upon highball, until collapse. Now: could a robot suffer from akrasia? Thankfully, no: only persons can be plagued by this disease (since only persons can have full-blown P-consciousness, and robots can't be persons (Bringsjord 1992)). But could a robot be afflicted by a purely - to follow Pollock (1995) - "intellectual" version of akrasia? Yes, and for robots collaborating with American human soldiers, even this version, in warfare, isn't a savory prospect: a robot that knows it ought not to torture or execute enemy prisoners in order to exact revenge, desires to refrain from firing upon them, but nonetheless slips into a decision to ruthlessly do so - well, this is probably not the kind of robot the U.S. military is keen on deploying. Unfortunately, for reasons explained below, unless the engineering we recommend is supported and deployed, this might well be the kind of robot that our future holds.