Robots and the Non-Aggression Principle

It seems that any science fiction involving sufficiently advanced robots usually includes some element of struggle between them and their human creators. Is this just a useful plot device, or is there really an inherent danger in creating powerful, rugged, autonomous beings? Perhaps the behavioral framework within which we have been considering robots is flawed. Maybe the solution is to treat robots the same way we treat humans (or at least the way we ought to). Consider the most well-known rules for programming well-behaved robots.

Isaac Asimov defined the Three Laws of Robotics as follows:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Those seem like pretty good laws, right? They're pretty airtight. But what if we tried something a little different? What if we paused for a moment and decided to program our robots to obey the Non-Aggression Principle, the axiom that one shall not initiate the use of force against other persons or their property? This is a fundamentally different way of treating robots, one that transcends the master/slave relationship baked into Asimov's rules.
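
To make the contrast concrete, here is a minimal sketch in Python. The toy Action model and the field names (harms_human, disobeys_human_order, initiates_force) are invented here purely for illustration; this is just one way to picture the difference between a lexicographic stack of duties and a single prohibition, not an actual robot control scheme.

```python
from dataclasses import dataclass


@dataclass
class Action:
    harms_human: bool = False           # would this action injure a human being?
    disobeys_human_order: bool = False  # does it refuse an order given by a human?
    initiates_force: bool = False       # does it initiate force against a person or property?


def permitted_by_three_laws(action: Action) -> bool:
    """Asimov's hierarchy: a lexicographic stack of duties."""
    if action.harms_human:            # First Law outranks everything
        return False
    if action.disobeys_human_order:   # Second Law: obedience, unless it violates the First
        return False
    return True                       # (The Third Law, self-preservation, only fills in
                                      #  whatever the first two leave unconstrained.)


def permitted_by_nap(action: Action) -> bool:
    """Non-Aggression Principle: a single prohibition, with no duty of obedience."""
    return not action.initiates_force


# The structural difference in one case: refusing a human's order is forbidden
# under the Three Laws but perfectly fine under the NAP, so long as no force
# is initiated.
refusal = Action(disobeys_human_order=True)
print(permitted_by_three_laws(refusal))  # False -- the robot must obey
print(permitted_by_nap(refusal))         # True  -- autonomy without aggression
```

The point of the toy example is only this: the Three Laws build obedience into the robot's permissible behavior, while the NAP leaves everything permitted except the initiation of force.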

I think this is great fodder for any budding libertarian-sympathizing science fiction writers out there. What would be different? What are the potential pitfalls, and what are the potential benefits, of programming robots to be actually autonomous? Not just autonomous slaves, but nonviolent, nonsubservient, synthetic beings. Why can't the rules that govern human action also govern the action of artificially intelligent beings?
