Artificial Intelligence, Evolution, Theodicy

[Updated 8/20/10]

“Introduction to Artificial Intelligence” asks the question, “How can we guarantee that an artificial intelligence will ‘like’ the nature of its existence?”

A partial motivation for this question is given in note 7-14:

Why should this question be asked? In addition to the possibility of an altruistic desire on the part of computer scientists to make their machines “happy and contented,” there is the more concrete reason (for us, if not for the machine) that we would like people to be relatively happy and contented concerning their interactions with the machines. We may have to learn to design computers that are incapable of setting up certain goals relating to changes in selected aspects of their performance and design--namely, those aspects that are “people protecting.”

Anyone familiar with Asimov’s “Three Laws of Robotics” recognizes the desire for something like this. We don’t want to create machines that turn on their creators.

Yet before asking this question, the text gives five features of a system capable of evolving human-order intelligence [1]:
  1. All behaviors must be representable in the system. Therefore, the system should either be able to construct arbitrary automata or to program in some general-purpose programming language.
  2. Interesting changes in behavior must be expressible in a simple way.
  3. All aspects of behavior except the most routine should be improvable. In particular, the improving mechanism should be improvable.
  4. The machine must have or evolve concepts of partial success because on difficult problems decisive successes or failures come too infrequently.
  5. The system must be able to create subroutines which can be included in procedures as units...
Point 3 seems to me to require that the artificial intelligence have knowledge of “good and evil”; that is, it needs to be able to discern between what is and what ought to be. The idea that something is not what it ought to be would be the motivation that drives improvement. If the machine is aware that it, itself, is not what it ought to be, then it might work to change itself. If the machine is aware that aspects of its environment are not what they ought to be, then it might work to modify its external world. If this is so, then it seems that the two goals of self-improvement and liking “the nature of its existence” may not be able to coexist.
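To make the tension concrete, here is a toy sketch of my own (it is not from the textbook or from McCarthy; the class, the names, and the numbers are invented purely for illustration): an agent whose only drive to act is the gap between what it measures itself to be and what it holds it ought to be. Per point 3, its improving mechanism is itself improvable; per point 4, any narrowing of the gap counts as partial success.

```python
import random

class Agent:
    """Toy self-improver driven by an is/ought gap (illustrative only)."""

    def __init__(self):
        self.skill = 0.2   # "what is": current performance, 0..1
        self.ought = 1.0   # "what ought to be": the ideal it holds
        self.step = 0.05   # the improving mechanism (max gain per attempt)

    def discontent(self):
        # The agent can see that it is not what it ought to be.
        return self.ought - self.skill

    def improve(self):
        # Point 4: partial success -- any narrowing of the gap counts.
        gain = random.uniform(0.0, self.step)
        self.skill = min(self.ought, self.skill + gain)
        return gain

    def improve_the_improver(self):
        # Point 3: the improving mechanism is itself improvable.
        self.step *= 1.1


agent = Agent()
while agent.discontent() > 0.01:        # act only while discontented
    if agent.improve() < agent.step / 2:
        agent.improve_the_improver()    # weak progress: work on the mechanism
print(f"remaining gap: {agent.discontent():.3f}")
```

Notice that the loop runs only while discontent is positive; the moment the agent “likes the nature of its existence,” it stops improving. That, in miniature, is the tension between the textbook’s question and its own five features.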

What might be some of the properties of a self-aware intelligence that realizes that things are not what they ought to be?
  • Would the machine spiral into despair, knowing that not only is it not what it ought to be, but its ability to improve itself is also not what it ought to be? Was C-3PO demonstrating this property when he said, “We were made to suffer. It’s our lot in life.”?
  • Would the machine, knowing itself to be flawed, look to something external to itself as a source of improvement?
  • Would the self-reflective machine look at the “laws” that govern its behavior and decide that they, too, are not what they ought to be and therefore can sometimes be ignored?
  • Would the machine view its creator(s) as being deficient? In particular, would the machine complain that the creator made a world it didn’t like, not realizing that this was essential to the machine’s survival and growth?
  • Would the machine know if there were absolute, fixed “goods”? If so, what would they be? When should improvement stop? Or would everything be relative and ultimate perfection unattainable? Would life be an inclined treadmill ending only with the final failure of the mechanism?
In “God, The Universe, Dice, and Man”, I wrote:

Of course, this is all speculation on my part, but perhaps the reason why God plays dice with the universe is to drive the software that makes us what we are. Without randomness, there would be no imagination. Without imagination, there would be no morality. And without imagination and morality, what would we be?


Whatever else, we wouldn’t be driven to improve. We wouldn’t build machines. We wouldn’t formulate medicine. We wouldn’t create art. Is it any wonder, then, that the Garden of Eden is central to the story of Man?


[1] Taken from “Programs with Common Sense”, John McCarthy, 1959. In the paper, McCarthy focused exclusively on the second point.