Response to James

James commented on my post Bad Arguments Against Materialism a month ago, and it deserves a response. I appreciate every reader and, while I may not respond to every comment, I do want to engage in dialogue. "Many eyes make short work of bugs" can be as true here as it is with software (but don't get me started on "code reviews" that miss even the simplest mistakes!).

My only comment - and I'll leave it at this - is that, despite a very well worded argument, you seem to forget the very basis on which your argument stands. That being, using your own abstract allusion, though information (of any type, not just software of course) can be coded in zeros and ones, does not record itself. There needs be a CODER.

Under materialism, the coder is the universe itself. That is, the motion of particles, operating under physical law, gave rise to the motion of electrons in certain patterns that make up our thoughts. Whether or not this is the true explanation is hotly contested. One side argues that this is such an improbable occurrence that it couldn't be the right explanation. The other side argues that improbable things happen. Both sides tailor their argument according to their preconceived notions about the nature of reality. Coincidentally, John C. Wright has a droll take on it.

It may be transmitted one way or another, either zeros and ones, or brain waves, or goal-seeking algorithms, but itself is something rather more transcendent. If you doubt that, then why would more than one person get upset over the same wrong? (Say invasion of a country you don't even live in) or be offended when you step on the foot of an elderly woman whom you don't even know?

This is a topic that I hope to get to this year. There is an explanation for this; see Axelrod's "The Evolution of Cooperation." For an idea of how the argument will go, see Cybertheology.

And if we "call steps leading toward a goal good" then that simply means any goal is good. Including, say, a despot's systematic murder of an entire people. There are few goals as effective as that for survival of a people, state or regime.

First, whether or not a goal is good depends on its relationship to other goals, and those goals exist in relationship to other goals, and so on. That's one reason why morality is such a difficult subject -- the size of the goal space is so large. It's much, much bigger than the complex games of Chess and Go.

Second, there may be times when it's necessary for one group to die so that another may live. We don't like that notion, because we may think that the reasoning that leads to the deaths of others could one day be used against us; on the other hand, listen to the reasons given for the necessity of using nuclear weapons against Japan in World War II. That there is no universal agreement on this shows how difficult a problem it is.

You also note that Axelrod's game theory shows how the golden rule can arise in biological systems. Well, if that happens so "naturally," why hasn't it happened in any of the (numerous beyond count) organisms that have, on an evolutionary scale, been here longer than Man? Say, for instance, the shark? Or the ant, which has a complicated social system?

It has happened, and Axelrod (with William D. Hamilton) gives examples of this in chapter 5, "The Evolution of Cooperation in Biological Systems."
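For readers who want the mechanics rather than just the citation, Axelrod's result can be sketched in a few lines of Python. This is my illustrative reconstruction, not Axelrod's code: the payoff matrix is the standard Prisoner's Dilemma matrix, tit-for-tat stands in for the "golden rule" strategy, and the 200-round length is an arbitrary choice.

```python
# Illustrative sketch of Axelrod's iterated Prisoner's Dilemma tournament.
# The payoff matrix is the standard one; the strategies and round count
# are my choices, not Axelrod's exact setup.

PAYOFF = {  # (my move, their move) -> my score; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def always_defect(my_history, their_history):
    return 'D'

def tit_for_tat(my_history, their_history):
    # The "golden rule with memory": cooperate first, then mirror the opponent.
    return their_history[-1] if their_history else 'C'

def play(strat_a, strat_b, rounds=200):
    """Play two strategies against each other; return their total scores."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Mutual cooperation outscores mutual punishment over the long run:
# play(tit_for_tat, tit_for_tat)   -> (600, 600)
# play(tit_for_tat, always_defect) -> (199, 204): defection gains almost nothing
```

The striking part of Axelrod's finding is that the nice strategy needs no foresight or altruism, only reciprocity, which is why it can plausibly arise in sharks and ants as well as in us.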

We are not necessarily walking conundrums, BTW. …

Then you're a better man than St. Paul, who wrote:

I do not understand my own actions. For I do not do what I want, but I do the very thing I hate. Now if I do what I do not want, I agree that the law is good. But in fact it is no longer I that do it, but sin that dwells within me. For I know that nothing good dwells within me, that is, in my flesh. I can will what is right, but I cannot do it. For I do not do the good I want, but the evil I do not want is what I do. Now if I do what I do not want, it is no longer I that do it, but sin that dwells within me. So I find it to be a law that when I want to do what is good, evil lies close at hand. For I delight in the law of God in my inmost self, but I see in my members another law at war with the law of my mind, making me captive to the law of sin that dwells in my members. Wretched man that I am! Who will rescue me from this body of death? [Rom 7:15-24]

Which leads me to the last point: No, the Bible doesn't teach that Jesus died because of man's inability to follow any external code.

Actually, it does. Again, St. Paul wrote, "I do not nullify the grace of God; for if justification comes through the law, then Christ died for nothing." (Gal 2:21) and "For if a law had been given that could make alive, then righteousness would indeed come through the law." (Gal 3:21).


McCarthy, Hofstadter, Hume, AI, Zen, Christianity

A number of posts have noted the importance of John McCarthy's third design requirement for a human level artificial intelligence: "All aspects of behavior except the most routine should be improvable. In particular, the improving mechanism should be improvable." I claim here, here, and here that this gives rise to our knowledge of good and evil. I claim here that this explains the nature of the "is-ought" divide. I believe that McCarthy's insight has the potential to provide a framework that allows science to understand and inform morality and may wed key insights in religion with computer science. Or, I may be a complete nutter finding patterns where there are none. If so, I may be in good company.

For example, in Gödel, Escher, Bach, Hofstadter writes:

It is an inherent property of intelligence that it can jump out of the task which it is performing, and survey what it has done; it is always looking for, and often finding, patterns. (pg. 37)

Over 400 pages later, he repeats this idea:

This drive to jump out of the system is a pervasive one, and lies behind all progress in art, music, and other human endeavors. It also lies behind such trivial undertakings as the making of radio and television commercials. (pg. 478)

It seems to me that McCarthy's third requirement is behind this drive to "jump out" of the system. If a system is to be improved, it must be analyzed and compared with other systems, and this requires looking at a system from the outside.

Hofstadter then ties this in with Zen:

In Zen, too, we can see this preoccupation with the concept of transcending the system. For instance, the kōan in which Tōzan tells his monks that "the higher Buddhism is not Buddha". Perhaps self-transcendence is even the central theme of Zen. A Zen person is always trying to understand more deeply what he is, by stepping more and more out of what he sees himself to be, by breaking every rule and convention which he perceives himself to be chained by – needless to say, including those of Zen itself. Somewhere along this elusive path may come enlightenment. In any case (as I see it), the hope is that by gradually deepening one's self-awareness, by gradually widening the scope of "the system", one will in the end come to a feeling of being at one with the entire universe. (pg. 479)

Note the parallels to, and differences with, Christianity. Jesus said to Nicodemus, "You must be born again." (John 3:3) The Greek includes the idea of being born "from above," and "from above" is how the NRSV translates it, even though Nicodemus responds as if he heard "again". In either case, you must transcend the system. The Zen practice of "breaking every rule and convention" is no different from St. Paul's charge that we are all lawbreakers (Rom 3:9-10,23). The reason we are lawbreakers is that the law is not what it ought to be. And it is not what it ought to be because of our inherent knowledge of good and evil which, if McCarthy is right, is how our brains are wired. Where Zen and Christianity disagree is that Zen holds that man can transcend the system by his own effort, while Christianity says that man's effort is futile: God must effect that change. In Zen, you can break outside the system; in Christianity, you must be lifted out.

Note, too, that both have the same end goal, where finally man is at "rest". The desire to "step out" of the system, to continue to "improve", is finally at an end. The "is-ought" gap is forever closed. The Zen master is "at one with the entire universe" while for the Christian, the New Jerusalem has descended to Earth, the "sea of glass" that separates heaven and earth is no more (Rev 4:6, 21:1) so that "God may be all in all." (1 Cor 15:28). Our restless goal-seeking brain is finally at rest; the search is over.

All of this as a consequence of one simple design requirement: that everything must be improvable.


She Said Yes

Way, way behind on blogging. I can't believe it has been over a month since we drove to Illinois to see Johnny and Shari. On Friday, June 17, Becky, Rachel, and I headed for Urbana, where Johnny is working on his PhD in Mechanical Engineering. We got to meet Shari and Johnny's friends Patrick and Kimberly, tour the University of Illinois, and eat some great food. Saturday night we went to Destihl, a microbrewery in Champaign. We had reservations for seven at 7:15; however, we weren't seated until around 9. Something about a bachelorette party that had paid their tab but kept sitting around. While some in our party had gotten really hungry, it was, after all, a brewery with excellent beer, and beer and conversation isn't a bad way to fill the time. The staff wasn't happy about the delay, though, so they comped us some appetizers, one of which was beer-battered, deep-fried bacon. Heaven on earth. Between filling my mind at the university library and my stomach with the bacon, I could live a content man for, well, days maybe. That bacon would kill me.

We drove back on Monday.

On July 4th, Johnny asked Shari to marry him. It may have been Shari's mom who observed that Johnny gave up his independence on July 4th. I prefer to look at it as the start of a new nation. Congratulations, you two.


The Is-Ought Problem Considered As A Question Of Artificial Intelligence

In his book A Treatise of Human Nature, the Scottish philosopher David Hume wrote:

In every system of morality, which I have hitherto met with, I have always remarked, that the author proceeds for some time in the ordinary way of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when of a sudden I am surprized to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is, however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, it is necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.

This is the "is-ought" problem: in the area of morality, how to derive what ought to be from what is. Note that it is the domain of morality that seems to be the cause of the problem; after all, we derive ought from is in other domains without difficulty. Artificial intelligence research can show why the problem exists in one field but not others.

The is-ought problem is related to goal attainment. We return to the game of Tic-Tac-Toe as used in the post The Mechanism of Morality. It is a simple game, with a well-defined initial state and a small enough state space that the game can be fully analyzed. Suppose we wish to program a computer to play this game. There are several possible goal states:
  1. The computer will always try to win.
  2. The computer will always try to lose.
  3. The computer will play randomly.
  4. The computer will choose between winning or losing based upon the strength of the opponent. The more games the opponent has won, the more the computer plays to win.
It should then be clear that what the computer ought to do depends on the final goal state.
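To make this concrete, here is a minimal sketch in Python. The board encoding, function names, and goal labels are mine; the point is only that identical rules, searched toward different goal states, produce different "ought" moves from the same position.

```python
# A sketch of goal-dependent "ought" in Tic-Tac-Toe (goals 1 and 2 from the
# list above). The board is a list of 9 cells, 'X', 'O', or None, row by row.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(b):
    for i, j, k in LINES:
        if b[i] is not None and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, to_move):
    # Exhaustive search; value from X's perspective: +1 X wins, -1 O wins, 0 draw.
    w = winner(b)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i in range(9) if b[i] is None]
    if not moves:
        return 0
    vals = []
    for m in moves:
        b[m] = to_move
        vals.append(minimax(b, 'O' if to_move == 'X' else 'X'))
        b[m] = None
    return max(vals) if to_move == 'X' else min(vals)

def ought_move(b, goal):
    # X to move. goal='win' maximizes X's outcome, goal='lose' minimizes it;
    # deeper play is assumed optimal-to-win for both sides (a simplification).
    moves = [i for i in range(9) if b[i] is None]
    def value(m):
        b[m] = 'X'
        v = minimax(b, 'O')
        b[m] = None
        return v
    return max(moves, key=value) if goal == 'win' else min(moves, key=value)

# X X .    With the goal "win," X ought to complete the top row (cell 2);
# O O .    with the goal "lose," X ought to leave O's middle row open instead.
# . . .
board = ['X', 'X', None, 'O', 'O', None, None, None, None]
# ought_move(board, 'win') -> 2; ought_move(board, 'lose') -> a cell in 6..8
```

Nothing about the rules of the game tells the program which of these moves it "ought" to make; only the goal state does.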

As another example, suppose we wish to drive from point A to point B. The final goal is well established but there are likely many different paths between A and B. Additional considerations, such as shortest driving time, the most scenic route, the location of a favorite restaurant for lunch, and so on influence which of the several paths is chosen.

We can therefore characterize the is-ought problem as a beginning state B, an end state E, a set P of paths from B to E, and a set of conditions C. "Ought" is then the path in P that satisfies the constraints in C. In other words, the is-ought problem is a search problem.
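A minimal sketch of this characterization, with an invented road graph standing in for the drive from A to B (the nodes, driving times, and conditions are all made up for illustration):

```python
# B, E, the set P of paths between them, and conditions C that select "ought."

GRAPH = {  # node -> {neighbor: driving minutes}
    'A': {'X': 10, 'Y': 8},
    'X': {'B': 20, 'Y': 5},
    'Y': {'B': 12},
    'B': {},
}

def all_paths(graph, start, end, path=None):
    # P: every acyclic path from the beginning state to the end state.
    path = (path or []) + [start]
    if start == end:
        yield path
        return
    for nxt in graph[start]:
        if nxt not in path:
            yield from all_paths(graph, nxt, end, path)

def cost(path):
    return sum(GRAPH[a][b] for a, b in zip(path, path[1:]))

def ought(graph, start, end, conditions):
    # C: keep only the paths satisfying every condition, then prefer the fastest.
    admissible = [p for p in all_paths(graph, start, end)
                  if all(c(p) for c in conditions)]
    return min(admissible, key=cost)

# The same facts yield different "oughts" under different conditions:
# ought(GRAPH, 'A', 'B', [])                    -> ['A', 'Y', 'B']  (fastest)
# ought(GRAPH, 'A', 'B', [lambda p: 'X' in p])  -> ['A', 'X', 'Y', 'B']
#     (a lunch stop at the restaurant in X changes the answer)
```

The "is" facts (the graph) never change; only the conditions do, and with them the "ought."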

The game of Tic-Tac-Toe is simple enough to be fully analyzed: the state space is small enough that an exhaustive search can be made of all possible moves. Games such as Chess and Go are so complex that they haven't been fully analyzed, so we have to make educated guesses about the set of paths to the end game. The fancy name for these guesses is "heuristics," and one aspect of the field of artificial intelligence is discovering which guesses work well for various problems. The sheer size of the state space contributes to the difficulty of establishing common paths. Assume three chess programs: White1, White2, and Black. White1 plays Black, and White2 plays Black. Because of their different heuristics, White1 and White2 would agree on everything except perhaps the next move that ought to be made. If White1 and White2 achieved the same won/loss record against Black, the only way to know which program had the better heuristic would be to play White1 against White2. Yet even if a clear winner were established, there would still be the possibility of an even better player waiting to be discovered. The sheer size of the game space precludes determining "ought" with any certainty.
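A toy sketch of the disagreement between White1 and White2. The position encoding, the move names, and both evaluation functions are invented; real chess evaluation is vastly more elaborate, but the shape of the problem is the same.

```python
# Two programs, identical rules, different heuristics, different "ought."
# Each candidate move leads to a position summarized as (material, mobility).

candidate_moves = {
    'Nxe5': (1, 3),  # wins a pawn but leaves the pieces passive
    'Bc4':  (0, 9),  # keeps material level but maximizes activity
}

def heuristic_white1(pos):
    material, mobility = pos
    return 10 * material + mobility   # White1 values material above all

def heuristic_white2(pos):
    material, mobility = pos
    return material + 10 * mobility   # White2 values mobility above all

best_for_white1 = max(candidate_moves,
                      key=lambda m: heuristic_white1(candidate_moves[m]))
best_for_white2 = max(candidate_moves,
                      key=lambda m: heuristic_white2(candidate_moves[m]))
# Identical position, identical rules, different "ought":
# best_for_white1 == 'Nxe5'; best_for_white2 == 'Bc4'
```

Neither weighting is provably correct; only play between the two programs, over a sliver of the game space, gives evidence about which guess is better.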

The metaphor of life as a game (in the sense of achieving goals) is apt here, and morality is the set of heuristics we use to navigate the state space. The state space for life is much larger than the state space for chess; unless there is a common set of heuristics for living, it is unlikely that humans will choose the same paths toward a goal. Yet the size of the state space isn't the only factor that makes establishing oughts difficult with respect to morality. A chess program has a single goal: to play chess according to some set of conditions. Humans, however, are not fixed-goal agents. This follows from John McCarthy's five design requirements for human-level artificial intelligence, as detailed here and here. In brief, McCarthy's third requirement was "All aspects of behavior except the most routine should be improvable. In particular, the improving mechanism should be improvable." What this means for a self-aware agent is that nothing is what it ought to be. How this works out in our brains is unclear, but part of our wetware is not satisfied with the status quo. There is an algorithmic "pressure" to modify goals. The gap between is and ought is thus an integral part of our being, compounded by the size of the state space. Not only are we unable to fully determine the paths to an end state; we also have the impulse to change the end states and the conditions for choosing among candidate paths.
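A toy rendering of McCarthy's third requirement might look like this. To be clear, this is my illustration, not McCarthy's formulation: the numbers and the particular "improvement" rules are arbitrary; the point is only the two levels of revisability.

```python
# An agent whose goal is improvable, and whose goal-improving mechanism is
# itself improvable -- McCarthy's third design requirement, as a sketch.

class Agent:
    def __init__(self):
        self.goal = 10                   # current target
        self.improve = lambda g: g + 1   # the improving mechanism

    def step(self, achieved):
        # Nothing stays "what it ought to be": meeting the goal triggers
        # algorithmic pressure to replace it with a more demanding one.
        if achieved >= self.goal:
            self.goal = self.improve(self.goal)

    def improve_the_improver(self):
        # The improving mechanism is itself improvable.
        old = self.improve
        self.improve = lambda g: old(g) * 2

# agent = Agent(); agent.step(10)               # goal becomes 11
# agent.improve_the_improver(); agent.step(11)  # goal becomes (11 + 1) * 2 = 24
```

Such an agent can never settle: every satisfied goal, and every satisfactory way of revising goals, is itself a candidate for revision.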

What also isn't clear is the relationship between reason and this sense of "wrongness." Personal experience is sufficient to establish that there are times we know what the right thing to do is, yet we do not do it. That is, reason isn't always sufficient to stop our brain's search algorithm. Since Hume mentioned God, it is instructive to ask, "Why is God morally right?" Here, "God" represents both the ultimate goal and the set of heuristics for attaining that goal. This means that, by definition, God is morally right. Yet the "problem" of theodicy shows that, in spite of reason, there is no universally agreed upon answer to this question. The mechanism that drives goal creation is opposed to fixed goals, of which "God" is the ultimate expression.

In conclusion, the "is-ought" gap is algorithmic in nature. It exists partly because of the inability to fully search the state space of life and partly because of the way our brains are wired for goal creation and goal attainment.