Notes on Feser's "From Aristotle..."

[updated 5/5/2020 for clarity, 5/6/2020 to add an aside on qualia]

Some notes on Edward Feser's "From Aristotle to John Searle and Back Again: Formal Causes, Teleology, and Computation in Nature". This is not a detailed rebuttal; rather, it's an outline of points of disagreement with various statements in his paper. To better understand why I disagree the way I do, previous experience with the lambda (λ) calculus is helpful. Reviewing my disagreement with Searle's Chinese Room Argument may also be useful. I wrote that article over a year ago and promised to revisit it in more detail. One of these days. Still, my understanding of Searle's argument is this:

We can, in theory, construct a machine that can translate from Chinese to another language, without it understanding Chinese. Therefore, we cannot construct a machine that can both translate and understand Chinese.

The conclusion simply doesn't follow, and I don't understand how it manages to impress so many people. One possibility is confirmation bias.[1] Fortunately, one of the fathers of computer science, John McCarthy, independently came to the same conclusion. See "John Searle's Chinese Room Argument".

Feser makes the same kinds of mistakes as Searle.

Syntax is not sufficient for semantics.

From John Searle's Chinese Room paper, quoted by Feser.

True, but incomplete. The λ calculus has syntax (λ expressions) and semantics (λ evaluation).
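
For example (a standard reduction, not an example from Feser's paper): the syntax is the shape of the expression; the semantics is the substitution rule that rewrites it. Applying the identity function to another identity function,

    (λx. x)(λy. y)  →  λy. y

The arrow is λ evaluation (β-reduction): substitute the argument for the bound variable. Syntax and semantics, in one line.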

The problem is this. The status of being a "symbol," Searle argues, is simply not an objective or intrinsic feature of the physical world. It is purely conventional or observer-relative.

  • This is exactly right: the status of being a symbol is observer-relative. But this isn't a problem. In the λ calculus, meaning is the arbitrary association of a symbol with another set of arbitrary symbols; it is simply an arbitrary association of this with that. What Searle and Feser miss is that the most fundamental "thats" are our sense impressions of the (presumably) external world. Because our brains are built mostly the same way, and because we perceive nature in mostly the same way, we share a common set of "this with that" mappings, upon which we then build additional shared meaning.
  • This is why there is no problem with qualia. It doesn't matter how a brain encodes this and that; it is the association that determines meaning, not the qualia themselves. (See here.)
  • In the final analysis, nature observes itself, since we observers are a part of nature. As the Minbari say, "We are 'star stuff.' We are the universe, made manifest - trying to figure itself out."

Its status as a "computer" would be observer-relative simply because a computer is not a "natural kind," but rather a sort of artifact.

  • First, as Feynman wrote, "Computer theory has been developed to a point where it realizes that it doesn't make any difference; when you get to a universal computer, it doesn't matter how it's manufactured, how it's actually made."[2]
  • We have been made by nature. We can, and likely will, argue forever over how this actually happened, but this paper cannot concern itself with either "why does the universe exist?" or "why does the universe exist the way it does?".
  • We observe ourselves ("Cogito ergo sum").

In short, Searle says, "computational states are not discovered within the physics, they are assigned to the physics."

  • I think this betrays "linear parallel" thinking: this is "this," that is "that," and the two never meet. But what Searle and Feser miss is that nature is self-referential. Nature can describe itself. And that's why the objection "Hence, just as no physicist, biologist, or neuroscientist would dream of making use of the concept of a chair in explaining the natural phenomena in which they deal, neither should they make use of the notion of computation" is wrong.
  • Chairs aren't self-referential objects. Computation, intelligence, and nature itself are. Recursion is fundamental to computation. In implementing a λ calculus evaluator, Eval calls Apply and Apply calls Eval, as sketched below. We may (or may not) use the concept of "chair" to explain natural phenomena, but we can't escape using the concept of intelligence to explain intelligence. This computer science aphorism is instructive: to understand recursion you must first understand recursion.
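
Here is a minimal sketch of that mutual recursion, in Python rather than hardware (my own illustration, not code from Feser or Searle). The evaluator is itself just more symbols, which is the self-reference in question:

    # A tiny lambda-calculus evaluator. Terms are tuples:
    # ("var", name) | ("lam", param, body) | ("app", fn, arg)

    def eval_term(term, env):
        """Evaluate a term in an environment mapping names to values."""
        kind = term[0]
        if kind == "var":
            return env[term[1]]                        # look the symbol up
        if kind == "lam":
            return ("closure", term[1], term[2], env)  # capture the environment
        if kind == "app":
            fn = eval_term(term[1], env)               # Eval calls itself...
            arg = eval_term(term[2], env)
            return apply_fn(fn, arg)                   # ...then calls Apply
        raise ValueError("unknown term: %r" % (term,))

    def apply_fn(fn, arg):
        """Apply a closure to an argument."""
        _, param, body, env = fn
        new_env = dict(env)
        new_env[param] = arg
        return eval_term(body, new_env)                # Apply calls Eval

    # (lambda x. x) applied to (lambda y. y) reduces to (lambda y. y):
    program = ("app", ("lam", "x", ("var", "x")), ("lam", "y", ("var", "y")))
    print(eval_term(program, {}))   # ('closure', 'y', ('var', 'y'), {})
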
[Referring to Kripke's quus example: x quus y = x + y if x + y < 57, otherwise 5. 10 quus 7 is 17; 50 quus 60 is 5.]

For, whatever we say about what we mean when we use terms like "plus," "addition," and so on, there are no physical features of a computer that can determine whether it is carrying out addition or quaddition, no matter how far we extend its outputs.

This is, of course, false. The programming is the wiring. One could, in theory (although it might be nigh impossible in practice to untangle how symbols flow through the wires), recover the method by reverse engineering the wiring. Then one could determine whether addition or quaddition was being performed. Since the methods are different, the wiring would be different.
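
To make this concrete, here is a sketch (mine, using the quus definition quoted above): on small inputs the outputs agree, but the "wiring" differs, and in Python you can even inspect it:

    import dis

    def plus(x, y):
        return x + y

    def quus(x, y):                       # quus, as defined above
        return x + y if x + y < 57 else 5

    print(plus(10, 7), quus(10, 7))       # 17 17  (indistinguishable here)
    print(plus(50, 60), quus(50, 60))     # 110 5  (the wiring shows through)

    # The "physical" difference is there all along, even on inputs where
    # the outputs agree: the compiled bytecode (the wiring) is different.
    dis.dis(plus)
    dis.dis(quus)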

[Searle] is not saying whether there are [rigorously specifiable empirical criteria for whether something ... is a computer] or not; rather, that something fitting those criteria counts as a computer is ultimately a matter of convention, rather than observer-independent fact.

How nature behaves is empirical fact. Putting labels on different aspects of that behavior is a matter of convention. Searle is objecting to the very nature of nature.

[Searle holds] that having a certain physical structure is a necessary condition for a system's carrying out a certain computation. Searle's point, though, is that it is nevertheless not a sufficient condition.

This is false for systems that compute. In the case of a Turing machine, the wiring, the physical structure, is both necessary and sufficient. It is a self-referential structure. For systems with less computational power than a Turing machine, the wiring will be simpler.
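
A minimal sketch of the point (my illustration): a toy Turing machine whose entire behavior is fixed by its transition table. Change the table (the wiring) and you change the computation; nothing else about the machine matters:

    # A tiny Turing machine that increments a binary number.
    # The whole computation is determined by this table (the "wiring"):
    # (state, symbol) -> (write, move, next_state); head starts at the right.
    TABLE = {
        ("inc", "1"): ("0", -1, "inc"),   # carry: 1 -> 0, keep moving left
        ("inc", "0"): ("1",  0, "halt"),  # 0 -> 1, done
        ("inc", "_"): ("1",  0, "halt"),  # ran off the left edge: new digit
    }

    def run(tape, state="inc"):
        tape = list(tape)
        pos = len(tape) - 1
        while state != "halt":
            symbol = tape[pos] if 0 <= pos < len(tape) else "_"
            write, move, state = TABLE[(state, symbol)]
            if pos < 0:
                tape.insert(0, write)
                pos = 0
            else:
                tape[pos] = write
            pos += move
        return "".join(tape)

    print(run("1011"))  # 1100
    print(run("111"))   # 1000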

If evolution produced something that was chair-like, it would not follow that it had produced a chair, and if evolution produced something symbol like, it would not follow that it had produced symbols.

  • First, this is the Sorites paradox on display. At what point is something like x actually x? It depends on definitions, and definitions can be fuzzy.
  • Second, and absolutely devastating to Feser's argument, is that in the λ calculus, symbols are meaningless.
  • Third, in the λ calculus, symbols are nothing more than distinct objects. And nature is full of distinct objects that can be used as symbols. Positive and negative charges are important because they are distinguishable, and they are self-distinguishing! (See the sketch after this list.)
  • Fourth, how evolution builds a self-referential structure in which symbols acquire meaning through the equivalent of λ evaluation is, of course, contentious.
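
Here is the promised sketch (mine, using the standard Church encodings, which are not discussed in Feser's paper): "true" and "false" built from symbols that mean nothing in themselves. Their meaning is exhausted by how they select this or that:

    # Church booleans: "true" selects its first argument, "false" its second.
    # The symbols carry no meaning; the pattern of selection is the meaning.
    TRUE  = lambda x: lambda y: x
    FALSE = lambda x: lambda y: y

    NOT = lambda b: lambda x: lambda y: b(y)(x)   # swap the selection
    AND = lambda a: lambda b: a(b)(a)             # if a then b else a
    OR  = lambda a: lambda b: a(a)(b)             # if a then a else b

    def if_then_else(b, then_val, else_val):
        return b(then_val)(else_val)

    print(if_then_else(TRUE, "this", "that"))               # this
    print(if_then_else(NOT(TRUE), "this", "that"))          # that
    print(if_then_else(AND(TRUE)(FALSE), "this", "that"))   # that
    print(if_then_else(OR(FALSE)(TRUE), "this", "that"))    # this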

If the computer scientist's distinction between "bugs" and "features" has application to natural phenomena, so too does the distinction between "software" and "hardware."

The λ calculus consists of λ expressions and λ evaluation. λ evaluation is just a list of substitution rules for symbols, and symbols are just distinguishable objects. In this sense, the program (λ expressions) and computer (λ evaluation) distinction exists. However, λ evaluation can be written in terms of λ expressions, and here the program/computer distinction disappears. It's all program (if you observe the behavior) and it's all computer (if you look at the hardware). A λ calculus evaluator can be written in the λ calculus (see Paul Graham's The Roots of Lisp), which is then arranged as a sequence of NAND gates (or whatever logic gates you care to use; cf. the Feynman quote above). So it's very hard to know if something is a "bug" or a "feature" from the standpoint of the computer. It's just doing what it's doing. It's only as you impose a subjective view of what it should be doing, and how it should do it, that bugs and features appear. Nature says "reproduce" (if one may be permitted an anthropomorphism). And nature has produced objects that do that spectacularly.
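
To ground the hardware half of that claim (a sketch of mine, not Feser's): every logic gate, and so ultimately the whole computer, can be wired from NAND gates alone:

    # NAND is universal: every other gate is just NANDs wired together.
    def nand(a, b):
        return not (a and b)

    def not_(a):    return nand(a, a)
    def and_(a, b): return nand(nand(a, b), nand(a, b))
    def or_(a, b):  return nand(nand(a, a), nand(b, b))
    def xor(a, b):  return or_(and_(a, not_(b)), and_(not_(a), b))

    # A half adder: the first step toward "plus" in hardware.
    def half_adder(a, b):
        return xor(a, b), and_(a, b)   # (sum bit, carry bit)

    for a in (False, True):
        for b in (False, True):
            print(int(a), int(b), "->", tuple(map(int, half_adder(a, b))))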

But no such observer-relative purposes can be appealed to in the case of the information the computationalist attributes to physical states in nature.

The λ calculus simply specifies a set of symbols and the set of operations on those symbols that comprise what we call computation. What needs to be understood is that symbols as meaningless objects and symbols as bearers of meaning are the same symbols. The λ calculus does not have one set of symbols that have no meaning and another set of symbols that have meaning. There is only one alphabet of at least two different symbols. If you follow a symbol through a computational network, you can't easily tell, at some point in the network, whether the object is being used as a symbol or as a value. Only the network knows. We might be able to reverse engineer it by painstaking probing of the system, but even there our efforts might be thwarted. After all, a symbol could be used one way in one part of the network and a completely different way in another. That is, computers don't have to be consistent in the way they use symbols. All that matters is the output. Even our computing systems aren't always consistent in the way things are arranged; consider, for example, when little-endian systems interface with big-endian peripherals.

Due to the complexity of "knowing" the system from the outside, you have to hope that the system can tell you what it means and that you can translate what it tells you into your internal ideas of meaning. I can generally understand what my dog is telling me, but that's because I anthropomorphize his actions. I have to. It's the only way I can "understand" him.
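
The endianness example can be made concrete with Python's standard struct module (my sketch): the same bytes are a different number depending on the convention the reader assumes:

    import struct

    # The same integer, serialized under two different conventions:
    little = struct.pack("<I", 1027)   # b'\x03\x04\x00\x00'
    big    = struct.pack(">I", 1027)   # b'\x00\x00\x04\x03'

    # Read the little-endian bytes under a big-endian assumption, and the
    # "meaning" of the very same symbols changes:
    print(struct.unpack("<I", little)[0])  # 1027
    print(struct.unpack(">I", little)[0])  # 50593792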

Moreover, as John Mayfield notes, "an important requirement for an algorithm is that it must have an outcome," and "instructions" of the sort represented by an algorithm are "goal-oriented."

  • It is true that algorithms must terminate. That's the definition of "algorithm".[3] But algorithms are a subset of computing. A computational process need not terminate. (See the sketch after this list.)
  • All computing networks are goal-oriented. The fundamental unit of computation is the combination of symbols and selection therefrom. By definition, this behavior introduces a direction from input to output, from many to fewer. (One might quibble that inversion takes one symbol and produces the "opposite" symbol, but one can implement "not" using "nand" gates, and "nand" gates are goal-oriented.) So if logic gates are goal-oriented, systems built out of gates are goal-oriented. The goal of an individual gate may be determinable; the goal of a system built out of these elements can be extremely difficult, if not impossible, to fathom. Sometimes I understand my dog. Other times, all I see is emptiness when I look into his eyes. All we can do is compare the behavior of a system (or organism) to ours and try to establish common ground.
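
Here is the promised sketch (mine): an algorithm, which must halt with an outcome, next to a computational process that runs forever:

    from itertools import islice

    def euclid_gcd(a, b):
        """An algorithm: guaranteed to terminate with an outcome."""
        while b:
            a, b = b, a % b
        return a

    def collatz_stream(n):
        """A computational process: yields values forever (it cycles
        4 -> 2 -> 1 -> 4 -> ... and never halts)."""
        while True:
            yield n
            n = 3 * n + 1 if n % 2 else n // 2

    print(euclid_gcd(1071, 462))                # 21, and it stops
    print(list(islice(collatz_stream(6), 10)))  # ten values of a process that never stops
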
The information content of the output [of a computation] can be less than the input but not greater.

True, but irrelevant for systems that get input from the environment. That is, computers need not be closed systems. With the correct peripherals, a computer can take as input all of the behavior of the universe.

Darwin's account shows that the apparent teleology of biological process is an illusion.

  • Underlying this claim is the idea that randomness exhibits purposelessness.
  • However, one can equally make the claim that randomness hides purpose. As Donald Knuth wrote, "Indeed, computer scientists have proved that certain important computational tasks can be done much more efficiently with random numbers than they could possibly ever be done by deterministic procedure. Many of today's best computational algorithms, like methods for searching the internet, are based on randomization."[4] (See the sketch after this list.)
  • Whether someone thinks randomness is purposeless or hides purpose is based on one's a priori worldview.
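
Here is the promised sketch (mine, not an example from Knuth): a Monte Carlo estimate of π, a classic case of randomness doing purposeful work:

    import random

    def estimate_pi(samples=1_000_000):
        """Estimate pi by throwing random darts at the unit square:
        the fraction landing inside the quarter circle approaches pi/4."""
        inside = 0
        for _ in range(samples):
            x, y = random.random(), random.random()
            if x * x + y * y <= 1.0:
                inside += 1
        return 4 * inside / samples

    random.seed(42)
    print(estimate_pi())   # about 3.14 -- randomness, used on purpose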

The key is to reject that [mechanistic] picture and return to the one it supplanted [the Aristotelian-Scholastic].

The fallacy of the false dilemma. A third alternative is to understand the "mechanistic" picture deeply, for what it actually says.



[1] Battlestar Galactica: "I'm not a Cylon..."
[2] Richard Feynman, "Simulating Physics with Computers", International Journal of Theoretical Physics, Vol. 21, Nos. 6/7, 1982.
[3] Donald Knuth, The Art of Computer Programming, Volume 1: Fundamental Algorithms, Section 1.1.
[4] Donald Knuth, Things a Computer Scientist Rarely Talks About.
