Whether or not √2 is irrational cannot be shown by measuring it.1,2
Whether or not the Church-Turing hypothesis is true cannot be shown by thinking about it.

[1] "Euclidean and Non-Euclidean Geometries", Greenberg, Marvin J., Second Edition, pg. 7: "The point is that this irrationality of length could never have been discovered by physical measurements, which always include a small experimental margin of error."
[2] This quote was partially inspired by Scott Aaronson's "PHYS771 Lecture 9: Quantum", where he talks about the necessity of experiments.

Mayors of Judsonia, Arkansas

I had occasion to need to know who succeeded my grandfather as mayor of Judsonia, Arkansas. I was able to eventually find the answer, but I was unable to locate a list of past mayors. Amber at City Hall went above and beyond to compile a list of past mayors from available records. Thank you, Amber!

This list needs to be preserved.

April 1915 – 1916: ST Hughes
April 1916 – 1917: James Huntley
April 1917 – March 1919: JC Gibson
1919 – 1921: RC Mann
May 1921 – April 1922: John White
April 1922 – 1923: Paul Bauer
April 1923 – March 1924: JNO White
April 1924 – March 1925: Walter Ladd
April 1925 – February 1926: GM Walters
March 1926 – 1951: Wylie Robert Felts, MD [1]
November 1951 – 1956: Ralph L Van Meter
1956 – May 1957: Chester Bramlett
May 1957 – 1957: Billy Wood, Acting Mayor
1958 – 1958: Walter Smith
1959 – 1978: Jimmy Miller
1979 – 1982: Bill Stutts
1983 – 1986: Johney Gibson
1986 – July 31, 1993: Jim Harris
August 1993 – September 1994: Chester Williams (DoD 11-30-1994)
September 1994 – March 1995: LaJunta Whitener, Acting Mayor
April 1995 – May 16, 1996 (resigned): Lawrence Mcintire
June 1996: LaJunta Whitener, Acting Mayor
June 1996 – 1998: Charles Bice
1998 – November 2013: Rickey Veach (DoD 11-18-2013)
December 2013 – 2018: Ronnie Schlem
2019 – Current: Stan Robinson

[1] My paternal grandfather. To date, he has had the longest tenure of any mayor.

Searle's Chinese Room: Another Nail

[updated 2/28/2024 - See "Addendum"]

I have discussed Searle's "Chinese Room Argument" twice before: here and here. It isn't necessary to review them. While both of them argue against Searle's conclusion, they aren't as complete as I think they could be. This is one more attempt to put another nail in the coffin, but the appeal of Searle's argument is so strong - even though it is manifestly wrong - that it may refuse to stay buried.

Searle's paper, "Minds, Brains, and Programs", is here. He argues that computers will never be able to understand language the way humans do for these reasons:
  1. Computers manipulate symbols.
  2. Symbol manipulation is insufficient for understanding the meaning behind the symbols being manipulated.
  3. Humans cannot communicate semantics via programming.
  4. Therefore, computers cannot understand symbols the way humans understand symbols.
In evaluating Searle's argument:
  1. Is certainly true. But Searle's argument ultimately fails because he only considers a subset of the kinds of symbol manipulation a computer (and a human brain) can do.
  2. Is partially true. This idea is also expressed as "syntax is insufficient for semantics." I remember, from over 50 years ago, when I started taking German in 10th grade. We quickly learned to say "good morning, how are you?" and to respond with "I'm fine. And you?" One morning, our teacher was standing outside the classroom door and asked each student as they entered, the German equivalent of "Good morning," followed by the student's name, "how are you?" Instead of answering from the dialog we had learned, I decided to ad lib, "Ich bin heiss." My teacher turned bright red from the neck up. Bless her heart, she took me aside and said, "No, Wilhelm. What you should have said was, 'Es ist mir heiss'. To me it is hot. What you said was that you are experiencing increased libido." I had used a simple symbol substitution, "Ich" for "I", "bin" for "am", and "heiss" for "hot", temperature-wise. But, clearly, I didn't understand what I was saying. Right syntax, wrong semantics. Nevertheless, I do now understand the difference. What Searle fails to establish is how meaningless symbols acquire meaning. So he handicaps the computer. The human has meaning and substitution rules; Searle only allows the computer substitution rules.
  3. Is completely false.
Because 2 is only partially true and 3 is false, his conclusion does not follow.
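The substitution-only translation from the anecdote in point 2 can be sketched in a few lines. This is a hypothetical fragment of a word list, illustrating the only kind of rule Searle grants the computer:

```python
# A purely syntactic, word-for-word substitution table (invented fragment).
SUBSTITUTIONS = {"I": "Ich", "am": "bin", "hot": "heiss"}

def substitute(sentence: str) -> str:
    """Replace each word with its German counterpart; no semantics involved."""
    return " ".join(SUBSTITUTIONS.get(word, word) for word in sentence.split())

print(substitute("I am hot"))  # "Ich bin heiss" - right syntax, wrong semantics
```

The program faithfully applies its rules and still says the wrong thing, which is exactly the handicap described above: substitution rules without associations.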

To understand why 2 is only partially true, we have to understand why 3 is false.
  1. A Turing-complete machine can simulate any other Turing machine. Two machines are Turing equivalent if each machine can simulate the other.
  2. The lambda calculus is Turing complete.
  3. A machine composed of NAND gates (a "computer" in the everyday sense) can be Turing complete.
    • A NAND gate (along with a NOR gate) is a "universal" logic gate.
    • Memory can also be constructed from NAND gates.
    • The equivalence of a NAND-based machine and the lambda calculus is demonstrated by instantiating the lambda calculus on a computer.1
  4. From 3, every computer program can be written as expressions in the lambda calculus; every computer program can be expressed as an arrangement of logic gates. We could, if we so desired, build a custom physical device for every computer program. But it is massively economically infeasible to do so.
  5. Because every computer program has an equivalent arrangement of NAND gates2, a Turing-complete machine can simulate that program.
  6. NAND gates are building-blocks of behavior. So the syntax of every computer program represents behavior.
  7. Having established that computer programs communicate behavior, we can easily see why Searle's #2 is only partially true. Symbol substitution is one form of behavior. Semantics is another. Semantics is "this is that" behavior. This is the basic idea behind a dictionary. The brain associates visual, aural, temporal, and other sensory input, and this is how we acquire meaning. Associating the visual input of a "dog", the sound "dog", the printed word "dog", and the feel of a dog's fur is how we learn what "dog" means. We have massive amounts of data that our brain associates to build meaning. We handicap our machines, first, by not typically giving them the ability to have the same experiences we do. We handicap them, second, by not giving them the vast range of associations that we have. Nevertheless, there are machines that demonstrate that they understand colors, shapes, locations, and words. When told to describe a scene, they can. When requested to "take the red block on the table and place it in the blue bowl on the floor", they can.
Therefore, Searle's #3 is false. Computer programs communicate behavior. Syntax rules are one set of behavior. Association of things, from which we get meaning, is another.
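The universality of NAND claimed in point 3 can be checked directly. A minimal sketch, with gate names of my own choosing, building the other Boolean gates from NAND alone and verifying their truth tables:

```python
def nand(a: int, b: int) -> int:
    """The universal gate: output is 0 only when both inputs are 1."""
    return 0 if (a and b) else 1

# The other Boolean gates, composed solely from NAND.
def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))
def xor_(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Verify every truth table against Python's own Boolean operators.
for a in (0, 1):
    for b in (0, 1):
        assert not_(a) == 1 - a
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor_(a, b) == (a ^ b)
print("All gates verified.")
```

Since any Boolean function can be built from these gates, and memory can also be built from NAND, this is the sense in which NAND gates are building blocks of behavior.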

I was able to correct my behavior by establishing new associations: temperature and libido with German usage of "heiss". That associative behavior can be communicated to a machine. A machine, sensing a rise in temperature, could then inform an operator of its distress, "Es ist mir heiss!". Likely (at least, for now) lacking libido, it would not say, "Ich bin heiss."
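That corrected associative behavior could be communicated to a machine as described. A toy sketch, with the threshold and phrases invented for illustration:

```python
# Hypothetical association: a sensed temperature is linked to the correct
# German phrase, the same association described in the anecdote above.
HOT_THRESHOLD_C = 30.0  # invented threshold, for illustration only

def report_temperature(celsius: float) -> str:
    """Associate a sensor reading with the idiomatic German expression."""
    if celsius > HOT_THRESHOLD_C:
        return "Es ist mir heiss!"  # "To me it is hot" - correct usage
    return "Es geht mir gut."       # all is well

print(report_temperature(35.0))  # Es ist mir heiss!
```

The machine's behavior here is an association ("this reading means that phrase"), not a blind symbol substitution, which is the distinction the argument turns on.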

Having shown that Searle's argument is based on a complete misunderstanding of computation, I wish to address selected statements in his paper.

A Country Doctor In Washington

[updated 2/13/2024]

Monday, 2/12/2024, will be the twentieth anniversary of my Dad's death. My sister graciously reminded me today that I have had Dad's manuscript for his autobiography, "A Country Doctor In Washington", for twenty years, having received it from her so that I could transcribe it into digital form. Dad originally composed it using WordPerfect and saved it on 3.5" floppy disks. If I had the disks, I suppose I could buy an external floppy drive and read the files using LibreOffice or similar. But I don't think the disks exist. Too, he printed out the massive manuscript (I estimate 750 pages) and started proofing it manually.

I started scanning it and used OCR technology to transcribe it. But the OCR software of the time wasn't up to the task. So I set it aside. Today, my wife pulled some boxes out of storage to see if we could get rid of anything and the manuscript was in one of them. OCR technology has greatly improved - I managed to get through 10 pages in about an hour.

This work in progress is

A copy of my parents' final divorce decree was found between the pages of the manuscript.

Dialog with an Atheist #4

Here is another case where an atheist won't answer how they know what they claim to know. Two others are here and here [1]. All of these feature atheists making a claim to knowledge for which they can provide no objective justification. When pressed, they go silent.

@PhilosophieW is the straight man who has (unwittingly) set up the conversation for me (@stablecross) as the joker. The two atheists are @alancolquhoun1 and @theosib2. @theosib2 was featured in the third dialog.

@PhilosophieW: I would describe the perception of God as emotional and/or intuitive perception. I experience a deep connection, closeness, indeed love, often intertwined with a recognition of what is currently right or would be. Nothing extraordinary, and nothing that others do not also report.
@alancolquhoun1: My question asked for a characterisation of its perceptible qualities. I think I should (rationally) interpret your fourth failure to answer my question as either die [sic] to unwillingness or die [sic] to inability. Either way, we're no more enlightened than we were when you joined in.
@stablecross: Perhaps you missed my previous answer to your question. The perceptible qualities are those of sentience, by which you conclude that a being, other than yourself, has an “I”. And we know that one of those qualities isn’t physical construction.[2]
@alancolquhoun1: Sentience qua sentience is imperceptible.
@stablecross: Is your partner sentient?
@alancolquhoun1: Of course. But her sentence [sic] is not perceptible.
@stablecross: Then on what basis should anyone accept your claim when, to all appearances, she’s indistinguishable from a philosophical zombie?

Later, in another thread, @theosib2 wrote:
@theosib2: How else are you going to support something? If you claim something, you also need to provide SOME way to CHECK. "Trust me bro" is not an argument. And arguments grounded in unverified facts are unsound.
Seizing an opportunity I jumped in:
@stablecross: @alancolquhoun1 made the claim that his partner is sentient. I’ve asked why I should accept that she’s sentient and not a philosophical zombie. He hasn’t responded, despite prompts. Perhaps you can help him devise such a way so that it works on her, other animals, and machines?
@theosib2: I haven't ruled out that some humans might effectively be philosophical zombies. The majority of them are on Twitter and Facebook.
@stablecross: That’s irrelevant to the question posed. So what if you make that determination? Why should anyone else believe you?
@theosib2: It's relevant insofar as any kind of humor makes life better. I'm not an expert in philosophy of mind. All I have to go on is some study of neuroscience, which is probably not adequate on its own.
@stablecross: You don’t need Phil Mind or neuroscience. Everything you need to know you should have learned as part of your PhD. You simply cannot determine internal logical behavior by objective external measurement. You know this, because your only answer was “a lookup table.”

And so the conversation with @theosib2 dead ends where it did before, and with @alancolquhoun1 in the same place. The atheist makes claims to knowledge for which they can produce no objective support. Reason fails them when it comes to detection of mind. But they can't admit it, because that would remove one of the stabilizing legs of the chair on which atheist arguments sit.

[1] A third conversation is here, but it's based on philosophical stance instead of intelligence. The claim is that skepticism should be one's default position, which is self-defeating. There has to be a ground of knowledge, which just so happens to be the subjective "I". And that ties these conversations together: The Physical Ground of Logic.