
Ch. 15: The Emperor's New Mind, and Other Fables

Posted: Thu Feb 16, 2017 11:45 pm
by Chris OConnor
Ch. 15: The Emperor's New Mind, and Other Fables


Please use this thread to discuss the above listed chapter of "Darwin's Dangerous Idea: Evolution and the Meanings of Life" by Daniel Dennett.

Re: Ch. 15: The Emperor's New Mind, and Other Fables

Posted: Fri Jul 28, 2017 4:42 am
by Harry Marks
Some interesting stuff in this chapter, but in the end it makes for tedious reading. For Dennett to convince us that it is worth reading his refutations of ideas such as Penrose's claims about "the impossibility of AI" or "quantum gravity effects," he first has to explain why we should take them seriously. In his world, the reputation and prestige of the person putting forward the idea is sufficient reason to take it seriously, but that doesn't make the book worth reading for those of us not competing in the reputation game.

My first, intuitive sense of what Dennett is going on about turns out to be correct and insightful. He sees skyhooks everywhere, and he is bent on showing them to be just that. (To a man with a hammer . . .) Thus his main point is to argue that the example of evolution, in which apparently impossible order turns out to be pretty much explainable if we just take cognizance of the workings of evolution, is a good model for AI. There must be incremental processes at work, algorithmic in the sense of working without any external guidance or sense of the big picture, which nevertheless accomplish the "design work" necessary to give the appearance of intelligence at work.

If he were just glibly throwing around accusations of impossible mechanisms (perpetual motion machines), that would not be interesting. But he makes, IMO, two mistakes that are actually very interesting.

The first is to read unexplained mechanisms (especially if unexplained to him, as in punctuated equilibrium) as skyhooks, when in fact they are efforts at perceiving the nature of the mechanism in question. The problem with Dennett's interpretation of "crane" and "skyhook" is that he treats "mechanisms whose nature we already understand" as synonymous with "mechanisms whose nature is material rather than inherently unknowable." In other words, if a person asserts the likelihood of a mechanism that is material but at present unknown, though eventually knowable, Dennett still perceives a skyhook (perhaps a secret one).

A simple example from a different realm would be Hoyle's perpetual creation of matter. Now, we don't know any mechanism by which that could happen, but frankly we don't know any mechanism by which the Big Bang could happen, either. Hoyle asserted, for reasons with which I am unfamiliar, that perpetual creation was as good a theory as the Big Bang. Dennett would call it a skyhook. Why? Because the mechanism is not yet known, so he perceives it as not a mechanism at all. (It is worth asking why he does not have the same perception about the Big Bang).

Essentially, Dennett is a deductionist rather than an inductionist. He is hyper-suspicious of people using intuition to say, "Well, it looks to me like what is going on is . . ." because he trusts deductive logic, with its necessary and sufficient conditions, but not inductive logic, which perceives a pattern without being able to spell out how we know the pattern is there (much less how the pattern is to be explained). To repeat something I posted at the beginning of all this, it is my observation that policemen of methodological purity are deductionists. Now, in a book about the nature of AI, that is an interesting phenomenon.

The second interesting mistake made by Dennett is to assume away emergent phenomena. It is important to him that there be continuity between the incremental (he calls it "algorithmic") processes at the micro level and the results at the gross, visible level. "Continuity" is my effort at putting a term to what I think I am seeing, which is an inductive process. When we start asserting that there are emergent processes, such as biological processes governed by different principles than the chemical processes of which they are made up, we create a discontinuity in our explanatory process. To assert that biology is not "just" chemistry offends this prejudice of Dennett's. Of course it is just chemistry - what else could it be? Well, the problem is not an assertion that chemistry is not following its rules, or that there is some "non-chemical" process going on, but that the phenomena of biology are not perceivable through our understanding of chemical processes. Our knowledge of chemical processes does not give us the ability to see how biology will work. The problem is not a discontinuity of nature (though Dennett would perceive it as a claim to such) but a discontinuity of mind.

In principle this should not be a problem. Dennett is familiar with the business of emergent properties, and several times he asserts that they must be at work in this or that discontinuity. But he hasn't really taken it on board in the actual cases with which he grapples. In particular, he misreads Searle rather drastically, because Searle is looking at the emergent property of "meaning" or "understanding," and Dennett wants to see this as a denial of the possibility that it could be achieved by programmed computers. Searle most certainly has acknowledged the possibility that programmed computers may one day work via understanding, but has simply denied that the various simulations we have so far can be read as actual understanding. I think he should put more effort into elucidating the mechanism of understanding, but that is beside the point I am making. Dennett wants us to believe there is no discontinuity - that human understanding is "just programming" or "just algorithm" - even though in fact it is working by a process different in kind from the way our programs have worked to date.

So, essentially, Dennett is demonstrating by his own errors the essential nature of the difficulty with which AI is faced. He rejects fumbling efforts at "grasping" or "understanding" emergent phenomena because they are working inductively. But induction is exactly what is missing in current AI to be able to have a process of understanding the world. Animal intelligence builds up internal models of the regularities perceived. This is inductive. It is a matter of guesswork, or not even guesswork but "seeing tigers in the forest" where they may not actually exist. As the models prove to be useful, they are refined into "knowledge". But this is not programmed in by enormously convoluted look-up functions; it is an associative process capable of discerning different types of relationships between phenomena, and making those discerned relationships into (in the human case, at least) "concepts."

If you already know what kinds of relationships you are going to pay attention to, because you are "programmed" that way, then my guess is you will never be said to "understand" them. Understanding emerges at a higher level of mental processing than "prediction" or even "description". This is pretty much what Turing said in his statement quoted by Dennett at the beginning of Ch. 15, that a machine cannot be both infallible and intelligent. To work inductively, you simply have to be able to make mistakes. You have to make guesses, and see if they lead to useful inquiry (which will involve mistakes), and you have to build up that internal representation from the way phenomena actually work, not from the counting and associating that you were told to do.
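
To make that guess-and-refine idea concrete, here is a toy sketch in Python - entirely my own illustration, not anything from Dennett or Turing, and the "hidden rule" is just a stand-in for whatever regularity the world happens to contain. The point is only that the learner's internal model gets built up from its own mistakes, not handed to it in advance.

import random

# Hypothetical stand-in for a regularity in the world; the learner never
# sees this threshold directly. (A made-up toy example for illustration.)
HIDDEN_THRESHOLD = 0.62

def world(x):
    return x >= HIDDEN_THRESHOLD

# The learner's internal model: the range where it still thinks the
# boundary might lie. It only revises this when one of its guesses fails.
low, high = 0.0, 1.0
mistakes = 0

for step in range(50):
    guess = (low + high) / 2.0
    x = random.random()          # a new observation arrives
    prediction = x >= guess      # a fallible guess about what will happen
    outcome = world(x)           # what actually happens
    if prediction != outcome:
        mistakes += 1
        if outcome:              # the boundary must be at or below x
            high = min(high, x)
        else:                    # the boundary must be above x
            low = max(low, x)

print(f"final guess ~{(low + high) / 2:.3f} after {mistakes} mistakes")

The model that survives is "knowledge" only in the thin sense that it has not yet been contradicted - which is exactly why infallibility and intelligence pull in opposite directions.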

AI will someday be able to work inductively. In fact, I would say to some extent the use of self-modifying algorithms has already given it that capacity. But for that to be the main mechanism of AI, at some high level of processing, will be a ways off yet. We are still bossing them around.
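
To be clearer about what I mean by that already-existing capacity, here is another sketch of my own - "self-modifying" only in the weak sense of a program adjusting its own parameters, not rewriting its own code - in which the machine's mistakes drive changes to its internal model rather than a programmer spelling the rule out.

import random

# Hypothetical regularity to be induced: "the first number is bigger than
# the second." The learner is never told this; it only sees outcomes.
def concept(a, b):
    return 1 if a > b else 0

# A perceptron-style learner: its weights are changed only by its own errors.
weights = [0.0, 0.0]
bias = 0.0

for _ in range(2000):
    a, b = random.random(), random.random()
    prediction = 1 if weights[0] * a + weights[1] * b + bias > 0 else 0
    error = concept(a, b) - prediction
    if error != 0:
        # The mistake itself is what modifies the internal model.
        weights[0] += error * a
        weights[1] += error * b
        bias += error

print(weights, bias)  # ends up roughly encoding "a minus b is positive"

But notice how much is still handed to it: I chose which two numbers it attends to and what counts as success. That is the bossing around that still separates this kind of thing from genuine induction.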