
Ch. 13: Losing Our Minds to Darwin

Posted: Thu Feb 16, 2017 11:46 pm
by Chris OConnor
Ch. 13: Losing Our Minds to Darwin
Please use this thread to discuss the above listed chapter of "Darwin's Dangerous Idea: Evolution and the Meanings of Life" by Daniel Dennett.

Re: Ch. 13: Losing Our Minds to Darwin

Posted: Sat Jul 15, 2017 9:07 am
by Harry Marks
More shallow treatment of controversy.

I think I am beginning to get the idea of what is going on here. More on that in a minute.

First, I have really liked some of the material Dennett has presented. For example, his classification of levels of intelligence as "Darwinian", "Skinnerian", "Popperian" and "Gregorian" gives a really useful way of thinking about thinking. An elephant can work on a problem in its head - imagining different actions and the responses - but it is not clear that the elephant can do the Gregorian thing of learning which methods of inquiry are useful.

Another is the contrast between the "commando team principle" of flexibility and the "beaver dam" ability to do something complex but fairly inflexible because it is programmed in.

I suspect that kind of classification may turn out to be very helpful with the "hard problem" of consciousness, but I have my doubts about whether we get that far in this book.

One reason for those doubts is Dennett's seeming obsession with scientific heresy. There is definitely a "team" consciousness in his reading: he is willing to give those on the team a free pass for how others read their ideas, but if any of those readings tempt lay persons to question mechanical explanations (cranes), he begins to treat the heterodox ideas as anathema. The double standard becomes almost glaring in his discussion of "good tries": he waves through people who have been truly nasty and obstructionist, like B.F. Skinner, because their framework was useful (which it surely was and still is), but not analysts whose point he doesn't like.

So, on to the two big controversies. One is over adaptationist explanations of linguistic mechanisms. We can be fairly confident that there are genetic predispositions to language which are intrinsic to Homo sapiens - that is, no members are passing on genes which lack mechanisms for the acquisition of language. This is in contrast with such incidental traits as the ability to process lactose as an adult, or even the inclination to respond to signs of distress in others.

Evidently Gould wanted to argue that this is (or at least could be) an incidental outcome of the general level of intelligence of humans - a spandrel. If we have "Gregorian" capacities for figuring out how to design tools, for example, we must have internal representations of processes, and thus, in some sense, of concepts. The trick of associating a series of sounds with such a concept, and thus being able to discuss it with others, could be non-biologically based, just as the trick of associating a series of pictures with the sounds, and thus being able to preserve the discussion in writing, is surely not biologically based. So far we have a plausible alternative to an adaptationist account, without a clear presentation of that view by Dennett (or of Pinker and Bloom's response). He is more interested in the strange views of the crowd than in giving a fair picture of the items at issue. This leads me, to say the least, to doubt his word on such matters.

So which evolved: the general capacity for thinking about processes, or, in addition, the trick of being able to discuss them with others? Given that Proto-Indo-European had a word for "wagon" but not, apparently, words for many abstract processes, I am in some doubt as to whether the specific evolution of language boosted the evolution of conceptualization. If it did, it was probably only in the last 200 generations or so, which has phenomenal implications for the biological selection which might still be going on. (And it might account for why many working people have trouble putting language to the things they know implicitly about how to go about their work, and thus are not such good trainers of new workers.) So the spandrel hypothesis might make some sense - that we evolved the general capacity for conceptualization before language, and language grew up as a special, and culturally important, outcome.

The problem for that Gouldian view is posed by the mechanisms which do seem to be biologically programmed. And the odd thing is that Noam Chomsky, Gould's partner in heresy, is the one most associated with delineating the evidence for such "structural" biological features in language.

I am not sufficiently acquainted with Chomsky's work to grasp why he objects to the notion that these structures evolved under adaptationist pressure, but I don't find it completely incredible that they could be due to constraints of "physics" rather than natural selection, as Dennett represents his ideas. For example, there are neurological structures in the mammalian optical system which respond to "edges" (especially straight edges) and even "corners". As a result of some of these structures, we have well-known optical illusions which cannot be "unlearned" - that is, even after you know what is going on, you still "see" the illusion. These "artifacts" of the shortcuts created by structures in the optical system are bred in. But to what extent do they represent adaptive problem-solving, and to what extent do they represent constraints on the ability to build flexibility into the problem-solving structures? The latter type of explanation seems to fall into the category of "physics" for Dennett.
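
As a rough analogy - not a claim about how neurons actually compute - here is a minimal sketch of the kind of "edge" shortcut I have in mind, written as a fixed convolution in Python. The kernel and the toy "scene" are invented for illustration; the point is only that a hard-wired filter responds to brightness boundaries no matter what the image depicts.

    # Sketch: a fixed "edge detector" as a convolution, loosely analogous to
    # (not a model of) the hard-wired edge-responsive structures in vision.
    import numpy as np

    def edge_response(image):
        """Apply a simple left-right difference kernel to a 2-D brightness array."""
        kernel = np.array([-1.0, 0.0, 1.0])  # responds to horizontal brightness change
        h, w = image.shape
        out = np.zeros((h, w - 2))
        for r in range(h):
            for c in range(w - 2):
                out[r, c] = np.sum(image[r, c:c + 3] * kernel)
        return out

    # A toy "scene": dark half on the left, bright half on the right.
    scene = np.concatenate([np.zeros((4, 5)), np.ones((4, 5))], axis=1)
    print(edge_response(scene))  # large responses only at the brightness boundary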

What little I know of Chomsky's linguistics (as opposed to his brilliant but deeply flawed political criticism) involves the limited set of possible grammatical orders for Subject, Verb, Object and other syntactical structures. Looking into Wikipedia gives a little more insight: he argues that we have some innate Language Acquisition Device (LAD) which includes structural elements (like the processing of "corners" in the optical system, I gather). A young child associates a sound package or "word" with objects and actions, following the indications given by care-givers.

It strikes me as plausible, but unlikely, that this set of neurological structures is a pure by-product of the general capacity humans have for associating things via abstract representations of those things, and associating them strongly enough that we can process the abstractions themselves. That would be, I suspect, the Gould-Chomsky version.

As a simple "solution" I would propose the following: suppose that LADs evolved as a result of social selection pressures, such as sexual selection, rather than direct adaptation to the environment. That is, wonderful as the ability to discuss problems is, that ability may only have been selected for socially and not for survival reasons. This provides a plausible explanation for the strong biological basis for LADs which seems to be present, while avoiding the somewhat reductionist notion that we have to explain every neurological mechanism involved as a direct solution to some adaptive problem.

This occurred to me not directly as a solution, but as an implication of the Proto-Indo-European evidence, along with the possibility that the main adaptive significance of language might be in the ability to co-ordinate a process with other people, including the process of learning from the experience of others (how deeply to bank a fire at night, for example). One sociobiological theory even has it that the main adaptation is the ability to lie about marital infidelity, and the corresponding selection pressure for the ability to unmask deceptions. So the archetypal sentence may have been "Wasn't me!"

Re: Ch. 13: Losing Our Minds to Darwin

Posted: Sat Jul 15, 2017 9:08 am
by Harry Marks
(Note, my first submission violated guidelines, so I broke it in two to make it acceptable. This seems to have worked.)

The second controversy is around Searle, who has committed the unpardonable sin of questioning the intelligence of Artificial Intelligence. He is quoted in Ch. 8 questioning whether AIs have intentionality, and in this chapter he is quoted for his attack on "functionality" as a useful description of their execution of their designed purpose. I tend to side with Searle on this and, at least at the level Dennett engages, against Dennett.

Searle's view is that following procedures dictated by others may appear to be intelligence, or understanding, but (so far) it lacks something basic. His example is the "Chinese room," in which you feed Chinese writing into a room and out comes a faithful translation into English - does that tell us that we are dealing with an intelligent agent who ***understands*** the message in either language? No, because syntax is not semantics. Now we have Google Translate to use as an example. Does anyone here believe that Google Translate works by understanding its content? (Please tell me you do not.) Semantic issues are issues of the degree of appropriateness of particular descriptions. To assess "Trump is a narcissist" you need complex understanding of both Trump and narcissism. The rules used by Google Translate are not up to such a task. Thus one can create an elaborate set of machinery which accomplishes certain tasks, but the intentionality which is at the heart of understanding is lacking.
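
To make "syntax is not semantics" concrete, here is a minimal sketch (the phrase table is invented, and this is emphatically not how Google Translate actually works): the program "translates" by blind symbol lookup, and it would run identically if every entry were gibberish, because nothing in it represents what the words mean.

    # Sketch of syntax without semantics: translation by blind symbol lookup.
    # The phrase table is made up for illustration; the program never consults meaning.
    PHRASE_TABLE = {
        "ni hao": "hello",
        "xie xie": "thank you",
        "zai jian": "goodbye",
    }

    def translate(phrase):
        # Follow the rule book: look the symbols up, emit whatever the table says.
        return PHRASE_TABLE.get(phrase, "<no rule for this input>")

    print(translate("ni hao"))    # "hello" - correct output, zero understanding
    print(translate("zai jian"))  # "goodbye"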

A system such as "machine learning" which constructs an algorithm for assessing algorithms, and evolves effective algorithms for the purpose specified in its assessment criteria, cannot be said to be creating understanding. Understanding involves the motivation to create (finite in practice but infinite in potential) representations of how things work, for the purpose of being able to interact with those things to achieve as-yet-unspecified objectives. Good understanding creates good representations, and bad understanding creates bad representations. But good functionality of a process does not in any way imply good representation.
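
Here is a hedged toy version of what I mean (the assessment criterion, the data, and every name are invented): a crude search mutates candidate rule tables and keeps whichever scores best against a fixed criterion. It can reach perfect "functionality" on that score while containing no representation at all of what the task is about.

    # Toy "machine learning" in the sense described above: evolve candidate rules
    # to maximize a fixed assessment criterion.  Data and criterion are invented.
    # A high score here means good functionality, not understanding.
    import random

    random.seed(0)
    TRAINING_PAIRS = [("aa", "bb"), ("ab", "bb"), ("ba", "bb"), ("bb", "bb")]
    SYMBOLS = "ab"

    def score(rule):
        """Assessment criterion: fraction of training outputs reproduced exactly."""
        return sum(rule.get(x) == y for x, y in TRAINING_PAIRS) / len(TRAINING_PAIRS)

    def mutate(rule):
        """Randomly rewrite one entry of the candidate rule table."""
        new = dict(rule)
        key = random.choice([x for x, _ in TRAINING_PAIRS])
        new[key] = "".join(random.choice(SYMBOLS) for _ in range(2))
        return new

    best = {}  # start from an empty rule table
    for _ in range(200):
        candidate = mutate(best)
        if score(candidate) >= score(best):
            best = candidate  # keep whatever does better on the criterion

    print(best, score(best))  # a high-scoring table, with no model of why it works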

I have heard Searle give a talk, at U of MN. It was all new to me at the time, and I was baffled by much of it, including the distinction between syntax and semantics. But I distinctly remember that he allowed for the possibility that silicon may one day be given the ability to understand. That is, contrary to what Dennett represents, Searle does believe that Artificial Intelligence is possible. He just puts the bar (appropriately) high for what is to be considered actual intelligence, and most importantly, puts intentionality at the heart of that.

I personally find Searle's discussion of "function" (as represented by Dennett) to be semantically impoverished to the point of being wrong. But Dennett has a few such awkward steps in his dance as well. Back in Ch. 11 (on memes) there was a discussion of how Romeo and Juliet is like West Side Story. He argues that their commonality is "semantic not syntactic." Well, that is an inappropriate use of the distinction, much like Searle's abuse of the term "function." It is bad semantics. What Dennett is after is a "higher conceptual level" of organization of the relevant information. You can set the "essential properties" of Romeo and Juliet in modern New York, change key elements of the story, add music and dance, and still have a recognizable identity between the two. But the higher level of abstraction involved in seeing that identity is not the essence of what makes an issue one of "semantics" rather than "syntax".

And here we arrive at the problem with Dennett's approach, and the reason why he goes all weird on us when smart people say things that he finds objectionable because they hint at some opposition to mechanism ("algorithm") itself. Dennett's vision is that material, mechanical processes are at the heart of both our origins (Darwinism describes these processes) and our consciousness (Artificial Intelligence, he deems, captures these processes). Any suspicion of either of these looks to him like a blind belief in "skyhooks", that is, processes which do not work by any conceivable means with which we are familiar. The idea of a "mind" which could not be captured by AI would be an example. And here we get to the crucial error. For Dennett, there is no difference in kind between algorithmic processes and intelligent functionality, only a difference in degree of complexity. That is, he creates a black box of "complexity" (whose mechanisms we can understand if we simply exercise sufficient diligence) and concludes that it tells us all we need to know about how to get from the "here" of mechanical processes to the "there" of apparent mind.

In principle what he asserts is valid: all the explanations must be mechanical. But turning it into our dominant guide for evaluating explanations is analogous to saying that the principles of biology are just principles of physics. Well, no. Physicists cannot even write down a general solution to the gravitational "three-body problem" - with a computer they can only approximate it numerically, and tiny errors compound - much less tell us everything we need to know about protein folding to understand whether Alzheimer's would be prevented by sufficient interaction with parasites, to cite a recent scientific hypothesis written up in the New York Times.
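
For what it is worth, here is a hedged numerical sketch of the three-body point (masses, starting positions, and step size are all made up): two runs of the same crude integrator, differing by one part in a million in a single coordinate, are compared at the end. How far they drift apart depends on the invented parameters, but that sensitivity to tiny differences is why no amount of computing settles the long-run behaviour once and for all.

    # Sketch: sensitivity to initial conditions in a planar three-body system.
    # All parameters are invented for illustration; a small softening term keeps
    # the integration from blowing up at close approaches.
    import numpy as np

    G, SOFTENING = 1.0, 0.01
    MASSES = np.array([1.0, 1.0, 1.0])

    def accelerations(pos):
        """Newtonian gravitational acceleration on each of the three bodies."""
        acc = np.zeros_like(pos)
        for i in range(3):
            for j in range(3):
                if i != j:
                    diff = pos[j] - pos[i]
                    dist2 = np.dot(diff, diff) + SOFTENING
                    acc[i] += G * MASSES[j] * diff / dist2 ** 1.5
        return acc

    def simulate(pos, vel, steps=2000, dt=0.01):
        """Crude leapfrog integration; returns the final positions."""
        pos, vel = pos.copy(), vel.copy()
        acc = accelerations(pos)
        for _ in range(steps):
            vel += 0.5 * dt * acc
            pos += dt * vel
            acc = accelerations(pos)
            vel += 0.5 * dt * acc
        return pos

    start = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
    vel = np.array([[0.0, 0.1], [0.0, -0.1], [0.1, 0.0]])
    nudged = start.copy()
    nudged[0, 0] += 1e-6  # a one-part-in-a-million nudge to one coordinate

    print(np.linalg.norm(simulate(start, vel) - simulate(nudged, vel)))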

You simply cannot wave your hand and declare that the problems of understanding, say, the relative role of biology in speech versus reading have been solved by the principle of "appeal to cranes, not skyhooks." You can't jump to the conclusion that "the explanations we have are sufficient to account for this if we just apply ourselves diligently" and so attack anyone who questions this as questioning the basic mechanisms which we understand.

It may be that, like Fred Hoyle, those critical gadflies are motivated by some suspicion that skyhooks are at work, but it is ad hominem to make that inference from arguments which struggle with the complexity involved. Dennett himself is clearly settling for oversimplifications (partly motivated by taking sides on controversies, a time-honored method of selling books) and as a result making mutually incompatible statements, so he ought to be making more allowance for the same thing in others.