In this chapter Blackburn discusses Searle’s famous "Chinese room" thought experiment.
http://en.wikipedia.org/wiki/Chinese_room

I always thought that a good (at least partial) response was the one Blackburn gives, which is that this is like saying that neurons don’t understand Chinese. I assume Searle anticipated this response, though I haven’t read the literature.
Blackburn mentions this reply, but to use Wikipedia’s description: Searle’s response is that the person in the room could, in principle, memorize all of the rules and carry out the whole process in his head, and he would still not understand Chinese.
This doesn’t seem all that convincing to me, and Blackburn finds it unsatisfactory as well.
I certainly see a difference between saying that a person “understands” Chinese and saying that a computer does, but it seems to be a matter of degree.
As Blackburn notes, it has proven much more difficult for a computer to pass a Turing test of any complexity than was expected in the early days of computer science and AI.
Again from Wikipedia, the Strong AI position is that an appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense that human beings have minds.
I guess I am more sympathetic to the Strong AI position, but I’m very doubtful we’ve seen any technology that comes close to being usefully described in those terms.