Can machines think?
- Chris OConnor
- Dexter
Re: Can machines think?
In this chapter he talks about Searle's famous "Chinese room" thought experiment.
http://en.wikipedia.org/wiki/Chinese_room

I always thought that a good (at least partial) response was one that Blackburn gives, which is that this is like saying that neurons don't understand Chinese. I assume Searle anticipated this response; I haven't read the literature.

Blackburn mentioned this, but using Wikipedia's description: Searle holds a philosophical position he calls "biological naturalism" — that consciousness and understanding require specific biological machinery that is found in brains. This doesn't seem all that convincing to me, and Blackburn finds it unsatisfactory.

I certainly see a difference between saying a person "understands" Chinese while a computer does not, but it seems to be a matter of degree.

As Blackburn notes, it is much more difficult for a computer to pass a Turing test of any complexity than it was thought it would be in the early days of computer science and AI.

Again from Wikipedia: Searle identified a philosophical position he calls "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds." The definition hinges on the distinction between simulating a mind and actually having a mind. Searle writes that "according to Strong AI, the correct simulation really is a mind. According to Weak AI, the correct simulation is a model of the mind." I guess I am more sympathetic to the Strong AI position, but I'm very doubtful we've seen any technology that would approach being usefully described in those terms.
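The core of the thought experiment is that the room operates purely syntactically: symbols in, symbols out, by rule, with no access to meaning. A minimal sketch in Python, with a hypothetical rulebook (the phrases and replies are illustrative stand-ins, not from Searle):

```python
# A toy "Chinese room": the operator follows purely syntactic rules,
# mapping input symbols to output symbols with no notion of meaning.
RULEBOOK = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你会说中文吗": "会",      # "Do you speak Chinese?" -> "Yes"
}

def chinese_room(symbols: str) -> str:
    """Return whatever reply the rulebook dictates, or a stock fallback."""
    return RULEBOOK.get(symbols, "请再说一遍")  # "Please say that again"

print(chinese_room("你好吗"))  # prints 我很好
```

From the outside the room "converses"; inside, nothing consults meaning — which is exactly the intuition Searle leans on, and exactly what the "neurons don't understand Chinese either" reply pushes back against.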
- Interbane
Re: Can machines think?
"The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."

It isn't merely a matter of simulating the information. You'd have to simulate the chemistry as well, including some slightly more complicated organic systems. But why would we want to run that experiment? Why would we want to create an intelligence that could know true fear, only to have it panic when it finds itself a man-made creation without a body? Perhaps we would want to create one with only warm, orgasmic feelings. But I don't think our AIs should be created in such a way. If what we seek is the processing of information, then we should leave out emotions. Emotions are an evolutionary construct meant to guide us through tribal life. The same can be accomplished with operating rules: informational only, no pain but also no pleasure.

Such an intelligence, if self-aware, would be alien to us at the level of emergent phenomena. Meaning, we would understand how to build it and would have its blueprints, but we could not close our eyes and imagine what its consciousness feels like, subjectively.
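The "operating rules instead of emotions" idea can be sketched concretely: the protective role an emotion like fear plays is replaced by an explicit condition, with no affective state anywhere. A hypothetical sketch (the rule, threshold, and action names are made up for illustration):

```python
# Hypothetical sketch: fear's protective function replaced by a plain
# operating rule -- a bare condition, no pain and no pleasure involved.
def next_action(integrity: float, task: str) -> str:
    """Choose an action from explicit rules; 'retreat' does what fear would."""
    if integrity < 0.3:  # rule stands in for a pain/fear response
        return "retreat"
    return task

print(next_action(0.9, "explore"))  # explore
print(next_action(0.1, "explore"))  # retreat
```

The agent behaves self-protectively without anything it is like to be afraid — which is the "informational only" design the post is arguing for.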
“In the beginning the Universe was created. This has made a lot of people very angry and has been widely regarded as a bad move.” - Douglas Adams
- Dexter
Re: Can machines think?
Interbane wrote: "Why would we want to create an intelligence that could know true fear, only to have it panic when it finds itself a man-made creation without a body? Perhaps we would want to create one with only warm orgasmic feelings. But I don't think our AIs should be created in such a way. If what we seek is the processing of information, then we should leave out emotions. Emotions are an evolutionary construct meant to guide us through tribal life. The same can be accomplished with operating rules. Informational only, no pain but also no pleasure."

I don't think we really have a choice; we're not about to collectively decide not to pursue more and more advanced technology.
http://en.wikipedia.org/wiki/Friendly_a ... telligence