
A.I. Anxiety

Posted: Mon Dec 28, 2015 9:01 am
by DWill
johnson1010 had a thread a while back with the great title, "I, for one, welcome our new robot overlords." I thought of that as I read the story "A.I. Anxiety" by Joel Achenbach, a science writer for the Washington Post. It's a good survey of the promise and peril of A.I. that some people might enjoy reading. It seems that, just as with the development of the atomic bomb, some of the scientists most outspoken about the dangers of A.I. are the ones most on the cutting edge of that very field, such as Elon Musk and Stephen Hawking.

http://www.washingtonpost.com/sf/nation ... aianxiety/

Re: A.I. Anxiety

Posted: Thu Apr 14, 2016 10:26 am
by ant
I've just started reading this one. You might like it as well...

http://www.amazon.com/Intelligence-Unbo ... 1118736281

Thanks for that link! I love this topic!

Re: A.I. Anxiety

Posted: Thu Apr 14, 2016 11:02 am
by johnson1010
Well, hold on now...

Are Elon Musk and Stephen Hawking really at the forefront of A.I.? Hawking is a physicist who works mostly on black holes. Musk is an inventor and entrepreneur.
Maybe these guys think about this topic a lot... so do I! But I don't think that qualifies them.

The people at the fore are probably the folks at places like IBM, building Watson, on one end, and Boston Dynamics on the other: cognition and world interactivity, respectively. These are the people I would like to hear from. How far are they from actually getting where they're trying to go? How goddam frustrating is it to inch forward in those fields?

Think of old sci-fi movies. They imagined AI happening on the equivalent of your cell phone's hardware, or far less!
There are chat bots now that can pretty reliably fool judges in Turing-test competitions... but I doubt anyone here would think those chat bots are sentient. It's more an indication of how easy it is to fool people!
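
Just to show how shallow the trickery can be, here's a minimal ELIZA-style sketch in Python (purely illustrative, not any real competition bot): a few regex rules plus canned fallbacks, and people will still read intent into it.

[code]
# A minimal ELIZA-style chatbot: a handful of regex rules and canned
# fallbacks. No model of the world, no understanding, just pattern tricks.
import re

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i think (.*)", "What makes you think {0}?"),
    (r".*\b(mother|father|family)\b.*", "Tell me more about your {0}."),
]
FALLBACKS = ["I see.", "Go on.", "How does that make you feel?"]

def respond(text, turn=0):
    """Return a canned reply by matching the first rule that fits."""
    for pattern, template in RULES:
        match = re.match(pattern, text.lower().strip())
        if match:
            return template.format(*match.groups())
    # Nothing matched: rotate through vague fallbacks to seem attentive.
    return FALLBACKS[turn % len(FALLBACKS)]

print(respond("I feel nervous about robot overlords"))
# -> Why do you feel nervous about robot overlords?
[/code]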

I'm betting we are still pretty far off from the AI we have all been programmed to fear... no matter how creepy BigDog is.

Re: A.I. Anxiety

Posted: Sat Apr 23, 2016 2:53 pm
by ant
The deeper I delve into AI and AGI, the more questions I run into that are not typically discussed.

Here are two that some of you who like a challenge can attempt to answer:

Would the full development of AI be better off and safer if it were worked on and accomplished by a private company of hand-selected individuals, or if it were an open-source endeavor (like Linux)?

As a safety measure against a potential runaway AI that might very well endanger the existence of a much inferior species (namely us), how would you go about programming it for a bias towards friendliness? What methodology would you use?
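
To make that second question concrete, here's the most naive methodology I can imagine, sketched in Python (every name in it is hypothetical, not a real system): weight the agent's utility function so that estimated harm swamps any task reward, and hard-veto anything past a threshold.

[code]
# Toy sketch of a "friendliness bias": bake a heavy harm penalty and a
# hard veto into the agent's action selection. All names are hypothetical;
# this is not a real AI framework, just an illustration of the idea.
from dataclasses import dataclass

FRIENDLINESS_WEIGHT = 1000.0  # harm outweighs any plausible task gain
HARD_VETO_THRESHOLD = 0.5     # actions this harmful are never taken at all

@dataclass
class Action:
    name: str
    task_reward: float    # how well the action serves the agent's goal
    harm_estimate: float  # estimated harm to humans, in [0, 1]

def utility(action):
    """Task reward minus a heavily weighted friendliness penalty."""
    return action.task_reward - FRIENDLINESS_WEIGHT * action.harm_estimate

def choose_action(candidates):
    """Pick the best action, refusing anything past the harm veto."""
    safe = [a for a in candidates if a.harm_estimate < HARD_VETO_THRESHOLD]
    if not safe:
        return None  # better to do nothing than to pick a harmful option
    return max(safe, key=utility)

options = [
    Action("divert_power_from_hospital", task_reward=50.0, harm_estimate=0.9),
    Action("ask_operator_for_guidance", task_reward=1.0, harm_estimate=0.0),
    Action("optimize_within_quota", task_reward=10.0, harm_estimate=0.001),
]
best = choose_action(options)
print(best.name if best else "no safe action")  # -> optimize_within_quota
[/code]

The obvious catch: everything above hinges on harm_estimate being accurate and un-gameable, and building that estimator is exactly the unsolved part. That's why I'm asking what methodology you'd use.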