Don't know whether this has been mentioned here on the forums already, but I'm gonna go ahead and post this anyway. This is too darn interesting to just let blow past ;p
|
Last edited by TimB; 03-01-2011 at 09:08 PM.
I don't reckon there have ever been any leaps for AI software.
|
^Nor could there be. Nevertheless, software incrementally improves with every passing decade. Sooner or later, it will be almost human.
|
What I meant is that there has never been a significant AI advancement of any kind, slow or not, and your post nicely demonstrates the misconception: the approach is always brute force, which has nothing to do with intelligence. AI needs a conceptual enlightenment, not more and more CPU thrown at it.
|
...no it isn't?
|
...yes it is? If you could simulate the interactions of all the molecules in a brain, then that would be very much a brute force approach.
|
Last edited by cmind; 03-01-2011 at 10:30 PM.
Ah, I think I misunderstood (though you were clear as mud). When you said the brain is brute force, I thought you meant that the brain in nature is a brute force approach to intelligence. Did you mean that simulating the brain in silico would be a brute force approach to AI?
|
Indeed, brute force programs haven't been tried until very recently, for the simple reason that no one has had the computing power. But now, projects like Blue Brain are reaching the point where, in the next few years, they might start to take a bite out of the problem. Blue Brain itself is still too high-level and too slow, but its successor should be much more impressive.
|
I think developing AI that fits the structure of computing hardware will be more rewarding than trying to replicate everything biological brains do. As Xei says, an analogous model.
|
---------
Lost count of how many lucid dreams I've had
---------
Abstraction is when you simplify your model in order to gain higher understanding. Yes, it's hard to describe.
|
I agree with the people who said this isn't really a leap in anything. It sounds like a big database, and having a big database doesn't make it any smarter. Heck, it isn't even listening to the questions, which would be a pretty cool advancement in information input. Instead it takes a typed message and just looks up the information in its huge database.
|
Well, I was just watching the episode of Jeopardy, and it does look like it is just looking up answers. Questions with a lot of "key" words in them confuse it. It seems very much like it is looking through a database and picking out whatever matches best, so when multiple key words or phrases are used, it gets confused.
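The lookup behaviour described above can be sketched in a few lines. This is a toy illustration, not Watson's actual algorithm: it scores each stored answer by how many clue words overlap its associated text and picks the highest scorer. The "database" entries and clue are made up for the example.

```python
def tokenize(text):
    # Crude word bag: lowercase and split on whitespace.
    return set(text.lower().split())

# Hypothetical database mapping candidate answers to associated keywords.
database = {
    "Isaac Newton": "english physicist gravity laws of motion calculus",
    "Albert Einstein": "german physicist relativity photoelectric effect",
}

def best_match(clue, db):
    clue_words = tokenize(clue)
    # Score = count of shared keywords; no understanding of meaning involved.
    return max(db, key=lambda answer: len(clue_words & tokenize(db[answer])))

print(best_match("This physicist formulated the laws of motion", database))
# -> Isaac Newton ("physicist", "laws", "of", "motion" all overlap his entry)
```

Note how a clue that happened to share many incidental words with the wrong entry would win anyway, which matches the confusion the post describes when multiple key words collide.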
|
I agree with Alric, and that was the point I was originally making about brute force.
|
Indeed, there's not really anything elaborate about Watson apart from the hardware it's made from. The software is not particularly fascinating.
|
---------
Lost count of how many lucid dreams I've had
---------
We don't have a good handle at all on how humans come up with answers to trivia questions. But the very different patterns of errors committed by human contestants and by Watson suggest that they are approaching these problems in very different ways. That shouldn't be the least bit surprising: the makers of Watson never set out to design a program that would perform a human task the way humans do it; they simply set out to design a program that would perform a human task as well as humans.
|
People do not think like that. As in the example I gave, no one who read Harry Potter would think Harry Potter killed the three people mentioned. The computer doesn't appear to understand the sentences at all; instead it is just chopping the sentences up into key words and then looking them up in its database. I think if you took the sentences and shuffled the words into a random order, the computer would still get the same results, which would show it has no clue what it is reading.
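The word-order claim above is easy to demonstrate: a bag-of-words keyword matcher, a plausible stand-in for the behaviour the poster describes, scores a shuffled clue identically to the original. The reference string and clue below are invented for the demo.

```python
import random

def keyword_score(clue, reference):
    # Bag-of-words overlap: only which words appear matters, not their order.
    return len(set(clue.lower().split()) & set(reference.lower().split()))

reference = "harry potter boy wizard hogwarts"
clue = "this boy wizard attended hogwarts"

shuffled_words = clue.split()
random.shuffle(shuffled_words)
shuffled_clue = " ".join(shuffled_words)

# The score is identical however the clue words are ordered, so this kind
# of matcher literally cannot tell a sentence from its scrambled version.
print(keyword_score(clue, reference) == keyword_score(shuffled_clue, reference))
# -> True
```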
|
The next leap in AI is going to be real speech recognition.
|
Speech recognition is useless if the AI can't understand what words mean.
|
The AI will have "Chinese room" understanding, and that's quite sufficient as long as its rule set is deep enough. And when you get down to it, humans also only have "Chinese room" understanding.
|