Watson / Jeopardy 'robot', what's so cool?



HAHAHAHAH, Senor Hosie to you, Doofuss.
snipped-for-privacy@att.bizzzzzzzzzzzz wrote:

Is it ratings?
THERE IS NO APOSTROPHE IN THE POSSESSIVE ITS!!!!!!!!
Thanks, Rich Grise, Self-Appointed Chief, Internet Apostrophe Police.
wrote:

I knew I screwed that pooch as soon as I hit send. Over-editing does it every time.

...
Its so annoy'ing when peoples' dont do the r'ight thing with aposrophsesses' itsnt it;
--
[A]ll science is lies and the only thing we can trust is right wing rhetoric.
-- snipped-for-privacy@27-32-240-172.static.tpgi.com.au [86 nyms and counting], 14 Jan 2011
wrote:

There, krw, Keith Williams, you've been corrected!!! You can't spell either.
wrote:

What were you doing?... Too busy jerking off!

Hey Shawn, found a job yet?

--
Geez, only your second post to seb and, single-handedly, you've
already branded yourself as an idiotic, combative asshole.
1jam wrote:

(...)
That is what is so cool, and mostly the point.
Perhaps you have been totally nonplussed by a question put to you outside of your current 'context space'? I do that two or three times a day (but I'm old).
Note to self: She asked 'Soup or Salad?' not 'SuperSalad?'
--Winston
On 2/15/2011 10:39 PM, 1jam wrote:

There was an interesting program on PBS's NOVA on this project.
http://www.pbs.org/wgbh/nova/tech/smartest-machine-on-earth.html
1jam wrote:

Presumably, the same as the point of Deep Blue, which kicked that chess champion's ass. I saw that clip. The chess master (was it Kasparov?) was heard to say, "Well, at least it didn't enjoy it."

That was no more allowed than it would have been for a human contestant.
They even rigged up a mechanical hand to press the same button as the humans.
And it's about to win a million dollars, which IBM is going to donate to charity.
And it's just WAY KEWL!
Hope This Helps! Rich

The point is that it took a lot of arm waving and head scratching to get a machine to make the same logical/illogical word associations that a human does, and to learn from its mistakes, deal with puns, associate disparate clues, etc.
It is way more complicated than playing chess, where there is a very limited list of rules and a much more constrained set of possibilities.
Yeah, Google might help, but then it wouldn't be artificial intelligence, just a fancy text-to-speech engine.

The project is motivation for the IBM researchers to push the envelope in natural language processing. The show just makes it fun while also acting as PR for IBM.

It has no speech recognition. The questions are fed to it as text files; that was explained at the beginning of the first show. It does seem to have very good text-to-speech, like you say. But even my GPS is pretty good at reading the names I type into it.

Neither Google nor Wolfram Alpha even comes close to being able to answer Jeopardy questions. Wolfram Alpha is still pitiful and mostly useless; it only looks "good" in a canned demo. Google is a fantastic tool for letting humans find relevant web pages from keywords, but it doesn't have the level of context-based understanding that Watson seems to have. That is, Watson is very good at correctly identifying what subject the question is about from the limited clues in a typical Jeopardy question, and can correctly deduce both what type of answer is called for and which answer is correct.
In addition, Watson is fast. It was built to win the button-pushing race that's a key part of the game. And it doesn't just push the button and then hope it will come up with the answer before it's asked for: it comes up with the answer before it decides to push the button, and will only push the button if its estimated probability of being right is high enough. Yet it still wins the button race most of the time. It's basically making the human champions look like first graders.

Yeah, that's the question: how well can the algorithms they used to make it good at Jeopardy questions be applied to other areas? The IBM video they ran on last night's show implied they would be able to put it to use in the medical field, letting doctors quickly find relevant answers to diagnosis questions.
The question remains: how many English-language facts ended up being hard-coded into Watson to keep it from making stupid mistakes, and how much of its knowledge base was learned just by scanning (reading) documents on its own?

Well, I assume they wouldn't have actually put together the match unless they had gotten it to the point where they expected it would win, so my bet was on Watson as well. At the end of day one, Watson was not in the lead; he was tied with Brad. But now, at the end of day two, Watson is way out in front, maybe something like $36K in winnings vs. $12K for the next closest guy. The other guys spent most of the show watching Watson answer questions.
But Watson does make mistakes. Most of the points the other guys got came when Watson wasn't sure of the answer and didn't buzz in at all. They are showing a graphic at the bottom of the screen with Watson's three top answers and a bar graph showing Watson's estimated probability of each being right. It also has a vertical line showing the buzz-in limit; if the top answer is not past that line, Watson won't push the button.
Most of the time, Watson's top answer is above 90%, and most of the time it's right. When it's wrong, the probability is often much lower. But a few of Watson's wrong answers were way up in the high 90% range, so it was "sure" it was right even though it was dead wrong. I think that happened maybe once on the first night and once on the second.
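For what it's worth, the decision rule they're showing on screen seems simple enough to sketch. Here's a little Python toy of that buzz logic as I understand it from the broadcast graphic; the names and the threshold value are my own guesses, not anything IBM has published:

# Toy sketch of the on-screen buzz rule: rank candidate answers by
# estimated probability of being correct, and only buzz in when the
# top candidate clears the confidence threshold (the vertical line
# in the graphic). All names and the 0.5 value are illustrative.

from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    confidence: float   # estimated probability of being correct, 0..1

BUZZ_THRESHOLD = 0.5    # guessed position of the "buzz-in limit" line

def decide_buzz(candidates):
    """Return the answer to give if we should buzz in, else None."""
    if not candidates:
        return None
    top = max(candidates, key=lambda c: c.confidence)
    return top.answer if top.confidence >= BUZZ_THRESHOLD else None

# Example: three ranked guesses, none confident enough -> stay silent.
guesses = [Candidate("chenille", 0.41),
           Candidate("shift", 0.23),
           Candidate("smock", 0.12)]
print(decide_buzz(guesses))   # prints None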
It also made a stupid mistake on the first night that it was obviously not programmed to avoid. Ken won the buzz-in and gave a wrong answer. Watson then got to try next and gave the same answer as Ken; Watson obviously had no way of knowing what Ken had said, so they both made the same mistake.
It is somewhat amazing to me how much knowledge it seems to have, given how small the server farm is. It's something like 15 racks of 10 servers each, with no connections to the internet or other systems. Yet this small set of servers seems to have encyclopedic knowledge of general human affairs, coded in a way that lets it extract highly relevant answers to Jeopardy-style questions.
I've seen nothing about where Watson gets its information from. Does it digest normal written text documents on its own? (I believe so, but I'm not sure.) And did they let it loose on the internet, or did they hand-feed it high-quality documents, such as maybe Wikipedia?
However, Jeopardy questions are normally answered with a single word, so it's clearly a very specialized type of information extraction. Finding the single word that best fits the clues is not the same thing as being able to produce a natural-language answer to a complex question such as "explain to me how you work, Watson."
With Google being a part of our lives for years, I have to admit you are right that this next "milestone" in human vs. computer contests is not nearly as interesting a challenge as chess was. Even though Google is not tuned to answer Jeopardy questions, you do have to feel Watson is not doing anything that much more special than what Google does. Google ranks web pages from the clue terms instead of ranking single-word answers, but otherwise the technology is no doubt very similar.
--
Curt Welch http://CurtWelch.Com/
snipped-for-privacy@kcwc.com http://NewsReader.Com/

To show how clever they have become with expert systems?

I suspect it demonstrates a lot of clever algorithms at work under the hood, even if it does rely on brute-force search the way Deep Blue did. I suspect it is a long way from being able to hold a normal conversation. We must always be careful of the ELIZA effect.
http://en.wikipedia.org/wiki/ELIZA_effect
JC
casey wrote:

I'm so proud of myself I can't shit! Tonight (well, last night), under the category "Keys on the keyboard" or thereabouts, the clue was "A garment that hangs straight down from the shoulders." Watson guessed "chenille," Ken guessed "A?", and Brad didn't even ring in.
I GOT IT! The answer was "shift!"
I have beat Ken Jennings AND the smartest computer in the world!
(well, on one question, but that's infinitely more than zero!)
Cheers! Rich

I guess you really _didn't_ get it, Casey.
1) It was not hooked up to the internet.
2) The only information it received during the show was a plain-text message of exactly what was on the revealed card, punctuation and all.
3) Nobody created search terms to help it find the answer. It had to extract the meaning of the clue, then formulate its own searches to develop an answer in which it had confidence.
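If you want a feel for what "formulate its own searches" means, here is a deliberately crude Python sketch of that extract-search-score loop. Every function and the toy corpus are hypothetical stand-ins; the real system is vastly more elaborate:

# Crude sketch: pull content words from the clue, search a local corpus
# (no internet), and score each candidate title by how much of the clue
# it covers. Confidence = fraction of clue keywords found in the text.

def clue_keywords(clue):
    """Very rough 'meaning extraction': drop a few stop words."""
    stop = {"a", "an", "the", "of", "in", "that", "this", "from", "it"}
    return [w.strip('.,"?') for w in clue.lower().split() if w not in stop]

def answer(clue, corpus):
    """Return (best_candidate, confidence) over a {title: text} corpus."""
    keywords = clue_keywords(clue)
    if not keywords:
        return None, 0.0
    best, best_conf = None, 0.0
    for title, text in corpus.items():
        text_l = text.lower()
        conf = sum(k in text_l for k in keywords) / len(keywords)
        if conf > best_conf:
            best, best_conf = title, conf
    return best, best_conf

corpus = {   # made-up stand-in for Watson's local document store
    "shift": "a key on the keyboard; also a garment that hangs straight "
             "down from the shoulders",
    "chenille": "a fuzzy yarn or fabric used in garments and blankets",
}
print(answer("A garment that hangs straight down from the shoulders.",
             corpus))   # -> ('shift', ...) with the higher confidence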
FWIW... you don't understand Google either. It does not answer. It gives you a list of references from which you must develop confidence based upon what you already know of the subject on which you searched.
You must read the references, and decide if they're salient to your search.
How do you determine that? What thought process do you use to develop that?
LLoyd
On Feb 17, 11:43 pm, "Lloyd E. Sponenburgh" <lloydspinsidemindspring.com> wrote:

Not sure what you think I don't understand.
One of the things I read about Watson was that one of the key developments was the program's ability to organize information in its memory on its own, without human assistance. Of course, the methods to do this were instructions worked out by the programmers. I admire all the human brain power that must have gone into this, as I also admire the other clever thinking programs of the past, but I don't believe it knows what it is doing in the way humans know things, or that it is capable of enjoying its wins and being annoyed by its mistakes.
JC
wrote:

Google gives you [links to] information, which it ranks itself, by relevance. The 'answer' is in there.

Yes, how to boil out the one-, two-, or three-word answer... an interesting software problem. But still, I fear laypeople will be thinking 'oh wow, the computer is so smart it can play Jeopardy and beat the best of the best! its smarter then us!'.
My thought is: how can it lose? Watson has a database of factual knowledge. A trivia question has *A* right answer. One. If the thing is working, it must win. If it is not working, it may lose (the humans may err a lot too, so it might still win).
Chess, on the other hand, does not have *A* right move (so far as we know), except near the end, when it is possible to find a sequence of moves that results in an inescapable checkmate.
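To illustrate that endgame point, here's a toy Python minimax, the kind of exhaustive search that can prove a forced win once the remaining game tree is small enough. The tree below is made up for the example, not a real chess position:

# Exhaustive game-tree search: a leaf holds the outcome (+1 win, 0 draw,
# -1 loss for the first player); an inner node is a list of moves.
def minimax(node, maximizing):
    if isinstance(node, int):          # leaf: game over, return outcome
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Move A leads to +1 whatever the opponent replies; move B lets the
# opponent steer to -1. Perfect play therefore proves a forced win.
tree = [
    [[1], [1]],    # move A
    [[1], [-1]],   # move B
]
print(minimax(tree, True))   # prints 1: the forced-win line exists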
..so I think playing chess is 'cooler'. :) just my 2 cents

Good for you! I sure didn't know it.
I have to wonder if Watson even knew what the labels on the keyboard were. Did he get any of the answers from that category? Kinda funny that the computer didn't seem to know anything about keyboards. :)
--
Curt Welch http://CurtWelch.Com/
snipped-for-privacy@kcwc.com http://NewsReader.Com/
On 17 Feb 2011 16:59:06 GMT, snipped-for-privacy@kcwc.com (Curt Welch) wrote:

It didn't use speech recognition. It did OCR, reading the questions off the screen as they were shown on TV while being read aloud. Likewise, when another contestant answered correctly, the answer was shown on the screen and it saved that too, to help it pick up clues about the category.
...

It did "know" as it has this and other linked-to articles in its database: http://en.wikipedia.org/wiki/Keyboard
I think it was the Nova episode that said it has local copies of Wikipedia and many more databases of info. They said they decided to exclude Urban Dictionary to keep it from saying profanity on TV.
In that particular situation it perhaps couldn't figure out that the questions involved keys on a computer keyboard. Not figuring out context seems to have been a source of several missed questions.
The NOVA episode was great; I think it explained a lot of what Watson does. It seemed a bit mysterious before, but after seeing the NOVA episode it seems to me like a trick.
