recently i was looking at a project that would have used natural language processing (NLP), probably as a proof of concept via python's natural language toolkit (NLTK). so last week, one of the things i did was play with NLTK by building a trivia game bot.

previously (5-6 years ago) i had written my wife an IRC trivia bot based on the blitzed trivia bot (with some minor modifications) and an extensive database of 110,000 questions. "brainiac", as he is known, is still in use and provides hours of diversion.

so, in this NLTK exploration phase, i decided to see if i could write a bot to answer the trivia questions. it turns out you can, but with some (annoying, funny, and otherwise unwelcome) side effects. enter watson. (the use of the watson name is entirely tongue in cheek, if that isn't clear.)

watson - so named for the IBM machine that killed in jeopardy - is a simple python bot that reads a trivia question, looks for answers online, and spits out probable answers. it turns out that this isn't as easy as it may sound.

watson 1.0 - written in about 110 lines of code in about 2 hours - had a simple strategy: take the question from brainiac, feed it to google, use NLTK to find the proper names in the search results that came back, and try those as answers. it "scored" the answers by favoring the ones that occurred most frequently, and in doing so would get 2 or 3 right per game (10 points wins the game). not bad, respectable. when i let it spit out everything, he even won a couple of games. but he would keep spitting out random, senseless answers well past the point where he got the question right, and he wouldn't answer with the right kind of reply for the question, couldn't use a hint, etc. it was brute force, ugly, etc. i could do better.
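the watson 1.0 loop - pull proper names out of search results, then score by frequency - might look something like this. a minimal sketch: the function names and snippets are my own, and a capitalized-span regex stands in for NLTK's `pos_tag` + `ne_chunk` so it runs without the trained models:

```python
import re
from collections import Counter

def extract_candidates(snippets):
    """Pull candidate proper names out of search-result snippets.

    A capitalized-span regex stands in here for NLTK's named-entity
    chunking, which needs downloaded models."""
    name_re = re.compile(r"[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*")
    counts = Counter()
    for text in snippets:
        for name in name_re.findall(text):
            counts[name.lower()] += 1
    return counts

def best_answers(snippets, n=3):
    """watson 1.0's scoring: try the most frequent candidates first."""
    return [name for name, _ in extract_candidates(snippets).most_common(n)]
```

note that sentence-initial capitals leak in as junk candidates, which is exactly why the frequency scoring matters: the real answer tends to recur across snippets while the noise doesn't.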

watson 2.0 - about 300 LoC, written in the course of about 3 hours - has a multi-tier strategy. first, read the question and infer the type of reply required: "how many" means a number, "in what year" means a four-digit number, etc., with a proper name as the fallback. then, just like watson 1.0 before it, search for possible answers: hit google, dig around via NLTK, find the proper names, score them, try the common ones, etc. - the same as watson 1.0, but with a few more smarts. it would also take the question, look at previously seen questions, find similar ones, and try their answers. and when a hint came by, it would turn the hint into a regex and run it over all of the data it had collected - past answers, search results, etc.
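the first tier - inferring the reply type from the question's phrasing - can be sketched with a few string checks. a minimal sketch; the patterns and function name here are my own, not watson's actual rule set:

```python
def infer_answer_type(question):
    """Guess the shape of the expected reply from the question's phrasing."""
    q = question.lower()
    if "what year" in q:
        return "year"          # a four-digit number
    if q.startswith("how many"):
        return "number"
    return "proper name"       # the fallback case
```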

the result? watson 2.0 gets more answers right with less "noise" than watson 1.0. he still hasn't won a game, but he gets things right. watson 2.0's first right answer was, i think, "james woods", hence a running joke in the channel. as i recall, he was also quiet for several questions and then, when that question popped up, got it right on the first try. wow, i thought. i was impressed.

here's an exchange from last night:


22:03 < brainiac> From The vault: Music: Albums: Keith Richards' solo debut

22:03 < playerx> totes

22:03 < watson> don't

22:03 < watson> keith richards

22:03 < playerx> mumble

22:03 < playerx> cigarette

22:03 < playerx> cigarettes

22:04 < brainiac> Here's a hint: t--- -s -h---

22:04 < watson> don't

22:04 < watson> Talk Is Cheap

22:04 < brainiac> 1 point to watson, who gave the correct answer talk is cheap

watson came out of that round with five points. not bad! watson 1.0 and 2.0 are both really good at questions whose answers are people's proper names, like movie and television actors.
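the hint-to-regex trick shows up in that exchange: "t--- -s -h---" becomes a pattern where each dash is an unrevealed letter, which then filters the pile of previously collected candidates. a minimal sketch, assuming brainiac's hint format; the function names and candidate list are my own:

```python
import re

def hint_to_regex(hint):
    """Turn a brainiac hint like 't--- -s -h---' into a regex:
    dashes are unrevealed letters, everything else is literal."""
    pattern = "".join("[a-z]" if ch == "-" else re.escape(ch)
                      for ch in hint.lower())
    return re.compile("^" + pattern + "$")

def match_hint(candidates, hint):
    """Filter previously collected answers against the hint."""
    rx = hint_to_regex(hint)
    return [c for c in candidates if rx.match(c.lower())]
```

run over the collected data, "t--- -s -h---" rules out everything but answers shaped like "talk is cheap", which is how watson narrows down to one try instead of spamming the channel.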

this version of watson ran over the weekend but stopped yielding useful answers. i had a look last night and found that google, which i had been using, had blocked him for a ToS violation. i now use bing, until i get banned there too. google and yahoo have search APIs that cost about 80 cents a day for 1000 queries. is it worth it to me to pay to let watson play? probably not. so i'll either keep violating ToS agreements or shut him down. i had also looked at using wolfram alpha, but hand tests showed it wasn't very useful for my needs.

so, at this point i have explored NLTK, learned some stuff about NLP, and had a lot of fun. if watson has to be shut down i won't be too sad. my goal wasn't to win every game but to compete, and watson has done that. (if he won all the time it would remove the sport of the game.)

one thing i did find in this whole process, by the way, was a student project from this winter (now hosted on github): a bot that plays trivial pursuit.

https://github.com/chriskelvinlee/trivial_pursuit

the code is horribly inefficient - super slow, lots of bad design decisions, etc - but it was fun to look at from an NLTK perspective. i didn't borrow any strategies or code from them.


and that is how i spent a chilly january.
 
Very cool! It would also be interesting to see what kind of results you'd get using a proxy into Siri / Evi... until you busted the rate limit there, too :)

I've also been having a play with NLTK of late, although it's an effort to build tools that assist learning foreign languages (Japanese and English, for a start). During my research I also came across OpenNLP (http://incubator.apache.org/opennlp/) - do you have any experience with whether it's better or worse than NLTK?

I was also very interested to see Stanford is running an NLP course starting this month; perhaps it's of interest: http://www.nlp-class.org/
 
watson, by the way, is now routinely winning games. he's learning as he goes along.