We've started an "AI Salon" at Stanford where a few of us grad students interested in AI topics get together and chat about recent AI trends, past reflections and future directions. We're kicking things off tomorrow talking about IBM's Watson: "Has IBM’s Watson driven forward research or was it primarily an engineering accomplishment?"

I am one of two people moderating the discussion, and I will be arguing (irrespective of my own opinion :p) that Watson has contributed to driving research forward. I'm preparing by finding links/arguments to support this claim in order to seed the follow-up discussion, and would be curious to hear what others think about the topic as well.

For instance, it can be argued that the main paper published on Watson (http://www.aaai.org/Magazine/Watson/watson.php) is too high level (many details of the involved modules are left out) and doesn't actually offer any contributions that can be easily followed up on by the community. On the other hand, Watson's performance on Jeopardy was very impressive, so the system is at least an example of how far current technology can take us when glued together well; surely that must have some value. In addition, the paper has 330 citations so far, which I'm going through and trying to categorize by citation context.

Another question is whether complicated systems like Watson, with many moving parts and engineered modules, are necessary to achieve that level of performance. Are we missing a more sophisticated, yet-undiscovered algorithm/approach that is much cleaner, more generalizable and less systemy? Or have we pretty much discovered all the necessary algorithms (lego pieces), and all that's left is for someone to build the full lego castle and address the edge cases that come up with hacks?

A few links so far:
DeepQA research team:

Neat video explaining the technology at a high level: