Watson’s a Kindle, humans are iPads
I missed the first and last days of IBM Watson’s assault on humanity, played out innocently on a game show. But Tuesday’s edition of Jeopardy alone was as demoralizing for my human side as it was exhilarating for my android side.
Part of the fun is what the IBM Language Team came up with to make humans comfortable in Watson’s presence. The supercomputer had a tad of inflection and a tone of voice that put one in mind of HAL 9000 before, well, you know. Watson mixed up the banter at least once with a “Let’s finish out … ,” instead of just naming the category and amount. Watson displayed some frailty by giving the same wrong answer human competitor Ken Jennings had given just before — I have seen humans do this, so why not a supercomputer?
For many, though, Watson’s weakness wasn’t something with which to commiserate but a way to cling to a small hope that we weren’t sowing the seeds of our own destruction. As Wired put it on Twitter during day two’s massacre: “For those not watching @IBMWatson on Jeopardy, we won’t spoil it, but you might want to stock up on provisions. #skynet”
Watson was remarkable in many mundane ways. It showed the value of pure R&D. It continued an entrepreneurial tradition of showmanship to dramatize technology. It demonstrated, within limits, the kind of natural language processing power that Star Trek fans have always known is the future of computing.
One of those limits was that Watson doesn’t hear — it doesn’t respond to voice commands at all. This is just as well, since a voice interface isn’t ready for prime time. We humans are still required to adapt by providing the verbal equivalent of command-line instructions: particular words in a particular order. However, when I can say “I could really use a burrito” and trigger my car’s on-board computer to start tracking down menus, restaurants and phone numbers — or have it suggest a nice salad instead — then we’ll have something to talk about.
Even if interpretation is perfect, voice input isn’t always preferable. It could be great in a passenger car and for advanced avionics or a hospital’s operating theater — places where acting as fast as you can think is important, or where you need your hands for other things. But it’s the last thing you’d want in an office’s cubicle city.
Which is why Watson’s achievements in bridging the enormous gap between semantic and programmatic language are far more significant than its ability to quickly produce a correct fact, or even sound like one of the guys (it has a simulated male voice, for some reason).
Watson was designed for this specific showdown: It was prepared for simple questions, and “knew” they were questions (well, “answers” in the inverted world of Jeopardy). But the questions themselves were not tailored to accommodate Watson. Rather, it was the other way around — witness the first Final Jeopardy debacle, in which Watson’s question implied it thought Toronto was a U.S. city.
In the end, we have to ask ourselves what the wow factor really was. There are internet search engines that parse semantic language. The first to make this claim, Ask Jeeves, launched way back in 1996. The latest is Wolfram|Alpha. Even Google serves up relevant answers pretty darn quick.
So far it’s much simpler for a machine to process an inquiry in text form than it is to get a machine to “hear” us and interpret our irrational and incomplete ramblings. Indeed, Watson was receiving the Jeopardy answers in text form as the humans were hearing them.
For a studio-bound server farm, Watson’s Jeopardy prowess certainly unleashed all manner of man-versus-machine gallows humor. And the machines we make may someday run amok even if they don’t become self-aware, like runaway trains on steroids.
But even after Watson’s superb Jeopardy showing, I’m not worried. Machines that do one thing well — even better than any human — are nothing to be afraid of. Yet.
Image: HAL 9000, from “2001: A Space Odyssey”. Credit: MGM