Watson, the Computer Jeopardy! Champion, and the Future of Artificial Intelligence

Earlier this month, the nation watched as Watson, a computer system designed by IBM, drubbed the two all-time champions of Jeopardy! It was a much more difficult challenge than, say, beating a grandmaster at chess. To win, Watson had to navigate the vagaries of human speech: the idioms, the puns, the cultural references, all the things, in short, that make language delightful and deeply machine-unfriendly. Journalist Stephen Baker spent a year behind the scenes as the team of IBM engineers struggled to design and build Watson in time for the show. He tells the story of the Watson project, and what it means for the future, in his new book, "Final Jeopardy: Man vs. Machine and the Quest to Know Everything." He and Gareth Cook, the editor of Mind Matters, discussed Watson and artificial intelligence.

COOK: For a long time, artificial intelligence was considered a failure. Does Watson represent a new way of thinking about AI?

BAKER: It’s true, the early visions of AI never delivered. It turned out to be a lot harder than many imagined to build systems to handle the complexity and nuance of human communication and thought. You could argue that Watson does not come particularly close, even as it defeats humans in Jeopardy. As long as AI continues to fall short in that area--and it will be a long while--many will view AI as an unfulfilled promise.
However, in the last 15 years or so, there has been tremendous progress in functional aspects of AI. These systems use statistical approaches to simulate certain aspects of human analysis. This includes everything from Deep Blue, IBM’s chess computer, to the computers at Netflix, Amazon and Google, which study people’s behavioral patterns and automatically calibrate their offerings to them.
What’s new about Watson is the extreme pragmatism of the approach. It combines dozens of different approaches to question answering, from statistical to rules-based, and unleashes them on hunts to solve Jeopardy clues. There is no right or wrong approach. The machine grades them by their results, and in the process “learns” which algorithms to trust, and when. Amid the quasi-theological battles that rage in AI, Watson is a product of agnostics. That’s one new aspect. The other is its comprehension of tricky English. But that, I would say, is the result of steady progress that comes from training machines on massive data sets. The improvement, while impressive, is incremental, not a breakthrough.
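
To make Baker's description concrete, here is a minimal sketch of ensemble answer scoring with learned weights. It is illustrative only: the scorer names, the logistic weighting, and the numbers are assumptions for the sake of the example, not IBM's DeepQA code.

```python
# Illustrative sketch of ensemble answer scoring, NOT IBM's DeepQA code.
# Assumption: each "scorer" contributes a feature value for a candidate answer,
# and a learned weight per scorer decides how much to trust it.

import math

def combine_scores(feature_scores, weights, bias=0.0):
    """Logistic combination of per-scorer evidence into a single confidence."""
    z = bias + sum(weights[name] * score for name, score in feature_scores.items())
    return 1.0 / (1.0 + math.exp(-z))  # confidence in [0, 1]

# Hypothetical candidate answer with scores from several independent scorers.
candidate = {
    "keyword_match": 0.8,   # statistical retrieval evidence
    "type_check": 1.0,      # rule-based: is the answer the right kind of thing?
    "popularity": 0.4,      # prior based on how often the answer appears in sources
}

# In practice these weights would be learned from thousands of graded practice clues.
learned_weights = {"keyword_match": 1.2, "type_check": 2.5, "popularity": 0.3}

confidence = combine_scores(candidate, learned_weights, bias=-2.0)
print(f"confidence: {confidence:.2f}")  # buzz in only if this clears a threshold
```

The same combined confidence can then be compared against a buzz-in threshold, which is one way to picture the built-in doubt Baker mentions below.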

Is any of what Watson does based on how the brain works, or is it really just computer scientists trying to solve a problem?

The IBM team paid little attention to the human brain while programming Watson. Any parallels to the brain are superficial, and only the result of chance. I would say that Watson is a true product of engineering: People using available technology to create a machine that meets defined specs by a hard deadline. If certain aspects of the brain had helped them design their circuitry or code their software, I’m sure they would have jumped at them. But the feeling was that decoding human thought was a challenge likely to last decades, and they were in a big hurry.

Does Watson reveal anything about our own thinking?

I find lots of parallels in Watson to our own thinking. Again, this is not because we share the same design, but instead because we’re tackling similar problems. Unlike many computers, for example, Watson is programmed for uncertainty. It’s never sure it understands the question, and never has 100 percent confidence in its response. It always doubts. For a machine operating in human language, that’s a smart approach.

What sort of tasks is Watson good at?

Watson is good at making sense of complex English questions and then digging through millions of electronic documents in search of answers. There's no doubt that we humans understand what we say and write far better than Watson. But it can "read" at great speed. Watson is also very good at a lot of Jeopardy-specific tasks, such as “Before and After” clues. If you ask it, for example, about a moonwalking singer who is also a southern city, it will come up in a second or two with: "What is Michael Jackson Mississippi." That skill is not likely to prove too useful outside of the Jeopardy studio.
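
As a rough illustration of how a "Before and After" response can be assembled, here is a toy sketch that stitches two candidate answers together on the word they share. The function and its logic are assumptions made for illustration, not Watson's actual routine.

```python
# Toy illustration of a "Before and After" merge, not Watson's actual code:
# two candidate answers are stitched together on the word(s) they share.

from typing import Optional

def before_and_after(first: str, second: str) -> Optional[str]:
    """Join two answers on their overlapping words, e.g.
    'Michael Jackson' + 'Jackson Mississippi' -> 'Michael Jackson Mississippi'."""
    a, b = first.split(), second.split()
    for overlap in range(min(len(a), len(b)), 0, -1):
        if a[-overlap:] == b[:overlap]:
            return " ".join(a + b[overlap:])
    return None  # no shared word, so the two answers cannot be stitched together

print(before_and_after("Michael Jackson", "Jackson Mississippi"))
# -> Michael Jackson Mississippi
```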

What does IBM hope to use the technology for?

IBM has high hopes to sell Watson-based technology, or services built upon it, in a wide range of industries. Any company that needs to draw evidence or conclusions from voluminous documents, they believe, could benefit from question-answering technology. An early deal has been signed with Nuance Communications and Columbia University Medical Center to adapt the system for question answering in medicine. Doctors conceivably could ask the machine for diagnoses, or ask whether certain medicines, when combined, have been shown to cause dangerous side effects. IBM also sees Watson as a paralegal, perhaps hunting down precedents in court cases. I think the most likely first job for Watson will be on technology help desks.

Are there other projects outside of IBM that are similar to Watson?

There is a lot of research into question-answering technology. Several years ago, Vulcan Technologies, the incubator run by Microsoft co-founder Paul Allen, launched an AI project, HALO, to teach a computer to pass high school advanced placement tests in chemistry. This project, unlike Watson, was based on knowledge taught to the computer. Since the computer "knew" the relationships between various chemicals, it could reason in a way that would be impossible for Watson. For example, it would know that water freezes at 0 degrees Celsius. Watson, by contrast, could easily find this fact, but could draw no conclusions from it. The downside of HALO, as I describe in the book, is that it cost lots of money to teach the machine, and it was anchored to that small base of knowledge and, consequently, inflexible.
Google is also introducing more question answering into its technology. But the search giant is starting with simple questions calling for factoids, such as, "What is the capital of Mexico?" Google is also doing lots of work on machine translation, where it uses a statistical approach.

What did you find most interesting about seeing Watson being designed?

What I found most interesting was how the team operates in a laboratory built entirely around statistics. Every clue Watson answers generates an enormous spreadsheet, and each of the variables can be tweaked, tested, refined, and then tested again on a blind batch, to see whether the adjustment improves performance on a broader sampling. I imagine it's a bit like the continuous process improvement made famous by Japanese auto manufacturers. But at least the automakers are building cars. Watson, it could be argued, really produces nothing but statistics. Its Jeopardy responses are a byproduct.
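
Here is a small sketch of the tweak-then-retest loop Baker describes, in which a change is kept only if it also helps on a blind batch of held-out clues. The clue data, confidence scores, and threshold parameter are invented for illustration and are not IBM's test harness.

```python
# Sketch of the tweak-then-retest loop described above; the clues, scores,
# and threshold parameter are made up for illustration.

def precision(clues, threshold):
    """Accuracy on the clues the system actually attempts at a given confidence threshold."""
    attempted = [c for c in clues if c["confidence"] >= threshold]
    if not attempted:
        return 0.0
    return sum(c["best_guess"] == c["gold"] for c in attempted) / len(attempted)

# A tiny "blind batch": graded clues held out from the tuning process.
blind_batch = [
    {"best_guess": "Chicago", "confidence": 0.90, "gold": "Chicago"},
    {"best_guess": "Toronto", "confidence": 0.55, "gold": "Chicago"},
    {"best_guess": "1912",    "confidence": 0.70, "gold": "1912"},
]

before = precision(blind_batch, threshold=0.5)  # current setting
after = precision(blind_batch, threshold=0.6)   # proposed tweak
print(f"before: {before:.2f}  after: {after:.2f}")
# The tweak is kept only if it also helps on clues it was never tuned on.
```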

As technology like this becomes more common, what will happen to the way that we think?

We're already looking more and more to our networks for answers. (Just watch anyone with an iPhone.) This trend is bound to accelerate as more sophisticated technologies, like Watson's, become available. As this happens, I think we'll come to view general knowledge as a commodity of lower value. To succeed in the knowledge economy, people increasingly will have to put knowledge to work, coming up with original ideas. Those who fail to do this are likely to be displaced by machines.
But this isn't just an economic issue. There's also the question of what we need and want in our heads to have happier and more fulfilling lives. After all, a person who outsources knowledge-work to the network might end up struggling to carry on interesting conversations, or to make friends. We shouldn't forget that Watson and its ilk are just powerful tools--and that we're the ones with brains.

Are you a scientist? Have you recently read a peer-reviewed paper that you want to write about? Then contact Mind Matters editor Gareth Cook, a Pulitzer Prize-winning journalist at the Boston Globe, where he edits the Sunday Ideas section. He can be reached at garethideas AT gmail.com
