Artificial Intelligence started with small data and rich semantic theories. The goal was to build systems that could reason over logical models of how the world worked; systems that could answer questions and provide intuitive, cognitively accessible explanations for their results. There was a tremendous focus on domain theory construction, formal deductive logics and efficient theorem proving. The problem, of course, was the knowledge acquisition bottleneck; it was too difficult, slow and costly to render all common sense knowledge into an integrated, formal representation that automated reasoning engines could digest. In the meantime, huge volumes of unstructured data became available, compute power became ever cheaper and statistical learning methods flourished.
AI evolved from being predominantly theory-driven to predominantly data-driven. Automated systems generated output using inductive techniques. Training machine learning algorithms over massive data produced flexible and capable control systems and powerful predictive engines in domains ranging from language translation to pattern recognition, from medicine to economics. But what do the models mean? From the very inception of Watson, I put a stake in the ground: we would use a diversity of shallow text analytics and leverage loose and fuzzy interpretations of unstructured information. We would allow many researchers to build largely independent NLP components and rely on machine learning techniques to balance and combine these loosely federated algorithms to evaluate answers in the context of human-readable passages. The approach, with a lot of good engineering, worked! Watson became arguably the best factoid question-answering system in the world. WatsonPaths, its descendant, could connect questions to answers over multiple steps, offering passage-based “inference chains” from question to answer without a human writing a single “if-then rule.”
But could it reason over a logical understanding of the domain? Could it dialog fluently, or automatically learn from language and build the logical or cognitive structures that enable and precede language itself? Could it understand and learn the meaning behind the words the way we do? This talk draws an arc from theory-driven AI to data-driven AI and positions Watson along that trajectory. It proposes that to advance AI to where we all know it must go, we need to discover how to efficiently combine human cognition, massive data, and logical theory formation. We need to bootstrap a fluent collaboration between human and machine that engages logic, language, and learning to enable machines to learn how to learn and ultimately deliver on the promise of AI.
David Ferrucci is an award-winning AI researcher. He worked at IBM Research from 1995 to 2012, where he built and led a team of over 40 researchers and software engineers specializing in Artificial Intelligence (AI). In 2007, Dr. Ferrucci personally took on the Jeopardy Challenge – to create an intelligent computer system that could rival human champions at the game of Jeopardy. Watson, the computer system he and his team built, bested the highest-ranked Jeopardy champions of all time on national television in 2011. Now considered a landmark in AI, Watson delivered results that outperformed all expectations, demonstrating world-class performance at a task previously considered well beyond the capabilities of any computer system. His work transformed IBM’s technical and strategic directions and the perception of AI in business forever. Prior to Watson, Dr. Ferrucci was the chief architect and technical lead for UIMA, a software architecture, framework, and standard for unstructured information processing. UIMA provided the foundation for integrating a wide range of text and multi-modal analytics for capturing insights from big data. UIMA became an open-source Apache project and is in wide use around the world. In 2011, Dr. Ferrucci was awarded the title of IBM Fellow and Vice President, and he has won numerous awards for his work on UIMA and Watson, including the Chicago Mercantile Exchange’s Innovation Award in 2010 and the AAAI Feigenbaum Prize for Watson in 2013. Dr. Ferrucci deeply believes that the big advances in Artificial Intelligence will require more than big data; they will require the tight integration of human cognition, knowledge representation, and statistical machine learning.
In 2012, he joined Bridgewater as an AI Researcher and Senior Technologist, where he explores ways to combine learning and semantic technologies to accelerate the discovery, application, and continuous refinement of collaborative intelligent systems in domains including people analytics, enterprise management, and markets.