Cognitive Computing: A New Era is Upon Us
On February 14th, 2011, IBM’s cognitive computer, Watson, bested the two all-time top money winners from the TV show Jeopardy! during a live episode. On June 7th, 2014, a computer program posing as a 13-year-old boy named Eugene Goostman passed the Turing Test, convincing 33% of its human conversation evaluators that it was human. And on Friday, March 27th, 2015, a computer program appears to have read a Dow Jones headline announcing that Intel might purchase Altera, prompting it to buy $110,530 worth of stock options that grew in value to over $2.4 million by day’s end.
Humans are teaching computers to learn, to harvest knowledge, and to be self-sufficient. While we work on grand collaborations for the future, smaller programs are already quietly crossing the threshold from an era of computers programmed with predefined datasets and rules to one of computers that can acquire knowledge on their own and use logic to make decisions. These programs are laying the foundation for the future of cognitive computing.
Take the mobile keyboard application SwiftKey. It’s a keyboard designed for swiping rather than tapping individual letters. And, like all mobile keyboards, it’s built with autocomplete to improve accuracy. But SwiftKey’s autocomplete offers more than suggestions from a predefined dictionary; it learns your writing style, your slang, and even your puns and jokes. It builds a taxonomy of your personal language. It’s not just a keyboard; it’s your keyboard.
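To make the idea concrete, here is a minimal sketch of how a keyboard might learn a user’s personal phrasing. It uses simple bigram counts; this is an illustration of the concept, not SwiftKey’s actual algorithm, and all names here are hypothetical.

```python
from collections import Counter, defaultdict

class PersonalPredictor:
    """Toy next-word predictor that learns from the user's own text.
    Real mobile keyboards use far richer language models."""

    def __init__(self):
        # previous word -> counts of the words the user typed after it
        self.bigrams = defaultdict(Counter)

    def learn(self, text):
        words = text.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.bigrams[prev][nxt] += 1

    def suggest(self, prev_word, n=3):
        # Most frequent continuations of this user's own phrasing
        return [w for w, _ in self.bigrams[prev_word.lower()].most_common(n)]

predictor = PersonalPredictor()
predictor.learn("see you later alligator")
predictor.learn("see you at the game")
print(predictor.suggest("you"))  # suggestions reflect this user's habits
```

The more text the predictor sees, the more its suggestions drift toward the individual user’s vocabulary rather than a generic dictionary, which is the essence of “your keyboard.”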
Then there’s Siri. It’s designed to provide the answers we need. It can identify a song that’s playing, find a location for us, and send a text on our behalf, learning as it goes. Our accent and dialect are used to improve the accuracy of natural-language detection, and information on our devices helps it recognize the characteristics of our personal language. Still, it’s a system based primarily on call and response. We ask, it replies.
As we move into more advanced cognitive systems, we’re looking for an evolved relationship with computers: machines we can collaborate with to accomplish great things. We want computers that know us and can help us before we even realize we need it. We want cognitive, thinking systems that provide predictive and preemptive information, or even mine, discern, and process big data all on their own.
I react, therefore I am
We are already witnessing the power of cognitive computing and true human-robot interaction (HRI) in collaborative robots like Baxter, which answers the call for advanced automation in the manufacturing industry. Its creator, Rethink Robotics, calls it a “workforce multiplier” and has made it trainable, meaning it can learn. Baxter is also built with a form of proprioception: it senses the position and movement of its own limbs, which lets it adapt to its environment.
When it comes to applying that intellect to big data, IBM is exploring new territory with its own cognitive computer, Watson. IBM has partnered with TED to make the wealth of knowledge in TED Talks directly accessible to the inquisitive computer. Watson has “watched” and catalogued 1,900 Talks and is prepared to harvest information from them on the fly. Ask Watson a question in natural language, and it will return one or more segments related to the answer.
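As a rough intuition for that question-to-segment matching, here is a toy retrieval sketch that ranks transcript segments by word overlap with a question. It is a stand-in illustration only; Watson’s actual natural-language pipeline is vastly more sophisticated, and the sample “talks” below are invented.

```python
def tokenize(text):
    """Lowercase a string and split it into a set of words."""
    return set(text.lower().replace("?", "").split())

def find_segments(question, segments, top_n=2):
    """Return the segments sharing the most words with the question."""
    q = tokenize(question)
    scored = [(len(q & tokenize(seg)), seg) for seg in segments]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [seg for score, seg in scored[:top_n] if score > 0]

talks = [
    "robots learning to collaborate with humans on factory floors",
    "the history of jazz improvisation",
    "how machine learning helps computers acquire knowledge",
]
print(find_segments("how do computers learn?", talks))
```

Even this crude overlap score surfaces the relevant segment; the leap Watson makes is understanding the question’s meaning rather than just matching its words.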
What do we want in our robots?
We want to interact with computers naturally, even anthropomorphizing them in how we build them. But we know computers are powerful, and artists have long imagined worlds in which they turn on us.
In 1950, Isaac Asimov published the science-fiction story collection I, Robot, which introduced readers to the “Three Laws of Robotics”:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
The idea that we must carefully govern the logic we program into computers stems from an inherent characteristic of the human psyche: survival. If we anthropomorphize computers with personality, natural-language detection, and an array of sensors, we fear they might replace us. Asimov explored this as a necessary consideration, and even coined the term “robopsychology” to underscore its importance.
The human/robot ecosystem
Alan Turing said a computer “would deserve to be called intelligent if it could deceive a human into believing that it was human.” But cognitive computing isn’t about turning machines into pseudo-humans; it’s about defining a symbiotic relationship that extends the unique strengths of humans and machines alike. Rather than programming computers to know everything, we program them to make inferences, so we can collaborate to find answers.
The result is an ecosystem that enables humans and computers to do things together that neither could do alone. By harnessing the computational power of machines, humans can extend their expertise and cognition, not replace it. And together we can rule the world.
– – –
Mark Wyner has been working as a creative professional and technologist for sixteen years, partnering with many Fortune 100/500 companies in UX, ideation, productization, design, and development, in the spaces of web, mobile, wearables, IoT, and automotive IVI systems. He’ll talk about “Cognitive Computing: The Symbiosis of Humans and Machines” at WebVisions Chicago on Sept. 25, 2015.