April 26, 2017
Complex Behavior Versus Intelligence
The race to Artificial Intelligence is a grueling, sweaty marathon that human beings have been running for over 70 years. We’ve faced steep climb after climb, the gradients cranking higher every time we run into a new funding gap or technological roadblock. Recent advances in deep learning and neural networks are leading some to wonder if we might, finally, be approaching the finishing line.
We were all impressed when Google DeepMind’s AlphaGo program beat a human champion at the game of Go. Despite its simple rules, the ancient Chinese board game is mind-bogglingly complex, with more possible board positions than there are atoms in the observable universe. Mastery of Go is supposed to be the ultimate expression of human intuition – so when world champion Lee Sedol was beaten by a computer program, Google’s groundbreaking AI made headlines worldwide.
And from predicting the outcome of the US election to rushing an incapacitated driver with a blood clot to hospital, the achievements of artificial intelligence continue to excite.
"What we want is a machine that can learn from experience," adding that the "possibility of letting the machine alter its own instructions provides the mechanism for this." – Alan Turing, London 1947.
But really, if our end goal is a general artificial intelligence that can solve humankind’s biggest problems (climate change, inequality, resource depletion), then current techniques are running with their legs tied. To teach itself how to play, AlphaGo used the cutting edge of AI: deep learning over neural networks. These networks are inspired by the human brain, but they do not think like one. In truth they are largely deterministic systems, meaning that the same set of inputs produces the exact same output, every single time.
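That determinism is easy to see in miniature. The toy "network" below is our own illustration – the weights are arbitrary made-up values, not from any trained model – but the point holds for networks of any size: with fixed weights and no internal state, identical inputs always yield identical outputs.

```python
import math

# Hypothetical fixed weights: 2 inputs -> 2 hidden units -> 1 output.
W_HIDDEN = [[0.5, -1.2], [0.8, 0.3]]
W_OUT = [1.0, -0.7]

def forward(x):
    """One forward pass through a tiny feedforward net with tanh activation."""
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W_HIDDEN]
    return sum(w * h for w, h in zip(W_OUT, hidden))

# No memory, no state: a thousand runs on the same input give one value.
runs = [forward([0.2, -0.4]) for _ in range(1000)]
assert all(r == runs[0] for r in runs)
```

However impressive the behavior such a system learns, the mapping from input to output is fixed once the weights are fixed.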
We are right to begin dreaming about the possibilities such feats inspire, and it's true we are living in exciting times. But if we are to take the first steps toward a machine brain, we must be wary of committing the first sin of AI research: mistaking complex behavior for intelligence.
Humor, Creativity and Wisdom
Consider the female digger wasp. Before she brings food into her burrow, she drops it off at the entrance and goes inside to check for intruders. Only once satisfied her nest is safe will she bring in the food. If, while the wasp is busy securing her burrow, a meddling human moves the food a couple of inches, the wasp will move the food back to its original drop-off point and repeat her burrow-check all over again. Because the wasp is not capable of remembering she just checked the nest, she can be made to repeat this cycle of behavior more than 40 times. Complex and premeditated it may be – but her behavior is not intelligent.
Humans’ unique combination of experiential learning, the ability to abstract concepts, and deft extrapolation leads to seemingly-incomprehensible phenomena like humor, creativity and wisdom. Even path-breaking technologies like AlphaGo and Watson come nowhere close.
Learning from large-but-limited datasets, most commercial applications of artificial intelligence exhibit cognitive shortcomings similar to the wasp. You might ask why this is important: Is it not enough that we have the complex behavior? If a computer can automate a human task, does it really matter if that computer doesn’t exhibit real human intelligence?
Well, yes. Eminent AI thinker Jack Copeland contrasts the behavior of the digger wasp – and by extension most 21st-century AI – with behavior produced by human intelligence. In the same situation a human would probably wonder who had been messing about with their food, but they would not repeat the ritual of checking that their home was free of invaders. We know we don’t need to check again because our intelligence has evolved to enable not only experiential learning, but also the extrapolation of those experiences to new situations. It's obvious the burrow is still safe; without a moment’s thought we would adapt our behavior.
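The contrast Copeland draws can be caricatured in a few lines of code. The scenario and names below are our own illustration: both agents face the same moved-food trick, and the only difference is that the second carries a single bit of memory forward from one encounter to the next.

```python
def wasp_trial(times_food_moved):
    """A stateless agent: every arrival at the burrow triggers a full check."""
    checks = 0
    for _ in range(times_food_moved + 1):  # initial visit, plus one per trick
        checks += 1  # no memory of earlier checks, so check all over again
    return checks

def human_trial(times_food_moved):
    """A stateful agent: remembers the burrow was already verified safe."""
    checks = 0
    burrow_known_safe = False
    for _ in range(times_food_moved + 1):
        if not burrow_known_safe:
            checks += 1
            burrow_known_safe = True  # experience carried into the next visit
    return checks

assert wasp_trial(40) == 41  # tricked into 41 identical checks
assert human_trial(40) == 1  # one check, then the behavior adapts
```

The stateless loop is every bit as "complex and premeditated" as the wasp's routine, which is exactly the point: complexity of behavior says nothing about the presence of learning.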
A digger wasp is incapable of learning from experiences at a conceptual level. So how can we make artificial intelligence smarter?
Copeland defines intelligence as the ability to adapt one’s behavior to fit new circumstances, and this is what made AlphaGo so special: through deep learning, it could extrapolate its experience of playing Go to new in-game situations. Ask it to perform any task outside the game, however – even one as simple as booking the flights for your next holiday – and it could not. It would have to be equipped with new tools, connected to the appropriate interfaces, trained again from scratch and possibly rewritten in large part before it could do anything in the non-Go world.
Robert Epstein, a research psychologist, points out that the human brain does not store facts or data the way a computer does; it continuously edits itself and relives experiences, like an artist repainting a landscape.
If we are to build a truly intelligent artificial intelligence, we need to build something like a machine brain, and to build a machine brain we need to start being more ambitious.
The Incredible Wisdom of the Human Thought Process
Trying to imitate the human thought process is intoxicating. When we build tactical cognitive computing systems at Coseer, a key design principle is to encapsulate every data point in the form of an idea – concepts, rather than keywords. This is still nothing close to the processing power of a human brain, but even a relatively simple emulation leads to incredible results: accuracy shoots up, latencies and training times collapse, and the platform can now afford advanced concepts like cognitive calibration which, in turn, add more to the machine’s power.
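To illustrate the difference between keywords and concepts – this is a toy sketch of the general idea, not Coseer's actual implementation – compare naive keyword overlap with a lookup that first maps surface words to a shared concept id. The concept table here is a hypothetical hand-built example:

```python
# Hypothetical concept table: surface words -> concept ids.
CONCEPTS = {
    "car": "VEHICLE", "automobile": "VEHICLE", "sedan": "VEHICLE",
    "physician": "DOCTOR", "doctor": "DOCTOR",
}

def keyword_match(query, document):
    """Naive keyword overlap: synonyms never match each other."""
    return bool(set(query.split()) & set(document.split()))

def concept_match(query, document):
    """Match at the concept level: 'car' and 'automobile' collide."""
    def to_concepts(text):
        return {CONCEPTS.get(word, word) for word in text.split()}
    return bool(to_concepts(query) & to_concepts(document))

assert not keyword_match("car insurance", "automobile coverage")
assert concept_match("car insurance", "automobile coverage")
```

Working at the level of concepts means the system does not need to have seen the exact same words before – one reason pre-tagged training data matters so much less with this approach.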
Anyone who has trained an AI system before will understand the real power of this approach. To train effectively, current technologies like deep learning need massive datasets, pre-tagged and appropriately labeled. Such data is easy to produce for tables and numbers, but nearly impossible to create for amorphous natural language. Humans don’t think in those terms. Neither does Coseer.
All this with a relatively simple emulation of the human thought process.
Designing algorithms to mimic human thought processes makes business sense for real-world problems even today: finding actionable stock market insights in three million documents every day, assisting doctors through oncological pathways, and seamless knowledge management are just a few examples. Coseer is already helping some of the largest and most innovative organizations across sectors.
Cobrain – First Steps to a Machine Brain
Almost by accident, Coseer’s technological foundation comes closer to emulating human thought processes than many other technologies do. As we continue down this incredible journey, we ask two simple questions:
- First, what happens when each application can learn not only from the data it sees, but also from the experiences of every other application, and from the data those applications see?
- Second, what happens when each application (and all of them collectively) can learn from the repository of collective human knowledge, available in the form of the Internet?
We are now writing the code for this mechanism. We call it Cobrain, and it has the power to disrupt existing business models in almost every industry you can think of. The most tantalizing immediate possibility is that Cobrain applications will be highly accurate without the need for training.
We are also waiting to see what happens when lessons from different domains collide – something fantastic, we are confident.
The long-term goal is bolder still: Cobrain will set the technology free to pursue true intelligence. It will let AI applications benefit from knowledge that already exists. Cobrain-based applications will ponder on their own time, reaching decisions that are more accurate and held with greater confidence. And each time that happens, every application will become smarter.
These are still baby steps, and new steep climbs surely loom on the horizon. For instance, we deal with highly sensitive client data, so how can we share experiences with a central mechanism without violating confidentiality? And how can we write code today that accommodates every possible application Cobrain might have tomorrow?
What gives us hope is this: three years ago we were asking similarly unanswerable questions; competing against companies like IBM Watson that are outspending us a million to one; and wondering how we could even begin to write software for cognitive computing when a thriving open-source community had largely been unable to. Yet here we stand today, with a client portfolio brimming with Fortune 100 companies and one of the most exciting engineering teams on the planet.
The marathon continues, but our feet are finally untied.