At its inception in the 1950s, AI aimed at producing human level general intelligence in machines. Within a decade or so the difficulty of that goal became evident, and it was scaled back to one of producing systems displaying intelligence within narrow domains. Over the past few years, however, there has been a resurgence of research interest in the original goals of AI, based on the assessment that, due to advances in computer hardware, computer science, cognitive psychology, neuroscience and domain-specific AI, we are in a far better position to approach these goals than were the founders of AI.
Cognitive science and neuroscience have taught us a lot about what a cognitive architecture needs to look like to support roughly human-like general intelligence. Computer hardware has advanced to the point where we can build distributed systems containing large amounts of RAM and large numbers of processors, carrying out complex tasks in real time. The AI field has spawned a host of ingenious algorithms and data structures, which have been successfully deployed for a huge variety of purposes.
There is no consensus on why all this progress has not yet yielded AI software systems with human-like general intelligence. Our hypothesis, however, is that the main reason is threefold:
- Intelligence depends on the emergence of certain high-level structures and dynamics across a system’s whole knowledge base;
- We have not discovered any single algorithm or approach capable of yielding the emergence of these structures;
- Achieving the emergence of these structures within a system formed by integrating a number of different AI algorithms and structures requires careful attention to the manner in which these algorithms and structures are integrated; and so far the integration has not been done in the correct way.
The human brain appears to be an integration of an assemblage of diverse structures and dynamics, built using common components and arranged according to a sensible cognitive architecture. However, its algorithms and structures have been honed by evolution to work closely together; they are very tightly inter-adapted, in the same way that the different organs of the body are adapted to work together. Due to their close interoperation, they give rise to the overall systemic behaviors that characterize human-like general intelligence.
I suspect that the main missing ingredient in AI so far is cognitive synergy: the fitting-together of different intelligent components into an appropriate cognitive architecture, in such a way that the components richly and dynamically support and assist each other, interrelating very closely in a similar manner to the components of the brain or body and thus giving rise to appropriate emergent structures and dynamics.
With this in mind, one of my conjectures regarding the engineering of AGI is that the cognitive synergy ensuing from integrating multiple symbolic and subsymbolic learning and memory components in an appropriate cognitive architecture and environment, can yield robust childlike intelligence.
The reason this sort of intimate integration has not yet been explored much is that it is difficult on multiple levels, requiring the design of an architecture and its component algorithms with a view toward the structures and dynamics that will arise in the system once it is coupled with an appropriate environment. Typically, the AI algorithms and structures corresponding to different cognitive functions have been developed based on divergent theoretical principles, by disparate communities of researchers, and have been tuned for effective performance on different tasks in different environments. Making such diverse components work together in a truly synergetic and cooperative way is a tall order, yet we believe that this — rather than some particular algorithm, structure or architectural principle — is the “secret sauce” needed to create human-level AGI based on technologies available today.
And so, if this view is correct, it is particularly critical that all of us involved with AGI work together more and more closely — understanding each other’s algorithms and theories, working together on common frameworks, and thinking about how our approaches and systems can synergize to create generally intelligent thinking machines.