Four Basic Questions about Artificial Intelligence – Pei Wang

Guest article contributed to IIIM
by Pei Wang

In this essay, I am going to discuss four basic questions about Artificial Intelligence (AI):

- What is AI?
- Can AI be built?
- How to build AI?
- Should AI be built?

Every AI researcher probably has answers to these questions, and so do many people interested in AI. These questions are “basic” because the answers to many other questions in AI depend on the answers to them.

In the following, I will briefly discuss each of the questions in a casual and informal manner, without assuming much prior knowledge on the part of the reader. Many hyperlinks are embedded to provide more detailed information, usually pointing to on-line materials. This writing makes no attempt to provide a comprehensive survey of the related topics, but only presents my analysis and answers, usually compared with a few other representative answers.

These four questions will be discussed in the given order, because the answer to the “What” question strongly influences the answers to the other questions, so it must be addressed first. After that, if the answer to the “Can” question is negative, it makes little sense to talk about the “How” and “Should” questions. Finally, if nobody knows how to achieve the goal, there is no need to worry about whether it is a good idea to actually create AI.

The aim of AI

Though it is not unusual for researchers in a field to explore different directions, AI is still special in the extent of the diversity among its research goals.

The electronic digital computer was invented in the 1940s mainly to carry out numerical computations, which used to be a unique capability of the human mind. Very soon, some researchers realized that with proper coding, many other intellectual capabilities could be implemented in computers, too. It was then natural for them to ask whether a computer can have all the capabilities of the human mind, usually referred to as “intelligence”, “thinking”, or “cognition”. Among the pioneering thinkers were Alan Turing, Norbert Wiener, and John von Neumann.

The field of AI was founded by a group of researchers in the 1950s, including John McCarthy, Marvin Minsky, Allen Newell, and Herbert Simon. To them, “the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.”

In a general and rough sense, everyone agrees that “AI” means “to build computer systems that are similar to the human mind”. However, since it is obviously impossible for a computer to be identical to a human in all aspects, any further specification of AI must focus on a certain aspect of the human mind as essential, and treat the other aspects as irrelevant, incidental, or secondary.

On the question of where AI should be similar to human intelligence, opinions can be divided into five groups:

  • Structure: Since human intelligence is produced by the human brain, AI should aim at brain modeling. Example: HTM
  • Behavior: Since human intelligence is displayed in human behavior, AI should aim at behavior simulation. Example: Turing Test
  • Capability: Since intelligent people are the ones that can solve hard problems, AI should aim at expert systems in various domains of application. Example: Deep Blue
  • Function: Since intelligence depends on cognitive functions like reasoning, learning, problem solving, and so on, AI should aim at cognitive architectures with these functions. Example: Soar
  • Principle: Since the human mind follows principles that are not followed by conventional computers, AI should aim at the specification and implementation of these principles. Example: NARS

In a paper on the definitions of AI, I argued that all the above opinions on intelligence lead to valuable research results, but they do not subsume each other. I choose the “principle” approach, because I believe it captures the essence of what “intelligence” means in this context, and it gives AI a coherent and unique identity that distinguishes it from the well-established branches of science, such as cognitive psychology and computer science.

Even among the researchers who see intelligence as certain principles of rationality or optimization, there are still different positions on its conditions or constraints. To me, “intelligence” refers to the ability to adapt to the environment while working with insufficient knowledge and resources, which means that an intelligent system should rely on finite processing capacity, work in real time, be open to unexpected tasks, and learn from experience.

According to this opinion, an AI system will be similar to a human being in how its behaviors are related to its experience. Since an AI will not have a human body, nor live in a human environment, its structure, behavior, and capability will still be different from those of a human being. The system will have cognitive functions like a human, though these functions are different aspects of the same internal process, and so often cannot be clearly separated from each other.

Even though such an AI can be built using existing hardware and software provided by the computer industry, the design of the system will not follow the current theories in computer science, since they typically assume some form of sufficiency of knowledge and resources, with respect to the problems to be solved. For this reason, AI is not merely a novel application of computer science.

The possibility of AI

As soon as the idea of AI was raised, the debate on its possibility started. In his historic article “Computing Machinery and Intelligence”, Turing argued for the possibility of thinking machines, and then responded to several objections against it. Even though his arguments are persuasive, they did not settle the debate once and for all. Most of the objections he criticized are still alive in contemporary discussions, though often in different forms.

In the article “Three Fundamental Misconceptions of Artificial Intelligence”, I analyzed a few widespread misconceptions about AI, which are often explicitly or implicitly assumed by various “proofs” of the impossibility of AI, that is, arguments that there is something the human mind can do but computers cannot. One of them is a variant of what Turing called “Lady Lovelace’s Objection”, and nowadays it often goes like this: “Intelligence requires originality, creativity, and autonomy, but computers can only act according to predetermined programs, therefore computers cannot have intelligence.” One version of this argument can be found in the book “What Computers Still Can’t Do” by Hubert Dreyfus.

Though this argument sounds straightforward, its two premises actually describe a system at different levels. Properties like originality, creativity, and autonomy are about the system’s high-level behaviors. To say that they are not “programmed” in an intelligent system means that for any concrete “stimulus” or “problem”, the system’s “response” or “solution” is not fully predetermined by the system’s initial design, but also depends on the system’s history and the current context. On the other hand, to say that a computer’s actions are “programmed” is a statement about the low-level activities of the system. The fact that the system’s low-level activities are predetermined by various programs does not necessarily mean that its behaviors when solving a problem always follow a fixed procedure.

Intuitively speaking, a computer’s activities are controlled by a number of programs designed by human programmers. When the system faces a problem, there are two possibilities. In a conventional computer system, a single program is responsible for this type of problem, so it is invoked to solve the problem. When the system works in this mode, there is indeed no creativity or autonomy. It is fair to say that there is no intelligence involved, and that the system’s behavior is mechanical, habitual, or routine. However, this is not the only possible way for a computer to work. In principle, the system can also solve a problem through the cooperation of many programs, where the selection of the programs and the form of their cooperation are determined by multiple factors that are constantly changing. When the system works in this mode, the process of solving a problem does not follow any single program, and the same problem may get different solutions at different times, even though the system uses nothing but programs to solve it, with no magical or purely random force involved.
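To make this two-mode picture more concrete, here is a small sketch in Python. It is only an illustration of the idea, not code from any actual AI system: the “programs” (solve_by_lookup, solve_by_decomposition, solve_by_analogy), the context structure, and the priority scheme are hypothetical names invented for this example. Each individual program is fixed and deterministic, but which programs get tried, and in what order, depends on the system’s accumulated experience.

```python
# A toy illustration (hypothetical code, not from any actual system): fixed,
# deterministic "programs" can still produce flexible problem solving when
# their selection and cooperation depend on changing experience.

def solve_by_lookup(problem, context):
    # Return a remembered answer, if this exact problem was solved before.
    return context["memory"].get(problem)

def solve_by_decomposition(problem, context):
    # Handle problems of the form "a+b+..." by adding the parts.
    parts = [p.strip() for p in problem.split("+")]
    return sum(int(p) for p in parts) if all(p.isdigit() for p in parts) else None

def solve_by_analogy(problem, context):
    # Reuse the solution of the most recent problem that starts the same way.
    for past_problem, past_solution in reversed(context["history"]):
        if past_problem.split("+")[0] == problem.split("+")[0]:
            return past_solution
    return None

PROGRAMS = [solve_by_lookup, solve_by_decomposition, solve_by_analogy]

def solve_case_by_case(problem, context):
    """Each step runs a fixed program, but the order of attempts is shaped by experience."""
    ordered = sorted(PROGRAMS, key=lambda p: context["priority"][p.__name__], reverse=True)
    for program in ordered:
        solution = program(problem, context)
        if solution is not None:
            context["priority"][program.__name__] += 1   # success raises future priority
            context["history"].append((problem, solution))
            context["memory"][problem] = solution
            return solution
    return None

if __name__ == "__main__":
    context = {"memory": {}, "history": [],
               "priority": {p.__name__: 0 for p in PROGRAMS}}
    print(solve_case_by_case("3+4", context))   # 7, found by decomposition
    print(solve_case_by_case("3+5", context))   # 8; the order of attempts has already shifted
```

The point is not the particular heuristics, but that nothing here is random or magical: every step is a predetermined program, yet the route through the programs, and therefore the observable behavior, changes with the system’s history.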

It can be argued that this picture is not that different from how the human mind works. When facing novel problems, we usually have no predetermined program to follow. Instead, we try to solve them in a case-by-case manner, using all the available knowledge and resources for each case. On the other hand, in each step of the process, what the mind does is neither arbitrary nor magical, but highly regular and habitual. A more detailed discussion can be found in my paper on “Case-by-case problem solving”.

Some other arguments target certain definitions of AI. For example, since a computer will have neither a human body nor human experience, it is unlikely to perfectly simulate human behaviors; therefore, a computer may never be able to pass the Turing Test. Personally, I agree with this argument, but since I do not subscribe to a behavioral definition of AI, the argument has little to do with whether AI, by my definition, can be achieved. Similarly, I do not believe a computer can provide a perfect solution to every problem. However, since this is not what “AI” means to me, I do not think AI is impossible for this reason.

In summary, though human-comparable thinking machines have not been built yet, none of the proposed arguments for their impossibility has been established.

The path toward AI

If it is possible to build AI, what is the most plausible way to do so? In terms of overall methodology, there are three major approaches:

  • Hybrid: to connect existing AI techniques, developed for different purposes, into a single system.
  • Integrated: to build separate modules for the various cognitive functions, and then integrate them within an overall architecture.
  • Unified: to develop a single general-purpose technique, and extend it to cover the various aspects of intelligence.

The selection of methodology depends on the research goal. Since I see intelligence as a general principle, it is natural for me to take the unified approach, by developing a technique in which the principle of intelligence is most easily formalized and implemented. Compared to the other two, the unified approach is simpler and more coherent.

Formalization turns an informal theory into a formal model. When describing a system, three frameworks of formalization are most commonly used:

  • Dynamical system: The system’s state is represented as a point in a multi-dimensional space, and its activity is represented as a trajectory in the space, following differential equations.
  • Computational system: The system’s state is represented as data, and its activity is represented as the execution of programs that process the data, following algorithms.
  • Inferential system: The system’s state is represented as a group of sentences, and its activity is represented as the derivation of new sentences, following inference rules.

Though in principle these three frameworks have equivalent expressive and processing power, for a concrete problem they can differ greatly in ease and naturalness. For (general-purpose) intelligent systems, I prefer the framework of an inferential system (also known as a “reasoning system”), mainly because of the following advantages (a minimal sketch of such a system appears after the list):

  • Domain independence: The design of a reasoning system mainly consists of the specifications of its formal language, semantics, inference rules, memory structure, and control mechanism. All of these components should be independent of the application domain. The domain specificity of the system comes from the content of the memory. In this way, the “nature versus nurture” distinction is clearly made.
  • Step justifiability: Each inference step of the system must follow a predetermined inference rule, which is justified according to the semantics, so as to realize the principle of intelligence. Therefore, in every step, the system is indeed making its best choice, under the restriction of available knowledge and resources.
  • Process flexibility: Even though the individual steps are carried out by given programs, how these steps are linked together is not predetermined by the designer for each case. Instead, it is determined at run time by the system itself, according to several factors that depend on the system’s past experience and current context.
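As a deliberately minimal illustration of this separation between design and content, the following Python sketch puts a tiny formal language, a single inference rule, a memory, and a simple control loop together. It is my own toy example, not NARS or any published system; the class names, the “subject -> predicate” notation, and the deduction rule are inventions of this sketch.

```python
# A minimal, domain-independent reasoning-system skeleton (a sketch, not NARS).
# The language, rule set, memory structure, and control mechanism are fixed by
# the design; everything domain-specific enters through the content of memory.

from collections import deque

class Sentence:
    """A statement in a tiny formal language: 'subject -> predicate'."""
    def __init__(self, subject, predicate):
        self.subject, self.predicate = subject, predicate
    def __repr__(self):
        return f"{self.subject} -> {self.predicate}"
    def __eq__(self, other):
        return (self.subject, self.predicate) == (other.subject, other.predicate)
    def __hash__(self):
        return hash((self.subject, self.predicate))

def deduction(s1, s2):
    """The single inference rule: from 'a -> b' and 'b -> c', derive 'a -> c'."""
    if s1.predicate == s2.subject:
        return Sentence(s1.subject, s2.predicate)
    return None

class Reasoner:
    def __init__(self):
        self.memory = set()        # beliefs (domain-specific content lives here)
        self.tasks = deque()       # control: which sentences to process next

    def add(self, sentence):
        if sentence not in self.memory:
            self.memory.add(sentence)
            self.tasks.append(sentence)

    def step(self):
        """One working cycle: pick a task, try the rule against memory, store results."""
        if not self.tasks:
            return
        task = self.tasks.popleft()
        for belief in list(self.memory):
            for derived in (deduction(task, belief), deduction(belief, task)):
                if derived is not None:
                    self.add(derived)

if __name__ == "__main__":
    r = Reasoner()
    r.add(Sentence("robin", "bird"))
    r.add(Sentence("bird", "animal"))
    for _ in range(3):
        r.step()
    print(r.memory)   # the derived belief 'robin -> animal' appears here
```

Everything in the skeleton is domain-independent; the “robin” and “bird” content lives only in memory, which is the nature-versus-nurture separation described above.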

The major difficulty in building an intelligent reasoning system is that the study of reasoning systems is dominated by mathematical logic, where a system is designed to be axiomatic, meaning that it starts with a set of axioms that are assumed to be true, and then uses truth-preserving rules to derive true conclusions, without considering the resource expense. Under the assumption that intelligence means to adapt with insufficient knowledge and resources, an intelligent system has to be non-axiomatic, in the sense that its beliefs are not axioms and theorems, but summaries of the system’s experience, so every belief is fallible and revisable. Also, the inference rules are no longer “truth-preserving” in the traditional sense, but in the sense that their conclusions properly summarize the evidence provided by the premises. Furthermore, due to insufficient resources, the system usually cannot consider all relevant beliefs when solving a problem, but has to use its available resources to consider only the most relevant and important beliefs. Consequently, my system NARS (Non-Axiomatic Reasoning System) is very different from traditional reasoning systems, and is more similar to the human mind.
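To illustrate what “non-axiomatic” can mean in code, here is a toy sketch of evidence-based beliefs. It is loosely inspired by the way NARS summarizes evidence, but it is not taken from the actual system: the class, the constant K, and the numbers are assumptions made only for this example. Each belief carries a frequency (the proportion of positive evidence) and a confidence that approaches, but never reaches, certainty.

```python
# Toy illustration of "non-axiomatic" beliefs (a sketch, not the actual NARS):
# a belief records the evidence behind it, and every belief remains revisable.

class Belief:
    K = 1.0  # "evidential horizon" constant; the value is an assumption of this sketch

    def __init__(self, statement, positive=0.0, negative=0.0):
        self.statement = statement
        self.positive = positive      # amount of positive evidence collected
        self.negative = negative      # amount of negative evidence collected

    @property
    def frequency(self):
        # Proportion of positive evidence among all evidence seen so far.
        total = self.positive + self.negative
        return self.positive / total if total > 0 else 0.5

    @property
    def confidence(self):
        # Grows with the amount of evidence, but never reaches 1.
        total = self.positive + self.negative
        return total / (total + Belief.K)

    def revise(self, other):
        # Revision rule of this sketch: pool the evidence from two sources.
        assert self.statement == other.statement
        return Belief(self.statement,
                      self.positive + other.positive,
                      self.negative + other.negative)

# Two sources report on the same statement; neither report becomes an axiom.
b1 = Belief("swan -> white", positive=9, negative=1)
b2 = Belief("swan -> white", positive=2, negative=3)
b = b1.revise(b2)
print(round(b.frequency, 2), round(b.confidence, 2))   # 0.73 0.94
```

Because the confidence never reaches 1, no belief ever turns into an axiom: a new batch of evidence can always shift both values, which is the sense in which the conclusions remain summaries of experience rather than eternal truths.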

NARS is an on-going project. For the up-to-date technical descriptions of the system, as well as demonstrations with working examples, visit the project website.

The ethics of AI

Even if in principle we know how to build AI, should we really try it?

The ethics of AI is a topic that has raised many debates, both among researchers in the field and among the general public. Since many people see “intelligence” as what makes humans the dominant species in the world, they worry that AI will take over that position, and that the success of AI will actually lead to a disaster for humans.

This concern is understandable. Though the advances in science and technology have solved many problems for us, they have also created various new problems, and sometimes it is hard to say whether a specific theory or technique is beneficial or harmful. Given the potential impact of AI on human society, we, the AI researchers, have the responsibility of carefully anticipating the social consequences of our research results, and of doing our best to bring out the benefits of the technique while preventing harm from it.

According to my theory of intelligence, the behaviors of an intelligent system are determined both by its nature (design) and by its nurture (experience). The system’s intelligence mainly comes from its design, and is morally neutral. In other words, the system’s degree of intelligence has nothing to do with whether the system is considered beneficial or harmful, either by a single human or by the whole human species. This is because the mechanism of intelligence is independent of the content of the system’s goals and beliefs, which are determined mainly by the system’s experience.

Therefore, to control the morality of an intelligent system means to control its experience, that is, to educate the system. We cannot simply design a “human-friendly AI”; we have to teach an AI to be human-friendly, by using carefully chosen materials to shape its goals and beliefs. Initially, we can load its memory with certain content, in the spirit of Asimov’s Three Laws of Robotics, as well as many more detailed requirements and regulations, though we cannot expect them to resolve all the moral issues.

Here the difficulty comes from the fact that for a sufficiently complicated intelligent system, it is practically impossible to fully control its experience. Or, to put it another way, if a system’s experience can be fully controlled, its behavior will be fully predictable; however, such a system cannot be fully intelligent. Due to insufficient knowledge and resources, the derived goals of an intelligent system are not always consistent with their origins. Similarly, the system cannot fully anticipate all consequences of its actions, so even if its goal is benign, the actual consequence may still turn out to be harmful, to the surprise of the system itself.

As a result, the ethical and moral status of AI is basically the same as that of most other science and technology: neither beneficial in a foolproof manner, nor inevitably harmful. The situation is similar to what every parent has learned: a friendly child is usually the product of education, not of bioengineering, though this “education” is not a one-time effort, and one should always be prepared for unexpected events. AI researchers have to keep the ethical issues in mind at all times, and make the best selections at each design stage, without expecting to settle the issue once and for all, or to cut off the research altogether just because it may go wrong; that is not how an intelligent species deals with uncertain situations.

The answers

Here is a summary of my answers to the four basic questions about AI:

  1. What is AI?
    AI refers to computer systems that can adapt to their environment while working with insufficient knowledge and resources.
  2. Can AI be built?
    Yes, since the above definition does not require anything impossible. The previous failures are mainly due to misconceptions.
  3. How to build AI?
    The most likely way is to design a reasoning system in a non-axiomatic manner, in which validity means adaptation under the restriction of knowledge and resources.
  4. Should AI be built?
    Yes, since AI has great potential in many applications, though we need to be careful all the time to avoid its misuse or abuse.

To show the similarities and differences between my opinions and those of other AI researchers, in the following I select some representative answers to similar questions, also expressed in non-technical language: