Thursday, August 4th, 09:00-13:00. Seven accepted papers: five full papers and two short papers. Presentations will be around 20 minutes for full papers and 10 minutes for short or position papers. Download all papers compressed (zip) or as individual files below.
- 9:00-9:20 — Self-Programming = Learning about Intelligence-Critical System Features.
- 9:20-9:40 — An Implemented Architecture for Feature Creation and General Reinforcement Learning.
- 9:40-10:00 — Behavioral Self-Programming by Reasoning.
- 10:00-10:20 — Heuristic Search in Program Space for the AGINAO Cognitive Architecture.
- 10:20-10:30 — Thinking Outside the Box: Creativity in Self-Programming Systems.
- 10:30-10:50 — Emergent inference, or how can a program become a self-programming AGI system?
- 10:50-11:00 — Self-Programming through Imitation. J. Storrs Hall.
- 11:00-11:30 — Coffee break
- 11:30-13:00 — Panel Discussion
- A. For an AGI system to choose its own behavior and improve its performance in various situations, should the system modify its own source code, or only modify its data?
- B. Is the set of basic operations of a system constant or variable? If it is constant, then what basic operations are needed for a general-purpose system? If it is variable, then how does the set change?
- C. Does the system directly choose among the basic operations, or also among compound or macro operations that are recursively formed from the basic ones? If it is the latter, then what type of composition mechanism is needed?
- D. How is the system’s knowledge about an operation represented? Is the knowledge certain or uncertain? Is the knowledge given or acquired?
- E. When faced with a goal, does the system work out a complete plan and then follow it, or make decisions only step-by-step? In the former case, how is real-time responsiveness maintained in a dynamic, changing environment? In the latter case, how does the system avoid short-sighted decisions?
- F. When choosing what to do next, does the system exhaustively compare all alternative operations, only compare some of them, compare a randomly selected subset, or simply choose one randomly without comparison with the others?
- G. When comparing alternative operations, does the system take all relevant knowledge into account, or only pay attention to part of it? In the former case, how can this be done in an affordable amount of time? In the latter case, how does the system avoid errors from hasty decisions?
- H. How is the success of an operation evaluated? How reliable is the feedback or reward signal? How does the system turn successful operations into reusable skills?
- I. Can the system always “do the right thing”? If not, in what sense is it “intelligent”? If so, in what sense are its decisions “correct” in an unanticipated situation?
- J. Are the consequences of a self-programming system’s behavior (accurately or approximately) predictable by its designer? Are they predictable by the system itself?
- K. How is self-programming related to other processes in an AGI system? Is self-programming a modular or distributed functionality?
- L. What kinds of meta-programming principles are needed to ensure robustness over a long period of time in the system’s evolution?
- M. What kind of programming language allows itself to be evaluated by programs written in itself? In other words, how do we achieve a sufficient level of semantic transparency for the system to be able to inspect and evaluate itself?
- N. How is architectural granularity related to system modularity, and how are these concepts related to self-programmability?
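To make question C more concrete, here is a minimal sketch (all names hypothetical, not drawn from any of the workshop papers) of one possible composition mechanism: basic operations are primitive functions, while compound or macro operations are sequences that may recursively contain other macros, so a successful sequence can be named and reused as a skill (touching on question H as well).

```python
# Toy composition mechanism: basic operations are functions over a state;
# compound (macro) operations are named sequences of operation names,
# which may recursively refer to other macros.

BASIC = {
    "inc": lambda s: s + 1,
    "double": lambda s: s * 2,
}
MACROS = {}  # name -> list of operation names (basic or macro)

def define_macro(name, steps):
    """Turn a sequence of operations into a reusable, named skill."""
    MACROS[name] = list(steps)

def run(op, state):
    """Execute an operation name: dispatch to a basic operation,
    or recursively unfold a macro step by step."""
    if op in BASIC:
        return BASIC[op](state)
    for step in MACROS[op]:
        state = run(step, state)
    return state

define_macro("inc2", ["inc", "inc"])
define_macro("quad_plus2", ["inc2", "double"])  # macros compose recursively
print(run("quad_plus2", 3))  # (3 + 1 + 1) * 2 = 10
```

A system built this way chooses among a growing, variable set of operations (question B), since every macro it defines becomes selectable alongside the basic ones.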