-----------------------------------------------

Analysis and Modeling of Learning Processes for Anticipatory Behavioral Control

Wolfgang Stolzmann & Joachim Hoffmann

The research project "Analysis and Modeling of Learning Processes for Anticipatory Behavioral Control" is funded by the German research foundation "Deutsche Forschungsgemeinschaft" (DFG). In this project we work on a learning mechanism, called Anticipatory Behavioral Control, that was introduced into Cognitive Psychology by Hoffmann in 1993. Our aim is to analyze and model this learning mechanism, i.e. to develop Anticipatory Behavioral Control into a learning algorithm. A basic learning algorithm, the Anticipatory Classifier System (ACS), was developed by Stolzmann (1997). The goals of the research project are:

Many observations in psychology have led to the learning theory of anticipatory behavioral control. Its starting point is the principle that learning can only occur if there is a need to be satisfied. The observation of an operational drive in humans, in addition to the familiar drive for satisfaction, showed that learning cannot rely solely on the satisfaction of basic needs such as food, water, or sex; another type of need must exist. In animal learning it was shown that rats are able to learn without any direct reinforcement, i.e. to learn latently (e.g., Blodgett, 1929; Tolman, 1932; Seward, 1949; Croake, 1971). These and many other experiments indicate that a need for accurate anticipations must exist in higher mammals.

The theory of anticipatory behavioral control evolved out of these observations. It can be outlined as follows: First, a behavior R (response) is always accompanied by its anticipated consequences E (effect) and by the actual situation, abstracted to a condition S (stimulus). Second, the anticipations are continuously compared with the successive perceptions; a confirmed anticipation strengthens, and a disconfirmed anticipation weakens, the relation between the anticipation and the corresponding stimulus-response relation. Finally, inaccurate anticipations lead to further differentiation of the conditions.

In order to realize this learning theory in an artificial system, the anticipations must be represented in some form. This could be done by a recurrent neural network or by explicitly linking rules to possible effects represented by other rules. The most straightforward approach, however, is to build S-R-E rules directly, which is what the ACS does. A detailed introduction to ACS is given by Stolzmann (2000).
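To make the S-R-E idea concrete, the following Python sketch shows one possible reading of such a rule and its anticipation-based update in a simple discrete environment. It is only an illustration of the principle, not the actual ACS algorithm; the class name SREClassifier, the wildcard encoding, and the Widrow-Hoff style quality update with rate beta are assumptions made for this example.

# A minimal sketch of an anticipatory S-R-E rule, assuming situations are
# encoded as fixed-length strings; names and the update rule are illustrative.

class SREClassifier:
    """A condition-action-effect (S-R-E) rule with a quality measure."""

    WILDCARD = "#"  # 'don't care' symbol in condition and effect parts

    def __init__(self, condition, action, effect, quality=0.5):
        self.condition = condition  # S: abstracted situation, e.g. "0#"
        self.action = action        # R: the behavior / response
        self.effect = effect        # E: anticipated changes, "#" = no change
        self.quality = quality      # strength of the S-R-E relation

    def matches(self, situation):
        """Check whether the condition part matches the current situation."""
        return all(c == self.WILDCARD or c == s
                   for c, s in zip(self.condition, situation))

    def anticipate(self, situation):
        """Predict the next situation by applying the effect part."""
        return "".join(s if e == self.WILDCARD else e
                       for s, e in zip(situation, self.effect))

    def update(self, situation, next_situation, beta=0.05):
        """Compare anticipation with perception: a confirmed anticipation
        strengthens the relation, a disconfirmed one weakens it."""
        if self.anticipate(situation) == next_situation:
            self.quality += beta * (1.0 - self.quality)
        else:
            self.quality -= beta * self.quality


# Usage: a rule anticipating that action "push" switches the first bit on.
rule = SREClassifier(condition="0#", action="push", effect="1#")
if rule.matches("01"):
    predicted = rule.anticipate("01")   # -> "11"
    rule.update("01", "11")             # anticipation confirmed: quality rises
print(rule.quality)

A third component of the theory, the differentiation of conditions after inaccurate anticipations, would extend this sketch by specializing the condition part of rules whose anticipations repeatedly fail.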

Publications

-----------------------------------------------
generated: 24 February 2000; last update: 24 February 2000 / WST