Tuesday, 12 February 2013

The algebra and the space of events in the brain


The first aspect concerns the definition of valuation. As a general rule, we can regard as a measure of a dimension (i.e., as a valuation) any method that establishes a one-to-one relation between the dimensions of a certain category and the numbers (integer, rational or real) [1, proposition 164]. All the entities that stand in relation with the same concept are said to belong to the same “category”, and category-membership is the necessary and sufficient condition for the existence of the relation of “greater than”-and-“lower than” [1, proposition 158]. The relation of “greater than”-and-“lower than”, which implies asymmetry, transitivity, and that the relation holds, in one sense or the other, between any couple of terms, is said to be a serial relation and makes it possible to order all the terms that are to be compared.
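To make the idea concrete, here is a minimal sketch in Python; the alternatives, their names and the numeric valuation are purely illustrative assumptions, not taken from [1]. It shows how a valuation that maps each entity of a category to a number induces a serial relation, i.e., an ordering of the terms:

```python
# Minimal sketch: a valuation maps each entity of a category to a number,
# and the numeric order induces a serial ("greater than"/"lower than") relation.

# Hypothetical category of alternatives with an illustrative valuation (utility).
valuation = {"apple": 2.0, "banana": 3.5, "cherry": 1.2}

def greater_than(a, b):
    """The serial relation induced by the valuation."""
    return valuation[a] > valuation[b]

# Asymmetry: if a > b then not b > a.
assert greater_than("banana", "apple") and not greater_than("apple", "banana")

# Transitivity: banana > apple and apple > cherry, hence banana > cherry.
assert greater_than("banana", "apple") and greater_than("apple", "cherry")
assert greater_than("banana", "cherry")

# The relation orders all the terms of the category.
ordered = sorted(valuation, key=valuation.get, reverse=True)
print(ordered)  # ['banana', 'apple', 'cherry']
```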

This means that the comparison among different alternatives, which depends on a common function (say, the utility) for choice, gives rise to an ordered series of measured outcomes. Moreover, given a series of terms, it is possible to establish between any couple of them a relation of distance with respect to some comparative term [1, proposition 160]. The distance, in turn, yields a relation of similarity to the comparative term.
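As an illustrative sketch (the numbers and the similarity formula are assumptions of this example, not taken from [1]), the distance of each valued alternative from a comparative term can be turned into a similarity score:

```python
# Sketch: distances from a comparative term, and a similarity derived from them.
valuation = {"apple": 2.0, "banana": 3.5, "cherry": 1.2}  # illustrative values
reference = 3.0                                           # comparative term

# Distance of each alternative from the reference.
distance = {k: abs(v - reference) for k, v in valuation.items()}

# One possible similarity: larger when the distance is smaller (assumed form).
similarity = {k: 1.0 / (1.0 + d) for k, d in distance.items()}

for k in sorted(similarity, key=similarity.get, reverse=True):
    print(f"{k}: distance={distance[k]:.2f}, similarity={similarity[k]:.2f}")
```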

This is coherent with the so-called internal model (Figure 4) developed in the theory of motor control [2]. If perception is an active process that makes it possible to anticipate the sensory effects of movement, then, from a computational point of view, this implies the existence in the brain of some kind of internal paradigm linking sensory and motor patterns. The internal model would therefore be used by the brain both to perceive movement and to plan actions. The assumption that the brain generates signals not only to control movements, but also to interpret the results of those movements, has been extended in models of adaptive control of movement [3,4][a].

Figure 4. Feedback motor control model.


A motor signal from the central nervous system (CNS) to the periphery is called an efference, and a copy of this signal is called an efference copy. Sensory information travelling from the sensory receptors in the peripheral nervous system to the CNS is called afference. The efference copy is used to generate the predicted sensory feedback (corollary discharge), which estimates the sensory consequences of the motor command (top row). The actual sensory consequences of the motor command (bottom row) are compared with the corollary discharge to inform the CNS about the external action.
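A minimal sketch of this loop is given below; the linear "plant" and forward-model functions are illustrative assumptions, not the models of [2-4]. It compares the corollary discharge with the actual sensory feedback to obtain a prediction error:

```python
# Sketch of the efference-copy loop: a forward model predicts the sensory
# consequence of a motor command, and the prediction is compared with the
# actual afference to yield a prediction error.

def plant(motor_command):
    """Assumed 'true' sensorimotor transformation of the body/environment."""
    return 2.0 * motor_command + 0.3   # arbitrary illustrative dynamics

def forward_model(efference_copy):
    """Assumed internal model; slightly miscalibrated on purpose."""
    return 1.8 * efference_copy + 0.3

motor_command = 0.5                    # efference sent to the periphery
efference_copy = motor_command         # copy routed to the forward model

corollary_discharge = forward_model(efference_copy)  # predicted feedback
actual_feedback = plant(motor_command)               # reafference

prediction_error = actual_feedback - corollary_discharge
print(f"predicted={corollary_discharge:.2f}, actual={actual_feedback:.2f}, "
      f"error={prediction_error:.2f}")
```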

Likely, the DM process determines the firing rates in the neural circuit that may relate to the anticipation of the effects (the utility, e.g. the rewards) of the possible alternatives, based on the estimated distance from some “optimal” level of neural activity (the target). Invoking a Bayesian perspective, the closer the firing rate corresponding to an action comes to the firing rate expected for obtaining the desired reward, the more probable the selection of that action. The measurement of this distance would then characterize the valuation stage of DM (Figure 5).
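One way to make this quantitative is sketched below; the firing rates, the target value and the softmax rule are assumptions of this illustration, not a claim about the actual neural code. The selection probability of each action decreases with the distance of its firing rate from the target:

```python
import math

# Sketch: selection probability decreasing with the distance of each action's
# firing rate from the target firing rate associated with the expected reward.

target_rate = 40.0                                  # assumed target (spikes/s)
firing_rate = {"A": 38.0, "B": 25.0, "C": 42.0}     # illustrative rates

beta = 0.5  # assumed sensitivity of the choice to the distance
score = {a: math.exp(-beta * abs(r - target_rate)) for a, r in firing_rate.items()}
total = sum(score.values())
p_select = {a: s / total for a, s in score.items()}

for a, p in sorted(p_select.items(), key=lambda kv: -kv[1]):
    print(f"action {a}: distance={abs(firing_rate[a] - target_rate):.1f}, "
          f"P(select)={p:.2f}")
```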

Figure 5. The utility from the valuation.



Many processes in nature, including brain processes, are inherently noisy. Because of the presence of noise and for theoretical reasons (Heisenberg’s uncertainty principle), uncertainty rules and absolutely precise measurements are impossible; the quality of a measure is usually assessed in terms of accuracy and repeatability (Figure 6). Therefore, the distance (or similarity or, ultimately, the utility) dimension cannot be characterized by a single value, but needs to be represented by a confidence interval where the real value is likely to be located.
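As a sketch (the sample of noisy measurements and the normal-approximation interval below are illustrative assumptions), repeated noisy measurements of the same distance can be summarized by a confidence interval rather than a single value:

```python
import random
import statistics

# Sketch: repeated noisy measurements of the same "utility distance" summarized
# by an approximate 95% confidence interval (normal approximation assumed).

random.seed(0)
true_distance = 2.0
measurements = [true_distance + random.gauss(0.0, 0.5) for _ in range(30)]

mean = statistics.mean(measurements)
sem = statistics.stdev(measurements) / len(measurements) ** 0.5
ci_low, ci_high = mean - 1.96 * sem, mean + 1.96 * sem

print(f"mean={mean:.2f}, 95% CI=({ci_low:.2f}, {ci_high:.2f})")
```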

Figure 6. Accuracy and repeatability of measurements. Possible configurations of the distance between some outcomes of the “utility” corresponding to some actions and the central target. In the upper row the target is given by a single value, while in the lower row the target is represented by confidence intervals (grey shaded areas). Both situations A and C are accurate, because on average they tend to come close to the target. However, the scores in A are more concentrated (i.e., repeatable) than the ones in C. Thus dispersion does not affect accuracy, but it indicates low precision. In B the outcomes have low dispersion, so they are repeatable and precise, but since they do not tend towards the central area, accuracy is missing. A deviation from the target that is roughly constant and repeatable indicates the presence of a “systematic error”. Situation D shows the joint lack of accuracy and precision.
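A small sketch (the four illustrative samples and the tolerances used to call a set “accurate” or “repeatable” are assumptions of this example) classifies sets of outcomes by their bias with respect to the target and by their dispersion, mirroring panels A-D:

```python
import statistics

# Sketch: classify sets of "utility" outcomes by accuracy (small bias from the
# target) and repeatability (small dispersion), as in panels A-D of Figure 6.

target = 10.0
samples = {                                  # illustrative outcome sets
    "A": [9.8, 10.1, 10.0, 9.9, 10.2],       # accurate and repeatable
    "B": [12.0, 12.1, 11.9, 12.0, 12.1],     # repeatable, systematic error
    "C": [8.5, 11.4, 10.2, 9.0, 11.0],       # accurate on average, dispersed
    "D": [13.0, 7.5, 14.5, 6.0, 14.0],       # neither accurate nor repeatable
}

BIAS_TOL, SPREAD_TOL = 0.5, 0.5              # assumed tolerances

for name, xs in samples.items():
    bias = abs(statistics.mean(xs) - target)
    spread = statistics.stdev(xs)
    print(f"{name}: bias={bias:.2f}, spread={spread:.2f}, "
          f"accurate={bias <= BIAS_TOL}, repeatable={spread <= SPREAD_TOL}")
```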

This means that the numerical space spanned by the measures of distance may be divided into different intervals, possibly with a certain degree of overlap, to each of which a probability corresponds. The attribution of a probability then proceeds from a many-to-one relation (a surjective function) between the intervals of the measured distances (or similarities, or utility values) and the probability measure. In other terms, those intervals form the space of events associated with the valuation phase of the DM process, such that to each event corresponds only one measure of probability, while different events may return equal probabilities.
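A minimal sketch (the bin edges and the simulated measurements are assumptions of this illustration) shows the many-to-one assignment from intervals of measured distance to probabilities estimated from their relative frequencies:

```python
import random
from collections import Counter

# Sketch: intervals of the measured distance act as "events", and each event
# is mapped (many-to-one) to a probability estimated from relative frequencies.

random.seed(1)
measurements = [abs(random.gauss(2.0, 0.8)) for _ in range(1000)]

edges = [0.0, 1.0, 2.0, 3.0, float("inf")]       # assumed interval boundaries
labels = ["[0,1)", "[1,2)", "[2,3)", "[3,inf)"]

def event_of(x):
    """Return the interval (event) to which a measured distance belongs."""
    for lo, hi, lab in zip(edges, edges[1:], labels):
        if lo <= x < hi:
            return lab

counts = Counter(event_of(x) for x in measurements)
probability = {lab: counts[lab] / len(measurements) for lab in labels}
print(probability)   # different events may well share the same probability
```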

The formal definition of “events” is given by the following postulate: 

the events form a complete Boolean algebra

An algebra is a set of entities together with operations and rules that associate the elements of the algebra, and a set of events E1,…,En is a complete Boolean algebra if it is closed under complementation and countable unions of its members. The algebra of events is thus a formal structure for linking the events among themselves.
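A small sketch (the sample space and the chosen family of events are illustrative assumptions) checks whether a finite family of events is closed under complement and union, i.e., whether it forms a Boolean algebra of subsets:

```python
from itertools import combinations

# Sketch: check that a finite family of events (subsets of a sample space)
# is closed under complement and pairwise union, i.e., is a Boolean algebra.

omega = frozenset({"low", "medium", "high"})          # assumed sample space
events = {
    frozenset(),                                       # impossible event
    frozenset({"low"}),
    frozenset({"medium", "high"}),
    omega,                                             # sure event
}

def is_boolean_algebra(family, space):
    closed_complement = all(space - e in family for e in family)
    closed_union = all(a | b in family for a, b in combinations(family, 2))
    return closed_complement and closed_union

print(is_boolean_algebra(events, omega))   # True for this family
```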

The postulate means that the space of events preserves the same properties as (i.e., is isomorphic to) the point sets of the plane, so that the rules for dealing with the events derive from the one-to-one correspondence with set theory. As a consequence, an algebraic structure equipped with finitary operations is needed to map the elements onto the set of the structure, so as to give rise to the choice stage of DM. The measurement of the basic dimensions mentioned above (distance, similarity, utility) involves two binary commutative and associative operations, say addition and multiplication [1, proposition 168], by which the derived space of events may be compared.
The algebra of events is then an algebraic structure which, together with those binary operations and the relation “greater than”-and-“lower than”, forms an ordered numeric field. In this way, finer elaborations of the information also become possible. But since the surjective function from the space of events to their measures of probability assigns random variables to the alternative options, the choice stage ultimately relies on comparisons between the probability distribution functions (PDFs) of those random variables.
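As an illustrative sketch (the two normal distributions assigned to the options are assumptions of this example), the comparison between the options’ random variables can be summarized by the probability that one exceeds the other, estimated by sampling:

```python
import random

# Sketch: the utilities of two options are random variables; the choice stage
# can be summarized by P(U_A > U_B), here estimated by Monte Carlo sampling.

random.seed(2)
N = 100_000
u_a = [random.gauss(2.5, 1.0) for _ in range(N)]   # assumed distribution of A
u_b = [random.gauss(2.0, 1.0) for _ in range(N)]   # assumed distribution of B

p_a_better = sum(a > b for a, b in zip(u_a, u_b)) / N
print(f"P(U_A > U_B) ~= {p_a_better:.3f}")
```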


Figure 7. The valuation-choice circuit.
Both the valuation and the decision stages have a random structure. In the valuation stage, the critical threshold indicates the firing rate of the neuronal populations involved to which the expected (i.e., optimal) reward would correspond. The outputs of this stage are then the differences (or ratios) between the responses of the observed neuronal activity to the stimuli provided by the alternatives and the target. These measurements enter the next stage as an ordered series, where the decision is taken so as to optimize some criterion (e.g., utility). Hence, reaching the threshold in the decision stage indicates the preferred alternative. Feedback information flows from the decision stage to the valuation stage in order to elicit the adaptation of the boundary in the valuation. In this way, a mechanism of reinforcement drives the competition between the alternatives, and the valuation is biased towards the alternative most likely to be rewarded.
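The following sketch mimics this loop; all rates, thresholds and update rules are assumptions of this illustration, not the circuit’s actual implementation. Valuation compares noisy firing rates with a boundary, the decision stage accumulates evidence until a threshold is reached, and feedback adapts the boundary according to the outcome:

```python
import random

# Sketch of the valuation-choice loop of Figure 7 (all numbers are assumed).
# Valuation: noisy firing rates are compared with a boundary (target).
# Decision: evidence, larger when the rate is close to the boundary,
# accumulates until one option reaches the decision threshold.
# Feedback: the outcome adapts the boundary, biasing future valuations
# towards the most probably rewarded alternative.

random.seed(3)
target = 30.0                          # valuation boundary (assumed)
true_rate = {"A": 36.0, "B": 28.0}     # illustrative firing rates
rewarded_option = "A"                  # assumed: only A actually pays off
DECISION_THRESHOLD = 5.0
LEARNING_RATE = 0.2

for trial in range(10):
    evidence = {"A": 0.0, "B": 0.0}
    choice = None
    while choice is None:
        for option, rate in true_rate.items():
            observed = rate + random.gauss(0.0, 2.0)          # noisy valuation
            evidence[option] += 1.0 / (1.0 + abs(observed - target))
            if evidence[option] >= DECISION_THRESHOLD:
                choice = option
                break
    # Reinforcement feedback: pull the boundary towards the rate of a rewarded
    # choice, push it away from the rate of an unrewarded one.
    if choice == rewarded_option:
        target += LEARNING_RATE * (true_rate[choice] - target)
    else:
        target -= LEARNING_RATE * (true_rate[choice] - target)
    print(f"trial {trial}: choice={choice}, boundary={target:.1f}")
```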


Thus, selecting among the alternatives is a relatively easy task if the distribution functions are well separated, and becomes harder when the distribution functions overlap.
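This can be illustrated with a short sketch; the parameter values are assumptions, while the error formula is the standard signal-detection result for two equal-variance normal distributions with equal priors:

```python
import math

# Sketch: how the overlap of two equal-variance normal "utility" distributions
# makes the choice harder. For equal priors, the minimal probability of picking
# the wrong option is Phi(-d/2), with d = |mu_A - mu_B| / sigma.

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

sigma = 1.0
mu_b = 0.0
for mu_a in (3.0, 1.5, 0.5):            # well separated -> strongly overlapping
    d = abs(mu_a - mu_b) / sigma
    error = normal_cdf(-d / 2.0)
    print(f"separation d={d:.1f}: P(wrong choice) ~= {error:.3f}")
```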



Figure 8. The choice stage involves probability distribution functions.
















[a] According to this assumption, when an efferent signal is produced and sent to the motor system, a copy of that signal (the efference copy) is created. The efference copy feeds a forward internal model which generates the predicted sensory feedback (corollary discharge), i.e., an estimate of the sensory consequences of the motor command. Next, a gap analysis of the actual sensory consequences of the motor command versus the corollary discharge would inform the CNS about how much the expected action diverges from the actual external action [5].





  1. Russell, B. (1913). Principia Mathematica. Cambridge University Press, UK.
  2. Robinson, D. (1975). Oculomotor control signals. In Lennerstrand, G. & Bach-y-Rita, P. (Eds), Basic mechanisms of ocular motility and their clinical implications. Pergamon, Oxford, UK.
  3. Wolpert, D.M., Kawato, M. (1998). Internal models of the cerebellum. Trends Cogn Sci. 2: 338-347.
  4. Kawato, M. (1999). Internal models for motor control and trajectory planning. Curr Opin Neurobiol. 9 (6): 718–727.
  5. Morasso, P., Baratto, L., Capra, R., Spada, G. (1999). Internal models in the control of the posture. Neural Networks 12: 1173-1180.  

