RESEARCH INTERESTS

(November 2003)

J. Kevin O'Regan



CURRENT WORK:


A sensorimotor theory of phenomenal consciousness

Pursuing the idea of the "world as an external memory" (O'Regan, 1992) and making use of our previous surprising results on Change Blindness, I have put forward a more general theory of what causes the experience of vision, and of what differentiates visual sensations from other sensations, such as auditory or tactile ones. A target article in Behavioral and Brain Sciences written with Alva Noë (Philosophy Dept, UC Santa Cruz) describes this work (O'Regan & Noë, 2001), and several further articles have been published. A book is to appear in 2004-5 under contract with Oxford University Press.

An extension of the work is in progress with Erik Myin (Philosophy Dept, Vrije Universiteit Brussel), involving the concepts of "bodiliness" and "grabbiness". Using these, we claim to account for why perceptual sensations are accompanied by a sensory "feel" or "presence" that makes them similar to dreams, hallucinations and mental imagery, but distinguishes them from mental states like memory or thought, which have comparatively little sensory "feel".

Sensory substitution

One of the ideas of the sensorimotor theory is that the experienced nature of a sensory input does not arise from the particular neural input channel through which stimulation is supplied, but rather from the intrinsic structure of the sensorimotor invariants characteristic of the exploratory skill one exercises when perceiving in a given sensory modality. Visual stimulation appears visual to us because it "behaves" in a certain way when we close our eyes, move them, or move objects. Auditory stimulation obeys a different set of laws of co-dependence between body motions and sensory input.
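
To make this concrete, here is a deliberately cartoonish contrast between two such sets of laws (a toy illustration with invented functions, not a model from any of the papers cited): blinking annihilates visual input but leaves auditory input untouched, while a head or eye movement transforms the two inputs in characteristically different ways.

```python
# Cartoon contrast between the sensorimotor "laws" of vision and audition
# (functions and signatures invented for this illustration only).

def visual_sample(scene, gaze_index, eyes_open):
    # Law of vision #1: closing the eyes annihilates the input entirely.
    if not eyes_open:
        return None
    # Law of vision #2: shifting gaze changes which part of the scene
    # falls on the (simulated) fovea.
    return scene[gaze_index % len(scene)]

def auditory_sample(source_angle, head_angle):
    # Audition ignores blinks; instead, a head turn shifts the interaural
    # difference that codes the source's direction.
    return source_angle - head_angle

scene = ["tree", "house", "car", "dog"]
print(visual_sample(scene, 2, eyes_open=True))           # 'car'
print(visual_sample(scene, 2, eyes_open=False))          # None: blinks matter
print(auditory_sample(source_angle=30, head_angle=10))   # 20: blinks don't
```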

These ideas yield a prediction: it should be possible to construct devices that provoke visual-like sensations, even though the stimulation is delivered through, say, the auditory or tactile input channels. We are testing this prediction in experiments with Sylvain Hanneton (Lab. Neurophysique & Phys. Syst. Moteur, Univ René Descartes, Paris), Charles Lenay (UTC, Compiègne), Vincent Hayward (Dept of Electrical Engineering, McGill University), and PhD students Malika Auvray and Aline Bompas.

Color-motion interactions

Another prediction of the sensorimotor theory is that the sensation of color should depend on the changes that occur in sensory input when one moves with respect to a colored surface. Such changes generally arise from non-homogeneities in the way the eye samples colors, and from the way the reflected light spectrum depends on object orientation, but they can be artificially modified using eye-tracking equipment or special spectacles.

With PhD student Aline Bompas and with Jim Clark (Dept of Electrical Engineering, McGill University), we are pursuing a number of attempts to induce modifications in perceived color as a function of eye movements. In a replication of an old experiment by I. Kohler (1951), we have confirmed that wearing bi-colored spectacles modifies perceived hue as a function of the position of the eye in the head.

Characterizing the sensorimotor contingencies of space and color

With J-P. Nadal (Lab. Statistical Physics, Ecole Normale Supérieure, Paris), J. Clark (Dept of Electrical Engineering, McGill University), and Olivier Coenen (Sony CSL, Paris) we are working on a neurally implementable mathematical framework to characterize the intrinsic structure of the laws of sensorimotor co-dependence corresponding to the notions of space and color.

A PhD student (David Philipona) has made significant progress: we have been able to show how a biological or artificial agent can deduce the Euclidean structure of outside physical space by studying the Lie algebra underlying the relation between the organism's motor outputs and its sensory inputs. This structure is independent of the neural code used by the organism, and can be extracted by a simple, neurally implementable algorithm that has no a priori knowledge of the nature of the sensors and effectors. We are investigating a similar approach to show how notions like white, black, red and yellow can be obtained by studying the interrelation of sensory inputs and motor commands whose codes and effects are a priori unknown to the algorithm.
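
The flavor of the approach can be conveyed by a toy simulation (our sketch for illustration, not David Philipona's actual algorithm; the agent, its sizes and its mappings are all invented). An agent that knows nothing about what its motor and sensor vectors mean issues small random motor commands and inspects the rank of the resulting sensory changes; the number of non-negligible singular values reveals how many physical degrees of freedom lie hidden behind the codes.

```python
# Toy version of the dimension-extraction idea (illustrative only; all
# sizes and mappings below are invented for this sketch).
import numpy as np

rng = np.random.default_rng(0)

N_MOTOR, N_SENSOR = 10, 40   # sizes of the (uninterpreted) motor/sensor codes
D_TRUE = 3                   # physical degrees of freedom, unknown to the agent

# "Physics" hidden from the agent: motor commands act through D_TRUE
# effective degrees of freedom, which then drive the sensors.
P = rng.normal(size=(N_MOTOR, D_TRUE))    # motor code -> physical configuration
W = rng.normal(size=(D_TRUE, N_SENSOR))   # configuration -> sensor code

def sensors(motor):
    return np.tanh(motor @ P) @ W

# The agent probes: small random motor perturbations around a rest posture,
# recording the resulting sensory changes.
m0 = np.zeros(N_MOTOR)
s0 = sensors(m0)
deltas = np.array([sensors(m0 + 1e-4 * rng.normal(size=N_MOTOR)) - s0
                   for _ in range(200)])

# The number of non-negligible singular values estimates the dimension of
# the manifold of sensory states reachable by moving: here it recovers 3,
# whatever "neural code" P and W implement.
sv = np.linalg.svd(deltas, compute_uv=False)
print("estimated dimension:", int(np.sum(sv > 1e-3 * sv[0])))
```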

Context effects in visual short term memory

With two PhD students (Juan Vidal and Hélène Gauchou), we are extending change blindness studies to determine to what extent individual items in simple symbol displays are coded independently, or together in a more global Gestalt-like structure. Work in progress suggests that change detection is adversely affected by irrelevant changes in the display when these irrelevant changes share features that are being attended to in the target items. The work is accompanied by MEG imaging measurements in collaboration with Catherine Tallon-Baudry (LENA, Hôpital de la Salpêtrière, Paris).

Visual stability and distortions during eye saccades

With PhD student Paul Reeve and with Jim Clark (Dept of Electrical Engineering, McGill University), we are studying the distortions of visual space that seem to occur when brief stimuli are flashed near the beginning of an eye saccade. We are attempting to determine to what extent such effects can be replicated, and to what extent it is necessary to appeal to non-retinal compensation mechanisms to account for them.

PAST WORK:


Change Blindness

An important idea underlying the notion of the "world as an external memory" (O'Regan, 1992) is that at any moment we actually only see those aspects of a scene which we are currently visually "manipulating". With R. Rensink and J. Clark at Nissan Cambridge Basic Research, we set out to test this idea by having subjects look at natural scenes in which a large change suddenly occurs. Usually such a change is immediately seen, because the visual transient it produces pulls processing resources to the change location. But with special techniques (such as "flicker" or "mudsplashes"), or by making the change simultaneous with an eye saccade, the visual transient is swamped by other transients, and attention is not drawn to the change location. Our results show that in these circumstances very large changes can be missed, confirming that, in a certain sense, our impression of seeing everything is an illusion (Rensink, O'Regan & Clark, 1998; O'Regan, Rensink & Clark, 1999). We have extended this kind of work to cases where the change occurs extremely slowly (Auvray & O'Regan, 2003).
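
The timing logic of the flicker technique is simple enough to sketch in a few lines (an illustrative skeleton, not our experimental code; the 240 ms / 80 ms durations are values typical of such studies, and show() stands in for a real display routine):

```python
# Skeleton of the "flicker" paradigm's timing loop: original and modified
# scenes alternate with blanks in between, so that the global transient of
# each blank swamps the local transient produced by the change.
import itertools
import time

ON_MS, BLANK_MS = 240, 80    # typical durations in flicker studies

def show(frame, ms):
    """Placeholder for an actual display call."""
    print(f"{frame:>8} for {ms:3d} ms")
    time.sleep(ms / 1000.0)

def flicker(original, modified, max_cycles=4):
    # A -> blank -> A' -> blank -> A -> ... until the observer responds
    # (here we simply stop after max_cycles alternation cycles).
    frames = itertools.cycle([(original, ON_MS), ("blank", BLANK_MS),
                              (modified, ON_MS), ("blank", BLANK_MS)])
    for frame, ms in itertools.islice(frames, 4 * max_cycles):
        show(frame, ms)

flicker("scene A", "scene A'")
```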

The optimal viewing position phenomenon

Because visual acuity drops off rapidly even within the central foveal zone of the eye, one would expect that, to recognize an object, a scene, or a word efficiently, it might be necessary to place the eye accurately at a position where the most useful information is available at the fixation point. Work I have done over the past years has indeed confirmed that in the recognition of words there is a very well-defined "optimal viewing position", where recognition is most efficient. The exact location of this position depends on the statistical informativeness of the different parts of the word, on its morphological structure, and on other factors related to left-right processing and oculomotor scanning strategies. Because the optimal viewing position phenomenon is very robust and easy to demonstrate even in short, four- or five-letter words, it constitutes a promising new tool for investigating word recognition. It also presents a challenge to classic models of word recognition, since these take no account of eye position (cf. O'Regan & Jacobs, 1992). The model has implications for automatic handwriting recognition systems (cf. Clark & O'Regan, 1998).
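
A toy calculation conveys why such an optimum arises (our illustration, not the model of O'Regan & Jacobs, 1992; the linear acuity drop-off and the all-letters-required rule are simplifying assumptions):

```python
# Toy model of the optimal viewing position: letter identifiability falls
# off with distance from fixation, so the chance of identifying all the
# letters of a word depends on which letter the eye lands on.

def p_letter(eccentricity, drop=0.12):
    """Probability of identifying a letter `eccentricity` letter slots away
    from the fixated position (linear drop-off rate chosen arbitrarily)."""
    return max(0.0, 1.0 - drop * abs(eccentricity))

def p_word(word_len, fixated):
    """Probability that every letter is identified when the eye fixates
    letter position `fixated` (0 = first letter)."""
    p = 1.0
    for pos in range(word_len):
        p *= p_letter(pos - fixated)
    return p

for fix in range(5):                     # a five-letter word
    print(f"fixate letter {fix + 1}: P(recognize) = {p_word(5, fix):.3f}")
# The curve peaks at the middle letter; weighting word beginnings as more
# informative would shift the optimum slightly leftward, as found empirically.
```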

In the recognition of objects and scenes it might also be the case that there exists an optimal viewing position. However, the phenomenon is harder to demonstrate, because objects and scenes contain information distributed over many spatial scales, whereas for words most of the information necessary for recognition is contained in the fine detail.

Reading

The optimal viewing position phenomenon also has implications for reading: because of the strong penalty for not fixating the optimal position, there ought to be a great advantage in normal reading for the eye to attempt to land at this position in words. But the eye cannot know in advance where the optimal position in each word is. In addition, oculomotor constraints severely limit the accuracy with which the eye can attain a given target. These considerations have led me to develop a "strategy-tactics" theory of eye movement guidance in reading, in which the eye uses a general fixation strategy that has a good chance of bringing it to the correct position without being too costly in oculomotor preparation time, coupled with corrective "tactics" that adjust the eye position if the landing position does not allow recognition (cf. O'Regan, 1990). This strategy-tactics theory is currently being tested against another theory, in which eye movements are hypothesized to follow closely the movements of attention across the line of print as text processing progresses.
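
The mechanics of the strategy-tactics account can be sketched in a toy simulation (our illustration, not the published model; all numerical values are invented): the eye aims at a canonical position in each word but lands with oculomotor scatter, and a corrective refixation is triggered when the landing site falls too far from the optimal viewing position.

```python
# Toy simulation of the strategy-tactics rule (illustrative values only).
import random

random.seed(1)
SACCADE_SD = 1.2      # landing-site scatter, in letter positions (assumed)
REFIXATE_IF = 1.5     # tactic: refixate beyond this distance from the optimum

def fixations_on_word(word_len, optimal):
    """Number of fixations a word receives under the strategy-tactics rule."""
    target = word_len / 2                    # strategy: aim at the center
    landing = random.gauss(target, SACCADE_SD)
    return 1 if abs(landing - optimal) <= REFIXATE_IF else 2

trials = [fixations_on_word(7, optimal=2.5) for _ in range(10000)]
print("mean fixations per 7-letter word:", sum(trials) / len(trials))
```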

Translation (in?)variance in vision

Because of the very strong spatial non-homogeneity of retinal structure, the question arises of how the visual system can recognize an object independently of where in it the eye is fixated. But can it really? A recent experiment investigated whether humans are actually able to recognize an object, initially learnt at a given retinal position, when it is presented at a new position. The experiment showed that under certain conditions they cannot: in particular, when the learnt object is a previously unknown Chinese-character-like symbol (size: one degree square) that is difficult to categorize in terms of simple features like blobs or line segments. The reason is presumably that we are only able to translate simple features like blobs and line segments across the retina. Translating more complicated things must be done in terms of the simple features, and this leads to errors when the simple features are hard to extract or are combined in a complicated way (cf. Nazir & O'Regan, 1990).

Trans-saccadic integration

The question of how information from successive eye fixations is integrated to give a coherent view of the visual world has interested me for a number of years. Related questions are why we do not see the blind spot, and why we do not notice retinal imperfections and non-homogeneity. Several experiments I performed showed that in normal conditions, extraretinal information need not be used to compensate for eye movements (cf. O'Regan & Lévy-Schoen, 1983; O'Regan, 1984). I have recently published a review paper in which I suggest that the often-held view that imperfections of the visual system have to be compensated for by "filling-in" or other correction mechanisms implicitly presupposes the existence of a homunculus. Instead, I propose that vision should be understood as an active, exploratory sense similar to the tactile sense, for which no compensation for defects need be postulated: to understand vision it is more fruitful to suppose not that the brain reconstructs an internal model of the world, but that it uses the outside world as a kind of "external memory" which it can explore at will in order to obtain information (cf. O'Regan, 1992a).

last updated: Nov 8th, 2003