See the list of exciting guest speakers below!
Times subject to change.
Presentations
Jordi M. Asher, University of Essex, UK: Augmenting reality for visual field loss: integrating overlaid information to increase the field of view, a virtual reality study.
Visual field loss (VFL), such as hemianopia, is a common and debilitating impairment following brain injury or stroke that seriously impacts daily activities. The advancement of augmented and virtual realities brings opportunities to test and develop potential sight aids for VFL. There is an unmet need for people who have VFL, as their impairment differs from that of individuals who have reduced acuity. The rapidly advancing functionality of consumer-available AR smart glasses has opened the market for potential sight aids for individuals who have VFL. Our core technology is a software solution that builds on the functionality of optical prisms: it maps an individual user’s blind field and provides an enhanced live stream of content from the blind field into the sighted field (we call it the support-window). In this proof-of-concept experiment, we recruit control participants, impose a simulated scotoma, and ask them to complete a Tetris-inspired visual search task in a simulated virtual environment. Participants complete a scotoma-only condition followed by a scotoma-plus-window condition. Our results indicate that participants with a simulated scotoma are able to use the window to exploit visual information falling into the blind field and improve their performance. Our next objective is to begin trials with people who have VFL.
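At its core, the support-window is a remapping of blind-field pixels into a window in the sighted field. A minimal sketch of that idea in Python follows; the coordinates, window placement, and scale factor are hypothetical illustrations, not the authors' implementation.

```python
# Sketch of the "support-window" idea: content from a user's blind-field
# region is copied (here, downscaled) into a window in the sighted field.
# All coordinates and the scale factor are hypothetical illustrations.
import numpy as np
import cv2  # OpenCV, used here only for resizing and drawing

def apply_support_window(frame: np.ndarray,
                         blind_box: tuple[int, int, int, int],
                         window_origin: tuple[int, int],
                         scale: float = 0.5) -> np.ndarray:
    """Copy the blind-field region into a smaller window in the sighted field.

    frame:          HxWx3 video frame
    blind_box:      (x, y, w, h) of the mapped blind-field region
    window_origin:  top-left (x, y) where the support-window is drawn
    scale:          shrink factor for the relocated content
    """
    x, y, w, h = blind_box
    blind_content = frame[y:y + h, x:x + w]
    win = cv2.resize(blind_content, (int(w * scale), int(h * scale)))
    out = frame.copy()
    wx, wy = window_origin
    out[wy:wy + win.shape[0], wx:wx + win.shape[1]] = win
    # Outline the window so relocated content is visually distinct
    cv2.rectangle(out, (wx, wy), (wx + win.shape[1], wy + win.shape[0]),
                  color=(0, 255, 0), thickness=2)
    return out
```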
Walter F. Bischof, University of British Columbia, Canada: Coordination of head and eyes in visual perception
Visual input is determined by the coordinated adjustment of torso, head and eyes, with the head and body determining the visual field within space and the eyes performing a detailed analysis within the head-defined visual field. The coordination of head and eyes can be achieved either by the eyes guiding the head (where the eyes are moved within the visual field and head orientation is adjusted to reposition gaze towards the center of the visual field) or by the head guiding the eyes (where the visual field is selected through orienting the head and the visual input is analyzed within the head-defined visual field). Past research has suggested that both strategies may be used, one for passive visual exploration of the environment and the other for active searching of and interaction with the environment. I will discuss results from a series of studies that examine the coordination of head and eyes in the exploration of static 360° panoramas in virtual reality. First, I will present techniques for visualizing gaze behaviour with such displays. Second, I will present spatial and temporal analyses of eyes, head and gaze as well as different coordination models. Finally, I will present a number of open problems that should be addressed in future research.
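As a worked example of the head-eye decomposition described above, the sketch below maps a combined head-plus-eye direction onto equirectangular panorama coordinates. Adding yaw and pitch angles is an approximation of the full rotation composition, and the names are illustrative rather than taken from the studies.

```python
# Sketch: combining head orientation and eye-in-head direction into a gaze
# point on a static 360-degree (equirectangular) panorama.
import numpy as np

def gaze_to_panorama(head_yaw_deg, head_pitch_deg,
                     eye_yaw_deg, eye_pitch_deg,
                     pano_w=4096, pano_h=2048):
    """Map gaze = head + eye-in-head (degrees) to panorama pixel coordinates."""
    yaw = (head_yaw_deg + eye_yaw_deg) % 360.0            # wrap around horizon
    pitch = np.clip(head_pitch_deg + eye_pitch_deg, -90.0, 90.0)
    u = yaw / 360.0 * pano_w                              # column
    v = (90.0 - pitch) / 180.0 * pano_h                   # row (up = top)
    return int(u) % pano_w, int(np.clip(v, 0, pano_h - 1))
```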
Karla Evans, University of York, UK: When the Gist of an Image Can Save Lives
Cancer screening saves lives. In recognition of this fact, screening programs in breast, cervical and lung cancer are performed by medical image experts who look for signs of cancer in difficult, time-consuming visual search tasks. Medical image interpretation requires radiologists to engage in perceptual and analytical processes to make decisions about diagnosis and treatment. Radiological images can be thought of as a specialized class of scenes, and radiologists are experts who have learned to apply the processes of visual cognition to these unusual scenes. Building on recent advances in basic research on vision and attention, we have found a “global gist signal” in radiographs, afforded to experts after an initial glimpse, that contains information about the presence of disease independent of the locus of any lesion. This gist signal, which carries the image’s global structure and statistics, could be used to improve the speed and/or accuracy of breast cancer screening. I will address the nature of this signal and what allows medical image experts to detect it, how this expertise can be trained, as well as ways to refine screening protocols and inform and enhance the capabilities of computer-based detection systems.
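The talk does not specify how the gist signal is computed, but one generic proxy for "global structure and statistics" in vision research is the slope of an image's radially averaged amplitude spectrum. The sketch below illustrates that general idea only; it is not the signal reported in the talk.

```python
# Sketch: a generic global-statistics descriptor -- the slope of log
# amplitude vs. log spatial frequency, computed over the whole image.
import numpy as np

def global_spectral_slope(img: np.ndarray) -> float:
    """img: 2D grayscale array -> slope of the radially averaged spectrum."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    amp = np.abs(f)
    h, w = img.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    radius = np.hypot(yy, xx).astype(int)
    n_bins = min(h, w) // 2  # stay inside the fully sampled frequency range
    radial = np.array([amp[radius == r].mean() for r in range(1, n_bins)])
    freqs = np.arange(1, n_bins)
    slope, _ = np.polyfit(np.log(freqs), np.log(radial + 1e-12), 1)
    return slope
```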
Ione Fine, University of Washington, USA: Do you hear what I see? How early blind people perceive the world
Almost one-quarter of the brain is normally devoted to processing visual information: reading text, recognizing faces, watching the Sunday football match, and much more. The brain’s visual cortex contains specialized regions devoted to processing motion, text, faces etc. In congenitally blind individuals, much of the ‘visual’ cortex responds strongly to auditory and tactile input, a phenomenon known as cross-modal plasticity. Here I will focus on what our laboratory has discovered about how early blind people process auditory motion.
Tom Foulsham, University of Essex, UK: Fixations beyond the picture frame.
We know a lot about how participants move their eyes, and what items attract their fixations, when they look at a picture. In my research, I have been comparing eye movement behaviour in screen-based tasks with gaze in more realistic or interactive tasks, where participants move around or interact with the scene rather than viewing a picture. I will describe two areas of research where there are interesting differences in how fixations are deployed. First, in real social interactions, gaze provides a signal that can be interpreted by other people. Second, in sequential action-based tasks, fixations may serve multiple functions and when and where they are guided will change with different stages of the task. I will discuss the degree to which screen-based tasks such as video watching and on-screen typing can capture these behaviours in a more controlled situation.
John Franchak, University of California, Riverside, USA: Visual exploratory development in screen-based and natural tasks
Laboratory studies of attention examine how visual features in photographic and video stimuli shape visual exploration over development. However, exploration in daily life differs from the laboratory in several ways that impact ecological validity. First, real-world visual scenes are more diverse and complex compared with laboratory stimuli. Second, real-world tasks offer opportunities for interaction as opposed to passive viewing. Third, real-world visual exploration depends on full-body movement to coordinate the eyes, head, and body when orienting to different locations in the world. In this talk, I will describe two sets of studies to illustrate how these aspects of ecological validity impact our understanding of visual exploration. The first set of studies highlights the role of video stimuli in drawing conclusions about visual attention development through screen-based eye tracking. The second set of studies employs head-mounted eye tracking and inertial sensing of head rotation to reveal the role that interactive tasks and motor control play in determining how infants and adults gather visual information.
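The basic computation behind combining head-mounted eye tracking with inertial sensing is to rotate the eye-in-head gaze vector by the head's orientation to obtain gaze in world coordinates. A minimal sketch, with illustrative angle conventions and sensor details assumed:

```python
# Sketch: rotate an eye-in-head gaze vector by the head's orientation
# (yaw/pitch/roll from an IMU) to get a gaze direction in world coordinates.
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Z-Y-X Euler rotation (radians) -> 3x3 rotation matrix."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def gaze_in_world(head_ypr_rad, eye_in_head_vec):
    """head_ypr_rad: (yaw, pitch, roll) from the IMU; eye vector in head frame."""
    R = rotation_matrix(*head_ypr_rad)
    v = np.asarray(eye_in_head_vec, dtype=float)
    return R @ (v / np.linalg.norm(v))  # unit gaze vector in world frame
```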
Alan Kingstone, University of British Columbia, Canada: Visual attention in the real world
The present talk considers recent work suggesting that in social situations, such as when one is in the presence of another person, individuals will: (a) alter their overt looking behaviour in a manner that conforms to social norms, and (b) use their covert attention to gather information "secretly" from the environment, thereby avoiding communicating to others the direction and commitment of their attention. I propose that the decoupling of overt and covert attention is much more prevalent than previously assumed, and I will put forward a novel method for studying covert attention -- one that allows for an enhanced spatial and temporal examination of covert attention, as well as redefining the act of covert orienting itself.
Ella Striem-Amit, Georgetown University, USA: Individual differences of brain plasticity in early visual deprivation and sight restoration
Early-onset blindness leads to reorganization in visual cortex connectivity and function: the blind visual cortex is recruited for many non-visual tasks across sensory modalities (audition, touch, smell) and cognitive domains (perception, action, memory, language). This has led to theoretical disagreement about the role of the visual cortex in blindness and, more broadly, about the capacity of the human brain for plasticity. However, research on brain plasticity has mostly been conducted at the group level, largely ignoring differences in brain reorganization across early blind individuals.
In this talk, I will present resting-state functional connectivity (RSFC) findings from a large cohort of blind individuals showing that reorganization is not ubiquitous, which helps explain the diversity of group-level findings. The results additionally highlight the important role of sensory experience during development in driving individual differences. Building on these findings, I will discuss how variability in reorganization in the early blind may affect the capacity to benefit from sight-restoring treatment. Overall, our data highlight the diversity in brain plasticity and the potential of harnessing individual differences to fit rehabilitation approaches for vision loss.
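For readers unfamiliar with RSFC: the core computation is a per-subject correlation matrix between regional resting-state time courses, which makes between-subject variability in connectivity directly measurable. A minimal sketch with made-up data shapes and region indices:

```python
# Sketch of the core RSFC computation: Pearson correlations between the
# resting-state time courses of brain regions, one matrix per subject.
import numpy as np

def rsfc_matrix(timeseries: np.ndarray) -> np.ndarray:
    """timeseries: (n_regions, n_timepoints) -> (n_regions, n_regions) correlations."""
    return np.corrcoef(timeseries)

# Hypothetical example: variability of one visual-to-auditory connection
# across subjects (random data; region indices are made up for illustration).
rng = np.random.default_rng(0)
subjects = [rng.standard_normal((100, 300)) for _ in range(20)]  # 100 regions
v1, a1 = 0, 50
edge = np.array([rsfc_matrix(ts)[v1, a1] for ts in subjects])
print(f"mean r = {edge.mean():.2f}, SD across subjects = {edge.std():.2f}")
```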
Shuo Wang, Washington University, USA: Multimodal investigations of visual social attention
People with autism spectrum disorders (ASD) demonstrate aberrant attention to social stimuli. In this talk, I will discuss attentional deficits in ASD. First, I show that people with ASD do not orient efficiently to social target-relevant items (pictures of other people) during visual search, an impairment of top-down / goal-directed attention. Second, in a comprehensive investigation of eye tracking in ASD, I show an impairment of bottom-up / stimulus-driven attention: people with ASD have a stronger image center bias regardless of object distribution, reduced saliency for faces and for locations indicated by social gaze, and yet a general increase in pixel-level saliency at the expense of semantic-level saliency. Interestingly, by comparing photographs taken by people with ASD and controls, I found that photos from people with ASD have unusual features and show strikingly different ways of photographing other people. Lastly, I will discuss the single-neuron mechanisms that may underlie the attentional deficits in ASD.
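One simple way to quantify the image center bias mentioned above is the mean distance of fixations from the image center, normalized by the center-to-corner distance, so that smaller values indicate a stronger center bias. A minimal sketch; the measure and data layout are illustrative assumptions, not the analyses from the talk.

```python
# Sketch: quantify center bias as the mean fixation distance from the image
# center, scaled so 0 = all fixations at the center and 1 = at a corner.
import numpy as np

def center_bias(fixations_xy: np.ndarray, img_w: int, img_h: int) -> float:
    """fixations_xy: (n, 2) pixel coordinates -> mean normalized center distance."""
    center = np.array([img_w / 2.0, img_h / 2.0])
    d = np.linalg.norm(fixations_xy - center, axis=1)
    return float(d.mean() / np.linalg.norm(center))  # corner distance = ||center||
```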