About Me

The Thunker

I have a background in psychology and neuroscience with a primary research focus on the human visual system. Since the summer of 2016, I have worked at the University of Bristol as a senior research associate. Along with Drs. Casimir Ludwig, Iain Gilchrist, Dima Aldamen, and Walterio Mayol, I am working on GLANCE, an EPSRC-sponsored project developing an eye-tracking and augmented-reality system. As a graduate student, I worked with Mary Hayhoe and Dana Ballard at the University of Texas at Austin’s Center for Perceptual Systems. I then worked as a Rachel C. Atkinson Research Fellow at the Smith-Kettlewell Eye Research Institute in San Francisco, CA, under the supervision of Laura Walker. Prior to Bristol, I worked at Tobii AB as an Experienced Researcher in the LanPercept ITN, part of the Marie Curie Actions.

My GitHub page is here.

You can find my CV/resume here.

My research interests include psychology, neuroscience, computer vision, and AI/machine learning. I am also greatly interested in science, engineering, and art, especially music.


My dissertation, ‘The role of uncertainty and reward on eye movements in natural tasks’, examined human driving behavior and how drivers use their eyes to gather information about their surroundings. In particular, we found evidence that when multiple events in the world demand attention but can only be looked at one at a time, drivers decide where to look using both the reward or cost associated with an event and their uncertainty about it. In other words, even if you are uncertain about something in the world, you tend to look at it only if it is important (i.e. it carries a large reward or cost).

In addition to these driving experiments, we built computational models of this behavior. Our theory, building on prior work by Ballard & Hayhoe, is founded on the notion that human vision is a serial process (you can only look at one place at a time) that picks up limited pieces of task-relevant information during each fixation (a brief period, typically about 200-300 ms, when the eye is not in motion). When we are active in the world, we must juggle multiple visual tasks and our memories of things around us. We demonstrated that a simplified computer driving agent that must multitask can reproduce similar behavior when it decides which task to gather new information about using estimates of uncertainty and reward.
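For readers who want a concrete picture of the idea, here is a minimal toy sketch in Python. It is not the model from the dissertation: the task names, weights, and noise values are invented for illustration, and the real work uses proper probabilistic estimates rather than these hand-set numbers. The sketch only shows the core scheduling rule, where a task's priority grows with both its reward/cost weight and the agent's uncertainty about it.

```python
# Toy illustration (not the dissertation model): pick which of several
# concurrent sub-tasks to fixate next, based on reward-weighted uncertainty.
import numpy as np

rng = np.random.default_rng(0)

tasks = {             # reward/cost weight per task (hypothetical values)
    "lane_keeping": 3.0,
    "lead_car":     2.0,
    "speedometer":  0.5,
}
uncertainty = {t: 0.1 for t in tasks}   # belief uncertainty per task

def choose_fixation(tasks, uncertainty):
    """Choose the task whose combination of reward and uncertainty is largest."""
    scores = {t: w * uncertainty[t] for t, w in tasks.items()}
    return max(scores, key=scores.get)

for step in range(5):                    # simulate a few fixations
    target = choose_fixation(tasks, uncertainty)
    uncertainty[target] = 0.1            # looking at a task resets its uncertainty
    for t in tasks:                      # unattended tasks grow more uncertain
        if t != target:
            uncertainty[t] += 0.05 + 0.01 * rng.standard_normal()
    print(f"fixation {step}: {target}  "
          + ", ".join(f"{t}={u:.2f}" for t, u in uncertainty.items()))
```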
