Gaze contingent auditory threshold evaluation in non-verbal ASD children

My colleagues Ellie Wilson, David Saldana and I have a new article out. We designed a gaze-contingent, game-like setup (a type of automated visual reinforcement audiometry) for testing audition in children, with a focus on non-verbal autism spectrum individuals. The paper and open-source experiment code are available; the code is written for Matlab and Tobii trackers, but should be adaptable to others. See: Development of a gaze contingent method for auditory threshold evaluation in non-verbal ASD children. Research in Autism Spectrum Disorders, Volume 62, June 2019, Pages 85-98.


Pupil Labs Notes

Our group has had a binocular 200 Hz Pupil Labs system for the past year and it's been a nice piece of hardware to play with. For the price it is very hard to beat, although it is rough around the edges, both in physical build quality and in software. That said, the great thing about an open project like this is that updates and discussions happen on GitHub, and the community is constantly improving things.

Strangely, I've had very mixed results getting a solid frame rate in Pupil Capture, despite claims that it needs relatively minimal specs. Granted, I am using spare hardware lying around our lab rather than buying a new PC – but this isn't uncommon in academia or other places trying to be thrifty. So, for those curious, here is what I've found so far:

Note: I run a 200 Hz binocular setup with 30 Hz 1080p world capture and the wide-angle lens, using Pupil Capture v1.10. So far I've seen both good performance (a solid 30 Hz) and poor performance (frame rate often dropping to 10–30 Hz). Below are the CPU, memory, and drive specs (as far as I know the system does not offload any processing to the GPU).

Good performance:
Intel Core i7-4790 @ 3.60GHz w/ 16GB ram and SSD – Windows 10

Poor performance:
Intel Core i7-2600 @ 3.40GHz w/ 8GB ram and 7200RPM drive – Windows 10

Intel Core i7-4558U @ 2.80GHz – Macbook Pro – w/ SSD and 8GB RAM, Mac OS 10.14 Mojave
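If you'd rather quantify drops than eyeball the Capture UI, the recorded timestamps make this easy. Here's a minimal sketch; the helper is my own, not part of the Pupil software, and it assumes you've loaded the world-camera timestamps (in seconds) into a plain Python list from whatever export your version produces:

```python
# Estimate effective frame rate and count dropped frames from a sorted list
# of capture timestamps (seconds). Hypothetical helper, not part of Pupil
# Capture; timestamps could come from e.g. a recording's world timestamps file.

def frame_stats(timestamps, nominal_hz=30.0):
    """Return (effective_hz, n_drops) for a sorted timestamp sequence."""
    if len(timestamps) < 2:
        return 0.0, 0
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    duration = timestamps[-1] - timestamps[0]
    effective_hz = len(intervals) / duration
    # Flag any gap longer than 1.5x the nominal frame interval as a drop.
    threshold = 1.5 / nominal_hz
    n_drops = sum(1 for dt in intervals if dt > threshold)
    return effective_hz, n_drops

# Example: 30 Hz capture with one skipped frame in the middle.
ts = [i / 30.0 for i in range(10)]
ts = ts[:5] + [t + 1 / 30.0 for t in ts[5:]]  # insert a one-frame gap
rate, drops = frame_stats(ts)
```

Running this on real recordings from the machines above would make the good/poor split concrete rather than impressionistic.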

Recap ECVP 2017

Got back last week from a very nice ECVP in Berlin. Lots of interesting research; I particularly liked the seminar on retinal physiology and single-cone stimulation – lots of great techniques are required to do that sort of work. On topics closer to my own work in eye movements, I came upon a couple of things worth highlighting. Tobii had a really nice demo of eye tracking in a Vive; I had seen early versions, but the final production model is really impressive. Having done eye tracking in VR since 2001, it was really nice to see such a seamless integration. As SMI have been bought out by Apple, this appears to be the best solution for vision research in VR I've seen (at least from my quick evaluation using the demos).

Acuity VR had a really nice demo of their analysis tool for VR, something that Tobii doesn't address at the moment. Acuity's tool does a 3D reconstruction with some nice visualisations and AOIs mapped to objects in the scene. Everything they showed looked familiar, as the tools we developed in the Hayhoe & Ballard lab did the same thing, but ours were tuned to each experiment and weren't easy to adapt. I'm not sure how they will deal with complex 3D assets – we used to have problems with models that had multiple inseparable parts, which made AOI selection difficult without drawing bounding boxes – so I'm curious to see how this software addresses that.

Blickshift is a company with an interesting tool set for analysing eye movements and other sensor streams, with the capability to observe many data streams across participants simultaneously.

Lastly – not at ECVP but at ECEM, and relayed via a colleague – a recent addition to the manual coding software I've discussed before is Gazecode. I haven't tried it yet, but it seems promising if you use Pupil Labs or Tobii mobile trackers and have a copy of Matlab.

Also, here's a copy of our poster: Predicting Eye-Head Coordination While Looking or Pointing.

Video Annotation Revisited

We have some students doing a term project using the SMI eye tracking glasses. They need to manually annotate the eye tracking data and stimuli, but we have more students than copies of the SMI BeGaze software, so we tried some of the annotation tools I've mentioned previously. Unfortunately, while recently demoing the RIT Code app to some students, it became clear that with movies using newer codecs the application is painfully slow when stepping through video frames. The software works well with older codecs (e.g. MPEG-2), but it seems to be showing its age, as it was created 10+ years ago on the older QuickTime framework – I will need to look into whether it can be updated. In the meantime, one of the students found the Anvil Video Annotation Tool.

Video codecs and cross-platform/application compatibility can drive you nuts – I messed around way too much today just to get Anvil to work. The problem is that it supports only a very particular list of 'older' codecs. I am not sure how well maintained the software is, as the links to the demo movies for testing Anvil are broken. I have movies from an SMI tracker that use the Xvid codec in an .avi container, which Anvil does not support. To get something Anvil-compatible I tried a few things (VirtualDub, MPEG Streamclip, Handbrake, and a look at ffmpeg), running into problems with each.

Ultimately, I found that a combination of Handbrake (which can open these AVIs but can't export to the old codecs) and MPEG Streamclip (which can't open the AVIs but can export to the old formats) will work.

Make sure both Handbrake and MPEG Streamclip are installed.

First, use Handbrake to open the SMI tracker AVIs and convert them to .mp4, with the video encoder set to H.264 and the frame rate set to "same as source".

Now you should be able to open the new video in MPEG Streamclip. Choose File → Export to QuickTime, then pick the compression from the dropdown box. H.261 and H.263 work, but you can also try the others listed.

The only problem is that MPEG Streamclip converts to these formats really slowly – it seems to take about as long as the video itself, so a 10-minute video is at least a 10-minute wait, at least on my 2014 MacBook Pro. It might be worth trying alternatives for better speed/quality, as your mileage may vary from mine.
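For the record, ffmpeg can in principle replace both GUI steps with a single call (the post above notes I ran into problems with it, so treat this as a sketch). The snippet below just builds the command line rather than running it; the codec name and flags are standard ffmpeg options, but whether Anvil accepts the output is untested:

```python
# Build (but don't run) an ffmpeg command that re-encodes an Xvid AVI to an
# older, widely supported codec in one step. MPEG-2 in a QuickTime container
# is one candidate; whether the result satisfies Anvil is untested.

def ffmpeg_cmd(src, dst, vcodec="mpeg2video", quality=4):
    return [
        "ffmpeg",
        "-i", src,             # input movie (e.g. the SMI tracker .avi)
        "-c:v", vcodec,        # target video codec
        "-q:v", str(quality),  # quality scale (lower = better for mpeg2video)
        "-c:a", "copy",        # pass the audio stream through untouched
        dst,
    ]

cmd = ffmpeg_cmd("recording.avi", "recording.mov")
print(" ".join(cmd))
```

You would run the printed command in a terminal (or hand the list to `subprocess.run`); building it as a list avoids shell-quoting issues with spaces in filenames.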

–Update: Jan 23 2018–

Over the fall I had some mobile tracking data I wanted to code, and due to the problems mentioned above I searched again for annotation tools. I found CinePlay, which was relatively easy to use and effective for making notes on a frame-by-frame basis.


Video Annotation Tool

I've mentioned in a prior post that Jeff Pelz's group has a handy tool (Mac only) for the frame-by-frame manual analysis that is common in mobile eye tracking experiments. I was showing the program to some students recently and realized that the SourceForge page does not have a compiled version, and some of the quirks aren't well explained. Here is a compiled copy with source code. Note that the program expects video formats QuickTime can open, with filenames ending in .mov (I find Handbrake helpful for video conversion); also, the labels.txt file must be on your desktop for your predefined shortcuts to work. Otherwise, the program is like a video player where you can easily mark timecodes and milliseconds into a text editor and then save them to a text file.

Choosing Python Package Distributions

I recently updated an old post on using Tobii trackers with PsychoPy, as I've started digging into Python again. Since that last post, things have changed a bit on the scene. If you are new to Python, you typically have the option of downloading Python and installing libraries individually as needed, either manually or via a package manager. Because this approach can cause many headaches for the unfamiliar, there are several popular all-in-one Python distributions (e.g. a Python console, popular libraries, and an IDE): Enthought, Python(x,y), and Anaconda. Previously I have recommended Python(x,y), and still do, as it's quite useful. That said, I was looking at IPython's website recently and saw that their recommended setup for the most recent version of their interactive command line is Anaconda, so I'm trying it out for the time being. Python(x,y), as of this post, has not been updated in over a year, whereas Anaconda appears to be growing in popularity and provides 32-bit and 64-bit builds for both Python 2.7 and 3.5. While the two are quite similar in many ways (similar libraries, and both provide Spyder and IPython), it appears Anaconda may be the way to go (at least when you want something quick, pre-packaged, and intended to avoid the depths of dependency hell).
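A quick way to check what any of these distributions actually gives you is to probe for the usual scientific packages from the interpreter it installs. A small sketch (the package list is just my guess at the common ones; adjust it to whatever you care about):

```python
# Report which common scientific packages are importable in the current
# Python environment, without actually importing (and thus loading) them.
import importlib.util

PACKAGES = ["numpy", "scipy", "matplotlib", "pandas", "IPython"]

def available(names):
    """Map each package name to True/False depending on whether it's installed."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

report = available(PACKAGES)
for name, ok in sorted(report.items()):
    print(f"{name}: {'found' if ok else 'missing'}")
```

Running this inside an Anaconda prompt versus a bare python.org install makes the "all-in-one" difference obvious at a glance.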

Open Eye Tracking API

Many labs have multiple eye trackers across brands, or would like to share code with colleagues in other labs who have their own trackers. A common occurrence is that your code needs to be rewritten and recompiled to use a tracker-specific API/SDK. I recently became a member of COGAIN's Eye Data Quality group and was informed about Oleg Špakov's related project, the Eye-Tracking Universal Driver (ETUD). His lab is working to make eye tracking software development easier by providing an API that supports multiple trackers, so code is written once and the API handles the details of the particular tracker. Something similar is implemented in PsychoPy, but I believe it only supports SR Research, Tobii and SMI trackers, whereas ETUD supports more manufacturers and is accessed as a COM object, so it should be usable from most programming languages.
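The write-once idea behind ETUD can be sketched as a plain adapter interface: experiment code talks to an abstract tracker, and each vendor gets its own adapter. All the names below are invented for illustration – this is not ETUD's actual API, just the shape of the pattern:

```python
# Sketch of a tracker-agnostic layer: experiment code talks to EyeTracker,
# and each vendor SDK gets a small adapter. Class and method names are
# invented for illustration; ETUD's real COM interface will differ.
from abc import ABC, abstractmethod

class EyeTracker(ABC):
    @abstractmethod
    def connect(self):
        """Open a connection to the hardware."""

    @abstractmethod
    def get_gaze(self):
        """Return the latest (x, y) gaze point in normalized screen coords."""

class FakeTobii(EyeTracker):
    """Stand-in adapter; a real one would wrap the vendor SDK calls."""
    def connect(self):
        self.connected = True

    def get_gaze(self):
        return (0.5, 0.5)  # stand-in for a real SDK call

def run_trial(tracker: EyeTracker):
    # Experiment code sees only the abstract interface, so swapping
    # trackers means swapping the adapter, not rewriting the experiment.
    tracker.connect()
    return tracker.get_gaze()

gaze = run_trial(FakeTobii())
```

Since ETUD exposes this layer as a COM object, the same decoupling is available from Matlab, C#, or anything else with COM bindings, not just one language.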