Got back last week from a very nice ECVP in Berlin. Lots of interesting research – I particularly liked the seminar on retinal physiology and single-cone stimulation; lots of great techniques are required to do that sort of thing. On topics closer to my work in eye movements, I came upon a few things worth highlighting: Tobii had a really nice demo of eye tracking in a Vive. I had seen early versions, but the final production model is really impressive – having done eye tracking in VR since 2001, it was really nice to see such a seamless integration. With SMI having been bought out by Apple, this appears to be the best solution for vision research in VR I’ve seen (at least from my quick evaluation using the demos).
Acuity VR had a really nice demo of their analysis tool for VR, something that Tobii doesn’t address at the moment. Acuity’s tool does a 3D reconstruction with some nice visualisations and AOIs mapped to objects in the scene. Everything they showed looked familiar, as the tools we developed in the Hayhoe & Ballard lab did the same thing, but ours were tuned to each experiment and weren’t that easy to adapt. I’m not sure how they will deal with complex 3D assets – we used to have problems with models that had multiple inseparable parts, which made it difficult to do AOI selection without drawing bounding boxes – curious to see how this software addresses that.
Blickshift are a company with an interesting tool set for analysing eye movements and other sensor streams, with the capability to observe many data streams across participants simultaneously.
Lastly, not from ECVP but ECEM, relayed via a colleague: a recent addition to the manual coding software I’ve discussed before is GazeCode. I haven’t tried it yet, but it seems promising if you use Pupil Labs or Tobii mobile trackers and have a copy of MATLAB.
Also here’s a copy of our poster: Predicting Eye-Head Coordination While Looking or Pointing
We have some students doing a term project using the SMI eye tracking glasses. They need to manually annotate the eye tracking data and stimuli, but we have more students than copies of SMI’s BeGaze software, so we tried out some of the annotation tools I’ve mentioned previously. Unfortunately, while recently demoing the RIT Code app to some students, it turned out that with movies using newer codecs the application is painfully slow when stepping through video frames. The software works well with older codecs (e.g. MPEG-2), but it seems to be showing its age, as it was created 10+ years ago with the older QuickTime framework – I will need to look into whether it can be updated. In the meantime one of the students found the Anvil Video Annotation Tool.
Video codecs and cross-platform/application compatibility can drive you nuts – I messed around way too much today just to get Anvil to work. The problem is that it supports only a very particular list of ‘older’ codecs. I am not sure how well maintained the software is, as the links to the demo movies for testing Anvil out were broken. My movies from an SMI tracker use the Xvid codec in an .avi container, which Anvil does not support. To get something Anvil-compatible I tried a few things (VirtualDub, MPEG Streamclip, HandBrake, and ffmpeg) but ran into problems with each.
Ultimately, I found that a combo of HandBrake (which can open these AVIs but doesn’t support the old codecs) and MPEG Streamclip (which can’t open the AVIs but supports the old formats) will work.
Make sure to install HandBrake (https://handbrake.fr/) and MPEG Streamclip (http://www.squared5.com/).
First, use HandBrake to open the SMI tracker AVIs and convert them to .mp4, with the video encoder set to H.264 and the frame rate set to “same as source”.
You should now be able to open this new video in MPEG Streamclip. Choose File -> Export as QuickTime, then pick a compression from the dropdown box. H.261 and H.263 work, but you can also try the others listed here: http://www.anvil-software.org/#codecs
The only problem is that MPEG Streamclip converts to these formats really slowly – it seems to take about as long as the video itself, so a 10 minute video is at least a 10 minute wait (on my 2014 MacBook Pro). It might be worth trying alternatives for better speed/quality, as your mileage may vary from mine.
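If anyone wants to give ffmpeg another shot as one of those alternatives, here is a sketch (in Python, since that’s where I’ve been living lately) of building the sort of one-step transcode command I was attempting. The filenames are made up, and caveats apply: I never got this working smoothly myself, and ffmpeg’s H.263 encoder is picky about frame sizes (standard CIF resolutions), so you may need to experiment with the codec/size combos from Anvil’s list.

```python
# Build an ffmpeg argument list for transcoding an SMI Xvid .avi
# into an Anvil-friendly codec. -i, -c:v and -r are standard ffmpeg
# flags; whether a given encoder is available depends on your build.

def ffmpeg_transcode_cmd(src, dst, codec="h263", fps=None):
    """Return an ffmpeg argument list converting src to dst."""
    cmd = ["ffmpeg", "-i", src, "-c:v", codec]
    if fps is not None:
        cmd += ["-r", str(fps)]  # force the output frame rate
    cmd.append(dst)
    return cmd

cmd = ffmpeg_transcode_cmd("scene.avi", "scene_h263.avi", fps=25)
print(" ".join(cmd))
# run it with subprocess.run(cmd, check=True) if ffmpeg is on your PATH
```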
I’ve mentioned in a prior post that Jeff Pelz’s group has a handy tool (note: Macs only) for the frame-by-frame manual analysis that is common in mobile eye tracking experiments. I was showing the program to some students recently and realized that the SourceForge page does not have a compiled version, and some of the quirks aren’t well explained. Here is a compiled copy with source code. Note that the program expects video formats that can be opened via QuickTime, and the filename must end in .mov (I find HandBrake helpful for video conversion); also, the labels.txt file must be on your desktop for your predefined shortcuts to work. Otherwise the program works like a video player where you can easily mark time codes, in milliseconds, into a text editor and then save to a text file.
I recently updated an old post on using Tobii trackers with PsychoPy, as I’ve started digging into Python again. Since that last post things have changed a bit on the scene. If you are new to Python, you typically have the option of downloading Python and installing libraries individually as needed, either manually or via a package manager. Because this approach can cause many headaches for the unfamiliar, all-in-one Python distributions exist that bundle everything together (the Python command console, popular libraries, and an IDE). Three popular ones are Enthought, Python(x,y), and Anaconda. Previously I have recommended Python(x,y), and still do, as it’s quite useful. That said, I was looking at IPython’s website recently and saw that their recommended setup for the most recent version of their interactive command line is via Anaconda, so I’m trying it out for the time being. Python(x,y), as of this post, has not been updated in over a year, whereas Anaconda appears to be growing in popularity and provides both 32-bit & 64-bit builds for Python 2.7 & 3.5. While the two are quite similar in many ways (similar libraries, and both provide Spyder & IPython), it appears Anaconda may be the way to go – at least when you want something quick and pre-packaged that avoids the depths of dependency hell.
Many labs have multiple eye trackers across brands, or would like to share code with colleagues in other labs who have their own trackers. A common occurrence is that your code needs to be rewritten & recompiled to use a tracker-specific API/SDK. I recently became a member of COGAIN’s Eye Data Quality group and was informed about Oleg Špakov’s related project, the Eye-Tracking Universal Driver (ETUD). His lab is working to make eye tracking software development easier by providing an API that supports multiple trackers, so code is written once and the API handles the details of the particular tracker. Something similar is implemented in PsychoPy, but I believe it only supports SR Research, Tobii & SMI trackers, whereas ETUD supports more manufacturers and is accessed as a COM object, so it should be usable from most programming languages.
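To make the “write once, run on any tracker” idea concrete, here is a minimal sketch of the adapter pattern such a universal driver is built on. To be clear, every class and method name here is hypothetical – this is not ETUD’s actual COM interface, just an illustration of why experiment code written against an abstract tracker interface survives a hardware swap.

```python
# Sketch of a tracker-agnostic interface: experiment code targets the
# abstract EyeTracker class, and each manufacturer gets its own adapter.

from abc import ABC, abstractmethod

class EyeTracker(ABC):
    """Minimal interface the experiment code depends on."""
    @abstractmethod
    def start_recording(self): ...

    @abstractmethod
    def latest_gaze(self):
        """Return (x, y) gaze in normalized screen coordinates."""

class FakeTracker(EyeTracker):
    """Stand-in adapter; a real one would wrap a vendor SDK."""
    def start_recording(self):
        self.recording = True

    def latest_gaze(self):
        return (0.5, 0.5)  # stub: always centre of screen

def run_trial(tracker):
    # The experiment never touches vendor-specific calls, so swapping
    # manufacturers means swapping one adapter class, not this code.
    tracker.start_recording()
    return tracker.latest_gaze()

print(run_trial(FakeTracker()))  # (0.5, 0.5)
```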
I attended the developmental psychology conference BCCCD16 again in Budapest. In addition to giving a workshop on using Tobii eye trackers & the basics of fixation detection, I also gave a talk introducing topics in eye tracking data quality and eye movement classification. You can view a PDF of the presentation here.
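For a flavour of the fixation detection basics the workshop covered, here is a toy dispersion-threshold (I-DT style) classifier: a window of samples counts as a fixation while its x + y spread stays under a threshold. The thresholds and the sample data below are made up for illustration; real pipelines work in degrees of visual angle and handle noise, blinks, and sampling rate properly.

```python
# Toy I-DT fixation detector: grow a window while the gaze samples
# stay spatially compact, otherwise slide past the (saccade) sample.

def idt_fixations(samples, max_dispersion=1.0, min_samples=4):
    """samples: list of (x, y). Returns [(start_idx, end_idx), ...]."""
    def dispersion(window):
        xs, ys = zip(*window)
        return (max(xs) - min(xs)) + (max(ys) - min(ys))

    fixations, i, n = [], 0, len(samples)
    while i + min_samples <= n:
        j = i + min_samples
        if dispersion(samples[i:j]) <= max_dispersion:
            # grow the window while it stays compact
            while j < n and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            fixations.append((i, j - 1))
            i = j
        else:
            i += 1  # saccade sample: move the window along
    return fixations

# six tight samples (one fixation), a jump, then a second cluster
gaze = [(0, 0), (0.1, 0), (0, 0.1), (0.1, 0.1), (0, 0), (0.1, 0),
        (10, 10), (10.1, 10), (10, 10.1), (10.1, 10.1)]
print(idt_fixations(gaze))  # [(0, 5), (6, 9)]
```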
Examples of the classic lateral inhibition effect pop up every so often in the real world. In fact, on my MacBook you can get this effect quite clearly by looking at the keyboard (black keys against the aluminum grid). I happened to be doing some online X-mas shopping and noticed Fjällräven has a plaid shirt that induces a very strong effect, as seen here; the black dots popping in & out when you saccade are kind of like reverse Christmas lights twinkling! Happy Holidays! God Jul!
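If you want to see the arithmetic behind why those phantom dots land at the intersections, here is a toy on-centre/off-surround cell: an intersection of two bright streets has more bright pixels in a cell’s inhibitory surround than a point along a single street, so its net response is lower and it looks darker. The grid size and the 0.125 inhibition weight are arbitrary choices for illustration, not a fitted retinal model.

```python
# Toy centre-surround model of the Hermann-grid-style effect:
# response = centre excitation minus a weighted sum of the 8 surround
# pixels. Bright "streets" run along a few rows and columns.

STREETS = {2, 7, 12}  # row/column indices that are bright streets

def pixel(r, c):
    """Image: 1.0 on a street, 0.0 on the dark squares."""
    return 1.0 if (r in STREETS or c in STREETS) else 0.0

def response(r, c, inhibition=0.125):
    """On-centre cell output at (r, c)."""
    surround = sum(pixel(r + dr, c + dc)
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0))
    return pixel(r, c) - inhibition * surround

print(response(2, 2))  # intersection: 4 bright neighbours -> 0.5
print(response(2, 4))  # mid-street:   2 bright neighbours -> 0.75
```

Same white pixel at both spots, but the intersection’s response is suppressed more – hence the illusory dark dot.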