Evaluating scanpath comparison algorithms

Project: Research

Layman's description

When viewing a picture, or anything else for that matter, we make sequences of eye movements called scanpaths. How should we best compare the different dimensions of which a scanpath is comprised (shape, length, and spatial and temporal characteristics)? By varying task difficulty and other stimulus presentation properties, we tackle this question while evaluating our new algorithm for comparing scanpaths.

We make different sequences of eye movements depending on what we are viewing and the task we are currently carrying out. Identifying commonalities in these sequences is important because it allows us to see, for example, which aspects of eye movement control are guided by learning and prior experience (i.e. top-down) and which are guided by the low-level features of our immediate environment (i.e. bottom-up).

There are several dimensions to the sequences of fixational eye movements of which a scanpath is comprised: shape, length, and spatial and temporal properties all contribute. Until recently, however, there was no algorithm that captured all of these properties and allowed them to be directly compared. At the Humanities Lab, Jarodzka, Holmqvist, & Nyström (2010) have developed such an algorithm, and the present project varies task difficulty and other stimulus presentation properties in order to evaluate it with eye movement data from human observers. The overall goal is to test different dimensions of scanpath similarity, both within individual participants and between participants carrying out similar tasks.
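To make the idea of multi-dimensional scanpath comparison concrete, the sketch below compares two scanpaths along three of the dimensions mentioned above: saccade length (amplitude), saccade direction, and fixation position. This is a deliberately simplified illustration, not the published Jarodzka et al. (2010) algorithm; in particular, the alignment step for scanpaths of unequal length, which is central to the real method, is omitted, and all function names and the normalisation choices are assumptions made here for clarity.

```python
import math

# A scanpath is represented as a list of (x, y) fixation coordinates;
# saccades are the vectors between consecutive fixations.
# All similarity scores are scaled so that 1.0 means identical and
# values near 0 mean maximally different (an illustrative convention,
# not the original algorithm's).

def saccade_vectors(scanpath):
    """Vectors between consecutive fixations."""
    return [(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(scanpath, scanpath[1:])]

def length_similarity(a, b):
    """Compare amplitudes of paired saccades."""
    amps_a = [math.hypot(dx, dy) for dx, dy in saccade_vectors(a)]
    amps_b = [math.hypot(dx, dy) for dx, dy in saccade_vectors(b)]
    diffs = [abs(p - q) / max(p, q, 1e-9) for p, q in zip(amps_a, amps_b)]
    return 1 - sum(diffs) / len(diffs)

def direction_similarity(a, b):
    """Compare directions of paired saccades via angular difference."""
    angs_a = [math.atan2(dy, dx) for dx, dy in saccade_vectors(a)]
    angs_b = [math.atan2(dy, dx) for dx, dy in saccade_vectors(b)]
    diffs = []
    for p, q in zip(angs_a, angs_b):
        d = abs(p - q) % (2 * math.pi)
        diffs.append(min(d, 2 * math.pi - d) / math.pi)  # normalised 0..1
    return 1 - sum(diffs) / len(diffs)

def position_similarity(a, b, screen_diag):
    """Compare spatial positions of paired fixations,
    normalised by the screen diagonal."""
    diffs = [math.hypot(x1 - x2, y1 - y2) / screen_diag
             for (x1, y1), (x2, y2) in zip(a, b)]
    return 1 - sum(diffs) / len(diffs)

# Two toy scanpaths of equal length on a 1024x768 display.
p = [(100, 100), (300, 120), (500, 400)]
q = [(110, 105), (310, 130), (480, 390)]
diag = math.hypot(1024, 768)

print(length_similarity(p, q))
print(direction_similarity(p, q))
print(position_similarity(p, q, diag))
```

The point of reporting the dimensions separately, rather than as a single score, is that two scanpaths can be similar in shape yet differ in where on the screen they land, and a one-number summary would hide exactly the distinctions this project sets out to evaluate.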
Effective start/end date: 2010/01/01 – 2011/12/31