Published Papers


Abstract: Recent advances in mobile technology have the potential to radically change the quality of tools available to people with sensory impairments, in particular the blind and partially sighted. Nowadays almost every smartphone and tablet is equipped with high-resolution cameras, typically used for photos, videos, games and virtual reality applications. Comparatively little has been done, however, to exploit these sensors for user localisation and navigation. To this end, the “Active Vision with Human-in-the-Loop for the Visually Impaired” (ActiVis) project aims to develop a novel electronic travel aid that tackles the “last 10 yards problem” and enables blind users to navigate independently in unknown environments, ultimately enhancing or replacing existing solutions such as guide dogs and white canes. This paper describes some of the project’s key challenges, in particular the design of a user interface (UI) that translates visual information from the camera into guidance instructions for the blind person, taking into account the limitations introduced by visual impairment. We also propose a multimodal UI that caters to the needs of the visually impaired and exploits human-machine progressive co-adaptation to enhance the user’s experience and improve navigation performance.

Unpublished Internal Papers


Abstract: Our aim is to build a navigation system for the visually impaired that uses a combination of feedback modes to guide the user to their destination. In this paper, we investigate the effectiveness of a spatial audio tone with a varying pitch component, played through bone-conducting headphones, in conveying the pan and tilt angles of a target to the user in a pointing task. We also examine how changes in the behaviour of the pitch affect a user's performance. We conducted a set of experiments with blindfolded users and found that the varying pitch component works well in conveying the tilt angle of a target. Furthermore, we determined that the audio interface adheres to Fitts's Law and used it as a metric to establish which pitch setting produces the best results. We discovered a trade-off between speed and accuracy in the pointing task: speed is maximised with the low pitch setting and accuracy with the high one.
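
For reference, Fitts's Law in its common Shannon formulation (the abstract does not state which formulation the paper adopts) relates movement time to task difficulty:

    MT = a + b \log_2(D/W + 1)

where MT is the movement time, D the distance to the target, W the target width, and a, b are empirically fitted constants. Comparing the fitted parameters (or the derived throughput) across pitch settings is what allows one setting to be ranked against another.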

Relevant Reading


Abstract: In this paper we discuss the concept of co-adaptation between a human operator and a machine interface, and we summarise its application with emphasis on two different domains, teleoperation and assistive technology. The analysis of the literature reveals that only in a few cases has the possibility of a temporal evolution of the co-adaptation parameters been considered. In particular, the role of time-related indices that capture changes in the motor and cognitive abilities of the human operator has been overlooked. We argue that, for a more effective long-term co-adaptation process, the interface should be able to predict and adjust its parameters according to the evolution of human skills and performance. We therefore propose a novel approach, termed progressive co-adaptation, whereby human performance is continuously monitored and the system makes inferences about changes in the user’s cognitive and motor skills. We illustrate the features of progressive co-adaptation in two possible applications, robotic telemanipulation and active vision for the visually impaired.
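
To make the idea concrete, a minimal sketch of a progressive co-adaptation loop is given below; it illustrates the concept rather than the paper's actual algorithm, and the class name, thresholds and assistance parameter are all hypothetical.

    # Minimal sketch of progressive co-adaptation (hypothetical names/thresholds).
    # The interface keeps a rolling window of recent user performance and slowly
    # re-tunes an "assistance" parameter as the user's skill evolves.
    from collections import deque

    class ProgressiveCoAdapter:
        def __init__(self, window=20, rate=0.1):
            self.times = deque(maxlen=window)   # recent completion times (s)
            self.errors = deque(maxlen=window)  # recent pointing errors (deg)
            self.assistance = 1.0               # 1.0 = maximum guidance detail
            self.rate = rate                    # adaptation step size

        def record_trial(self, completion_time, pointing_error):
            self.times.append(completion_time)
            self.errors.append(pointing_error)

        def adapt(self):
            # Lower the assistance gradually as the user becomes faster and
            # more accurate; raise it again if performance degrades.
            if len(self.times) < self.times.maxlen:
                return self.assistance  # not enough evidence yet
            mean_time = sum(self.times) / len(self.times)
            mean_error = sum(self.errors) / len(self.errors)
            skilled = mean_time < 3.0 and mean_error < 5.0  # example thresholds
            target = 0.3 if skilled else 1.0
            self.assistance += self.rate * (target - self.assistance)
            return self.assistance

The gradual update (a small step towards the target on every call) is the point of the "progressive" qualifier: the interface tracks the evolution of the user's skill instead of switching its behaviour abruptly.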

Abstract: The widespread availability of mobile devices, such as smartphones and tablets, has the potential to bring substantial benefits to people with sensory impairments. The solution proposed in this paper is part of an ongoing effort to create an accurate obstacle and hazard detector for the visually impaired, embedded in a hand-held device. In particular, the paper presents a proof of concept for a multimodal interface that controls the orientation of a smartphone’s camera, while the device is held by a person, using a combination of vocal messages, 3D sounds and vibrations. The solution, which is to be evaluated experimentally with users, will enable further research in the area of active vision with human-in-the-loop, with potential applications to mobile assistive devices for the indoor navigation of visually impaired people.
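
As a sketch of how such an interface might combine the three feedback channels, the following maps a camera-pointing error to one of the modalities; the thresholds, mapping and function name are illustrative assumptions, not the implementation described in the paper.

    import math

    def feedback_for_error(pan_err_deg, tilt_err_deg):
        # Choose a feedback mode from the camera-pointing error (degrees):
        # vibration when nearly on target, a spatialised tone at medium error,
        # and a spoken correction when the error is large.
        err = math.hypot(pan_err_deg, tilt_err_deg)
        if err < 3.0:
            return ("vibration", {"duration_ms": 80})
        if err < 25.0:
            # Pan the tone towards the target; raise pitch with tilt error.
            pitch_hz = 440.0 + 10.0 * tilt_err_deg
            return ("audio3d", {"azimuth_deg": pan_err_deg, "pitch_hz": pitch_hz})
        direction = "left" if pan_err_deg < 0 else "right"
        return ("speech", {"message": "turn " + direction})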

Lincoln Centre for Autonomous Systems,
School of Computer Science
University of Lincoln, UK

Telephone: +44 (0)5122 886897