I am a PhD student in the School of Computer Science at the University of Lincoln, UK, and a member of the Lincoln Centre for Autonomous Systems Research (L-CAS).

My current research is focused on finding a novel way to combine machine learning, computer vision and human-machine interaction to create an indoor navigation system for the visually impaired that can adapt itself to the needs and limitations of the individual user by learning their habits over time. The goal is to provide a visually impaired user with a working navigation system without exposing them to a steep learning curve or sacrificing navigation performance in terms of target localisation.

Our system is based on a Google Project Tango device, which lets us build a 3D map of the environment and use it to detect obstacles in the user's path. We then use a combination of vibration, spatialised audio cues and voice prompts to guide the user to their target destination while warning them of obstacles along the way.
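
To give a feel for the spatialised audio component, here is a minimal Python sketch that turns the user's pose and the target position into constant-power stereo gains. The function names, the 2D map frame and the panning scheme are illustrative assumptions for this post, not the actual implementation on the Tango device.

```python
import math

def bearing_to_target(user_pos, user_heading, target_pos):
    """Signed angle (radians) from the user's heading to the target.
    Positive means the target is to the user's left (counter-clockwise)."""
    dx = target_pos[0] - user_pos[0]
    dy = target_pos[1] - user_pos[1]
    angle = math.atan2(dy, dx) - user_heading
    # Wrap to [-pi, pi] so left/right is unambiguous.
    return math.atan2(math.sin(angle), math.cos(angle))

def stereo_gains(bearing):
    """Map a bearing to constant-power (left, right) gains for a stereo cue.
    Hard left at -pi/2 and beyond, hard right at +pi/2 and beyond."""
    # pan: -1 = full left, +1 = full right; a positive bearing (left) pans left.
    pan = max(-1.0, min(1.0, -bearing / (math.pi / 2)))
    theta = (pan + 1.0) * math.pi / 4  # 0 .. pi/2
    return math.cos(theta), math.sin(theta)

# Example: target ahead and to the left -> louder in the left channel.
left, right = stereo_gains(bearing_to_target((0.0, 0.0), 0.0, (1.0, 1.0)))
print(f"left={left:.2f}, right={right:.2f}")  # left=0.92, right=0.38
```

Constant-power panning keeps the overall loudness roughly steady as the cue moves between the ears, which matters when the user relies on level differences to judge direction.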

In March, my supervisor and I were invited to present our work at the AAAI Spring Symposium Series hosted at Stanford University, USA. The campus and its surroundings are very impressive; it's easy to see why Silicon Valley is centred on the Bay Area. As for the symposium, it was a great experience and we received very positive feedback from the delegates.

We successfully conducted an experiment with our spatial sound interface involving 40 blindfolded test subjects. The goal of these experiments was to get a better understanding of how a typical user with late-onset blindness (simulated with the blindfold) would respond when tasked with pointing a camera toward a virtual target whose location is conveyed to the user through a spatial sound cue.
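
For readers curious how such a pointing task can be scored, a natural metric is the angle between the camera's optical axis and the direction to the virtual target. The snippet below is only a sketch of that metric, assuming both directions are available as 3-vectors in a common frame; it is not the analysis code from the experiment.

```python
import math
import numpy as np

def angular_error_deg(camera_axis, target_dir):
    """Angle in degrees between the camera's optical axis and the
    direction from the camera to the virtual target (same frame)."""
    c = np.asarray(camera_axis, dtype=float)
    t = np.asarray(target_dir, dtype=float)
    cos_a = float(np.dot(c, t) / (np.linalg.norm(c) * np.linalg.norm(t)))
    # Clamp to guard against rounding just outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

# Example: target 10 degrees to the right of the optical axis.
axis = [0.0, 0.0, 1.0]  # camera looks along +z
target = [math.sin(math.radians(10)), 0.0, math.cos(math.radians(10))]
print(f"{angular_error_deg(axis, target):.1f} deg")  # 10.0 deg
```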

Lincoln Centre for Autonomous Systems,
School of Computer Science
Brayford Pool
University of Lincoln, UK

Office: INB 3023
Telephone: +44 (0)1522 886897