Recent advances in mobile technology have the potential to radically change the quality of tools available to people with sensory impairments, in particular blind and partially sighted people. Almost every smartphone and tablet is now equipped with high-resolution cameras, typically used for photos, videos, games and virtual reality applications, yet very little has been proposed to exploit these sensors for user localisation and navigation. To this end, the “Active Vision with Human-in-the-Loop for the Visually Impaired” (ActiVis) project aims to develop a novel electronic travel aid that tackles the “last 10 yards problem” and enables blind users to navigate unknown environments independently, ultimately enhancing or replacing existing solutions such as guide dogs and white canes. Furthermore, to address the usability problems, sensor overload and steep learning curve often associated with such devices, we intend to add an adaptive module to the navigation system that monitors a user's habits and skills and adapts itself to better suit the individual user's limitations and strengths.
For this project, we are using a Google Project Tango device as a sensor platform that captures camera data and builds a 3D map from it. This map is then used to generate navigation signals and instructions for the user. These signals take the form of spatialised audio cues for 3D target acquisition, vibrations to warn the user of obstacles in their path, and voice prompts. On top of this, we intend to build a learning module that monitors the entire interaction, i.e. navigation performance as a function of the feedback signal parameters, and adapts these parameters over time to maximise performance.
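To give a flavour of how a spatialised audio cue might encode a 3D target, here is a minimal, hypothetical sketch: the function name and the specific mapping (bearing to stereo panning, elevation to pitch) are illustrative assumptions, not the project's actual implementation.

```python
import math

def audio_cue(bearing_deg, elevation_deg, base_freq=440.0):
    """Map a target's relative direction to a stereo audio cue.

    bearing_deg: horizontal angle to the target
                 (negative = left of the user, positive = right).
    elevation_deg: angle above (+) or below (-) head height.
    Returns (left_gain, right_gain, pitch_hz).
    """
    # Constant-power panning: fully left/right at +/-90 degrees.
    pan = max(-1.0, min(1.0, bearing_deg / 90.0))
    left_gain = math.cos((pan + 1.0) * math.pi / 4.0)
    right_gain = math.sin((pan + 1.0) * math.pi / 4.0)
    # Encode elevation as pitch: one octave up at +90 deg, one down at -90.
    pitch_hz = base_freq * (2.0 ** (elevation_deg / 90.0))
    return left_gain, right_gain, pitch_hz
```

A target dead ahead at head height would produce equal left/right gains at the base frequency; as the target moves right or up, the cue pans right and rises in pitch. The adaptive module described above could then tune parameters such as `base_freq` or the panning curve per user.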
For this project, a research group in the concentrating solar power (CSP) field was investigating the use of quadrotor drones to autonomously clean and calibrate the heliostat fields at a CSP power plant. However, because heliostat calibration demands high precision, the quadcopter's pose estimation error had to be characterised so that it could be integrated into the drone's model.
To build this error model, we used a Vicon motion capture system to generate a ground-truth dataset and built our own computer vision (CV) measurement system to compare against it. We opted for a CV system since it is fairly robust, cheap and easy to use. The difference between our system's measurements and the ground-truth data gave us an error dataset, which we used to train a radial basis function (RBF) neural network regression model that predicts the expected measurement error in each of the drone's six degrees of freedom. We found that this model's error estimates were within one standard deviation of the true error.
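The core of an RBF network regression of this kind can be sketched in a few lines of NumPy. This is a generic illustration, not the project's actual model: the Gaussian kernel, the choice of centres and the least-squares fit of the output weights are standard assumptions.

```python
import numpy as np

def rbf_features(X, centers, gamma):
    """Gaussian RBF activations of each sample against each centre."""
    # Squared Euclidean distance between every sample and every centre.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def fit_rbf(X, y, centers, gamma):
    """Fit the network's output weights by linear least squares."""
    Phi = rbf_features(X, centers, gamma)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centers, gamma, w):
    """Predicted error for new measurements X."""
    return rbf_features(X, centers, gamma) @ w
```

In the pose-error setting, `X` would hold CV measurements (6 columns for the 6 degrees of freedom) and `y` the measured deviation from the Vicon ground truth; fitting one weight vector per output dimension yields the expected error in each axis.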
Paper available here
The objective of this project was to build a cashless vending machine that gives the user the option to buy a product using NFC authentication, their student ID, or online web authentication. A central remote server acted as an authentication agent to track each user's credit and authorise transactions. All transactions and communications were secured and encrypted using a public/private key encryption scheme.
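The server's credit-tracking and authorisation logic can be sketched as follows. This is a hypothetical minimal sketch, not the deployed code: the class and method names are invented, and the public/private key signature check that would guard each request is deliberately omitted.

```python
class AuthServer:
    """Minimal sketch of the central server's virtual-credit ledger.

    A real deployment would first verify a public-key signature on
    every incoming request before touching the ledger.
    """

    def __init__(self):
        self.credits = {}   # user_id -> virtual credit balance
        self.log = []       # audit trail of authorised transactions

    def top_up(self, user_id, amount):
        """Add virtual credits to a user's account."""
        self.credits[user_id] = self.credits.get(user_id, 0) + amount

    def authorise(self, user_id, price):
        """Approve the purchase only if the balance covers the price."""
        if self.credits.get(user_id, 0) < price:
            return False
        self.credits[user_id] -= price
        self.log.append((user_id, price))
        return True
```

Keeping the ledger and the authorisation decision on the server means the vending machine itself never needs to trust the phone or card: it simply forwards the authenticated request and dispenses only on an approved response.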
The system was deployed with a purpose-built Android app and a working model vending machine that we built around a Raspberry Pi; however, all transactions used virtual credits rather than real currency.
Paper available here