Jacobus Lock

Current and Ongoing

In an attempt to learn how the internet works, I decided to build my own website from scratch and host it on a VPS - this website, actually! I built the site in Python using the Django framework and have been hosting it since around 2018. In 2019, I took an interest in reclaiming control over my own data and Dockerised my entire web stack so that I could easily add other services, such as Gitea and Syncthing servers to sync all my data. Right now I'm migrating the entire site onto a local machine I built from spare parts so I can host it from my home.


Recent advances in mobile technology have the potential to radically change the quality of tools available for people with sensory impairments, in particular the blind and partially sighted. Nowadays almost every smartphone and tablet is equipped with high-resolution cameras, typically used for photos, videos, games and virtual reality applications, yet very little has been proposed to exploit these sensors for user localisation and navigation. To this end, the “Active Vision with Human-in-the-Loop for the Visually Impaired” (ActiVis) project aims to develop a novel electronic travel aid to tackle the “last 10 yards problem” and enable blind users to independently navigate in unknown environments, ultimately enhancing or replacing existing solutions such as guide dogs and white canes. Furthermore, to overcome the problems of usability, sensor overload and the steep learning curve often associated with such devices, we intend to add an adaptive module to the navigation system that will monitor a user's usage habits and skills and adapt itself to better suit the individual user's limitations and strengths.

For this project, we are using a Google Project Tango device as a sensor platform that takes in camera data and builds a 3D map with it. This information is then used to generate navigation signals and instructions for the user. These signals take the form of spatialised audio cues for 3D target acquisition, vibration to warn the user of obstacles in the way, and voice prompts. On top of this, we intend to build a learning module that will monitor the entire interaction, i.e. navigation performance as a function of the feedback signal parameters. This module will then adapt these parameters over time in order to maximise performance.
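To give a flavour of what a spatialised audio cue involves, here is a minimal sketch in Python: it maps the target's bearing relative to the user's heading onto a stereo pan and a pitch that rises as the user faces the target. The function name, the pan/pitch mapping and the frequency range are all illustrative assumptions, not the actual ActiVis sonification.

```python
import math

def audio_cue(user_x, user_y, user_heading, target_x, target_y):
    """Map the target's bearing relative to the user's heading
    (radians, counter-clockwise positive) to a stereo pan in [-1, 1]
    (-1 = hard left) and a pitch that rises as the user turns to
    face the target. Illustrative only."""
    bearing = math.atan2(target_y - user_y, target_x - user_x)
    # wrap the relative angle into [-pi, pi)
    rel = (bearing - user_heading + math.pi) % (2 * math.pi) - math.pi
    # positive rel = target to the user's left -> pan left (negative)
    pan = -max(-1.0, min(1.0, rel / (math.pi / 2)))
    # 880 Hz when facing the target, falling towards 440 Hz behind it
    pitch_hz = 440.0 + 440.0 * (1.0 - abs(rel) / math.pi)
    return pan, pitch_hz
```

A target dead ahead produces a centred pan and the highest pitch; a target 90 degrees to the left pans hard left at a lower pitch.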

For this project, a research group in the concentrating solar power (CSP) field was looking into using quadrotor drones to autonomously clean and calibrate the heliostat fields at a CSP power plant. However, given the precision required to calibrate heliostats, the drone's pose estimation error first had to be characterised so that it could be incorporated into the drone's model.

To build this error model, we used a Vicon motion-capture system to generate a ground-truth dataset and built our own computer vision (CV) measurement system to compare against it. We opted for a CV system since it is fairly robust, cheap and easy to use. The difference between our system's measurements and the ground-truth data gave us an error dataset, which we used to train a radial basis function (RBF) neural network regression model that estimates the expected measurement error across the drone's six degrees of freedom. We also found that this model's error estimates were within one standard deviation of the true error.
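The core of RBF regression can be sketched in a few lines of plain Python. This toy version fits an exact Gaussian-RBF interpolant in one dimension (one centre per sample, linear system solved by naive elimination); the real model worked over six dimensions with the training details described above, so treat the function names and parameters here as assumptions for illustration.

```python
import math

def rbf_fit(xs, ys, gamma=1.0):
    """Fit an exact Gaussian-RBF interpolant: one centre per sample.
    Solves Phi @ w = y with partial-pivot Gaussian elimination
    (fine for the handful of points used here)."""
    n = len(xs)
    phi = [[math.exp(-gamma * (xs[i] - xs[j]) ** 2) for j in range(n)]
           for i in range(n)]
    a = [row[:] + [ys[i]] for i, row in enumerate(phi)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = a[r][n] - sum(a[r][c] * w[c] for c in range(r + 1, n))
        w[r] = s / a[r][r]
    return w

def rbf_predict(x, xs, w, gamma=1.0):
    """Weighted sum of Gaussian bumps centred on the training inputs."""
    return sum(wi * math.exp(-gamma * (x - xi) ** 2) for wi, xi in zip(w, xs))
```

Because the fit is exact, the interpolant passes through every training point; a regularised or fixed-centre variant would be used for noisy error data in practice.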

The objective of this project was to make a cashless vending machine that gives the user the option to buy a product using NFC authentication, their student ID or online web authentication. A central remote server was used as an authentication agent to track a user's credit and authorise transactions. All transactions and communication were fully secured and encrypted using a public/private key encryption scheme.
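The sign-and-verify step of such a transaction protocol can be sketched with Python's standard library. Note the deliberate simplification: the stdlib offers no asymmetric crypto, so this sketch uses an HMAC over a shared key as a stand-in for the public/private-key signatures the real system used, and the message fields and key are made-up examples.

```python
import hashlib
import hmac
import json

SECRET = b"demo-shared-key"  # stand-in; the real system used asymmetric keys

def sign_transaction(user_id, product, price):
    """Serialise a transaction deterministically and attach an HMAC tag."""
    msg = json.dumps({"user": user_id, "item": product, "price": price},
                     sort_keys=True).encode()
    tag = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return msg, tag

def verify_transaction(msg, tag):
    """Recompute the tag server-side and compare in constant time."""
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Any tampering with the serialised message invalidates the tag, which is what lets the server reject forged or modified purchase requests.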

The system was deployed with a purpose-built Android app and a real model vending machine that we built with a Raspberry Pi; however, all the transactions used virtual credits and not real currency.

Side Projects

A Raspberry Pi-based system to control a model town and trainset. Components include >1000 RGB LEDs, 5 DC motors, 10 OLED screens and a set of traffic lights, each individually controlled via a web interface and purpose-written API.

An Android app that listens for inaudible ultrasonic signals emitted from markers placed around a city and processes the information contained within the audio signal. This is done with a real-time Fourier transform processor.
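Detecting a single known ultrasonic tone does not require a full FFT; the Goertzel algorithm computes the power of one frequency bin in a single pass. This sketch is an illustrative stand-in for the app's real-time processor (which ran on Android, not in Python):

```python
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Power of one DFT bin via the Goertzel algorithm -- a cheap
    alternative to a full FFT when only one tone matters."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)  # nearest integer bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2
```

Fed a 19 kHz tone sampled at 48 kHz, the power at the 19 kHz bin dominates neighbouring bins by orders of magnitude, which is how a marker's carrier can be detected before decoding its payload.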

A Raspberry Pi system that can control the access to different doors by locking/unlocking them and providing certain users access to certain rooms via a permissions system. This is all controlled and authenticated through a web interface and API.
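The permissions check at the heart of the door system can be reduced to a lookup table mapping doors to authorised users; the table contents and names below are hypothetical:

```python
# hypothetical permission table: door id -> set of authorised user ids
PERMISSIONS = {
    "lab": {"alice", "bob"},
    "store": {"alice"},
}

def may_open(user_id, door_id):
    """True if the user is authorised for that door; unknown doors
    default to locked."""
    return user_id in PERMISSIONS.get(door_id, set())
```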

Lincoln Centre for Autonomous Systems,
School of Computer Science
University of Lincoln, UK

Telephone: +44 (0)5122 886897