CE1 – Incorporating deep learning and haptic feedback to discover the situation and how to navigate so that the visually impaired cope naturally

Ahmed Mohamed Ahmed Abdelhady
Egypt

Inventor at the Beirut International Innovation Show (BIIS) 2021

Overview

The magnitude and causes of visual impairment have been estimated globally and by WHO region for the year 2010 from recent data. Globally, the number of visually impaired people is estimated to be 285 million, of whom 39 million are blind.
The system incorporates deep learning and haptic feedback to help visually impaired people understand their surroundings and navigate naturally. It uses object detection to identify both the position and the state of a crosswalk light (whether it shows a raised hand or a walking person) as well as vehicles. This information is processed and conveyed to the user through haptic feedback from vibration motors inside a headband, so the user knows whether or not it is safe to cross the street or walk along a road, which direction to walk, and how to handle certain situations with other people, all by using deep learning.

Development started by gathering a custom dataset of more than 2,000 images of crosswalk lights and roads. Many different model architectures were then trained on this dataset over repeated iterations, looking for a balance between accuracy and speed; an SSDLite MobileNet model from the TensorFlow model zoo eventually offered the balance required. Using transfer learning and many training iterations, a working model was produced and deployed on a Raspberry Pi with a camera. A power button and vibration motors were soldered on, and a custom 3D-printed case with room for a battery was designed, making the prototype a wearable device.
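
The abstract does not state how the trained model was packaged for the Raspberry Pi. One plausible path, sketched below, is converting the fine-tuned SSDLite MobileNet to TensorFlow Lite so that inference stays fast on the Pi's CPU. The directory name, output filename, and choice of dynamic-range quantization are assumptions, and the SavedModel is assumed to already include TFLite-compatible detection postprocessing.

```python
# Minimal conversion sketch (assumed workflow, not the author's exact commands).
import tensorflow as tf

# "exported_model/saved_model" is an assumed path to the fine-tuned SSDLite
# MobileNet exported as a TensorFlow SavedModel.
converter = tf.lite.TFLiteConverter.from_saved_model("exported_model/saved_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range weight quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```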
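
To illustrate how detections could drive the headband, here is a minimal sketch of an on-device loop. It assumes the converted model.tflite, a camera readable through OpenCV, two vibration motors on GPIO pins 17 and 27 (BCM numbering), and class IDs 1, 2, and 3 for the walk signal, stop hand, and vehicles. All of these pins, IDs, filenames, and thresholds are illustrative assumptions rather than the author's exact implementation.

```python
import time
import cv2
import numpy as np
import RPi.GPIO as GPIO
from tflite_runtime.interpreter import Interpreter

LEFT_MOTOR, RIGHT_MOTOR = 17, 27          # assumed GPIO pins for the headband motors
WALK_ID, STOP_ID, VEHICLE_ID = 1, 2, 3    # assumed class IDs from the custom label map
SCORE_THRESHOLD = 0.5

GPIO.setmode(GPIO.BCM)
GPIO.setup([LEFT_MOTOR, RIGHT_MOTOR], GPIO.OUT, initial=GPIO.LOW)

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
_, in_h, in_w, _ = input_details[0]["shape"]

def pulse(pins, seconds=0.3):
    """Drive one or both motors for a short vibration pulse."""
    GPIO.output(pins, GPIO.HIGH)
    time.sleep(seconds)
    GPIO.output(pins, GPIO.LOW)

cap = cv2.VideoCapture(0)
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            continue
        # Resize and convert the frame to the model's expected input layout.
        rgb = cv2.cvtColor(cv2.resize(frame, (int(in_w), int(in_h))), cv2.COLOR_BGR2RGB)
        input_data = np.expand_dims(rgb, 0)
        if input_details[0]["dtype"] == np.float32:
            input_data = (input_data.astype(np.float32) - 127.5) / 127.5  # MobileNet scaling
        interpreter.set_tensor(input_details[0]["index"], input_data)
        interpreter.invoke()

        # Standard SSD postprocess outputs; the tensor order can differ per export.
        boxes = interpreter.get_tensor(output_details[0]["index"])[0]
        classes = interpreter.get_tensor(output_details[1]["index"])[0]
        scores = interpreter.get_tensor(output_details[2]["index"])[0]

        for box, cls, score in zip(boxes, classes, scores):
            if score < SCORE_THRESHOLD:
                continue
            if int(cls) == WALK_ID:
                pulse([LEFT_MOTOR, RIGHT_MOTOR])          # both motors: safe to cross
            elif int(cls) in (STOP_ID, VEHICLE_ID):
                # Vibrate the side the hazard is on, using the box centre.
                x_centre = (box[1] + box[3]) / 2.0
                pulse(LEFT_MOTOR if x_centre < 0.5 else RIGHT_MOTOR)
finally:
    cap.release()
    GPIO.cleanup()
```

Mapping a detection's horizontal position to the left or right motor is one way a vibration pattern could encode direction, in line with the abstract's goal of telling the user which way to walk; the actual encoding used in the prototype is not described in the source.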