In this session, we’ll build a solution that detects American Sign Language (ASL) gestures via a webcam and translates them into written English in real time using a neural network.
To achieve this, we’ll combine a few different libraries and tools and show you how to use each. We start once again with OpenCV and Python, this time to capture live images and build our own labelled gesture dataset. Next, we’ll train and test our model in TensorFlow using transfer learning (with SSD MobileNet as the base model), and along the way we’ll show you how to use the TensorFlow Object Detection API. Once the model is ready, we’ll plug it back into Python and OpenCV to classify gestures from the live webcam feed in real time.
• Jupyter Notebook (https://jupyter.org/install)