Details
Paper ID 49
Difficulty - Easy

Categories

  • Computer Vision
  • Object recognition
  • Easy

Abstract - Sign language detection by technology is an overlooked problem, despite there being a large social group that could benefit from it. Few technologies exist to connect this social group with the rest of the world, and understanding sign language is one of the primary enablers of communication between its users and the wider society. Image classification and machine learning can help computers recognize sign language, which can then be interpreted for other people. This paper employs convolutional neural networks to recognize sign language gestures. The image dataset consists of static sign language gestures captured with an RGB camera. The images were preprocessed and then served as the cleaned input. The paper presents results obtained by retraining and testing this sign language gesture dataset on a convolutional neural network model based on Inception v3, which applies multiple convolution filters in parallel to the same input. The validation accuracy obtained was above 90%. The paper also reviews previous attempts at sign language detection using machine learning and image depth data, takes stock of the challenges posed by such a problem, and outlines future scope.
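A minimal transfer-learning sketch of the approach the abstract describes: retraining Inception v3 on a folder of static gesture images. The directory layout ("dataset/train", "dataset/val"), class count, and training settings below are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch: retrain Inception v3 on static sign language gesture images.
# Assumes images are organised in one subfolder per gesture class.
import tensorflow as tf

IMG_SIZE = (299, 299)   # Inception v3's native input resolution
NUM_CLASSES = 24        # assumed number of static gesture classes

# Load the gesture images; resizing here stands in for the preprocessing
# (cleaning) step mentioned in the abstract.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/train", image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/val", image_size=IMG_SIZE, batch_size=32)

# Start from ImageNet weights and freeze the convolutional base,
# so only the new classification head is retrained on gesture data.
base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = tf.keras.applications.inception_v3.preprocess_input(inputs)
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = tf.keras.Model(inputs, outputs)

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Freezing the pretrained base and training only the new head is the standard retraining setup for a small gesture dataset; fine-tuning upper Inception blocks afterwards is an optional extra step.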

Paper - https://ieeexplore.ieee.org/document/8537248

Dataset - https://drive.google.com/file/d/1frzllkr_axpgqbw_7takmn-0pm3na1-a/view?usp=sharing