Title: Gesture Recognition of RGB and RGB-D Static Images Using Convolutional Neural Networks
Authors: Rubén González-Crespo, Elena Verdú, Manju Khari, Aditya Kumar Garg
Journal: International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI), Volume 5, pp. 22-27, December 2019
ISSN: 1989-1660
Keywords: Image Processing; Gesture Recognition; Sign Language; Convolutional Neural Network (CNN)
URL: https://www.ijimai.org/journal/sites/default/files/files/2019/09/ijimai20195_7_2_pdf_18405.pdf

Abstract: Human-computer interaction has long been a fascinating field, and with the rapid development of computer vision, gesture-based recognition systems have become an interesting and diverse research topic. Recognizing human gestures in the form of sign language, however, is a complex and challenging task. Various traditional methods have been applied to sign language recognition, but achieving high accuracy remains difficult. This paper proposes an RGB and RGB-D static gesture recognition method based on a fine-tuned VGG19 model, which uses a feature concatenation layer to fuse RGB and RGB-D images and increase the accuracy of the neural network. The authors implement the proposed model on an American Sign Language (ASL) recognition dataset, achieve a 94.8% recognition rate, and compare the model with other CNN and traditional algorithms on the same dataset.
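
The abstract describes a two-stream design: features extracted from RGB and RGB-D inputs by a fine-tuned VGG19 are fused through a concatenation layer before classification. The paper's code is not part of this record, so the sketch below is only a rough PyTorch illustration of that general idea, not the authors' implementation; the class count (26), the ImageNet-pretrained torchvision backbones, the choice of frozen layers, the pooled 512-dimensional features, and the classifier head sizes are all assumptions, and the depth map is assumed to be rendered as a three-channel image so both streams can reuse the same backbone.

# Minimal sketch (an assumption, not the authors' released code) of a two-stream,
# fine-tuned VGG19 that fuses RGB and depth features by concatenation, in the
# spirit of the architecture described in the abstract.
import torch
import torch.nn as nn
from torchvision import models


class TwoStreamVGG19(nn.Module):
    def __init__(self, num_classes=26):  # 26 ASL letter classes is an assumption
        super().__init__()
        # One ImageNet-pretrained VGG19 backbone per modality; the depth map is
        # assumed to be replicated to three channels so it fits the same backbone.
        self.rgb_stream = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
        self.depth_stream = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features
        for stream in (self.rgb_stream, self.depth_stream):
            # Freeze all but the last convolutional block (its 4 conv layers,
            # i.e. the last 8 weight/bias tensors) to mimic light fine-tuning.
            for p in list(stream.parameters())[:-8]:
                p.requires_grad = False
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Concatenated 512-d features from each stream feed a small classifier head.
        self.classifier = nn.Sequential(
            nn.Linear(512 * 2, 512),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(512, num_classes),
        )

    def forward(self, rgb, depth):
        f_rgb = self.pool(self.rgb_stream(rgb)).flatten(1)        # (N, 512)
        f_depth = self.pool(self.depth_stream(depth)).flatten(1)  # (N, 512)
        return self.classifier(torch.cat([f_rgb, f_depth], dim=1))


if __name__ == "__main__":
    model = TwoStreamVGG19()
    rgb = torch.randn(2, 3, 224, 224)     # batch of RGB images
    depth = torch.randn(2, 3, 224, 224)   # batch of 3-channel depth renderings
    print(model(rgb, depth).shape)        # torch.Size([2, 26])

The concatenation in the classifier head is the fusion step to which the abstract attributes the accuracy gain; the rest of the sketch is generic fine-tuning boilerplate and would need to match the paper's actual preprocessing and training setup to reproduce the reported 94.8% recognition rate.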