A real-time American Sign Language (ASL) recognition system
Using YOLOv8 to train on a custom dataset for sign language recognition.
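A minimal training sketch using the Ultralytics YOLOv8 API; the dataset config `data.yaml` and the sample image path are placeholders, not files from this repo:

```python
from ultralytics import YOLO

# Start from pretrained COCO weights and fine-tune on the custom dataset.
model = YOLO("yolov8n.pt")

# "data.yaml" is a hypothetical dataset config listing train/val paths and class names.
model.train(data="data.yaml", epochs=50, imgsz=640)

# Run inference on a single image (placeholder path); results hold boxes, classes, confidences.
results = model("sample_sign.jpg")
```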
Signfy is a Video Chat app that incorporates sign language translation to bridge the communication gap between the deaf and hearing communities.
An SSD detector with a MobileNet backbone, trained via transfer learning, detects and recognizes sign gestures; the aim is a web app that performs real-time ASL recognition from user input and generates English text.
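As a rough illustration (not this project's actual code), a pretrained SSD-MobileNet detector can be loaded from TensorFlow Hub and run on a frame; fine-tuning it on sign-gesture classes would follow the usual transfer-learning workflow:

```python
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# Load a pretrained SSD MobileNet V2 detector (COCO weights) from TensorFlow Hub.
detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

# The detector expects a uint8 batch of shape [1, H, W, 3].
frame = np.zeros((320, 320, 3), dtype=np.uint8)  # placeholder for a camera frame
outputs = detector(tf.convert_to_tensor(frame)[tf.newaxis, ...])

boxes = outputs["detection_boxes"][0].numpy()   # normalized [ymin, xmin, ymax, xmax]
scores = outputs["detection_scores"][0].numpy()
print(boxes[scores > 0.5])                      # keep confident detections only
```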
This web-based app detects and interprets sign language into English words in real time, helping speech-impaired individuals communicate with others more easily.
American Sign Language Recognition
A 2D platformer that teaches American Sign Language (ASL) using Leap Motion-powered hand gestures
An application that translates sign language into audio and text.
Google Home feature that can recognize ASL (American Sign Language) using TensorFlow Lite, MNIST and OpenCV.
Teaching computers to understand sign language! This project uses image processing to recognize hand signs, making technology more inclusive and accessible.
This project detects American Sign Language (ASL) alphabet letters in real time using computer vision. The system uses OpenCV for image processing, MediaPipe for hand detection, and a Random Forest classifier from scikit-learn for letter recognition.
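A compressed sketch of that pipeline (hand-landmark features fed to a Random Forest); the training data below is a random placeholder, not the project's dataset:

```python
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier

mp_hands = mp.solutions.hands

def landmark_features(bgr_image):
    """Return a flat (x, y) landmark vector for the first detected hand, or None."""
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        results = hands.process(cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    lm = results.multi_hand_landmarks[0].landmark
    return np.array([c for p in lm for c in (p.x, p.y)])  # 21 points -> 42 values

# X: landmark feature vectors; y: letter labels (placeholder values here).
X, y = np.random.rand(100, 42), np.random.choice(list("ABC"), 100)
clf = RandomForestClassifier(n_estimators=100).fit(X, y)
print(clf.predict(X[:1]))
```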
GestureGo enables bidirectional communication between people with hearing or speech impairments and others, narrowing the communication gap so that everyone can understand and be understood.
Sign language gesture recognition is done in two ways: letters are detected from signs and assembled into words, which are then converted to speech; alternatively, gestures representing whole words are recognized directly.
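The letter-to-word-to-speech path might look like this sketch, assuming a hypothetical per-frame letter classifier upstream and pyttsx3 for offline text-to-speech:

```python
import pyttsx3

engine = pyttsx3.init()

def speak_word(letters):
    """Join buffered letters into a word and speak it aloud."""
    engine.say("".join(letters))
    engine.runAndWait()

# Hypothetical stream of per-frame letter predictions; None stands in for a detected pause.
buffer = []
for letter in ["H", "I", None]:
    if letter is None:
        speak_word(buffer)  # pause marker ends the current word
        buffer.clear()
    else:
        buffer.append(letter)
```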
This project builds a real-time sign language detection system that uses CNNs to translate sign language gestures into text, improving communication accessibility for the hearing-impaired by recognizing and displaying gestures from live video input.
Lessons and projects from Paul McWhorter's AI and OpenCV tutorials, with additional improvements and insights.
Sign_languagues_recognition
Major Project in Final Year B.Tech (IT). Live Stream Sign Language Detection using Deep Learning.
A real-time translator app for sign language using the MediaPipe Hands solution and an LSTM TFLite model.
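Roughly, inference with such a model could run as below; the model path, sequence length, and label list are assumptions for illustration, not taken from the repo:

```python
import numpy as np
import tensorflow as tf

# Load the converted LSTM model; "sign_lstm.tflite" is a hypothetical path.
interpreter = tf.lite.Interpreter(model_path="sign_lstm.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# One sequence of 30 frames, each a 63-value hand-landmark vector (21 points x 3 coords).
sequence = np.zeros((1, 30, 63), dtype=np.float32)  # placeholder landmark buffer
interpreter.set_tensor(inp["index"], sequence)
interpreter.invoke()
probs = interpreter.get_tensor(out["index"])[0]

labels = ["hello", "thanks", "yes"]  # assumed gesture vocabulary
print(labels[int(np.argmax(probs))])
```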
Real-time Hebrew sign language recognition using a CNN, Keras, and OpenCV.
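A generic CNN of the kind such projects use might be defined like this; the input size and class count are illustrative assumptions, not values from the repo:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 22  # assumption: roughly one class per Hebrew letter sign

# A small CNN for 64x64 grayscale hand images.
model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```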