This project focuses on translating Indian Sign Language (ISL) gestures into meaningful text using vision-based gesture recognition and Natural Language Processing (NLP).
The system converts dynamic ISL gestures into text in real time, using MediaPipe Holistic for gesture recognition and LLaMA 3 to generate contextually accurate sentences from the recognized words.
- Real-time ISL gesture detection and translation.
- Dynamic gesture recognition.
- Converts gestures into text with coherent sentence formation.
- Custom-made dataset for ISL with 30 words.
- Clone the repository:

  ```bash
  git clone https://github.com/your-username/ISL-to-Text.git
  cd ISL-to-Text
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Run the gesture recognition system:

  ```bash
  python main.py
  ```

- Interact with the interface to detect ISL gestures and convert them into text.
- Gesture Recognition: MediaPipe Holistic identifies hand keypoints and tracks gestures (see the keypoint-extraction sketch after this list).
- Text Generation: LLaMA 3 forms sentences from the recognized words (see the prompting sketch below).
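The keypoint-extraction step could look like the following minimal sketch. The camera source, confidence thresholds, and the `extract_keypoints` helper are illustrative assumptions, not taken from this repository:

```python
import cv2
import mediapipe as mp
import numpy as np

mp_holistic = mp.solutions.holistic

def extract_keypoints(results):
    """Flatten both hands' landmarks into one fixed-length vector (zeros when a hand is not visible)."""
    lh = (np.array([[p.x, p.y, p.z] for p in results.left_hand_landmarks.landmark]).flatten()
          if results.left_hand_landmarks else np.zeros(21 * 3))
    rh = (np.array([[p.x, p.y, p.z] for p in results.right_hand_landmarks.landmark]).flatten()
          if results.right_hand_landmarks else np.zeros(21 * 3))
    return np.concatenate([lh, rh])

cap = cv2.VideoCapture(0)  # default webcam; an assumption
with mp_holistic.Holistic(min_detection_confidence=0.5,
                          min_tracking_confidence=0.5) as holistic:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures frames in BGR
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        keypoints = extract_keypoints(results)  # would be fed to the gesture classifier
        cv2.imshow("ISL", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```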
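Sentence generation from the recognized word sequence might be prompted as in the sketch below. It assumes LLaMA 3 is served locally via Ollama; the endpoint, model name, and prompt wording are assumptions, not from this repository:

```python
import requests

def words_to_sentence(words):
    """Ask a locally served LLaMA 3 model to turn recognized ISL words into a sentence."""
    prompt = (
        "The following words were recognized from Indian Sign Language gestures, in order: "
        + ", ".join(words)
        + ". Form one grammatically correct English sentence using them."
    )
    # Ollama's generate endpoint; assumes `ollama run llama3` is available locally
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip()

print(words_to_sentence(["I", "go", "school", "tomorrow"]))
```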
A custom ISL dataset with 30 common words is used for training and testing the model.
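One plausible on-disk layout for such a dataset is one NumPy array of keypoint frames per recorded sample; the directory structure, shapes, and word names here are illustrative assumptions:

```python
import numpy as np
from pathlib import Path

# Hypothetical layout: data/<word>/<sample>.npy, each file holding
# a (frames, features) array of keypoints captured for one gesture.
DATA_DIR = Path("data")
WORDS = ["hello", "thanks", "yes"]  # 3 of the 30 words, for illustration

X, y = [], []
for label, word in enumerate(WORDS):
    for sample in sorted((DATA_DIR / word).glob("*.npy")):
        X.append(np.load(sample))  # e.g. shape (30, 126): 30 frames x 126 keypoint values
        y.append(label)

X = np.stack(X)  # assumes every sample has the same frame count
y = np.array(y)
print(X.shape, y.shape)
```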
- Python
- MediaPipe Holistic
- LLaMA 3
- NumPy
Contributions are welcome! Please submit a pull request or open an issue to suggest improvements or report bugs.