Sign-Language-Detection-and-Translation-using-YOLOv5

Overview: This project is a sign language detection and translation model built on YOLOv5. It first detects the user's hand with YOLOv5, then feeds the cropped hand image to a CNN that classifies it as the closest sign language image in our dataset. The goal is to give users a convenient and efficient way to communicate with people who use sign language, whether or not they know how to sign themselves.
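The detect-then-classify handoff described above can be sketched as follows. This is an illustrative sketch, not the repository's actual code: it assumes detections arrive as (x1, y1, x2, y2, confidence) tuples (the layout YOLOv5 results commonly expose) and shows only the cropping step between the detector and the CNN; the padding factor is an arbitrary choice.

```python
def best_hand_crop(detections, img_w, img_h, pad=0.1):
    """Pick the highest-confidence hand detection and return a padded,
    clamped crop box (x1, y1, x2, y2) to feed the sign-letter CNN.

    `detections` is a list of (x1, y1, x2, y2, confidence) tuples,
    the shape YOLOv5 detection results are commonly unpacked into.
    """
    if not detections:
        return None
    x1, y1, x2, y2, conf = max(detections, key=lambda d: d[4])
    # Pad the box slightly so the whole hand (fingertips, wrist) is kept.
    dw, dh = (x2 - x1) * pad, (y2 - y1) * pad
    x1, y1 = max(0, int(x1 - dw)), max(0, int(y1 - dh))
    x2, y2 = min(img_w, int(x2 + dw)), min(img_h, int(y2 + dh))
    return (x1, y1, x2, y2)
```

In the full pipeline, the returned box would be used to crop the frame before resizing it to the CNN's input size.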

Features:

Hand detection using YOLOv5: The model uses YOLOv5 to detect the user's hand in real time.

Sign language image translation: Once the hand is detected, the model feeds the cropped image to a CNN, which matches it to the closest sign language image in our dataset.

Customizable dataset: Users can add their own images to the dataset to improve the model's accuracy for specific signs.
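One common way to make such a dataset customizable is one subfolder per letter, with the class list derived from the folder names so newly added letters are picked up automatically. The helper below is a sketch under that assumption; the actual layout of this repository's dataset may differ.

```python
from pathlib import Path

def load_class_names(dataset_dir):
    """Return sorted sign-letter class names, assuming one subfolder
    per letter (e.g. dataset/A, dataset/B, ...). Users adding a new
    sign just create a folder of example images for it."""
    return sorted(p.name for p in Path(dataset_dir).iterdir() if p.is_dir())
```

Sorting keeps the mapping from CNN output index to letter stable across runs.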

Examples and Use Cases:

A user who does not know sign language can use this model to communicate with someone who does. It can also help people who are learning sign language check whether they are performing a sign correctly.


Contributing guidelines: Contributions to this project are welcome! The model is currently limited to recognizing letters in sign language, but I would love to expand it to recognize more kinds of signs. If you're interested in contributing, here are some ways you can help:

Add new sign language images to our dataset: More images of sign language gestures will improve the model's accuracy. You can contribute by submitting images of gestures for inclusion in the dataset.

Improve the model's accuracy: If you have experience with machine learning, you can help improve our model's accuracy by optimizing the model architecture or training process.

Expand the scope of the project: I'm open to expanding the scope of our project beyond sign language recognition. If you have ideas for how we can use this model to solve other problems or improve accessibility, I'd love to hear from you.

If you're interested in contributing, please submit a pull request with your changes. Before submitting, please make sure your changes are well documented and tested. If you have any questions or need help getting started, feel free to reach out via email or social media.

I appreciate your interest in my project and look forward to your contributions!

License: This project is licensed under the MIT License.
