Inspiration
Technology for social good is an under-appreciated area. Our inspiration comes from the idea that we should inspire and empower the next generation of technologists to build a better world. Machine learning and AI are powerful tools that let technologists transform products and create real impact. Our aim is to improve inclusion for deaf individuals, a minority within our society. We believe that wider adoption of American Sign Language (ASL) would yield significant benefits: the more people who know ASL, the easier communication becomes between hearing individuals and the deaf community, fostering greater integration into society.
What it does
ASLearner is a trained ML model that identifies American Sign Language letters and reports a confidence score for each detection, giving the user quantitative feedback on their signing.
How we built it
- Computer Vision: The model was built on the YOLOv8 detection algorithm and deployed with Roboflow, using web-scraped data and Google's MediaPipe pre-trained datasets for ASL letters. This combination of data allowed the model to generalize across different hands and backgrounds. Data processing was done with the OpenCV and NumPy Python libraries, integrated with the Roboflow API (a minimal inference sketch follows this list).
- UI/UX: The display for the model was created using React and TypeScript. Front-end elements for image and video processing communicate with the model through a Flask REST API (see the endpoint sketch after this list).
- Methodology and Statistics: We utilized a standard object-detection approach with reinforcement learning to develop our model. The dataset consists of 5,500 sign-language images, and the model achieves 92.6% accuracy across all ASL hand gestures.
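
For concreteness, here is a minimal sketch of the detection loop: YOLOv8 weights run on webcam frames with OpenCV. The weights path, confidence threshold, and window name are illustrative assumptions, not our exact configuration.

```python
# Minimal detection-loop sketch: YOLOv8 weights run on webcam frames
# with OpenCV. "best.pt" and conf=0.5 are illustrative placeholders.
import cv2
from ultralytics import YOLO

model = YOLO("best.pt")   # hypothetical path to the trained ASL-letter weights
cap = cv2.VideoCapture(0)  # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Each result carries bounding boxes, class ids (letters), and scores.
    results = model(frame, conf=0.5, verbose=False)
    for box in results[0].boxes:
        letter = model.names[int(box.cls)]  # predicted ASL letter
        score = float(box.conf)             # model confidence in [0, 1]
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f"{letter} {score:.2f}", (x1, y1 - 8),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.7, (0, 255, 0), 2)

    cv2.imshow("ASLearner", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```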
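And a sketch of how the Flask back end might expose predictions to the React front end. The `/predict` route, the `"frame"` field name, and the JSON shape are hypothetical, chosen just to illustrate the image-in, detections-out flow.

```python
# Hypothetical Flask endpoint: the front end POSTs an image and gets
# back a JSON list of detected letters with confidence scores.
import cv2
import numpy as np
from flask import Flask, jsonify, request
from ultralytics import YOLO

app = Flask(__name__)
model = YOLO("best.pt")  # same illustrative weights file as above

@app.route("/predict", methods=["POST"])
def predict():
    # Decode the uploaded image from the (assumed) "frame" form field.
    data = request.files["frame"].read()
    image = cv2.imdecode(np.frombuffer(data, np.uint8), cv2.IMREAD_COLOR)

    results = model(image, verbose=False)
    detections = [
        {"letter": model.names[int(box.cls)], "confidence": float(box.conf)}
        for box in results[0].boxes
    ]
    return jsonify(detections)

if __name__ == "__main__":
    app.run(debug=True)
```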
Challenges we ran into
- Integrating multiple APIs for video processing and display on our website created conflicts in data transfer, which led to inaccurate predictions.
- ASL signs had to be limited to stationary ones; our machine learning algorithm's ability to differentiate moving gestures, words, slang, and other complex movements was limited.
- In some cases, the model misidentifies non-hand objects with hand-like shapes as sign-language symbols.
- Building a working website to showcase our project was difficult, given our group's minimal front-end development experience.
Accomplishments that we're proud of
This is our first hackathon project! We're proud of everything we've been able to accomplish.
- We developed a model that successfully detects sign-language letters with 92.6% accuracy.
- We deployed the model to a working application interface with an integrated API.
What we learned
- How to utilize various front-end tools for website development.
- How machine learning algorithms are created and deployed.
- What computer vision is and how it can be applied to technology for social good.
What's next for ASLearner
We wish we had more time to realize the full vision for this project!
- Expand the model to concatenate detected letters into words as the user continues signing in ASL, with better accuracy.
- Enhance the visual design and graphics of our website with front-end tools.
- Create a game-based component for educational engagement, using accuracy scoring, points, cross-platform gameplay, custom design models, and the Unity game engine.