Inspiration

Over the past few years, technology has been changing quite a lot: from manual to autonomous, from touch to voice-based activation, and so on. However, there are people who aren't able to keep up with this rapid change. I am talking about the elderly and differently abled population! I have seen this problem in my own house. My grandpa, who is 83 years old, has to bend down to turn on the lamp, and sometimes struggles to find the remote when it gets misplaced. I even adapted the lamp to make it voice activated using Alexa, but he still finds it difficult to convey his needs to Alexa. This is what inspired me to think of a solution that makes it really simple and easy for elderly and differently abled people to communicate their daily commands.

What it does

GEST.ai is a new-generation smart home hub. Like a normal smart home hub, it lets you interact with the devices all around you, but instead of touch or voice, you control everything with gestures. In short, GEST.ai is a new kind of aid for smart home control.

How I built it

It is simple! Instead of giving touch or voice commands, you give gesture-based commands. These commands are recognized by a cross-platform application that uses a machine learning model trained on the different gestures. Here are some of the commands I created: Light On, Light Off, TV On, TV Off, Volume Up, Volume Down, Emergency, Hungry, and finally Sick. After training on all of these commands, I integrated the code into a Raspberry Pi with a camera. I then programmed the camera to take a picture every few seconds so a prediction can run on each frame. As soon as a gesture is recognized, the corresponding command is executed. To make this happen I used the IFTTT platform and the PyCaw library; rough sketches of both are below.
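
Here is a minimal sketch of that capture-and-predict loop, assuming a TensorFlow Lite export of the trained model; the file names, IFTTT key, and event names below are placeholders, not the exact ones from my build:

```python
import time

import cv2                     # pip install opencv-python
import numpy as np
import requests
import tflite_runtime.interpreter as tflite  # on a Pi: pip install tflite-runtime

IFTTT_KEY = "YOUR_IFTTT_WEBHOOKS_KEY"  # placeholder

# Gesture label -> IFTTT Webhooks event name (placeholder event names).
EVENTS = {
    "Light On": "light_on",
    "Light Off": "light_off",
    "TV On": "tv_on",
    "TV Off": "tv_off",
    "Volume Up": "volume_up",
    "Volume Down": "volume_down",
    "Emergency": "emergency_alert",
    "Hungry": "hungry_alert",
    "Sick": "sick_alert",
}

# Load the exported model; labels.txt holds one gesture name per line.
interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
labels = [line.strip() for line in open("labels.txt")]

def predict(frame):
    """Resize a BGR camera frame to the model's input size and classify it."""
    h, w = inp["shape"][1], inp["shape"][2]
    rgb = cv2.cvtColor(cv2.resize(frame, (w, h)), cv2.COLOR_BGR2RGB)
    # Float exports typically expect inputs in [0, 1]; adjust if yours differs.
    x = np.expand_dims(rgb.astype(np.float32) / 255.0, axis=0)
    interpreter.set_tensor(inp["index"], x)
    interpreter.invoke()
    scores = interpreter.get_tensor(out["index"])[0]
    best = int(np.argmax(scores))
    return labels[best], float(scores[best])

def trigger(label):
    """Fire the IFTTT Webhooks event mapped to this gesture, if any."""
    event = EVENTS.get(label)
    if event:
        url = f"https://maker.ifttt.com/trigger/{event}/with/key/{IFTTT_KEY}"
        requests.post(url, timeout=5)

camera = cv2.VideoCapture(0)
while True:
    ok, frame = camera.read()
    if ok:
        label, score = predict(frame)
        if score > 0.9:  # only act on confident predictions
            trigger(label)
    time.sleep(3)  # take a picture every few seconds
```

And for the volume commands, here is the standard PyCaw pattern for nudging the master volume (note that PyCaw talks to the Windows Core Audio API, so this piece runs on a Windows machine rather than on the Pi itself):

```python
from ctypes import POINTER, cast

from comtypes import CLSCTX_ALL
from pycaw.pycaw import AudioUtilities, IAudioEndpointVolume

# Get the default speakers and an endpoint-volume interface for them.
devices = AudioUtilities.GetSpeakers()
interface = devices.Activate(IAudioEndpointVolume._iid_, CLSCTX_ALL, None)
volume = cast(interface, POINTER(IAudioEndpointVolume))

def nudge_volume(step):
    """Raise or lower master volume by `step` (e.g. +0.05 for Volume Up)."""
    level = volume.GetMasterVolumeLevelScalar()
    volume.SetMasterVolumeLevelScalar(max(0.0, min(1.0, level + step)), None)
```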

Challenges I ran into

The Raspberry Pi camera took longer to get working than I anticipated, so I had to speed up my work at the last minute, which was a big challenge. Training the ML model was also challenging: I was aiming for 98%+ accuracy, and after several rounds of fine-tuning I got there eventually.

Accomplishments that I'm proud of

I was able to get my ML model to interpret gestures for some really useful tasks. I also created a sign-language model, which took me a while, but I am proud of the prediction accuracy I was able to achieve. I am also proud of my Python code, which predicts the hand gestures and works really well!

What I learned

I learned what ML is and how it works, and how to use Lobe and Teachable Machine. I also learned how to integrate an ML model into a Raspberry Pi and how to use camera images to get correct predictions.

What's next for GEST.ai

Hey, but that’s not it! I plan to further enhance and improve my product by making it useful for people who are deaf or nonverbal. In addition, I want to turn this product into an app that lets anyone use this new smart home hub by creating their own gestures and connecting it to the different devices around them. The thing to note is that ideas are limitless, and so are we! THANK YOU!!

Built With

ifttt, lobe, pycaw, python, raspberry-pi, teachable-machine
