Inspiration
We were originally inspired by the movie Ex Machina, which features an AI named Ava. She is a clear example of an AI that learns about human emotions and eventually uses them for her own gain, something we need to prevent from happening. We believe the best way to prevent it is to arm people with a better understanding of emotions.
What it does
The program uses Google's speech-to-text, face recognition, and sentiment analysis of words to detect emotions in real time during a live conversation.
How we built it
We mostly built on Google's APIs, combining their speech-to-text, face recognition, and sentiment analysis services into a single pipeline.
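A minimal sketch of how the three signals might be fused into one per-moment reading. The helper functions here are hypothetical stand-ins for the real Google API calls (Speech-to-Text, face-based emotion detection, and Natural Language sentiment), not the project's actual implementation:

```python
def transcribe(audio_chunk):
    # Stand-in for Google Speech-to-Text; the real call streams audio
    # and returns recognized text.
    return audio_chunk.get("speech", "")

def face_emotion(frame):
    # Stand-in for face recognition; the real call would return
    # emotion likelihoods for detected faces.
    return frame.get("emotion", "neutral")

def text_sentiment(text):
    # Stand-in for Natural Language sentiment analysis; returns a
    # toy score in [-1, 1] based on a tiny word list.
    positive = {"great", "happy", "love"}
    negative = {"bad", "sad", "hate"}
    words = text.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, score / max(len(words), 1)))

def fuse(audio_chunk, frame):
    # Combine all three services into one emotion snapshot for the
    # current moment of the conversation.
    text = transcribe(audio_chunk)
    return {
        "transcript": text,
        "face": face_emotion(frame),
        "sentiment": text_sentiment(text),
    }
```

The key design idea is that no single signal is trusted alone: the spoken words and the facial expression are reported side by side, so a sarcastic "great" with an angry face is visible as a mismatch.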
Challenges we ran into
A key challenge was multithreading: we had to run the video and audio feeds simultaneously while also processing their data. On a single thread, we would have had to pause one feed to service the other, so each feed needed its own thread.
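The threading pattern above can be sketched with the standard library: each feed runs on its own thread and pushes events into a shared queue, so neither capture loop blocks the other. The capture functions are simplified stand-ins for real webcam and microphone loops:

```python
import queue
import threading

def capture_video(out_q, n):
    # Stand-in for the video feed; real code would grab webcam frames.
    for i in range(n):
        out_q.put(("video", f"frame-{i}"))

def capture_audio(out_q, n):
    # Stand-in for the audio feed; real code would stream mic chunks.
    for i in range(n):
        out_q.put(("audio", f"chunk-{i}"))

def run_pipeline(n=5):
    # One thread per feed; a thread-safe queue collects both streams
    # so the main thread can process them without pausing either feed.
    q = queue.Queue()
    threads = [
        threading.Thread(target=capture_video, args=(q, n)),
        threading.Thread(target=capture_audio, args=(q, n)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    events = []
    while not q.empty():
        events.append(q.get())
    return events
```

In a live version the main thread would consume from the queue continuously rather than joining, but the structure, producer threads feeding one queue, is the same.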
Accomplishments that we're proud of
The things we're proudest of are implementing multithreading and integrating the various services.
What's next for The Mind Reader
We would like to implement haptic feedback, so that users could be instantly notified of the emotions of the people they're speaking to. We'd also like to build a mobile app, making the program portable and accessible to as many people as possible.