AW TUNES

Don't you hate it when you lift your hand off the wheel to change a song and your car goes crashing into the car next to you, all because you wanted to skip that Cardi B song? Well, me too.

What we learned: how to run and use the following - OpenCV, Python, the imutils and dlib facial-landmark data, the Python face_recognition library, and Spotipy (a Python client for the Spotify API).

How we built it:

  • Implemented the live video stream feature using OpenCV
  • Ran the face_recognition library on the live video stream by running the Python script from the terminal
  • With the stream running, connected the imutils and dlib face data and pinpointed the mouth region using the landmark array data from each library
  • Programmed conditional statements to determine whether the mouth was open or closed
  • Mapped the time and length of each mouth-open event to output commands
  • Imported the Python library Spotipy, which let us request authorization for Spotify's API endpoints
  • Called Spotipy functions inside the corresponding conditional statements so each ran under the right conditions
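The mouth-open check above can be sketched as a small pure function. This is a minimal sketch, assuming dlib's standard 68-point facial landmarks, where points 48–67 cover the mouth (so index 0 below is landmark 48); the threshold value is a hypothetical starting point, not the one we tuned.

```python
from math import dist

def mouth_aspect_ratio(mouth):
    """mouth: 20 (x, y) points, dlib landmarks 48-67 in order.

    Returns the ratio of the average vertical inner-lip gap to the
    horizontal mouth width; a big ratio means an open mouth.
    """
    # vertical gaps between inner-lip landmark pairs (61-67, 62-66, 63-65)
    a = dist(mouth[13], mouth[19])
    b = dist(mouth[14], mouth[18])
    c = dist(mouth[15], mouth[17])
    # horizontal distance between the mouth corners (landmarks 48 and 54)
    d = dist(mouth[0], mouth[6])
    return (a + b + c) / (3.0 * d)

MAR_THRESHOLD = 0.6  # hypothetical cutoff; tune against your own face

def mouth_is_open(mouth):
    return mouth_aspect_ratio(mouth) > MAR_THRESHOLD
```

In the real loop, each frame from OpenCV's `VideoCapture` is run through dlib's detector and shape predictor, the 20 mouth points are sliced out of the 68 landmarks, and `mouth_is_open` drives the conditionals.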

Problems we faced:

  • Getting OpenCV to run from the terminal
  • Producing a specific output based on how long the mouth was open (it was hard to write a condition that triggered one output without first falling through the earlier if statements)
  • Being unable to get authorization to access Spotify's API, which blocked the functions we needed (aka getting Spotify to work)
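The fall-through problem is easier to avoid if the duration checks run from longest hold to shortest, so each branch returns before the earlier conditions can fire. A minimal sketch with hypothetical thresholds and command names (the real Spotipy calls, e.g. `sp.pause_playback()` and `sp.next_track()`, would go where the comments are):

```python
def command_for(open_duration):
    """Map how long the mouth was held open (seconds) to a player command.

    Checking the longest duration first means a 3-second hold never
    accidentally matches the shorter-duration branches above it.
    """
    if open_duration >= 2.0:
        return "pause"   # long hold -> sp.pause_playback()
    if open_duration >= 0.8:
        return "next"    # medium hold -> sp.next_track()
    if open_duration >= 0.2:
        return "play"    # quick open -> sp.start_playback()
    return None          # too brief: treat as noise, do nothing
```

The thresholds and the gesture-to-command mapping here are illustrative; the real values come from experimenting in front of the camera.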

Built With

  • OpenCV
  • Python
  • imutils
  • dlib
  • face_recognition
  • Spotipy
