Inspiration
Since COVID-19 hit and everyone got stuck inside, we've been looking for ways to socialize, stay active, and generally boost our spirits. We came up with the idea for this game because we wanted to make something that would help people stay connected, let friends compete with one another, and generally have a good time exercising, moving, and grooving to music!
What it does
If you've ever played Just Dance, you will likely be familiar with the main functionality of MEWSdance. While you dance in front of your webcam along with our prerecorded dance instructor, MEWSdance tracks your movements and compares them against our machine learning model to see how accurate your dance moves are based on timing and positioning. It also listens to your microphone for the more audible dance moves, such as clapping. You are scored on accuracy and consistency, which is reflected both in the real-time feedback ratings of “Perfect”, “Good”, “OK”, and “Miss” and in a cumulative numerical score.
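The rating-plus-score idea above can be sketched as a small function. This is a hypothetical illustration, not MEWSdance's actual code: the confidence/timing thresholds and point values are made up, but the shape (pose-match confidence plus timing error in, one of the four ratings out) matches the description.

```javascript
// Hypothetical sketch of the feedback ratings: combine a pose-match
// confidence (0..1 from the classifier) with the timing error (ms between
// the expected beat and the detected move). Thresholds are illustrative.
function rateMove(confidence, timingErrorMs) {
  const offBeat = Math.abs(timingErrorMs);
  if (confidence >= 0.9 && offBeat <= 100) return "Perfect";
  if (confidence >= 0.75 && offBeat <= 250) return "Good";
  if (confidence >= 0.5 && offBeat <= 500) return "OK";
  return "Miss";
}

// A cumulative numerical score could then just sum points per rating.
const POINTS = { Perfect: 100, Good: 60, OK: 30, Miss: 0 };
function scoreRun(ratings) {
  return ratings.reduce((total, r) => total + POINTS[r], 0);
}
```

For example, a tightly timed, well-matched move rates "Perfect", while a late or sloppy one degrades toward "Miss" and contributes fewer points to the running total.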
How we built it
The two main functionalities of MEWSdance each use a different machine learning model: for pose estimation, we trained the TensorFlow model PoseNet using Google’s Teachable Machine, and for audio recognition, we used the TensorFlow model YAMNet packaged into an EdgeML Rune. The web application was built with React (and various related libraries) and styled with SCSS.
Challenges we ran into
The main issue we ran into was figuring out the timing of each dance move and how to score the player accordingly. Because of how React saves state, some of the values in our objects weren’t persisting correctly across different parts of the program, which led to mistimings and generally incorrect scoring behaviour. We overcame this by changing the kinds of objects we used (e.g., plain local objects inside the function instead of React hooks). We also had to handle timing in the audio path, since there is a slight delay on the request that must be sent to get the ML response; to work around it, we restructured how the timings of the moves and sounds were laid out.
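The audio-delay workaround can be sketched as follows. This is an illustrative sketch rather than the actual MEWSdance code: the latency constant, the `kind` field, and the `compensateTimings` helper are all assumptions, but the idea is the one described above, shifting the expected timestamps of audio-scored moves so that delayed classifier responses still line up with the beat.

```javascript
// Assumed round-trip delay for the audio ML request; the value is made up.
const AUDIO_LATENCY_MS = 300;

// Shift only the audio-scored moves (e.g. claps); pose-scored moves
// are checked directly from the webcam, so they stay on the beat.
function compensateTimings(moves) {
  return moves.map((move) =>
    move.kind === "audio"
      ? { ...move, expectedAtMs: move.expectedAtMs + AUDIO_LATENCY_MS }
      : move
  );
}
```

With this, a clap nominally expected at 1000 ms is instead matched against classifier responses arriving around 1300 ms.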
Accomplishments that we're proud of
After a lot of trial and error, we got the Rune working (with a decent amount of guidance from the sponsor mentors) and integrated it well into the application. We also managed to work through most of the other challenges we came across!
What we learned
This project came with a lot of new technologies and concepts for all of us: none of us had worked with machine learning before, let alone machine learning tools. In addition, we learned to use a wider variety of libraries, broadening our knowledge of the tools we already knew.
What's next for MEWSdance
There are many next steps we could take with MEWSdance. One of our main ideas from the start that we didn’t get around to implementing was a multiplayer feature, where two players dance against each other to see who can earn the higher score. Additionally, we only have one song in our library right now, but we would like to support a wide variety of songs by abstracting the timing data, instead of hand-calculating specific timings as we did for our one song. Finally, we want to make the captions and dance moves easier to follow along with, to help players who may not know the dance or lyrics.
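One way the timing abstraction could look is a data-driven song format. This is purely a hypothetical sketch of the future direction described above: the `bpm`/`offsetMs`/`beat` fields and the `buildTimeline` helper are invented here, but they show how absolute timestamps could be derived per song instead of hand-calculated.

```javascript
// Derive absolute move timestamps from a song's tempo and per-move beat
// numbers, instead of hard-coding millisecond timings for each song.
function buildTimeline(song) {
  const msPerBeat = 60000 / song.bpm;
  return song.moves.map((move) => ({
    name: move.name,
    atMs: Math.round(song.offsetMs + move.beat * msPerBeat),
  }));
}

// Example song definition (names and values are made up):
const demoSong = {
  bpm: 120,        // 120 beats per minute -> 500 ms per beat
  offsetMs: 1000,  // lead-in before the first beat
  moves: [
    { name: "clap", beat: 0 },
    { name: "spin", beat: 4 },
  ],
};
```

Adding a new song would then only require its BPM, lead-in offset, and a list of moves with beat numbers, with the concrete timings computed automatically.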