Inspiration

Sleep deprivation affects an enormous share of our society, especially college students. We often read news stories about young drivers and truckers who have fallen asleep behind the wheel, injuring not only themselves but other drivers on the road as well. According to the National Safety Council, 40% of surveyed drivers admit to having fallen asleep behind the wheel, and driving after 20 hours without sleep is the equivalent of driving with a 0.08% blood-alcohol level. About 90 people die in car crashes every day in the U.S., roughly 3 million are injured each year, and in 2015 about 5,000 people were killed in crashes involving drowsy driving. We believe the DMS can change such heartbreaking statistics and prevent accidents before they happen.

What it does

Project DMS is a system that alerts drivers when their recent driving behavior suggests sleep deprivation. The system detects facial expressions that indicate tiredness, such as yawning, droopy eyes, and tunnel vision. Light sensors mounted at the front of the car near the wheel wells measure when the driver begins to drift out of their lane. This data is processed and analyzed; if the driver appears drowsy, the system plays a loud sound urging them to pull over and rest, and pushes notifications about their driving behavior to their loved ones.
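As a rough illustration of that alerting logic (the thresholds, window size, and names below are hypothetical, not the values used in our prototype), combining the two signal streams might look like:

```python
# Hypothetical sketch of the DMS alert decision: combine recent facial-tiredness
# detections with lane-drift events and alert once both accumulate past a threshold.
from dataclasses import dataclass, field

@dataclass
class DrowsinessMonitor:
    window: int = 30                 # number of recent samples to keep
    face_threshold: int = 5          # tired-face detections needed to alert
    drift_threshold: int = 2         # lane-drift events needed to alert
    face_events: list = field(default_factory=list)
    drift_events: list = field(default_factory=list)

    def record(self, tired_face: bool, lane_drift: bool) -> bool:
        """Record one sample; return True if the driver should be alerted."""
        self.face_events.append(tired_face)
        self.drift_events.append(lane_drift)
        # keep only the most recent `window` samples
        self.face_events = self.face_events[-self.window:]
        self.drift_events = self.drift_events[-self.window:]
        return (sum(self.face_events) >= self.face_threshold
                and sum(self.drift_events) >= self.drift_threshold)

monitor = DrowsinessMonitor()
alert = False
for i in range(10):
    # simulate a drowsy stretch: tired face every sample, a drift every 4th sample
    alert = monitor.record(tired_face=True, lane_drift=(i % 4 == 0)) or alert
print(alert)  # True once both thresholds are crossed
```

A sliding window like this keeps a single yawn or one bumpy lane marking from setting off the alarm.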

How we built it

We used Google Cloud Platform for most of the development. Using the Google Cloud Vision API, DMS takes a photo every 100 ms, and the API processes these images to detect expressions of tiredness. For the sensor data, we used the Google Cloud IoT API. For our initial development, we connected a Grove Light Sensor to an Arduino 101 board to collect the raw readings. This data is transferred to a Raspberry Pi, which uses its WiFi to send it on to GCP. Once all the data from the Vision API and IoT API has been retrieved, we analyze it over time to determine whether the driver is falling asleep.
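To give a flavor of the image-side analysis: Cloud Vision face detection returns facial landmark positions (e.g. the eye boundary points), from which a droopy-eye heuristic can be computed. The sketch below is illustrative rather than our exact pipeline; it takes plain (x, y) tuples standing in for those landmarks so it runs without credentials or a real image, and the 0.18 threshold is a made-up number, not a tuned value.

```python
# Illustrative droopy-eye heuristic over eye-boundary landmark positions,
# such as those returned by Cloud Vision face detection.

def eye_openness(top, bottom, left, right):
    """Ratio of vertical eye opening to horizontal eye width.

    Each argument is an (x, y) landmark position, e.g. the Vision API's
    LEFT_EYE_TOP_BOUNDARY, LEFT_EYE_BOTTOM_BOUNDARY,
    LEFT_EYE_LEFT_CORNER, and LEFT_EYE_RIGHT_CORNER points.
    """
    vertical = abs(top[1] - bottom[1])
    horizontal = abs(right[0] - left[0]) or 1e-6  # avoid division by zero
    return vertical / horizontal

def looks_droopy(openness, threshold=0.18):
    """Flag the eye as droopy when the opening ratio falls below a
    (hypothetical) threshold."""
    return openness < threshold

# Synthetic landmarks: a wide-open eye vs. a nearly closed one.
open_eye = eye_openness(top=(50, 40), bottom=(50, 52), left=(30, 46), right=(70, 46))
droopy_eye = eye_openness(top=(50, 45), bottom=(50, 48), left=(30, 46), right=(70, 46))
print(looks_droopy(open_eye), looks_droopy(droopy_eye))  # False True
```

In the real system a per-frame flag like this would feed the time-series analysis described above rather than trigger anything on its own.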

Challenges we ran into

We had never worked with GCP or computer vision before. We had to read a lot of documentation and articles to understand how the APIs work and how to integrate them with our IoT devices. One issue we ran into was distinguishing between a yawn and a shout. Another was the connection between the Arduino and the Raspberry Pi, for which there was very little documentation and few examples; we still need to make the two devices fully compatible.
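One way to attack the yawn-versus-shout ambiguity is to look at how long the mouth stays open: yawns typically last several seconds while shouts are brief. This is a sketch of that idea, not what we shipped; the 2-second cutoff is a guess.

```python
# Hypothetical heuristic: label a mouth-open episode a yawn only if it is
# sustained; shorter episodes are treated as shouts (or speech).
def classify_mouth_events(mouth_open_samples, sample_period_s=0.1, min_yawn_s=2.0):
    """mouth_open_samples: booleans sampled every sample_period_s (e.g. 100 ms,
    matching the capture rate). Returns one "yawn"/"shout" label per episode."""
    labels = []
    run = 0
    for is_open in mouth_open_samples + [False]:  # sentinel flushes the last run
        if is_open:
            run += 1
        elif run:
            duration = run * sample_period_s
            labels.append("yawn" if duration >= min_yawn_s else "shout")
            run = 0
    return labels

# A brief 0.5 s opening followed by a sustained 2.5 s opening.
samples = [True] * 5 + [False] * 3 + [True] * 25 + [False] * 2
print(classify_mouth_events(samples))  # ['shout', 'yawn']
```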

Accomplishments that we're proud of

Considering the time constraints and our lack of prior experience, we are very proud of how much we finished in 36 hours, even though the project is not yet complete. There is much more to be done, but we are excited to expand our knowledge and continue development on the project.

What we learned

We learned how to use GCP and its helpful cloud computing features. We also learned about image processing, something we had never been exposed to before. Furthermore, we learned how to use small microcontrollers to collect very useful and exciting data.

What's next for Project DMS

First and foremost, we would like to get the IoT devices fully connected and have the data processing pipeline return final results. After that, we would like to expand beyond sleep deprivation and monitor other driver behaviors such as driving under the influence, distracted driving, and reckless driving. We would also like to build a mobile application that notifies loved ones when the driver is at risk of falling asleep. Young new drivers and those who commute long distances, such as truck drivers, are our target audience for deployment. Using their data, we would like to warn these drivers when they reach a stretch of road notorious for being monotonous and prone to drowsiness.

Built With

  • arduino101
  • arduinoide
  • gcp
  • google-cloud
  • google-cloud-sdk
  • google-cloud-vision
  • google-iot
  • grove-light-sensor
  • json
  • python
  • raspberrypi