Inspiration

What if we prevented drowsy driving instead of "curing" it?

Our motivation comes from a real-life experience. When Aldán was driving at night from the Spokane airport to Pullman, the trip got lonely and he started to feel dangerously sleepy. Suddenly, the car's sleep alarm beeped so loudly that it startled him and he jerked the steering wheel; he almost caused an accident.

He arrived safely but shaken, and we realized something bigger was wrong: for the last 15 years, the industry has been trying to "cure" driver sleepiness instead of preventing it.

Waiting to scare a driver who is already falling asleep effectively creates a new hazard.

Every current system waits for the driver to show visible signs of collapse before doing anything. So, in the era of AI, we decided to reinvent the wheel to step in and save lives before the driver ever falls asleep.

The Innovation (What CogPilot Does)

Around 6,400 people die from drowsy driving in the USA each year (NSF, 2023), yet we still rely on technology from 20 years ago. To the best of our knowledge, no system has combined real-time GPS telemetry, voice coherence monitoring, and AI-generated personalized cognitive stimulation into one preventative framework. CogPilot is built on a completely different idea: the right moment to act is not when your eyes are closing, it is well before that, when fatigue is just starting to build.

  • Predictive Hybrid Risk Scoring: Most systems wait for eye closure. CogPilot reverses this logic. It continuously monitors GPS telemetry every five seconds. We track speed, heading, drive duration, and time since the last cognitive engagement. The model evaluates many dimensions at the same time:
    • Driving monotony (low speed variance means highway hypnosis)
    • Time-on-task without mental engagement
    • Road complexity (frequent turns mean you are alert)
    • Circadian vulnerability windows, cumulative driving duration, and traffic density
    • Pre-driving cognitive load synced from the driver’s calendar

In parallel, CogPilot analyzes conversational voice telemetry, such as vocal energy, speech pace, and response latency. All inputs are combined into a dynamic risk score, and when that score crosses configurable thresholds, the system acts.
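As a rough illustration, a hybrid score like the one described above could be blended from normalized features. The feature names, weights, and normalization constants below are our own illustrative assumptions, not CogPilot's calibrated values:

```kotlin
// Hypothetical sketch of a hybrid risk score. All weights and
// normalization constants are illustrative assumptions.
data class TelemetrySnapshot(
    val speedVarianceKph: Double,       // low variance suggests highway hypnosis
    val minutesOnTask: Double,          // continuous driving time
    val minutesSinceEngagement: Double, // time since last cognitive interaction
    val turnsPerKm: Double,             // road complexity proxy
    val hourOfDay: Int,                 // for circadian vulnerability windows
    val voiceEnergy: Double             // 0..1, from conversational telemetry
)

fun riskScore(t: TelemetrySnapshot): Double {
    val monotony = 1.0 - (t.speedVarianceKph / 20.0).coerceAtMost(1.0)
    val timeOnTask = (t.minutesOnTask / 120.0).coerceAtMost(1.0)
    val disengagement = (t.minutesSinceEngagement / 30.0).coerceAtMost(1.0)
    val roadSimplicity = 1.0 - (t.turnsPerKm / 5.0).coerceAtMost(1.0)
    // Assumed circadian low points: late night and early afternoon
    val circadian = if (t.hourOfDay in 0..5 || t.hourOfDay in 14..15) 1.0 else 0.0
    val lowVoice = 1.0 - t.voiceEnergy

    // Weighted blend, clamped to [0, 1]
    val score = 0.25 * monotony + 0.2 * timeOnTask + 0.2 * disengagement +
            0.1 * roadSimplicity + 0.15 * circadian + 0.1 * lowVoice
    return score.coerceIn(0.0, 1.0)
}
```

The key property is that no single signal triggers an intervention on its own; monotony, time on task, and circadian timing compound each other.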

  • Adaptive AI Intervention: Instead of a generic beep, CogPilot starts an intelligent interaction. In moderate-risk states, it may simply ask how the drive is going. In higher-risk states, it calls Snowflake Arctic through Cortex to generate a personalized cognitive stimulus tailored to the driver’s stored interests: a debate question, a trivia challenge, or a logic puzzle. The goal is genuine mental engagement. The system uses CORTEX.SENTIMENT() and coherence analysis to check whether alertness has recovered, and the AI can autonomously decide when the brain is active enough and gracefully end the intervention.

  • Intelligent Escalation & Safety Layer: If fatigue continues to grow, CogPilot escalates proportionally. The system can access Google Maps data to tell the driver where the nearest rest area is. Strict cooldown logic prevents over-intervention, so the system remains silent when risk is low.
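The proportional escalation and cooldown behavior described above could be sketched as a small policy class. The risk thresholds, intervention levels, and 180-second cooldown below are hypothetical values for illustration, not CogPilot's actual tuning:

```kotlin
// Illustrative escalation ladder with cooldown suppression.
// Thresholds and cooldown duration are assumed, not production values.
enum class Intervention { NONE, CHECK_IN, COGNITIVE_STIMULUS, REST_STOP_SUGGESTION }

class EscalationPolicy(private val cooldownSeconds: Long = 180) {
    private var lastActionAtSec: Long? = null

    fun decide(risk: Double, nowSec: Long): Intervention {
        // Strict cooldown: stay silent for a while after any intervention
        val last = lastActionAtSec
        if (last != null && nowSec - last < cooldownSeconds) return Intervention.NONE

        val action = when {
            risk >= 0.85 -> Intervention.REST_STOP_SUGGESTION
            risk >= 0.6  -> Intervention.COGNITIVE_STIMULUS
            risk >= 0.4  -> Intervention.CHECK_IN
            else         -> Intervention.NONE
        }
        if (action != Intervention.NONE) lastActionAtSec = nowSec
        return action
    }
}
```

The cooldown check runs before the threshold ladder, which is what keeps the system silent when risk is low and prevents over-intervention right after it has already spoken.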

How we built it (The Full Stack)

CogPilot is a full-stack Android application built in Kotlin with Material Design 3, and it's designed to run natively on Android Auto.

Rather than routing everything through an intermediate backend, the app connects directly to Snowflake via JDBC. This keeps latency low and the architecture lean. Every telemetry point, risk transition, and conversational exchange is logged to structured Snowflake tables for auditability and profile enrichment. ElevenLabs handles text-to-speech, delivering interventions through the car speakers in a natural voice.
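A minimal sketch of this JDBC-direct pattern, assuming a hypothetical telemetry_points table and Snowflake's SNOWFLAKE.CORTEX.SENTIMENT function invoked as plain SQL over the same connection; the table, column names, and schema are our assumptions, not the project's actual layout:

```kotlin
// Hedged sketch of the JDBC-direct path from the app to Snowflake.
// Table and column names are illustrative assumptions.
import java.sql.Connection

const val INSERT_TELEMETRY = """
    INSERT INTO telemetry_points (session_id, ts, speed_kph, heading_deg, risk_score)
    VALUES (?, CURRENT_TIMESTAMP(), ?, ?, ?)
"""

// Cortex can be queried over the same JDBC connection,
// e.g. sentiment scoring of the driver's spoken reply:
const val SENTIMENT_QUERY = "SELECT SNOWFLAKE.CORTEX.SENTIMENT(?) AS s"

fun logTelemetry(
    conn: Connection,
    sessionId: String,
    speedKph: Double,
    headingDeg: Double,
    risk: Double
) {
    conn.prepareStatement(INSERT_TELEMETRY).use { st ->
        st.setString(1, sessionId)
        st.setDouble(2, speedKph)
        st.setDouble(3, headingDeg)
        st.setDouble(4, risk)
        st.executeUpdate()
    }
}
```

Using parameterized statements keeps the write path safe and cheap enough to run on every five-second telemetry tick, and the same connection serves both logging and Cortex calls, which is what makes the intermediate backend unnecessary.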

Challenges we ran into

  • Background Execution: The hardest early challenge was making the app act autonomously from the moment the driver starts moving. We had to handle GPS initialization, risk computation, and Snowflake writes continuously in the background. Building a reliable foreground service that survived Android's battery optimization took a lot of iterations.
  • Data Access: We wanted to train our risk model against real-world fatigue datasets, but most of the relevant ones are restricted or paywalled. This forced us to design the risk model from published literature and calibrate thresholds manually. It worked, but a richer dataset would make the model much more accurate.

Accomplishments that we're proud of

  • It actually works: It's a fully functional Android app that connects directly to Snowflake, computes risk in real time, and generates a personalized AI conversation, all built in a single hackathon. We have no mocked data and no simulated pipeline.
  • Direct JDBC: We're proud of the architecture decision to go JDBC-direct from Android to Snowflake. It simplified the stack a lot and proved that Snowflake can serve as a true real-time backend for mobile applications.
  • Real impact: We turned a frightening moment on a dark road into something that could genuinely help people stay alive.

What we have learned

  • We learned that the hardest part of building a safety system is the judgment layer. Deciding when to intervene, how much to say, and when to stay silent required more careful thinking than any of the technical components. Too little and the system fails its purpose; too much and it becomes a distraction.
  • We also learned that Snowflake is far more versatile than its reputation suggests. Using Cortex for both generation and sentiment analysis within the same data platform, without external API calls, was a genuinely elegant solution.

What's next for CogPilot

The immediate next step is validating the risk model against real driving data. Ideally, we want to partner with a university transportation lab or an automotive OEM to run instrumented trials.

Beyond that, we want to expand the voice coherence layer into a richer attention scoring model, integrate wearable biometric data as an additional signal, and explore a CarPlay version for iOS.

The goal has always been simple: nobody should die on a quiet road because a system waited too long to care.


Annex: The research behind CogPilot

We can look at the numbers to understand why this is so urgent. Every year, about 1.19 million people die worldwide because of car crashes (Soori & Razzaghi, 2024). Driver sleepiness is a massive factor, causing 15% to 30% of all vehicle crashes (Williamson et al., 2011).

Fatigue degrades attention, memory, reaction time, and decision-making. Because of this, tired drivers are almost three times more likely to crash (Hawking & Filtness, 2017). Even though we know this is dangerous, about 70% of drivers admit to driving while sleepy (Hu & Lodewijks, 2020).

Why Current Systems Fail: Academic research and industry have tried to solve driver drowsiness (Mahapatra et al., 2026), but mostly with reactive measures:

  1. Reactive, Static Thresholds: Current systems mostly watch the driver's face against static thresholds, such as head tilt or a drop in the Eye Aspect Ratio (EAR). The big problem is that they cannot model how fatigue grows over time, which means the system acts when your eyes are already closing.
  2. Dangerous and Ineffective Fixes: Because current systems don't warn drivers in advance, people try to stay awake on their own. But research shows that common tricks like rolling down the window, adjusting posture, or listening to loud music don't actually work. In fact, relying on loud music can mask how sleepy drivers truly are, making the danger worse instead of fixing it.
