Inspiration

While people in developed regions use $300+ smartwatches to track their heart rate every day, billions in underserved communities are disproportionately affected by chronic diseases like hypertension and diabetes yet lack even a basic heart rate monitor. We wanted to bridge this "digital health divide" by turning any smartphone or laptop into a clinical-grade diagnostic tool.

What it does

Pulsera is a "Zero-Hardware" health guardian. It uses rPPG (remote photoplethysmography) to extract vitals (Heart Rate and HRV) purely through video analysis of a user's face via a standard webcam. Beyond just data, Pulsera features a voice-interactive AI nurse that talks to patients in real-time. It correlates physiological data with subjective feelings to identify whether an abnormality is just a "morning coffee spike" or a "medical red flag," providing proactive intervention before a hospitalization becomes necessary.

How we built it

  • Frontend: Next.js and Tailwind CSS for a responsive, accessible dashboard.
  • Computer Vision: OpenCV-based rPPG algorithms to detect micro-fluctuations in skin color corresponding to blood flow.
  • Backend: Python (FastAPI) and MongoDB to handle real-time data streaming and historical trend storage.
  • Intelligence: Gemini for the "Logic Engine" and ElevenLabs for life-like, empathetic voice synthesis.
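The rPPG idea above can be sketched in a few lines: average the green channel of the skin region per frame, then find the dominant frequency in the plausible pulse band. This is a minimal illustration (the function name and band limits are our own, not Pulsera's actual implementation), assuming NumPy is available:

```python
import numpy as np

def estimate_bpm(green_means: np.ndarray, fps: float) -> float:
    """Estimate heart rate (BPM) from a per-frame mean green-channel trace.

    rPPG relies on tiny periodic colour changes in skin caused by blood
    flow; the strongest frequency in the 0.7-4 Hz band (42-240 BPM) is
    taken as the pulse.
    """
    signal = green_means - green_means.mean()      # remove the DC offset
    spectrum = np.abs(np.fft.rfft(signal))         # magnitude spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)         # plausible pulse range
    peak = freqs[band][np.argmax(spectrum[band])]  # dominant frequency
    return peak * 60.0                             # Hz -> beats per minute

# Synthetic 10-second trace at 30 fps containing a 1.2 Hz (72 BPM) pulse
fps = 30.0
t = np.arange(0, 10, 1 / fps)
trace = 100 + 0.5 * np.sin(2 * np.pi * 1.2 * t)
print(round(estimate_bpm(trace, fps)))  # → 72
```

A real pipeline would add detrending, bandpass filtering, and motion rejection, but the FFT-peak step is the core of turning pixels into a pulse.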

Challenges we ran into

  • The Latency Gap: rPPG requires a 5–10 second buffer to calculate accurate vitals. We solved this by designing a conversational "Icebreaker" phase where the AI engages the user, making the wait feel like a natural part of a checkup.
  • Environmental Noise: Varying lighting conditions and head movements can interfere with video vitals. We implemented ROI (Region of Interest) tracking on the forehead to stabilize readings.
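The forehead-ROI stabilization can be pictured as deriving a small, hair-free box from whatever face bounding box the detector returns each frame. The proportions below are illustrative assumptions, not Pulsera's tuned values:

```python
def forehead_roi(face_box, shrink=0.5, height_frac=0.25):
    """Given a detected face bounding box (x, y, w, h), return a smaller
    box over the forehead. Sampling this stable, skin-only region keeps
    the rPPG signal consistent under small head movements.
    """
    x, y, w, h = face_box
    roi_w = int(w * shrink)          # narrow the box to avoid hair/edges
    roi_h = int(h * height_frac)     # take only a top slice of the face
    roi_x = x + (w - roi_w) // 2     # keep it horizontally centered
    roi_y = y + int(h * 0.10)        # start just below the hairline
    return roi_x, roi_y, roi_w, roi_h

print(forehead_roi((100, 50, 200, 200)))  # → (150, 70, 100, 50)
```

Because the ROI is defined relative to the tracked face box, it follows the head frame to frame instead of sampling fixed pixels, which is what dampens the motion noise.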

Accomplishments that we're proud of

  • Achieving "Zero-Hardware" monitoring that removes the $300+ entry barrier for medical wearables.
  • Creating an AI agent that feels like a "human caregiver" rather than a data entry form.
  • Building a functional pipeline where voice-to-text, video-to-vitals, and text-to-speech happen simultaneously.

What we learned

We learned that health equity isn't just about providing data; it's about providing context. A heart rate of 110 BPM means nothing without knowing the user just climbed a flight of stairs. We also learned that voice is the most natural interface for the "vulnerable populations" we aim to serve, particularly the elderly or those with limited digital literacy.

What's next for Pulsera

  • Offline Mode: Edge-based processing for regions with spotty internet.
  • Predictive Trends: Using historical data to flag "decompensation" cycles days before they happen.
  • Clinician Dashboard: Connecting high-risk "Red Flag" events directly to local health coordinators or emergency contacts.
