TrueLight - AI-Powered Navigation Assistant for Colorblind Users
Inspiration
While browsing r/colorblind on Reddit, we found countless posts from colorblind people describing their struggles with everyday navigation. Roughly 300 million people worldwide are colorblind (about 8% of men and 0.5% of women), yet there are virtually no real-time assistive tools for navigating safely. Colorblind glasses cost $300+ and don't work for everyone. We realized the smartphone camera everyone already carries could be the solution.
What it does
TrueLight provides real-time visual and audio alerts for colorblind users across all modes of transportation:
- Detects objects with YOLOv3-tiny (80 COCO classes) and announces hazards
- Analyzes colors with OpenCV HSV detection
- Adapts to 9 vision types - protanopia, deuteranopia, tritanopia, achromatopsia, low vision, and more
- Tests color vision with built-in Ishihara assessment
- Tracks moving objects with animated lock-on brackets
- Adjusts by activity - detection intervals of 5s (walking), 3s (biking), 1.5s (driving), 2s (passenger)
- Prioritizes by urgency - low vision mode uses size/proximity instead of color
- Natural voice - ElevenLabs TTS for human-like audio alerts
- Adaptive UI - never uses colors you can't see
How we built it
Mobile (React Native + Expo)
- Real-time camera processing with adaptive frame rates
- Animated bounding box overlays
- Zustand state management
- ElevenLabs natural voice with expo-av audio playback
- Ishihara color vision test
Detection (Python + FastAPI)
- YOLOv3-tiny object detection (10% confidence threshold)
- OpenCV HSV color analysis (30+ color shades)
- Transport mode-aware thresholds
- Low vision proximity-based prioritization
- Always-detect fallback (YOLO → color regions; see the sketch after this list)
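To make the flow concrete, here is a minimal sketch of the detection step: YOLOv3-tiny run through OpenCV's DNN module with the deliberately low 10% confidence threshold, falling back to color regions when nothing is found. The file names and the `detect_color_regions` helper are assumptions (the helper is sketched under the color-accuracy challenge below); the real service wraps this in a FastAPI endpoint.

```python
# Sketch of the YOLO -> color-region fallback. Assumes yolov3-tiny.cfg,
# yolov3-tiny.weights and coco.names are on disk; detect_color_regions()
# is the hypothetical fallback helper sketched further down.
import cv2
import numpy as np

CONF_THRESHOLD = 0.10   # deliberately low so faint/partial objects still surface
NMS_THRESHOLD = 0.40

net = cv2.dnn.readNetFromDarknet("yolov3-tiny.cfg", "yolov3-tiny.weights")
CLASSES = open("coco.names").read().strip().split("\n")

def detect_objects(frame):
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outputs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, confidences, class_ids = [], [], []
    for output in outputs:
        for row in output:
            scores = row[5:]
            class_id = int(np.argmax(scores))
            confidence = float(scores[class_id])
            if confidence < CONF_THRESHOLD:
                continue
            cx, cy, bw, bh = row[:4] * np.array([w, h, w, h])
            boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
            confidences.append(confidence)
            class_ids.append(class_id)

    keep = cv2.dnn.NMSBoxes(boxes, confidences, CONF_THRESHOLD, NMS_THRESHOLD)
    detections = [
        {"label": CLASSES[class_ids[i]], "confidence": confidences[i], "box": boxes[i]}
        for i in np.array(keep, dtype=int).flatten()
    ]

    # "Always detect something": if YOLO comes back empty, fall back to
    # coarse HSV color regions so the user still gets feedback every cycle.
    if not detections:
        detections = detect_color_regions(frame)  # hypothetical helper, sketched below
    return detections
```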
Backend (Next.js)
- API proxy between mobile and Python
- ElevenLabs integration
Challenges
Color accuracy: Measured HSV values shift with lighting, so a small set of fixed ranges misclassifies real-world colors. We expanded from 7 to 30+ named color ranges and built a fallback system that always detects something.
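A trimmed sketch of what that classification and fallback look like. The hue/saturation/value bounds below are illustrative assumptions, not our actual 30+-entry table; `detect_color_regions` is the fallback helper referenced in the detection sketch above.

```python
# Trimmed illustration of HSV range matching; the real table has 30+ entries.
# OpenCV hue runs 0-179, so red needs two ranges that wrap around 0.
import cv2
import numpy as np

HSV_RANGES = {
    "red":    [((0, 120, 70), (10, 255, 255)), ((170, 120, 70), (179, 255, 255))],
    "orange": [((11, 120, 70), (25, 255, 255))],
    "yellow": [((26, 100, 100), (34, 255, 255))],
    "green":  [((35, 60, 60), (85, 255, 255))],
    "teal":   [((86, 60, 60), (100, 255, 255))],
    "blue":   [((101, 80, 60), (130, 255, 255))],
    "violet": [((131, 60, 60), (160, 255, 255))],
    "brown":  [((10, 80, 20), (25, 255, 150))],   # dark, low-value orange reads as brown
}

def detect_color_regions(frame, min_area_frac=0.01):
    """Fallback detector: return dominant named colors as pseudo-detections."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    frame_area = frame.shape[0] * frame.shape[1]
    regions = []
    for name, ranges in HSV_RANGES.items():
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in ranges:
            mask |= cv2.inRange(hsv, np.array(lo), np.array(hi))
        coverage = cv2.countNonZero(mask) / frame_area
        if coverage >= min_area_frac:
            regions.append({"label": f"{name} region", "coverage": coverage})
    # Largest regions first, so the announcer can report the most prominent color.
    return sorted(regions, key=lambda r: r["coverage"], reverse=True)
```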
Real-time performance: We had to balance detection accuracy against mobile frame rate and battery. Transport mode-aware intervals adapt from 1.5s between scans while driving to 5s while walking.
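Roughly how the mode profiles are wired, as a sketch: the scan intervals are the numbers above, while the per-mode confidence floor and urgency cutoff fields are illustrative assumptions.

```python
# Transport-mode profiles. Scan intervals match the write-up; the per-mode
# confidence and urgency-area fields are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class ModeProfile:
    scan_interval_s: float   # how often the phone sends a frame for detection
    min_confidence: float    # per-mode floor on YOLO confidence
    urgent_area_frac: float  # bounding-box area fraction that counts as "close"

MODE_PROFILES = {
    "walking":   ModeProfile(scan_interval_s=5.0, min_confidence=0.10, urgent_area_frac=0.10),
    "biking":    ModeProfile(scan_interval_s=3.0, min_confidence=0.15, urgent_area_frac=0.08),
    "driving":   ModeProfile(scan_interval_s=1.5, min_confidence=0.20, urgent_area_frac=0.05),
    "passenger": ModeProfile(scan_interval_s=2.0, min_confidence=0.15, urgent_area_frac=0.06),
}

def profile_for(mode: str) -> ModeProfile:
    # Unknown modes fall back to the most conservative (walking) profile.
    return MODE_PROFILES.get(mode, MODE_PROFILES["walking"])
```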
Universal accessibility: Supporting 9 colorblindness types required adaptive color schemes that never use colors the user can't distinguish.
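The idea, sketched for a subset of the nine supported types: every alert draws only from a palette that vision type can reliably separate, plus luminance contrast. The hex values are illustrative assumptions, not the app's actual theme (which lives in the React Native layer).

```python
# Sketch of an accessible-palette lookup keyed by vision type.
# Hex values are illustrative assumptions.
ACCESSIBLE_PALETTES = {
    # Red-green deficiencies: lean on blue vs. yellow plus luminance contrast.
    "protanopia":    {"urgent": "#0072B2", "caution": "#E6C200", "info": "#FFFFFF"},
    "deuteranopia":  {"urgent": "#0072B2", "caution": "#E6C200", "info": "#FFFFFF"},
    # Blue-yellow deficiency: lean on vermilion vs. sky blue.
    "tritanopia":    {"urgent": "#D55E00", "caution": "#56B4E9", "info": "#FFFFFF"},
    # No color perception: luminance-only palette.
    "achromatopsia": {"urgent": "#FFFFFF", "caution": "#AAAAAA", "info": "#666666"},
    # Low vision: maximum-contrast palette, paired with larger text in the UI.
    "low_vision":    {"urgent": "#FFFF00", "caution": "#FFFFFF", "info": "#BBBBBB"},
}

def palette_for(vision_type: str) -> dict:
    # Default to the high-contrast low-vision palette for unknown types.
    return ACCESSIBLE_PALETTES.get(vision_type, ACCESSIBLE_PALETTES["low_vision"])
```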
Low vision support: For low vision users, proximity matters more than color, so we built size-based prioritization where any object covering more than 10% of the frame triggers an urgent alert.
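A sketch of that prioritization: the 10% cutoff comes from the paragraph above; the intermediate "caution" tier and the left/ahead/right cue thresholds are illustrative assumptions.

```python
# Low-vision prioritization sketch: urgency comes from how much of the frame a
# detection fills (a proxy for proximity), not from its color.
def urgency_from_size(box, frame_w, frame_h):
    x, y, bw, bh = box
    area_frac = (bw * bh) / float(frame_w * frame_h)
    if area_frac >= 0.10:   # object fills >10% of the frame -> treat as very close
        return "urgent"
    if area_frac >= 0.04:   # assumed intermediate tier
        return "caution"
    return "info"

def direction_cue(box, frame_w):
    """Rough left/ahead/right cue from the box center, for the audio alert."""
    x, y, bw, bh = box
    cx = x + bw / 2
    if cx < frame_w / 3:
        return "to your left"
    if cx > 2 * frame_w / 3:
        return "to your right"
    return "ahead"
```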
ElevenLabs audio playback: Streaming MP3 from the API required expo-av integration with proper audio session management and cleanup.
Accomplishments
- Complete end-to-end ML pipeline with mobile app, proxy, and detection service
- 30+ color detection including subtle shades (browns, violets, teal, rust, olive)
- Transport-aware sensitivity adapting to user activity
- Low vision proximity alerts with directional cues
- ElevenLabs natural voice fully integrated with audio playback
- Ishihara test with manual override
- Animated targeting brackets tracking moving objects
What we learned
- Colorblind community is underserved - most solutions are expensive glasses, not accessible software
- HSV beats RGB for color classification under varied lighting (see the toy check after this list)
- Real-time mobile ML is achievable with YOLOv3-tiny at 10% confidence
- Low vision users need proximity/urgency over color-based prioritization
- "Always detect something" philosophy ensures continuous feedback
What's next
- Apple CarPlay / Android Auto integration
- Offline mode with on-device models
- Brake light detection via red-region analysis (a possible approach is sketched after this list)
- Stop sign detection via shape recognition
- Emergency vehicle flashing light detection
- Haptic feedback patterns
- Community-driven color calibration
- Multi-language support
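For the brake-light item above, one possible starting point rather than anything that ships today: look for bright, saturated red inside a detected vehicle's bounding box. All thresholds below are illustrative assumptions.

```python
# Possible brake-light heuristic: a large enough fraction of bright, saturated
# red pixels inside a vehicle crop suggests lit brake lamps. Thresholds are
# illustrative assumptions.
import cv2

def looks_like_brake_lights(frame_bgr, vehicle_box, min_red_frac=0.02):
    x, y, w, h = vehicle_box
    roi = frame_bgr[max(y, 0):y + h, max(x, 0):x + w]
    if roi.size == 0:
        return False
    hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0 in OpenCV, so combine two ranges; require high V (lit lamps are bright).
    red = cv2.inRange(hsv, (0, 120, 180), (10, 255, 255)) | \
          cv2.inRange(hsv, (170, 120, 180), (179, 255, 255))
    red_frac = cv2.countNonZero(red) / float(roi.shape[0] * roi.shape[1])
    return red_frac >= min_red_frac
```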
Built With
- elevenlabs
- expo-speech
- expo.io
- fastapi
- nextjs
- opencv
- python
- react-native
- typescript
- yolov3
- zustand