Getting outside on a bike and onto the trails shouldn’t be intimidating. Whether you’re a new rider eyeing your first trail, or a seasoned pro hunting for that perfect golden-hour descent, it helps to know what you’re getting into.
We built TrailSense because trail maps and star ratings only go so far. You can't see on a map where you’ll hit that unexpected rock garden, whether a climb really levels off after the switchbacks, or how exposed the ridgeline feels when the wind picks up. TrailSense closes that gap: it lets you see and feel a trail before you ever show up - so you can ride with confidence, not hesitation.
What it does
TrailSense is a natural language video search engine for mountain biking trails. Riders describe the vibe they’re chasing, like "fast desert trail with berms", and TrailSense returns the best-matching trail closest to them, along with related info such as a difficulty rating, terrain type (e.g., "downhill"), and location.
Data pipeline:
Ingestion
- Footage is uploaded to object storage.
- A webhook triggers Marengo’s /videos endpoint; the service returns a video_id.
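As a rough sketch of this step, here is a Flask webhook that receives the object-storage notification and forwards a public video URL to the /videos route named above. The base URL, payload fields, environment variable names, and response shape are illustrative assumptions, not the exact TwelveLabs API contract:

```python
# Minimal sketch of the ingestion webhook (Flask). Base URL, payload shape,
# and env var names are assumptions for illustration.
import os
import requests
from flask import Flask, request, jsonify

app = Flask(__name__)
TL_API = "https://api.twelvelabs.io/v1.2"  # assumed base URL
HEADERS = {"x-api-key": os.environ["TWELVELABS_API_KEY"]}

@app.route("/hooks/new-footage", methods=["POST"])
def new_footage():
    # Object storage notifies us with a public URL to the uploaded clip.
    video_url = request.json["video_url"]
    resp = requests.post(
        f"{TL_API}/videos",  # route named in the pipeline above
        headers=HEADERS,
        json={"index_id": os.environ["TL_INDEX_ID"], "video_url": video_url},
        timeout=30,
    )
    resp.raise_for_status()
    # Marengo hands back a video_id we persist for later search/summarize calls.
    return jsonify({"video_id": resp.json()["_id"]})
```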
Feature Extraction & Indexing
- Marengo auto-derives frame-level and scene-level embeddings.
- A TwelveLabs webhook sends a notification when indexing is complete.
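A minimal sketch of the completion webhook, assuming a JSON payload with status and video_id fields (the real TwelveLabs webhook schema may differ):

```python
# Sketch of the indexing-complete webhook. The payload fields ("status",
# "video_id") are assumptions about the notification schema.
from flask import Flask, request

app = Flask(__name__)
READY_VIDEOS = set()  # in-memory stand-in for a real datastore

@app.route("/hooks/twelvelabs", methods=["POST"])
def indexing_done():
    event = request.json
    if event.get("status") == "ready":
        # Mark the clip searchable only once its embeddings exist.
        READY_VIDEOS.add(event["video_id"])
    return "", 204
```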
Semantic Retrieval
- A user prompt such as “fast desert trail with berms” is embedded via Marengo’s /search/semantic route.
- Cosine-nearest neighbors yield timestamped clips plus confidence scores.
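A hedged sketch of the retrieval call against the /search/semantic route named above; the request and response field names are assumptions based on this description rather than the exact API contract:

```python
# Sketch of the retrieval step. The /search/semantic route comes from the
# pipeline description; request/response field names are assumptions.
import os
import requests

TL_API = "https://api.twelvelabs.io/v1.2"  # assumed base URL
HEADERS = {"x-api-key": os.environ["TWELVELABS_API_KEY"]}

def search_trails(prompt: str, limit: int = 5):
    """Embed the rider's prompt and return the top cosine-ranked clips."""
    resp = requests.post(
        f"{TL_API}/search/semantic",
        headers=HEADERS,
        json={
            "index_id": os.environ["TL_INDEX_ID"],
            "query_text": prompt,
            "page_limit": limit,
        },
        timeout=30,
    )
    resp.raise_for_status()
    # Each hit carries a video_id, a start/end timestamp and a confidence score.
    return [
        {
            "video_id": hit["video_id"],
            "start": hit["start"],
            "end": hit["end"],
            "confidence": hit["confidence"],
        }
        for hit in resp.json()["data"]
    ]

# e.g. search_trails("fast desert trail with berms")
```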
Summarization & Scoring
- Pegasus’ /generate/summary endpoint returns:
- summary - abstractive text (40ish tokens)
- metadata.difficulty - normalized 0–10 score
- GPS coordinates are extracted from the uploaded footage’s EXIF metadata.
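Roughly, the scoring step looks like the sketch below: one call to the /generate/summary route described above, plus GPS extraction from the footage’s metadata (the exiftool CLI is shown here as one way to read those tags; all field names are assumptions):

```python
# Sketch of the summarization/scoring step. The /generate/summary route and
# the summary / metadata.difficulty fields come from the description above;
# reading GPS tags via the exiftool CLI is an illustrative substitution.
import json
import os
import subprocess
import requests

TL_API = "https://api.twelvelabs.io/v1.2"  # assumed base URL
HEADERS = {"x-api-key": os.environ["TWELVELABS_API_KEY"]}

def summarize_clip(video_id: str) -> dict:
    resp = requests.post(
        f"{TL_API}/generate/summary",
        headers=HEADERS,
        json={"video_id": video_id},
        timeout=60,
    )
    resp.raise_for_status()
    body = resp.json()
    return {
        "summary": body["summary"],                    # ~40-token abstract
        "difficulty": body["metadata"]["difficulty"],  # normalized 0-10 score
    }

def extract_coordinates(path: str):
    """Pull GPS lat/lon from the footage's metadata with exiftool."""
    out = subprocess.run(
        ["exiftool", "-json", "-n", "-GPSLatitude", "-GPSLongitude", path],
        capture_output=True, text=True, check=True,
    )
    tags = json.loads(out.stdout)[0]
    if "GPSLatitude" in tags and "GPSLongitude" in tags:
        return tags["GPSLatitude"], tags["GPSLongitude"]
    return None
```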
Conversational Continuation
- The clip summary, terrain tag and difficulty score are injected into a Gemini system prompt.
- Gemini produces context-aware answers, enabling natural follow-up questions.
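A minimal sketch of the Gemini hand-off using the google-generativeai SDK; the model name and prompt wording are assumptions, the point is just how the clip context gets injected as a system prompt:

```python
# Sketch of the conversational layer. Model name and prompt wording are
# assumptions; the clip summary, terrain tag and difficulty are injected
# into the system instruction so follow-ups stay grounded in that clip.
import os
import google.generativeai as genai

genai.configure(api_key=os.environ["GEMINI_API_KEY"])

def trail_chat(summary: str, terrain: str, difficulty: float):
    system_prompt = (
        "You are TrailSense, a mountain-bike trail assistant.\n"
        f"Clip summary: {summary}\n"
        f"Terrain: {terrain}\n"
        f"Difficulty (0-10): {difficulty}\n"
        "Answer the rider's follow-up questions using only this context."
    )
    model = genai.GenerativeModel(
        "gemini-1.5-flash",  # assumed model name
        system_instruction=system_prompt,
    )
    return model.start_chat()  # supports multi-turn follow-ups

# chat = trail_chat(summary, "downhill", 6.5)
# print(chat.send_message("How technical is the rock garden?").text)
```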
It’s built for both discovery and contribution: a crowdsourced footage model lets anyone upload their gnarly GoPro footage, turning it into searchable trail insights for the entire community.
Challenges we ran into
API rate limiting while prototyping was the main issue we ran into, since we ran a lot of tests to explore what Marengo and Pegasus could do, like checking how accurate the timestamps were when we asked for a certain point in a video (very accurate, it turned out), or asking for a mountain trail with specific features like a steep jump or a beautiful spot to watch the sunset. Figuring out the TwelveLabs Python library and the data pipeline was also tricky. We spent a while working out how to index a video once and save its video_id for reuse across multiple prompts (see the sketch below), but we got it working. It also took a while to figure out which URLs the API would accept: we initially thought we could upload any public URL, like a YouTube video, but realized that was unrealistic and used videos from other sources instead.
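The index-once fix boiled down to a small cache keyed by footage URL; a hypothetical sketch (the cache file and the upload helper are illustrative, not our exact code):

```python
# Sketch of the "index once, reuse the video_id" fix described above.
# The cache file name and upload_to_marengo() helper are hypothetical.
import json
from pathlib import Path

CACHE = Path("video_ids.json")

def get_video_id(footage_url: str, upload_to_marengo) -> str:
    """Return a cached video_id, indexing the clip only on first sight."""
    cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    if footage_url not in cache:
        cache[footage_url] = upload_to_marengo(footage_url)  # one-time index
        CACHE.write_text(json.dumps(cache, indent=2))
    return cache[footage_url]
```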
Accomplishments that we're proud of
- Built a fully working semantic video search engine for MTB trails
- Integrated multiple AI APIs (Twelve + Gemini) into one cohesive workflow
- Created timestamped trail summaries and difficulty ratings from raw video
- Achieved meaningful results from natural language prompts like "foggy forest trail"
- Designed a beautiful UI
What we learned
- Natural language prompts are a new kind of UX, and this is probably only the beginning.
- Semantic search has many applications beyond security footage and hunting for specific moments.
What's next for TrailSense
- Train a supervised learning model to predict trail difficulty using a combination of computer vision outputs (e.g., terrain segmentation, rider posture estimation) and metadata (speed, incline, impact events).
- Integrate structured trail data via APIs from platforms like Trailforks or MTB Project to provide GPS coordinates, elevation profiles, and POIs alongside each clip.
- Integrate cloud storage (an S3 bucket) for uploaded footage.
- Transition to a mobile-first architecture, supporting on-device query input, contextual prompts, and geolocation-based trail discovery.
Built With
- flask
- gemini
- google-maps
- marengo
- pegasus
- react.js
- twelvelabs

