Inspiration
Every developer remembers the moment — you find a repo you want to contribute to, clone it, open the folder, and then... nothing. Hundreds of files, zero context. You close the tab.
That moment happens millions of times a day. Not from lack of skill — from lack of a guide.
Growing up in India, I've seen brilliant developers from Tier 2 and Tier 3 cities who have everything it takes, but no mentorship network to bridge the gap between wanting to contribute and knowing where to start. OnRamp is that bridge.
⚠️ Note: If you run OnRamp without a GITHUB_TOKEN and OPENAI_API_KEY in packages/backend/.env, you'll see a demo analysis with sample data instead of live AI results. Add your keys to unlock the full experience.
What it does
OnRamp guides any developer from "I want to contribute" to "I know exactly where to start" in under 30 minutes, through four steps:
- Repository Analyzer — paste any GitHub URL, get a real-data architectural breakdown: modules, complexity ratings, entry points, detected patterns
- Profile Wizard — 4-step guided flow capturing your experience, languages, frameworks, interests. Works offline, no account needed
- Smart Recommender — scores your profile against projects and returns top matches with percentage scores and plain-language reasoning (see the scoring sketch after this list)
- Contribution Pathfinder — GPT-4o generates your personalised journey:
$$\text{Read Docs} \rightarrow \text{Setup} \rightarrow \text{Explore Codebase} \rightarrow \text{First Issue}$$
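Under the hood, matching comes down to weighted overlap between what you know and what a repo needs. Here is a minimal sketch of that idea; the interfaces, field names, and weights are illustrative assumptions, not OnRamp's actual implementation:

```typescript
// Illustrative shapes; the real profile and repo models may differ.
export interface UserProfile {
  languages: string[];   // e.g. ["typescript", "python"]
  interests: string[];   // e.g. ["web", "devtools"]
  experience: "beginner" | "intermediate" | "advanced";
}

export interface RepoSummary {
  languages: string[];
  topics: string[];
  complexity: 1 | 2 | 3; // 1 = approachable, 3 = complex
}

// Fraction of items in `a` that also appear in `b`, case-insensitively.
function overlap(a: string[], b: string[]): number {
  const setB = new Set(b.map((s) => s.toLowerCase()));
  const hits = a.filter((s) => setB.has(s.toLowerCase())).length;
  return a.length === 0 ? 0 : hits / a.length;
}

// Returns a 0-100 match score plus plain-language reasoning.
export function scoreMatch(profile: UserProfile, repo: RepoSummary) {
  const langScore = overlap(profile.languages, repo.languages);
  const topicScore = overlap(profile.interests, repo.topics);
  const levelFit =
    profile.experience === "beginner" ? (4 - repo.complexity) / 3 : 1;

  const score = Math.round((0.5 * langScore + 0.3 * topicScore + 0.2 * levelFit) * 100);
  const reasons = [
    `${Math.round(langScore * 100)}% of your languages appear in this repo`,
    `${Math.round(topicScore * 100)}% of your interests match its topics`,
  ];
  return { score, reasons };
}
```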
How we built it
Full-stack TypeScript monorepo, built entirely inside AWS Kiro — an agentic AI IDE that made production-quality output possible within the hackathon timeline.
Backend — Express.js, Prisma + PostgreSQL, Redis caching, Zod validation, Pino logging, GitHub REST API
Frontend — React 18, Vite, Tailwind CSS, React Router v6, localStorage for offline profiles
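Guest mode means a profile never has to touch the server. A rough sketch of what that localStorage persistence can look like, assuming a hypothetical `onramp.profile` key and profile shape:

```typescript
// Guest-mode profile persistence; the key name and shape are assumptions.
const PROFILE_KEY = "onramp.profile";

interface GuestProfile {
  experience: string;
  languages: string[];
  frameworks: string[];
  interests: string[];
}

export function saveGuestProfile(profile: GuestProfile): void {
  localStorage.setItem(PROFILE_KEY, JSON.stringify(profile));
}

export function loadGuestProfile(): GuestProfile | null {
  const raw = localStorage.getItem(PROFILE_KEY);
  if (!raw) return null;
  try {
    return JSON.parse(raw) as GuestProfile;
  } catch {
    return null; // corrupted entry: restart the wizard rather than crash
  }
}
```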
LLM fallback chain — so the app never crashes:
$$\text{OpenAI GPT-4o} \rightarrow \text{Anthropic Claude} \rightarrow \text{Pattern-Based Analysis}$$
Redis caches repo analysis for 1 hour, cutting LLM calls by \( \approx 80\% \) on repeat queries.
```typescript
// Fallback chain — every LLM call follows this pattern
async analyzeRepo(url: string) {
  return await this.tryOpenAI(url)
      ?? await this.tryAnthropic(url)
      ?? this.patternFallback(url);
}
```
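The 1-hour Redis cache mentioned above wraps this chain with a cache-aside lookup. A rough sketch, assuming ioredis and a hypothetical `analysis:<repo-url>` key scheme rather than the exact service code:

```typescript
import Redis from "ioredis";

const redis = new Redis(process.env.REDIS_URL ?? "redis://localhost:6379");
const TTL_SECONDS = 60 * 60; // repo analyses stay cached for 1 hour

// Cache-aside wrapper: return a cached analysis if present,
// otherwise compute it (via the fallback chain) and store it.
async function cachedAnalysis<T>(repoUrl: string, compute: () => Promise<T>): Promise<T> {
  const key = `analysis:${repoUrl}`; // hypothetical key scheme
  const hit = await redis.get(key);
  if (hit) return JSON.parse(hit) as T;

  const fresh = await compute();
  await redis.set(key, JSON.stringify(fresh), "EX", TTL_SECONDS);
  return fresh;
}

// Usage: const result = await cachedAnalysis(url, () => analyzer.analyzeRepo(url));
```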
Challenges we ran into
- Dual LLM orchestration — OpenAI and Anthropic have different API shapes and error types. Building a unified service layer that falls back silently, without the user ever knowing, required careful error boundaries across all six LLM-calling methods
- Deterministic LLM output — GPT-4o is creative, sometimes too creative. Strict Zod schemas on both the prompt and response layers were essential to guarantee parseable, renderable data every time (see the schema sketch after this list)
- Offline-first profiles — separating localStorage state from PostgreSQL state cleanly, so guest mode is a first-class citizen and not an afterthought
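On the response layer, that guardrail looks roughly like this with Zod; the fields below are illustrative, not OnRamp's exact schema:

```typescript
import { z } from "zod";

// Illustrative schema for one step of the generated contribution path;
// the real OnRamp schema may use different fields.
const PathStepSchema = z.object({
  title: z.string().min(1),
  description: z.string(),
  difficulty: z.enum(["easy", "medium", "hard"]),
  estimatedMinutes: z.number().int().positive(),
});

const ContributionPathSchema = z.object({
  steps: z.array(PathStepSchema).min(1),
});

// safeParse never throws, so an off-spec LLM response triggers the fallback
// chain instead of crashing the renderer.
export function parseLlmPath(raw: string) {
  let json: unknown;
  try {
    json = JSON.parse(raw);
  } catch {
    return null;
  }
  const result = ContributionPathSchema.safeParse(json);
  return result.success ? result.data : null;
}
```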
Accomplishments that we're proud of
- 108 automated tests passing — including property-based tests with fast-check (see the property test sketch below). Rare at a hackathon, essential for the matching algorithm
- Zero crashes — tested with every external service turned off. The fallback chain holds
- Real data only — no mocks anywhere. Analysing facebook/react shows the actual 230K+ stars, actual file structure, actual README
- Production-grade in hackathon time — Redis, Docker Compose, Prisma migrations, Zod on every endpoint. Built to run, not just to demo
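For flavour, here is a property-based test in the spirit of that suite, using Vitest and fast-check against the scoring sketch from earlier; the real tests may check different invariants:

```typescript
import { describe, it, expect } from "vitest";
import fc from "fast-check";
import { scoreMatch } from "./matching"; // hypothetical path to the scoring sketch above

// Property: whatever the inputs look like, a match score is always a valid percentage.
describe("scoreMatch", () => {
  it("always returns a score between 0 and 100", () => {
    fc.assert(
      fc.property(
        fc.array(fc.string()), // arbitrary profile languages
        fc.array(fc.string()), // arbitrary repo languages
        (profileLangs, repoLangs) => {
          const { score } = scoreMatch(
            { languages: profileLangs, interests: [], experience: "beginner" },
            { languages: repoLangs, topics: [], complexity: 1 },
          );
          expect(score).toBeGreaterThanOrEqual(0);
          expect(score).toBeLessThanOrEqual(100);
        },
      ),
    );
  });
});
```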
What we learned
- Resilience is a feature, not an afterthought — designing for failure from day one changes how you structure everything
- Agentic IDEs need a new mental model — in AWS Kiro, the highest-leverage skill isn't typing, it's writing precise specs
- The barrier is psychological before it's technical — developers don't contribute because they fear picking the wrong thing, not because they can't code. UX has to solve the anxiety first
What's next for OnRamp
| Timeline | Feature |
|---|---|
| 3 months | AWS Bedrock — replace OpenAI with Bedrock (Claude / Titan) for full AWS-native stack |
| 3 months | Issue Classifier — AI labels GitHub issues by difficulty and required skills |
| 6 months | Hindi UI — localised interface + multilingual README parsing for Bharat |
| 6 months | Gamification — streaks, milestones, contribution badges |
| 12 months | Mentor Matching — connect first-timers with experienced maintainers at scale |
India has 1.5 million engineering graduates every year. Most never make a single open-source contribution — not from lack of ability, but lack of a starting point. OnRamp is that starting point, at scale.
Built With
- amazon-web-services
- anthropic
- api
- css
- docker
- express.js
- github
- kiro
- node.js
- openai
- orm
- postgresql
- prisma
- react
- redis
- rest
- tailwind
- typescript
- vite
- vitest
- zod