Inspiration

Inspired by Hacktron and the growing need for cybersecurity consciousness, we decided to create a game that teaches newcomers basic cybersecurity awareness. In the modern development landscape, security often feels like a chore: a checklist of dry audits and automated warnings. We wanted to change that. Inspired by the high-stakes paranoia of social deduction games like Among Us and the rising presence of autonomous security agents like Hacktron, we asked a simple question: can we make code auditing addictive?

We realized that spotting a vulnerability in a function is surprisingly similar to spotting an imposter in a spaceship: you need intuition, logic, and a keen eye for "sus" patterns. OrbitSec was born to bridge the gap between boring security training and thrill-inducing gameplay.

What it does

OrbitSec is a high-tension, gamified security trainer where players act as the ship's Security Officer.
The Mission: Players are presented with AI-generated code snippets (some clean, some malicious).
The Task: You must make a call. Is the code SAFE or VULNERABLE?
The Stakes: A wrong move triggers a "Breach."
The Feedback: When a player fails, the game doesn't just say "Wrong." It feeds Hacktron's deep-scan logs into Claude to generate a summary, narrated via ElevenLabs, explaining exactly how the player compromised the ship.
Tutorial mode with on‑demand hints (Claude generates two hints per snippet; user reveals them during play).
Endless mode progression (5 easy → 5 medium → 5 hard until the first mistake).
Accuracy by vulnerability type in the summary report (per‑vuln scorecard).
Live audit UX (split‑screen scan log + progress ring while Hacktron/Claude run).
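The endless-mode difficulty ladder described above (5 easy → 5 medium → 5 hard, run ends on the first mistake) can be sketched in a few lines. This is an illustrative sketch; the function names are hypothetical and not taken from the OrbitSec codebase:

```python
def difficulty_for_round(round_index: int) -> str:
    """Return the difficulty tier for a 0-based round index."""
    if round_index < 5:
        return "easy"
    if round_index < 10:
        return "medium"
    return "hard"  # rounds 10+ stay hard until the first mistake ends the run


def play_endless(guesses_correct: list[bool]) -> int:
    """Count correct calls until the player's first wrong one."""
    score = 0
    for correct in guesses_correct:
        if not correct:
            break  # a wrong move triggers a "Breach" and ends the run
        score += 1
    return score
```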

How we built it

Frontend: We used React (Vite) + TypeScript to create a crisp, "retrofuturistic" terminal interface. Framer Motion powers the smooth, game-like transitions.
Backend: A FastAPI (Python) server orchestrates the game loop.
LLM: We prompt Claude to generate realistic, subtle buggy code on the fly.
Scanner: Hacktron CLI (WSL supported). It scans every generated snippet to ensure the game is technically accurate and not just hallucinating vulnerabilities.
TTS: ElevenLabs provides the AI voice that narrates the player's inevitable failures.
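The round pipeline these pieces form (generate a snippet with Claude, confirm the label with the Hacktron scan, only then serve it to the player) might look roughly like this. The bodies below are stand-ins for the real Anthropic API and Hacktron CLI calls, and all names are illustrative:

```python
import random


def generate_snippet(difficulty: str, vulnerable: bool) -> str:
    """Stand-in for the Claude call that writes a code snippet on the fly."""
    return f"# {difficulty} snippet ({'vulnerable' if vulnerable else 'clean'})"


def scan_snippet(code: str) -> bool:
    """Stand-in for the Hacktron scan; True means a vulnerability was found."""
    return "vulnerable" in code


def build_round(difficulty: str) -> dict:
    """Regenerate until the scanner's verdict matches the intended label,
    so the game never serves a snippet Claude merely *claimed* was buggy."""
    intended = random.choice([True, False])
    for _ in range(5):  # retry a few times if generation and scan disagree
        code = generate_snippet(difficulty, intended)
        if scan_snippet(code) == intended:
            return {"code": code, "vulnerable": intended}
    raise RuntimeError("could not produce a snippet matching its label")
```

The retry loop is the key design point: the scanner, not the LLM, is the source of truth for a snippet's label.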

Challenges we ran into

We had to write a custom Python wrapper (hacktron.py) to pipe generated code from FastAPI (Windows) into the WSL instance, run the scan, and parse the JSON output back to the game in real-time.
Tuning the AI prompts to ensure Claude generated code that was subtly broken (detectable by Hacktron but not obvious to a human) also took many iterations.
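A minimal sketch of what that Windows-to-WSL bridge might look like, assuming `wsl.exe` is on PATH and the scanner prints JSON with a top-level "findings" list. The `hacktron scan --json` invocation and the output shape are assumptions for illustration, not the real CLI interface:

```python
import json
import os
import subprocess
import tempfile


def parse_scan_output(raw: str) -> list:
    """Parse the scanner's JSON stdout; assumes a top-level 'findings' list."""
    return json.loads(raw).get("findings", [])


def scan_with_hacktron_wsl(code: str) -> list:
    """Write the snippet to a temp file and scan it from inside WSL."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        win_path = f.name
    try:
        # Translate C:\... into /mnt/c/... so the WSL process can read it.
        wsl_path = subprocess.run(
            ["wsl", "wslpath", "-a", win_path],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        # Hypothetical invocation; consult the Hacktron CLI docs for real flags.
        scan = subprocess.run(
            ["wsl", "hacktron", "scan", "--json", wsl_path],
            capture_output=True, text=True, check=True,
        )
        return parse_scan_output(scan.stdout)
    finally:
        os.unlink(win_path)
```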

Accomplishments that we're proud of

The crisp UI that nails the retrofuturistic theme.
Successfully using a real static-analysis tool (Hacktron) to validate AI generation, which prevents the "AI hallucination" problem common in LLM games.

What's next for OrbitSec

The Gameplay Loop: Instead of AI generation, players write their own code snippets to submit to the ship's "repository."
The Engineer's Goal: Submit clean, functional code and identify malicious commits during the peer review phase.
The Saboteur's Goal: Write code that looks clean but contains hidden vulnerabilities (e.g., a subtle race condition or a regex DoS), hoping to sneak it past the group's vote.
The Arbiter: "Hacktron" remains the impartial judge, scanning accepted code to reveal if the crew successfully deployed a feature or if they just merged a catastrophic exploit.

Built With

claude, elevenlabs, fastapi, framer-motion, hacktron, python, react, typescript, vite
