Inspiration

We often treat code as just lines of text, logic, and functionality. But every repository has a "spirit": a chaotic energy, a zen-like structure, or a massive industrial complexity. We wanted to bridge the gap between engineering and art, using AI to "read" the personality of a codebase and make it visible. The key question: if your code could sing and dance, what would it look and sound like?

What it does

GitAura is a "code horoscope" and generative art engine that quantifies the intangible "vibe" of a software project.

- Analyzes code: Users enter a GitHub repo URL. We fetch metadata and use Google Gemini Pro to classify the "vibe" (Cyberpunk, Zen, Chaotic, Corporate, etc.) as a structured profile (sketched below this list).
- 3D visualization: An interactive WebGL scene (React Three Fiber) where particles, lights, and colors react to the code's complexity, languages, and mood.
- Generative audio: A unique ambient soundscape composed in real time with Tone.js, matched to the repo's activity rhythm.
- Generative art: A downloadable p5.js abstract artwork derived from the commit history.
- Social discovery: Pinecone vector search surfaces other repositories with a similar "vibe" to yours.
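
For concreteness, here is roughly the shape of the vibe profile the analysis step produces per repository; the field names are illustrative, not the exact production schema:

```typescript
// Hypothetical shape of the per-repo vibe profile (names are illustrative).
interface VibeProfile {
  mood: "cyberpunk" | "zen" | "chaotic" | "corporate"; // dominant vibe label
  colorPalette: string[]; // hex colors that drive the WebGL scene
  tempo: number;          // BPM hint for the Tone.js soundscape
  chaos: number;          // 0..1, drives particle turbulence
  summary: string;        // one-line "horoscope" blurb shown to the user
}
```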

How we built it

We built GitAura using a modern Next.js 16 stack. The core innovation lies in how we translate abstract code metrics into sensory parameters.

We define a "Vibe Vector" $V$ as a multi-dimensional representation of a repository's personality: $$V = \{ v_{\text{mood}},\ v_{\text{color}},\ v_{\text{tempo}},\ v_{\text{chaos}} \}$$

Each component is derived from a weighted combination of commit history, language distribution, and semantic signals from Gemini: $$v_{\text{chaos}} = \alpha \cdot \text{CommitFrequency} + \beta \cdot \text{CodeChurn} + \gamma \cdot \text{GeminiSentiment}$$
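
A minimal sketch of that score in code, assuming each input is pre-normalized to [0, 1] and using placeholder weights (the real values came from tuning):

```typescript
// Chaos score per the formula above. Weights are placeholder assumptions.
const ALPHA = 0.5; // commit frequency weight
const BETA = 0.3;  // code churn weight
const GAMMA = 0.2; // Gemini sentiment weight

function chaosScore(
  commitFrequency: number, // commits per week, normalized to 0..1
  codeChurn: number,       // lines changed per commit, normalized to 0..1
  geminiSentiment: number  // 0 (calm) .. 1 (frantic), from the LLM analysis
): number {
  const v = ALPHA * commitFrequency + BETA * codeChurn + GAMMA * geminiSentiment;
  return Math.min(1, Math.max(0, v)); // clamp to [0, 1]
}
```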

- Frontend: Next.js 16 (App Router) and React 19.
- AI analysis: The Gemini Pro API interprets the nuanced difference between a "hackathon project" and an "enterprise monolith."
- 3D graphics: React Three Fiber renders thousands of instanced particles efficiently.
- Audio: Tone.js uses the $v_{\text{tempo}}$ and $v_{\text{mood}}$ values to procedurally generate scales and chord progressions (see the sketch after this list).
- Vector search: Pinecone stores and queries the Vibe Vectors, enabling "semantic vibe matching."
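
As an illustration of the audio mapping, here is a rough sketch; the scale tables, synth choice, and note-density curve are assumptions rather than our exact production values:

```typescript
import * as Tone from "tone";

// Hypothetical mood-to-scale lookup; tune to taste.
const SCALES: Record<string, string[]> = {
  zen:       ["C4", "D4", "E4", "G4", "A4"],          // major pentatonic
  corporate: ["C4", "E4", "G4", "B4"],                // maj7 arpeggio
  chaotic:   ["C4", "Db4", "E4", "F#4", "G4", "Bb4"], // dissonant cluster
};

function startSoundscape(vibe: { mood: string; tempo: number; chaos: number }) {
  Tone.getTransport().bpm.value = vibe.tempo;
  const synth = new Tone.PolySynth(Tone.Synth).toDestination();
  const notes = SCALES[vibe.mood] ?? SCALES.zen;

  // Higher chaos -> denser, more erratic note placement.
  new Tone.Loop((time) => {
    if (Math.random() < 0.3 + 0.6 * vibe.chaos) {
      const note = notes[Math.floor(Math.random() * notes.length)];
      synth.triggerAttackRelease(note, "8n", time);
    }
  }, "8n").start(0);

  Tone.getTransport().start();
}
```

Note that browsers gate audio behind a user gesture, so `await Tone.start()` must run inside a click handler before any of this plays (more on that under Challenges).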

Challenges we ran into

- Abstract to concrete: Translating a judgment like "this code is messy" into specific numbers for particle speed and audio tempo was difficult. We fine-tuned the Gemini prompts extensively to get consistent JSON outputs (see the sketch after this list).
- Performance: Rendering 3D particles ($N > 5000$), generating real-time audio, and handling API requests simultaneously required careful optimization of the React render loop via useFrame.
- Browser audio policies: Browsers block autoplay until a user gesture, so the generative audio needed a tailored "enter the experience" UX flow.
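
Here is a minimal sketch of the JSON-wrangling fix, assuming the @google/generative-ai SDK and a Gemini model that supports JSON response mode; the prompt and schema are simplified stand-ins for the real ones:

```typescript
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY!);
const model = genAI.getGenerativeModel({
  model: "gemini-pro", // assumption: swap in a model with JSON mode if needed
  generationConfig: { responseMimeType: "application/json" },
});

// Ask for JSON only, then parse; even with JSON mode, wrapping
// JSON.parse in a retry on failure is a sensible guard.
async function analyzeVibe(repoSummary: string) {
  const prompt =
    `Analyze this repository and reply with JSON only, shaped as ` +
    `{"mood": "cyberpunk"|"zen"|"chaotic"|"corporate", "chaos": number 0..1, ` +
    `"summary": string}. Repository: ${repoSummary}`;
  const result = await model.generateContent(prompt);
  return JSON.parse(result.response.text());
}
```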

Accomplishments that we're proud of

- A feeling of synesthesia: the visual and audio elements feel perfectly synced to the code analysis.
- The "Similar Vibe" search works remarkably well, connecting seemingly unrelated projects (e.g., a high-frequency trading bot and a hyper-casual game) because they share the same "Chaotic" energy.
- A smooth 60 fps 3D experience running directly in the browser with Next.js 16 (see the sketch after this list).
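
The particle field follows the standard React Three Fiber pattern: one InstancedMesh whose matrices are mutated each frame inside useFrame, so React itself never re-renders. A simplified sketch (the motion function here is a placeholder, not our actual animation):

```tsx
import { useRef } from "react";
import { useFrame } from "@react-three/fiber";
import * as THREE from "three";

const COUNT = 5000;
const dummy = new THREE.Object3D(); // scratch object reused every frame

function ParticleField({ chaos }: { chaos: number }) {
  const mesh = useRef<THREE.InstancedMesh>(null!);

  // Mutate instance matrices imperatively; no React re-render per frame.
  useFrame(({ clock }) => {
    const t = clock.getElapsedTime();
    for (let i = 0; i < COUNT; i++) {
      dummy.position.set(
        Math.sin(t * (0.5 + chaos) + i) * 10,
        Math.cos(t * 0.4 + i * 0.1) * 10,
        Math.sin(t * 0.3 + i * 0.05) * 10
      );
      dummy.updateMatrix();
      mesh.current.setMatrixAt(i, dummy.matrix);
    }
    mesh.current.instanceMatrix.needsUpdate = true;
  });

  return (
    <instancedMesh ref={mesh} args={[undefined, undefined, COUNT]}>
      <sphereGeometry args={[0.05, 8, 8]} />
      <meshBasicMaterial color="#7df9ff" />
    </instancedMesh>
  );
}
```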

What we learned

- How to use LLMs not just for text generation but for parameter generation, driving deterministic creative systems.
- Advanced 3D shader techniques and post-processing effects in React Three Fiber.
- The power of vector databases for non-textual similarity matching: matching on "mood" rather than keywords (see the sketch after this list).
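
A sketch of the vibe-matching query, assuming the official @pinecone-database/pinecone client; the index name and vector dimensionality are hypothetical:

```typescript
import { Pinecone } from "@pinecone-database/pinecone";

const pc = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pc.index("gitaura-vibes"); // hypothetical index name

// Nearest neighbors in "vibe space": similarity over Vibe Vectors,
// so matches share a mood, not keywords or topics.
async function findSimilarVibes(vibeVector: number[]) {
  const res = await index.query({
    vector: vibeVector,
    topK: 5,
    includeMetadata: true, // carries repo name, URL, and mood label
  });
  return res.matches;
}
```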

What's next for GitAura

- VR support: Walk inside your codebase in virtual reality.
- Team vibes: Analyze organization-level vibes (e.g., "How stressed is this engineering team?").
- Spotify integration: Generate playlists based on your code history.

Built With

gemini, next.js, p5.js, pinecone, react, react-three-fiber, tone.js, webgl
