Inspiration
We've all experienced that frustrating moment: a brilliant idea during a meeting, an important insight while reading an article, a voice note recorded on the subway... and it all gets lost in the digital chaos. Current tools force us to manually organize our thoughts, turning knowledge management into a chore rather than a superpower. Second Brain Live was born from a simple question: What if your digital brain organized itself?
What it does
Second Brain Live is your AI-powered augmented memory assistant that:
- Captures everything, everywhere: text notes, voice notes, screenshots, web links
- Automatically organizes: AI structures your thoughts in real time without manual intervention
- Connects ideas: detects links between your notes to reveal hidden insights
- Makes your knowledge accessible: intelligent natural-language search, not keyword search
- Syncs in real time: your thoughts are available on all your devices instantly
You think. We do the rest.
How we built it
We built Second Brain Live using Gemini 3 as the core reasoning engine. Audio and text inputs are processed in short, low-latency chunks and sent to Gemini 3, which performs multi-step reasoning to maintain a structured cognitive state.
The frontend displays live summaries, action items, and mental structures, while the backend manages session context and incremental updates. Gemini 3’s multimodal understanding and low latency were essential to making the experience feel truly live.
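The chunked loop described above can be sketched roughly as follows. This is an illustrative outline, not our production code: `call_gemini` is a hypothetical stand-in for the real Gemini 3 API call, and the fields of `CognitiveState` are examples of the kind of structure the model maintains.

```python
import json
from dataclasses import dataclass, field

@dataclass
class CognitiveState:
    """Structured state the backend keeps across input chunks."""
    summary: str = ""
    action_items: list = field(default_factory=list)
    topics: list = field(default_factory=list)

def call_gemini(prompt: str) -> str:
    """Hypothetical stand-in for a Gemini 3 request.

    A real implementation would send `prompt` to the model and return
    its JSON response; here we return a fixed stub so the sketch runs.
    """
    return json.dumps({"summary": "stub", "action_items": [], "topics": []})

def process_chunk(state: CognitiveState, chunk: str) -> CognitiveState:
    """Merge one low-latency input chunk into the cognitive state."""
    prompt = (
        "Current state:\n" + json.dumps(state.__dict__) +
        "\nNew input chunk:\n" + chunk +
        "\nReturn the updated state as JSON with keys "
        "summary, action_items, topics."
    )
    updated = json.loads(call_gemini(prompt))
    return CognitiveState(**updated)

# Each incoming chunk produces an incremental state update,
# which the frontend renders as live summaries and action items.
state = CognitiveState()
for chunk in ["Discussed launch timeline.", "Action: email the designer."]:
    state = process_chunk(state, chunk)
```

Keeping the state as explicit JSON, rather than raw chat history, is what lets each request stay small and low-latency.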
Challenges we ran into
One of the main challenges was maintaining coherence over time while processing continuous streams of information. Another challenge was designing prompts that encouraged reasoning and structure, rather than simple summarization.
Latency was also critical — any delay breaks the feeling of a “live second brain.”
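One common way to balance coherence against latency on a continuous stream is a rolling context window: the most recent chunks stay verbatim in the prompt while older material is folded into a running summary. A minimal sketch of that idea follows; the window size is illustrative and `summarize` is a placeholder for a real model-driven compression step.

```python
from collections import deque

def summarize(texts):
    """Placeholder summarizer; a real system would call the model here."""
    return " / ".join(t[:20] for t in texts)

class RollingContext:
    """Keep recent chunks verbatim; fold evicted ones into a summary."""

    def __init__(self, max_recent: int = 3):
        self.recent = deque(maxlen=max_recent)
        self.summary = ""

    def add(self, chunk: str) -> None:
        # If the window is full, the oldest chunk is about to fall out,
        # so compress it into the running summary first.
        if len(self.recent) == self.recent.maxlen:
            evicted = self.recent[0]
            self.summary = (
                summarize([self.summary, evicted]) if self.summary else evicted
            )
        self.recent.append(chunk)

    def prompt_context(self) -> str:
        """Compact context string to prepend to the next model request."""
        return f"Summary so far: {self.summary}\nRecent: {' | '.join(self.recent)}"

ctx = RollingContext(max_recent=2)
for chunk in ["alpha", "beta", "gamma", "delta"]:
    ctx.add(chunk)
```

This keeps each request bounded in size (helping latency) while the summary preserves older context (helping coherence).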
Accomplishments that we're proud of
- Building a real-time reasoning system rather than a static assistant
- Demonstrating meaningful use of Gemini 3 beyond chat
- Achieving live cognitive structuring with low latency
- Creating a clear, intuitive interface that shows thinking in motion
What we learned
We learned that real-time AI requires a fundamentally different design mindset than post-hoc analysis. Prompting for continuous reasoning, managing context carefully, and balancing latency with depth are key to building live AI systems.
Most importantly, we learned that AI feels most powerful when it works with humans, not just for them.
What's next for Second Brain Live
Next, we plan to expand multimodal inputs (screen and visual context), add long-term memory across sessions, and introduce proactive insights such as detecting contradictions or forgotten decisions.
Our long-term vision is to make Second Brain Live a true cognitive co-pilot: a second brain that thinks while you do.
