VaultSim: Encrypted AI Reasoning Architecture
Inspiration
VaultSim emerged from recognizing a fundamental architectural gap: AI systems today cannot reason over sensitive data without accessing, storing, or exposing identity. Legal cases became the proving ground, but the insight is universal. Across healthcare, finance, and enterprise environments, organizations need AI that can perform intelligent analysis on personally sensitive information—names, addresses, financial records, medical histories, identity numbers—without ever holding or "seeing" that data.
We set out to build something unprecedented: an Encrypted AI Reasoning Architecture where multiple reasoning agents operate on tokenized, identity-abstracted information, generating structured intelligence while the original data remains encrypted and isolated on the client layer. The architecture proves that AI cognition can be privacy-native from the ground up—not privacy-added afterward.
VaultSim is powered end-to-end by Gemini—privately hosted at the edge for identity masking and centrally orchestrated for reasoning—creating a unified cognition stack where privacy enforcement and intelligence operate within the same model ecosystem while remaining cryptographically isolated.
What it does
VaultSim is a privacy-native AI reasoning platform that demonstrates how AI agents can perform complex reasoning tasks on sensitive data while maintaining zero exposure to original identity.
Users describe their case and optionally upload documents. A privately hosted Gemini model—running in an isolated environment with no internet exposure—performs semantic PII detection and masks sensitive content while preserving contextual integrity. This creates a tokenization layer: encrypted token–value mappings stored exclusively on the frontend, protected by AES-256 encryption, and destroyed at session end.
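As a rough sketch of how that client-side tokenization layer could be implemented (assuming the browser's Web Crypto API and an in-memory token map; the types and names here are illustrative, not the actual VaultSim code):

```typescript
// Illustrative sketch: encrypt the token–value mapping with AES-256-GCM,
// keep it only in client memory, and let it disappear when the session ends.

type TokenMap = Record<string, string>; // e.g. { "[PERSON_1]": "Jane Doe" }

async function createSessionKey(): Promise<CryptoKey> {
  // Non-extractable 256-bit key that exists only for the current session.
  return crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, false, [
    "encrypt",
    "decrypt",
  ]);
}

async function encryptTokenMap(key: CryptoKey, map: TokenMap) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh IV per encryption
  const plaintext = new TextEncoder().encode(JSON.stringify(map));
  const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, plaintext);
  return { iv, ciphertext }; // held in memory only, never sent to the reasoning layer
}

async function decryptTokenMap(
  key: CryptoKey,
  iv: Uint8Array,
  ciphertext: ArrayBuffer
): Promise<TokenMap> {
  const plaintext = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
  return JSON.parse(new TextDecoder().decode(plaintext));
}
```

Because the key is generated as non-extractable and held only in memory, discarding the reference at session end is enough to make the mapping unrecoverable.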
The masked data then flows into a Gemini-powered reasoning layer where multiple AI agents engage in structured cognition. Gemini acts as the centralized intelligence engine coordinating multi-agent reasoning across abstracted inputs while never accessing original personal data.
The output is structured analysis derived entirely from encrypted reasoning—no raw data exposure, no external storage, no retention after session closure.
This architecture is immediately applicable to medical record analysis, financial case review, confidential document reasoning, and any domain requiring AI intelligence over compliance-sensitive information.
How we built it
VaultSim's design treats encrypted reasoning as infrastructure, not an afterthought.
Layer 1 — Client-Side Identity Abstraction (Private Gemini)
The input layer accepts user descriptions and documents. A privately hosted Gemini model running in an isolated environment detects PII and generates masked content alongside encrypted token–value mappings (AES-256).
- Identity fields are replaced with structured tokens
- Token mappings remain client-side only
- No mapping between token and identity ever reaches reasoning systems
- Gemini operates locally as the privacy boundary before reasoning begins
This ensures identity never enters the intelligence layer.
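As an illustration of this boundary, the request to the privately hosted masking model could look roughly like the following; the endpoint URL, request shape, and field names are hypothetical stand-ins, not the real interface:

```typescript
// Illustrative sketch of the Layer 1 masking call. Only the masked text moves on
// to the reasoning layer; the token map stays on the client.

interface MaskingResult {
  maskedText: string;                 // identity replaced with structured tokens
  tokenMap: Record<string, string>;   // e.g. { "[PERSON_1]": "Jane Doe" } (client-side only)
}

async function maskIdentity(rawText: string): Promise<MaskingResult> {
  // Hypothetical private endpoint for the isolated Gemini masking model.
  const res = await fetch("https://gemini-private.internal/mask", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: rawText }),
  });
  if (!res.ok) throw new Error(`masking failed: ${res.status}`);
  return res.json();
}
```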
Layer 2 — Gemini as Central Reasoning Infrastructure
After masking, tokenized case data enters a centralized Gemini reasoning system that orchestrates multi-agent cognition.
- Support agent generates structured arguments
- Opposition agent produces counter-reasoning
- Judge agent synthesizes conclusions and outcomes
Each agent reasons only on identity-abstracted inputs. Gemini maintains contextual awareness across the entire pipeline, proving that semantic meaning survives masking when abstraction is designed correctly.
Gemini effectively centralizes cognition while identity remains decentralized—allowing the system to function as a unified intelligence environment without concentrating sensitive data.
This demonstrates a new architectural model: intelligence can be centralized, while privacy remains local.
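A minimal sketch of that agent flow, assuming a generic `askGemini` wrapper around the centralized reasoning endpoint (prompt wording and names are illustrative):

```typescript
// Illustrative sketch of the Layer 2 multi-agent flow. Every prompt contains
// only tokenized, identity-abstracted text.

type AskGemini = (prompt: string) => Promise<string>;

async function runCase(askGemini: AskGemini, maskedCase: string) {
  const support = await askGemini(
    `You are the support agent. Build structured arguments for this case:\n${maskedCase}`
  );
  const opposition = await askGemini(
    `You are the opposition agent. Produce counter-reasoning to these arguments:\n${support}\n\nCase:\n${maskedCase}`
  );
  const judgment = await askGemini(
    `You are the judge agent. Weigh both sides and synthesize a conclusion.\n` +
      `Support:\n${support}\n\nOpposition:\n${opposition}`
  );
  return { support, opposition, judgment }; // structured analysis over abstracted data only
}
```

Each call receives only masked text and prior masked outputs, so the reasoning layer never holds a path back to identity.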
Layer 3 — Session-Based Zero Retention
All encryption keys, token mappings, intermediate reasoning states, and Gemini outputs exist only during the active session. Upon logout, client-side storage is wiped, leaving no persistent data footprint.
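A sketch of what that teardown could look like, continuing the structures assumed in the earlier snippets (names are illustrative):

```typescript
// Illustrative sketch of session teardown: drop every reference so nothing
// about the session can be recovered after logout.

interface SessionState {
  key: CryptoKey | null;
  encryptedTokenMap: { iv: Uint8Array; ciphertext: ArrayBuffer } | null;
  reasoningOutputs: string[];
}

function endSession(state: SessionState): void {
  state.key = null;               // non-extractable key becomes unreachable
  state.encryptedTokenMap = null; // ciphertext is discarded with it
  state.reasoningOutputs = [];    // Gemini outputs are not retained
  sessionStorage.clear();         // clear any per-session state held by the browser
}
```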
Together, these three layers make encrypted reasoning, including Gemini-powered cognition, the foundation for trustworthy intelligence over sensitive information.
Challenges we ran into
The core challenge was architectural: How do you enable contextually accurate AI reasoning when models never see identity?
Masking PII without degrading semantic richness required treating abstraction as a learned task. The privately hosted Gemini model had to preserve relationships, context, and narrative structure—not simply redact data.
A second challenge was designing a unified Gemini architecture in which one instance enforces privacy at the edge and another powers centralized reasoning, while maintaining cryptographic separation between identity and cognition.
A third challenge was orchestrating stateless multi-agent reasoning while preserving continuity. Each reasoning step had to remain context-aware without persistent memory.
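One way to picture the continuity requirement, continuing the assumptions of the earlier sketches: the accumulated transcript travels with each request, so every call stays stateless while remaining context-aware.

```typescript
// Illustrative sketch: continuity is carried in the prompt itself rather than
// in any server-side memory.

async function reasonStatelessly(
  askGemini: (prompt: string) => Promise<string>,
  steps: string[],          // instructions for each reasoning step
  maskedCase: string
): Promise<string[]> {
  let transcript = `Case (identity-abstracted):\n${maskedCase}`;
  const outputs: string[] = [];
  for (const step of steps) {
    const out = await askGemini(`${transcript}\n\nNext step: ${step}`);
    outputs.push(out);
    transcript += `\n\n${step}:\n${out}`; // re-sent in full on the next call
  }
  return outputs;
}
```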
Balancing encrypted reasoning with acceptable latency and coherence required infrastructure optimization, prompt architecture, and careful orchestration.
These challenges are foundational to any privacy-native reasoning system, not just legal applications.
Accomplishments that we're proud of
- Unified Gemini architecture — privacy enforcement and reasoning operate within the same model ecosystem
- Identity never enters intelligence systems — token mappings remain client-side only
- Context-preserving masking — Gemini reasoning retains narrative integrity despite abstraction
- Centralized cognition, decentralized privacy — intelligence scales while identity remains protected
- Session-based zero-retention architecture — no prompts, outputs, or mappings persist after session closure
The result is privacy-native AI reasoning infrastructure: privacy is not bolted on; it defines every layer of the system.
What we learned
Privacy engineering is architecture, not compliance.
We discovered that centralizing cognition through Gemini while decentralizing identity creates a powerful system design:
- intelligence scales through a unified reasoning core
- privacy scales through local abstraction
We learned that abstraction at the input boundary is more powerful than encryption after transmission. Removing identity before reasoning eliminates entire classes of risk.
Stateless reasoning aligns naturally with privacy-first systems, producing cleaner, more secure architectures.
This pattern—local identity abstraction, centralized reasoning, zero retention—generalizes across healthcare, finance, compliance, and enterprise intelligence workflows.
What's next for VaultSim
In the immediate term, we're enhancing reasoning depth, improving explainability, and expanding support for multilingual and multi-document inputs.
Long-term, VaultSim evolves into a privacy-native AI reasoning infrastructure for:
- healthcare case intelligence
- financial compliance analysis
- enterprise document reasoning
- institutional training and research
The Ultimate Vision: Distributed Encrypted Agent Infrastructure
Today’s AI systems centralize both intelligence and data.
VaultSim proposes a new model:
A distributed network where:
- Gemini powers cognition
- identity remains local and encrypted
- agents reason on abstracted data
- outcomes aggregate without exposing sensitive information
Each node operates with its own encrypted reasoning boundary. Intelligence scales through centralized cognition while privacy remains decentralized.
This unlocks:
- enterprise automation without centralized data gathering
- multi-organizational reasoning without identity sharing
- privacy-safe agent economies
- large-scale intelligence over sensitive domains
VaultSim is the proof of this architecture.
It demonstrates that the future of AI isn’t about bigger models or more data—it’s about rearchitecting how intelligence interacts with identity.
Gemini becomes the cognition layer.
Encryption becomes the operating system.
Privacy becomes the default state of intelligence.