Space Designer – Agentic AI Interior & Framing Assistant
Inspiration
Designing interiors is often overwhelming—balancing aesthetics, budget, and spatial constraints demands both creativity and technical precision.
We set out to build an autonomous AI system that bridges architecture, art, and computation—a system that “understands” a room like a human designer but reasons with the precision and scalability of a machine.
What We Learned
Through building Space Designer, we explored how multi-agent architectures can collaborate on complex spatial reasoning. Each agent needed distinct objectives yet a shared contextual understanding, from geometry and layout logic to aesthetic style embeddings.
We learned how NVIDIA NIMs (for vision inference) and Llama-3 (for reasoning and justification) could jointly enable architectural intelligence. Combining vision and language modeling let our agents perceive, reason about, and explain design choices in human-readable form.
How We Built It
We built Space Designer as a modular agentic architecture, orchestrated using AWS Step Functions for workflow coordination and FastAPI for real-time interaction; minimal sketches of the key pieces follow the list:
- NVIDIA NIMs host the visual perception models, deployed as containerized inference endpoints on AWS SageMaker (invocation sketched below).
- Llama-3 agents perform high-level reasoning, justification, and dialogue-based design explanation.
- Blueprint segmentation is calibrated against room-scale metadata to reconstruct accurate 3D spaces (see the calibration sketch below).
- Three.js powers browser-based real-time 3D rendering, enabling users to interactively visualize and adjust room layouts.
- After the 3D rendering phase, the Shopping Agent queries the Amazon and IKEA APIs for product recommendations (furniture, décor, lighting) that match the generated design's aesthetic and physical dimensions.
- Data flows through specialized agents (Tour Guide → Math → Architect → Presenter → Shopping Agent), producing validated layouts, spatial analysis, interactive renders, and curated furnishing options.
- SageMaker hosts the AI endpoints, and AWS Step Functions manage the sequential task flow.
- DynamoDB stores structured design metadata and user preferences, while S3 maintains 3D model assets and visual data.
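To make the orchestration concrete, here is a minimal sketch of how a FastAPI route could hand a new design request to the Step Functions pipeline. The route path, environment variable, and payload fields are illustrative assumptions, not our exact production contract:

```python
import json
import os
import uuid

import boto3
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
sfn = boto3.client("stepfunctions")

# Hypothetical placeholder; the real ARN comes from deployment config.
STATE_MACHINE_ARN = os.environ["SPACE_DESIGNER_STATE_MACHINE_ARN"]

class DesignRequest(BaseModel):
    image_s3_uri: str               # room photo or blueprint already uploaded to S3
    style_preferences: list[str] = []
    budget_usd: float | None = None

@app.post("/designs")
def start_design(req: DesignRequest):
    """Kick off the Tour Guide -> Math -> Architect -> Presenter -> Shopping pipeline."""
    execution = sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        name=f"design-{uuid.uuid4()}",
        input=json.dumps(req.model_dump()),
    )
    # The client polls for results keyed by this execution ARN.
    return {"execution_arn": execution["executionArn"]}
```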
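Within that pipeline, the perception step calls the containerized NIM through the SageMaker runtime. The endpoint name and response shape below are assumptions for illustration:

```python
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

def segment_blueprint(image_bytes: bytes) -> dict:
    """Send a blueprint or room image to the vision endpoint and parse the JSON result.

    'space-designer-nim-vision' is a hypothetical endpoint name.
    """
    response = runtime.invoke_endpoint(
        EndpointName="space-designer-nim-vision",
        ContentType="application/octet-stream",
        Body=image_bytes,
    )
    # Assumed response shape: {"rooms": [...], "walls": [...], "openings": [...]}
    return json.loads(response["Body"].read())
```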
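Scale calibration boils down to a pixels-per-meter ratio derived from one reference segment whose real length is known from the room metadata. A simplified version of that geometry, with made-up numbers:

```python
import math

def pixels_per_meter(p1: tuple[float, float], p2: tuple[float, float],
                     real_length_m: float) -> float:
    """Ratio from a reference segment with a known real-world length."""
    pixel_length = math.dist(p1, p2)
    return pixel_length / real_length_m

def to_world(point_px: tuple[float, float], scale: float) -> tuple[float, float]:
    """Convert a blueprint pixel coordinate to meters on the 3D scene's ground plane."""
    return (point_px[0] / scale, point_px[1] / scale)

# Example: a wall drawn as 480 px that the metadata says is 4.0 m long.
scale = pixels_per_meter((100, 200), (580, 200), 4.0)   # -> 120 px/m
corner_m = to_world((820, 560), scale)                  # -> (about 6.83, 4.67)
```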
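Persistence is a straightforward split: structured metadata in DynamoDB, heavy 3D assets in S3. A minimal sketch with hypothetical table, bucket, and attribute names:

```python
import boto3

dynamodb = boto3.resource("dynamodb")
s3 = boto3.client("s3")

# Hypothetical table name.
designs = dynamodb.Table("space-designer-designs")

def persist_design(design_id: str, metadata: dict, glb_bytes: bytes) -> None:
    """Store structured metadata in DynamoDB and the rendered 3D asset in S3.

    Note: metadata values must be DynamoDB-compatible (e.g. Decimal, not float).
    """
    asset_key = f"models/{design_id}.glb"
    s3.put_object(Bucket="space-designer-assets", Key=asset_key, Body=glb_bytes)
    designs.put_item(Item={"design_id": design_id, "asset_key": asset_key, **metadata})
```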
Challenges We Faced
- Integrating visual embeddings with numeric spatial data required a custom multimodal schema (a simplified sketch follows this list).
- Coordinating inter-agent reasoning while maintaining low latency on SageMaker endpoints was complex.
- Balancing user preferences, ergonomic rules, and aesthetic feasibility demanded iterative fine-tuning.
- Calibrating 3D scale from blueprints with real-world accuracy involved significant geometric validation.
- Building the Shopping Agent required mapping natural-language design output to API queries across multiple retail platforms (also sketched below).
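For the first challenge above, the workable fix was a shared record type that keeps each detected element's calibrated geometry next to its style embedding, so every agent reads and extends the same structure. A simplified, hypothetical version:

```python
from dataclasses import dataclass, field

@dataclass
class SpatialElement:
    """One detected room element, combining numeric geometry with a visual embedding."""
    element_id: str
    label: str                               # e.g. "wall", "window", "sofa"
    footprint_m: list[tuple[float, float]]   # polygon in calibrated meters
    height_m: float
    style_embedding: list[float] = field(default_factory=list)  # from the vision model

@dataclass
class RoomContext:
    """The shared context every agent in the pipeline reads and extends."""
    room_id: str
    scale_px_per_m: float
    elements: list[SpatialElement] = field(default_factory=list)
```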
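The retail-platform challenge reduced to translating the Architect agent's structured output into each API's query parameters. A hypothetical sketch of that translation layer, where the generic query dict stands in for per-platform Amazon/IKEA adapters:

```python
from dataclasses import dataclass

@dataclass
class FurnitureSlot:
    """What the Architect agent asks the Shopping Agent to fill."""
    category: str          # e.g. "sofa"
    max_width_m: float     # must fit the placed footprint
    max_depth_m: float
    style_tags: list[str]  # e.g. ["mid-century", "walnut"]

def to_search_query(slot: FurnitureSlot) -> dict:
    """Map a slot to generic search parameters; each retail adapter reshapes these."""
    return {
        "keywords": f"{' '.join(slot.style_tags)} {slot.category}",
        "max_width_cm": round(slot.max_width_m * 100),
        "max_depth_cm": round(slot.max_depth_m * 100),
    }

# Example: a sofa slot produced by the layout step.
query = to_search_query(FurnitureSlot("sofa", 2.2, 0.95, ["mid-century", "walnut"]))
# -> {"keywords": "mid-century walnut sofa", "max_width_cm": 220, "max_depth_cm": 95}
```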
Outcome
Space Designer emerged as a fully autonomous, explainable design assistant capable of transforming room images or blueprints into:
- Structured, human-readable architectural layouts
- Interactive 3D visualizations rendered in real time with Three.js
- Personalized product recommendations via Amazon and IKEA integration
This system demonstrates how Agentic AI can co-create with human taste, combining multimodal perception, spatial logic, and aesthetic intelligence into a seamless design experience.