Suzanne

The Creative 3D Model Generation and Editing Platform Where Every Object Builds Itself

The Problem

3D modeling remains one of the most creative yet inaccessible design processes.

Traditional software like Blender or full CAD suites requires precision tools, complex interfaces, and significant training. Even experienced creators often lose creative momentum switching between modeling, rendering, and exporting.

AI tools have begun to address generation, but most stop at static files. They rarely allow meaningful editing, accurate geometry, or integration into existing workflows. The creative process becomes limited to prompts rather than genuine exploration.

Today, 3D creation is often rigid and deterministic. Users follow identical pipelines, make identical adjustments, and produce predictable results. There is little space for conversational, iterative, or multimodal creation that feels truly collaborative.

The Solution

Suzanne transforms 3D creation into a dynamic, generative dialogue between the user and an intelligent modeling agent.

It is not a traditional CAD tool, but a creative modeling companion that understands language, imagery, and intent. Suzanne lets users describe, sketch, or upload a photo to generate and edit 3D models in real time. Every object can be modified through natural commands, refined procedurally, and exported instantly.

Key Features

Text-to-Model Generation

Users describe what they want in natural language. Suzanne uses Claude to translate these descriptions into OpenSCAD code, enabling precise parametric modeling through text. The generated code is compiled client-side using WebAssembly, producing editable STL files directly in the browser.
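As a rough sketch of this round trip, the helper below shows its two halves: a system prompt asking Claude to answer with a fenced OpenSCAD block, and a parser that pulls the program out of the reply. The prompt wording and the `extract_openscad` name are illustrative, not our exact production code.

```python
import re

# Illustrative system prompt; the shipped prompt is longer and more constrained.
SYSTEM_PROMPT = (
    "You are a parametric CAD assistant. Respond with a single OpenSCAD "
    "program inside a ```openscad fenced block. Expose key dimensions as "
    "named variables so they can be edited later."
)

def extract_openscad(response_text: str) -> str:
    """Pull the OpenSCAD source out of a fenced code block in the reply."""
    match = re.search(r"```(?:openscad|scad)?\s*\n(.*?)```", response_text, re.DOTALL)
    if match is None:
        raise ValueError("no OpenSCAD code block found in model response")
    return match.group(1).strip()

# Example of the reply shape we expect from the model:
reply = """Here is a simple mug:
```openscad
radius = 20;
height = 40;
cylinder(h = height, r = radius);
```"""
print(extract_openscad(reply))
```

Asking for named variables up front is what keeps the later conversational edits cheap: most refinements become parameter patches instead of full regenerations.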

Image-to-Model Conversion

When users upload reference photos, Suzanne employs Hunyuan, an advanced diffusion model, to reconstruct 3D geometry from 2D images. The output meshes are automatically aligned, cleaned, and converted into OpenSCAD-compatible forms for further refinement. Because Hunyuan is trained to predict geometry it cannot see, it can “fill in the gaps” of a model from just one photo.
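The alignment step can be sketched as a simple normalization pass over the reconstructed vertices: center the mesh at the origin and scale its longest axis to unit length so every import lands at a predictable size. This is an illustrative reduction; the real pipeline does considerably more cleanup.

```python
def normalize_mesh(vertices):
    """Center a mesh at the origin and scale its longest axis to 1 unit.

    `vertices` is a list of (x, y, z) tuples.
    """
    mins = [min(v[i] for v in vertices) for i in range(3)]
    maxs = [max(v[i] for v in vertices) for i in range(3)]
    center = [(mins[i] + maxs[i]) / 2 for i in range(3)]
    # Guard against degenerate (flat or single-point) meshes.
    extent = max(maxs[i] - mins[i] for i in range(3)) or 1.0
    return [tuple((v[i] - center[i]) / extent for i in range(3)) for v in vertices]

# A single triangle, 2 units wide and 4 units tall:
print(normalize_mesh([(0, 0, 0), (2, 0, 0), (0, 4, 0)]))
```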

Conversational Editing

After generation, users can speak or type refinements such as “make the edges smoother” or “add a circular base.” Suzanne parses these edits, updates the underlying code, and regenerates the geometry instantly.
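Under the hood, an edit like “make the base twice as wide” ultimately becomes a patch to a named OpenSCAD parameter. A minimal sketch of that patching step is below; in Suzanne, Claude maps the user's phrasing to the right parameter and value, and `set_parameter` here is a hypothetical helper, not our exact editing engine.

```python
import re

def set_parameter(scad_source: str, name: str, value) -> str:
    """Rewrite a top-level `name = value;` assignment in OpenSCAD source."""
    pattern = rf"(?m)^(\s*{re.escape(name)}\s*=\s*)[^;]+;"
    updated, count = re.subn(pattern, rf"\g<1>{value};", scad_source)
    if count == 0:
        raise KeyError(f"parameter {name!r} not found")
    return updated

model = "radius = 5;\ncylinder(h = 10, r = radius);"
print(set_parameter(model, "radius", 8))
```

Because only the assignment changes, the rest of the program (and the user's prior edits) survives each refinement, which is what makes regeneration feel instant.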

Interactive Visualization

Built-in rendering powered by Three.js allows users to rotate, resize, and inspect models in real time. In XR mode, Suzanne enables spatial manipulation of geometry using intuitive hand gestures through WebXR.

How We Built It

  • Frontend: React and Tailwind for interface design, integrated with Three.js and WebXR for interactive visualization
  • Backend: Lightweight Flask API managing request routing and model translation layers
  • AI Engine:
    • Claude for structured code generation from text prompts (OpenSCAD syntax)
    • Hunyuan for 2D-to-3D mesh reconstruction from reference images
  • STL Generation: Compiled and previewed client-side through OpenSCAD WebAssembly, ensuring low-latency updates without external dependencies
  • Editing Engine: Custom parser that tracks user instructions and modifies OpenSCAD definitions dynamically
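For reference, the binary STL files flowing through this stack have a simple fixed layout: an 80-byte header, a 4-byte little-endian triangle count, then 50 bytes per triangle (a normal, three vertices, and an attribute word). The sketch below (written in Python for brevity, though our preview path runs in the browser) serializes that layout from scratch:

```python
import struct

def write_binary_stl(triangles) -> bytes:
    """Serialize triangles, each a (v0, v1, v2) tuple of xyz points, as binary STL."""
    header = b"suzanne".ljust(80, b"\0")              # 80-byte header
    body = [header, struct.pack("<I", len(triangles))]  # uint32 triangle count
    for v0, v1, v2 in triangles:
        # Normal left as zeros; most viewers recompute it from the vertex winding.
        body.append(struct.pack("<12fH", 0.0, 0.0, 0.0, *v0, *v1, *v2, 0))
    return b"".join(body)

data = write_binary_stl([((0, 0, 0), (1, 0, 0), (0, 1, 0))])
print(len(data))  # 80 + 4 + 50 bytes for one triangle
```

Knowing the 50-bytes-per-triangle arithmetic is what let us budget browser memory for large WebAssembly outputs.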

Challenges We Ran Into

  • Parsing vague natural language into accurate OpenSCAD syntax
  • Optimizing WebAssembly performance for large STL outputs
  • Integrating Hunyuan’s image-to-model pipeline within browser memory limits
  • Preserving mesh editability after AI generation
  • Synchronizing real-time geometry updates across XR and desktop views

Accomplishments That We’re Proud Of

  • Built a complete text-to-OpenSCAD and image-to-model workflow entirely in the browser
  • Achieved live STL generation through WebAssembly without server-side rendering
  • Created an editable AI-assisted modeling loop combining Claude reasoning and procedural geometry
  • Integrated multimodal inputs (text, voice, and image) into a single 3D workspace
  • Designed a scalable architecture capable of local generation with minimal latency

What We Learned

Combining large language models with procedural CAD systems creates a powerful new way to bridge human creativity and geometric precision.

Through iterative testing, we discovered that conversational modeling accelerates both ideation and prototyping. We also explored the limits of running heavy geometry computation entirely on-device through WebAssembly, improving both performance and privacy.

What’s Next for Suzanne

  • Expanded material intelligence for automatic texturing and color mapping
  • A collaborative workspace for shared model editing and remixing
  • Integration with 3D printing pipelines for direct fabrication
  • Advanced physics-based constraints for mechanical model validation
  • A fine-tuned local model for OpenSCAD generation to reduce dependency on external APIs

The Vision

Suzanne is designed to make 3D creation conversational, accessible, and endlessly adaptable. By combining reasoning models like Claude, vision models like Hunyuan, and real-time WebAssembly generation, it turns imagination into geometry in seconds.

Suzanne is not just an AI modeling tool. It is a step toward a new era of agentic, multimodal design, where creativity flows naturally from language, vision, and intuition.
