Inspiration
Squircles is a reimagining of interaction with our personal devices as a fluid experience, a mirror of the mind, that enhances human intent. Its aim is to take interaction from the state of choosing to the state of accomplishing. There is a basic dissimilarity between computer language and human language, and it is the most rigid obstacle to true human-device symbiosis. But now that computing systems can understand natural language, we can make leaps to ease that communication and enable real-time collaboration between the two. We reject the metaphor of apps and brands as a battle for our most prized resource – our attention.
What it does
When picking up our device, we have an intention formulated in our mind of what we're trying to do – not a choice between apps. Our attention spans are shortened more and more by brands competing to get noticed and, eventually, to hook us. But our needs and thoughts are fluid, and the lives we want to live are intentional, not spiralling. Squircles acts as an on-device interoperability layer: you create commands using a visual language of icons, and through progressive disclosure you express the intent of a multi-step action. The Neural-Symbolic AI behind it reads the command, infers sensible defaults, and suggests possible next actions. Then, when the command runs, it fires up all the apps needed without any context switching for the user. A toy sketch of this icon-token idea follows below.
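To make the "commands as icon sentences" idea concrete, here is a minimal sketch in Python; the names IconToken and command_path are our illustration, not the app's actual code:

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class IconToken:
    """One icon in the visual language, acting like a word in a sentence."""
    name: str  # e.g. "commerce", "buy"

def command_path(tokens: List[IconToken]) -> List[str]:
    """Flatten the tapped icons into the build path the engine reasons over."""
    return [t.name for t in tokens]

# Tapping [Commerce] then [Buy] yields the path the AI completes with defaults.
print(command_path([IconToken("commerce"), IconToken("buy")]))  # ['commerce', 'buy']
```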
How we built it
System Architecture

To fulfill the requirement of a visual interface that translates to dynamic, interoperable commands, we needed a Hybrid Neuro-Symbolic Architecture.
• Symbolic (Deterministic): handles the rigid rules of the OS (specific API endpoints, required parameters).
• Neural (Probabilistic): handles user-intent prediction and "next icon" generation.

Core Layers
- The Visual Composer (Frontend):
  • A reactive UI that renders the "sentence" being built.
  • It treats icons as "tokens" in a language.
- The Semantic Engine (The Brain):
  • State Manager: tracks the current build path (e.g., [Commerce] -> [Buy]).
  • Prediction Model (LLM): takes the current path + context (time, location) and predicts the top-k most likely next tokens (icons).
  • NLG Module: translates the icon sequence into a natural-language string with placeholders (e.g., "Order a ride to {location} using {service}"). A sketch of this flow follows the list.
- The Interoperability Layer (The Executor):
  • Service Registry: a database of all available capabilities on the device (installed apps, webhooks, OS functions).
  • Slot Filler: matches the {placeholders} from the Semantic Engine to actual UI components (pickers, widgets) or data.
  • Dispatcher: the actual code that triggers the deep link, API call, or Intent. A sketch of the registry-to-dispatch path also follows below.
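To illustrate the Semantic Engine, here is a minimal Python sketch with a stub in place of the real on-device model; the function names, the suggestion table, and the NLG templates are all hypothetical:

```python
from typing import Dict, List

# Hypothetical NLG templates keyed by a completed icon path.
NLG_TEMPLATES: Dict[tuple, str] = {
    ("transport", "ride"): "Order a ride to {location} using {service}",
    ("commerce", "buy"): "Buy {item} using {service}",
}

def predict_next_tokens(path: List[str], context: dict, k: int = 3) -> List[str]:
    """Stand-in for the fine-tuned prediction model: given the current build
    path and context (time, location), return the top-k likely next icons."""
    # A real implementation would run the on-device LLM here.
    canned = {
        ("commerce",): ["buy", "sell", "track-order"],
        ("transport",): ["ride", "route", "ticket"],
    }
    return canned.get(tuple(path), [])[:k]

def to_sentence(path: List[str]) -> str:
    """NLG step: map a finished icon sequence to its placeholder sentence."""
    return NLG_TEMPLATES.get(tuple(path), " ".join(path))

print(predict_next_tokens(["transport"], {"time": "18:00"}))  # ['ride', 'route', 'ticket']
print(to_sentence(["transport", "ride"]))  # Order a ride to {location} using {service}
```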
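And a hedged sketch of the Interoperability Layer, showing the registry-to-dispatch path; the capability name and the rideapp:// deep-link scheme are invented for illustration:

```python
import re

# Hypothetical registry: capability -> deep-link template an installed app exposes.
SERVICE_REGISTRY = {
    "transport.ride": "rideapp://order?dest={location}&provider={service}",
}

def fill_slots(template: str, values: dict) -> str:
    """Slot Filler: substitute each {placeholder} with a picked or inferred value."""
    def sub(match: re.Match) -> str:
        key = match.group(1)
        if key not in values:
            raise KeyError(f"unfilled slot: {key}")  # would trigger a UI picker instead
        return values[key]
    return re.sub(r"\{(\w+)\}", sub, template)

def dispatch(capability: str, values: dict) -> str:
    """Dispatcher: resolve the capability and return the link a real
    implementation would hand to the OS as an intent or deep link."""
    return fill_slots(SERVICE_REGISTRY[capability], values)

print(dispatch("transport.ride", {"location": "Airport", "service": "TaxiCo"}))
# -> rideapp://order?dest=Airport&provider=TaxiCo
```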
Challenges we ran into
Training a small but powerful model to run on-device and learn the visual language, the command grammar, and the app intents needed. Integrating all the moving parts of the system and developing with a multi-platform approach.
Accomplishments that we're proud of
Post-training a model for the first time. Using Flutter for the first time. Building the whole thing with barely any programming background (a basic understanding of Python only).
What we learned
What's next for Squircles
Refining the interaction and user experience of the app. Fine-tuning the model to give more robust suggestions. A kick-ass presentation video :)