Inspiration
I watched a video editor friend constantly interrupt her creative flow to manually switch profiles on her MX Creative Console. Editing mode, color grading mode, audio mode — each transition required her to stop, think about which profile to activate, and mentally remap what each button now did. The hardware was powerful, but the software made it feel like a burden rather than an extension of her creativity. That's when it hit me: what if the console could just know? What if it could detect that she'd opened the Lumetri Color panel and automatically surface exposure, contrast, and color wheels — without her lifting a finger? The console shouldn't require you to manage it. It should manage itself. FlowState was born from that simple observation: creative professionals don't work in apps — they work in contexts within apps. And their tools should understand that.
What it does
FlowState transforms the MX Creative Console and Actions Ring from static button-mapping devices into intelligent, context-aware workstation controllers.

Core Capabilities:

- Intelligent Context Detection: Goes beyond app detection to understand what you're doing within each application. In Premiere Pro, it distinguishes between editing, color grading, and audio work. In VS Code, it knows if you're writing code or debugging.
- Predictive Button Mapping: Learns your workflow patterns and anticipates what you'll need next. After recognizing your habits, it can pre-surface the Export button because "you usually export after color grading this clip type."
- Voice-Activated Mode Switching: Hold the dial and say "switch to color," and your entire console reconfigures instantly.
- Cross-App Macro Chains: Chain actions across multiple applications with a single button. "Render in Blender → Export FBX → Import to Unity" becomes one button, three apps, zero friction.
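To make the macro chain idea concrete, here is one way a chain could be declared and executed. This is an illustrative sketch only; the object shape, step names, and the `bridges` integration layer are hypothetical, not FlowState's actual schema.

```javascript
// Hypothetical macro chain definition: one console button triggers an
// ordered list of steps, each targeting a different application.
const renderToUnityChain = {
  id: "render-to-unity",
  label: "Render → Unity",
  steps: [
    { app: "Blender", action: "render", args: { engine: "CYCLES" } },
    { app: "Blender", action: "exportFBX", args: { path: "out/scene.fbx" } },
    { app: "Unity", action: "importAsset", args: { path: "out/scene.fbx" } },
  ],
};

// Sketch of a runner that executes the steps in order and stops on failure.
// `bridges` is assumed to map app names to per-app integrations (CEP, CLI, etc.).
async function runChain(chain, bridges) {
  for (const step of chain.steps) {
    const bridge = bridges[step.app];
    const ok = await bridge.invoke(step.action, step.args);
    if (!ok) return { ok: false, failedAt: step };
  }
  return { ok: true };
}
```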
How I built it
Architecture:
- Built the interactive prototype using vanilla HTML, CSS, and JavaScript for maximum compatibility and zero dependencies.
- Designed the context detection system around application-specific APIs (Adobe CEP for Creative Cloud, VS Code Extension API, Blender Python API).
- Created a modular data structure where each app contains multiple task contexts, each with its own button mappings, dial assignments, and ring configurations.
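A minimal sketch of that modular data structure (the field names and sample mappings below are my own illustration, not the exact prototype schema): each app maps to several task contexts, and each context carries its button, dial, and ring assignments.

```javascript
// Hypothetical context map: app -> task contexts -> control assignments.
const contexts = {
  premierePro: {
    colorGrading: {
      buttons: ["Exposure", "Contrast", "Highlights", "Shadows"],
      dials: { main: "Temperature", secondary: "Tint" },
      ring: ["Color Wheels", "Curves", "LUT Browser", "Scopes"],
    },
    audio: {
      buttons: ["Gain", "Mute", "Solo", "Add Keyframe"],
      dials: { main: "Clip Volume", secondary: "Pan" },
      ring: ["Essential Sound", "Mixer", "Denoise", "EQ"],
    },
  },
  vsCode: {
    debugging: {
      buttons: ["Continue", "Step Over", "Step Into", "Restart"],
      dials: { main: "Scroll Call Stack", secondary: "Zoom" },
      ring: ["Breakpoints", "Watch", "Variables", "Debug Console"],
    },
  },
};

// Lookup used whenever the detector reports a new (app, task) pair.
function mappingsFor(app, task) {
  return contexts[app]?.[task] ?? null;
}
```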
Technical Approach:
- Used event-driven state management to ensure UI updates propagate correctly across all components.
- Implemented proper event listener patterns (addEventListener vs. direct handler assignment) to prevent handler overwrites during re-renders.
- Built visual feedback systems (toast notifications, activity logging, status indicators) to make the AI's decisions transparent to users.
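The event-driven state management can be sketched as a tiny publish/subscribe store (an illustrative pattern, not the prototype's actual code; `renderConsole` is a stand-in for the real render function): every component subscribes once, and any state change notifies all of them so the console, ring, and selectors never drift apart.

```javascript
// Minimal event-driven store: components subscribe once and re-render on change.
function createStore(initialState) {
  let state = initialState;
  const listeners = new Set();

  return {
    getState: () => state,
    subscribe(listener) {
      listeners.add(listener);
      return () => listeners.delete(listener); // unsubscribe handle
    },
    setState(patch) {
      state = { ...state, ...patch };
      listeners.forEach((listener) => listener(state)); // propagate to every component
    },
  };
}

// Usage: console buttons, ring segments, and selectors all react to one source of truth.
const store = createStore({ app: "premierePro", task: "editing" });
store.subscribe((s) => renderConsole(s)); // assumes a renderConsole(state) function exists
store.setState({ task: "colorGrading" }); // one change, every subscribed view updates
```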
Design Philosophy:
Prioritized "solo-developer feasibility" — every feature uses existing APIs rather than requiring custom infrastructure All ML inference designed to run locally for privacy and low latency Created a plugin architecture that allows community extensions for new applications
Challenges I ran into
- The Re-render Problem: Early versions had event handlers that would silently break when the UI re-rendered after switching apps. Buttons would visually update, but clicking them did nothing. Solved this by moving from inline event assignment to proper addEventListener patterns with fresh bindings on each render (see the sketch after this list).
- Making AI Decisions Visible: Users felt uncomfortable when buttons "magically" changed without understanding why. Added the AI Insights panel, context flow visualization, and activity log so users can always see what FlowState detected and why it made specific mapping decisions.
- Balancing Automation with Control: Too much automation felt intrusive; too little defeated the purpose. Found the sweet spot by making FlowState's suggestions visible and overridable while still providing meaningful automation for repetitive context switches.
- Hardware Status Ambiguity: Testers couldn't tell if their devices were actually connected. Added prominent "Connected" status indicators on both the hardware visualizations and the header to provide constant reassurance.
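A minimal sketch of the re-render fix from the first challenge, simplified and with hypothetical element IDs and handler names: rebuild the button elements on each render and attach fresh listeners with addEventListener, instead of relying on an onclick assignment that a stale node may still hold.

```javascript
// Re-render the console buttons and bind fresh listeners each time.
function renderConsoleButtons(mappings) {
  const container = document.getElementById("console-buttons"); // hypothetical container ID
  container.innerHTML = ""; // drop old nodes (and their old listeners) entirely

  mappings.buttons.forEach((label, index) => {
    const button = document.createElement("button");
    button.textContent = label;
    // addEventListener on the freshly created node; nothing from a previous
    // render can overwrite or orphan this handler.
    button.addEventListener("click", () => handleButtonPress(index, label));
    container.appendChild(button);
  });
}

function handleButtonPress(index, label) {
  console.log(`Button ${index + 1} pressed: ${label}`); // stand-in for the real action dispatch
}
```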
Accomplishments that I am proud of
- Context-Within-Context Detection: No existing plugin does this. Detecting "Premiere Pro" is easy. Detecting "color grading within Premiere Pro" required rethinking how we understand user intent through panel focus, active tools, and workflow patterns (see the sketch after this list).
- The Interactive Prototype: Built a fully functional demo that lets judges experience FlowState's core value proposition firsthand: switch apps, change tasks, run macros, use voice commands. Every button works. Every state change propagates correctly.
- Solo-Developer Feasibility: Deliberately designed every feature to be buildable by one person using official APIs. No massive ML infrastructure required, just smart use of existing application hooks and lightweight local models.
- Making Complexity Feel Simple: FlowState does a lot under the hood, but users just see their buttons change to exactly what they need. That seamless experience required careful attention to timing, visual feedback, and predictable behavior.
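To illustrate the kind of signals the context-within-context detection weighs, here is a simplified scoring sketch over panel focus and active tool. The rule table and signal fields are hypothetical, and the real detector would also factor in learned workflow patterns.

```javascript
// Simplified sketch: score each known task context against live editor signals.
const TASK_RULES = {
  colorGrading: { panels: ["Lumetri Color", "Lumetri Scopes"], tools: ["eyedropper"] },
  audio: { panels: ["Audio Track Mixer", "Essential Sound"], tools: [] },
  editing: { panels: ["Timeline", "Program Monitor"], tools: ["razor", "selection"] },
};

function detectTask(signals) {
  // signals: { focusedPanel, activeTool } reported by the app integration
  let best = { task: "editing", score: 0 };
  for (const [task, rule] of Object.entries(TASK_RULES)) {
    let score = 0;
    if (rule.panels.includes(signals.focusedPanel)) score += 2;
    if (rule.tools.includes(signals.activeTool)) score += 1;
    if (score > best.score) best = { task, score };
  }
  return best.task;
}

// e.g. detectTask({ focusedPanel: "Lumetri Color", activeTool: "eyedropper" }) -> "colorGrading"
```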
What I learned
- Event-Driven UI Is Harder Than It Looks: Managing state across multiple interconnected components (console buttons, ring segments, dial values, app selector, task selector) taught me why frameworks like React exist, and how to achieve similar reliability in vanilla JS when needed.
- Users Need to Understand AI Decisions: "Magic" automation creates anxiety; transparency creates trust. Showing users why FlowState made a decision (through the AI Insights panel) transformed the experience from "creepy" to "helpful."
- Hardware-Software Integration Is About Confidence: Physical devices need constant reassurance that they're connected and working. Status indicators, visual feedback on button presses, and logging every action aren't polish; they're essential.
- Prototype Fidelity Matters for Hardware Projects: You can't just describe what a context-aware console would feel like. You have to let people click the buttons, turn the dials, and see the controls change in real time.
What's next for FlowState: AI Context-Aware Workspace Controller
Immediate Roadmap:
- Real Logitech SDK Integration: Connect to actual MX Creative Console and Actions Ring hardware using the official SDK.
- Adobe CEP Plugin: Ship the first production integration for Premiere Pro with editing, color grading, audio, and effects contexts.
- VS Code Extension: Release context detection for writing, debugging, Git operations, and terminal use.
Future Vision:
- Community Context Packs: Let users create and share context definitions for new applications (DaVinci Resolve, Logic Pro, Maya, etc.).
- Workflow Recording: Record your typical workflow once, and FlowState learns the entire pattern for predictive mapping.
- Multi-User Profiles: Different team members can have different context mappings on shared workstations.
- Hardware Expansion: Extend context awareness to MX Keys, MX Master mouse gestures, and future Logitech devices.
The Long-Term Goal: Make the MX Creative Console the first truly intelligent creative peripheral — one that understands not just which app you're in, but what you're trying to accomplish, and adapts itself to help you get there faster.
