Inspiration

Every developer knows the feeling — you're in flow, the code is coming together, and then it stops. Not because of a hard problem. Because you had to reach for the mouse.

Alt-tab to the browser. Alt-tab back. Open the terminal. Switch to the AI chat. Copy the suggestion. Paste it in. Reject it. Try again with a different model. The modern AI-assisted development workflow is powerful — but it's physically fragmented. The hardware never caught up.

We built Flowli because we believe your hands should never have to leave the desk to make a decision.


What it does

Flowli is a fully customisable plugin for the Logitech MX Creative Console that maps your entire AI-assisted development workflow onto physical hardware.

  • Accept or reject AI code suggestions with a dedicated green or red LCD key — no mouse, no context switch, eyes stay on the code
  • Switch between AI models in real time by turning the aluminium dial — Claude, GPT, DeepSeek, Gemini, local Ollama models, or any custom endpoint
  • Deploy your pipeline with three keys: run tests, push to staging, deploy to production — compatible with GitHub Actions, Cloudflare Workers, Vercel, or any webhook-based CI/CD
  • Reprompt in one press — fire preset modifiers at your last AI prompt instantly: "add error handling", "write tests for this", "simplify this" — all customisable
  • Context Snapshot — captures your current file, cursor position, and terminal output, and fires it to your chosen model as instant context
  • Screen Switcher — one key per context: IDE, terminal, browser preview, logs — configurable per project

Everything is configured via a simple JSON file. Bring your own API keys. Bring your own stack.


How we built it

Flowli is built on the Logitech Actions SDK, which allows us to bind programmable logic to each LCD key, the aluminium dial, and the roller on the MX Creative Console.

The plugin architecture is model-agnostic by design — we built a unified API adapter layer that speaks to OpenAI, Anthropic (Claude), DeepSeek, Google Gemini, and any Ollama-hosted local model through a common interface. Switching models on the dial doesn't change the prompt or the workflow — it just changes the endpoint.
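The adapter idea can be sketched roughly like this. The names (`ModelAdapter`, `complete`, the dial-registry functions) are illustrative assumptions, not the actual Flowli API — stub adapters stand in for the real per-provider HTTP calls:

```typescript
// One common interface; each provider hides its own endpoint and
// payload shape behind it.
interface ModelAdapter {
  readonly name: string;
  complete(prompt: string): Promise<string>;
}

// Stub factory: a real adapter would call the provider's HTTP API here.
const makeAdapter = (name: string): ModelAdapter => ({
  name,
  async complete(prompt: string) {
    return `[${name}] response to: ${prompt}`;
  },
});

// Dial positions map onto adapters; turning the dial only changes an index,
// never the prompt or the workflow.
const adapters: ModelAdapter[] = [
  makeAdapter("claude"),
  makeAdapter("gpt"),
  makeAdapter("deepseek"),
  makeAdapter("ollama"),
];

let dialPosition = 0;

function activeAdapter(): ModelAdapter {
  return adapters[dialPosition];
}

function turnDial(steps: number): void {
  dialPosition =
    (dialPosition + steps + adapters.length) % adapters.length;
}
```

A dial turn is just `turnDial(±1)`; everything downstream keeps calling `activeAdapter().complete(...)` unchanged.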

The config layer is intentionally minimal:

```json
{
  "model": "claude-sonnet-4-6",
  "api_key": "sk-...",
  "keys": {
    "K1": "accept_suggestion",
    "K2": "reject_suggestion",
    "K3": "deploy_staging",
    "K4": "reprompt",
    "K5": "context_snapshot",
    "dial": "switch_model"
  },
  "reprompt_modifiers": [
    "add error handling",
    "write tests for this",
    "simplify this",
    "refactor for TypeScript"
  ]
}
```

The LCD keys update dynamically — when you switch models on the dial, the key display updates in real time to show the active model name and icon.
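On the loading side, a minimal sketch of how a config like the one above could be validated before the plugin starts — the field names mirror the JSON, but `validateConfig`/`loadConfig` are hypothetical helpers, not the real Flowli internals:

```typescript
import * as fs from "fs";

interface FlowliConfig {
  model: string;
  api_key: string;
  keys: Record<string, string>;
  reprompt_modifiers: string[];
}

// Reject configs missing any top-level field, so a typo in the JSON
// fails loudly at startup instead of silently dead-keying the console.
function validateConfig(raw: any): FlowliConfig {
  for (const field of ["model", "api_key", "keys", "reprompt_modifiers"]) {
    if (!(field in raw)) {
      throw new Error(`missing config field: ${field}`);
    }
  }
  return raw as FlowliConfig;
}

function loadConfig(path: string): FlowliConfig {
  return validateConfig(JSON.parse(fs.readFileSync(path, "utf8")));
}
```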


Challenges we ran into

Latency on the dial. The model switcher needed to feel instant — like changing a physical gear, not waiting for an API call. We decoupled the dial rotation from the actual model switch, so the hardware responds immediately and the context swap happens asynchronously in the background.
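The decoupling described above can be sketched as a debounce: the display state changes synchronously on every tick, while the slow context swap only fires once the dial settles. Timings and names here are assumptions for illustration:

```typescript
let pendingSwap: ReturnType<typeof setTimeout> | null = null;
let displayedModel = "claude"; // what the LCD shows, updated instantly
let activeModel = "claude";    // what requests actually hit
let swapCount = 0;

// Placeholder for the real async work: re-sending context to the new endpoint.
async function swapContext(model: string): Promise<void> {
  activeModel = model;
  swapCount++;
}

function onDialTurn(model: string): void {
  displayedModel = model; // hardware feedback is immediate
  if (pendingSwap) clearTimeout(pendingSwap); // rapid turns cancel the previous swap
  pendingSwap = setTimeout(() => {
    void swapContext(model); // only the final position triggers a swap
  }, 150);
}
```

Spinning the dial through four models therefore costs one context swap, not four.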

Making it truly model-agnostic. Every major AI provider has slightly different API shapes, token formats, and streaming behaviours. Building a clean adapter that abstracts all of this without losing capability took significant iteration.

The reprompt timing problem. Firing a modifier at the "last AI prompt" sounds simple until you realise that prompt state can live in five different places depending on the IDE, the model, and the chat context. We solved this with a lightweight local prompt history buffer that Flowli maintains independently.
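A minimal sketch of that buffer, assuming Flowli records each prompt it sends; the class name, capacity, and modifier-joining format are illustrative choices, not the shipped implementation:

```typescript
class PromptHistory {
  private buf: string[] = [];
  constructor(private capacity = 50) {}

  // Record every prompt Flowli sends, dropping the oldest past capacity.
  push(prompt: string): void {
    this.buf.push(prompt);
    if (this.buf.length > this.capacity) this.buf.shift();
  }

  last(): string | undefined {
    return this.buf[this.buf.length - 1];
  }

  // A reprompt key press appends the chosen modifier to the most recent
  // prompt, independent of where the IDE or chat keeps its own state.
  reprompt(modifier: string): string | undefined {
    const prev = this.last();
    return prev === undefined ? undefined : `${prev}\n\n${modifier}`;
  }
}
```

Because the buffer lives in the plugin, "last prompt" means the same thing regardless of which IDE or model was active when it was sent.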

Context Snapshot accuracy. Capturing cursor position, file state, and terminal output simultaneously — and packaging it as coherent model context — required careful handling of async state across different editor integrations.
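The packaging step might look roughly like this — the `Snapshot` fields and output format are assumptions about what a coherent context blob could contain, not Flowli's actual schema:

```typescript
interface Snapshot {
  file: string;
  cursor: { line: number; column: number };
  terminalTail: string[]; // last few lines of terminal output
}

// Serialise the captured editor state into a single plain-text block
// that can be prepended to a prompt for any model.
function packageSnapshot(s: Snapshot): string {
  return [
    `File: ${s.file} (cursor at ${s.cursor.line}:${s.cursor.column})`,
    "Recent terminal output:",
    ...s.terminalTail.map((line) => `  ${line}`),
  ].join("\n");
}
```

Keeping the packaged form plain text is what lets the same snapshot feed any model behind the adapter layer.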


Accomplishments that we're proud of

  • The dial as a first-class developer input. Turning a physical dial to switch between Claude, GPT, DeepSeek, and a local Ollama model — and having the IDE respond in under 200ms — feels genuinely new. It's the first time we've seen the model layer treated as a hardware-switchable resource.

  • Zero lock-in architecture. Flowli doesn't care which model you use, which cloud you deploy to, or which IDE you work in. It's infrastructure, not an ecosystem.

  • The icon language. Every LCD key in Flowli uses a minimal icon set — no words, just symbols — so the console reads as fluently as a mixing board once you've used it for a day.

  • The name. Flowli — a nod to Swiss German diminutives, a hat tip to Logitech's Lausanne roots, and the thing every developer is chasing: flow.


What we learned

Building for physical hardware is fundamentally different from building software. Every interaction has weight — a key press, a dial click, a tactile confirm. That physicality changes how you design the logic. You can't hide behind a loading spinner when someone presses a key on a £200 peripheral.

We also learned how unevenly distributed the AI coding toolchain is. Developers have extraordinary models available — but the interaction layer is still almost entirely mouse-and-keyboard. The hardware layer is completely unoccupied. Flowli is an attempt to start filling it.


What's next for Flowli

  • VS Code and Cursor native extensions — deeper integration beyond keyboard shortcuts, with direct access to editor state, open files, and inline suggestion APIs
  • Team profiles — shared Flowli configs that sync across a development team, so everyone's console speaks the same workflow language
  • Dial pages per project — switch the entire console mapping when you switch git branches or open a new project
  • Voice + hardware — combining the dial-based model switcher with voice-triggered context snapshots for a fully hands-free AI review mode
  • Marketplace profiles — publish and share your Flowli config the way developers share dotfiles

Flowli is built by HeySalad — a multi-jurisdictional developer infrastructure company building tools for developers and payments teams. Incubated at BlockDojo, Queen Elizabeth Olympic Park, London.
