MixMate: Real-Time Audio Analysis and AI-Powered Mix Assistance
Inspiration
MixMate was inspired by our experience in live sound engineering. Having spent hours mixing live performances, we kept running into the same challenges: feedback, inaudible vocals, harsh frequencies, and the pressure of keeping audiences engaged. Existing tools offered either raw visualisation or basic analysis, leaving a gap for actionable, intelligent guidance. MixMate was built to fill this gap, combining real-time audio analysis with AI-driven insights and intermission music generation.
What It Does
MixMate empowers live audio engineers by providing:
- Real-Time Audio Analysis - Visualise and monitor every frequency band during live shows or with prerecorded tracks.
- Raw Mix Insights - Identify muddiness, harsh highs, and tonal imbalances for a clear overview of the mix (a simplified detection heuristic is sketched after this list).
- AI-Powered Recommendations - Claude analyses the audio and generates precise EQ, compression, and mix adjustment suggestions tailored to the venue and setup.
- Intermission Music Generation - Suno creates seamless instrumental tracks matched to the vibe of the preceding song, keeping audiences engaged between sets.
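To give a feel for the mix-insight idea, here is a minimal sketch of how band energies could be turned into issue flags. The band edges, thresholds, and names (`MixIssue`, `detectIssues`) are assumptions for illustration, not MixMate's actual tuning.

```swift
import Foundation

// Illustrative heuristic: flag tonal problems when a band sits well above
// the overall average level. Thresholds and band ranges are assumed values.
enum MixIssue: String {
    case muddiness = "Muddiness: 200-500 Hz energy is high relative to the rest of the mix"
    case harshHighs = "Harsh highs: 2-5 kHz energy is elevated"
}

/// `bandLevels` maps a band's centre frequency in Hz to its level in dBFS.
func detectIssues(in bandLevels: [Double: Double]) -> [MixIssue] {
    func average(over range: ClosedRange<Double>) -> Double {
        let values = bandLevels.filter { range.contains($0.key) }.map { $0.value }
        return values.isEmpty ? -120 : values.reduce(0, +) / Double(values.count)
    }

    let overall = average(over: 20...20_000)
    var issues: [MixIssue] = []
    if average(over: 200...500) > overall + 6 { issues.append(.muddiness) }
    if average(over: 2_000...5_000) > overall + 6 { issues.append(.harshHighs) }
    return issues
}
```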
How We Built It
- Audio Engine & Analysis: AVAudioEngine with FFT-based feature extraction delivers live and offline audio analysis with minimal latency (see the analysis sketch below).
- AI Integration: Structured prompts for Claude and Suno produce mix recommendations and intermission tracks (see the prompt-packaging sketch below).
- User Interface: Built in SwiftUI, the dashboard presents frequency spectra, detected issues, AI insights, and generated prompts in a single, intuitive view.
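Below is a minimal sketch of what the analysis path can look like: tap the input node, window each buffer, and compute a magnitude spectrum with vDSP. The `SpectrumAnalyzer` type, the 4096-sample buffer, and the use of the classic vDSP FFT API are illustrative choices, not MixMate's actual implementation.

```swift
import AVFoundation
import Accelerate

// Sketch of a real-time spectrum analyser: AVAudioEngine input tap + vDSP FFT.
// Assumes microphone permission has already been granted.
final class SpectrumAnalyzer {
    private let engine = AVAudioEngine()
    private let fftSize: Int
    private let log2n: vDSP_Length
    private let fftSetup: FFTSetup

    init(fftSize: Int = 4096) {
        self.fftSize = fftSize
        self.log2n = vDSP_Length(log2(Float(fftSize)))
        self.fftSetup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2))!
    }

    deinit { vDSP_destroy_fftsetup(fftSetup) }

    /// Starts the engine and reports a magnitude spectrum (fftSize / 2 bins) per buffer.
    func start(onSpectrum: @escaping ([Float]) -> Void) throws {
        let input = engine.inputNode
        let format = input.outputFormat(forBus: 0)

        input.installTap(onBus: 0,
                         bufferSize: AVAudioFrameCount(fftSize),
                         format: format) { [weak self] buffer, _ in
            guard let self = self,
                  let channel = buffer.floatChannelData?[0],
                  Int(buffer.frameLength) >= self.fftSize else { return }
            onSpectrum(self.magnitudes(from: channel))
        }
        try engine.start()
    }

    private func magnitudes(from samples: UnsafePointer<Float>) -> [Float] {
        let half = fftSize / 2

        // Hann window to reduce spectral leakage before the FFT.
        var window = [Float](repeating: 0, count: fftSize)
        vDSP_hann_window(&window, vDSP_Length(fftSize), Int32(vDSP_HANN_NORM))
        var windowed = [Float](repeating: 0, count: fftSize)
        vDSP_vmul(samples, 1, window, 1, &windowed, 1, vDSP_Length(fftSize))

        // Pack real samples into split-complex form, run an in-place real FFT,
        // then take per-bin magnitudes.
        var real = [Float](repeating: 0, count: half)
        var imag = [Float](repeating: 0, count: half)
        var result = [Float](repeating: 0, count: half)

        real.withUnsafeMutableBufferPointer { realPtr in
            imag.withUnsafeMutableBufferPointer { imagPtr in
                var split = DSPSplitComplex(realp: realPtr.baseAddress!,
                                            imagp: imagPtr.baseAddress!)
                windowed.withUnsafeBufferPointer { ptr in
                    ptr.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: half) {
                        vDSP_ctoz($0, 2, &split, 1, vDSP_Length(half))
                    }
                }
                vDSP_fft_zrip(fftSetup, &split, 1, log2n, FFTDirection(FFT_FORWARD))
                vDSP_zvabs(&split, 1, &result, 1, vDSP_Length(half))
            }
        }
        return result
    }
}
```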
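And a hypothetical sketch of how analysis results might be packaged into a structured prompt for Claude. The `MixSnapshot` fields and the prompt wording are assumptions for illustration, not MixMate's actual schema, and no specific API client is shown.

```swift
import Foundation

// Illustrative snapshot of the analysis state to embed in a prompt.
struct MixSnapshot: Codable {
    let venue: String
    let bandEnergies: [String: Float]   // e.g. "250-500 Hz": relative dB
    let detectedIssues: [String]        // e.g. "muddiness around 300 Hz"
}

/// Builds a structured prompt that asks for concrete EQ and compression moves.
func makeMixPrompt(for snapshot: MixSnapshot) throws -> String {
    let encoder = JSONEncoder()
    encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
    let json = String(data: try encoder.encode(snapshot), encoding: .utf8)!
    return """
    You are assisting a live sound engineer. Given this mix snapshot, \
    suggest specific EQ and compression adjustments as a numbered list.

    \(json)
    """
}
```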
Accomplishments
- Developed a robust real-time audio analyser that works in live venues of all sizes and across a range of music genres.
- Created a reliable Claude prompt system for actionable mix recommendations.
- Integrated Suno intermission music generation directly into the app workflow.
- Designed a unified dashboard that displays the Real-Time Analyser, AI insights, and generated music prompts in one place.
What's Next!
- Expanding AI recommendations to support complex multi-instrument setups.
- Developing venue-adaptive recommendations, adjusting based on acoustics and audience size.
- Offering customisable intermission music styles, including genre, tempo, and mood options.
- Conducting more in-depth music analysis to confirm the key, genre, and tempo of each input.
- Launching cross-platform support for macOS and Android, and integrating with live sound consoles (DiGiCo, Yamaha, Allen & Heath).
Built With
- avaudioengine
- claude
- fft
- live-sound
- suno
- swift
- swiftui