Inspiration

I am a software engineer with 20 years of experience, but for the last two years, I have also been a stand-up comedian in NYC. This project fits the "Wildcard" track because it was born from a personal obsession: the disconnect between how a set feels and what actually happened.

Comedians usually judge a performance by memory or "vibes." We remember the big laughs and the awkward silences, but we can't objectively see why they happened. Athletes have game tape to analyze their mechanics; comedians only have vague recollections. I wanted to build the analysis tool I wished existed: one that treats a comedy set like game tape, turning subjective art into objective data.

What it does

Laugh Spikes is a "Game Tape" analyzer for stand-up comedy. It visualizes the invisible feedback loop between a performer and an audience.

Specifically, it:

  1. Aligns audience laughter (audio data) with the transcript of the set (text data).
  2. Visualizes "Laugh Spikes" by showing exactly when laughs occur, how intense they are, and how long they last.
  3. Generates an AI-powered "Coach" report. Using the aligned data, an AI agent reviews the set, citing specific timestamps and lines to provide feedback on joke efficiency, timing, and "dead zones."

It transforms a blurry memory of a performance into a concrete, actionable dataset.

How we built it

We leveraged Hex’s unified platform to keep the entire workflow, including ingestion, analysis, and AI reasoning, in one place.

  1. Ingestion: We imported two datasets: a CSV of laughter metrics (start, end, intensity) and a timestamped transcript.
  2. Logic: We built a time-based join algorithm in Python to map every laugh instance to the specific spoken line that preceded it. This created a new "Reaction DataFrame" where every joke is linked to its result.
  3. Visualization: We used Hex's native visualization tools to build the "Laugh Spike" scatter chart, which allows users to hover over a data point (a laugh) and see the joke that caused it.
  4. AI Integration: We passed the joined DataFrame directly to an AI agent within Hex. Because the AI had access to the results (laughter duration) alongside the content (text), it didn't just hallucinate advice; it grounded its feedback in the actual audience reaction.
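The time-based join in step 2 can be sketched with pandas' `merge_asof`, which maps each laugh to the most recent line spoken before it. The column names and sample data below are illustrative, not our actual schema:

```python
import pandas as pd

# Illustrative transcript: each line has a start time (seconds) and its text.
transcript = pd.DataFrame({
    "line_start": [0.0, 6.5, 14.0, 21.0],
    "text": [
        "So I just moved to New York...",
        "My landlord calls the mice 'roommates'.",
        "At least they pay more rent than I do.",
        "Anyway, dating is going great.",
    ],
})

# Illustrative laughter metrics: start, end, and intensity of each laugh.
laughs = pd.DataFrame({
    "laugh_start": [9.1, 17.2],
    "laugh_end": [10.3, 19.9],
    "intensity": [0.4, 0.9],
})

# merge_asof pairs each laugh with the most recent line that started
# before it, i.e., the spoken line that (presumably) caused the reaction.
reactions = pd.merge_asof(
    laughs.sort_values("laugh_start"),
    transcript.sort_values("line_start"),
    left_on="laugh_start",
    right_on="line_start",
    direction="backward",
)
reactions["duration"] = reactions["laugh_end"] - reactions["laugh_start"]
print(reactions[["text", "laugh_start", "duration", "intensity"]])
```

Every row of the result links one joke to one measured reaction, which is the shape both the chart and the AI coach consume.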

Challenges we ran into

The biggest technical hurdle was "Comedy Timing." Laughter is messy; it doesn't always happen immediately after a sentence ends, and transcript timestamps aren't always perfect. Defining a join logic that felt "correct," such as attributing a laugh to the setup or punchline rather than the noise before it, took significant iteration.
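One way to encode that attribution rule is a tolerance window: a laugh only counts for a line if it begins within a few seconds of that line ending, and otherwise stays unattributed. This is a sketch of the idea with pandas; the column names and the 3-second window are illustrative placeholders:

```python
import pandas as pd

def attribute_laughs(laughs, transcript, max_gap=3.0):
    """Attribute each laugh to the line whose end time most closely
    precedes it, but only if the gap is within max_gap seconds.
    Laughs that start too long after any line stay unattributed (NaN)."""
    return pd.merge_asof(
        laughs.sort_values("laugh_start"),
        transcript.sort_values("line_end"),
        left_on="laugh_start",
        right_on="line_end",
        direction="backward",
        tolerance=max_gap,
    )

transcript = pd.DataFrame({
    "line_end": [5.0, 12.0, 20.0],
    "text": ["setup", "punchline", "next bit"],
})
laughs = pd.DataFrame({"laugh_start": [12.8, 19.0]})

result = attribute_laughs(laughs, transcript)
# 12.8 lands 0.8s after the punchline -> attributed to "punchline".
# 19.0 is 7.0s after the punchline -> outside the window, unattributed.
```

Tuning that window (and deciding whether an unattributed laugh is crowd noise or a delayed reaction) was where most of the iteration went.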

Additionally, ensuring the AI feedback felt authentic was difficult. Generic AI says, "Make your jokes funnier." We had to engineer the prompt to act like a veteran comic, forcing it to reference specific metrics (e.g., "This setup had a 0.5s laugh; tighten it") rather than generalities.
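The grounding approach boils down to rendering the joined data directly into the prompt, so the model has real numbers to cite. The persona text and field names below are illustrative, not our production prompt:

```python
def build_coach_prompt(reactions):
    """Render each joke/result pair into the prompt so the model
    critiques measured reactions rather than inventing them."""
    lines = [
        "You are a veteran stand-up comic reviewing a set.",
        "For every note, cite the timestamp and the measured laugh duration.",
        "Never give generic advice like 'be funnier'.",
        "",
        "Set data (line | timestamp | laugh duration in seconds):",
    ]
    for r in reactions:
        lines.append(f"- \"{r['text']}\" | {r['t']:.1f}s | {r['laugh_dur']:.1f}s")
    return "\n".join(lines)

prompt = build_coach_prompt([
    {"text": "My landlord calls the mice 'roommates'.", "t": 6.5, "laugh_dur": 1.2},
    {"text": "Anyway, dating is going great.", "t": 21.0, "laugh_dur": 0.0},
])
print(prompt)
```

With the metrics inlined, a 0.0-second laugh becomes something the model can point at, which is what makes the "dead zone" feedback specific.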

Accomplishments that we're proud of

  • Quantifying the Subjective: We successfully turned "vibes" into measurable metrics like "Laugh Duration per Minute."
  • The "Game Tape" UI: Creating a visualization where you can see the shape of a set at a glance and spot the big hits and the quiet failures immediately.
  • Grounded AI: Building an AI workflow that uses actual performance data to justify its critiques, making it feel like a real coach rather than a text generator.
  • Unified Workflow: Doing it all in a single Hex notebook, proving that complex audio-text analysis doesn't require a fragmented stack.
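The "Laugh Duration per Minute" metric reduces to a simple ratio over the aligned laugh intervals. A minimal sketch (the function name and input shape are illustrative):

```python
def laugh_duration_per_minute(laughs, set_length_s):
    """Total seconds of audience laughter, normalized per minute of stage time."""
    total_laughter = sum(end - start for start, end in laughs)
    return total_laughter / (set_length_s / 60.0)

# A 5-minute set with three laughs totalling 9 seconds of laughter:
laughs = [(9.1, 10.3), (17.2, 19.9), (45.0, 50.1)]
rate = laugh_duration_per_minute(laughs, set_length_s=300)
# -> 1.8 seconds of laughter per minute on stage
```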

What we learned

We learned that comedy is data; it just hasn't been treated that way. When you rigorously align audience reaction with semantic content, patterns emerge immediately. We also learned the power of Hex for "multimodal" storytelling. Being able to show the code, the visual chart, and the AI narrative side-by-side makes the insights far more compelling than a static dashboard.

What's next for Laugh Spikes

  • Upload & Processing: Building a frontend for comedians to upload raw audio, using OpenAI Whisper for transcription and basic signal processing to detect laughter automatically.
  • Comparative Analytics: Tracking how the same joke evolves over multiple performances to see if rewrites actually improve the "Laugh Score."
  • Expansion: Applying this "Reaction-to-Content" alignment model to other fields, such as public speaking, sales calls, or teacher evaluations.
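For the automatic laughter detection mentioned above, a first cut could be as simple as frame-energy thresholding over the raw waveform. This is a rough numpy sketch; the frame size and threshold are placeholders, and a production version would need a trained classifier to tell laughter apart from applause or music:

```python
import numpy as np

def detect_loud_segments(audio, sr, frame_s=0.25, threshold=0.1):
    """Return (start, end) times of contiguous frames whose RMS energy
    exceeds threshold -- a crude stand-in for laughter detection."""
    frame_len = int(sr * frame_s)
    n_frames = len(audio) // frame_len
    rms = np.array([
        np.sqrt(np.mean(audio[i * frame_len:(i + 1) * frame_len] ** 2))
        for i in range(n_frames)
    ])
    loud = rms > threshold
    segments, start = [], None
    for i, is_loud in enumerate(loud):
        if is_loud and start is None:
            start = i
        elif not is_loud and start is not None:
            segments.append((start * frame_s, i * frame_s))
            start = None
    if start is not None:
        segments.append((start * frame_s, n_frames * frame_s))
    return segments

# Synthetic demo: 2s of silence, 1s of loud noise, 1s of silence at 8 kHz.
sr = 8000
audio = np.concatenate([
    np.zeros(2 * sr),
    0.5 * np.ones(1 * sr),
    np.zeros(1 * sr),
])
segments = detect_loud_segments(audio, sr)
# -> one segment, (2.0, 3.0)
```

Those detected segments would feed the same time-based join as the current CSV input, so the rest of the pipeline stays unchanged.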
