Inspiration
We saw a familiar problem: students in class are completely lost, yet the room stays silent. We've all been there: the professor asks, "Does everyone understand?" and nobody raises a hand for fear of looking "dumb" in front of their peers. The professor moves on, assuming everything is fine, only to discover days later, after grading a quiz, that half the class missed the core concept. The feedback loop in education is broken; it's too slow (3 days) and too intimidating. We wanted to build a way to make feedback instant (3 seconds) and psychologically safe.
What it does
ClassPulse is a real-time classroom synchronization tool with two main interfaces:
Professor View: A command center where the lecturer uploads their slides (PDF). As they navigate through the deck, the slide changes instantly on every student's device. They also see a live stream of "Question Clusters"—groups of similar student questions transcribed and organized by AI, allowing them to address confusion without wading through duplicates.
Student View: A sync-only interface where students follow the lecture slides in real-time. Instead of typing, they use a "Push-to-Talk" button to ask questions verbally. These questions are transcribed instantly and sent anonymously to the professor.
How we built it
Frontend: We used Flutter Web to create a responsive, cross-platform UI. This allowed us to build a single codebase that works on laptops, tablets, and phones. We used flutter_riverpod for state management to handle the complex flow of slide synchronization and audio recording.
Backend: The core logic runs on FastAPI (Python). We chose it for its native support for asynchronous WebSockets, which are essential for the live slide syncing and audio streaming.
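The heart of the backend is fan-out: when the professor changes slides, every connected student must receive the new index. Here is a minimal stdlib sketch of that broadcast pattern with the FastAPI/WebSocket plumbing stripped away; the `SlideSyncHub` name and queue-per-subscriber design are illustrative, not our exact implementation.

```python
import asyncio

class SlideSyncHub:
    """Tracks the current slide and fans updates out to subscribers.

    Illustrative stand-in for the WebSocket broadcast in the real backend:
    each subscriber queue plays the role of a connected student socket.
    """

    def __init__(self):
        self.current_slide = 0
        self.subscribers: list[asyncio.Queue] = []

    def subscribe(self) -> asyncio.Queue:
        # A new student joins: give them a private queue of slide updates.
        q = asyncio.Queue()
        self.subscribers.append(q)
        return q

    async def set_slide(self, index: int) -> None:
        # Professor navigates; every student queue gets the new index.
        self.current_slide = index
        for q in self.subscribers:
            await q.put(index)

async def demo():
    hub = SlideSyncHub()
    student_a = hub.subscribe()
    student_b = hub.subscribe()
    await hub.set_slide(3)
    return await student_a.get(), await student_b.get()

print(asyncio.run(demo()))  # prints (3, 3)
```

In the real app the professor's `set_slide` call arrives over one WebSocket and each student queue is drained into another, but the fan-out logic is the same.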
AI & Processing:
Speech-to-Text: We integrated Deepgram Nova-2 for ultra-fast, live transcription. Audio is streamed from the client to the backend and then proxied to Deepgram via WebSockets.
Slide Conversion: We used PyMuPDF (fitz) to convert uploaded PDFs into high-resolution images server-side, removing the need for heavy external dependencies like Poppler.
NLP Clustering: To group similar questions, we implemented a TF-IDF cosine similarity algorithm (threshold 0.45) in Python. This groups questions asked within a 10-minute window on the same slide, reducing noise for the professor.
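The clustering idea can be sketched in pure Python: build TF-IDF vectors over the incoming questions, then greedily merge any question whose cosine similarity to an existing cluster's first member meets the threshold. This is a simplified sketch (the per-slide and 10-minute-window filtering is omitted, and the smoothed IDF weighting here is an assumption, not our exact formula):

```python
import math
from collections import Counter

def tfidf_vectors(texts: list[str]) -> list[dict[str, float]]:
    """Build smoothed TF-IDF vectors over a small batch of questions."""
    docs = [t.lower().split() for t in texts]
    n = len(docs)
    df = Counter(w for d in docs for w in set(d))  # document frequency
    vecs = []
    for d in docs:
        tf = Counter(d)
        vecs.append({w: (c / len(d)) * (math.log((1 + n) / (1 + df[w])) + 1)
                     for w, c in tf.items()})
    return vecs

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    dot = sum(v * b.get(w, 0.0) for w, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_questions(questions: list[str], threshold: float = 0.45) -> list[list[str]]:
    """Greedy clustering: each question joins the first cluster whose
    representative (first member) is similar enough, else starts a new one."""
    vecs = tfidf_vectors(questions)
    clusters: list[list[int]] = []
    for i, v in enumerate(vecs):
        for c in clusters:
            if cosine(vecs[c[0]], v) >= threshold:
                c.append(i)
                break
        else:
            clusters.append([i])
    return [[questions[i] for i in c] for c in clusters]
```

Near-duplicate questions like "what is a websocket" and "what is a websocket exactly" land in the same cluster, while unrelated questions start their own.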
Challenges we ran into
Real-Time Audio Streaming: Piping raw audio data from a browser microphone (via Flutter) through a Python backend and into Deepgram's WebSocket API without introducing massive latency was difficult. We had to handle various audio formats (WAV, WebM, Ogg) and implement a raw PCM16 fallback.
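The PCM16 fallback mentioned above amounts to packing normalized float samples into raw little-endian 16-bit integers, a format streaming speech APIs commonly accept as "linear16". A stdlib-only sketch of that conversion (the function name and clamping behavior are illustrative):

```python
import struct

def floats_to_pcm16(samples: list[float]) -> bytes:
    """Convert normalized float samples (-1.0..1.0) to raw little-endian
    16-bit PCM, clamping out-of-range values to avoid integer overflow."""
    ints = []
    for s in samples:
        s = max(-1.0, min(1.0, s))   # clamp to the valid range
        ints.append(int(s * 32767))  # scale to the int16 range
    return struct.pack(f"<{len(ints)}h", *ints)
```

Because the output carries no container or header, both ends must agree out-of-band on the sample rate and channel count.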
Accomplishments that we're proud of
Cross-Device Sync: Seeing the slide change on a phone the instant the professor clicks "Next" on their laptop felt like magic. It was also our first time working with speech-to-text AI models such as Deepgram, and it was impressive how much capability a single API key unlocked.
What we learned
We gained significant exposure to Flutter development and pushed beyond our own expectations, implementing many features with tools we had never encountered before.
What's next for ClassPulse
Session Recording: Saving the synchronized slide transitions and audio transcript generated during the class to create an auto-generated "Lecture Review" document for students.
Improved Clustering: Moving from TF-IDF to a lightweight semantic embedding model (like all-MiniLM-L6-v2) to better understand the intent of questions, not just keyword overlap.
Student Upvoting: Allowing students to see the clustered questions and "upvote" them if they have the same question, further prioritizing what the professor sees.