Inspiration

I’ve always been obsessed with immersive technology. Movies are arguably the most immersive medium we have, but raw footage rarely captures the soul of a story - it needs processing to evoke a mood. Think about how a crime documentary looks "cold" and gloomy, while romantic shows are filled with vibrant sakura flowers in the wind.

Filmmakers and photographers spend hours manually tweaking sliders to get these colors right. While AI can help, it lacks the human touch; machines don't always understand what evokes a deep feeling of immersion. I wanted to bridge that gap by making the editing process accessible through a human response - literally using your own face as the "slider."

What it does

AuraGrade is a bio-adaptive cinematic grading engine. It takes professional Sony RAW (.ARW) images and re-grades them in real time based on your facial expressions. Using an iPhone as a high-fidelity sensor, the system detects whether you are smiling or looking serious/stressed, then dynamically "tints" the photo: a smile might trigger a cold, uneasy grade for a thriller-themed movie, while a serious look compensates by shifting the image into a warm, vibrant "Hyper-Romantic" aesthetic for romance-themed movies. It also emphasizes certain colors per theme, giving the romantic grade a richer feel and the uneasy grade a "ghost-like" feel.
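The core idea - a facial signal driving the grade - can be sketched as a simple interpolation between two tints. The function name, the `smile_score` input, and the exact gain values below are all hypothetical; the real engine uses richer color science than per-channel multipliers.

```python
import numpy as np

def grade(image: np.ndarray, smile_score: float) -> np.ndarray:
    """Tint an 8-bit RGB image based on a smile score in [0, 1].

    Hypothetical sketch: a high smile score pushes toward the cold
    "thriller" grade; a low score pushes toward the warm
    "Hyper-Romantic" grade.
    """
    img = image.astype(np.float32)
    # Cold grade boosts blue and mutes red; the warm grade does the opposite.
    cold = np.array([0.85, 0.95, 1.15])  # R, G, B multipliers (illustrative)
    warm = np.array([1.15, 1.00, 0.90])
    gains = smile_score * cold + (1.0 - smile_score) * warm
    return np.clip(img * gains, 0, 255).astype(np.uint8)
```

Interpolating between two fixed grades keeps transitions smooth as the expression changes between frames, instead of snapping between two hard-coded looks.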

How we built it

The core engine is built in Python using OpenCV. Because I wanted professional-grade results, I used rawpy to develop Sony RAW files directly. For the "bio-sensor," I used Haar Cascade models to detect the face and classify expressions from an iPhone Continuity Camera feed. I also experimented with the Gemini API for intelligent grading suggestions, and tried to containerize a physiological engine with Docker to explore rPPG (heart-rate) detection (not yet complete).

Challenges we ran into

I spent a lot of time trying to get the Presage SDK to run natively on macOS, but architectural limits made it a massive hurdle. Initially, the emotion detection was hit-or-miss - wearing glasses or sitting in a dim room would break the tracking. I also ran into Gemini API quota limits while debugging, which forced me to pivot to hand-tuned, hardcoded color-science values. Lastly, processing video in real time drastically hurt the frame rate, so I focused on perfecting high-res static image grading as the prototype.
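The pivot from Gemini suggestions to hardcoded values amounts to a small per-theme parameter table. Everything below is hypothetical - the keys, units, and numbers are illustrative stand-ins for values that were actually tuned by eye.

```python
# Hypothetical per-theme grade parameters (illustrative units only).
# "temperature" is a warm/cool offset, "rgb_gain" a per-channel multiplier.
THEME_GRADES = {
    "thriller": {"temperature": -600, "saturation": 0.80, "rgb_gain": (0.85, 0.95, 1.15)},
    "romantic": {"temperature": +450, "saturation": 1.25, "rgb_gain": (1.15, 1.00, 0.90)},
}
```

Keeping the grades in a plain dict made it easy to swap in AI-suggested values later without touching the rendering code.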

Accomplishments that we're proud of

I’m proud that I built something I truly love in such a short window of time. I successfully created a functional feedback loop where a human face can influence professional color science. It’s a tool that, if built upon, could genuinely help artists save time and focus on creativity, while letting AI do the heavy lifting.

What we learned

I learned that "good enough" tech used creatively is better than "perfect" tech that doesn't run. Moving from a complex physiological model (Presage) to a functional Haar Cascade taught me how to pivot quickly under pressure. I also learned that Presage can estimate your heart rate from standard webcam footage based on how flushed your face looks, which is pretty impressive. I'd definitely explore that more in future projects.

What's next for AuraGrade

I want to actually run Presage successfully on a Linux kernel (where it is available), so that I don't have to frown or smile exaggeratedly to affect the grading - letting subtle signals like heart rate or breathing rate drive the change instead. Lastly, once I have a stable version for photos, a version for videos (at a decent frame rate) would be amazing.
