Inspiration
Content creation is everywhere now, but filming yourself is honestly kind of annoying. You either need a second person to hold the camera or you have to buy super expensive tracking gear that most people don’t have access to.
We wanted something that could follow you around automatically without costing hundreds of dollars. So we thought, why not build our own robotic cameraman? That’s how AutoVlog was born.
What it does
AutoVlog is an autonomous rover that follows you around while recording, completely hands-free.
It uses IR sensors to track your direction and ultrasonic sensors to measure distance and avoid obstacles. The robot keeps a consistent filming distance so your shots don’t look awkward or shaky. If you get too close, it stops. If you move left or right, it adjusts in real time.
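To make that concrete, here's a minimal sketch of the kind of follow loop we mean. The pin numbers, thresholds, and motor helpers are placeholders, not our exact firmware:

```cpp
// Illustrative follow loop: steer toward the stronger IR reading and
// hold distance inside a target band. Pins and thresholds are made up.
const int IR_LEFT_PIN  = A0;
const int IR_RIGHT_PIN = A1;
const int TRIG_PIN = 9;
const int ECHO_PIN = 10;

const long TOO_CLOSE_CM  = 60;   // stop if the subject is nearer than this
const long TOO_FAR_CM    = 120;  // drive forward if farther than this
const int  TURN_DEADBAND = 50;   // ignore small left/right differences

// Motor helpers are stubbed here; the H-bridge side is sketched
// under "How we built it" below.
void stopMotors()   { /* both H-bridge channels off */ }
void driveForward() { /* both channels forward */ }
void turnLeft()     { /* pivot left */ }
void turnRight()    { /* pivot right */ }

long readUltrasonicCm() {
  digitalWrite(TRIG_PIN, LOW);  delayMicroseconds(2);
  digitalWrite(TRIG_PIN, HIGH); delayMicroseconds(10);
  digitalWrite(TRIG_PIN, LOW);
  long us = pulseIn(ECHO_PIN, HIGH, 30000UL);  // timeout = no echo
  return us > 0 ? us / 58 : 999;               // ~58 us per cm, round trip
}

void setup() {
  pinMode(TRIG_PIN, OUTPUT);
  pinMode(ECHO_PIN, INPUT);
}

void loop() {
  long distanceCm = readUltrasonicCm();
  int steer = analogRead(IR_LEFT_PIN) - analogRead(IR_RIGHT_PIN);

  if (distanceCm < TOO_CLOSE_CM) {
    stopMotors();                                 // too close: hold the shot
  } else if (abs(steer) > TURN_DEADBAND) {
    if (steer > 0) turnLeft(); else turnRight();  // re-center the subject
  } else if (distanceCm > TOO_FAR_CM) {
    driveForward();                               // close the gap
  } else {
    stopMotors();                                 // inside the filming band
  }
}
```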
Basically, it’s your own personal camera operator that doesn’t complain!
How we built it
We built AutoVlog using an Arduino as the brain, a dual motor driver to control movement, IR sensors for directional tracking, and an ultrasonic sensor to measure distance.
We mounted a servo on top so the “head” could scan when idle. The motors are powered through an H-bridge setup, and everything is powered by a compact battery pack so it’s fully portable.
The entire system runs on custom logic: no pre-trained machine learning models, just real-time sensor input and decision making.
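On the drive side, the control pattern looks roughly like the sketch below: two direction pins plus a PWM enable per motor, with the servo sweeping whenever there's nothing to follow. The pin assignments are illustrative, and the code assumes an L298N-style dual H-bridge:

```cpp
#include <Servo.h>

// Assumed L298N-style wiring: two direction pins + PWM enable per motor.
const int ENA = 5, IN1 = 4, IN2 = 7;    // left motor
const int ENB = 6, IN3 = 8, IN4 = 12;   // right motor
const int SERVO_PIN = 3;
const int outPins[] = {ENA, IN1, IN2, ENB, IN3, IN4};

Servo headServo;

// speed: -255..255; the sign picks the direction through the H-bridge
void setMotor(int en, int inA, int inB, int speed) {
  digitalWrite(inA, speed > 0 ? HIGH : LOW);
  digitalWrite(inB, speed < 0 ? HIGH : LOW);
  analogWrite(en, abs(speed));
}

void drive(int left, int right) {
  setMotor(ENA, IN1, IN2, left);
  setMotor(ENB, IN3, IN4, right);
}

// Sweep the camera "head" back and forth while no target is detected
void idleScan() {
  for (int a = 45; a <= 135; a += 5) { headServo.write(a); delay(50); }
  for (int a = 135; a >= 45; a -= 5) { headServo.write(a); delay(50); }
}

void setup() {
  for (int pin : outPins) pinMode(pin, OUTPUT);
  headServo.attach(SERVO_PIN);
}

void loop() {
  drive(0, 0);   // stopped here; the follow logic decides for real
  idleScan();
}
```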
Challenges we ran into
-Sensor noise and interference between IR and ultrasonic readings
-Fine-tuning the sweet spot so the robot follows at a consistent distance without jerky movement
-Writing logic to prevent oscillation when both IR sensors return similar values (see the smoothing and hysteresis sketch after this list)
-Mechanical stability while mounting the camera
-We originally planned to 3D print a custom mount and housing to make everything look cleaner and more professional, but we ran into time constraints and printing issues. Some parts didn't fit as expected, and we had to pivot to a scrappier but functional solution.
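For the noise and oscillation items, the standard remedies are a moving average and hysteresis. Here's a minimal sketch of both; the window size and thresholds are placeholders:

```cpp
// Moving average over the last few left-minus-right IR readings,
// so a single noisy sample can't yank the steering around.
const int WINDOW = 5;
int history[WINDOW];
int historyIndex = 0;

int smoothedIrDiff(int rawDiff) {
  history[historyIndex] = rawDiff;
  historyIndex = (historyIndex + 1) % WINDOW;
  long sum = 0;
  for (int i = 0; i < WINDOW; i++) sum += history[i];
  return sum / WINDOW;
}

// Hysteresis: it takes a bigger difference to START turning than to
// KEEP turning, so near-equal sensor values don't cause ping-ponging.
const int START_TURN = 80;
const int STOP_TURN  = 30;
bool turning = false;

bool shouldTurn(int diff) {
  int magnitude = abs(diff);
  turning = turning ? (magnitude > STOP_TURN) : (magnitude > START_TURN);
  return turning;
}
```

In the main loop, the raw left-minus-right reading goes through smoothedIrDiff() first, and shouldTurn() then decides whether the rover actually pivots.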
Accomplishments that we're proud of
-Built a hardware-based subject-tracking rover
-Successfully combined directional tracking with distance awareness
-Created a tool useful for content creators
-Implemented real-time adaptive behavior without pre-trained ML
What we learned
With code, you can usually trace an error back to a line. With hardware, one loose wire or a tiny voltage drop can make everything behave completely unpredictably.
Working with new components like the IR sensors, ultrasonic sensor, and the ESP32-CAM pushed us out of our comfort zone. Every sensor behaves slightly differently, and getting them to cooperate took way more trial and error than we expected.
Another big learning curve was front-end design for the website. Building the robot was one thing, but creating a clean interface to interact with it was a completely different skill set. It forced us to think about user experience, not just functionality.
What's next for AutoVlog
Right now, the movement and tracking system works reliably, but the camera system isn’t fully where we want it yet. While we integrated the ESP32-CAM and experimented with live streaming and recording, it doesn’t yet deliver the smooth, high-quality footage we originally envisioned.
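For reference, the capture side of that experiment boils down to something like the sketch below, assuming the common AI-Thinker pin map and the esp_camera driver. A real streamer would push each JPEG frame over Wi-Fi instead of just logging its size:

```cpp
#include "esp_camera.h"

// Assumed AI-Thinker ESP32-CAM pin map; other boards differ.
void setup() {
  Serial.begin(115200);

  camera_config_t config = {};
  config.ledc_channel = LEDC_CHANNEL_0;
  config.ledc_timer   = LEDC_TIMER_0;
  config.pin_pwdn  = 32;
  config.pin_reset = -1;
  config.pin_xclk  = 0;
  config.pin_sscb_sda = 26;
  config.pin_sscb_scl = 27;
  // Parallel data pins D0..D7 (AI-Thinker numbering)
  config.pin_d0 = 5;  config.pin_d1 = 18; config.pin_d2 = 19;
  config.pin_d3 = 21; config.pin_d4 = 36; config.pin_d5 = 39;
  config.pin_d6 = 34; config.pin_d7 = 35;
  config.pin_vsync = 25;
  config.pin_href  = 23;
  config.pin_pclk  = 22;
  config.xclk_freq_hz = 20000000;
  config.pixel_format = PIXFORMAT_JPEG;
  config.frame_size   = FRAMESIZE_VGA;  // smaller frames = lower latency
  config.jpeg_quality = 12;             // 0-63, lower = better quality
  config.fb_count     = 1;

  if (esp_camera_init(&config) != ESP_OK) {
    Serial.println("Camera init failed");
  }
}

void loop() {
  camera_fb_t *fb = esp_camera_fb_get();  // grab one JPEG frame
  if (fb) {
    Serial.printf("Captured %u bytes\n", fb->len);
    esp_camera_fb_return(fb);             // release the buffer promptly
  }
  delay(100);
}
```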
Our goal is to make AutoVlog feel less like a prototype and more like a polished filming tool. That means improving video stability, reducing delay, and refining how the camera locks onto the subject.