Inspiration
Hurricane Milton brought widespread devastation, flooding neighborhoods, collapsing buildings, and isolating communities. Rescue teams raced against time and harsh conditions, but many lives were tragically lost.
Watching these events unfold, we saw the need for faster, more effective rescue solutions, such as computer-vision-powered technology, to save lives when every second matters.
What it does
Our app uses cutting-edge computer vision technology to assist in search and rescue missions. It analyzes live drone and camera footage to identify and locate missing persons in real time, even in hard-to-reach areas like dense forests or disaster-struck urban zones. The app can detect specific visual features, helping rescue teams quickly find survivors. By democratizing access to search and rescue models, it improves the speed and effectiveness of search operations, ultimately saving lives in critical situations.
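To give a sense of what this looks like in practice, here is a minimal sketch of a detection pass over footage, assuming the Ultralytics YOLOv8 Python package; the weights file name and the video path are hypothetical placeholders, not our actual artifacts.

```python
# Minimal sketch of a detection pass (assumes the ultralytics package;
# "blue_horus_person.pt" and "drone_footage.mp4" are placeholder names).
from ultralytics import YOLO

model = YOLO("blue_horus_person.pt")  # fine-tuned person-detection weights

# Stream detections over drone or uploaded footage, frame by frame.
for result in model.predict(source="drone_footage.mp4", stream=True, conf=0.4):
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding box corners
        confidence = float(box.conf[0])         # detection confidence
        print(f"person at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), conf={confidence:.2f}")
```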
How we built it
We used the YOLOv8 object detection model and fine-tuned it on a Kaggle dataset of labeled images of people in various environments, so the model could recognize missing persons under different conditions. We then built the iOS app in Xcode, integrating the fine-tuned model to process live camera and drone footage in real time; the app also lets users upload recorded footage and runs detection on those videos. The result is an intuitive tool that helps rescue teams quickly detect and locate survivors, making search and rescue operations more efficient.
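A simplified sketch of the fine-tuning and export flow, assuming the Ultralytics Python API; the dataset config name and hyperparameters shown here are illustrative placeholders rather than our exact settings.

```python
# Rough sketch of fine-tuning YOLOv8 and exporting it for the iOS app
# (assumes the ultralytics package; "people_dataset.yaml" is a placeholder
# for the config pointing at the labeled Kaggle images).
from ultralytics import YOLO

# Start from pretrained YOLOv8 weights and fine-tune on the labeled person images.
model = YOLO("yolov8n.pt")
model.train(data="people_dataset.yaml", epochs=50, imgsz=640, batch=16)

# Export to Core ML so the fine-tuned model can run inside the iOS app.
model.export(format="coreml")
```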
Challenges we ran into
One of the main challenges we faced was the limited computational power of our computers during fine-tuning of the YOLOv8 model. Training required significant compute, and on our hardware it took much longer than anticipated to train the model effectively. This slowed down development and testing cycles and added to the complexity of the project.
Additionally, we had little to no prior experience with iOS app development, so building the app in Xcode was a steep learning curve. We encountered issues throughout, from debugging errors to managing dependencies, which further delayed progress. Navigating these challenges was tough, but doing so ultimately helped us grow and improve our technical skills throughout the project.
Accomplishments that we're proud of
- Consuming 500 mg of caffeine per person
- Not sleeping until the hackathon was completed
- Fine-tuning an existing CV model with very little background in CV at all
What we learned
- We learned the end-to-end process of training computer vision models: data extraction -> preprocessing -> model training -> evaluation -> deployment (see the sketch after this list)
- The basics of developing iOS apps in Xcode using related technologies such as Swift and UIKit
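As a hedged illustration of the evaluation step in that pipeline (again assuming the Ultralytics package; the weights path and dataset config are placeholders for a training run's outputs):

```python
# Sketch of the evaluation step (assumes the ultralytics package; paths are placeholders).
from ultralytics import YOLO

model = YOLO("runs/detect/train/weights/best.pt")  # weights produced by fine-tuning
metrics = model.val(data="people_dataset.yaml")    # validate on the held-out split

print(f"mAP50:    {metrics.box.map50:.3f}")  # mean average precision at IoU 0.50
print(f"mAP50-95: {metrics.box.map:.3f}")    # mean average precision across IoU thresholds
```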
What's next for Blue Horus
We are very proud of the progress we made with Blue Horus in these 24 hours. Our ultimate goal is to ship a product that runs locally on drones and other aerial devices. Our software can run on any piece of tech with a camera, and we hope that by getting it into as many hands as possible we can save lives, improving emergency response and helping teams locate people in need more quickly.
Kindo’s Tools + Blue Horus:
Our project exemplifies the spirit of Kindo by simplifying complex tasks. With Blue Horus, we're making advanced search and rescue technology available to anyone with a drone and a camera, reducing the technical barrier for first responders. This aligns perfectly with Kindo's goal of making intricate tasks easier and revolutionizing how complex work gets done.