Inspiration

As college students, many of us are buying our own groceries for the first time. We want to eat healthy, but it’s hard to tell which produce is truly fresh. Our idea started with helping people choose better fruits and vegetables and learn how to care for them. We also wanted to support the farmers who grow our food by giving them tools to track crop health and improve their harvests, helping to connect the field to the table.

What it does

Growcery uses AI and computer vision to analyze images of crops and produce. Users can take a photo to instantly learn about an item’s freshness, quality, or possible diseases. For shoppers, it helps identify the best produce to buy and how to store it. For farmers, it detects early signs of crop stress or disease and provides treatment advice.

How we built it

We built a React / Next.js front end that sends images to a FastAPI backend running on an AMD ROCm 7.0 server hosted on DigitalOcean. There, MobileNetV3 and EfficientNetV2 models (trained in PyTorch) perform high-speed inference, classifying produce quality and identifying crop diseases in milliseconds using ROCm-optimized GPU acceleration. The inference results are then passed to Gemini 2.0 Flash, where they flow through multiple specialized agents for deeper reasoning. Gemini interprets the raw model outputs and the original image, contextualizes them with crop/produce type and condition metadata, and uses that context to generate clear, readable advice: treatment plans, storage tips, warnings about upcoming experiments and harvests, and more!
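The two-stage flow above can be sketched in a few lines. This is a simplified, dependency-free illustration of the idea (classifier output contextualized into an agent prompt), not our actual code; the names `classify_image` and `build_agent_prompt` and the hardcoded prediction are placeholders for the real PyTorch inference step and Gemini call.

```python
from dataclasses import dataclass

@dataclass
class VisionResult:
    label: str         # e.g. "tomato_early_blight"
    confidence: float  # softmax probability from the CNN

def classify_image(image_bytes: bytes) -> VisionResult:
    # Stand-in for the MobileNetV3 / EfficientNetV2 inference step,
    # which in production runs in PyTorch on the ROCm GPU server.
    return VisionResult(label="tomato_early_blight", confidence=0.94)

def build_agent_prompt(result: VisionResult, crop: str) -> str:
    # In the real pipeline, this context (plus the original image)
    # is sent to Gemini 2.0 Flash for reasoning and advice generation.
    return (
        f"Crop: {crop}\n"
        f"Model prediction: {result.label} "
        f"(confidence {result.confidence:.0%})\n"
        "Provide a treatment plan and storage/handling advice."
    )

prompt = build_agent_prompt(classify_image(b"..."), crop="tomato")
print(prompt)
```

The key design point is the separation of concerns: the vision model stays small and fast, while the language model handles open-ended reasoning over the structured prediction.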

Challenges we ran into

We faced several major technical hurdles during development. Early on, our TensorFlow/Keras models couldn't be cleanly exported to ONNX for ROCm compatibility, forcing us to rebuild the entire training pipeline in PyTorch and learn the framework on the fly. Connecting our Vercel front end to the DigitalOcean backend also proved tricky, since limited port access prevented standard API communication. We solved this by containerizing our FastAPI app in Docker with custom port configurations. Another challenge was A2A: we spent a lot of time trying to get it working, but hit bugs we couldn't fix in time to ship a working model. Getting all of these systems to work together in real time was complex, and ultimately we had to scale back to a single agent for interpreting the models.
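The Docker workaround looked roughly like this: bake the FastAPI app into an image and serve it on a port the host allowed. The exact port, file names, and image tag here are illustrative, not our production values.

```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Serve on a non-default port that the host's firewall permits
EXPOSE 8080
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
```

Running it with an explicit port mapping (`docker run -p 8080:8080 growcery-api`) let the Vercel front end reach the backend over a port that was actually open.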

Accomplishments that we’re proud of

We successfully deployed optimized ROCm models, built a full AI-reasoning pipeline with Gemini 2.0 Flash, and created a smooth, mobile-ready front end. Seeing real-time crop analysis and intelligent feedback working end-to-end was a huge milestone for our team.

What we learned

Throughout the hackathon, we learned how to bridge computer vision, hardware acceleration, and generative AI reasoning into a cohesive system. We gained hands-on experience deploying deep learning models on AMD ROCm 7.0, understanding how ONNX conversion and optimized inference pipelines work in real-world GPU environments. Most importantly, we discovered how effective generative AI can be when paired with precise vision models to turn raw predictions into meaningful, human-centered insights.

What’s next

Next, we plan to expand our datasets to include more crop types, environmental stress conditions (like drought or nutrient deficiency), and perishable produce categories to improve model generalization. We also aim to extend the feature set by adding real-time weather and soil data integration, enabling predictive insights beyond visual detection. Finally, we plan to conquer A2A and get the multi-agent pipeline working.
