Inspiration

It began with a challenge that felt both simple and urgent: “What if identifying a crop disease were as easy as snapping a photo?” Across Africa and much of the developing world, smallholder farmers lose nearly 40% of their annual yield to plant diseases that could be treated if detected early. These losses perpetuate hunger and financial instability, directly undermining SDG 1 (No Poverty) and SDG 2 (Zero Hunger). We envisioned Leaf Labs as a way to reverse that trend by giving farmers a smart, pocket-sized plant doctor powered by AI and accessible from any smartphone.

What it does

Leaf Labs is a real-time, AI-powered plant health detection platform that allows farmers to identify crop infections by simply uploading or photographing a leaf. The app runs a fine-tuned MobileNet model directly in the browser through ONNX Runtime Web, providing instant results even on low-end devices. When the model’s confidence falls below a defined threshold ( P(c|x) < 0.75 ), a Gemini Vision API fallback performs secondary validation using a weighted ensemble approach:

$$ \text{Final Prediction} = \arg\max_c (\alpha P_{\text{ONNX}}(c|x) + (1-\alpha) P_{\text{Gemini}}(c|x)) $$

The output includes disease probability, recommended treatments, and links to verified agricultural resources, offering farmers not just detection, but actionable insight.
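As an illustration, the weighted ensemble above can be sketched in TypeScript. The function name, class labels, and probabilities here are hypothetical; `alpha` weights the local ONNX model against the Gemini fallback, exactly as in the formula:

```typescript
// Blend per-class probabilities from the ONNX model and the Gemini
// fallback, then take the argmax (the Final Prediction formula above).
function blendPredictions(
  onnxProbs: Record<string, number>,
  geminiProbs: Record<string, number>,
  alpha: number
): { label: string; confidence: number } {
  let best = { label: "", confidence: -Infinity };
  for (const label of Object.keys(onnxProbs)) {
    const p = alpha * onnxProbs[label] + (1 - alpha) * (geminiProbs[label] ?? 0);
    if (p > best.confidence) best = { label, confidence: p };
  }
  return best;
}

// Illustrative case: the local model leans healthy, Gemini leans blight;
// with alpha = 0.6 the blend decides between them.
const result = blendPredictions(
  { healthy: 0.55, early_blight: 0.45 },
  { healthy: 0.3, early_blight: 0.7 },
  0.6
);
```

With these numbers the blended score for `early_blight` (0.55) edges out `healthy` (0.45), so the cloud opinion overturns the low-confidence local call.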

How we built it

Leaf Labs was developed using Next.js 14, TypeScript, Tailwind CSS, and shadcn/ui for a modern, intuitive UI.

  • AI Model: Trained on a refined subset of the PlantVillage dataset using TensorFlow, then exported to ONNX for efficient browser-side inference through WASM.
  • Backend: Built with Supabase (Auth, Postgres, Edge Functions) for secure storage and user management, and deployed using Vercel and Deno Deploy for scalability.
  • State Management: Implemented with Zustand to handle inference flow, fallback logic, and live updates.
```typescript
// Confidence fallback handler: blend local and cloud predictions
// when the ONNX model is unsure.
const CONFIDENCE_THRESHOLD = 0.75;
if (onnxConfidence < CONFIDENCE_THRESHOLD) {
  finalPrediction = blend(onnxResult, geminiResult, alpha);
}
```
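Browser-side inference of this kind implies a preprocessing step before the model runs. A minimal sketch, under assumptions of ours (a MobileNet-style model with ImageNet mean/std normalization and channel-first layout; the helper name is hypothetical):

```typescript
// Convert RGBA pixels (e.g. from a canvas) into the normalized,
// channel-first (NCHW) Float32Array a MobileNet-style ONNX model
// typically expects. Assumptions: ImageNet mean/std; alpha channel dropped.
function toModelInput(
  rgba: Uint8ClampedArray,
  width: number,
  height: number
): Float32Array {
  const mean = [0.485, 0.456, 0.406];
  const std = [0.229, 0.224, 0.225];
  const plane = width * height;
  const out = new Float32Array(3 * plane);
  for (let i = 0; i < plane; i++) {
    for (let c = 0; c < 3; c++) {
      const v = rgba[i * 4 + c] / 255; // scale channel to [0, 1]
      out[c * plane + i] = (v - mean[c]) / std[c];
    }
  }
  return out;
}
```

The resulting array can then be wrapped in an `ort.Tensor("float32", data, [1, 3, height, width])` and fed to an onnxruntime-web `InferenceSession` running on the WASM execution provider.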

Challenges we ran into

  • Shrinking a high-performing AI model into a browser-executable ONNX format without losing precision.
  • Dealing with real-world variability, poor lighting, motion blur, and low-quality camera sensors.
  • Maintaining low-latency inference on mobile devices while ensuring high prediction accuracy.
  • Building a multilingual, offline-first interface suited for remote agricultural communities.

Accomplishments that we're proud of

  • Achieving 80%+ model accuracy with instant, local inference.
  • Empowering farmers to detect and manage plant diseases independently, improving yield and income.
  • Seamlessly combining edge AI and cloud vision into one unified workflow.
  • Aligning AI innovation directly with food security and sustainable development goals.

What we learned

We learned that innovation doesn’t require massive infrastructure; it requires empathy and optimization. By combining AI compression, edge inference, and hybrid validation, we discovered how powerful and inclusive browser-based AI can be. The process reinforced that trust and simplicity are as crucial as accuracy when building technology for real-world users.

What's next for Leaf Labs

Next, we aim to expand Leaf Labs with offline diagnosis caching, voice-based interaction, and regional disease mapping to track agricultural health trends. We also plan to partner with local agricultural agencies and NGOs to scale adoption globally.

Leaf Labs - Engineering the Future of Food Security. Today.
