Inspiration

Modern work lives across dozens of folders, file types, and naming styles, but local file search still expects exact keywords. We built Findly to make finding files easier by letting you search by intent, in natural language.

What it does

Findly is an AI-powered desktop file search engine that helps you locate relevant files quickly.

  • Monitors selected folders in the background.
  • Parses common document, image, and code formats (txt, md, pdf, docx, pptx, png, jpg, and source code files).
  • Indexes file content semantically for meaning-based search.
  • Returns ranked results with concise AI summaries.
  • Provides a Spotlight-style quick-search flow with a global shortcut.
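The meaning-based search above can be sketched end to end. This is a minimal toy version: the `embed` helper is a bag-of-words stand-in for the OpenAI embeddings Findly actually uses, and the indexed chunks are invented for illustration.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding', standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str, index: dict, k: int = 3) -> list:
    """Rank indexed chunks by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(index.items(), key=lambda kv: cosine(q, kv[1]), reverse=True)
    return [(path, round(cosine(q, vec), 3)) for path, vec in ranked[:k]]

# Hypothetical indexed chunks (path -> vector), as the watcher would produce.
index = {
    "notes/meeting.md": embed("quarterly budget planning meeting notes"),
    "src/parser.py": embed("parse pdf and docx files into text chunks"),
    "img/receipt.jpg": embed("scanned grocery receipt image"),
}

print(search("how do we parse documents", index, k=2))
```

In the real system the vectors live in Pinecone and the top candidates are re-ranked by Gemini; the ranking shape, though, is the same query-embed-and-compare loop.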

How we built it

Findly uses a three-part architecture:

  • Desktop app: Electron + React + TypeScript for the main UI and Spotlight experience.
  • Watcher service: Node.js + chokidar to detect file additions/changes and queue processing.
  • AI backend: Python + FastAPI for ingestion/search, OpenAI embeddings for vectorization, Pinecone for vector storage/retrieval, and Gemini for final ranking + summary generation.

Pipeline flow:

  • File change detected.
  • File parsed and chunked.
  • Chunks embedded and upserted into Pinecone.
  • User query embedded and matched semantically.
  • Top candidates re-ranked with summaries for clarity.

Challenges we ran into
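Much of the difficulty below traces back to the parse-and-chunk step of the pipeline above. A minimal fixed-size chunker with overlap, using illustrative sizes rather than Findly's actual parameters:

```python
def chunk_text(text: str, size: int = 200, overlap: int = 40) -> list:
    """Split text into overlapping character windows for embedding.

    Overlap keeps sentences that straddle a boundary present in both
    neighboring chunks, which helps retrieval recall.
    """
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

doc = "word " * 100  # 500 characters of stand-in document text
pieces = chunk_text(doc, size=200, overlap=40)
print(len(pieces), len(pieces[0]))
```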
  • Building reliable real-time indexing without overwhelming the system during bursts of file updates.
  • Normalizing metadata and parsing quality across multiple file formats (pdf, docx, pptx, code/text).
  • Balancing relevance, latency, and cost in a multi-stage retrieval + ranking pipeline.
  • Designing safe fallbacks when model/API calls fail so users still get useful results.

Accomplishments that we're proud of
  • Delivered a working end-to-end product, not just isolated experiments.
  • Achieved continuous background indexing for fresh, up-to-date search results.
  • Combined semantic retrieval with AI summaries so users can evaluate results faster.
  • Shipped a desktop UX that feels quick and practical for everyday workflows.

What we learned
  • Search quality depends heavily on chunking and metadata consistency, not only model selection.
  • Real-time ingestion systems need queueing/backpressure early, even in MVP stage.
  • Users trust results more when each hit includes path, recency, and a short explanation.
  • Cross-language systems (Electron/Node/Python) can move fast if interfaces are clean and explicit.

What's next for Findly
  • Add smarter indexing controls: include/exclude paths, file-type filters, and scheduling.
  • Add personalization and feedback loops to improve ranking over time.
  • Scale performance for larger libraries and faster incremental re-indexing.
  • Explore secure team/workspace search with shared contexts and permission-aware results.
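A first cut of the include/exclude indexing controls on the roadmap could look like the sketch below. The rule format and the `should_index` helper are assumptions for illustration, not Findly's actual design.

```python
import fnmatch
from pathlib import PurePosixPath

def should_index(path: str, include: list, exclude: list, extensions: set) -> bool:
    """Decide whether the watcher should hand a file to the indexer.

    include/exclude are glob patterns matched against the full path;
    exclusions win over inclusions, and the extension filter applies last.
    """
    if any(fnmatch.fnmatch(path, pat) for pat in exclude):
        return False
    if include and not any(fnmatch.fnmatch(path, pat) for pat in include):
        return False
    return PurePosixPath(path).suffix.lstrip(".").lower() in extensions

# Hypothetical user configuration.
rules = {
    "include": ["/home/me/docs/*", "/home/me/projects/*"],
    "exclude": ["*/node_modules/*", "*/.git/*"],
    "extensions": {"txt", "md", "pdf", "docx", "pptx"},
}

print(should_index("/home/me/docs/plan.md", **rules))               # expected: True
print(should_index("/home/me/projects/node_modules/a.md", **rules)) # expected: False
```

Running the check in the watcher, before files are queued, keeps excluded paths from ever consuming parsing or embedding budget.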
