Inspiration

We realized that the modern hiring process is fundamentally flawed. Companies lose 30% of a first-year salary on a single bad hire, and the average time-to-hire has ballooned to 4-6 weeks. Why? Because we are still using "dumb" tools to do a human's job. We post generic job listings, sift through thousands of unverified resumes, and conduct biased interviews that test rehearsal skills instead of actual competence.

We asked ourselves: What if you didn't just hire a tool? What if you hired an autonomous digital recruiter?

What it does

Simer is the world's first Autonomous AI Recruitment Agent. It doesn't just "assist" with hiring; it executes the entire lifecycle through a multi-agent system:

Agent Zero: Sits inside your productivity stack (Slack/Jira) to detect burnout and workload spikes before you need to hire.

Agent A: Autonomously verifies skills by analyzing GitHub profiles and portfolios (filtering out spam applications) or by headhunting passive talent on LinkedIn.

Agent B: Conducts real-time, voice-native video interviews. It spins up live coding sandboxes (for devs) or design canvases (for designers) to test actual skills.

Agent C: Acts as a "Fairness Engine," filtering out bias and drafting the final Offer Letter based on a weighted "True Fit" score.

How we built it

We architected Simer as a multi-agent system utilizing the Model Context Protocol (MCP) to decouple our AI from the tools it controls.
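The decoupling idea can be sketched in plain Python: agents call tools by name through a registry instead of hardcoding each vendor API. This is a simplified stand-in for what MCP servers provide, and all names here are illustrative, not Simer's actual implementation.

```python
# Simplified sketch of the MCP idea: agents resolve tools through a
# registry instead of binding directly to vendor SDKs.
from typing import Callable, Dict


class ToolRegistry:
    """Maps tool names to callables so agents stay decoupled from vendors."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., object]] = {}

    def register(self, name: str) -> Callable:
        def wrap(fn: Callable) -> Callable:
            self._tools[name] = fn
            return fn
        return wrap

    def call(self, name: str, **kwargs) -> object:
        return self._tools[name](**kwargs)


registry = ToolRegistry()


@registry.register("fetch_github_profile")
def fetch_github_profile(username: str) -> dict:
    # A real MCP server would hit the GitHub API here.
    return {"username": username, "public_repos": 0}


# An agent only knows the tool name, not the underlying API:
profile = registry.call("fetch_github_profile", username="octocat")
```

Swapping GitHub for GitLab, or a mock for testing, then only means registering a different callable under the same name.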

We used AWS Bedrock (Claude 4.5 Sonnet) for high-level reasoning and resume analysis.

We integrated ElevenLabs Conversational AI to achieve sub-400ms latency, creating an interview experience that feels indistinguishable from a human conversation. We used Claude 4.5 Sonnet to generate job-specific questions (e.g., a live programming question for developers).
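A hedged sketch of the question-generation step: the prompt wording, JSON shape, and model ID below are assumptions for illustration, and the Bedrock call itself is deferred behind a function so the sketch runs without AWS credentials.

```python
# Sketch: requesting a job-specific interview question from Claude on
# Amazon Bedrock. Prompt text and model ID are illustrative assumptions.


def build_question_prompt(role: str, skills: list[str]) -> list[dict]:
    """Build a Bedrock `converse`-style message asking for one practical question."""
    text = (
        f"You are a technical interviewer. Generate one hands-on question "
        f"for a {role} candidate that tests: {', '.join(skills)}. "
        f"Return JSON with keys 'question' and 'expected_signals'."
    )
    return [{"role": "user", "content": [{"text": text}]}]


def generate_question(role: str, skills: list[str]) -> str:
    """Call Bedrock (requires AWS credentials; not executed in this sketch)."""
    import boto3  # deferred import so the sketch runs without AWS configured

    client = boto3.client("bedrock-runtime")
    resp = client.converse(
        modelId="anthropic.claude-sonnet-4-5",  # placeholder model ID
        messages=build_question_prompt(role, skills),
    )
    return resp["output"]["message"]["content"][0]["text"]


messages = build_question_prompt("backend developer", ["Python", "SQL"])
```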

We utilized E2B Sandboxes to give Agent B a secure cloud environment where it can execute candidate code safely in real-time.
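To illustrate what Agent B does with candidate code, here is a local stand-in: run the snippet in a child interpreter with a hard timeout and capture its output. In production E2B provides the actual isolation (an isolated cloud environment); a bare subprocess is *not* a real sandbox, and this sketch exists only to show the execute-and-evaluate loop.

```python
# Local stand-in for Agent B's code execution step. NOT a real sandbox --
# E2B supplies the isolation in production; this only shows the loop of
# running a snippet with a timeout and inspecting the result.
import subprocess
import sys


def run_candidate_code(code: str, timeout_s: float = 5.0) -> dict:
    """Execute a code snippet in a child interpreter with a hard timeout."""
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout_s,
        )
        return {"stdout": proc.stdout, "stderr": proc.stderr, "ok": proc.returncode == 0}
    except subprocess.TimeoutExpired:
        return {"stdout": "", "stderr": "timed out", "ok": False}


result = run_candidate_code("print(sum(range(10)))")
```

Agent B can then feed `result["stdout"]` back into the interview conversation to discuss the candidate's answer.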

Challenges we ran into

Taming the Voice Latency (ElevenLabs API): Our biggest technical hurdle was integrating the ElevenLabs Conversational AI. We initially faced significant issues with connection stability and API timeouts, which broke the immersion of the "human-like" interview. Debugging the WebSocket handshake to ensure the AI didn't cut off the user or stay silent for too long was a marathon effort.

The "Idea" Pivot: We started with a broad concept of "better hiring," but struggled to define how AI should intervene. Moving from a standard "video interview tool" to a "Multi-Agent System" required a complete mental shift. We had to architect independent agents (Sentinel, Scout, Interviewer) rather than just building one monolithic feature.

Accomplishments that we're proud of

Agent B is Live (The Talking Sandbox): We are incredibly proud of shipping a working MVP of Agent B. We successfully connected a real-time, low-latency voice agent to a live code execution environment. Seeing the AI ask a coding question, listen to the user, and evaluate the code in real-time is a magical user experience.

The "Gatekeeper" Works: We successfully built the Inbound Mode for Agent A. It can take a resume, parse it using LLMs, and verify the skills against a job description, effectively automating the first 48 hours of a recruiter's job.
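The verification step after parsing can be sketched simply: compare the skill list the LLM extracted against the job description's requirements and flag gaps. Field names and the 0.7 pass bar are illustrative assumptions, not Agent A's production logic.

```python
# Simplified sketch of Agent A's verification step: once the LLM has
# parsed a resume into a skill list, score it against the job
# description's requirements. Threshold and fields are illustrative.


def verify_skills(resume_skills: list[str], required: list[str]) -> dict:
    """Return coverage of required skills, the missing ones, and a pass flag."""
    have = {s.lower() for s in resume_skills}
    missing = [r for r in required if r.lower() not in have]
    coverage = 1 - len(missing) / len(required) if required else 1.0
    return {"coverage": coverage, "missing": missing, "pass": coverage >= 0.7}


report = verify_skills(["Python", "Docker", "SQL"], ["python", "sql", "Kubernetes"])
# 2 of 3 requirements covered -> below the 0.7 bar, flagged for human review
```

In practice the "verify" half also cross-checks claims against GitHub activity, which is what makes this stronger than keyword matching alone.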

Bias-Aware Decision Logic: We built the initial framework for Agent C, demonstrating that an AI can provide a weighted "True Fit" score rather than just a raw performance metric.
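The weighted-score idea can be shown in a few lines: combine several normalized signals instead of ranking on raw interview performance alone. The weights and signal names below are assumptions for illustration, not Agent C's production values.

```python
# Illustrative sketch of the weighted "True Fit" score: a blend of
# normalized signals rather than a single raw metric. Weights and
# signal names are assumptions, not Simer's production values.
WEIGHTS = {"skills_verified": 0.4, "interview": 0.35, "role_alignment": 0.25}


def true_fit(scores: dict[str, float]) -> float:
    """Each score is in [0, 1]; returns the weighted average over known signals."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)


fit = true_fit({"skills_verified": 0.9, "interview": 0.6, "role_alignment": 0.8})
# 0.4*0.9 + 0.35*0.6 + 0.25*0.8 = 0.77
```

Keeping the weights explicit (rather than baked into a prompt) is also what makes the score auditable for the fairness checks.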

What we learned

Building an entire platform in under 5 hours is not an easy task. :((

We also learned that for agents to be truly useful, they need standardized ways to talk to tools. Adopting the Model Context Protocol (MCP) mindset changed how we viewed integrations—moving from "hardcoded APIs" to "flexible resources."

What's next for Simer

Building Agent Zero: Currently, our system reacts to job postings. Our next major sprint is building Agent Zero, the proactive monitor that connects to Slack and Jira to detect hiring needs before they are reported.
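One way such a monitor might flag a hiring need is a simple spike rule over workload counts (say, weekly Jira issues per team). This is a hedged sketch of the idea, not the planned production detector.

```python
# Hedged sketch of workload-spike detection for Agent Zero: flag the
# current count if it sits well above the historical mean. A rolling
# z-score rule; thresholds and inputs are illustrative.
from statistics import mean, pstdev


def is_spike(history: list[int], current: int, z: float = 2.0) -> bool:
    """True if `current` is more than `z` std-devs above the history mean."""
    mu, sigma = mean(history), pstdev(history)
    if sigma == 0:
        return current > mu
    return (current - mu) / sigma > z


# Stable weeks around ~20 tickets, then a jump to 45:
assert is_spike([19, 21, 20, 22, 18], 45) is True
assert is_spike([19, 21, 20, 22, 18], 22) is False
```

A sustained spike across several weeks, rather than a single outlier, would be the actual trigger for suggesting a new hire.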

Unlocking "Headhunter Mode": While Agent A handles inbound applicants well, we are actively developing the Outbound Scout capabilities to autonomously scan LinkedIn and reach out to passive candidates who haven't applied yet.

Refining the "Judge": We plan to deepen Agent C's bias detection capabilities, moving from simple prompt engineering to training a custom adapter on Amazon SageMaker for more nuanced fairness audits.

Built With

AWS Bedrock (Claude 4.5 Sonnet) · ElevenLabs Conversational AI · E2B Sandboxes · Model Context Protocol (MCP)