I Built an AI Agent to Apply to 1,000 Jobs While I Kept Building Things

Job searching is a full-time job. That's the actual problem. It competes directly with the work I love: building things, learning, and automating stuff. At some point I'd had enough and decided to fix it.

So I built ApplyPilot. It's a fully autonomous job application pipeline that discovers jobs, scores them against my profile, tailors my resume per role, writes cover letters, and submits applications, all by itself. 1,000 applications in 2 days. I have interviews scheduled right now.

Here's how it works and what surprised me along the way.

The Situation

The existing tools in this space are either cheap and dumb, or smart and expensive. The "smart" browser automation services charge per application and still require babysitting. I've been building automations for years - it's genuinely one of my stronger skills - so I decided to just build the thing myself instead of paying someone else's margins.

The core idea was treating job searching as a pipeline problem. Every stage has a clear input and output. Automate each one.

The Pipeline

ApplyPilot runs in 6 stages:

  1. Discover - Scrapes Indeed, LinkedIn, Glassdoor, ZipRecruiter, and Google Jobs, plus 48 pre-configured Workday employer portals and 30+ direct career sites
  2. Enrich - Fetches the full job description from each listing URL
  3. Score - An LLM rates each job 1-10 based on my resume and search preferences. Only jobs scoring ≥7 move forward (sketched right after this list)
  4. Tailor - Rewrites my resume for the specific role (reorganizes sections, emphasizes relevant experience, injects keywords from the job description)
  5. Cover Letter - Generates a targeted cover letter per job
  6. Apply - Submits the application
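
For the scoring stage, the mechanics are simple. Here's a minimal sketch, not ApplyPilot's actual code: it assumes the Anthropic Python SDK, a placeholder model ID, and prompt wording I'm making up on the spot; the 1-10 rating and the ≥7 cutoff are the real rules.

import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SCORE_THRESHOLD = 7  # only jobs scoring >= 7 move forward

def score_job(resume: str, preferences: str, job_description: str) -> int:
    # Ask the model for a single integer rating of job/profile fit.
    prompt = (
        "Rate how well this job matches the candidate on a 1-10 scale. "
        "Reply with a single integer and nothing else.\n\n"
        f"RESUME:\n{resume}\n\nPREFERENCES:\n{preferences}\n\n"
        f"JOB DESCRIPTION:\n{job_description}"
    )
    response = client.messages.create(
        model="claude-3-5-haiku-latest",  # placeholder model ID, swap as needed
        max_tokens=10,
        messages=[{"role": "user", "content": prompt}],
    )
    return int(response.content[0].text.strip())

def passes(score: int) -> bool:
    return score >= SCORE_THRESHOLD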

The whole thing runs off a single SQLite database that acts as a conveyor belt. Each stage reads what the previous one produced and writes its output to new columns. You can run stages independently, restart failed ones, or run a subset:

applypilot run                    # full pipeline
applypilot run score tailor      # just re-score and re-tailor
applypilot apply --workers 3     # 3 Chrome instances submitting in parallel
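
The conveyor-belt pattern is easy to reproduce. Here's a minimal sketch of one stage under assumed table and column names (jobs, score, tailored_resume); the real schema is in the repo.

import sqlite3

def tailor_resume(job_description: str) -> str:
    # Stand-in for the real LLM tailoring call (stage 4).
    return "tailored resume text"

def run_tailor_stage(db_path: str = "applypilot.db") -> None:
    conn = sqlite3.connect(db_path)
    # Rows the score stage passed (>= 7) that this stage hasn't processed yet.
    rows = conn.execute(
        "SELECT id, description FROM jobs "
        "WHERE score >= 7 AND tailored_resume IS NULL"
    ).fetchall()
    for job_id, description in rows:
        conn.execute(
            "UPDATE jobs SET tailored_resume = ? WHERE id = ?",
            (tailor_resume(description), job_id),
        )
        conn.commit()  # commit per row so a crash doesn't lose finished work
    conn.close()

Because every stage's work queue is just "rows where my column is still NULL", restarting a failed stage picks up exactly where it left off.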

The discovery config took the most upfront time: 48 Workday employer configs, 30+ direct sites, rules for blocked sites and ATS detection. But once you have it, it's done. I'd encourage anyone building something similar to start there and build a library of templates. It's a rewarding step and it makes everything downstream much easier.
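
I don't know the repo's exact config format, so treat this as a purely hypothetical sketch of what a template library can look like: one entry per Workday portal or direct career site, plus global rules for blocked domains and ATS detection.

# Every name, URL, and field below is illustrative, not ApplyPilot's real config.
WORKDAY_PORTALS = {
    "acme": {
        "url": "https://acme.wd5.myworkdayjobs.com/External",
        "search": {"q": "automation engineer", "locations": ["Remote"]},
    },
}

DIRECT_SITES = [
    {"name": "Example Co", "careers_url": "https://example.com/careers"},
]

BLOCKED_DOMAINS = {"example-staffing-agency.com"}  # recruiters, duplicate boards, spam

# URL substrings that reveal which ATS a listing belongs to.
ATS_FINGERPRINTS = {
    "workday": "myworkdayjobs.com",
    "greenhouse": "boards.greenhouse.io",
    "lever": "jobs.lever.co",
}

The payoff is that adding another employer on an ATS you already support is a few lines of config rather than new scraping code.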

The Architectural Mistake I Made First

My first instinct was a traditional orchestrator/agent setup - a central controller dispatching discrete actions to a stateless agent. Pull an action, execute it, report back, repeat. It kept the context window small and felt efficient on paper.

It didn't work well. Form filling isn't a sequence of independent actions - it's a stateful session. The agent needs to see the page, understand what it just filled, notice when something went wrong, and adapt. A thin stateless action-puller can't do any of that reliably.

I switched to a full LLM session with persistent context as the brain - one continuous conversation per application, with complete page visibility throughout. The agent could actually reason about what was happening instead of just executing one-off commands. The difference was immediate.
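
In code, the difference is mostly about where the history lives. A hedged sketch of the session-as-brain shape, in which parse_action, execute_action, and get_page_snapshot are hypothetical helpers standing in for the browser layer and the model ID is a placeholder: one messages list per application, growing turn by turn, with the full page fed back in every time.

import anthropic

client = anthropic.Anthropic()

def apply_to_job(job_url: str, max_turns: int = 40) -> None:
    # One continuous conversation per application: the model sees every
    # previous action, result, and page state, not just the current step.
    messages = [{
        "role": "user",
        "content": f"Fill out and submit the application at {job_url}. "
                   "Reply with exactly one action per turn as JSON.",
    }]
    for _ in range(max_turns):
        response = client.messages.create(
            model="claude-3-5-haiku-latest",  # placeholder model ID
            max_tokens=1024,
            messages=messages,
        )
        reply = response.content[0].text
        messages.append({"role": "assistant", "content": reply})

        action = parse_action(reply)      # hypothetical: JSON reply -> action dict
        if action["type"] == "done":
            break
        result = execute_action(action)   # hypothetical: drives the browser
        page = get_page_snapshot()        # hypothetical: current page text/DOM
        messages.append({
            "role": "user",
            "content": f"Result: {result}\n\nCurrent page:\n{page}",
        })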

Haiku Is the GOAT

I know that sounds like a take, but I mean it. Claude Haiku follows instructions precisely, barely hallucinates on structured tasks, and is fast enough to run as the core of a real-time automation. For this use case - filling forms with clear instructions and real page context - it outperforms bigger models on the metrics that actually matter.

Some things Haiku did that I didn't explicitly build for:

It reset my LinkedIn password. One application required LinkedIn login and the session had expired. Haiku navigated to the forgot-password flow, reset the password, and continued the application. I didn't tell it to do that. It identified the obstacle and removed it.

It sent an email when there was no form. One listing had no application form - just a contact email buried in the description. Haiku noticed, composed a professional email, attached my resume, and sent it. The correct behavior for that situation, with zero special-casing in my code.

It completed a French application entirely in French. I didn't build any localization handling. It just handled it.

These aren't lucky guesses - they're genuine adaptations to situations the code didn't anticipate. That's the actual value of using a capable model as the agent brain rather than a rigid script.

The Result

1,000 applications in 2 days. Multiple companies reached out and I'm in the interview process right now. It works.

The resume tailoring is a big part of why. ApplyPilot never fabricates anything - there's a resume_facts section in the config that locks the real companies, real projects, and real metrics. The AI can reorganize and emphasize, but it can't invent. That matters both for integrity and for not getting blindsided in an interview.
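
I won't pretend to know the exact schema, but the idea is easy to picture. A hypothetical sketch: the hard facts live in one locked structure, and the tailoring prompt is only allowed to reorder and emphasize them.

# Hypothetical shape of a resume_facts section; names and numbers are placeholders.
RESUME_FACTS = {
    "companies": [
        {"name": "Example Corp", "title": "Automation Engineer", "dates": "2021-2024"},
    ],
    "projects": [
        {"name": "Internal ETL pipeline", "metric": "cut reporting time by 80%"},
    ],
}

TAILORING_RULES = (
    "Reorganize sections, reorder bullets, and emphasize experience relevant to "
    "the job description. Every company, title, date, project, and metric must "
    "appear verbatim in RESUME_FACTS. Never invent or alter a fact."
)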

On the ethics of applying at scale: I have an extensive skillset across multiple domains and I'd genuinely thrive in any of the roles I targeted. The tailoring means each application is actually relevant to the role, not just spam. If a company receives a well-matched resume for something I can do, I don't see the problem. The reader can draw their own conclusion.

What I'd Tell You

If job searching is eating your time right now, that frustration is real, and it doesn't have to work that way. The tools to automate most of this already exist.

Start with the discovery config. Build a library of job site templates. That foundation makes everything else possible, and once it's built you won't have to touch it again.

The code is at https://github.com/Pickle-Pixel/ApplyPilot. Go build something.
