Job searching is a full-time job.
Manually checking multiple job boards every day, reading through dozens of descriptions, and deciding which roles are actually worth applying to is slow, inconsistent, and exhausting — especially while enrolled in school full-time and searching for roles in a specific geographic area that fit a specific skill set.
The goal was to eliminate the daily search entirely. Pull the listings automatically. Score them against your actual resume. Only surface the ones that matter.
Five-stage automated pipeline.
Key implementation details.
The AI scoring prompt is intentionally structured — Claude is given a defined output format and instructed to respond with only valid JSON, making the response safe to parse directly without brittle string manipulation.
```python
import json

# RESUME is a module-level string constant holding the candidate's resume text

def score_job(client, job):
    # Field names here follow the normalized job schema used after merging sources
    title = job.get("title", "")
    company = job.get("company", "")
    location = job.get("location", "")
    description = job.get("description", "")

    prompt = f"""You are a career advisor. Score how well this job matches the candidate's resume on a scale of 1-10.
Only return a JSON object with these fields: score (integer 1-10), reason (1-2 sentences explaining the match).

RESUME: {RESUME}

JOB:
Title: {title}
Company: {company}
Location: {location}
Description: {description}

Respond with only valid JSON: {{"score": 7, "reason": "Strong SQL match."}}"""

    message = client.messages.create(
        model="claude-haiku-4-5-20251001",
        max_tokens=150,
        messages=[{"role": "user", "content": prompt}],
    )
    result = json.loads(message.content[0].text.strip())
    return result["score"], result["reason"]
```
Deduplication is handled by building a dictionary keyed on job ID before scoring, so the same listing pulled under multiple search terms is never scored twice — keeping API calls and cost to a minimum.
```python
# Merge both sources, deduplicate by job ID
all_jobs = {}
for job in adzuna_jobs + arbeitnow_jobs:
    all_jobs[job["id"]] = job  # same ID = overwrite, not duplicate

# Score every unique listing
scored = []
for job in all_jobs.values():
    score, reason = score_job(client, job)
    if score >= 8:
        scored.append((job, score, reason))

# Sort highest match first
scored.sort(key=lambda x: x[1], reverse=True)
```
What the daily digest looks like.
The pipeline ends with a formatted HTML email delivered each morning. Each card shows the role, company, location, salary if available, AI match score, and a direct link to apply.
📊 Your Daily Job Matches
Jobs near Chandler, AZ (+ remote) scored 8/10 or higher against your resume.
Data Analyst II — Operations
💰 $58,000 – $74,000/yr
Match Score: 9/10
Strong alignment on SQL, Tableau, and operational data experience within a manufacturing environment.
Inventory & Reporting Analyst
Match Score: 8/10
Direct match on inventory control background and Excel/data tracking skills; contract role with strong conversion potential.
Powered by Adzuna + Arbeitnow + Claude AI
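A digest like the one above can be assembled and sent with the standard library's `email` and `smtplib` modules. This is a minimal sketch, not the project's actual code: the card markup, subject line, and function names are illustrative, and it assumes Gmail SMTP over SSL with an app password.

```python
import smtplib
from email.mime.text import MIMEText

def build_digest_html(scored):
    """Render (job, score, reason) tuples as simple HTML cards."""
    cards = []
    for job, score, reason in scored:
        cards.append(
            f"<div><h3>{job['title']} at {job['company']}</h3>"
            f"<p>Match Score: {score}/10</p><p>{reason}</p>"
            f"<a href='{job['url']}'>Apply</a></div>"
        )
    return "<h2>📊 Your Daily Job Matches</h2>" + "".join(cards)

def send_digest(html, sender, app_password, recipient):
    """Send the digest as an HTML email via Gmail SMTP."""
    msg = MIMEText(html, "html")
    msg["Subject"] = "Your Daily Job Matches"
    msg["From"] = sender
    msg["To"] = recipient
    # Requires a Gmail app password, not the account password
    with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
        server.login(sender, app_password)
        server.send_message(msg)
```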
What this project demonstrated.
- API integration end-to-end — authenticating with Adzuna, parsing paginated JSON responses, handling timeouts and exceptions gracefully across two different API schemas.
- Prompt engineering for structured output — designing a Claude prompt that reliably returns parseable JSON rather than freeform text, with fallback handling for malformed responses.
- Cost-aware design — deduplication before scoring means every API call to Claude is on a unique listing. On a typical day, ~400 listings deduplicate down significantly before hitting the model.
- Secrets management — credentials for three separate services (Adzuna, Anthropic, Gmail) stored in a `.env` file, loaded with python-dotenv, never hardcoded.
- Automated scheduling — configured with Windows Task Scheduler to run daily at a set time, requiring zero manual intervention once deployed.
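The "graceful timeout and exception handling" around the job-board calls can be sketched as below. This is an illustrative version, not the project's exact code: the endpoint follows Adzuna's public search API shape, but the function name, parameters, and return-empty-on-failure policy are assumptions.

```python
import requests

def fetch_adzuna_page(app_id, app_key, what, where, page=1):
    """Fetch one page of Adzuna search results; return [] on timeout or HTTP error."""
    url = f"https://api.adzuna.com/v1/api/jobs/us/search/{page}"
    params = {
        "app_id": app_id,
        "app_key": app_key,
        "what": what,
        "where": where,
        "results_per_page": 50,
    }
    try:
        resp = requests.get(url, params=params, timeout=10)
        resp.raise_for_status()
        return resp.json().get("results", [])
    except requests.RequestException:
        # Timeouts, connection errors, and non-2xx responses all land here;
        # a failed page just contributes no listings to that day's run
        return []
```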
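The fallback handling for malformed model responses can look roughly like this. The fence-stripping heuristic and the score-0 default are assumptions for illustration, not necessarily the project's exact logic:

```python
import json

def parse_score_response(text):
    """Parse Claude's JSON reply, tolerating markdown fences and bad output."""
    cleaned = text.strip()
    # Strip a ```json ... ``` wrapper if the model added one despite instructions
    if cleaned.startswith("```"):
        cleaned = cleaned.strip("`")
        if cleaned.startswith("json"):
            cleaned = cleaned[4:]
        cleaned = cleaned.strip()
    try:
        result = json.loads(cleaned)
        return int(result["score"]), result["reason"]
    except (json.JSONDecodeError, KeyError, ValueError, TypeError):
        # Malformed response: score 0 so the listing never reaches the digest
        return 0, "unparseable response"
```

Scoring unparseable replies as 0 rather than raising keeps one bad response from crashing the whole daily run.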
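The real pipeline uses python-dotenv's `load_dotenv()` for this; as a stdlib-only illustration of the same idea, a `.env` loader is just a few lines of parsing into `os.environ` (the variable names below are hypothetical):

```python
import os

def load_env(path=".env"):
    """Minimal stand-in for python-dotenv's load_dotenv: parse KEY=VALUE lines."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                # setdefault: real environment variables win over .env values
                os.environ.setdefault(key.strip(), value.strip())
```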