Building SoundSignal in 5 Days with Claude Code
How one developer and an AI pair-programmed a full-stack permit intelligence platform — 143 commits, 95 PRs, and ~29,500 lines of code in under a week.
I built SoundSignal — a full-stack building permit intelligence platform — in 5 days using Claude Code as my primary development partner. This is a transparent account of what that looked like: the workflow, the numbers, the friction, and what actually shipped.
The raw numbers
| Metric | Value |
|---|---|
| Calendar days | 5 |
| Claude Code sessions | 56 |
| Total messages | 414 |
| Commits | 143 |
| Pull requests | 95 |
| Lines of code | ~29,500 |
| Tests | 506 (464 backend + 42 frontend) |
| Estimated cost (Claude Code) | ~$200 |
Every commit went through a PR. No direct pushes to main. Claude Code handled the full implement → commit → push → PR → merge loop.
What is SoundSignal?
SoundSignal takes a property address on Bainbridge Island, WA and produces a complete building permit risk report. The pipeline:
- Resolve the address to a parcel via the county's SmartGov portal
- Scrape every building permit associated with that parcel
- Download all permit documents (PDFs — blueprints, inspection reports, applications)
- Extract structured data from each document using Claude (Layer 1)
- Aggregate per-permit summaries (Layer 2)
- Synthesize a final parcel report with risk flags (Layer 3)
- Display results in a real-time streaming dashboard
The 3-layer extraction avoids context window explosion. Each layer synthesizes its inputs before passing data up. The whole pipeline runs for about $1.30 per parcel using claude-sonnet-4-5-20250929.
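The layering can be sketched in a few lines. This is a minimal illustration of the synthesize-before-passing-up idea, not SoundSignal's actual code — the function names and record shapes are hypothetical stand-ins for the real Claude calls:

```python
# Minimal sketch of the 3-layer synthesis idea. Each layer compresses
# its inputs before handing results up, so no single prompt ever has
# to hold every raw document at once.

def extract_document(doc_text: str) -> dict:
    # Layer 1: one call per document -> small structured record.
    return {"summary": doc_text[:200], "fields": {}}

def summarize_permit(doc_records: list[dict]) -> dict:
    # Layer 2: aggregate one permit's document records into a summary.
    return {"n_documents": len(doc_records),
            "summaries": [r["summary"] for r in doc_records]}

def build_parcel_report(permit_summaries: list[dict]) -> dict:
    # Layer 3: final synthesis over already-compact permit summaries.
    return {"permit_count": len(permit_summaries), "risk_flags": []}

permits = [["doc a text", "doc b text"], ["doc c text"]]
layer2 = [summarize_permit([extract_document(d) for d in docs])
          for docs in permits]
report = build_parcel_report(layer2)
```

The point is the shape: by the time Layer 3 runs, it sees only permit summaries, never raw PDFs.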
The stack
- Backend: Python 3.12, FastAPI, Celery, PostgreSQL, Redis
- Frontend: Next.js 14, TypeScript, Tailwind CSS
- AI: Anthropic Claude API (Bedrock in production)
- Scraping: Playwright for browser automation
- AWS infrastructure (all managed via Terraform):
  - Compute: ECS Fargate (3 services: API, worker, dashboard) with auto-scaling
  - Database: RDS PostgreSQL with automated backups
  - Caching/Pub-Sub: ElastiCache Redis
  - CDN: CloudFront with custom cache policies and origin request policies
  - Storage: S3 (document archival, Terraform state, newsletter reports)
  - Load Balancing: ALB with path-based routing and health checks
  - Auth: Cognito user pools with Google OAuth federation
  - Email: SES with DKIM/SPF, SNS/SQS for bounce/complaint handling
  - API Gateway: REST API with usage plans and API key management
  - DNS/TLS: Route 53 + ACM certificates (auto-validated)
  - Secrets: Secrets Manager for Cognito client secrets and API keys
  - Monitoring: CloudWatch log groups for all services
  - CI/CD: ECR for container images, Lambda + API Gateway for self-hosted GitHub Actions runner orchestration
  - Networking: VPC with public/private subnets, security groups, service discovery via Cloud Map
  - AI: Bedrock for Claude API access (IAM-authenticated, no API keys in production)
How Claude Code actually worked
The development loop was remarkably consistent across all 56 sessions:
- I describe what I want in plain English
- Claude Code explores the codebase, reads relevant files
- It proposes an approach (sometimes I redirect)
- It writes code, creates a branch, commits, pushes, opens a PR
- I review the diff, request changes or merge
The `--dangerously-skip-permissions` tradeoff
Early on, I started running Claude Code with `--dangerously-skip-permissions`. This lets it execute shell commands, write files, and run git operations without asking for confirmation each time.
The name is intentionally scary. And yes, it means Claude Code could `rm -rf` your repo if it wanted to. In practice, it never did anything destructive — but you're trusting the model's judgment for every file write and command execution. I was comfortable with this because:
- Everything was in git with frequent commits
- I reviewed every PR before merging
- The repository was a greenfield project with nothing sensitive
Would I do this on a production codebase with secrets? No. For a greenfield project where I'm reviewing every PR? The speed gain was worth it.
The worktree pattern
Claude Code works best when it creates git worktrees for each task. This keeps main clean and runnable while work happens in isolation. The typical flow:
```bash
git worktree add .claude/worktrees/feat/my-feature -b feat/my-feature
# ... make changes ...
git push -u origin feat/my-feature
gh pr create --title "..." --body "..."
# merge via PR
```
This pattern emerged organically and became the standard workflow by day 2.
What went well
Speed of iteration. The time from "I want X" to "X is deployed" was often under 10 minutes for straightforward features. Claude Code would read the relevant code, understand the patterns, write the implementation, add tests, and open a PR.
Pattern consistency. Once a pattern was established (like the docs system using MDX, or the API endpoint structure), Claude Code would replicate it faithfully. The blog you're reading right now was built by mirroring the existing docs infrastructure.
Test coverage. Claude Code wrote tests proactively. The test suite grew to 506 tests covering models, API endpoints, scraper HTML parsing, downloader logic, newsletter generation, rate limiting, and frontend utilities.
Infrastructure as code. The entire AWS deployment — 15+ services across compute, database, caching, CDN, auth, email, networking, and monitoring — was written as Terraform by Claude Code. Over 4,400 lines of infrastructure code across modularized Terraform configs. The CI/CD pipeline builds Docker images, runs migrations, applies Terraform, and deploys to ECS on every merge to main.
The friction points
Over-exploration. Claude Code sometimes spent too long reading files and exploring the codebase before writing any code. For a 5-day sprint, I needed fast iteration. I added a note to CLAUDE.md: "Limit exploration/planning to 2-3 min max before producing code."
Missing environment variables. When Claude Code added a new feature that required an environment variable (like an API key or S3 bucket), it would update the application code but sometimes forget to add the variable to the ECS task definition in Terraform. This caused several production debugging sessions where the code was correct but the deployment was missing config.
Context window pressure. Long sessions with many file reads would fill up context. The solution was shorter, focused sessions — one feature per session rather than marathon coding blocks.
PyMuPDF gotchas. Small things like `page.widgets()` returning a generator that's always truthy (you need `any(w for page in doc for w in page.widgets())`) tripped up both me and Claude Code until we figured out the pattern.
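The gotcha in miniature. A generator object is truthy even when it yields nothing, so `if page.widgets():` passes on every page. The stand-in classes below simulate the relevant PyMuPDF shapes so the example runs without a real PDF:

```python
# A generator object is always truthy, even when it yields nothing.
# FakePage/FakeDoc mimic the PyMuPDF interfaces involved (page.widgets()
# as a generator, doc iterating over pages) without needing a PDF.

class FakePage:
    def widgets(self):
        yield from []  # no form widgets on this page

class FakeDoc:
    def __init__(self, pages):
        self._pages = pages
    def __iter__(self):
        return iter(self._pages)

doc = FakeDoc([FakePage(), FakePage()])
page = next(iter(doc))

broken = bool(page.widgets())                       # generator object -> True
correct = any(w for p in doc for w in p.widgets())  # actually empty -> False
```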
What shipped
Beyond the core pipeline described above, here's everything else that was built and deployed:
- Next.js dashboard with job management, real-time progress, and structured result display
- Landing page with feature overview, use cases, and newsletter signup
- REST API with OpenAPI docs, API key authentication, and rate limiting
- Chrome extension for Zillow integration — run a permit check directly from a listing page
- Contractor scoring extracted from permit history
- Watchlist for monitoring properties over time
- Admin panel for user management and approval workflows
- Newsletter system — AI-generated weekly building activity digests using Claude Opus, with SES email delivery, subscriber management, and privacy rules that never expose residential addresses
- CloudFront CDN with hybrid caching — immutable assets (JS/CSS/images) cached at the edge with long TTLs, while SSR pages and API routes bypass the cache entirely. This means the site is ready to absorb traffic spikes out of the box: static assets are served from edge locations worldwide without touching the origin, and CloudFront's built-in DDoS protection (AWS Shield Standard) covers the distribution automatically
- Rate limiting via slowapi (Redis-backed, distributed across Fargate tasks)
- Full CI/CD pipeline — lint, test, build, migrate, deploy on every merge. Self-hosted GitHub Actions runners on ECS Fargate handle CI and deploys (direct VPC access for migrations), while Docker image builds run on GitHub-hosted runners
- ~500 unit tests across backend and frontend
- Cost tracking per job with token usage breakdowns
- Database migrations — lightweight custom runner that discovers numbered SQL files, tracks applied versions, and runs automatically before each deployment
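The migration runner's behavior can be sketched briefly. This is not the actual code — file naming, the tracking table, and the use of sqlite3 (the real runner targets PostgreSQL) are illustrative — but it captures the described loop: discover numbered SQL files, skip versions already applied, record each one as it runs:

```python
# Sketch of a "discover numbered SQL files, track applied versions"
# migration runner. sqlite3 keeps the sketch self-contained; names
# like schema_migrations and 001_init.sql are illustrative.
import re
import sqlite3
from pathlib import Path

def run_migrations(conn: sqlite3.Connection, migrations_dir: Path) -> list[str]:
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations "
                 "(version TEXT PRIMARY KEY)")
    applied = {v for (v,) in conn.execute(
        "SELECT version FROM schema_migrations")}
    ran = []
    # Files like 001_init.sql, 002_add_users.sql, applied in number order.
    for path in sorted(migrations_dir.glob("*.sql")):
        m = re.match(r"(\d+)_", path.name)
        if not m or m.group(1) in applied:
            continue  # unnumbered file, or already applied
        conn.executescript(path.read_text())
        conn.execute("INSERT INTO schema_migrations VALUES (?)",
                     (m.group(1),))
        ran.append(path.name)
    conn.commit()
    return ran
```

Because applied versions are recorded, running it again before every deployment is a safe no-op.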
The $1.30/parcel breakdown
The AI extraction cost per parcel breaks down roughly as:
- Layer 1 (document extraction): ~$0.80 — this is the expensive part, especially for architectural drawings that go through the vision path
- Layer 2 (permit aggregation): ~$0.30 — synthesizing extracted documents into permit summaries
- Layer 3 (parcel report): ~$0.20 — final synthesis with risk flags
Vision-path documents (blueprints, scanned forms) cost 5-20x more than text-path documents. The PDF classifier in downloader.py routes each document to the cheapest viable extraction method.
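A routing heuristic like this one conveys the idea — it is a hypothetical sketch, not the actual downloader.py logic: if a PDF's pages yield enough real extractable text, take the cheap text path; otherwise fall back to vision:

```python
# Hypothetical text-vs-vision router (not the actual downloader.py
# classifier). Threshold and signature are illustrative: pages with a
# usable text layer go to the cheap text path; scans and drawings with
# little extractable text fall back to the expensive vision path.
def choose_extraction_path(page_texts: list[str],
                           min_chars_per_page: int = 200) -> str:
    avg_chars = (sum(len(t.strip()) for t in page_texts)
                 / max(len(page_texts), 1))
    return "text" if avg_chars >= min_chars_per_page else "vision"
```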
Prompt caching via `cache_control: {"type": "ephemeral"}` on system prompts reduces repeat costs significantly. The system prompt and municipal glossary are cached across documents within the same job.
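The request shape looks like this. The function below only builds the keyword arguments for a `messages.create` call, following Anthropic's prompt-caching format — no API call is made, and the prompt/glossary contents are illustrative, not SoundSignal's real prompts:

```python
# Shape of a prompt-cached request per Anthropic's prompt-caching API.
# This builds the request kwargs only; SYSTEM_PROMPT and GLOSSARY
# contents are illustrative placeholders.
SYSTEM_PROMPT = "You extract structured data from building permit documents."
GLOSSARY = "ADU: accessory dwelling unit. ROW: right of way. ..."

def build_request(document_text: str) -> dict:
    return {
        "model": "claude-sonnet-4-5-20250929",
        "max_tokens": 2048,
        # Blocks marked ephemeral are cached and reused across documents
        # in the same job, so only the first call pays full input cost.
        "system": [
            {"type": "text", "text": SYSTEM_PROMPT,
             "cache_control": {"type": "ephemeral"}},
            {"type": "text", "text": GLOSSARY,
             "cache_control": {"type": "ephemeral"}},
        ],
        "messages": [{"role": "user", "content": document_text}],
    }
```

Only the variable part — the document text — changes per call; everything above it in the prompt stays cache-stable.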
Reflections
AI-assisted development is not autopilot. I was actively directing every session — choosing what to build, reviewing every PR, catching issues Claude Code missed (like the env var gaps), and making architectural decisions. The AI handled the implementation mechanics. I handled the product decisions.
CLAUDE.md is the most important file. The project instructions file that Claude Code reads at the start of every session was critical. It described the architecture, the data models, the deployment pipeline, and the development conventions. Without it, each session would start from scratch. With it, Claude Code could pick up exactly where the last session left off.
The PR workflow kept things safe. Every change went through a branch and PR. This gave me a review checkpoint before anything hit main, and main always stayed deployable. The worktree pattern meant I could run the app from main while Claude Code worked on a feature in isolation.
Five days is not a magic number. The timeline was compressed because I was working on this full-time and had a clear vision of what I wanted to build. Claude Code removed the implementation bottleneck, but the thinking — what to build, how to architect it, what tradeoffs to make — was all human.
The honest summary: Claude Code is a remarkably capable pair programmer that can maintain context across a complex codebase, follow established patterns, and execute multi-file changes reliably. It's not replacing developers. It's making individual developers dramatically more productive.