RECENT WORK

Anonymized for client privacy — happy to walk through specifics on a call.

GLOBAL SEO PLATFORM · 50K+ PUBLISHER MARKETPLACE

Routing publisher inputs without a human in the loop

THE WORKFLOW PROBLEM

A two-sided marketplace was processing hundreds of publisher submissions weekly through manual review and routing. The ops team was the bottleneck for every transaction, and quality was inconsistent across reviewers.

WHAT CHANGED

A Zoom → n8n → Claude pipeline that captured intake calls, scored submissions against a structured rubric, routed approved publishers into the right marketplace tier, and flagged edge cases for human review. An eval framework wrapped the pipeline to monitor scoring drift over time.
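The routing step can be sketched as plain decision logic downstream of the model: rubric scores come in, a tier or a human-review flag comes out. The dimension names, weights, and thresholds below are illustrative placeholders, not the client's actual rubric.

```python
# Illustrative tier-routing logic. Upstream, a Claude scoring prompt
# produces rubric scores per dimension (0-10); this step maps the
# weighted score to a marketplace tier and sends borderline cases to
# human review instead of guessing. All names and cutoffs are made up.

WEIGHTS = {"content_quality": 0.4, "domain_authority": 0.35, "responsiveness": 0.25}

def route_publisher(scores: dict[str, float]) -> str:
    """scores: rubric dimensions on a 0-10 scale."""
    weighted = sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)
    # Near a tier boundary, the cost of a misroute outweighs the cost
    # of a human look, so flag it.
    if abs(weighted - 7.0) < 0.5 or abs(weighted - 4.0) < 0.5:
        return "human_review"
    if weighted >= 7.0:
        return "premium_tier"
    if weighted >= 4.0:
        return "standard_tier"
    return "rejected"
```

Keeping the thresholds outside the prompt is also what makes the eval framework practical: scoring drift shows up as movement in the weighted scores, not as silent changes in routing behavior.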

MEASURABLE OUTCOMES

- 60% reduction in review cycle time
- 98% routing accuracy vs. human baseline
- ~15 ops hours reclaimed per week

MID-MARKET SERVICES ORGANIZATION

Standing up an AI transformation function from zero

THE WORKFLOW PROBLEM

A multi-team services company knew they were behind on AI but had no internal capability, no roadmap, and no clear first wins. Leadership wanted real production deployments, not a pilot graveyard.

WHAT CHANGED

A 90-day audit and roadmap covering 12 candidate workflows. Prioritized three for immediate build: client deliverable QA, capacity planning, and contract intake. Shipped all three to production with adoption tracking and team training.

MEASURABLE OUTCOMES

- 3 production AI workflows shipped in Q1
- 40% reduction in deliverable QA cycle time
- 4 internal teams using the framework

SALES + SUPPORT OPERATIONS

AI call scoring at scale

THE WORKFLOW PROBLEM

A team handling thousands of customer calls per week had no scalable way to monitor quality. Manual QA covered ~5% of calls; the other 95% were a black box.

WHAT CHANGED

An automated call scoring system using Claude to evaluate every transcript against a multi-dimensional rubric, surface coaching moments, and flag at-risk accounts in real time. Dashboard for team leads, weekly trend reports for leadership.
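The post-model step of a system like this is mostly validation and triage: the model is asked to return JSON scores per rubric dimension plus signals, and code decides which calls surface as coaching moments or at-risk accounts. The dimensions and thresholds below are illustrative, not the client's rubric.

```python
import json

# Sketch of the triage step after a Claude transcript-scoring call.
# The model returns JSON like {"scores": {...}, "churn_signal": bool};
# this validates it and derives coaching/at-risk flags. Dimension
# names and the cutoff values are placeholders.

DIMENSIONS = ("empathy", "accuracy", "resolution", "compliance")

def triage_call(model_output: str) -> dict:
    data = json.loads(model_output)
    # Missing dimensions raise KeyError -> the call gets re-queued
    # rather than scored on partial data.
    scores = {d: float(data["scores"][d]) for d in DIMENSIONS}
    result = {"scores": scores, "coaching": [], "at_risk": False}
    for dim, score in scores.items():
        if score < 3.0:  # weak dimension -> surface as a coaching moment
            result["coaching"].append(dim)
    if scores["compliance"] < 2.0 or data.get("churn_signal"):
        result["at_risk"] = True  # flag the account for follow-up
    return result
```

Failing loudly on malformed model output is the design choice that makes 100% coverage trustworthy: a call either gets a complete score or gets retried, never a partial one.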

MEASURABLE OUTCOMES

- 100% call coverage vs. 5% manual baseline
- 12x increase in coaching insights surfaced
- 6 wks from kickoff to production

MULTI-BRAND PUBLISHER NETWORK · 8 PROPERTIES

Editorial intake and routing across eight brands

THE WORKFLOW PROBLEM

Editors at a publisher network were manually triaging briefs across eight properties — reading every submission, tagging it, and assigning it to the right desk. Routing took ~45 minutes per piece and misroutes were a weekly occurrence.

WHAT CHANGED

A unified intake form feeding an n8n + Claude pipeline that tags topic, brand fit, and tier, then routes the brief into the right desk's queue with a confidence score. Edge cases get auto-flagged for editor review instead of silently misrouting.

MEASURABLE OUTCOMES

- <2 min average routing time vs. ~45 min manual
- ~3x editor capacity unlocked per week
- 0 misrouted briefs in the first 90 days

PROFESSIONAL SERVICES FIRM · 40-PERSON TEAM

Pre-flight QA for client deliverables

THE WORKFLOW PROBLEM

Senior reviewers were spending ~10 hours a week eyeballing decks and docs for brand, structure, and consistency issues before they went to clients. Rework was constant; senior time was the bottleneck on every shipment.

WHAT CHANGED

A Claude-based pre-flight QA pass that scores every deliverable against a structured rubric, posts a Slack summary with line-level fixes, and only escalates to a human reviewer when something genuinely needs judgment.
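The Slack step of a pass like this reduces to formatting findings and applying one escalation rule: anything the rubric marks as a judgment call goes to a senior reviewer, everything else ships with its line-level fixes. The severity labels and message shape below are illustrative, not the deployed format.

```python
# Sketch of the summary/escalation step after the QA pass. Input is a
# list of rubric findings; output is a Slack-ready text summary plus
# an escalation decision. Severity labels and the rule are placeholders.

def summarize_findings(deliverable: str, findings: list[dict]) -> tuple[str, bool]:
    """findings: [{"line": int, "issue": str, "severity": "auto_fixable" | "judgment"}]"""
    # Escalate only when at least one finding genuinely needs judgment.
    escalate = any(f["severity"] == "judgment" for f in findings)
    lines = [f"Pre-flight QA: {deliverable} ({len(findings)} issue(s))"]
    for f in findings:
        lines.append(f"  L{f['line']}: {f['issue']} [{f['severity']}]")
    if escalate:
        lines.append("  -> escalated to senior review")
    return "\n".join(lines), escalate
```

Keeping the escalation rule this narrow is what protects the reclaimed senior hours: the default path is "fix and ship", and the human is pulled in only on the judgment-tagged findings.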

MEASURABLE OUTCOMES

- 70% of issues caught before senior review
- ~8 hrs of senior time reclaimed per week
- 45% drop in deliverable rework cycles

Want to talk through something similar?