Rafaella Ruiz · automation

Automation that gets out of the way of the work.

Three production systems I've shipped. What I built them with, what they actually do, and what I'd do differently next time.

Case 01 · Personal tool

career.eval — a one-click triage for job ads.

The problem

Job hunting in Switzerland means reading 30 to 40 ads a week and triaging by gut feel. Each one needs cross-checking against three CV variants (ops, media, hybrid) to decide which version to send. Manual triage was eating 20 to 30 minutes per ad before the actual application even started.

The architecture

Input: Job description (paste) · 3 stored CV variants
Process: 3 parallel Claude API calls · Structured rubric scoring
Output: Match % per CV · Gap analysis · CV recommendation

Single-page web app, runs entirely in the browser. Three tabs: paste a job description, store CV variants, store API key. One button. The Anthropic API key lives in localStorage, nothing leaves the browser except the API call itself.
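The one-button flow can be sketched as three parallel Messages API calls. A sketch under assumptions: the function names, rubric wording, and localStorage key are illustrative, not the shipped code.

```javascript
// One scoring prompt per stored CV variant (rubric wording is illustrative).
function buildRubricPrompt(jobAd, cvName, cvText) {
  return [
    'Score this CV against the job ad below.',
    'Return JSON: {"match_pct": 0-100, "gaps": [...], "verdict": "..."}',
    `CV variant: ${cvName}`,
    '--- CV ---',
    cvText,
    '--- JOB AD ---',
    jobAd,
  ].join('\n');
}

// Fire the three calls in parallel. The key is read from localStorage and
// leaves the browser only as a header on the Anthropic request itself.
// (Direct browser calls may also need Anthropic's CORS opt-in header.)
async function evaluateVariants(jobAd, variants, model) {
  const apiKey = localStorage.getItem('anthropic_api_key');
  const calls = Object.entries(variants).map(([name, text]) =>
    fetch('https://api.anthropic.com/v1/messages', {
      method: 'POST',
      headers: {
        'x-api-key': apiKey,
        'anthropic-version': '2023-06-01',
        'content-type': 'application/json',
      },
      body: JSON.stringify({
        model, // any current Claude model id
        max_tokens: 1024,
        messages: [{ role: 'user', content: buildRubricPrompt(jobAd, name, text) }],
      }),
    }).then((r) => r.json())
  );
  return Promise.all(calls); // one structured result per CV variant
}
```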

Tools

Frontend: HTML, vanilla JavaScript
AI: Anthropic API (Claude)
Storage: Browser localStorage

Outcome

Triage time per ad: under 60 seconds. The 20-to-30-minute manual cross-check became a one-click decision. Built and used in an active job search.

What I'd do differently

v1 sends each CV as a fresh API call. v2 should batch them into a single structured prompt with comparative scoring, both for cost and consistency. Adding a "saved evaluations" history would help me track which framings actually got responses, and feed that back into the rubric.
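That v2 batching idea could look like the sketch below: one structured prompt carrying all variants, so the model scores them against each other on the same rubric. Names and the JSON shape are hypothetical.

```javascript
// v2 sketch: one batched comparative call instead of three independent ones.
function buildComparativePrompt(jobAd, variants) {
  const cvBlocks = Object.entries(variants)
    .map(([name, text]) => `### CV "${name}"\n${text}`)
    .join('\n\n');
  return [
    'Score each CV below against the same job ad, on the same rubric,',
    'then rank them. Return JSON:',
    '{"scores": {"<cv name>": {"match_pct": 0-100, "gaps": [...]}},',
    ' "recommended": "<cv name>", "rationale": "..."}',
    '',
    '--- JOB AD ---',
    jobAd,
    '',
    cvBlocks,
  ].join('\n');
}
```

One call instead of three cuts cost, and because every variant is scored in the same context window, the match percentages are directly comparable rather than three independent gut checks.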

Side note: this is the tool I used to evaluate the role you're hiring for.

Case 02 · B2B SaaS · Learnship Networks

A LinkedIn content program that ran 5 to 7x benchmark.

The problem

A B2B EdTech company with a quiet LinkedIn presence, no consistent publishing rhythm, and no analytics loop closing the gap between "what we posted" and "what worked." Marketing wanted a structured program. The team was small, the production cost of original video and design was high, and nobody had time to close the loop manually each week.

The architecture

Plan: 4 content pillars · Notion calendar · AI-assisted ideation (Claude)
Produce: First draft (Claude) · Human edit & rewrite · Premiere / AE / Illustrator
Learn: Publish · LinkedIn analytics · Weekly review · Repeat / kill / test

A Notion calendar built around four pillars matching the buyer journey. Templates that let me produce a mix of static, motion, and short video without burning out the team. Claude for ideation and first-draft copy with human review and rewrite as the constant. A simple analytics tracker comparing reach, CTR, and engagement week-over-week. Weekly call on what to repeat, kill, or test next.

Tools

Planning: Notion (content calendar & pillars)
Production: Adobe Premiere Pro, After Effects, Illustrator
AI: Claude (ideation & first drafts)
Analytics: LinkedIn Analytics, manual weekly review

Outcome

63 posts over 12 months, two phases. 54,333 impressions. 9,231 clicks. Average CTR 15%. Average engagement 17%. B2B LinkedIn benchmarks sit at 2 to 3% CTR and around 2% engagement, so the program ran consistently 5 to 7x benchmark. 98% on-time publishing. Generated 10 inbound trainer applications and one direct sales lead attributed to LinkedIn visibility.

What I'd do differently

I'd close the analytics loop tighter, ideally automating the weekly digest into a Notion page so the "what worked" review takes 15 minutes instead of 90. I'd also build an experiment log earlier: we found the format-content combinations that worked, but I want them documented so the next person running the program doesn't have to rediscover them.
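The repeat/kill/test call itself is mechanical enough to automate. A minimal sketch, assuming per-post metrics exported from LinkedIn; the thresholds are illustrative, seeded from the benchmark figures above.

```javascript
// Benchmark figures from the B2B numbers cited above (mid-points).
const BENCHMARK = { ctr: 0.025, engagement: 0.02 };

// One verdict per post: clear win, clear miss, or mixed signal.
function verdict(post) {
  const ctr = post.clicks / post.impressions;
  const eng = post.engagements / post.impressions;
  if (ctr >= 2 * BENCHMARK.ctr && eng >= 2 * BENCHMARK.engagement) return 'repeat';
  if (ctr < BENCHMARK.ctr && eng < BENCHMARK.engagement) return 'kill';
  return 'test'; // mixed signal: change one variable and re-run
}

// The weekly digest is then just a table of titles and verdicts,
// ready to be written to a Notion page.
function weeklyDigest(posts) {
  return posts.map((p) => ({ title: p.title, verdict: verdict(p) }));
}
```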

Case 03 · Personal studio

Studio Ops — a video pipeline that runs itself.

The problem

Solo content production has a hidden tax: the admin between the creative steps. Folder setup, file routing, metadata writing, status tracking. Roughly four hours per video. None of it the work I want to be doing. So I automated the boring parts.

The architecture

Three n8n workflows, all triggered by status changes in a Notion video pipeline database. Notion is the state machine.

Workflow 01 (Status: Scripting): Notion fetch → Claude API (script) → Notion + Drive write
Workflow 02 (Status: Ready to Record): Drive folder structure → Naming logic → Notion update
Workflow 03 (Status: Recorded): Drive fetch (script) → Claude API (metadata) → Status: Ready to Upload

Tools

Orchestration: n8n (self-hosted)
State store: Notion (pipeline database)
File store: Google Drive
AI: Anthropic API (Claude)
Logic: JavaScript (parsing & folder rules)
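The naming logic in Workflow 02 is the kind of JavaScript that lives in an n8n code node. A sketch under assumptions: the naming scheme and subfolder set are illustrative, not the exact rules in the shipped workflow.

```javascript
// Deterministic folder name from the Notion record for one video.
function folderName(video) {
  // e.g. { episode: 7, title: 'Notion as a State Machine', date: '2024-11-03' }
  const slug = video.title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, '-') // collapse anything non-alphanumeric to '-'
    .replace(/^-|-$/g, '');      // trim leading/trailing dashes
  const ep = String(video.episode).padStart(2, '0');
  return `${ep}_${video.date}_${slug}`;
}

// The Drive structure Workflow 02 creates under that root.
function folderTree(video) {
  const root = folderName(video);
  return ['footage', 'audio', 'exports', 'thumbs'].map((d) => `${root}/${d}`);
}
```

Because the name is derived purely from the Notion record, every workflow that touches Drive can recompute the same paths instead of passing them around as state.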

Outcome

First episode through the full pipeline shipped end-to-end. Admin time per video: from ~4 hours to ~5 minutes, roughly a 98% reduction on the operational layer. Studio Ops v2 in active build.

What I'd do differently

v1 polls Notion for status changes; v2 should switch to webhooks for lower latency. The metadata stage could also benefit from a manual review gate: the Claude output is good but not always what I'd publish first try. And cost tracking on Claude calls is missing; the metadata step turned out more expensive than I expected.
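The missing cost tracking is a small amount of code: the Messages API returns a usage block with input and output token counts on every response. A sketch; the rates are parameters, not real prices, so look up current Anthropic pricing.

```javascript
// Cost of one call from the usage block the Messages API returns.
// ratesPerMTok: { input, output } in currency units per million tokens.
function callCost(usage, ratesPerMTok) {
  return (
    (usage.input_tokens / 1e6) * ratesPerMTok.input +
    (usage.output_tokens / 1e6) * ratesPerMTok.output
  );
}

// Running total per pipeline stage, to surface expensive steps
// like the metadata stage mentioned above.
function stageTotals(log) {
  return log.reduce((acc, { stage, usage, rates }) => {
    acc[stage] = (acc[stage] || 0) + callCost(usage, rates);
    return acc;
  }, {});
}
```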