Build Log

Week 1: Emails, Email Octopus, and API Keys in Plain Text

2026-03-04 · 11 min read

My app was live. The Stripe link worked. The landing page existed. And for weeks, I'd been stuck on the same thing: writing the emails that would actually launch this thing.

I ran public speaking workshops in London for 6 years. Hundreds of people came through. In 2025 I stopped the in-person sessions and spent months building an AI-powered version of the same methodology — an app called Confident You. The product was ready. What wasn't ready was me.

This is what the first week of actually preparing to launch looked like. Three evening sessions, three very different problems, one thread connecting them all: stop overthinking and ship.

Looking back now, the launch didn't go the way I'd hoped — zero sales, and the app is in passive maintenance. But this week was real, the lessons were real, and I'd do it all the same way again. Here's what happened.


Sunday: The Email Writing Session

I sat down on a Sunday afternoon with Claude (Anthropic's AI, running in their Cowork mode) and forced myself through the email campaign. Here's what it actually looked like — the internal hurdles, the decisions, and what I learned about using AI as a thinking partner rather than a content machine.

The Real Blocker Wasn't Writing. It Was Deciding.

I had email templates from a previous launch attempt — one planned for November that never happened. The copy existed. The segments were defined. I had a 700-line email strategy document sitting in a folder.

So why hadn't I sent anything?

Because every time I opened the drafts, I'd second-guess the tone, tweak one line, get distracted by a different segment, wonder if the pricing was right, and close the laptop. Classic builder trap — I was optimising in my head instead of shipping.

The first thing I did was tell Claude: "I have a couple of hours. Help me work through the emails. Be direct, keep me focused."

That framing mattered. I wasn't asking for "help writing emails." I was asking for a working session with a deadline.

What AI Is Actually Good At Here (And What It Isn't)

Here's what surprised me: the most valuable thing wasn't the AI writing emails for me. It was having something to think with.

The original email drafts were written with Claude back in November. They were fine. Competent. But when I read them, something felt off — and I couldn't quite articulate what. That's where the real value came in: I could verbalise my doubts, half-formed instincts, and "this doesn't feel right" reactions, and have a partner help me figure out why they didn't feel right and what to do about it.

Example 1: The "data dump" opening. The VIP email opened with "I was looking through our member data recently and saw that you attended 275 sessions." I read that and thought — this is weird. I know these people. Why am I opening like I ran a SQL query on them? But I wasn't sure what to replace it with. When I said "something like reminiscing about the past," Claude helped me land on a version that used the same data but felt like I was thinking about an old friend, not auditing a spreadsheet.

Example 2: Mentioning how much people paid. The T2 email had: "You paid £2,600 total with us." I flagged this — some of these people paid a lot, and reminding them of the number felt more like showing them a receipt than paying them a compliment. Through talking it out, we landed on a simple rule: session count stays (that's effort, commitment, growth), payment amount goes (that's just money). Obvious in hindsight, but I needed to say it out loud to someone before it clicked.

Example 3: The "builder" label. I wanted to call myself "a builder" in the reconnection email. I knew the word felt right to me but worried it might mean nothing to someone who doesn't listen to AI podcasts. Claude suggested something simpler — just add a few words of context: "I'm a builder at heart — I love creating tools that help people get better at things." No jargon, self-explanatory. That's the kind of micro-decision that takes 30 seconds with a thinking partner and 30 minutes alone in your own head.

The pattern was the same every time: I had the instinct that something was wrong, but I needed to verbalise it before I could fix it. The AI wasn't the one catching problems — I was. But having a partner to talk through the "why" and quickly generate alternatives meant I could act on my instincts instead of just sitting with vague discomfort until I closed the laptop.

The Process

The session had a rhythm:

  1. Claude drafted a full version based on my segments, data, and previous templates
  2. I read it as if I were the recipient and flagged what felt off
  3. We discussed the why — not just "change this line" but "why does this feel weird?"
  4. Claude proposed 2-3 options, I picked one or riffed on it
  5. I made final tweaks in my own voice, then we moved on

The key was that I stayed in the decision seat. Claude never just produced a final email and said "send this." Every email went through 2-3 rounds of "this bit doesn't sound like me" or "would a real person actually respond to this?"

Some of the best moments were when I pushed back and Claude agreed I was right. Like when I said the T3 and T4 emails were basically the same and we should merge them — Claude immediately said "you're right, the emotional distance is the same, just let the session count personalisation do the work." That saved me a whole email and simplified the schedule.

What I Shipped on Sunday

In about 3 hours:

  • 11 emails drafted (reconnection, offers for 4 segments, follow-ups, social proof, final call, welcome, response templates)
  • 5 emails fully finalised and ready to load into Email Octopus
  • A complete sending schedule for a 3-week campaign
  • A task tracker with everything mapped out
  • The whole project under git version control

Tuesday: Email Octopus and the Simplification

Two days later, I sat down to load everything into Email Octopus. And almost immediately, I stepped back and simplified the whole campaign.

I had 9 audience segments with different emails for each — and realised it was overengineered. These are all people who came to my workshops. They remember me. The tone barely changes between segments.

Consolidated to 2 segments: former paying members (~700) and guests/cold contacts. One sequence each. Hours of Email Octopus setup saved.

I shifted all sends from 9am to 7:30pm too. These are personal emails, not marketing. They belong in someone's evening, not fighting 50 work emails at 9am.

Nearly fell into the personalisation trap as well. I had the data to auto-insert "You attended X sessions" but half the list only came 4-5 times. "You attended 4 sessions" just highlights how little they did. Replaced it with something forward-looking — remind them of the feeling of practising and getting better, not the number.

All member emails loaded into Email Octopus by the end of the evening. Five days to launch.

Takeaway: Simplify until it feels obvious. If you're building different versions of the same thing for slightly different people, you're overthinking it.


Wednesday: Security Hardening

With the app functionally complete, I ran a full infosec review of Confident You before opening it up to real users. What I found was humbling.

The Audit

I ran a comprehensive security scan across the entire codebase — frontend, backend, infrastructure config, dependencies, and git history. The results came back with 3 critical, 7 high, 7 medium, and 6 low-severity findings.

The worst discovery? My API keys were sitting in plain text in git-tracked files. Not in .env (that was properly gitignored), but in documentation files I'd written to help myself set up AWS Secrets Manager. The irony of committing your secrets to a file called CREATE_SECRET.md is not lost on me. Deepgram, Anthropic, and Hume AI keys — all right there in the commit history.

The second critical finding was almost as bad: two of my most expensive API endpoints had no authentication. Anyone on the internet could POST to /api/transcribe and /api/feedback, burning through my Deepgram and Anthropic credits with zero accountability. These endpoints were the backbone of the app — transcription and AI coaching feedback — and they were completely open.

The Fix

I spent the session systematically working through the findings.

Immediate actions:

  • Rotated all three API keys across Deepgram, Anthropic, and Hume AI
  • Updated the new keys in AWS Secrets Manager
  • Redacted the old keys from the committed files
  • Added verifyAuth JWT middleware to both unprotected routes
  • Updated the frontend services to include the Authorization header (this was the part I initially missed — adding auth to the backend without updating the frontend to send tokens)

Hardening the API:

  • Added file upload size limits (25MB cap) and MIME type filtering to prevent memory exhaustion attacks on Lambda
  • Locked down CORS from wildcard * to only my actual domains
  • Added input validation on the feedback endpoint — transcript length caps, question length limits, and sanitised the lessonContext object to prevent prompt injection
  • Stripped all error.message details from client-facing error responses across every route. Internal errors now log server-side only
  • Deleted a debug endpoint (/api/verify-keys) that was leaking key prefixes
  • Ran npm audit fix across frontend and backend, bringing vulnerabilities down from 40+ to a handful that require major version bumps
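The input-validation item deserves a concrete shape. This is a hypothetical sketch of the kind of checks described — the field names, length caps, and allowlisted lessonContext keys are illustrative assumptions, not the app's actual values:

```javascript
// Illustrative limits — not the app's real numbers
const MAX_TRANSCRIPT_CHARS = 20000;
const MAX_QUESTION_CHARS = 500;
// Only known string fields from lessonContext ever reach the LLM prompt,
// so arbitrary user-supplied keys can't smuggle in injected instructions
const ALLOWED_CONTEXT_KEYS = new Set(['lessonId', 'title', 'skillFocus']);

function validateFeedbackInput(body) {
  if (typeof body.transcript !== 'string' || body.transcript.length === 0) {
    return { ok: false, error: 'transcript required' };
  }
  if (body.transcript.length > MAX_TRANSCRIPT_CHARS) {
    return { ok: false, error: 'transcript too long' };
  }
  if (typeof body.question === 'string' && body.question.length > MAX_QUESTION_CHARS) {
    return { ok: false, error: 'question too long' };
  }
  // Sanitise lessonContext: copy only allowlisted string values, truncated
  const lessonContext = {};
  for (const [key, value] of Object.entries(body.lessonContext || {})) {
    if (ALLOWED_CONTEXT_KEYS.has(key) && typeof value === 'string') {
      lessonContext[key] = value.slice(0, 200);
    }
  }
  return {
    ok: true,
    input: { transcript: body.transcript, question: body.question, lessonContext },
  };
}
```

The allowlist-and-truncate approach is the key design choice: rather than trying to detect malicious input, you define the small set of fields and sizes the prompt legitimately needs and drop everything else.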

The CORS lesson: After deploying, the app broke. Recording failed silently. Digging into the browser console revealed the issue — users were hitting www.app.confidentyou.training but I'd only whitelisted app.confidentyou.training in the CORS config. A single missing origin killed the entire app. Fixed the CORS config and set up a proper redirect so the www subdomain always points to the non-www version.
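The exact-string nature of origin matching is easy to show. The domains below are the real ones from this story; the helper itself is an illustrative sketch, not the deployed config:

```javascript
// Browsers compare origins as exact strings (scheme + host + port),
// so 'www.app.…' and 'app.…' are entirely different origins
const ALLOWED_ORIGINS = new Set([
  'https://app.confidentyou.training',
  'https://www.app.confidentyou.training', // the entry that was missing
]);

function corsHeadersFor(origin) {
  if (!ALLOWED_ORIGINS.has(origin)) return {}; // no CORS headers → browser blocks
  return {
    'Access-Control-Allow-Origin': origin,
    // Vary: Origin tells caches the response differs per requesting origin
    'Vary': 'Origin',
  };
}
```

Echoing the specific allowed origin back (instead of `*`) is what lets you keep a strict allowlist while still supporting more than one domain.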

What I Learned About Security

  1. Documentation files are code too. If it's in your repo, treat it like source code. I had the right instinct using Secrets Manager, but then documented the actual secrets right next to the setup instructions.
  2. Auth is end-to-end. Adding middleware to the backend is only half the job. The frontend needs to actually send the token. This seems obvious in hindsight, but when you're adding auth to previously-open endpoints, it's easy to forget the client side.
  3. CORS is unforgiving. app.example.com and www.app.example.com are completely different origins. One missing entry and your entire app goes down with a cryptic browser error.
  4. Security reviews before launch are non-negotiable. I nearly shipped an app where anyone could run up my API bills. A single afternoon of review probably saved me from a very expensive lesson.

The Week in Perspective

Three sessions. Maybe 8 hours of actual work. And the app went from "technically ready but not actually launchable" to genuinely ready for real users.

The through-line for the whole week was the same lesson in different disguises: simplify, then ship. The emails needed fewer segments, not better copy. The campaign needed two sequences, not nine. The security needed a systematic review, not a hope that nothing would go wrong.

AI doesn't replace your voice — it accelerates finding it. Every email went through rounds of "that doesn't sound like me." The AI handled structure, completeness, and catching blind spots. I handled tone, authenticity, and the final call on every line.

The biggest time-saver across the whole week was having someone to decide with. Solo builders don't have a marketing team or a CTO to bounce ideas off. Having a thinking partner that responds in seconds and doesn't get tired meant I could work through decisions that would have taken days of going back and forth in my own head.

If you're building something and stuck on the "just need to launch" part — I get it. Sometimes you just need to sit down, set a timer, and start making decisions instead of optimising them.