The 12 things your senior reviewer keeps re-explaining.

A marketing-ops team I run for a client sent 378 emails last year. The same six notes appeared on most of them. I caught every one. The junior requesters fixed every one. Six weeks later the next requester made the same mistakes. The pattern ran for a quarter, then a year, then the contract renewed.

This essay is about why that loop runs forever, and what to do about it that isn't "more training."

The pattern

The senior reviewer at most marketing-ops teams spends roughly half a day a week on the same notes. The hero image is the wrong dimensions. The first-name personalization token is missing its default value, so when the email lands at a contact without a first name on file, it reads "Hi ,", with the comma floating where the name should be. The hyperlink runs into the period at the end of the sentence, so the period becomes part of the clickable URL. The footer is the marketing footer when the email should have been routed as operational. The Spanish-speaking audience got targeted by including the Spanish locale instead of by excluding the others.

The same six to twelve issues, every week, on different campaigns, by different requesters, and the senior reviewer types the same correction for each one in 12 to 20 ClickUp comments per email. Multiply that by the number of emails the team sends in a week. The math is brutal.

The first instinct is to call this a training problem. New hires should know better. The brand team should publish a checklist. The senior reviewer should run a 30-minute lunch and learn. We have tried versions of all three. They make a small dent and then the dent fills back in.

Why training doesn't fix it

The reason is structural. The senior reviewer is the only feedback loop. A junior requester ships an email, the senior reviewer catches a mistake, the junior requester fixes it, and six weeks later that requester has internalized the correction and stops making it. Then a new junior requester joins, and the loop restarts at zero. The team is two people deep on the rules, and the institutional memory leaves with the laptop.

Even within one person's tenure, the rules are too many to hold in working memory. Twelve checks per email, plus the audience-segment rules, plus the program-flow rules, plus the operational-vs-marketing footer rules. The senior reviewer holds them because she's done it 5,000 times. The junior requester holds three of them and forgets four.

Training closes the gap on the requesters who stay. Most teams have meaningful turnover. Training never quite catches up.

The senior reviewer is the only feedback loop. The institutional memory leaves with the laptop.

The 12 things, in the order they show up

I went back through a quarter of ClickUp comments at one client and counted. The same twelve issues account for over 90% of the back-and-forth on email QA. Listed here so other marketing-ops leads can see if their list matches.

  1. Hero image is the wrong size. The team standard is 640 by 250. The requester pulled an asset that was 600 by 250 or 800 by 300 from the brand library and didn't resize. The email looks fine in the editor and broken in the inbox.
  2. Side-by-side images don't match the dimensions standard. The team uses 560 by 324 for two-up image rows. The requester used 540 because that's what fit nicely in the layout grid. Renders ragged.
  3. Title size is wrong. 26 pixels is the team standard. The requester used 24 because the editor's default is 24.
  4. Padding is wrong. 40 pixels left and right is the standard. The requester used 20, or 50, or zero, because the editor's defaults vary by template.
  5. The first-name personalization token has no default. The token reads {{lead.First Name}} instead of {{lead.First Name:default=Hi there}}. When a contact in the database has no first name on file, the email greets them with Hi ,. Looks broken.
  6. UTMs are missing on the campaign links. The requester pasted a clean URL into the CTA and forgot to append ?utm_source=&utm_medium=&utm_campaign=. The marketing analytics team will not be able to attribute the click. The campaign will look like organic traffic in the dashboard.
  7. Hyperlink runs into the period. The requester wrote Register at company.com/event. and made the entire string clickable, including the period. The period becomes part of the URL and breaks the destination.
  8. Spelling and grammar. Real example from the prototype: customrs instead of customers. The editor doesn't reliably catch it, and a plausible-looking word slips past a quick read.
  9. Footer is wrong for the email type. Operational emails need the operational footer (with the explicit license-based reason for sending). Marketing emails need the marketing footer with the unsubscribe and privacy. The requester picked the wrong template and the email goes out claiming the recipient subscribed to a list they didn't.
  10. Audience targeting uses the wrong filter. One client targets English-speaking audiences by selecting the English locale. The team rule is to target by excluding Portuguese, Spanish, and Russian, because that catches contacts who haven't set a preferred language. Includes-based filtering misses thousands of contacts. The senior reviewer catches this every time and the junior requester forgets every time.
  11. Send date overlaps with another campaign to the same audience. The requester didn't check the calendar. Two emails from the same brand land in the same inbox within 24 hours. Open rates take the hit and the unsubscribe rate spikes.
  12. Two-button spacing is wrong. The team standard is 50 pixels between primary and secondary CTAs. The requester used 30 because that's the editor's default. Looks crowded.
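Several of the items above are mechanical enough to script against the rendered email. A minimal sketch in Python of three of them (the token-default check, the UTM check, and the trailing-punctuation check); the function names and the exact matching rules are illustrative, not the agent's actual API:

```python
import re

def check_token_default(body: str) -> list[str]:
    """Check 5: flag personalization tokens with no :default fallback."""
    issues = []
    for token in re.findall(r"\{\{lead\.[^}]*\}\}", body):
        if ":default=" not in token:
            issues.append(f"token missing default: {token}")
    return issues

def check_utms(urls: list[str]) -> list[str]:
    """Check 6: flag campaign links missing any UTM parameter."""
    required = ("utm_source", "utm_medium", "utm_campaign")
    return [f"missing UTMs: {u}" for u in urls
            if not all(p in u for p in required)]

def check_trailing_punctuation(urls: list[str]) -> list[str]:
    """Check 7: flag linked URLs that swallowed end-of-sentence punctuation."""
    return [f"punctuation in URL: {u}" for u in urls
            if u and u[-1] in ".,;:"]
```

Each check returns a list of human-readable violations, which is what makes the structured ClickUp comment possible downstream: concatenate the lists, post one comment.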

Half of these would not be caught by Litmus. Most would not be caught by the editor's built-in QA. All twelve are caught by the senior reviewer, every time, and the senior reviewer is the bottleneck.

The deeper observation

This is not a training problem and it is not a tooling problem in the usual sense. It is a feedback-loop architecture problem.

The current architecture has one loop. Junior requester to senior reviewer to junior requester. The senior reviewer is the rule book. The senior reviewer is also the rate-limiter. Every email touches her. The team's throughput is bounded by how many emails she can review in a week.

The architecture that scales has two loops. A first loop runs in software, against a versioned rule pack, and posts structured feedback into the project tool the team already lives in. The junior requester sees the feedback within seconds, fixes the easy stuff, and only then triggers the second loop. The second loop is the senior reviewer, who now sees emails that have already cleared the rule-pack pass, and spends her time on the work that actually requires judgment.
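The first loop reduces to a few lines of control flow: run the rule pack, post structured feedback if anything fails, and hand off to the human only when the pass is clean. A sketch, with every name hypothetical:

```python
def first_loop(email, rule_pack, post_comment, escalate_to_reviewer):
    """Loop 1: deterministic rule-pack pass. Loop 2 (the reviewer) sees only clean emails."""
    violations = [msg for rule in rule_pack for msg in rule(email)]
    if violations:
        # Structured feedback lands in the project tool within seconds;
        # the requester fixes and resubmits before a human ever looks.
        post_comment(violations)
        return "returned-to-requester"
    escalate_to_reviewer(email)  # human judgment starts here
    return "escalated"
```

The point of the sketch is the return paths: the reviewer's queue only ever contains emails on the "escalated" path, which is what moves her one step downstream.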

The two-loop architecture is what the manufacturing industry figured out 50 years ago when they put statistical process control in front of the senior quality engineer. The engineer didn't go away. She moved one step downstream and started seeing higher-leverage work. The first loop didn't replace her. It protected her.

Marketing-ops is exactly that, with the rule pack as the SPC layer.

What we built

This is the work Email QA Agent does. The agent is a deterministic rule engine that reads a Marketo or HubSpot send-sample, runs your team's checklist against the rendered email, and posts a structured comment into ClickUp or Asana, addressed to the requester by name. The 12 issues above are the default checks. The rule pack is per-team, versioned, and tunable without engineering.
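A per-team rule pack can be plain data that a non-engineer edits, with the team standards from the list above as values. This structure is a hypothetical illustration, not the agent's actual schema:

```python
# Hypothetical rule-pack shape; values match the checklist standards above.
RULE_PACK = {
    "version": "2024-03",                 # versioned: every change is auditable
    "team": "acme-marketing-ops",         # per-team: each client gets its own pack
    "hero_image": {"width": 640, "height": 250},          # check 1
    "side_by_side_image": {"width": 560, "height": 324},  # check 2
    "title_px": 26,                                       # check 3
    "padding_px": {"left": 40, "right": 40},              # check 4
    "cta_gap_px": 50,                                     # check 12
    "audience_rule": {"mode": "exclude",                  # check 10: exclude,
                      "locales": ["pt", "es", "ru"]},     # never include
}
```

Keeping the pack as data rather than code is what makes "tunable without engineering" true: changing the title size from 26 to 28 is an edit, not a deploy.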

The senior reviewer doesn't go away. She moves one loop downstream. The work she keeps re-explaining stops needing to be re-explained.

If your senior reviewer is currently in the loop with junior requesters every day, the loop is the thing to change. Training tightens the loop for the people who stay. A two-loop architecture changes who's in the loop in the first place.

Adjacent reading. If you use Litmus today, the comparison page walks through how Email QA Agent fits in alongside it. Litmus owns rendering and deliverability. Email QA Agent owns the per-team checklist. Most marketing-ops teams that take this seriously will run both, with Email QA Agent as the first loop and Litmus as the second.
Mark Willson, account and subject-matter lead at Innovative Group. Mark has run marketing operations for B2B clients across Marketo and HubSpot since 2018 and is the subject-matter lead on Email QA Agent. He still QAs emails by hand for two of our clients while the agent runs for the rest.
Stop re-explaining the 12 things

See it run on your stack.

30 minutes, screenshare, no slideware. Send us your team's checklist and the most recent email you sent. We'll show you which items the rule pack would have caught.