The starting point: spending money and hoping for the best
When the client came to us, they had been running Meta ads for about eight months. They had a Shopify store, decent products, and a monthly ad budget around $8K. On paper, it should have been working.
It was not. ROAS was below break-even: they were spending more on ads than they were making in revenue from those ads. And the worst part was that they did not actually know how far underwater they were, because their tracking was broken.
Here is what we found in the first audit:
- No server-side tracking. They were relying entirely on the Meta pixel, which was missing roughly 40% of conversions due to iOS privacy changes and ad blockers. Their reported ROAS was bad; their actual ROAS, once we could finally measure it, turned out to be even worse, because the pixel had been over-counting some conversions and missing others entirely.
- No UTM structure. They could not tell which campaigns, ad sets, or creatives were driving actual purchases. Everything was lumped together in Google Analytics as "facebook / cpc" with no granularity.
- Broad targeting with no testing structure. Every campaign was running broad audiences with the same creative. No lookalikes, no exclusions, no separation between testing and scaling budgets.
- Zero email marketing. No abandoned cart flow. No post-purchase sequence. No winback. Nothing. Every customer who did not buy on the first visit was gone forever.
The client was not stupid. They just did not know what they did not know. And the agency that set this up originally had done the bare minimum and moved on.
Weeks 1-2: Rebuilding the foundation
Before we touched a single ad, we spent two weeks fixing the tracking infrastructure. This is the part nobody wants to do because it is invisible work that does not show up in revenue charts immediately. But it is the most important work we did.
Server-side Meta CAPI
We implemented Meta Conversions API (CAPI) through a server-side setup. This sends conversion data directly from the server to Meta, bypassing browser restrictions entirely. The result: we went from tracking about 60% of conversions to tracking 95%+.
This alone changed everything. Suddenly Meta's algorithm had accurate data to optimize against. It could see which people actually purchased, not just which people the pixel happened to catch.
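For context, here is a minimal sketch of the kind of server-side purchase event CAPI expects. The pixel ID, access token, and API version are placeholders, and a production setup would also pass more user identifiers and deduplicate browser and server events through a shared event ID.

```python
import hashlib
import time

import requests

# Placeholders: use your own pixel ID, CAPI access token, and a current API version.
PIXEL_ID = "YOUR_PIXEL_ID"
ACCESS_TOKEN = "YOUR_CAPI_ACCESS_TOKEN"
CAPI_URL = f"https://graph.facebook.com/v18.0/{PIXEL_ID}/events"


def hash_identifier(value: str) -> str:
    """Meta expects customer identifiers normalized (trimmed, lowercased) and SHA-256 hashed."""
    return hashlib.sha256(value.strip().lower().encode()).hexdigest()


def send_purchase(order_id: str, email: str, value: float, currency: str = "USD") -> None:
    """Send a single Purchase event from the server, bypassing the browser entirely."""
    event = {
        "event_name": "Purchase",
        "event_time": int(time.time()),
        "event_id": order_id,  # same ID on the browser pixel lets Meta deduplicate
        "action_source": "website",
        "user_data": {"em": [hash_identifier(email)]},
        "custom_data": {"value": value, "currency": currency},
    }
    response = requests.post(
        CAPI_URL,
        json={"data": [event], "access_token": ACCESS_TOKEN},
        timeout=10,
    )
    response.raise_for_status()
```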
UTM structure and GA4 events
We built a consistent UTM naming convention: source, medium, campaign name, ad set name, ad name, and ad ID all encoded in the URL parameters. Then we set up GA4 with proper e-commerce events — view_item, add_to_cart, begin_checkout, purchase — all firing correctly with revenue data.
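As an illustration (the exact field order and naming were tailored to this account, so treat this as a sketch rather than the template we shipped), a small helper like this can generate the URL that goes into each ad. The double-braced tokens are Meta's dynamic URL parameters, which get filled in at serve time with the campaign, ad set, and ad that drove the click.

```python
from urllib.parse import urlencode


def build_ad_url(landing_page: str) -> str:
    """Attach the UTM convention: source, medium, campaign, ad set, ad, and ad ID."""
    utm = {
        "utm_source": "facebook",
        "utm_medium": "paid_social",
        "utm_campaign": "{{campaign.name}}",
        "utm_content": "{{adset.name}}__{{ad.name}}",
        "utm_term": "{{ad.id}}",
    }
    # Leave the braces unencoded so Meta can substitute its dynamic parameters.
    return f"{landing_page}?{urlencode(utm, safe='{}')}"


print(build_ad_url("https://store.example.com/products/widget"))
```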
For the first time, the client could open Google Analytics and see exactly which ad was driving which sale. Not estimated. Not modeled. Actually tracked.
The 40% discovery
Once tracking was clean, we discovered that roughly 40% of previous conversions had been unattributed or misattributed. Some sales the client thought came from organic were actually from paid. Some sales attributed to paid were actually branded search. The picture was completely distorted.
You cannot optimize what you cannot measure. And most e-commerce brands are measuring badly enough that their optimization decisions are based on fiction.
Weeks 3-4: Creative and targeting reset
With tracking in place, we tore down the existing campaign structure and rebuilt it from scratch.
Audience restructure
We killed every broad audience campaign. In their place, we built:
- Lookalike audiences from actual purchasers (1%, 3%, and 5% lookalikes). Not from page visitors or add-to-carts — from people who actually bought.
- Interest-based audiences that we tested in small-budget ad sets before scaling. Each interest group got $20/day for 7 days. If it hit a 2x ROAS in that window, we scaled. If not, we killed it. (The rule is sketched in code after this list.)
- Exclusion audiences to prevent showing acquisition ads to existing customers. This seems obvious but the previous setup had no exclusions, meaning they were paying to show ads to people who already bought.
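The interest-audience rule is mechanical enough to write down. A sketch, assuming you already have spend and attributed revenue per ad set from the cleaned-up tracking:

```python
TEST_DAILY_BUDGET = 20   # dollars per day per interest ad set
TEST_WINDOW_DAYS = 7
MIN_TEST_ROAS = 2.0      # scale only if the ad set clears 2x in its test window


def interest_test_verdict(spend: float, revenue: float) -> str:
    """Decide an interest ad set's fate after its 7-day, $20/day test."""
    roas = revenue / spend if spend else 0.0
    return "scale" if roas >= MIN_TEST_ROAS else "kill"
```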
Campaign architecture
We split the account into two clean buckets:
- Testing campaigns. Low budget ($30-50/day), rapid creative testing, cost-per-purchase as the primary metric. This is where new angles, new hooks, and new creative formats get validated.
- Scaling campaigns. Higher budget, only proven winners from testing, optimized for incremental ROAS. Once a creative proved itself in testing, it graduated to scaling.
This separation is critical. If you test and scale in the same campaign, your learning data gets polluted. Scaling budgets need stable audiences and proven creative. Testing budgets need flexibility and fast iteration.
Weeks 5-8: Scaling what works
This is where the numbers started moving. With clean tracking and a proper testing framework, we could finally see what was actually working — and double down on it.
Our creative testing cadence was 3 new angles per week. Not 3 new images — 3 new angles. Different hooks, different pain points, different formats. Static vs. video. UGC vs. polished. Problem-focused vs. aspiration-focused.
Each creative entered the testing campaign. After 7 days and at least 1,000 impressions, we evaluated. Winners moved to scaling. Losers got killed. No emotional attachment, no "let's give it another week." The numbers decide.
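In code, that weekly review amounts to a gate like the one below. This is a sketch: the target cost per purchase is whatever break-even number you have worked out for the product, not something pinned down above.

```python
def creative_verdict(days_live: int, impressions: int,
                     cost_per_purchase: float, target_cpp: float) -> str:
    """Evaluate a test creative after 7 days and at least 1,000 impressions."""
    if days_live < 7 or impressions < 1000:
        return "keep testing"             # not enough data to judge yet
    if cost_per_purchase <= target_cpp:
        return "graduate to scaling"      # proven winner
    return "kill"                         # the numbers decide, no second chances
```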
The critical shift in this phase was measuring incremental ROAS instead of platform-reported ROAS. Meta will always tell you your ROAS is great, because Meta's attribution model is designed to make Meta look good. We cross-referenced Meta's reported conversions against our server-side data and GA4. The truth was usually 20-30% lower than what Meta claimed — but still profitable.
Budget allocation followed a simple rule: any ad set above 3x incremental ROAS got a 20% budget increase every 3 days. Any ad set below 1.5x for more than 5 days got cut. This created a natural selection process where money flowed to what worked.
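That allocation rule also reduces to a few lines. A sketch, assuming incremental ROAS is computed from the server-side revenue described above rather than from Meta's reported figure:

```python
def adjust_daily_budget(daily_budget: float, server_side_revenue: float,
                        spend: float, days_below_floor: int) -> float | None:
    """Apply the scaling rule on each 3-day review.

    Returns the new daily budget, or None when the ad set should be cut.
    """
    incremental_roas = server_side_revenue / spend if spend else 0.0
    if incremental_roas >= 3.0:
        return round(daily_budget * 1.20, 2)   # winners get a 20% raise
    if incremental_roas < 1.5 and days_below_floor > 5:
        return None                            # persistent losers get cut
    return daily_budget                        # everything else holds steady
```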
Weeks 9-12: Adding the email engine
By week 8, paid media was performing. But we were leaving money on the table with every visitor who did not buy on the first visit. That is where email came in.
We built four email flows in Klaviyo (the fixed-timer schedules are sketched in code after this list):
- Abandoned cart (3 emails). Email 1 at 1 hour (reminder with cart contents). Email 2 at 24 hours (social proof and FAQ). Email 3 at 48 hours (small discount). This single flow recovered 12% of abandoned carts.
- Post-purchase (4 emails). Order confirmation, shipping update, delivery follow-up with review request, cross-sell recommendation at day 14. This turned one-time buyers into repeat customers.
- Winback (3 emails). Triggered at 60, 90, and 120 days of no purchase. Each email escalated the incentive. The 90-day email with a 15% discount had a 4.2% conversion rate.
- Browse abandonment (2 emails). If someone viewed a product page 3+ times without buying, they got a gentle nudge. Lower conversion than cart abandonment, but still profitable.
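The flows themselves were built in Klaviyo's visual flow builder, not in code, but the schedules are easier to compare written out as data. Here are the two that run on fixed timers; the post-purchase and browse abandonment flows follow the same shape, keyed off fulfillment events and product-page views respectively.

```python
# Send delays for the two timer-driven flows described above (a sketch).
ABANDONED_CART_FLOW = [              # trigger: checkout started, no purchase
    {"delay_hours": 1,  "email": "reminder with cart contents"},
    {"delay_hours": 24, "email": "social proof and FAQ"},
    {"delay_hours": 48, "email": "small discount"},
]

WINBACK_FLOW = [                     # trigger: days since last purchase
    {"delay_days": 60,  "email": "winback, lightest incentive"},
    {"delay_days": 90,  "email": "winback, 15% discount"},
    {"delay_days": 120, "email": "winback, strongest incentive"},
]
```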
We also built retargeting layers in Meta that mirrored the email sequences. If someone abandoned a cart and did not open the email, they saw a retargeting ad within 24 hours. This multi-channel approach reduced CAC by 35% compared to paid-only acquisition.
The numbers
After 90 days, here is where we landed:
- $114K in attributed revenue (server-side tracked, cross-referenced with Shopify data)
- ROAS improvement of 128% from when we started
- CAC reduced from $67 to $38 — a 43% reduction
- Email contributing 22% of total revenue — from zero when we started
- Abandoned cart recovery rate of 12%
- Creative testing win rate of 23% — roughly 1 in 4 new creatives beat the control
The $114K was not a spike. It was a system. Month 4 came in at $42K. Month 5 at $48K. The machine kept running because we built infrastructure, not just campaigns.
Revenue engines are not built from one great ad or one lucky audience. They are built from tracking that works, creative that gets tested, and systems that catch the people who do not buy the first time.
What we would do differently
If I ran this engagement again from day one, two things would change:
Start email from day one. We waited until week 9 to launch email flows because we wanted to stabilize paid first. That was a mistake. Every visitor from week 1 through week 8 who abandoned a cart was a lost opportunity. Even a basic abandoned cart flow on day one would have recovered revenue while we fixed everything else.
Test more aggressively in weeks 1-2. We were cautious with creative testing early on because we wanted clean data first. But the tracking was good enough by mid-week 2 to start testing. We could have shaved a week off the ramp by running creative tests in parallel with the tracking rebuild.
Neither of these would have changed the outcome dramatically, but in a 90-day window, every week of revenue matters. And the lesson applies broadly: do not wait for perfect conditions to start testing. Start with good enough and improve as you go.
The client is still with us. The system still runs. And the math keeps working — not because we got lucky, but because we built something that compounds.