What's up, it's Zayd.

There's a version of outbound where every week looks like the last one. More messages out, some meetings booked, a dashboard that moves sideways. You get better at running the motion without ever getting smarter about your market. I'd guess most 0-$1M ARR teams I talk to are living in that version right now.

The problem is the structure. You can’t learn anything from a 10,000-send blast, because you haven't isolated anything. You've just done a lot of work and hoped the results tell you the story you want to hear (something about assumptions and donkeys).

Today I want to walk through a different way to run outbound, where the real output is the knowledge that compounds, not the meetings themselves. I'll use a batch of 100 prospects as the unit, because that's the size where you can actually think about what you're doing.

Zayd’s Picks

My favorite finds of the week

  • Free sales training from a $30k/year mastermind (link)

  • Expert framework for scaling SaaS products to 7-figures (link)

  • Secrets to structure and present SaaS pricing (link)

  • Don’t sell through users to get buyers (link)

  • Prompting in JSON or XML format increases LLM output by 10x (link)

  • The traditional outbound playbook is dead (link)

Start With A Belief

Most outbound motions start with a list. "We pulled 5,000 founders who raised in the last 90 days." Great? Now what?

A lab starts with a belief. You sit down, you write a sentence that begins with "I believe," and you finish it with something falsifiable.

A good one sounds like this: I believe that seed-stage founders who've raised in the last 60 days will respond to a message about hiring their first AE, because that's the pain they're feeling right now.

You can prove that wrong. You can prove it right. You can run it and find out. That's the whole point.

A bad one sounds like this: I believe we should message more founders. You cannot prove that wrong because you haven't said anything. You've just announced that you want to do more work.

Pull apart any great outbound batch and you'll find a specific belief underneath it. Pull apart a bad one and you'll find activity wearing a costume.

Build The Cohort To Match The Belief

Once you have a real belief, the list builds itself. You're trying to find the 100 people who most precisely match what you're testing, not just the biggest number.

If the belief is about founders who raised in the last 60 days, you're pulling exactly that. Crunchbase, recent funding filter, narrow ICP filter, 100 names. Done.
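If you want to sanity-check the pull in code instead of eyeballing a spreadsheet, the filter is one line. A minimal sketch, where the records, field names, and dates are all made up for illustration (in practice they'd come from your Crunchbase export or enrichment tool):

```python
from datetime import date

# Hypothetical prospect records -- stand-ins for a real data export.
prospects = [
    {"name": "Ada",  "stage": "seed",     "raised_on": date(2024, 5, 20)},
    {"name": "Ben",  "stage": "series_a", "raised_on": date(2024, 5, 1)},
    {"name": "Cara", "stage": "seed",     "raised_on": date(2024, 1, 15)},
]

def matches_belief(p, today=date(2024, 6, 1)):
    """The belief, as a filter: seed-stage founders who raised in the last 60 days."""
    return p["stage"] == "seed" and (today - p["raised_on"]).days <= 60

# Exactly the cohort the belief describes, capped at 100 names.
cohort = [p for p in prospects if matches_belief(p)][:100]
```

The point of writing the belief as a single predicate is that anyone on the team can read it and see precisely who is, and isn't, in the batch.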

The urge you have to fight is the urge to expand the cohort "to make sure we get enough data." This is the moment where the lab becomes the factory again. You water down the hypothesis, you water down the cohort, and at the end of the batch you have no idea what you learned because you were testing three things at once on four different kinds of prospect.

I've seen a lot of teams do this. I did it myself for years. A precise cohort of 100 will tell you something real. A fuzzy cohort of 1,000 will tell you that you did a lot of work.

Test One Thing, Keep Everything Else Still

This is where the discipline really hurts.

Inside your batch of 100, pick one variable. Something like: do people respond more to a message that references their funding, or a message that references a specific operational pain point? The rest of the message, the CTA, the follow-up cadence, the timing, the channel, everything else stays identical.

Most teams run five experiments inside one batch. They change the opener and the proof point and the send time and the follow-up and the channel, and then they look at the results and say, "Okay, something worked!" Something did work. They will never know what.

Isolate one variable. Move it. Measure it. Next batch, move a different one. Over the course of ten batches, you've mapped out ten distinct things about how your ICP behaves. That map is worth more than every individual meeting you booked along the way.
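Here's roughly what a one-variable split looks like if you script it. This is a sketch with hypothetical variant names, assuming you've already built the cohort; the only thing that differs between the two groups is the opener:

```python
import random

def split_cohort(cohort, seed=42):
    """Randomly assign each prospect to exactly one opener variant.
    Everything else (CTA, cadence, timing, channel) stays identical."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = cohort[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"funding_opener": shuffled[:half], "pain_opener": shuffled[half:]}

def compare(results):
    """results: {variant: list of reply labels like 'positive', 'negative', None}.
    Returns the positive-reply rate per variant (assumes non-empty lists)."""
    return {variant: sum(r == "positive" for r in replies) / len(replies)
            for variant, replies in results.items()}
```

The fixed seed matters more than it looks: next batch, when you move a different variable, you want to be sure any change in results came from the variable and not from how you happened to split the list.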

💡 LinkedIn Hack of the Week:

Asking for nothing in your connection request increases acceptance. The ask can come in the follow-up message.

Measure What Predicts Revenue, Not What Flatters It

Reply rate is the metric every team shows in their Friday update. It's also close to useless on its own. If 10% of people reply and 9 out of 10 of those replies are "unsubscribe" or "please stop emailing me," you have a 1% positive response rate and an active brand problem.

Track the metrics that matter:

  • Positive response rate: how many replies were interested rather than negative or neutral

  • Meeting booked rate: how many of those turned into something on the calendar

  • Show rate: how many showed up

  • Opportunity created: how many turned into something with real money attached

A 4% reply rate that produces three closed-won deals beats a 15% reply rate that produces none. The numbers at the top of the funnel are interesting. The numbers at the bottom are what your investors actually ask about.
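The funnel math is simple division, but it's worth writing down once so nobody fudges it in a Friday update. A minimal sketch with illustrative counts:

```python
def batch_metrics(sent, replies, positive, meetings, shows, opps):
    """Funnel metrics for one batch. 'reply_rate' flatters;
    the rest are closer to predicting revenue."""
    return {
        "reply_rate":    replies / sent,
        "positive_rate": positive / sent,   # interested replies only
        "meeting_rate":  meetings / sent,
        "show_rate":     shows / meetings if meetings else 0.0,
        "opp_rate":      opps / sent,
    }

# Example from the text: 100 sends, 10 replies, but only 1 positive.
# The 10% reply rate hides a 1% positive rate.
m = batch_metrics(sent=100, replies=10, positive=1, meetings=1, shows=1, opps=0)
```

Splitting `positive` out of `replies` is the whole trick: the batch above looks healthy on reply rate and looks like a brand problem on positive rate.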

Write The Thing Down

Here's the part almost nobody does, and it's the reason most sales teams never compound their learnings: they don't record what they just did.

Every batch gets a one-paragraph writeup. What was the belief? What was the cohort? What was the variable? What happened? What's the next experiment? Regardless of what tool you use, it’s the habit that matters.
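If the spreadsheet version ever graduates to something scripted, the writeup is just five fields. A sketch with hypothetical field names, one record per batch:

```python
from dataclasses import dataclass

@dataclass
class BatchWriteup:
    """One paragraph per batch: the five questions, as structured fields."""
    belief: str
    cohort: str
    variable: str
    result: str
    next_experiment: str

    def render(self):
        """Render the writeup as the one-paragraph text block you'd file away."""
        return (f"Belief: {self.belief}\n"
                f"Cohort: {self.cohort}\n"
                f"Variable: {self.variable}\n"
                f"Result: {self.result}\n"
                f"Next: {self.next_experiment}")
```

The structure is the point, not the tooling: forcing every batch through the same five fields is what makes the writeups comparable a quarter later.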

Run this for a quarter and you've got a document nobody else in your market has. Every batch, every variable you've tested, every answer you've gotten, all in one place. When you hire your next AE, you hand them that doc on day one and they ramp in half the time, because they're inheriting a real playbook instead of a set of half-remembered vibes from whoever was running outbound six months ago.

At Valley, this is how we think about our own outbound. Tight, hypothesis-driven batches. Everything measured. Everything written down. Our best customers run the same playbook (yes, we built a product that makes it a lot easier to run, but you can do the first version of this in a spreadsheet tomorrow).

🎁 Gift from Zayd:

The B2B Growth and Sales Creator Handbook

What You Actually Get Out Of This

After a quarter of running it this way, you know things about your market that your competitors simply don't. Which segments respond to which framing. Which pain points are real and which are vanity. Which channels your buyer actually uses, not the one everyone assumes they use. What your earned baseline response rate is, independent of whatever benchmark the last sales newsletter quoted.

That knowledge is a moat. Competitors can copy your emails in an afternoon. They cannot copy the three months of structured experiments you ran to figure out which emails to write.

Most 0-$1M ARR teams spend their first year drowning in activity and starving for insight. Ten batches of 100 this quarter, instead of one big blast of 10,000, is the trade that flips that. You'll send way fewer messages, you'll sleep better, and you'll end the quarter knowing more about your market than your investors do.

Almost nobody makes the trade. The ones who do win their category.

How we can work together 🏔️

  1. See more of Valley’s messaging examples, feel free to roast them: https://coolmessagebro.com/

  2. Generate more demos for your company using LinkedIn: https://meetings.hubspot.com/zayd-from-valley/tryvalley

  3. Become a Valley partner and get 20% recurring commission for every user you bring in: https://withvalley.notion.site/valley-affiliate-partner-program

Keep Reading