I used to spend my mornings doing research that didn’t require my judgment. Here’s what I built to stop.
I was doing BD the hard way
At Sablier, my BD motion depends on timing. We sell vesting and airdrop distribution infrastructure to Web3 projects approaching a token launch. That means the window to reach a prospect is narrow, often three to six months between a raise closing and a TGE going live. Miss it and you’re either too early (nobody’s thinking about TGE yet) or too late (they’ve already wired up a competitor).
So I handled the research the way most solo BD people do: manually. Every morning I’d scroll through X, piece together notes on team members and timelines that went stale within weeks as projects reshuffled roles or pushed back their TGE, and try to track down whoever was actually making tokenomics decisions. The system sort of worked. But it also meant the first two hours of my day went to tasks that had nothing to do with my judgment, just legwork anyone could have done.
I began by building prompts the simplest way possible, copying them back and forth between Google Docs and ChatGPT. Eventually I moved to Claude, breaking the work into distinct, reusable “skills” that I ran manually for months. Everything changed when OpenClaw launched. That’s when I shifted orchestration and CRM integration onto a single platform, turning a scattered workflow into a single system. Within a few weeks the entire stack was live and running in production.
This is the story of what I built, what broke along the way, and what you can take from my setup.
Every crypto BD has this problem
Most BD playbooks are built on a flawed assumption: that prospecting is a volume game. More outreach, more pipeline, more deals. That model collapses the moment timing becomes the deciding factor.
BDRs aren’t losing deals because they can’t hold a conversation. They’re losing because they’re consistently late, or early, or simply irrelevant. Reaching the right person matters less if you miss the moment that makes the message land.
Throughput was never my constraint. I still move through 50+ projects a week. The difference is I stopped pretending research was high-value work. The entire layer now runs without me. Same volume, no wasted motion, and my time only shows up where judgment actually matters.
What I built: three skills, one orchestrator, one CRM hook
The stack has three distinct layers. I built them incrementally, which I’d recommend. Running them separately first let me calibrate each one before tying them together.
Prospecting with AI
My prospecting trigger is a fundraising announcement. A fresh raise means capital and momentum. For some of those projects, it also means a TGE is now plausibly on the calendar.
That last point matters. Not every raise in the output is a token project, and this layer casts wide on purpose. The qualification layer below is where I filter for actual token relevance. The prospecting prompt is a signal sweep, not a verdict.
I run this skill against Grok’s native X access once a week via cron:
---
name: web3-fundraising-prospector
description: Scan X for web3 fundraising rounds announced in the past 7 days. Returns structured list of projects with handles, round details, investors, and token infra relevance.
---
# Web3 Fundraising Prospector
Collect and show web3 and crypto fundraising rounds announced on X in the past 7 days.
For each round include:
- Project name + @handle
- Round size and type
- Key investors (if mentioned)
- Why it matters for token infra / vesting / airdrop (1 line)
- Direct link to the announcement post
## Output Format (Strict)
Return results as a numbered list. Each entry must follow this structure:
1. **[Project Name]** (@handle)
- Round: [size] [type]
- Investors: [names or "not disclosed"]
- Token infra relevance: [1 sentence]
- Link: [URL]
## Behavior Constraints
- Only include rounds announced in the past 7 days
- Do not editorialize or rank projects
- If round size is not disclosed, write "undisclosed"
- Include both crypto-native and adjacent raises (AI infra, gaming, RWA) if token signals exist
A typical weekly execution returns somewhere between five and fifteen projects. Last time I ran it, the list included a $500M AI infrastructure raise, a $165M robotics round, a $45M crypto accounting platform, and a handful of smaller L1, DeFi, and prediction market seeds. Several of those won’t survive qualification. That’s expected.
Grok’s output comes from a language model working against live X data, not from a deterministic database query. The structured output is good most of the time, but it needs a sanity check before anything passes downstream. I read the weekly output, remove obvious noise, and kick the clean list into the qualification layer. That review takes around twenty minutes.
The output of this layer isn’t a pipeline. It’s a filtered starting list. The next layer is where I actually qualify.
Deal qualification with AI
Early on I tried using generic AI summaries of projects. That didn’t work. A summary tells you what a project does. A qualification tells you whether it’s worth a call and who to call.
For my BD motion at Sablier, a qualified prospect has to clear four questions: Is there a real token event on the horizon? When? Who controls the vesting or airdrop budget decision? And is there a path in through a shared investor on Sablier’s backer list?
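Those four questions map naturally onto a typed record. Here's a sketch of what one qualification brief looks like as data; the field values mirror the skill's strict schemas, but the class itself is my illustration.

```python
from dataclasses import dataclass, field
from typing import Literal

@dataclass
class QualificationBrief:
    """One prospect's qualification brief as a typed record (illustrative)."""
    project: str
    token_status: Literal["live", "pre-tge", "implied", "none"]
    urgency: Literal["high", "medium", "low"]
    tge_window: str                    # "<3m" | "3-6m" | "6-12m" | "unknown"
    funding_stage: str = "unknown"     # pre-seed | seed | series-a | later
    contacts: list[dict] = field(default_factory=list)  # max 2, per the skill
    warm_intro_match: bool = False
    recommendation: str = "Monitor"    # Pursue | Monitor | Disqualify
```

Keeping the shape fixed is what lets the downstream layers (deal evaluator, CRM) consume the output without reformatting.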
I built the Web3 BD Qualification Skill to enforce exactly that structure on every prospect. Here’s the full skill:
---
name: web3-bd-prospect-qualification
description: Qualify a single web3 company for token vesting, airdrop distribution, or allocation infrastructure. Focus on deal readiness, token timing, and decision-maker access. Optimized for sales/BD usage with structured, low-token output.
---
# Web3 BD Prospect Qualification
Act as a Web3 Business Development Analyst supporting sales.
Goal: Quickly decide:
1. Is this project a real token infra buyer?
2. Are they early enough to need vesting/airdrop tooling?
3. Who can we actually contact?
Avoid ecosystem education, protocol explanations, or generic web3 commentary.
## Operating Rules (Token Efficiency)
- Max 1–2 sentences per bullet
- Max 3 bullets per section
- If data is unknown → write `unknown` and move on
- Do not explain methodology
- Prefer signals > certainty
- Skip sections entirely if no signal found
## Workflow
### Step 1: Deal Relevance Snapshot
Quickly determine if the project is worth pursuing. Capture only high-signal facts:
- What they're building (1 line, non-technical)
- Likely token usage (governance, incentives, infra, etc.)
- Chain(s) (only if relevant to token ops)
- Target users (retail / devs / institutions)
### Step 2: Token Readiness Qualification
Focus on timing + pain, not theory.
**Output schema (strict):**
```
token:
status: live | pre-tge | implied | none
urgency: high | medium | low
tge_window: <3m | 3–6m | 6–12m | unknown
signals:
- max 3 short bullets
```
**Urgency guidance:**
- High → pre-TGE + airdrop/points/tokenomics hinted
- Medium → token discussed but no timeline
- Low → token live or no token signals
### Step 3: Funding & Buyer Credibility
Only collect what impacts budget + intros.
```
funding:
stage: pre-seed | seed | series-a | later | unknown
total_raised: number | unknown
notable_investors:
- max 3
```
Skip round-by-round detail unless directly relevant.
### Step 4: Decision Maker Discovery
Goal: 1 primary buyer, 1 backup. Do not list more than 2 people.
**Priority order:**
1. Founder / CEO / COO
2. Head of Tokenomics / Ops / Finance
3. Head of BD (only if founders unavailable)
```
contacts:
- name:
title:
relevance: decision-maker | owner | influencer
contact:
twitter:
linkedin:
telegram:
email:
```
If no individual found → write `No identifiable buyer found`.
### Step 5: Warm Intro Check
Only check investor overlap against the Sablier investor list.
```
warm_intro:
match: yes | no
investor_name: name | none
```
- If yes → draft 2-sentence intro ask
- If no → state "No direct investor match found"
## Final Output (Strict Format)
```
## [Project Name], BD Qualification Brief
**What they do:**
[1 sentence]
**Token Readiness:**
- Status:
- Urgency:
- TGE Window:
- Signals:
**Funding Snapshot:**
- Stage:
- Total Raised:
- Notable Backers:
**Recommended Buyer:**
- Name / Title
- Why this person
- Best Contact Channel
**Warm Intro:**
- Match:
- Details:
**BD Recommendation:**
[Pursue / Monitor / Disqualify], 1-line rationale
**Suggested Next Action:**
[Exact outreach step]
```
## BD Decision Rules
- **Pursue** → Pre-TGE + medium/high urgency + buyer identified
- **Monitor** → Token implied but timeline unclear
- **Disqualify** → Token live OR no token signals OR no buyer found
## Sablier Investor List (use only for matching, do not restate in output)
<list of investors>
Note on the investor list field: mine runs against Sablier’s backer network. If you’re adapting this skill, replace it with your own list. The cross-match is only as good as the list behind it, more on that in “What I Got Wrong.”
The constraints are intentional. The brief is capped at two sentences per bullet and three bullets per section, which keeps token usage low and makes each output scannable in under a minute. The structured format also means the output can pass directly into the deal evaluator and CRM without reformatting. When data is absent, the skill writes “unknown” instead of guessing, and contact priority runs founder first, then the Tokenomics or Ops lead, then BD.
The skill outputs the BD Recommendation directly. “Pursue,” “Monitor,” or “Disqualify” is the skill’s call. OpenClaw’s role downstream is to act on that output: route “Pursue” deals into the deal evaluator and CRM, park everything else. There’s a second gate in the Pipedrive step (Pursue must also have a confirmed warm intro to trigger the email draft), but the qualification decision itself lives in the skill.
Before this, a qualification brief took me roughly 45 minutes per prospect; now my part is scanning a finished brief in under a minute. The briefs are also more consistent than what I was producing manually, which matters when I’m trying to compare pipeline quality over time.
Deal value evaluation with AI
The qualification layer told me whether a project was worth pursuing. It didn’t tell me whether the deal was big enough to justify the cycle time. I kept having discovery calls with projects that looked interesting and turned out to have a deal ceiling too low to warrant the effort.
The Deal Evaluator runs before any calendar invite goes out. Here’s the full skill:
---
name: web3-deal-evaluator
description: Evaluate a web3 BD deal to determine applicable products (Vesting, Airdrop distribution) and calculate deal value. Use when given a project name, website URL, or BD research brief to estimate deal sizing based on investor count and Twitter followers.
---
# Web3 Deal Evaluator
Evaluate a business development deal to determine which product applies and calculate deal value.
## Accepted Inputs
- Project name
- Website URL
- Complete BD research brief
Input may be structured or unstructured.
## Information to Extract
From input, identify:
- **Investor count**, if mentioned or known
- **Airdrop status**, confirmed, implied, or not mentioned
- **Twitter followers**, only if airdrop is implied or confirmed
If information is missing, clearly state what could not be determined.
## Evaluation Rules
### Rule 1: Vesting Product
Apply only if investor count is known.
```
Product = Vesting
Value = $1 × investor_count × 12 × 4
```
### Rule 2: Airdrop Distribution Product
Apply only if airdrop is implied or confirmed.
```
Product = Airdrop Distribution
Value = $2 × (twitter_followers × 0.05)
```
### Rule Precedence
- If both rules apply → evaluate both, return both results
- If neither applies → return "Insufficient data for deal evaluation"
## Output Format (Strict)
```
## Deal Evaluation Summary
**Project:** [Name]
### Identified Signals
- Investors count: [number | unknown]
- Airdrop status: [confirmed | implied | not mentioned]
- Twitter followers: [number | unknown | N/A]
### Products Assigned
1. **Product:** Vesting
- Deal value: $[calculated]
- Calculation: $1 × [investors] × 12 × 4
2. **Product:** Airdrop Distribution
- Deal value: $[calculated]
- Calculation: $2 × 5% of [followers]
### Notes
[State assumptions or missing data]
```
## Behavior Constraints
- Never guess investor count or follower numbers
- Only calculate when required inputs are explicitly available
- Be concise and factual
- Do not add extra products or pricing logic
The numbers and project below are illustrative. I’ve adjusted the figures to show how the evaluator works without disclosing actual contract economics or client data.
The math is simple by design. For vesting: $1 per investor per month, across a 48-month vesting schedule ($1 × investors × 12 × 4). The $1 figure is my per-investor monthly fee at contract entry; 48 months is the standard vesting duration. For airdrop distribution: $2 per estimated recipient, where I use 5% of Twitter followers as a rough proxy for recipient count.
About that 5% proxy: I arrived at it by back-calculating from a sample of closed airdrop deals where I knew the actual recipient count and the project’s follower base at signing. Across those deals, 5% was a reasonable median. It’s wrong on any individual project, sometimes by a lot, but it’s useful enough as a floor estimate for routing.
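The two rules reduce to a few lines of code. Here's a sketch with my fee constants and routing threshold baked in; the constant names are mine, not part of the skill, and missing inputs produce no number rather than a guess.

```python
VESTING_FEE_PER_INVESTOR = 1    # $/investor/month at contract entry
VESTING_MONTHS = 48             # standard vesting duration (12 x 4)
AIRDROP_FEE_PER_RECIPIENT = 2   # $/estimated recipient
RECIPIENT_PROXY = 0.05          # ~5% of followers, back-calculated from closed deals
SENIOR_CALL_THRESHOLD = 8_000   # combined value needed for a first-touch senior call

def evaluate_deal(investors=None, followers=None, airdrop_implied=False) -> dict:
    """Mirror the evaluator's two rules; a missing input skips the rule."""
    out = {}
    if investors is not None:                      # Rule 1: Vesting
        out["vesting"] = VESTING_FEE_PER_INVESTOR * investors * VESTING_MONTHS
    if airdrop_implied and followers is not None:  # Rule 2: Airdrop Distribution
        out["airdrop"] = AIRDROP_FEE_PER_RECIPIENT * round(followers * RECIPIENT_PROXY)
    out["total"] = sum(out.values())
    out["senior_call"] = out["total"] >= SENIOR_CALL_THRESHOLD
    return out
```

Run against the Acme Corp inputs below (14 investors, 87,400 followers, implied airdrop), this returns the same $672 / $8,740 split and flags the deal as just clearing the senior-call threshold.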
Here’s what the evaluator returned on Acme Corp, a $22M L1 infrastructure raise:
- 14 known investors. Vesting value: $672.
- 87,400 Twitter followers with an implied airdrop. Airdrop distribution value: $8,740.
- Total pre-discovery estimate: $9,412.
That $9,412 is a small deal. The vesting component at $672 wouldn’t justify senior attention on its own. My current routing threshold for a first-touch senior call is $8,000 combined across products. Acme Corp cleared it, barely, and only because of the airdrop component.
I took the call for one reason specific to L1 projects: they typically run multiple airdrop rounds as their ecosystem grows, and a first deal often becomes the reference for a larger follow-on. If I’m building a network of L1 relationships, a $9k entry point can be worth the call. If I’m not, it probably isn’t. The evaluator gave me the number. I still had to decide what it meant.
Projects clearly below threshold go to a lighter-touch sequence. Projects well above it get my immediate attention. The evaluator’s job is to remove the guesswork from that routing decision.
Tying it together: OpenClaw as the orchestrator
Skills sitting idle in a browser tab aren’t a system. I needed something to run them on a schedule, chain their outputs together, and push results somewhere I’d actually see them.
OpenClaw is an open-source, self-hosted AI agent platform. It runs on my old Apple laptop, connects to Grok and Claude, gives agents real tools (shell, file access, web, APIs), and has a native cron scheduler where jobs persist across restarts. No external scheduling service. No vendor with access to my prospect data.
The orchestrator does three things on a schedule. Once a week: run the X fundraising prompt. After my manual review of that output, feed the clean handle list into the qualification skill and route “Pursue” outputs into the deal evaluator. Every morning at 7am: deliver a briefing with the day’s “Pursue” deals, their estimated values, and recommended buyers.
On costs: I run the whole stack on a $100/month Claude Pro subscription. For 50+ qualifications per week plus deal evaluations and the morning briefing, that covers everything. I’m currently looking at alternatives, partly for cost, partly because I want more control over model routing and fallback behavior.
The Pipedrive integration: where it becomes revenue
Connecting the orchestrator to my CRM is where it stopped being a research tool and started affecting revenue.
When the qualification skill outputs BD Recommendation: Pursue, OpenClaw automatically creates or updates a Pipedrive Deal with custom fields populated: Token Urgency, TGE Window, calculated deal value, recommended buyer, warm intro status. The full qualification brief attaches as a note. The deal label is set to “AI Qualified.”
If the same deal also has warm_intro: match: yes, a draft email fires. A Pursue recommendation alone creates the deal in the CRM. A warm intro match on top of it triggers the outreach draft.
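The two gates are simple to express in code. Here's a sketch of the routing logic, where `crm` and `drafter` stand in for the Pipedrive and email integrations; both are hypothetical adapters for illustration, not OpenClaw's actual API.

```python
def route_to_crm(brief: dict, crm, drafter) -> None:
    """Gate 1: a 'Pursue' recommendation creates/updates the CRM deal.
    Gate 2: 'Pursue' plus a warm intro match also triggers the email draft."""
    if brief.get("recommendation") != "Pursue":
        return  # Monitor / Disqualify stay out of the CRM entirely
    deal_id = crm.upsert_deal(
        name=brief["project"],
        fields={
            "token_urgency": brief["urgency"],
            "tge_window": brief["tge_window"],
            "deal_value": brief["deal_value"],
            "label": "AI Qualified",
        },
        note=brief["full_brief"],  # full qualification brief attached as a note
    )
    if brief.get("warm_intro_match"):
        drafter.create_intro_draft(deal_id, investor=brief["warm_intro_investor"])
```

The ordering matters: the deal record exists whether or not the draft fires, so a "Pursue" without a warm path still lands in the pipeline for manual outreach.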
The draft pulls the matched investor name, the qualification brief, and my email template (configured once in OpenClaw). Here’s the output it generated for Acme Corp:
Subject: Quick intro request, Acme Corp + Sablier vesting/airdrop infra
Hi [Name],
Hope you’re well. Reaching out regarding Acme Corp, where [Investor] led the recent $22M raise.
We’ve been tracking Acme Corp as they move toward mainnet and a confirmed TGE later this year. With the SAFE plus token warrant structure, a planned 10 percent community airdrop, and multiple stakeholder groups, it seems like they’re entering the phase where token vesting and distribution infrastructure becomes important.
We work with teams at this exact pre-TGE stage and thought Acme Corp could be a strong fit. Would you be open to an intro to [Founder] or the right person on their side?
Thanks,
The draft appears in Pipedrive as a scheduled Activity linked to the Deal and Contact. I see it, hit “Approve and Send” or make a quick edit. Once sent, the thread logs automatically.
Setup was about ten minutes of one-time configuration: Pipedrive OAuth and the email template.
The full flow
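Condensed, the whole system runs as two scheduled jobs. A pseudocode sketch of the end-to-end flow (names are illustrative, not OpenClaw's API):

```
weekly job (cron):
    raw       = run_skill("web3-fundraising-prospector")    # Grok sweep of X
    prospects = human_review(raw)                           # ~20 min, remove noise
    for p in prospects:
        brief = run_skill("web3-bd-prospect-qualification", p)
        if brief.recommendation != "Pursue": continue       # park Monitor/Disqualify
        brief += run_skill("web3-deal-evaluator", brief)
        pipedrive.upsert(brief)                             # + email draft if warm intro

daily job (cron, 7am):
    deliver_briefing(open_pursue_deals())                   # values + recommended buyers
```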
What I got wrong the first time
A few things broke or underperformed before I got to the version above.
The prospecting layer initially had no manual review step. I trusted the Grok output directly and flooded the qualification layer with projects that had no token relevance. Adding the twenty-minute human review before the qualification run was the single biggest improvement I made to the overall system. The qualification skill is fast, but it can’t fix a bad input list.
The deal evaluator took two iterations. The initial formula used figures that didn’t match my actual contract economics. After running it against roughly 40 closed contracts, I landed on the current numbers. They’re a floor estimate, and the 5% Twitter proxy is rough. Good enough for routing. Not good enough for telling anyone how much a deal is worth.
What I want to change next
Two things are on the list.
First, the orchestration layer. OpenClaw got me from zero to running, but I’m hitting its limits. I want to move to something like Hermes or n8n, a proper workflow orchestrator where I can build conditional branches, retry logic, and multi-step chains without scripting each one from scratch. Right now, if a qualification fails mid-run or Grok returns malformed output, I find out from a missing morning briefing, not from an alert. A real orchestrator would let me set fallback paths, run parallel evaluations, and get notified when something breaks instead of when I notice the silence. That’s the difference between a cron job and a workflow.
Second, the warm intro layer is too narrow. Right now it only cross-matches against Sablier’s investor list, which is a fixed set of VCs. That catches the obvious connections but misses everything else. I built a personal networking CRM called PingCRM that tracks my broader professional network, founders I’ve met at events, former colleagues, people I’ve helped with intros before. PingCRM already has an MCP server, so the integration path is straightforward: instead of checking one static list, the qualification skill would query PingCRM and look for any viable intro path through my full network. A second-degree connection through a founder I know well is often a warmer intro than a first-degree connection through a VC I’ve met once. The current setup can’t see that. PingCRM can.
What you can take from this
This setup works for me because my product has a narrow timing window and my BD motion depends on reaching the right person before a decision is made. If your situation looks similar, most of this transfers.
A few things before you start:
The prospecting layer needs a human review step, at least until you know its failure modes. Grok is a language model, not a database query. Feeding its raw output straight into an automated qualification pipeline amplifies noise.
The qualification skill only works if it reflects your actual criteria. Mine is built around token readiness, TGE timing, and warm intro paths because those are the three things that predict whether a Sablier deal closes. If you copy the template without rebuilding it around your own signals, you’ll get consistent output that’s consistently wrong for your business.
The investor list is curated, not automated. The sidekick cross-matches against it on every prospect, but the list itself still requires human upkeep. This setup won’t eliminate all manual work. It eliminates per-prospect research. The shared context underneath (investor relationships, deal history, formula calibration) still needs someone who knows your business to maintain it.
The deal evaluator is a routing tool, not a revenue forecast. It tells you whether a deal deserves a call and which product to lead with. It won’t tell you whether the deal closes at that number.
All three skill files are included above. They won’t work out of the box. The investor list, formula calibration, and email template all need to reflect your own business before you run them.
What it doesn’t replace
The sidekick doesn’t know whether a founder is actually going to ship. It can’t tell me if my pricing holds up against what they’re already evaluating, or whether the relationship with a shared investor is close enough to call in a favor.
What it removed is the two hours I used to spend every morning figuring out which calls were worth making at all. That was the actual problem. The research wasn’t hard. It was just relentless and it didn’t need me to do it.