Sales Strategies · 7 min read

How to Detect Buying Signals in Cold Email Replies With AI

Not all replies are equal. Learn how AI classifies cold email responses to surface buying signals your team would otherwise miss.


Michael Chen

Technical Writer


Your cold email campaign just got 47 replies. Your SDR opens the inbox and starts reading.

“Thanks, not interested right now.” Delete. “We actually just signed with a competitor.” Tag as lost. “Can you send more info?” Forward to AE.

That took 45 minutes. And they probably misclassified at least 5 of those replies.

Here’s the problem: humans are bad at reading between the lines at scale. The reply that says “not interested right now” might actually be a timing signal — they’re interested, just not this quarter. The one asking for “more info” might be a tire-kicker who’ll never buy. And the one-word “thanks” response that got ignored? That person clicked every link in your email before replying.

AI reply classification fixes this. Not by replacing human judgment on the important calls, but by making sure the important calls actually reach a human.

The Five Types of Cold Email Replies

Before you can detect buying signals, you need a classification framework. Every cold email reply falls into one of five categories:

1. Positive Intent

These are the obvious wins. The prospect explicitly expresses interest:

  • “This looks interesting, can we set up a call?”
  • “We’ve been looking for something like this”
  • “Send me a proposal”

Detection difficulty: Easy. Even basic keyword matching catches most of these.

2. Soft Positive (Hidden Buying Signals)

This is where most teams lose deals. The reply doesn’t explicitly express interest, but contains signals that indicate the prospect is engaged:

  • “What’s the pricing?” (They’re evaluating, not just curious)
  • “How does this compare to [competitor]?” (They’re in an active buying cycle)
  • “Not the right time, but check back in Q3” (They have budget cycles — this is a timing signal, not a rejection)
  • “Forwarding this to my colleague who handles this” (They’re routing you to a decision-maker)
  • “We use [current tool] for this right now” (They have the problem you solve and are open to switching)

Detection difficulty: Hard for humans at scale. Easy for AI with the right training data.

3. Neutral / Information Seeking

The prospect is gathering information but hasn’t signaled intent:

  • “Can you send more details?”
  • “What exactly does your product do?”
  • “How long have you been around?”

These aren’t buying signals. They’re research signals. The correct response is to provide information and move them toward a discovery call — but don’t prioritize them over soft positives.

4. Objection (Opportunity in Disguise)

Objections are not rejections. They’re engagement:

  • “We don’t have budget for this” (Budget objection — ask about timeline)
  • “We tried something similar and it didn’t work” (Past experience objection — differentiate)
  • “I’m not the right person” (Routing opportunity — ask for an intro)
  • “We’re locked into a contract until October” (Timing signal — set a reminder)

Detection difficulty: Medium. Keyword matching fails because the language overlaps with rejections. AI classification with context awareness handles this well.

5. Hard Negative

Genuine rejections with no recovery path:

  • “Remove me from your list”
  • “Not interested, please don’t email again”
  • “This is spam”
  • Auto-generated unsubscribe confirmations

Detection difficulty: Easy. These are unambiguous and should trigger immediate suppression.
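The five categories above can be encoded directly as a taxonomy with a default handling action per category. This is a minimal sketch; the category names and action strings are illustrative, not a fixed API:

```python
from enum import Enum

class ReplyCategory(Enum):
    POSITIVE = "positive"
    SOFT_POSITIVE = "soft_positive"
    NEUTRAL = "neutral"
    OBJECTION = "objection"
    HARD_NEGATIVE = "hard_negative"

# Default routing per category, per the framework above: positives and
# soft positives reach an AE, hard negatives trigger immediate suppression.
DEFAULT_ACTION = {
    ReplyCategory.POSITIVE: "route_to_ae",
    ReplyCategory.SOFT_POSITIVE: "route_to_ae",
    ReplyCategory.NEUTRAL: "sdr_nurture",
    ReplyCategory.OBJECTION: "sdr_follow_up",
    ReplyCategory.HARD_NEGATIVE: "suppress_contact",
}
```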

How AI Classification Works

AI reply classification isn’t magic. It’s pattern matching at a level of nuance that humans can’t sustain across hundreds of replies per day.

The Classification Pipeline

  1. Text extraction: Strip signatures, disclaimers, and forwarded thread content. Isolate the actual reply.

  2. Intent classification: NLP model scores the reply against the five categories above. Most models output a probability distribution — “72% soft positive, 18% objection, 10% neutral.”

  3. Signal extraction: Pull specific buying signals from the text:

    • Competitor mentions (active buying cycle indicator)
    • Timeline references (budget/planning cycle indicator)
    • Internal routing (“forwarding to…” = champion behavior)
    • Question specificity (pricing/integration questions > general questions)
  4. Behavioral context: Layer in engagement data from before the reply:

    • Email open count (3+ opens before replying = high interest)
    • Link clicks (clicked pricing page = evaluating)
    • Time-to-reply (same-day reply = urgency)
  5. Scoring and routing: Combine classification, signals, and behavior into a composite score. Route high-scoring replies to AEs immediately. Queue medium-scoring replies for SDR follow-up. Auto-handle negatives.
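The five-step pipeline can be sketched end to end in a few dozen lines. The weights, thresholds, and keyword patterns below are illustrative stand-ins for a trained model, not production values:

```python
import re

# Hypothetical signal weights -- in practice, tune these against your own data.
SIGNAL_WEIGHTS = {"competitor_mention": 25, "timeline_reference": 20,
                  "internal_routing": 20, "specific_question": 15}

def extract_reply(raw: str) -> str:
    """Step 1: drop the signature and quoted thread, keep the actual reply."""
    body = re.split(r"\n--\s*\n|\nOn .+ wrote:\n", raw)[0]
    return body.strip()

def extract_signals(text: str) -> list[str]:
    """Step 3: crude keyword stand-ins for a trained signal extractor."""
    t = text.lower()
    signals = []
    if re.search(r"\b(competitor|we use|evaluating)\b", t):
        signals.append("competitor_mention")
    if re.search(r"\bq[1-4]\b|\bnext (quarter|year)\b|\bcheck back\b", t):
        signals.append("timeline_reference")
    if "forwarding" in t or "right person" in t:
        signals.append("internal_routing")
    if re.search(r"\b(pricing|implementation|integration)\b.*\?", t):
        signals.append("specific_question")
    return signals

def composite_score(category_conf: float, signals: list[str],
                    opens: int, clicked_link: bool) -> int:
    """Step 5: combine classification, signals, and behavior into 0-100."""
    score = category_conf * 40                    # step 2: classifier confidence
    score += sum(SIGNAL_WEIGHTS[s] for s in signals)
    score += min(opens, 4) * 3                    # step 4: engagement data
    score += 10 if clicked_link else 0
    return min(round(score), 100)

def route(score: int) -> str:
    """Step 5 continued: high scores alert an AE, medium queue for SDRs."""
    if score >= 80:
        return "alert_ae"
    if score >= 50:
        return "sdr_queue"
    return "auto_handle"
```

Feeding the worked example from the next section through this sketch (89% soft-positive confidence, two high signals, four opens, one click) produces a score in the 90s and an immediate AE alert.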

What This Looks Like in Practice

Reply: “Thanks for reaching out. We’re actually evaluating a few options in this space right now. What does implementation look like?”

Human SDR classification: “Interested — wants more info” (routes to AE)

AI classification:

  • Category: Soft Positive (89% confidence)
  • Buying signals detected:
    • Active evaluation (“evaluating a few options”) — HIGH
    • Implementation question — HIGH (indicates they’re past research, into feasibility)
  • Behavioral context: Opened email 4 times, clicked case study link
  • Composite score: 94/100
  • Routing: Immediate AE alert with full context

The human got the right answer. But the AI got the right answer with context that makes the AE’s first response dramatically more effective.

Building Your Own Reply Classification System

Option 1: Rule-Based (Quick and Dirty)

Start with keyword and phrase matching:

  • Positive triggers: “interested,” “set up a call,” “demo,” “pricing,” “proposal”
  • Negative triggers: “unsubscribe,” “remove,” “not interested,” “stop emailing”
  • Objection triggers: “budget,” “contract,” “not the right person,” “tried before”
  • Timing triggers: “next quarter,” “Q3,” “after,” “check back,” “later this year”

This catches maybe 60% of signals accurately. It’s a starting point, not a solution.
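A minimal rule-based matcher over the trigger lists above looks like this. Note the check order: negatives must run first, because "not interested" contains the positive trigger "interested" — exactly the ambiguity that caps this approach:

```python
# Trigger phrases lifted from the lists above; extend with your own data.
TRIGGERS = {
    "negative": ["unsubscribe", "remove", "not interested", "stop emailing"],
    "timing": ["next quarter", "q3", "check back", "later this year"],
    "objection": ["budget", "contract", "not the right person", "tried before"],
    "positive": ["interested", "set up a call", "demo", "pricing", "proposal"],
}

def rule_classify(reply: str) -> str:
    """Return the first matching category; negatives deliberately win ties
    so "not interested" never scores as "interested"."""
    text = reply.lower()
    for category in ("negative", "timing", "objection", "positive"):
        if any(phrase in text for phrase in TRIGGERS[category]):
            return category
    return "neutral"
```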

Option 2: AI-Powered Classification

Use a fine-tuned language model trained on your historical reply data:

  1. Collect training data: Export 1,000+ replies from past campaigns with manual labels
  2. Train or fine-tune: Use a classification model (even a prompted LLM works well for this)
  3. Integrate with your sequencer: Auto-classify incoming replies and route based on score
  4. Feedback loop: Have AEs confirm or correct classifications to improve accuracy over time

This gets you to 85-90% accuracy within the first month, and 95%+ after the feedback loop kicks in.
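As a sketch of Option 2, here is a classical classifier (TF-IDF features plus logistic regression via scikit-learn) standing in for a fine-tuned LLM; the tiny inline dataset stands in for the 1,000+ labeled replies you would export from past campaigns:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Stand-in for the labeled export of historical replies.
replies = [
    "This looks interesting, can we set up a call?",
    "We've been looking for something like this",
    "What's the pricing? How does this compare to your competitors?",
    "Not the right time, but check back in Q3",
    "Can you send more details?",
    "What exactly does your product do?",
    "We don't have budget for this",
    "I'm not the right person for this",
    "Remove me from your list",
    "Not interested, please don't email again",
]
labels = ["positive", "positive", "soft_positive", "soft_positive",
          "neutral", "neutral", "objection", "objection",
          "hard_negative", "hard_negative"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(replies, labels)

def classify(reply: str):
    """Return (category, confidence) from the model's probability distribution."""
    probs = model.predict_proba([reply])[0]
    best = probs.argmax()
    return model.classes_[best], float(probs[best])

# Step 4 (feedback loop): when an AE corrects a label, append the reply and
# the corrected label to the training set and refit on a schedule.
```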

Option 3: Use a Purpose-Built Tool

Underfive handles reply classification natively. Every incoming reply is automatically classified, scored, and routed — with the buying signals extracted and surfaced for your AE team.

No training data required. No model fine-tuning. Connect your email accounts, and classification starts immediately using a model pre-trained on millions of B2B sales replies.

The Signals Most Teams Miss

After analyzing thousands of classified replies, these are the highest-value signals that human SDRs consistently overlook:

The “Wrong Person” Redirect

When a prospect says “I’m not the right person, you should talk to Sarah,” most SDRs email Sarah cold. The AI flags this as a warm intro opportunity — the original contact has implicitly endorsed you. The correct play is to ask the original contact to make an introduction.

The Competitor Mention

“We use [competitor] for this” feels like a rejection. It’s actually one of the strongest buying signals possible. They have the problem, they’ve allocated budget for it, and they’re telling you what to beat. AI classifies this as soft positive with a competitor displacement opportunity.

The Delayed Timing Signal

“Check back in Q3” gets filed away and forgotten by 80% of SDRs. AI classification flags this as a timing signal, calculates the optimal re-engagement date, and automatically schedules follow-up — no human memory required.
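Turning "check back in Q3" into a concrete re-engagement date is a small calculation once the quarter has been extracted. A minimal sketch, rolling into next year when the quarter has already passed:

```python
from datetime import date

def quarter_start(quarter: int, today: date) -> date:
    """Map a quarter mention ("check back in Q3") to a follow-up date:
    the first day of that quarter, next year if it has already passed."""
    month = (quarter - 1) * 3 + 1
    target = date(today.year, month, 1)
    if target <= today:   # e.g. Q3 mentioned in November -> July next year
        target = date(today.year + 1, month, 1)
    return target
```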

The Over-Researched Reply

When someone asks 3+ specific questions in a single reply, they’re not casually curious. They’re building an internal business case. AI detects question density as a high-intent signal and escalates accordingly.
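Question density itself is trivial to detect; the nuance is in the threshold and in weighting *specific* questions more heavily. A bare-bones sketch, counting question marks as a proxy:

```python
import re

def question_density(reply: str) -> int:
    """Count questions in a reply, using question marks as a rough proxy."""
    return len(re.findall(r"\?", reply))

def is_over_researched(reply: str, threshold: int = 3) -> bool:
    """3+ questions in one reply is treated as a high-intent signal."""
    return question_density(reply) >= threshold
```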

Measuring What Matters

Once you have AI classification running, track these metrics:

  • Signal-to-noise ratio: What percentage of replies contain actual buying signals vs. noise?
  • Classification accuracy: Are AEs confirming or overriding the AI’s routing decisions?
  • Time-to-AE: How quickly do high-intent replies reach a human closer?
  • Conversion by signal type: Which buying signals most reliably convert to meetings?
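Three of these metrics can be computed directly from a log of routing decisions. A sketch with an invented record shape — (had buying signal, AE confirmed the routing, minutes until an AE saw it, or None if it never reached one):

```python
# Hypothetical week of classified replies.
records = [
    (True, True, 4), (False, True, None), (True, False, 12),
    (True, True, 3), (False, True, None),
]

signal_to_noise = sum(r[0] for r in records) / len(records)   # replies with signals
accuracy = sum(r[1] for r in records) / len(records)          # AE-confirmed routing
times = [r[2] for r in records if r[2] is not None]
avg_time_to_ae = sum(times) / len(times)                      # minutes to a closer
```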

Most teams see a 20-30% increase in meetings booked within the first 60 days — not because they’re getting more replies, but because they’re handling existing replies better.

The Bottom Line

You’re already generating replies. The question is whether you’re reading them correctly.

AI reply classification doesn’t change your outbound volume. It changes your outbound yield. Every reply gets classified, every buying signal gets surfaced, and every high-intent prospect reaches a human closer within minutes instead of hours.

Stop leaving deals in your inbox. See how Underfive detects buying signals automatically →

Tags: buying signals, AI reply analysis, cold email, sales intelligence, lead scoring

