Twitter Competitor Tracking Without the $5,000/Month API Bill

Twitter's official API costs $5,000/month for full archive access. Most teams tracking competitors don't need the firehose — they need a handful of signals. Here's how to get them without the invoice.
What You're Actually Tracking
Before choosing a method, clarify what data matters. Most teams care about three things:
Content strategy — what competitors post, how often, which formats drive engagement, and what they're amplifying. Track post frequency, hashtag usage, and posting patterns to understand their content calendar.
Audience signals — who follows them, follower growth rate, and which accounts engage most. A competitor suddenly gaining 10,000 followers in a specific vertical is a signal worth investigating.
Strategic moves — product launches, partnerships, pricing changes. Competitors announce these on Twitter before press releases. Real-time monitoring catches moves 10–15 days ahead of quarterly reports.
The goal isn't copying tactics. It's understanding positioning so you can differentiate faster.
Method 1: Manual Setup with TweetDeck and Twitter Lists
Start here if budget is zero.
Create a private Twitter List containing all competitor accounts — company handles, executive accounts, product teams, customer support. Private lists are invisible to competitors; they won't know you're watching.
Configure TweetDeck columns for real-time streams: one per competitor handle, one for keyword searches, one for industry hashtags. Enable browser notifications on critical accounts so you catch announcements immediately.
Use Boolean operators to reduce noise. Instead of searching a bare competitor name, try:
```
(competitor1 OR competitor2) AND (launch OR update OR partnership) -giveaway -contest
```

This surfaces strategic announcements while filtering promotional clutter. Most teams underuse Boolean search and wonder why their results are noisy.
Limitations: no persistent storage, no historical data, requires manual review. Reasonable for monitoring 3–5 competitors casually. Unscalable beyond that.
Method 2: Third-Party Scraping APIs
Scraping services bypass API restrictions by accessing Twitter's public interface through proxy networks. They handle anti-bot friction, rate limiting, and data normalization. You call an endpoint and receive structured JSON.
ScrapeBadger covers 39 Twitter endpoints — competitor profiles, timelines, follower lists, engagement data, keyword search. No rate limits on the data side. Credit-based pricing at $0.10 per 1,000 items.
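At that rate, back-of-envelope spend is simple to estimate. The function below is illustrative arithmetic, not part of ScrapeBadger's API:

```python
def monthly_cost(items_per_day: int, rate_per_thousand: float = 0.10) -> float:
    """Estimate monthly spend under credit-based pricing at $0.10 per 1,000 items."""
    return items_per_day * 30 / 1000 * rate_per_thousand

# Tracking ~10,000 items/day lands around $30/month
print(monthly_cost(10_000))
```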
The pipeline for competitor timeline tracking is straightforward:
```python
import asyncio
import os

from scrapebadger import ScrapeBadger


async def track_competitor_tweets(handle: str, max_items: int = 100):
    async with ScrapeBadger(api_key=os.getenv("SCRAPEBADGER_API_KEY")) as client:
        stream = client.twitter.users.latest_tweets(handle, max_items=max_items)
        results = []
        async for tweet in stream:
            metrics = tweet.get("public_metrics") or {}
            results.append({
                "id": tweet.get("id"),
                "text": tweet.get("text"),
                "likes": metrics.get("like_count", 0),
                "retweets": metrics.get("retweet_count", 0),
                "created_at": tweet.get("created_at"),
            })
        return results


tweets = asyncio.run(track_competitor_tweets("competitor_handle"))
```

Run this daily via cron and store results in SQLite. After two weeks, you have a trend dataset — posting frequency, average engagement per post, content format patterns.
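The storage step might look like the following sketch; the table name and schema are illustrative, chosen to match the fields collected above:

```python
import sqlite3


def store_tweets(db_path: str, tweets: list[dict]) -> int:
    """Insert scraped tweets into a local SQLite table; returns rows newly added."""
    conn = sqlite3.connect(db_path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS tweets (
            id TEXT PRIMARY KEY,
            text TEXT,
            likes INTEGER,
            retweets INTEGER,
            created_at TEXT
        )
    """)
    before = conn.execute("SELECT COUNT(*) FROM tweets").fetchone()[0]
    # INSERT OR IGNORE makes the daily cron run idempotent: re-scraped
    # tweets with a known id are skipped instead of raising an error.
    conn.executemany(
        "INSERT OR IGNORE INTO tweets VALUES (:id, :text, :likes, :retweets, :created_at)",
        tweets,
    )
    conn.commit()
    after = conn.execute("SELECT COUNT(*) FROM tweets").fetchone()[0]
    conn.close()
    return after - before
```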
Beyond timelines, a few endpoints are particularly useful for competitor analysis:
- User profile lookup — grabs bio, follower count, and verification status as a baseline
- Follower data — spots follower growth spikes before they show up in aggregate stats
- Tweet replies — reveals audience sentiment on competitor content without needing their internal analytics
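Follower growth spikes only become visible if you snapshot counts over time. A sketch, assuming you store `(date, follower_count)` pairs from periodic profile lookups:

```python
def follower_velocity(snapshots: list[tuple[str, int]]) -> float:
    """Average follower change per interval between chronological snapshots."""
    if len(snapshots) < 2:
        return 0.0
    deltas = [b[1] - a[1] for a, b in zip(snapshots, snapshots[1:])]
    return sum(deltas) / len(deltas)


def growth_spike(snapshots: list[tuple[str, int]], factor: float = 3.0) -> bool:
    """Flag when the latest interval grew `factor`x faster than the prior baseline."""
    if len(snapshots) < 3:
        return False
    latest = snapshots[-1][1] - snapshots[-2][1]
    baseline = follower_velocity(snapshots[:-1])
    return baseline > 0 and latest > factor * baseline
```

The `factor` threshold is a starting point; tune it once you know a competitor's normal weekly growth.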
For systematic mention tracking, the advanced search endpoint accepts full query syntax:
"CompetitorBrand" -from:competitorhandle # What others say about them
from:competitorhandle -is:retweet # Only their original content
#CompetitorCampaign min_faves:50 # High-engagement campaign tweetsOther options worth evaluating: Apify (serverless actors with proxy rotation), Bright Data (large residential IP pool). All deliver similar data freshness at slightly different price points and infrastructure models.
Method 3: Social Listening Platforms
Sprout Social, Hootsuite Insights, and Brandwatch bundle monitoring with analytics dashboards, sentiment tracking, and automated reports. These run $200–500/month but eliminate technical setup.
Sprout Social's competitor benchmarking compares your metrics against up to 20 competitors across follower growth, engagement rate, and posting frequency. Hootsuite Insights offers persistent Boolean search streams with CSV export. Brandwatch adds AI-powered trend clustering and share-of-voice analysis.
Setup is straightforward: add competitor handles, define keyword lists, set alert thresholds for unusual activity (follower spikes, viral content, negative sentiment spikes), schedule weekly summary reports.
The trade-off: these platforms treat Twitter as one channel among many. If you only need Twitter competitor data, you're paying for features you won't use. If you're monitoring across Instagram, LinkedIn, and YouTube simultaneously, consolidated reporting justifies the cost.
Method 4: Automated Keyword Monitoring Bots
Build a lightweight bot that searches Twitter hourly, deduplicates results against a local store, and fires Slack alerts on new matches. This catches competitor activity the moment it happens.
```python
import asyncio
import os
import sqlite3

import requests
from scrapebadger import ScrapeBadger

SLACK_WEBHOOK = os.getenv("SLACK_WEBHOOK_URL")


def setup_db(conn):
    conn.execute("CREATE TABLE IF NOT EXISTS seen (tweet_id TEXT PRIMARY KEY)")
    conn.commit()


def send_alert(text: str):
    if SLACK_WEBHOOK:
        requests.post(SLACK_WEBHOOK, json={"text": text}, timeout=10)


async def monitor_keywords(keywords: list[str], max_per_keyword: int = 50):
    conn = sqlite3.connect("seen_tweets.db")
    setup_db(conn)
    async with ScrapeBadger(api_key=os.getenv("SCRAPEBADGER_API_KEY")) as client:
        for keyword in keywords:
            stream = client.twitter.tweets.search_all(keyword, max_items=max_per_keyword)
            async for tweet in stream:
                tweet_id = str(tweet.get("id") or "")
                if not tweet_id:
                    continue
                try:
                    conn.execute("INSERT INTO seen VALUES (?)", (tweet_id,))
                    conn.commit()
                    send_alert(f"New mention [{keyword}]: {tweet.get('text', '')[:200]}")
                except sqlite3.IntegrityError:
                    pass  # Already processed
    conn.close()


if __name__ == "__main__":
    asyncio.run(monitor_keywords([
        "competitor1 launch",
        "competitor2 pricing",
        '"CompetitorBrand" partnership',
    ]))
```

Schedule this via cron at whatever interval matches your urgency. Product launches and pricing changes warrant 15-minute checks; broader market keywords are fine hourly or daily.
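A crontab entry for the 15-minute cadence might look like this; the script path and log location are placeholders:

```shell
# Run the keyword monitor every 15 minutes; append output to a log for debugging
*/15 * * * * /usr/bin/python3 /path/to/keyword_bot.py >> /var/log/keyword_bot.log 2>&1
```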
Noise is the main problem at scale. Set minimum engagement thresholds in your normalization step and add negative keywords (-giveaway -contest -crypto) to queries upfront.
Comparison: Which Method Fits Your Situation
| Method | Pros | Cons | Cost | Best For |
|---|---|---|---|---|
| Manual TweetDeck | Free, zero setup, no maintenance | No storage, doesn't scale, requires manual checking | $0 | Solo operators, 3–5 competitors max |
| Scraping APIs (e.g., ScrapeBadger) | Structured data, programmatic access, full endpoint coverage | Requires coding, some setup time | $10–50/month | Developers building custom pipelines |
| Social Listening Platforms | No code, multi-channel, dashboards and reports included | Expensive, features often exceed scope | $200–500/month | Marketing teams needing reports and multi-platform coverage |
| Automated Keyword Bots | Real-time alerts, scales to 50+ keywords, cheap | Moderate build time, noise filtering requires tuning | $10–30/month | Teams that need instant alerts on specific signals |
Noise Filtering That Actually Works
The biggest complaint about competitor monitoring is irrelevant results. Fix this before it becomes a problem.
Exclude retweets. Add -is:retweet to API queries or filter in code. Retweets rarely surface new information.
Set engagement floors. Ignore tweets with under 5 likes unless they're from verified accounts or specific handles you care about. Eliminates most spam.
Language filtering. If you operate in English-speaking markets, restrict to lang:en. This cuts volume 40–60%, depending on the competitor's international presence.
Hard exclusion list. Terms like giveaway, contest, follow me, airdrop generate constant false positives. Add them to your negative keyword list on day one.
After 2–3 weeks, audit your results. If 80% of collected data is actionable, your filters work. If not, tighten thresholds.
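The rules above collapse into one predicate applied during normalization. A sketch — field names (`text`, `lang`, `likes`) are assumptions based on the pipeline shown earlier, and the verified-account exception is omitted for brevity:

```python
EXCLUDED_TERMS = ("giveaway", "contest", "follow me", "airdrop")


def keep_tweet(tweet: dict, min_likes: int = 5,
               allowed_langs: tuple[str, ...] = ("en",)) -> bool:
    """Apply the hard exclusion list, language filter, and engagement floor."""
    text = tweet.get("text", "").lower()
    if any(term in text for term in EXCLUDED_TERMS):
        return False
    if tweet.get("lang") not in allowed_langs:
        return False
    return tweet.get("likes", 0) >= min_likes
```

Retweet exclusion is cheaper done in the query itself (`-is:retweet`), so it is left out of the predicate.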
Metrics Worth Tracking
Not everything is worth measuring. Focus on what changes decisions.
- Posting frequency and time patterns — reveals resource allocation and audience timezone targeting
- Average engagement rate per post — sudden spikes signal campaigns worth studying
- Follower velocity — growth rate per week distinguishes organic traction from paid activity
- Hashtag distribution — which hashtags drive engagement vs. which get ignored
- Strategic announcements — partnerships, launches, hiring posts; these matter more than any engagement metric
Export weekly. Store in spreadsheets or a database. Look for 30–90 day trends, not daily noise.
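As a sketch, a weekly rollup over whatever store you keep (record shape assumed: `created_at` as an ISO timestamp string, `likes` as a count) surfaces multi-week trends without daily noise:

```python
from collections import defaultdict
from datetime import date


def weekly_rollup(posts: list[dict]) -> dict[str, dict]:
    """Group posts by ISO week, reporting post count and mean likes per week."""
    buckets: dict[str, list[int]] = defaultdict(list)
    for post in posts:
        d = date.fromisoformat(post["created_at"][:10])
        iso = d.isocalendar()
        buckets[f"{iso[0]}-W{iso[1]:02d}"].append(post.get("likes", 0))
    return {
        week: {"posts": len(likes), "avg_likes": sum(likes) / len(likes)}
        for week, likes in buckets.items()
    }
```

Comparing these weekly rows across 30–90 days shows whether a competitor's posting cadence or engagement is actually trending, not just fluctuating.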
Practical Starting Point
Pick the method that matches your current skill level and data volume. Non-technical teams start with a social listening platform and get something running same-day. Developers start with ScrapeBadger for custom pipelines — the docs cover all 39 endpoints with examples. Solo operators start with TweetDeck and Boolean search for free.
Run a 30-day pilot tracking 3–5 competitors before scaling. Measure how often the collected data actually influences a decision. If it does, expand scope. If you're collecting data nobody reads, narrow the focus.
Competitor tracking only works if it changes behavior. The signal that matters is the one that moves your next decision faster.

Written by
Thomas Shultz
Thomas Shultz is the Head of Data at ScrapeBadger, working on public web data, scraping infrastructure, and data reliability. He writes about real-world scraping, data pipelines, and turning unstructured web data into usable signals.