The official Python client for ScrapeBadger. Fully type-hinted, async-ready, and designed for data pipelines. Install with pip install scrapebadger.
Install the SDK: pip install scrapebadger
Get your API key from scrapebadger.com/dashboard/api-keys
Initialize the client: ScrapeBadger(api_key="YOUR_KEY")
Call any Twitter endpoint through the typed client interface
Handle pagination automatically with built-in cursor support
Check the docs at docs.scrapebadger.com for full API reference
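Step 6's built-in cursor support hides a standard loop: request a page, read the next cursor from the response, and repeat until the cursor is empty. A minimal sketch of that pattern with a stand-in fetch function (the real SDK does this for you; `fetch_page` and the `next_cursor` field name are illustrative assumptions):

```python
def fetch_page(cursor=None):
    """Stand-in for a paginated API endpoint (hypothetical data)."""
    pages = {
        None: {"data": [1, 2], "next_cursor": "c1"},
        "c1": {"data": [3, 4], "next_cursor": "c2"},
        "c2": {"data": [5], "next_cursor": None},
    }
    return pages[cursor]

def iterate_all():
    """Walk every page by following next_cursor until it runs out."""
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page["data"]
        cursor = page["next_cursor"]
        if not cursor:
            break

print(list(iterate_all()))  # items from all three pages
```

The quickstart below relies on the SDK doing exactly this loop internally, which is why a single call can return the full follower list.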
from scrapebadger import ScrapeBadger
sb = ScrapeBadger(api_key="YOUR_API_KEY")
# Get user profile
user = sb.twitter.users.get_by_username("elonmusk")
print(f"{user['data']['name']}: {user['data']['followers_count']} followers")
# Search tweets
results = sb.twitter.tweets.advanced_search(
query="web scraping API lang:en",
query_type="Latest"
)
# Get followers with auto-pagination
followers = sb.twitter.users.get_followers("elonmusk")
for f in followers["data"]:
    print(f["username"], f["followers_count"])

Does the SDK support both sync and async usage?
Yes. The SDK provides both sync and async interfaces. Use AsyncScrapeBadger for async/await patterns.
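The async client mirrors the sync one, with `await` in front of each call. A minimal sketch of the pattern, using a stand-in coroutine in place of a real AsyncScrapeBadger request (the method path follows the sync example above and is an assumption):

```python
import asyncio

async def get_by_username(username):
    """Stand-in for sb.twitter.users.get_by_username on the async client."""
    await asyncio.sleep(0)  # placeholder for the real HTTP round trip
    return {"data": {"name": username, "followers_count": 0}}

async def main():
    # With the real SDK this would look like (names assumed):
    #   sb = AsyncScrapeBadger(api_key="YOUR_API_KEY")
    #   user = await sb.twitter.users.get_by_username("elonmusk")
    user = await get_by_username("elonmusk")
    return user["data"]["name"]

print(asyncio.run(main()))
```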
Which Python versions are supported?
Python 3.8 and above. We recommend Python 3.10+ for the best type hint support.
Does the SDK retry failed requests?
Yes. The SDK automatically retries failed requests with exponential backoff.
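Exponential backoff means each retry waits roughly twice as long as the last (e.g. 1s, 2s, 4s), usually with random jitter so many clients don't retry in lockstep. The SDK handles this internally; the underlying pattern looks like this sketch (`call_with_backoff` and its parameters are illustrative, not SDK API):

```python
import random
import time

def call_with_backoff(fn, max_retries=3, base_delay=1.0):
    """Retry fn() with exponentially growing, jittered delays."""
    for attempt in range(max_retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries; surface the error
            # 2**attempt doubles the wait each time; jitter spreads clients out
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)

# Example: a flaky call that fails twice, then succeeds on the third attempt
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

print(call_with_backoff(flaky, base_delay=0.01))  # → "ok" on the third attempt
```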
Can I use the SDK in Jupyter notebooks?
Absolutely. The SDK works great in Jupyter. The sync client works out of the box, and the async client works with nest_asyncio.
The official Node.js/TypeScript client for ScrapeBadger. Full TypeScript definitions, a Promise-based API, and tree-shakeable builds. Install with npm install scrapebadger.
Build automated Twitter data pipelines with n8n's visual workflow builder. Extract tweets, monitor brands, and trigger actions — all without writing code.
Give your AI agents access to real-time Twitter data through the Model Context Protocol. Works with Claude, custom agents, and any MCP-compatible client.