ScrapeBadger Python SDK

The official Python client for ScrapeBadger. Fully type-hinted, async-ready, and designed for data pipelines. Install with pip install scrapebadger.

Setup Guide

1. Install the SDK: pip install scrapebadger
2. Get your API key from scrapebadger.com/dashboard/api-keys
3. Initialize the client: ScrapeBadger(api_key="YOUR_KEY")
4. Call any Twitter endpoint through the typed client interface.
5. Handle pagination automatically with built-in cursor support.
6. Check the docs at docs.scrapebadger.com for the full API reference.

Code Example

from scrapebadger import ScrapeBadger

sb = ScrapeBadger(api_key="YOUR_API_KEY")

# Get user profile
user = sb.twitter.users.get_by_username("elonmusk")
print(f"{user['data']['name']}: {user['data']['followers_count']} followers")

# Search tweets
results = sb.twitter.tweets.advanced_search(
    query="web scraping API lang:en",
    query_type="Latest"
)

# Get followers with auto-pagination
followers = sb.twitter.users.get_followers("elonmusk")
for f in followers["data"]:
    print(f["username"], f["followers_count"])
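The auto-pagination above can be sketched as a plain cursor loop. This is a minimal illustration using a stand-in fetch_page function; the field names (data, next_cursor) are assumptions for the sketch, not ScrapeBadger's documented response schema.

```python
def fetch_page(cursor=None):
    # Stand-in for one API page. A real client would make an HTTP request
    # and pass the cursor as a query parameter.
    pages = {
        None: ([1, 2], "c1"),   # first page, cursor "c1" points to the next
        "c1": ([3], None),      # last page, no further cursor
    }
    items, next_cursor = pages[cursor]
    return {"data": items, "next_cursor": next_cursor}

def iterate_all():
    # Follow cursors until the API signals there are no more pages.
    cursor, items = None, []
    while True:
        page = fetch_page(cursor)
        items.extend(page["data"])
        cursor = page["next_cursor"]
        if cursor is None:
            return items

print(iterate_all())  # [1, 2, 3]
```

The SDK performs this loop for you; the sketch only shows the cursor-following pattern it automates.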

What You Can Build

Data science and ML pipelines
Academic research data collection
Automated reporting scripts
Django/Flask web application backends
Jupyter notebook analysis
ETL pipelines with pandas integration

Frequently Asked Questions

Does the SDK support async?
Yes. The SDK provides both sync and async interfaces. Use AsyncScrapeBadger for async/await patterns.
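The payoff of the async client is issuing several requests concurrently. The sketch below uses a stub coroutine in place of an SDK call, since the exact AsyncScrapeBadger method signatures are not shown here; with the real client you would await its methods the same way.

```python
import asyncio

async def fetch_user(username: str) -> dict:
    # Hypothetical stand-in for an awaitable SDK call such as
    # client.twitter.users.get_by_username(username).
    await asyncio.sleep(0)  # simulate network I/O
    return {"data": {"name": username, "followers_count": 0}}

async def main() -> list:
    # Fire the requests concurrently instead of one at a time.
    return await asyncio.gather(*(fetch_user(u) for u in ["alice", "bob"]))

users = asyncio.run(main())
print([u["data"]["name"] for u in users])  # ['alice', 'bob']
```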

Which Python versions are supported?
Python 3.8 and above. We recommend Python 3.10+ for the best type-hint support.

Does the SDK retry failed requests?
Yes. The SDK automatically retries failed requests with exponential backoff.
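Exponential backoff doubles the wait between retries up to a cap. A minimal sketch of such a schedule, with illustrative defaults (the SDK's actual retry count, base delay, and cap are not documented here):

```python
def backoff_delays(retries: int = 4, base: float = 0.5, cap: float = 8.0) -> list:
    # Delay before attempt n is base * 2**n seconds, capped at `cap`.
    # Real implementations usually also add random jitter to avoid
    # synchronized retries from many clients.
    return [min(cap, base * (2 ** attempt)) for attempt in range(retries)]

print(backoff_delays())  # [0.5, 1.0, 2.0, 4.0]
```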

Can I use it in Jupyter notebooks?
Absolutely. The SDK works great in Jupyter: the sync client works out of the box, and the async client works with nest_asyncio.