
Best Google Trends Scraper in 2026: Every Tool Compared Honestly

Thomas Shultz
18 min read

If you've ever tried to pull Google Trends data programmatically, you know how it goes. PyTrends works for a few hours, then starts throwing 429 Too Many Requests. You add delays. It works again, then breaks for a different reason. You find a GitHub issue from 2023 describing the exact problem — with no resolution. Then you discover the repository was archived by its maintainers on April 17, 2025 and is officially read-only.

This is the state of DIY Google Trends scraping in 2026. The tools that were free and convenient are now unmaintained, rate-limited at scale, and increasingly fragile as Google updates its internal API structure without notice. Meanwhile, the data itself has never been more commercially valuable — teams using systematic Trends monitoring for content timing, competitive intelligence, and market research make meaningfully better decisions than those relying on manual lookups.

This article covers every credible Google Trends scraper available right now — what each one does well, where each one falls short, and which is the right choice depending on your actual use case. We've tested these tools at ScrapeBadger across thousands of keyword queries in building our own Google Trends API. The comparisons here reflect real infrastructure experience, not marketing copy.

First: What You're Actually Trying to Extract

Most people searching for a Google Trends scraper want one thing — interest over time for a set of keywords. That's the core data type, and any tool on this list gets you there. Where they diverge is depth, reliability at scale, and whether they expose the data types that actually drive decisions.

Google Trends exposes five distinct data categories that a production scraper should handle:

Interest Over Time — The 0–100 normalised weekly index of search interest. The core data type. One critical thing to understand: this is relative, not absolute. A score of 100 means the peak of interest for that keyword during that period in that geography. Comparing two keywords accurately requires pulling them in the same request, not separately — or the normalisation scales are incompatible.

Interest by Region — Geographic breakdown of where interest is highest. Available at country, region, city, and DMA level. Invaluable for geo-targeting campaigns, market entry decisions, and local SEO strategy. Most tools support country-level; fewer support sub-region granularity cleanly.

Related Queries — Two sub-types: Top (highest overall search frequency) and Rising (largest increase in frequency). Rising queries — especially "Breakout" terms that have grown over 5,000% — are often more actionable than interest over time for spotting emerging demand before competitors do.

Related Topics — Similar to Related Queries but at the Knowledge Graph entity level, not the exact-string level. More stable and semantically meaningful for topic clustering.

Trending Now — Real-time trending searches updating approximately every ten minutes, with estimated search volumes and growth percentages. The only Trends data type that returns approximate volume rather than a normalised index.

Any scraper that only gets you interest over time is giving you 20% of what's available. Keep this in mind as you evaluate tools.

The Official Google Trends API — Alpha Only

In July 2025, Google officially announced the Google Trends API in alpha — the first time the company has offered a supported, documented API for programmatic Trends access. For the developer community, this was long overdue.

The alpha covers interest over time with daily, weekly, monthly, and yearly aggregations; regional and sub-regional breakdowns; and up to 1,800 days (five years) of historical data. The data is consistently scaled, meaning comparisons across separate API calls are reliable — addressing one of the core limitations of third-party tools that rely on Google's internal widget API.

The problem: it's still limited-access alpha. Only a handful of approved testers can use it. General availability has not been announced. For teams that need Trends data today, at scale, the official API is not a production option.

When it does reach general availability, it will be worth evaluating for research-focused use cases where historical accuracy and query-by-query comparability are the primary requirements. For production pipelines that need related queries, trending now data, multi-keyword comparison, and geographic granularity below the country level, third-party tools will still be necessary regardless — the official API's scope is narrower than what most commercial use cases require.

PyTrends — The Default Starting Point (And Why It's No Longer One)

The PyTrends GitHub repository was archived by its maintainers on April 17, 2025. It is now read-only. This is the most important thing to know before reaching for PyTrends in 2026. The library still installs, still runs on simple queries, and still returns data for light personal use. But it is no longer maintained — which means any breaking changes Google makes to its internal Trends API go unpatched indefinitely.

PyTrends was never an official library. It was an unofficial wrapper around Google's internal widget API — the same two-step request flow (explore endpoint for tokens, then widgetdata endpoints for actual data) that any scraper needs to implement. When it works, it's convenient. When it breaks, there's no SLA and no guarantee of a fix.

The rate limiting situation is also real and well-documented. Users consistently report 429 errors even at modest query volumes, with unpredictable behaviour around daily and hourly limits that Google doesn't publicly document. For anything beyond a few dozen queries per session, you need proxy rotation — which means managing your own proxy infrastructure on top of PyTrends, or accepting regular throttling.
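Until you have proxy rotation in place, the practical mitigation is retrying with exponential backoff. A generic sketch follows — the `fetch` callable and delay constants are illustrative, not part of PyTrends, since Google's actual limits are undocumented:

```python
import random
import time

def fetch_with_backoff(fetch, max_retries=5, base_delay=2.0):
    """Retry a callable on HTTP 429 with exponential backoff plus jitter.

    `fetch` is any zero-argument callable returning an object with a
    `status_code` attribute (e.g. a requests.Response). The delay
    constants are placeholders -- tune them to observed throttling.
    """
    for attempt in range(max_retries):
        response = fetch()
        if response.status_code != 429:
            return response
        # 2s, 4s, 8s, ... plus jitter so parallel workers don't retry
        # in lockstep and trip the limiter again simultaneously.
        time.sleep(base_delay * 2 ** attempt + random.uniform(0, base_delay))
    raise RuntimeError(f"Still rate-limited after {max_retries} retries")
```

Backoff buys you headroom for occasional 429s; it does not raise the underlying quota, which is why sustained volume still requires proxy rotation.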

For learning how Google Trends data works, PyTrends is fine. For any production pipeline that needs to run reliably, it was marginal before it was archived and is now genuinely not the right tool.

There is a community fork — pytrends-modern — that has added retry logic, async support, and some maintenance fixes. It's a better starting point than the archived original if you need the Python library approach, but it's still an unofficial wrapper with the same fundamental rate limit and maintenance-dependency constraints.

The Tools Worth Evaluating in 2026

1. ScrapeBadger — All Five Data Types Under One Key

ScrapeBadger's Google Trends API is part of a broader 8-product Google data platform — Search, Maps, News, Shopping, Trends, Jobs, Hotels, and Patents — all under a single API key. The Trends endpoints cover all five data types: interest over time, interest by region, related queries (top and rising), related topics, and trending now.

What makes it different technically. The underlying infrastructure handles the two-step widget API flow, response prefix stripping (the )]}'\n anti-XSSI characters that trip up naive implementations), proxy rotation, and rate limit management automatically. Every request routes through residential proxies that pass Google's session validation — the same infrastructure that handles Google Search results without getting blocked. You send a keyword and parameters; you get back clean, structured JSON.

Data coverage. The API returns complete structured responses across all data types. The interest over time response includes ISO timestamps alongside the relative date strings — important for time-series work where you need to join Trends data with other dated datasets. Related queries returns both top and rising sub-types with explicit is_breakout indicators for terms growing faster than 5,000%. Trending now includes estimated search volume and growth percentages — the only Trends data type that approximates absolute numbers rather than a normalised index.

Multi-product advantage. The use cases where Trends data gets genuinely powerful are those where it's combined with other signals — SERP rankings to measure content opportunity, News data to understand what's driving a spike, Jobs data to validate sustained demand behind a search trend. Because all eight Google products sit under one ScrapeBadger API key, combining data sources is a matter of calling different endpoints, not managing multiple vendor integrations.

MCP integration. For teams building AI agents, ScrapeBadger's MCP server exposes the Trends endpoints directly to any MCP-compatible client — Claude, Cursor, Windsurf. An agent can query keyword trajectories, regional breakdowns, and trending searches as part of its reasoning workflow without custom integration code. The MCP documentation covers setup in under ten minutes.

Pricing. ScrapeBadger offers flexible pricing to fit your workflow. Choose pay-as-you-go credits that never expire, or switch to a subscription plan for significantly lower costs.

Quick start:

```python
import requests

API_KEY = "your_scrapebadger_key"

# Interest over time — compare two keywords on the same normalised scale
response = requests.get(
    "https://api.scrapebadger.com/v1/google/trends/interest_over_time",
    headers={"X-API-Key": API_KEY},
    params={
        "q": "web scraping api,data extraction api",
        "date": "today 12-m",
        "geo": "US",
    }
)

data = response.json()
for week in data["interest_over_time"]["timeline_data"][-4:]:
    values = {v["query"]: v["extracted_value"] for v in week["values"]}
    print(f"{week['date']}: {values}")

# Rising queries — find emerging demand before competitors do
response = requests.get(
    "https://api.scrapebadger.com/v1/google/trends/related_queries",
    headers={"X-API-Key": API_KEY},
    params={"q": "AI agents", "date": "today 3-m", "geo": "US"}
)

rising = response.json()["related_queries"]["rising"]["ranked_list"]
for q in rising[:5]:
    label = "šŸš€ BREAKOUT" if q["value"] == "Breakout" else f"+{q['extracted_value']}%"
    print(f"{q['query']}: {label}")
```

Full endpoint reference, all parameters, and response schemas at docs.scrapebadger.com.

Best for: Production pipelines, multi-data-source analysis, AI agent workflows, teams who want Trends data alongside SERP and Maps intelligence under one integration.

2. Bright Data — Enterprise Infrastructure, Enterprise Cost

Bright Data's Trends scraper sits on top of their 72M+ IP residential proxy network. It's the highest-reliability option on the market when raw infrastructure quality is the primary constraint.

How it works. Rather than a dedicated Trends API, Bright Data's approach routes your Trends URL directly through their proxy network with JSON output enabled via a brd_json=1 parameter. You pass a full trends.google.com/trends/explore URL with your parameters; they handle proxy rotation, CAPTCHA solving, and browser fingerprinting; you get back structured JSON.

Data coverage. Interest over time, geographic breakdown, related queries, and related topics via configurable brd_trends widget parameters. The API supports custom date ranges, category filtering, and Google property filtering (web, YouTube, News, Images, Shopping) — flexible configuration for advanced use cases.

The pricing reality. Monthly plans start at $499/month. For teams whose primary use case is Trends data, this is difficult to justify. Bright Data makes more sense for organisations with compliance requirements, dedicated data engineering teams, and budgets where the cost differential against alternatives is irrelevant. As detailed in the SERP API comparison on the ScrapeBadger blog, billing complexity across Bright Data's proxy, scraper IDE, and dataset products makes monthly cost prediction genuinely difficult.

Best for: Enterprise teams with ISO 27001/GDPR compliance requirements; organisations already paying for Bright Data's broader infrastructure.

3. Apify — Marketplace Actor for Point-and-Click Batch Jobs

Apify's marketplace includes a dedicated Google Trends Scraper Actor (apify/google-trends-scraper) that wraps the scraping logic in a point-and-click interface on their platform.

What it does. Enter a search term, location, and time range. The Actor extracts interest over time, interest by subregion, related queries, and related topics, exporting to JSON, CSV, XML, or Excel. Integrations with Make, Zapier, Google Sheets, and webhooks are available out of the box.

The community-maintenance caveat. Apify's Actor marketplace is community-maintained — quality and update cadence depend on the individual contributor. When Google changes its Trends interface or API structure, Actor fixes depend on the maintainer's availability. For research or batch processing where a day or two of downtime is acceptable, this is a reasonable trade-off. For production pipelines with uptime requirements, it's a real risk.

Pricing. Apify uses compute-unit billing — $0.25–$0.40/GB RAM per hour, plus proxy costs separately. Monthly cost for a Trends monitoring pipeline is hard to predict and tends to be higher than the marketing suggests once proxy charges are included.

Best for: Non-technical users who want a UI to run batch Trends jobs without coding; researchers running one-off multi-keyword analyses; developers who already use Apify for other data tasks.

4. Scrape.do — Dedicated Trends Endpoint

Scrape.do offers a dedicated Google Trends API endpoint that returns interest over time, interest by region, related queries, related topics, and real-time trending searches as structured JSON.

The API supports geo-targeting from worldwide down to sub-region level (e.g., US-CA for California), custom date ranges from the last four hours up to five years, and Google property filtering across web, YouTube, News, Images, and Shopping.

What sets it apart. Scrape.do's Trends implementation correctly handles both the interest over time endpoint and the related queries/topics endpoints in a single request — no multi-step token exchange required on your side. The response structure is clean and consistent, which matters when you're building downstream parsing logic.

Pricing. Credit-based with 1,000 free credits for testing, no credit card required. Paid tiers start at a competitive rate per request. The free tier is genuinely enough to evaluate the data quality before committing.

Limitation. Scrape.do is a Trends- and SERP-focused tool. If your roadmap includes adding Maps reviews, Search rank tracking, or Shopping price intelligence alongside Trends, you'll need a second vendor integration — which erodes the cost advantage once total infrastructure cost is considered.

Best for: Teams with a focused Trends use case who need a reliable dedicated endpoint without the overhead of a full multi-product platform.

5. ScrapingBee — General-Purpose API, Parse It Yourself

ScrapingBee is a general-purpose web scraping API that handles JavaScript rendering, proxy rotation, and CAPTCHA solving. It doesn't have a dedicated Trends parsing layer — you get the rendered HTML back and parse it yourself, or use their structured data option where available.

For Google Trends specifically, this means you're responsible for the two-step widget API flow (explore tokens, then widgetdata requests), the )]}'\n prefix stripping, and the field extraction logic. ScrapingBee handles the anti-bot bypass; you handle everything above it.
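As an illustration of one of those responsibilities, here is a minimal prefix-stripping helper. The prefix shape is the `)]}'` anti-XSSI sequence described above; the function name is ours, not part of any library:

```python
import json

ANTI_XSSI_PREFIX = ")]}'"

def parse_trends_body(raw: str):
    """Strip Google's anti-XSSI prefix, then parse the JSON payload.

    Trends responses begin with )]}' so browsers can't execute them as
    script; json.loads fails unless that first line is removed.
    """
    if raw.startswith(ANTI_XSSI_PREFIX):
        # The prefix occupies the first line; the JSON starts on the next.
        first_newline = raw.find("\n")
        if first_newline != -1:
            raw = raw[first_newline + 1:]
        else:
            raw = raw[len(ANTI_XSSI_PREFIX):]
    return json.loads(raw)

# Example with the documented prefix shape:
body = ")]}'\n" + '{"widgets": []}'
print(parse_trends_body(body))  # {'widgets': []}
```

This is only the last step of the DIY flow — the token exchange, widget selection, and field extraction still sit on top of it.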

When this is fine. If you're already using ScrapingBee for other scraping tasks and occasionally need Trends data, extending an existing integration is reasonable. The API is well-documented, easy to use, and the developer experience is generally positive.

The credit multiplier. ScrapingBee's pricing uses a credit system where JavaScript rendering costs 5 credits per request and stealth proxies cost 75 credits per request. For Trends data, which requires JavaScript-enabled sessions, the effective cost per query is higher than the headline price suggests. At production Trends volumes this adds up.
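The arithmetic is worth spelling out. Using the credit figures above (the monthly credit budget below is a hypothetical placeholder, not a real ScrapingBee plan size):

```python
# Effective query counts under ScrapingBee-style credit multipliers.
JS_RENDER_CREDITS = 5       # per request with JavaScript rendering
STEALTH_PROXY_CREDITS = 75  # per request with stealth proxies

monthly_credits = 100_000   # hypothetical plan size

queries_js = monthly_credits // JS_RENDER_CREDITS
queries_stealth = monthly_credits // STEALTH_PROXY_CREDITS

print(queries_js)       # 20000 queries/month with JS rendering only
print(queries_stealth)  # 1333 queries/month once stealth proxies are needed
print(STEALTH_PROXY_CREDITS // JS_RENDER_CREDITS)  # 15x cost multiplier
```

If Google starts blocking the cheaper proxy tier, the same budget buys 15x fewer Trends queries — which is the gap between the headline price and the effective price.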

Best for: Teams already using ScrapingBee for other sites who want to add occasional Trends scraping to existing workflows without a second integration.

6. Outscraper — Dashboard-First Batch Extraction

Outscraper offers a Google Trends scraper accessible both via API and through a web dashboard — giving non-technical users a viable option for batch Trends analysis without any coding.

The dashboard accepts keyword lists, time ranges, and geographic parameters, and exports to CSV or Google Sheets. The API follows the same parameters for programmatic access. Interest over time and related queries are covered; trending now data is not a primary focus.

Pricing. Pay-per-use credits with a free tier sufficient for small-scale testing. Pricing scales reasonably for moderate research volumes.

Limitation. Outscraper's Trends tooling is a feature within a broader Google data extraction product — it's not as deeply integrated or as well-documented as dedicated Trends tools. Data freshness and accuracy at high volumes are less consistently reported by users than with purpose-built APIs.

Best for: Non-technical researchers and marketers who need Google Trends data exports via a dashboard without writing code; teams running periodic rather than continuous Trends monitoring.

Side-by-Side Comparison Table

| Feature | ScrapeBadger | Bright Data | Apify | Scrape.do | ScrapingBee | Outscraper |
|---|---|---|---|---|---|---|
| Interest over time | āœ… | āœ… | āœ… | āœ… | āœ… (manual parse) | āœ… |
| Interest by region | āœ… Sub-region | āœ… Sub-region | āœ… | āœ… Sub-region | āœ… (manual parse) | āœ… |
| Related queries (rising) | āœ… Breakout flag | āœ… | āœ… | āœ… | āœ… (manual parse) | āœ… |
| Trending now | āœ… With volume | āŒ | āŒ | āœ… | āŒ | āŒ |
| ISO timestamps | āœ… | āŒ | āŒ | āŒ | āŒ | āŒ |
| Multi-keyword comparison | āœ… Same request | āœ… | āœ… | āœ… | āœ… | āœ… |
| Multi-product (SERP, Maps, etc.) | āœ… 8 products | āœ… Complex | āœ… Actors | āŒ | āœ… | Limited |
| MCP integration | āœ… | āŒ | āœ… | āŒ | āŒ | āŒ |
| No-code option | āŒ | āŒ | āœ… Dashboard | āŒ | āŒ | āœ… Dashboard |
| Dedicated Trends parsing | āœ… | Proxy-based | Actor-based | āœ… | āŒ Raw HTML | āœ… |
| Pricing model | Per-request, no expiry | $499+/mo | Compute units | Per-request | Credit tiers | Pay-per-use |
| Free trial | āœ… | āœ… | $5 credits | 1,000 credits | āœ… | āœ… |
| Best for | Production + multi-product | Enterprise | Non-devs + batch | Trends-focused | Existing ScrapingBee users | No-code research |

How to Choose

If you need Trends data inside a larger Google data pipeline — SERP rankings, Maps intelligence, Shopping prices, News monitoring — ScrapeBadger is the only option on this list that covers all of these under one integration, one API key, and predictable per-request pricing. Switching between data sources is a matter of changing an endpoint path, not managing a second vendor. This matters both for engineering simplicity and for the analytical use cases where combining Trends with other signals produces the best insights. The Google Scraper product page has the full endpoint overview.

If Trends is your only Google data source and cost is the primary constraint — evaluate Scrape.do and Outscraper. Both offer dedicated Trends endpoints at competitive per-request pricing with free tiers for validation. Test your actual keyword set before committing to either.

If you're non-technical and need batch exports — Apify's Actor or Outscraper's dashboard let you run Trends jobs without writing code. The trade-off is less programmatic control, higher effective cost at volume, and for Apify specifically, community-maintenance risk on the Actor itself.

If you're evaluating for enterprise compliance requirements — Bright Data is the only option with formal ISO 27001 certification. The cost is real and the setup complexity is real, but for regulated industries those certifications are sometimes non-negotiable.

If you're already using ScrapingBee for other scraping tasks — extending your existing integration to cover Trends is straightforward. Just account for the credit multiplier when estimating cost at production volume.

Common Mistakes With Trends Data

Having worked with Trends data across hundreds of use cases at ScrapeBadger, the mistakes we see most often aren't about scraper choice. They're about how the data is used once it's collected.

Comparing keywords from separate requests. As covered in the ScrapeBadger Google Trends tutorial, the 0–100 index is normalised per query. Pulling "coffee" in one request and getting 75, then "tea" in another and getting 80, tells you nothing about their relative popularity — the scales are incompatible. Always compare keywords together in a single request.
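To see why, here's a toy demonstration. The raw volumes below are invented purely for illustration — Google only ever returns the normalised index:

```python
# Normalising each keyword's series alone erases the ratio between them.

def normalise(series):
    """Scale a series so its peak is 100, as the Trends index does."""
    peak = max(series)
    return [round(v / peak * 100) for v in series]

coffee = [400, 600, 800]  # hypothetical raw weekly volumes
tea = [80, 120, 160]

# Separate requests: each keyword's own peak becomes 100, so the two
# series come back identical even though coffee is 5x bigger.
print(normalise(coffee))  # [50, 75, 100]
print(normalise(tea))     # [50, 75, 100]

# Same request: one shared peak preserves the ratio between keywords.
shared_peak = max(coffee + tea)
joint = [[round(v / shared_peak * 100) for v in s] for s in (coffee, tea)]
print(joint)  # [[50, 75, 100], [10, 15, 20]]
```

Separately normalised, coffee and tea look identical; jointly normalised, the five-fold gap between them is visible.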

Ignoring rising queries in favour of interest over time. Rising queries — especially breakout terms — are the leading indicator. A keyword appearing in rising queries is gaining momentum before it shows up in absolute interest data. Teams that monitor rising queries around their core topics consistently spot content opportunities and competitive shifts weeks before teams monitoring only interest over time.

Treating Trends as a one-time lookup. The value of Trends data compounds over time. A single snapshot tells you where interest is; a time series tells you the trajectory. Running Trends monitoring on a schedule, storing the results, and tracking change over time turns a lookup tool into an intelligence platform. This is exactly the kind of pipeline the ScrapeBadger CLI is built to support — scheduled Trends pulls feeding into analysis workflows without manual intervention.
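A minimal sketch of such a pipeline — the file layout, field names, and helper functions are illustrative, not the ScrapeBadger CLI's actual output format:

```python
import json
from datetime import date
from pathlib import Path

def store_snapshot(keyword: str, timeline: list, out_dir: Path) -> Path:
    """Append one scheduled Trends pull to a per-keyword JSONL history.

    `timeline` is whatever your scraper returns for interest over time;
    one line per run makes trajectory a simple diff later.
    """
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / f"{keyword.replace(' ', '_')}.jsonl"
    with path.open("a") as f:
        f.write(json.dumps({"pulled": date.today().isoformat(),
                            "timeline": timeline}) + "\n")
    return path

def latest_change(path: Path) -> float:
    """Percent change between the final values of the last two snapshots."""
    rows = [json.loads(line) for line in path.read_text().splitlines()]
    prev = rows[-2]["timeline"][-1]["value"]
    curr = rows[-1]["timeline"][-1]["value"]
    return (curr - prev) / prev * 100
```

With each scheduled run appending one line, a cron job plus a few dozen lines of analysis turns snapshots into trajectories.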

Not combining Trends with complementary sources. Trends data in isolation tells you that interest in a keyword is rising. Combined with SERP data from ScrapeBadger's Search endpoint, it also tells you whether the top-ranking content is weak enough to beat. Combined with Google News data, it tells you whether the spike is driven by a news event (temporary) or organic demand growth (sustained).

Is Scraping Google Trends Legal?

Scraping publicly visible data from Google Trends is generally treated as lawful for research and analysis purposes under established US precedent. The hiQ v. LinkedIn Ninth Circuit ruling affirmed that automated access to publicly available web data does not violate the Computer Fraud and Abuse Act. Google Trends data is publicly visible to any user without authentication — no login wall, no paywall, no personal data.

Google's Terms of Service prohibit automated scraping — but ToS violations are civil matters rather than criminal ones, and the commercial scraping market built on Google data has operated with this understanding for years. Using a third-party API service like ScrapeBadger reduces direct ToS exposure since you're not making requests to Google's servers yourself.

Practical guidelines: scrape at responsible request rates that don't degrade service quality for other users, use the data for analysis rather than raw redistribution, and if you're building a commercial product with Trends data as a core offering, consult legal counsel on your specific use.


The Bottom Line

The Google Trends scraper market in 2026 has clear tiers. PyTrends is archived and no longer viable for production. The official Google API is in limited alpha. Third-party APIs range from genuinely production-ready to community-maintained projects with no uptime guarantees.

For teams building serious data pipelines, ScrapeBadger's Google Trends endpoint delivers all five Trends data types in structured JSON, with residential proxy handling, rate limit management, and ISO timestamps included — no infrastructure to maintain, no library to patch when Google updates its internal API. Credits never expire and there's no subscription required to get started.

Written by Thomas Shultz

Thomas Shultz is the Head of Data at ScrapeBadger, working on public web data, scraping infrastructure, and data reliability. He writes about real-world scraping, data pipelines, and turning unstructured web data into usable signals.

Ready to get started?

Join thousands of developers using ScrapeBadger for their data needs.
