
How to Scrape Google Maps Reviews: The Complete Guide (2026)

Thomas Shultz
18 min read

Google Maps sits on one of the most commercially valuable datasets on the internet — over 200 million business listings, each accumulating years of unfiltered customer feedback. Every review is a data point: a specific complaint about wait times, an unprompted recommendation, a rating that shifts a business's local SEO ranking, a timestamp that marks when a service quality change became visible to the public.

Companies that treat this data as a strategic asset — monitoring their own reputation, benchmarking competitors, tracking sentiment shifts over time — make better decisions than those relying on quarterly surveys and gut feel. Restaurants that catch an emerging staff complaint in reviews before it compounds. Hotel chains that spot a maintenance issue at one location before it damages the brand. Retailers using review text to understand exactly what features customers value in competitors' products.

All of this starts with getting the data out of Google Maps reliably. Which is harder than it sounds — and much more achievable than most tutorials suggest.

The Official API Problem (Why You're Here)

Before writing a single line of code, every developer asks the same question: can I just use the official Google Places API?

The answer is: yes, but it gives you almost nothing useful for any serious review analysis.

The Places API returns a maximum of 5 reviews per place — not the 5 most recent, not the 5 most critical, but 5 reviews Google selects with undisclosed criteria. For a restaurant with 2,000 reviews, you're seeing 0.25% of the available data. For reputation monitoring, competitive analysis, or sentiment research, this is effectively useless.

On top of this, the cost structure makes bulk research prohibitive. The Place Details call that returns those 5 reviews costs $17 per 1,000 requests under Google's standard pricing — which adds up fast when you need hundreds of business profiles.
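
To make the coverage and cost gap concrete, here's a quick back-of-envelope calculation (the 800-review market average is a hypothetical figure for illustration):

python

# What the official API actually yields for a 500-business study,
# using the $17 per 1,000 Place Details rate quoted above
businesses = 500
avg_reviews_per_business = 800           # hypothetical market average
api_reviews = businesses * 5             # hard cap: 5 reviews per place
total_reviews = businesses * avg_reviews_per_business
cost = businesses / 1000 * 17

print(f"API yields {api_reviews:,} of {total_reviews:,} reviews "
      f"({api_reviews / total_reviews:.2%}) for ${cost:.2f}")
# API yields 2,500 of 400,000 reviews (0.62%) for $8.50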

This is why the scraping approach exists. Google Maps' web interface shows every review for every business, publicly, without authentication. The data is there. The question is how to get it programmatically, reliably, and at scale.

What Google Maps Review Data Actually Contains

Before scraping anything, it's worth understanding the full data model — what fields are available and which ones drive real analytical value.

A complete review record contains:

json

{
  "reviewer": {
    "name": "Sarah Mitchell",
    "profile_url": "https://www.google.com/maps/contrib/...",
    "total_reviews": 47,
    "is_local_guide": true,
    "local_guide_level": 5
  },
  "rating": 4,
  "text": "Visited on a Saturday afternoon. The pasta was excellent — hand-rolled and clearly fresh. Service was attentive without being intrusive. Only complaint is the noise level; it's quite loud when full. Would return for a weeknight dinner.",
  "published_date": "2026-03-15",
  "relative_time": "2 months ago",
  "owner_response": {
    "text": "Thank you Sarah! You're right about the acoustics — we're installing sound panels next month.",
    "date": "2026-03-16"
  },
  "photos": ["https://..."],
  "likes": 3
}

The fields that matter most for business intelligence:

Reviewer credibility signals. A review from a Local Guide Level 7 with 300 total reviews carries different weight than an account with 1 review and no photo. Scraping reviewer metadata lets you build a credibility-weighted sentiment score rather than treating all reviews equally (a sketch of one such weighting follows below).

Owner response data. Response rate and response time are both public signals. Businesses that respond to reviews see up to 30% more customer engagement, according to Google's own research. Competitor response patterns reveal how seriously they take customer service — and whether they're addressing specific recurring issues.

Temporal patterns. Review timestamps reveal when problems emerge and when they get fixed. A restaurant that averages 4.2 stars but shows a cluster of 1-star reviews in November 2024 mentioning "new management" has a story in its data. Time-series analysis of review sentiment catches operational changes that aggregate ratings obscure.

Review text. The actual language customers use to describe their experience contains specificity that star ratings never capture. "The carbonara was too salty" is actionable in a way that "3 stars" isn't.
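
A minimal sketch of that credibility weighting, using the reviewer fields from the record above (the specific weights are illustrative choices, not an established formula):

python

def credibility_weighted_rating(reviews: list[dict]) -> float:
    """
    Average rating weighted by reviewer credibility signals.
    Local Guide status and review history increase a review's weight;
    single-review, no-history accounts count at the baseline only.
    """
    weighted_sum = 0.0
    total_weight = 0.0
    for r in reviews:
        reviewer = r.get("reviewer", {})
        weight = 1.0
        if reviewer.get("is_local_guide"):
            weight += 0.5
        # Bonus for review history, capped so prolific reviewers don't dominate
        weight += min(reviewer.get("total_reviews", 0), 100) / 100
        weighted_sum += r.get("rating", 0) * weight
        total_weight += weight
    return weighted_sum / total_weight if total_weight else 0.0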

The Technical Reality: Why Google Maps Is Hard to Scrape

Google Maps is one of the most technically challenging scraping targets outside of Amazon and LinkedIn. Three specific factors make it difficult:

Everything is JavaScript-rendered. Reviews don't exist in the initial HTML response. The page loads a shell, then executes JavaScript that makes API calls to Google's backend to populate reviews dynamically. A requests.get() call to a Maps URL returns navigation chrome and no reviews whatsoever (a quick demonstration follows below).

Reviews require interaction to load. By default, Google Maps shows a handful of reviews. Loading more requires scrolling within the reviews panel — a user interaction that triggers additional API calls. Scraping all reviews for a high-review-count business means programmatically scrolling through potentially hundreds of pages.

Session-based detection. Google Maps uses cookie-based session tracking. Cold sessions from datacenter IPs are challenged immediately. Fresh sessions need to "warm up" with normal browsing behaviour before Maps endpoints stop serving CAPTCHAs.

The practical consequence: DIY Maps scraping requires a full headless browser (Playwright or Puppeteer), residential proxies, and careful session management. It works, but it's resource-intensive and requires ongoing maintenance as Google updates its Maps interface.
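
You can verify the JavaScript-rendering point in a few lines. What Google returns to a plain HTTP client varies (consent pages, redirects), but it never includes the review markup:

python

import requests

# Fetch a Maps place page without a browser and look for review markup
html = requests.get(
    "https://www.google.com/maps/place/Dishoom+Covent+Garden",
    headers={"User-Agent": "Mozilla/5.0"},
    timeout=10,
).text

# The attribute the Playwright scraper below keys on is absent from server HTML
print("data-review-id" in html)  # False: reviews are injected client-side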

Method 1: Playwright-Based Review Scraper

For developers who want to understand the mechanics or scrape at low volume, here's a complete implementation using Playwright. This handles JavaScript rendering, review pagination, and the "Sort by" functionality to get reviews in a specific order.

python

from playwright.sync_api import sync_playwright
import time
import re

def scrape_google_maps_reviews(
    place_url: str,
    max_reviews: int = 100,
    sort_by: str = "newest"  # "newest", "highest_rating", "lowest_rating", "relevant"
) -> list[dict]:
    """
    Scrape reviews from a Google Maps business listing.
    
    place_url: Full Google Maps URL for the business
    max_reviews: Maximum number of reviews to collect
    sort_by: Sort order for reviews
    """
    reviews = []

    sort_options = {
        "newest": 1,
        "highest_rating": 2,
        "lowest_rating": 3,
        "relevant": 0
    }

    with sync_playwright() as p:
        browser = p.chromium.launch(
            headless=True,
            args=[
                "--no-sandbox",
                "--disable-blink-features=AutomationControlled",
                "--disable-dev-shm-usage",
            ]
        )

        context = browser.new_context(
            viewport={"width": 1280, "height": 800},
            user_agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
            locale="en-US",
            timezone_id="America/New_York",
        )

        page = context.new_page()

        # Navigate to the place URL
        page.goto(place_url, wait_until="networkidle")
        time.sleep(2)

        # Click the "Reviews" tab
        try:
            reviews_tab = page.locator('button[aria-label*="reviews"], [data-tab-index="1"]').first
            reviews_tab.click()
            time.sleep(1.5)
        except Exception:
            # Already on reviews or different layout
            pass

        # Sort reviews if needed
        if sort_by != "relevant":
            try:
                sort_button = page.locator('button[aria-label*="Sort reviews"], [data-value="Sort"]').first
                sort_button.click()
                time.sleep(0.8)

                option_index = sort_options.get(sort_by, 1)
                menu_items = page.locator('[role="menuitemradio"], [role="option"]').all()
                if len(menu_items) > option_index:
                    menu_items[option_index].click()
                    time.sleep(1.5)
            except Exception as e:
                print(f"Couldn't sort: {e}")

        # Find the scrollable reviews panel. These obfuscated class names are
        # brittle and change when Google ships UI updates; re-inspect as needed.
        scrollable = page.locator('[class*="m6QErb"][class*="DxyBCb"]').first

        collected = 0
        last_count = 0
        stale_iterations = 0

        while collected < max_reviews and stale_iterations < 3:
            # Extract currently visible reviews
            review_elements = page.locator('[data-review-id]').all()
            current_count = len(review_elements)

            for element in review_elements[collected:]:
                try:
                    # Expand "More" if review is truncated
                    more_btn = element.locator('button[aria-label*="See more"]').first
                    if more_btn.is_visible():
                        more_btn.click()
                        time.sleep(0.2)

                    # Extract review data
                    review = {
                        "reviewer_name": element.locator('[class*="d4r55"]').first.text_content(timeout=1000) or "",
                        "rating": None,
                        "text": "",
                        "date": "",
                        "owner_response": None,
                        "reviewer_reviews_count": None,
                        "is_local_guide": False,
                    }

                    # Rating (parsed from the star element's aria-label, e.g. "4 stars")
                    stars = element.locator('[aria-label*="star"]')
                    if stars.count() > 0:
                        aria = stars.first.get_attribute("aria-label") or ""
                        rating_match = re.search(r'(\d+)', aria)
                        if rating_match:
                            review["rating"] = int(rating_match.group(1))

                    # Review text
                    text_el = element.locator('[class*="wiI7pd"]').first
                    if text_el.is_visible():
                        review["text"] = text_el.text_content() or ""

                    # Date
                    date_el = element.locator('[class*="rsqaWe"]').first
                    if date_el.is_visible():
                        review["date"] = date_el.text_content() or ""

                    # Owner response
                    response_el = element.locator('[class*="CDe7pd"]').first
                    if response_el.is_visible():
                        review["owner_response"] = response_el.text_content() or ""

                    # Local Guide status
                    guide_el = element.locator('[aria-label*="Local Guide"]').first
                    review["is_local_guide"] = guide_el.is_visible()

                    reviews.append(review)
                    collected += 1

                    if collected >= max_reviews:
                        break

                except Exception:
                    continue

            # Check if we got new reviews
            if current_count == last_count:
                stale_iterations += 1
            else:
                stale_iterations = 0
                last_count = current_count

            # Scroll to load more reviews
            if collected < max_reviews:
                try:
                    scrollable.evaluate("el => el.scrollTop += 1000")
                except Exception:
                    page.mouse.wheel(0, 1000)
                time.sleep(1.5)

        browser.close()

    return reviews


# Usage
reviews = scrape_google_maps_reviews(
    "https://www.google.com/maps/place/Dishoom+Covent+Garden/@51.5119,-0.1246,17z",
    max_reviews=200,
    sort_by="newest"
)

print(f"Collected {len(reviews)} reviews")
for r in reviews[:3]:
    print(f"\n⭐ {r['rating']}/5 — {r['date']}")
    print(f"   {r['text'][:200]}...")

The limitations of this approach at scale: Running a full headless browser for every place is resource-heavy. A typical business with 500 reviews takes 3–5 minutes to scrape completely, requires a full browser process, and uses significant CPU and memory. For a handful of locations, this is fine. For monitoring hundreds of businesses on a schedule, you need either a fleet of browser instances or a smarter approach.

Method 2: The ScrapeBadger Maps API

ScrapeBadger's Google Maps endpoint handles all the browser automation, proxy management, and anti-bot bypass described above — you call the API with a place ID, you get back structured JSON.

The key concept to understand first is the Place ID — Google's unique identifier for every business listing. It's more stable than a business name (which can change), more precise than an address (multiple businesses share addresses), and it's what every Maps API call uses internally.

Finding Place IDs

python

import requests

API_KEY = "your_scrapebadger_key"

def find_place_id(query: str, location: str = None) -> list[dict]:
    """
    Search Google Maps to find places and their IDs.
    Returns list of matching businesses with place IDs.
    """
    params = {
        "q": query,
        "type": "search",
    }
    if location:
        params["location"] = location

    response = requests.get(
        "https://api.scrapebadger.com/v1/google/maps/search",
        headers={"X-API-Key": API_KEY},
        params=params
    )

    data = response.json()
    results = []

    for place in data.get("local_results", []):
        results.append({
            "name": place.get("title"),
            "place_id": place.get("place_id"),
            "address": place.get("address"),
            "rating": place.get("rating"),
            "reviews_count": place.get("reviews"),
            "category": place.get("type"),
        })

    return results


# Find coffee shops in London
places = find_place_id("Monmouth Coffee", "London, UK")
for place in places:
    print(f"{place['name']} — ID: {place['place_id']} — {place['reviews_count']} reviews")

Scraping Reviews by Place ID

With a place ID, pulling reviews is a single API call returning clean, structured JSON:

python

def get_place_reviews(
    place_id: str,
    sort_by: str = "newest",
    max_pages: int = 10
) -> list[dict]:
    """
    Retrieve all reviews for a place using ScrapeBadger Maps API.
    
    sort_by options: "newest", "relevant", "highest_rating", "lowest_rating"
    """
    all_reviews = []
    next_page_token = None

    for page in range(max_pages):
        params = {
            "place_id": place_id,
            "sort_by": sort_by,
            "hl": "en",
        }
        if next_page_token:
            params["next_page_token"] = next_page_token

        response = requests.get(
            "https://api.scrapebadger.com/v1/google/maps/reviews",
            headers={"X-API-Key": API_KEY},
            params=params
        )

        data = response.json()
        reviews = data.get("reviews", [])

        if not reviews:
            break

        all_reviews.extend(reviews)

        # Check for next page
        next_page_token = data.get("serpapi_pagination", {}).get("next_page_token")
        if not next_page_token:
            break

    return all_reviews


# Full pipeline: search → get place ID → scrape all reviews
places = find_place_id("Dishoom Covent Garden")
if places:
    place_id = places[0]["place_id"]
    reviews = get_place_reviews(place_id, sort_by="newest", max_pages=20)

    print(f"Retrieved {len(reviews)} reviews")

    # Quick sentiment breakdown
    by_rating = {}
    for r in reviews:
        rating = r.get("rating", 0)
        by_rating[rating] = by_rating.get(rating, 0) + 1

    for stars in sorted(by_rating.keys(), reverse=True):
        pct = by_rating[stars] / len(reviews) * 100
        print(f"  {'⭐' * stars}: {by_rating[stars]} reviews ({pct:.1f}%)")

The response structure from the Maps reviews endpoint:

json

{
  "reviews": [
    {
      "user": {
        "name": "Sarah Mitchell",
        "link": "https://www.google.com/maps/contrib/...",
        "thumbnail": "https://...",
        "reviews": 47,
        "photos": 12,
        "local_guide": true
      },
      "rating": 4,
      "date": "2 months ago",
      "iso_date": "2026-03-15T14:23:00Z",
      "snippet": "Visited on a Saturday afternoon. The pasta was excellent...",
      "images": ["https://..."],
      "likes": 3,
      "response": {
        "date": "2 months ago",
        "snippet": "Thank you Sarah! You're right about the acoustics..."
      }
    }
  ],
  "serpapi_pagination": {
    "next_page_token": "CAESBkFGMVFpcA..."
  }
}

The iso_date field is particularly useful for time-series analysis — it gives you an exact timestamp for every review, enabling proper before/after comparisons around specific events. The full Maps endpoint documentation is at docs.scrapebadger.com.
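
As a minimal sketch of that kind of comparison, assuming the reviews list returned by get_place_reviews above and a hypothetical event date:

python

from datetime import datetime

def before_after_rating(reviews: list[dict], event_iso: str) -> tuple[float, float]:
    """Average rating before vs. after an event date, keyed on iso_date."""
    event = datetime.fromisoformat(event_iso)
    before, after = [], []
    for r in reviews:
        iso = r.get("iso_date")
        if not iso or r.get("rating") is None:
            continue
        when = datetime.fromisoformat(iso.replace("Z", "+00:00")).replace(tzinfo=None)
        (after if when >= event else before).append(r["rating"])

    def average(xs: list) -> float:
        return sum(xs) / len(xs) if xs else 0.0

    return average(before), average(after)


# Hypothetical event: a management change on 1 January 2026
pre, post = before_after_rating(reviews, "2026-01-01")
print(f"Avg rating before: {pre:.2f}, after: {post:.2f}")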

Building a Multi-Location Review Intelligence Pipeline

A single business's reviews are useful. The real power comes from systematic monitoring across multiple locations — whether those are your own properties or competitors across a market.

This pattern powers three high-value use cases: franchise quality monitoring, competitive benchmarking, and market entry research.

python

import requests
import json
from datetime import datetime, timedelta

API_KEY = "your_scrapebadger_key"

class ReviewIntelligencePipeline:
    """
    Multi-location review monitoring pipeline.
    Detects rating shifts, sentiment changes, and emerging themes.
    """

    def __init__(self, storage_path: str = "review_intelligence.json"):
        self.storage_path = storage_path
        self.data = self._load()

    def _load(self) -> dict:
        try:
            with open(self.storage_path) as f:
                return json.load(f)
        except FileNotFoundError:
            return {"locations": {}, "last_run": None}

    def _save(self):
        with open(self.storage_path, "w") as f:
            json.dump(self.data, f, indent=2, default=str)

    def add_location(self, name: str, place_id: str, category: str = "own"):
        """Register a location for monitoring. category: 'own' or 'competitor'"""
        self.data["locations"][place_id] = {
            "name": name,
            "place_id": place_id,
            "category": category,
            "reviews": [],
            "snapshots": [],
            "added": datetime.utcnow().isoformat(),
        }
        self._save()
        print(f"Added: {name} ({category})")

    def refresh_location(self, place_id: str, days_back: int = 30) -> dict:
        """Pull recent reviews for a location and detect changes."""
        location = self.data["locations"].get(place_id)
        if not location:
            return {}

        # Get fresh reviews
        response = requests.get(
            "https://api.scrapebadger.com/v1/google/maps/reviews",
            headers={"X-API-Key": API_KEY},
            params={
                "place_id": place_id,
                "sort_by": "newest",
                "hl": "en",
            }
        )
        data = response.json()
        new_reviews = data.get("reviews", [])

        # Filter to recent reviews only
        cutoff = datetime.utcnow() - timedelta(days=days_back)
        recent = []
        for review in new_reviews:
            iso = review.get("iso_date", "")
            if iso:
                try:
                    review_date = datetime.fromisoformat(iso.replace("Z", "+00:00"))
                    if review_date.replace(tzinfo=None) > cutoff:
                        recent.append(review)
                except ValueError:
                    recent.append(review)  # Include if we can't parse date

        # Build snapshot
        if recent:
            ratings = [r.get("rating", 0) for r in recent if r.get("rating")]
            avg_rating = sum(ratings) / len(ratings) if ratings else 0
            response_count = sum(1 for r in recent if r.get("response"))

            snapshot = {
                "date": datetime.utcnow().isoformat(),
                "review_count": len(recent),
                "avg_rating": round(avg_rating, 2),
                "response_rate": round(response_count / len(recent) * 100, 1) if recent else 0,
                "rating_distribution": {
                    str(i): sum(1 for r in recent if r.get("rating") == i)
                    for i in range(1, 6)
                }
            }

            location["snapshots"].append(snapshot)
            location["reviews"] = recent  # Keep most recent batch
            self._save()

            # Detect changes from previous snapshot
            alerts = self._detect_changes(place_id, snapshot)
            return {"snapshot": snapshot, "alerts": alerts, "reviews": recent}

        return {"snapshot": None, "alerts": [], "reviews": []}

    def _detect_changes(self, place_id: str, current_snapshot: dict) -> list[str]:
        """Compare current snapshot to previous and surface significant changes."""
        location = self.data["locations"][place_id]
        snapshots = location.get("snapshots", [])
        alerts = []

        if len(snapshots) < 2:
            return alerts  # Need at least 2 snapshots to compare

        previous = snapshots[-2]
        current = snapshots[-1]

        # Rating shift
        prev_rating = previous.get("avg_rating", 0)
        curr_rating = current.get("avg_rating", 0)
        if abs(curr_rating - prev_rating) >= 0.3:
            direction = "šŸ“ˆ Improved" if curr_rating > prev_rating else "šŸ“‰ Declined"
            alerts.append(
                f"{direction}: Rating {prev_rating:.1f} → {curr_rating:.1f} "
                f"({location['name']})"
            )

        # Volume spike
        prev_count = previous.get("review_count", 0)
        curr_count = current.get("review_count", 0)
        if curr_count > prev_count * 1.5 and curr_count > 10:
            alerts.append(
                f"šŸ”„ Review volume spike: {prev_count} → {curr_count} reviews "
                f"({location['name']})"
            )

        # 1-star surge
        prev_one_star = previous.get("rating_distribution", {}).get("1", 0)
        curr_one_star = current.get("rating_distribution", {}).get("1", 0)
        if curr_count > 0:
            one_star_pct = curr_one_star / curr_count * 100
            if one_star_pct > 20:
                alerts.append(
                    f"āš ļø High 1-star rate: {one_star_pct:.0f}% of recent reviews "
                    f"({location['name']})"
                )

        return alerts

    def compare_competitors(self) -> dict:
        """Generate competitive comparison across all monitored locations."""
        own_locations = []
        competitor_locations = []

        for place_id, location in self.data["locations"].items():
            snapshots = location.get("snapshots", [])
            if not snapshots:
                continue

            latest = snapshots[-1]
            entry = {
                "name": location["name"],
                "avg_rating": latest.get("avg_rating"),
                "recent_reviews": latest.get("review_count"),
                "response_rate": latest.get("response_rate"),
            }

            if location["category"] == "own":
                own_locations.append(entry)
            else:
                competitor_locations.append(entry)

        # Rank by average rating
        all_locations = own_locations + competitor_locations
        ranked = sorted(all_locations, key=lambda x: x.get("avg_rating", 0), reverse=True)

        return {
            "your_locations": own_locations,
            "competitors": competitor_locations,
            "market_ranking": ranked,
            "market_avg_rating": sum(l.get("avg_rating", 0) for l in all_locations) / len(all_locations) if all_locations else 0
        }


# Usage: Set up a competitive monitoring pipeline
pipeline = ReviewIntelligencePipeline()

# Add your own locations
pipeline.add_location("Cafe Nero - Covent Garden", "ChIJ...", category="own")
pipeline.add_location("Cafe Nero - Oxford Street", "ChIJ...", category="own")

# Add competitors
pipeline.add_location("Costa Coffee - Covent Garden", "ChIJ...", category="competitor")
pipeline.add_location("Starbucks - Covent Garden", "ChIJ...", category="competitor")

# Run monitoring
for place_id in list(pipeline.data["locations"].keys()):
    result = pipeline.refresh_location(place_id, days_back=30)
    if result.get("alerts"):
        for alert in result["alerts"]:
            print(alert)

# Generate competitive report
report = pipeline.compare_competitors()
print(f"\nMarket average rating: {report['market_avg_rating']:.2f}")
print("\nMarket ranking:")
for i, loc in enumerate(report["market_ranking"], 1):
    print(f"  {i}. {loc['name']}: {loc['avg_rating']:.1f} ⭐ ({loc['recent_reviews']} recent reviews)")

Turning Review Text Into Business Intelligence

Raw reviews are interesting. Processed review text is actionable. Here's how to extract structured signals from unstructured review language.

Keyword Frequency Analysis

The simplest form of text analysis — what words appear most in 1-star vs 5-star reviews — reveals what customers care about and what's going wrong.

python

from collections import Counter
import re

def extract_review_themes(reviews: list[dict]) -> dict:
    """
    Identify recurring themes in positive and negative reviews.
    Returns top keywords for each rating tier.
    """
    # Stopwords to filter out
    stopwords = {
        "the", "a", "an", "is", "was", "were", "and", "or", "but",
        "in", "on", "at", "to", "for", "of", "with", "it", "its",
        "this", "that", "they", "we", "i", "my", "our", "very",
        "so", "have", "had", "be", "been", "not", "no", "just"
    }

    positive_words = Counter()  # 4-5 stars
    negative_words = Counter()  # 1-2 stars

    for review in reviews:
        text = review.get("snippet", review.get("text", "")).lower()
        rating = review.get("rating", 3)

        # Extract meaningful words
        words = re.findall(r'\b[a-z]{4,}\b', text)
        words = [w for w in words if w not in stopwords]

        if rating >= 4:
            positive_words.update(words)
        elif rating <= 2:
            negative_words.update(words)

    return {
        "positive_themes": positive_words.most_common(20),
        "negative_themes": negative_words.most_common(20),
        "review_count": len(reviews),
        "avg_rating": sum(r.get("rating", 0) for r in reviews) / len(reviews) if reviews else 0
    }


# Analyse themes in competitor reviews
themes = extract_review_themes(reviews)

print("What customers LOVE (top words in 4-5 star reviews):")
for word, count in themes["positive_themes"][:10]:
    print(f"  {word}: {count}")

print("\nWhat customers HATE (top words in 1-2 star reviews):")
for word, count in themes["negative_themes"][:10]:
    print(f"  {word}: {count}")

LLM-Powered Sentiment Classification

For deeper analysis, passing review batches to a language model produces structured, categorised insights that keyword counting misses. This connects naturally to the ScrapeBadger MCP integration — an AI agent with Maps access can retrieve, analyse, and summarise review sentiment in a single workflow.

python

import requests
import json

def analyse_reviews_with_llm(reviews: list[dict], business_name: str) -> dict:
    """
    Use Claude to extract structured insights from review text.
    Identifies recurring themes, specific complaints, and praise patterns.
    """
    # Prepare review sample (limit to 50 for context window)
    review_texts = []
    for r in reviews[:50]:
        rating = r.get("rating", "?")
        text = r.get("snippet", r.get("text", ""))[:300]
        date = r.get("date", "")
        if text:
            review_texts.append(f"[{rating}⭐, {date}] {text}")

    reviews_formatted = "\n\n".join(review_texts)

    prompt = f"""Analyse these Google Maps reviews for {business_name}.

Reviews:
{reviews_formatted}

Extract and return as JSON:
1. "top_positives": Top 5 things customers consistently praise (with frequency estimate)
2. "top_complaints": Top 5 recurring complaints (with frequency estimate)
3. "service_themes": Key service-related mentions
4. "product_themes": Key product/food/item-related mentions
5. "overall_sentiment": brief 2-sentence summary
6. "urgent_issues": Any complaints requiring immediate attention

Return only valid JSON, no other text."""

    response = requests.post(
        "https://api.anthropic.com/v1/messages",
        headers={
            "x-api-key": "your_anthropic_key",
            "anthropic-version": "2023-06-01",
            "content-type": "application/json"
        },
        json={
            "model": "claude-sonnet-4-20260514",
            "max_tokens": 1000,
            "messages": [{"role": "user", "content": prompt}]
        }
    )

    try:
        content = response.json()["content"][0]["text"]
        return json.loads(content)
    except (json.JSONDecodeError, KeyError):
        return {"error": "Analysis failed"}


insights = analyse_reviews_with_llm(reviews, "Dishoom Covent Garden")
print(json.dumps(insights, indent=2))

What Google Maps Reviews Are Actually Used For

The technical capabilities above power real business decisions across several domains:

Franchise and multi-location quality control. A restaurant chain with 200 locations cannot have someone reading Google reviews for every site every week. A pipeline that monitors rating shifts, flags 1-star surges, and surfaces recurring complaint themes across all locations is the difference between catching a problem at one underperforming branch in week 2 and discovering it in a quarterly meeting in month 4.

Competitive market entry research. Before opening in a new location, you want to know what the competition's customers are saying. Not their aggregate rating — that's visible without scraping — but the specific complaints (parking, noise, wait times, pricing) that a competitor consistently receives. These are your differentiation opportunities. The businesses opening new locations successfully use scraped review data to design around existing market gaps.
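
A sketch of that complaint mining, reusing the get_place_reviews and extract_review_themes helpers defined earlier (the place IDs are placeholders):

python

# Surface complaints unique to each competitor in a target market
competitor_ids = {
    "Competitor A": "ChIJ...",   # placeholder place IDs
    "Competitor B": "ChIJ...",
}

complaint_sets = {}
for name, pid in competitor_ids.items():
    revs = get_place_reviews(pid, sort_by="lowest_rating", max_pages=5)
    themes = extract_review_themes(revs)
    complaint_sets[name] = {word for word, _ in themes["negative_themes"]}

# Complaints one competitor draws that the others don't are differentiation leads
for name, complaints in complaint_sets.items():
    others = set().union(*(v for k, v in complaint_sets.items() if k != name))
    print(f"{name}: {sorted(complaints - others)}")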

Lead generation qualification. A business with a 3.2-star rating and hundreds of complaints about their outdated website, slow response times, and poor customer service is a sales prospect for an agency. Scraping Maps reviews at scale to identify businesses with specific, addressable problems is a lead generation strategy that produces far more qualified prospects than any directory search.
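
A sketch of that qualification filter, again reusing the find_place_id and get_place_reviews helpers (the search query and pain-signal keywords are illustrative):

python

# Flag low-rated businesses whose reviews mention specific, addressable problems
PAIN_SIGNALS = ["website", "never responded", "slow", "voicemail", "no reply"]

prospects = []
for place in find_place_id("plumber", "Manchester, UK"):
    if (place.get("rating") or 5) >= 3.8:
        continue  # only visibly struggling businesses qualify
    recent = get_place_reviews(place["place_id"], sort_by="lowest_rating", max_pages=2)
    signals = [kw for kw in PAIN_SIGNALS
               if any(kw in (r.get("snippet") or "").lower() for r in recent)]
    if signals:
        prospects.append(
            {"name": place["name"], "rating": place["rating"], "signals": signals}
        )

for p in prospects:
    print(f"{p['name']} ({p['rating']} stars): {', '.join(p['signals'])}")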

Academic and policy research. Google Maps reviews are increasingly used in academic research because they represent a large, timestamped, geographically tagged dataset of consumer experience. A 2025 study in the Journal of Medical Internet Research analysed 55,043 Google Maps reviews from primary healthcare locations in Finland and Spain to track changes in patient satisfaction before, during, and after COVID-19 — the kind of temporal sentiment analysis that review data uniquely enables.

A Note on Legality and Ethics

Scraping publicly visible Google Maps reviews — content that any visitor can read without logging in — is generally treated as lawful for research and analysis purposes. The Ninth Circuit's hiQ v. LinkedIn ruling held that automated access to publicly available web data does not violate the Computer Fraud and Abuse Act, though terms-of-service and privacy obligations are assessed separately.

The practical guidelines that matter: scrape at rates that don't degrade Google's service, don't attempt to scrape authenticated or private content, handle any personal data (reviewer names, photos) in compliance with GDPR and CCPA, and use data for analysis rather than direct redistribution.

Reviewer names and profile URLs are personal data under GDPR. If you're operating in the EU or processing EU residents' data, ensure your pipeline has a legal basis for processing and appropriate data minimisation — keep the review text and rating, delete the reviewer's personal identifiers if you don't need them for your analysis.
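
A sketch of that minimisation step, for records in the response shape shown earlier. Note that hashing is pseudonymisation rather than full anonymisation under GDPR; drop the hash entirely if you don't need per-reviewer deduplication:

python

import hashlib

SALT = "replace-with-your-own-secret"  # keep out of source control

def minimise_review(review: dict) -> dict:
    """Keep analytical fields, strip direct personal identifiers."""
    user = review.get("user", {})
    return {
        # Pseudonymous key: usable for dedup, not reversible to a name
        "reviewer_hash": hashlib.sha256(
            (SALT + user.get("link", "")).encode()
        ).hexdigest()[:16],
        "is_local_guide": user.get("local_guide", False),
        "reviewer_review_count": user.get("reviews"),
        "rating": review.get("rating"),
        "text": review.get("snippet"),
        "iso_date": review.get("iso_date"),
    }

clean_reviews = [minimise_review(r) for r in reviews]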


The data is public. The technical path to accessing it reliably is well-established. Whether you're monitoring your own reputation, benchmarking competitors, or building a sentiment intelligence platform, Google Maps reviews represent one of the richest available sources of real consumer experience data — updated continuously, geographically precise, and covering over 200 million businesses worldwide.

The ScrapeBadger Google Maps endpoint handles the infrastructure — JavaScript rendering, anti-bot bypass, session management, pagination — and returns clean structured JSON from place search through to full review extraction. The complete documentation covers the full Maps endpoint suite including place search, place details, photos, and reviews.

Start your free trial and make your first Maps API call in under five minutes.

Written by Thomas Shultz

Thomas Shultz is the Head of Data at ScrapeBadger, working on public web data, scraping infrastructure, and data reliability. He writes about real-world scraping, data pipelines, and turning unstructured web data into usable signals.
