The Screaming Frog SEO Spider is a desktop application that crawls websites similarly to how Googlebot does, extracting on-page and technical data for comprehensive SEO audits. Available for Windows, macOS, and Linux, it can crawl up to 500 URLs for free; purchasing an annual license for £199 ($279/year) removes this limit and unlocks advanced features like custom source code search, JavaScript rendering, and API integrations.

At its core, Screaming Frog helps SEOs identify common issues—broken links, duplicate content, missing metadata, redirect chains, and more—by presenting data in sortable tables and exportable spreadsheets.
Unlike many cloud-based auditors, it runs locally on your machine, enabling faster, more secure crawls and granular control over crawl depth, URL filters, and user-agent settings.
This crawler has been essential to my technical SEO site audits and a vital component of dozens of site migrations, ensuring URLs and SEO data remain intact.
Let’s take a look at some of the features offered by this app.
Screaming Frog’s Core Features & Capabilities
1. Comprehensive Crawl Overview
The Crawl Overview and Summary Stats give an at-a-glance snapshot of page counts, status codes, metadata issues, and link metrics. This immediate insight helps prioritize areas requiring deeper investigation—whether it’s a flood of 404 errors or excessive duplicate titles.
2. Metadata & On-Page Analysis
Screaming Frog extracts key on-page elements—title tags, meta descriptions, H1s, H2s, canonical tags—identifying missing, duplicate, or overly long tags. This automated auditing replaces time-consuming manual checks and ensures consistency across thousands of pages.
3. HTTP Status & Redirect Chains
Accurate reporting of HTTP response codes (200, 301, 302, 404, 500, etc.) and detailed redirect chain tracing enables SEOs to pinpoint broken assets and inefficient redirect loops that harm crawl efficiency and link equity.
4. XML Sitemap Generation
The tool can generate XML sitemaps on the fly, including images and hreflang annotations, ensuring that all important pages are discoverable by search engines.
5. Custom Extraction & Advanced Configurations
Using XPath and regex, you can extract virtually any element from page source—structured data, Open Graph tags, embedded schema—empowering highly tailored audits beyond the built-in reports.
6. JavaScript Rendering
By integrating Chromium, Screaming Frog renders JavaScript-heavy pages, capturing content only visible post-render. This is critical for single-page applications (SPAs) and JS frameworks.
7. API Integrations (GA, GSC, PageSpeed)
Linking to Google Analytics and Search Console APIs surfaces orphaned pages (no GA/GSC data) and performance metrics directly in the crawl. The PageSpeed Insights API integration allows bulk Core Web Vitals audits per URL.
Why I Use Screaming Frog in Every Project
Speed & Local Processing
Because it runs on your local machine, crawl speed hinges on your hardware, not shared cloud resources—ideal for large sites. Switching from RAM to database storage mode lets you crawl millions of URLs without crashing your machine.
Unmatched Flexibility
Screaming Frog’s deep customization—from crawl limits and path exclusions to header overrides—ensures you can tailor each audit to the site’s architecture and objectives. No two crawls need ever be the same.
Granular Data Export
Every report—be it internal links, canonical status, or page titles—can be exported to Excel or CSV for deeper analysis in your favorite BI tools. This data-first approach underpins data-driven SEO strategies.
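Because every export is just a CSV, quick command-line checks are possible before the data ever reaches a BI tool. Here's a minimal sketch that tallies status codes from an "Internal: All" export; the filename and column index are assumptions, so adjust them to match your own export layout.

```bash
#!/usr/bin/env bash
# Tally HTTP status codes from a Screaming Frog "Internal: All" CSV export.
# Assumes the Status Code sits in column 3 (Address, Content Type, Status Code, ...)
# and that earlier columns contain no commas; adjust -v col= for your export.
awk -F',' -v col=3 'NR > 1 { counts[$col]++ } END { for (c in counts) print counts[c], c }' internal_all.csv | sort -rn
```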
Integration with Existing Workflows
By pulling in GA, GSC, Ahrefs, Moz, and PageSpeed data, the SEO Spider centralizes multi-source insights in one interface, cutting down on tool switching and manual data joins.
Free Version for Quick Health Checks
The free 500-URL limit makes Screaming Frog the go-to tool for small sites, blogs, or quick pre-launch checks—no license needed.
Community & Continuous Updates
With 12+ years in the market, Screaming Frog has a robust support community, detailed user guides, and frequent updates—new features like Core Web Vitals audits attest to its ongoing evolution.
Real-World Use Cases
Site Migrations & Redesigns
I rely on Screaming Frog for pre- and post-migration crawl comparisons: using List Mode to crawl old URLs, check their status codes, and map redirects to the new structure. This prevents traffic loss and broken links.
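Before uploading a full redirect map, I sometimes spot-check a handful of legacy URLs straight from the terminal. This is a minimal sketch, assuming old_urls.txt holds one URL per line; curl's %{http_code} and %{redirect_url} write-out variables report each response and its redirect target.

```bash
#!/usr/bin/env bash
# Spot-check legacy URLs post-migration: status code, redirect target, original URL.
while IFS= read -r url; do
  curl -s -o /dev/null -w "%{http_code} %{redirect_url} ${url}\n" "$url"
done < old_urls.txt
```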
Technical SEO Audits
A typical audit uncovers thousands of errors in minutes: missing alt text, duplicate meta tags, blocked resources, and more. I document and prioritize fixes via exportable spreadsheets that stakeholders can easily digest.
Ongoing Health Monitoring
Monthly crawls, scheduled via the command-line interface (CLI), export results to automated dashboards (Looker Studio), flagging new issues before they impact rankings.
Core Web Vitals Optimization
By combining Screaming Frog with the PageSpeed Insights API, I bulk-report LCP, FID, and CLS for every key landing page—pinpointing slow assets and layout shifts at scale.
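For a quick look outside the crawler, the same PageSpeed Insights API can be queried directly. The sketch below is illustrative, assuming urls.txt lists one page per line and jq is installed; heavier use requires appending an API key via &key=.

```bash
#!/usr/bin/env bash
# Pull lab LCP and CLS per URL from the PageSpeed Insights API (v5).
# URLs containing query strings should be percent-encoded first.
while IFS= read -r url; do
  curl -s "https://www.googleapis.com/pagespeedonline/v5/runPagespeed?url=${url}&strategy=mobile" |
    jq -r '[.id,
            (.lighthouseResult.audits["largest-contentful-paint"].displayValue // "n/a"),
            (.lighthouseResult.audits["cumulative-layout-shift"].displayValue // "n/a")] | @tsv'
done < urls.txt
```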
Screaming Frog vs. Alternatives
| Tool | Crawl Depth Control | JS Rendering | API Integrations | Price |
|---|---|---|---|---|
| Screaming Frog | ✅ Highly granular | ✅ Yes | ✅ GA, GSC, PSI | £199/year (unlimited) / Free |
| Ahrefs Site Audit | ✅ Basic | ❌ No | ✅ Ahrefs API | $99+/month |
| SEMrush Site Audit | ✅ Basic | ⚠️ Limited | ✅ SEMrush API | $119.95+/month |
| DeepCrawl | ✅ Granular | ✅ Yes (paid) | ✅ Multiple APIs | $89+/month |
Screaming Frog stands out for its local performance, cost-effectiveness, and extensive customization options—particularly on large or JavaScript-driven sites.
Advanced Tips & Tricks
1. Custom Extraction with XPath
Use the Extraction tab to pull structured data—like price, SKU, or JSON-LD schema—directly into your crawl results for compliance checks.
Purpose: Extract structured data (e.g., prices, SKUs, schema) for SEO audits, eCommerce compliance, or schema validation.
How to Perform:
- Open Screaming Frog and go to Configuration > Custom > Extraction.
- Click “Add” to create a new extraction.
- Choose XPath, CSSPath, or Regex depending on your needs.
- Enter the XPath query (e.g., `//span[@class='price']`).
- Assign a label like “Price” and run your crawl.
Example:
- For a product page with `<span class="price">$49.99</span>`, use `//span[@class='price']`.
- The extracted value “$49.99” will appear in a custom column in your crawl export.
What to Expect:
- Screaming Frog will output a custom column per extraction.
- Great for auditing missing prices, schema fields, or meta tags.
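Before committing to a full crawl, it can save time to validate an XPath locally. A small sketch, assuming xmllint (from libxml2) is installed and using a placeholder product URL:

```bash
# Test an XPath expression against live HTML before adding it to
# Configuration > Custom > Extraction. Parser warnings are silenced.
curl -s https://example.com/product | xmllint --html --xpath "//span[@class='price']/text()" - 2>/dev/null
```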
2. JavaScript-Rendered Crawls
Enable Rendering to crawl SPAs. Adjust the AJAX timeout and window size settings to mimic different devices.
Purpose: Crawl content rendered client-side (common in SPAs and JavaScript-heavy frameworks like React, Vue, Angular).
How to Perform:
- Go to Configuration > Spider > Rendering.
- Choose JavaScript from the “Rendering” drop-down menu.
- Adjust AJAX timeout (default is 5 seconds, increase if needed).
- Set Window Size to simulate devices (e.g., 1366×768 for desktop or 375×667 for mobile).
- Start the crawl.
Example:
- Crawling a React site that only loads product listings after the DOM renders.
- Without JS rendering: pages appear blank.
- With JS rendering: content appears and metadata can be analyzed.
What to Expect:
- Longer crawl times due to browser emulation.
- Richer data from dynamically-loaded content.
- Accurate representation of the rendered HTML seen by users.
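A quick way to judge whether a site needs rendered crawling at all is to compare its raw HTML against a headless-browser DOM dump. A minimal sketch, assuming headless Chrome is installed as google-chrome (swap in chromium if that's your binary):

```bash
#!/usr/bin/env bash
# Compare raw vs rendered HTML size; a large gap hints at client-side content.
curl -s https://example.com > raw.html
google-chrome --headless --disable-gpu --dump-dom https://example.com > rendered.html
wc -c raw.html rendered.html
```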
3. Scheduled CLI Crawls
Automate weekly crawls via the CLI, export results to CSV, and feed them into Looker Studio for real-time monitoring; a cron sketch follows at the end of this tip.
Purpose: Automate site audits and integrate results into dashboards like Google Looker Studio.
How to Perform:
- Ensure Screaming Frog is installed; the headless CLI ships with the desktop app.
- Write a command in a script or terminal like:

```
ScreamingFrogSEOSpiderCli --crawl https://example.com --output-folder "C:\CrawlReports" --headless --export-tabs "Internal:All" --save-crawl
```

- Schedule with Windows Task Scheduler or cron (Linux/macOS).
- Optionally, pipe data to BigQuery, Google Sheets, or Looker Studio.
Example:
- Automate a weekly crawl every Monday at 9am.
- Export internal pages, response codes, and metadata to CSV.
What to Expect:
- Scheduled .csv or .xlsx exports without opening the GUI.
- Easily monitored site health with fewer manual steps.
- Crawl logs and results saved to predefined folders.
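On Linux or macOS, a single crontab entry can run the whole pipeline. This is a sketch mirroring the flags shown above and assuming the Linux launcher is named screamingfrogseospider; Windows users would wire the same command into Task Scheduler.

```bash
# crontab -e: headless crawl every Monday at 9am, exports saved for dashboards.
# Launcher name and output path are assumptions; adjust for your install.
0 9 * * 1 screamingfrogseospider --crawl https://example.com --headless --output-folder /home/user/crawls --export-tabs "Internal:All" --save-crawl
```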
4. Orphan Page Detection
Configure GA & GSC API access to identify pages missing from analytics data, often overlooked yet important for internal linking and traffic growth.
Purpose: Identify pages receiving impressions or sessions but missing from internal links (aka orphans).
How to Perform:
- Go to Configuration > API Access and authenticate GA and GSC.
- Choose the correct property and profile.
- Enable “Orphan URLs” in crawl settings (tick “Include URLs From Analytics” and/or “Search Console”).
- Start crawl; it will merge API data with site crawl.
Example:
- A blog post receives organic traffic but isn’t linked anywhere on the site.
- Screaming Frog flags it as an orphan.
What to Expect:
- A new Orphaned URLs report under Reports > Orphaned URLs.
- Pages flagged for internal linking opportunities.
- Improved crawl depth and indexation when orphans are resolved.
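The same comparison can be reproduced manually when you want a second opinion. A minimal sketch, assuming gsc_urls.txt (URLs exported from Search Console) and crawled_urls.txt (the Address column of your Internal: All export), one URL per line:

```bash
#!/usr/bin/env bash
# URLs known to Search Console but absent from the crawl = orphan candidates.
comm -23 <(sort -u gsc_urls.txt) <(sort -u crawled_urls.txt) > orphan_candidates.txt
wc -l orphan_candidates.txt
```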
5. Database Storage Mode
Switch to SQL database crawl storage when auditing enterprise-scale sites (100k+ URLs) to avoid memory constraints.
Purpose: Improve performance when crawling large enterprise sites (100k+ URLs) by reducing memory load.
How to Perform:
- Go to File > Settings > Storage Mode.
- Choose Database Storage mode.
- Restart Screaming Frog for settings to take effect.
- Begin crawling.
Example:
- Site has over 200,000 URLs; memory-based mode crashes or slows down.
- Switch to database mode to ensure smooth crawl and easier recovery.
What to Expect:
- Crawl data is written to disk instead of RAM.
- Slower performance but significantly more stable.
- Ability to pause/resume massive crawls and avoid memory-related crashes.
6. Leveraging List Mode to Bypass URL Limit
Use List Mode to crawl a specific set of URLs—even on the free version—without being restricted by the 500-URL crawl limit.
Purpose:
Bypass the default crawl cap of 500 URLs on the free license by manually uploading a list of target URLs. Ideal for auditing legacy redirects, post-migration URL checks, or validating indexed pages in bulk.
How to Perform:
- Open Screaming Frog and go to Mode > List.
- Select a source: upload a `.txt` or `.csv` file, paste URLs manually, or pull from your clipboard.
- Go to Configuration > Spider > Limits.
- Untick both “Limit Crawl Total” and “Limit Crawl Depth” to allow unrestricted crawling of the uploaded list.
- Start the crawl.
Example:
You want to audit 2,000 landing pages from a sitemap or export of indexed URLs.
Create a .txt list of those URLs (see the sitemap-extraction sketch below) and upload them via List Mode. Screaming Frog will crawl every one—even without a paid license.
What to Expect:
- Full crawl of all URLs in your list, not capped at 500.
- Great for quick diagnostics of critical pages, even on the free version.
- No discovery of additional internal links unless you enable “Crawl Linked URLs” in Spider settings.
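To build the upload file itself, an XML sitemap can be flattened to plain URLs in one line. A sketch assuming GNU grep (for -P Perl regex support) and an illustrative sitemap location:

```bash
#!/usr/bin/env bash
# Extract <loc> values from a sitemap into a List Mode upload file.
curl -s https://example.com/sitemap.xml | grep -oP '(?<=<loc>)[^<]+' > urls.txt
wc -l urls.txt   # sanity check before Mode > List > Upload
```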
Conclusion
Screaming Frog SEO Spider’s blend of power, flexibility, and affordability makes it my go-to SEO crawler for projects of all sizes—from quick blog audits to complex enterprise migrations. Its local processing speed, deep customization, and array of advanced features (API integrations, JS rendering, custom extractions, Core Web Vitals auditing) simply can’t be replicated by cloud-only tools.
If you’re not yet using Screaming Frog—or only scratching the surface of its capabilities—now is the time to dive in. Download the free version for up to 500 URLs, explore the tutorials, and consider the full license (£199/year) to unlock its full potential.
Ready to supercharge your SEO audits?
Download Screaming Frog SEO Spider
Hi, I’m Adam — a Denver-based SEO and content strategist with over 10 years of experience helping websites climb the search rankings and increase conversions. Whether it’s a site audit, keyword strategy, or a full-blown content overhaul, I bring a creative, technical, and human approach to digital marketing.

