Robots.txt Validator
Paste your robots.txt to validate syntax, inspect crawl rules, and test URLs
About Robots.txt Validator
The Robots.txt Validator parses and validates your robots.txt file against the Robots Exclusion Protocol (REP) standard. It checks for syntax errors, unknown directives, missing wildcard rules, overly restrictive crawl blocks, and invalid Sitemap URLs — giving you a clear picture of how search engines and other web crawlers will interpret your file.
Beyond validation, the tool lets you inspect each User-agent group with its Allow/Disallow rules and Crawl-delay settings, view declared Sitemap URLs, and test any URL path to see instantly whether a specific bot would be permitted or blocked. All processing happens locally in your browser — your data is never sent to a server.
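To see how an allow/block decision can be made, here is a minimal sketch of REP rule matching (an illustration of the standard's longest-match rule, not this tool's actual implementation): the most specific (longest) matching rule wins, Allow beats Disallow on ties, and a path with no matching rule is allowed. The `rules` list and function names are hypothetical.

```python
import re

def rule_to_regex(rule: str) -> str:
    # In REP patterns, '*' matches any character sequence and a trailing
    # '$' anchors the end of the URL path.
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return "^" + pattern

def is_allowed(path: str, rules: list[tuple[str, str]]) -> bool:
    """rules: list of ('allow' | 'disallow', pattern) pairs for one
    User-agent group. Longest matching pattern wins; 'allow' wins ties;
    no match means the path is allowed."""
    best_len, allowed = -1, True
    for kind, rule in rules:
        if not rule:
            continue  # 'Disallow:' with an empty value matches nothing
        if re.match(rule_to_regex(rule), path):
            if len(rule) > best_len or (len(rule) == best_len and kind == "allow"):
                best_len, allowed = len(rule), (kind == "allow")
    return allowed

rules = [("disallow", "/admin/"), ("allow", "/admin/public/")]
print(is_allowed("/admin/secret", rules))       # False: Disallow /admin/ matches
print(is_allowed("/admin/public/page", rules))  # True: longer Allow rule wins
print(is_allowed("/blog/post", rules))          # True: no rule matches
```

This longest-match tie-breaking is why an `Allow: /admin/public/` line can carve an exception out of a broader `Disallow: /admin/` block.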
How to Use Robots.txt Validator
- Paste the full contents of your robots.txt file into the text area above, or click Load Example to try a sample.
- The validator instantly checks for syntax errors, missing User-agent groups, invalid Sitemap URLs, and common SEO pitfalls.
- Review the Validation Issues panel — errors are shown in red, warnings in yellow, and informational notes in blue.
- Browse the User-agent Groups section to see each crawler's Allow/Disallow rules and optional Crawl-delay in a clear table.
- To test a specific URL, enter a path (e.g. /blog/post) or a full URL in the Test a URL field, choose a User-agent, and see whether that bot is allowed or blocked, with the matched rule shown.
- Check the Sitemaps panel to review all declared sitemap URLs and verify they are valid.
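The grouping and sitemap checks described above can be sketched as follows. This is a simplified, hypothetical parser (it ignores some REP edge cases, such as comments and multi-group precedence) that splits a file into User-agent groups and flags Sitemap values that are not absolute http(s) URLs; the function name and structure are assumptions, not this tool's internals.

```python
from urllib.parse import urlparse

def parse_robots(text: str):
    """Split a robots.txt into {agent: [(field, value), ...]} groups and a
    list of (sitemap_url, looks_valid) pairs. Simplified for illustration."""
    groups, sitemaps = {}, []
    current, expecting_agents = [], False
    for raw in text.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments
        if not line or ":" not in line:
            continue
        field, _, value = line.partition(":")
        field, value = field.strip().lower(), value.strip()
        if field == "user-agent":
            if not expecting_agents:
                current = []  # a User-agent after rules starts a new group
            current.append(value)
            groups.setdefault(value, [])
            expecting_agents = True
        elif field in ("allow", "disallow", "crawl-delay"):
            for agent in current:
                groups[agent].append((field, value))
            expecting_agents = False
        elif field == "sitemap":
            u = urlparse(value)
            looks_valid = u.scheme in ("http", "https") and bool(u.netloc)
            sitemaps.append((value, looks_valid))
    return groups, sitemaps

example = """User-agent: *
Disallow: /admin/
Crawl-delay: 10

Sitemap: https://example.com/sitemap.xml
Sitemap: not-a-url
"""
groups, sitemaps = parse_robots(example)
print(groups)    # {'*': [('disallow', '/admin/'), ('crawl-delay', '10')]}
print(sitemaps)  # [('https://example.com/sitemap.xml', True), ('not-a-url', False)]
```

A real validator also has to handle details like case-insensitive field names across crawlers and groups that list several User-agent lines before their rules, which is why the tool's issue panel distinguishes errors from warnings.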