Robots.txt Generator
Build a robots.txt file visually. Control how search engines and AI crawlers access your site.
User-Agent Rules
Directives
Preview
What is a Robots.txt File?
A robots.txt file is a text file placed in your website's root directory that tells search engine crawlers and AI bots which pages they can and cannot access. It's a fundamental part of SEO and web crawling control.
How to Use the Robots.txt Generator
Use our visual builder to create rules without writing code. Select quick presets for common scenarios, or manually add User-Agent rules with Allow and Disallow paths. Add your sitemap URL and other directives, then download or copy the generated file.
Steps:
- Choose a preset or start with a blank slate
- Add rules for specific crawlers (Googlebot, GPTBot, etc.)
- Specify Allow and Disallow paths
- Add your sitemap URL (recommended)
- Download or copy the generated robots.txt
- Upload to your website root (e.g., https://example.com/robots.txt)
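Once generated, the file is plain text. A typical output from the steps above might look like the following (the /admin/ path is a hypothetical example; the sitemap URL uses the example.com placeholder from above):

```
User-agent: *
Disallow: /admin/
Allow: /

Sitemap: https://example.com/sitemap.xml
```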
Supported User-Agents
- * (All crawlers)
- Googlebot (Google Search)
- Bingbot (Bing Search)
- GPTBot (ChatGPT Training)
- ChatGPT-User (ChatGPT Retrieval)
- ClaudeBot (Claude Training)
- PerplexityBot (Perplexity AI)
- Google-Extended (Google Generative AI)
Common Rules
Block all crawlers: Disallow: / for User-agent: *
Block AI bots: Create rules for GPTBot, ClaudeBot, PerplexityBot with Disallow: /
Allow Googlebot, block others: Allow all for Googlebot, Disallow: / for User-agent: *
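As a sketch of the last pattern, a file that lets Googlebot crawl everything while blocking all other crawlers looks like this. Compliant crawlers follow the most specific matching User-agent group, so Googlebot uses its own rules and ignores the wildcard block:

```
User-agent: Googlebot
Allow: /

User-agent: *
Disallow: /
```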
Frequently Asked Questions
Do I need a robots.txt file?
It's not required, but highly recommended. It gives you control over how crawlers access your site and can improve crawl efficiency.
Where do I place the robots.txt file?
In your website's root directory, so it is served at https://example.com/robots.txt. Most hosting providers let you upload it via FTP/SFTP to the root web directory (often named public_html).
Can I block specific pages from Google?
Yes. Add Disallow: /path-to-page/ under a Googlebot (or *) rule to prevent that page from being crawled. Other pages remain crawlable by default, so you don't need an explicit Allow rule for them.
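For example, to keep Googlebot out of a single hypothetical /private-page/ path while leaving the rest of the site crawlable:

```
User-agent: Googlebot
Disallow: /private-page/
```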
Will robots.txt hide my pages from Google Search?
Robots.txt prevents crawling, not indexing. Pages blocked in robots.txt can still appear in search results if they are linked from elsewhere. Use a noindex meta tag if you want to prevent indexing, and note that the page must remain crawlable for search engines to see that tag.
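If keeping a page out of the index is the goal, the noindex directive goes in the page's HTML head, as in this minimal sketch:

```html
<meta name="robots" content="noindex">
```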
How long does it take for changes to take effect?
Search engines re-fetch robots.txt periodically rather than on every request; Google, for example, generally caches it for up to 24 hours. Changes typically take effect within a day or two.
Can I block AI crawlers?
Yes! Use our "Block AI Crawlers" preset or manually add rules for GPTBot, ClaudeBot, and PerplexityBot.
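The preset produces one rule group per bot, along the lines of the following, which opts those AI crawlers out while leaving search crawlers unaffected (the exact list depends on which bots you select):

```
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /
```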
Power up your SEO workflow
BreezyTools Pro removes all ads, gives you early access to new tools, and supports ongoing development. Less than a coffee a month.