Frequently Asked Questions

The most important questions about robots.txt: why you need it and how to implement it



How to configure a robots.txt file?

A robots.txt file is a plain text file that usually contains an enumeration of the following directives (a complete example follows this list):

- User-agent: specifies which bot the following rules apply to. A User-agent line is always used in combination with the Allow and Disallow directives, which allow (or disallow) the bot to visit certain paths on the website.
- Allow: the listed path may be visited by bots
- Disallow: the listed path may not be visited by bots
- Sitemap: specifies a direct path to a sitemap. A sitemap file is placed on the server and usually gives a complete enumeration of all the pages of the website that bots are allowed to visit.

- Crawl-delay: specifies the minimum delay between successive robot visits, in seconds
- Request-rate: specifies how many pages a robot may request per period of time (for example, 1/10 means one page every 10 seconds)
- Visit-time: specifies the time window of the day during which the robot may crawl the site
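For instance, a minimal robots.txt combining these directives might look as follows (the /admin/ path and the sitemap location are only illustrative placeholders):

    User-agent: *
    Disallow: /admin/
    Allow: /
    Crawl-delay: 10
    Sitemap: https://www.example.com/sitemap.xml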

It is also possible to add whitespace so the text is easier for people to read; this extra spacing is not interpreted by the robot.
Comment lines are preceded by a hash (#) and are likewise ignored by the robot.
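As a sketch, comments and blank lines can be used to group rules per bot; the bot name ExampleBot and the paths here are hypothetical:

    # Rules for all bots
    User-agent: *
    Disallow: /private/

    # Stricter rules for one specific bot
    User-agent: ExampleBot
    Disallow: /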

For useful examples of robots.txt implementations, please visit our robots.txt example page.

