What is the dislike404.com Robot?
Dislike404Bot is the dedicated crawler for dislike404.com, a service designed to help maintain the health of websites by detecting and reporting problems such as broken links and HTTP status errors.
Crawler Activity
Dislike404Bot systematically scans websites to index their content for our website monitoring and error detection service. It adheres to the standard protocols of respectful crawling, including obeying the instructions in your site's robots.txt file.
Respect for Webmasters
Dislike404Bot is designed to have a minimal impact on your website's performance. It controls its crawl rate to avoid overwhelming server resources and respects the directives specified in your site's robots.txt.
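As an illustration only, the Python sketch below shows the general pattern of rate-limited, robots.txt-aware crawling described above. The delay value, URL list, and function name are assumptions for the example and do not reflect Dislike404Bot's actual implementation.
Code:
import time
import urllib.error
import urllib.request
import urllib.robotparser

# Assumed values for illustration; Dislike404Bot's real settings are not published.
CRAWL_DELAY_SECONDS = 5
USER_AGENT = "Mozilla/5.0 (compatible; Dislike404Bot/1.0; +http://dislike404.com/bot/)"

def polite_crawl(urls, robots_url):
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(robots_url)
    rp.read()  # download and parse the site's robots.txt once

    for url in urls:
        if not rp.can_fetch("Dislike404Bot", url):
            continue  # honour the site's Disallow rules for this user-agent
        req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        try:
            with urllib.request.urlopen(req) as resp:
                print(url, resp.status)   # healthy page, e.g. 200
        except urllib.error.HTTPError as err:
            print(url, err.code)          # error to report, e.g. 404
        time.sleep(CRAWL_DELAY_SECONDS)   # pause so the server is not overwhelmed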
Managing Dislike404Bot Access
If you wish to prevent Dislike404Bot from crawling your website, you can do so by adding the following rules to your robots.txt file:
Code:
User-agent: Dislike404Bot
Disallow: /
This directive tells Dislike404Bot to refrain from accessing any part of your site. If you want to block the crawler from specific areas of your site, you can specify the paths accordingly. For example, to block Dislike404Bot from accessing the content in your /private directory, you would use:
Code:
User-agent: Dislike404Bot
Disallow: /private/
These instructions will ensure that Dislike404Bot respects your site's boundaries and only accesses areas you have permitted.
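If you want to confirm that your rules behave as intended before the crawler next visits, you can test them locally with Python's standard urllib.robotparser module. The robots.txt content and example URLs below are placeholders; substitute your own.
Code:
import urllib.robotparser

# Mirror of the example rules above; adjust to match your own robots.txt.
robots_txt = """User-agent: Dislike404Bot
Disallow: /private/
"""

rp = urllib.robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Paths under /private/ are blocked for Dislike404Bot, everything else is allowed.
print(rp.can_fetch("Dislike404Bot", "https://example.com/private/report.html"))  # False
print(rp.can_fetch("Dislike404Bot", "https://example.com/index.html"))           # True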
Crawler Details
Version | 1.0
User-agent | Mozilla/5.0 (compatible; Dislike404Bot/1.0; +http://dislike404.com/bot/)
Obeys robots.txt | Yes
Obeys crawl delay | No
IP address | 5.9.153.60
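Because any client can claim to be Dislike404Bot in its User-Agent header, you may want to cross-check requests against the details above. The Python sketch below is one possible check, assuming the user-agent token and IP address listed in the table; the function name is hypothetical.
Code:
import ipaddress

# Published details from the table above.
KNOWN_BOT_IP = ipaddress.ip_address("5.9.153.60")
BOT_TOKEN = "Dislike404Bot/1.0"

def looks_like_dislike404bot(remote_ip, user_agent):
    """Accept a request as genuine only if both the source IP and the UA token match."""
    return ipaddress.ip_address(remote_ip) == KNOWN_BOT_IP and BOT_TOKEN in user_agent

# A spoofed user-agent coming from a different address does not pass the check.
print(looks_like_dislike404bot("5.9.153.60",
      "Mozilla/5.0 (compatible; Dislike404Bot/1.0; +http://dislike404.com/bot/)"))  # True
print(looks_like_dislike404bot("203.0.113.7", "Dislike404Bot/1.0"))                 # False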
Data Use
The data collected by Dislike404Bot is used exclusively for providing and improving the Dislike404 service. This includes notifying website owners
of detected errors.