Frequently Asked Questions
If you notice something missing or have a question of your own, please feel free to contact us. Thank you!
Our service is completely free. There are no hidden costs.
For unverified websites, the limits are 200 internal pages and 1,000 total requests. Verified websites have much larger limits: 50,000 internal pages and 500,000 total requests.
Some websites block robots entirely: the site works perfectly when you visit it with your browser, but our robot is turned away. Amazon, for example, blocks most automated requests. To prevent these false positives from 'contaminating' your scan results, you can add such URLs to your personal website blacklist (see the pattern examples below).
There are various reasons why such errors can occur. The most common is that our robot is blocked by security measures, which happens especially on sites behind Content Delivery Networks such as Cloudflare: our robot is classified as potentially unwanted and blocked. Unfortunately, there is nothing we can do about this on our end, but you can ask your provider whether our bot can be allowed. Since our robot identifies itself accordingly, you can add it to a whitelist; alternatively, you may be able to whitelist our IP address, 5.9.153.60. If you have any questions or need assistance, feel free to contact us.
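If you manage your own web server, an allow rule for our IP address is often enough. A minimal sketch for nginx, assuming an existing IP-based block list (adapt it to your actual configuration):

```nginx
location / {
    # Let the dislike404.com scanner through (IP address given above).
    allow 5.9.153.60;
    # ... your existing allow/deny rules ...
}
```

On Cloudflare and similar CDNs, the equivalent is usually an allow rule created in the dashboard rather than in a config file.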
We maintain a constantly updated global blacklist of sites that are known to be unscannable or that do not wish to be scanned. In addition, we evaluate each site's robots.txt file before checking individual URLs for availability: if a website does not want to be scanned, we respect that, and the site will not appear in the results. If you believe we have made a mistake, or that our blacklist is missing an entry, please let us know. Thank you!
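Our exact implementation is not public, but the robots.txt check works conceptually like Python's standard urllib.robotparser; the user-agent token 'dislike404' below is a placeholder for this sketch, not necessarily the string our robot really sends:

```python
import urllib.robotparser

# Minimal sketch: decide whether a URL may be checked according to robots.txt.
parser = urllib.robotparser.RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt

# 'dislike404' is a placeholder user-agent token for this sketch.
if parser.can_fetch("dislike404", "https://example.com/some/page.html"):
    print("URL may be checked")
else:
    print("robots.txt excludes this URL; it will not appear in the results")
```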
With the blacklist, you can exclude individual sections of your own website, or entire external websites, from being scanned. This can be useful if, for example, an external website blocks robots, or if you do not want certain areas of your site to be checked.
- To exclude an exact URL from the scanner, specify the complete URL: "https://example.com/dont_scan_me.html"
- To exclude the area within "https://example.com/dont_scan_me/", use the asterisk: "https://example.com/dont_scan_me/*"
- Subdomains can also be excluded using the asterisk: "*dontscan.example.com"
- To exclude an entire domain, use "*example.com"
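As a rough illustration of these wildcard semantics (a minimal sketch; our actual matcher may differ in details), the patterns behave like shell-style wildcards applied to the full URL or to the host name, similar to Python's fnmatch:

```python
from fnmatch import fnmatch
from urllib.parse import urlsplit

# Illustrative blacklist entries; 'example.org' is just a stand-in domain.
blacklist = [
    "https://example.com/dont_scan_me.html",  # exact URL
    "https://example.com/dont_scan_me/*",     # everything under this path
    "*dontscan.example.com",                  # a subdomain
    "*example.org",                           # an entire domain
]

def is_blacklisted(url: str) -> bool:
    host = urlsplit(url).hostname or ""
    # A pattern excludes the URL if it matches the full URL or the host.
    return any(fnmatch(url, p) or fnmatch(host, p) for p in blacklist)

print(is_blacklisted("https://example.com/dont_scan_me/old.html"))  # True
print(is_blacklisted("https://sub.example.org/page.html"))          # True
print(is_blacklisted("https://example.com/scan_me.html"))           # False
```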
dislike404.com is a one-person hobby project. The idea is to offer a service that simply doesn't exist on the internet right now: free website scans! Currently, a few Google Ads are placed on the homepage to help cover the server costs.
If your website cannot be scanned, please check the following:
- Do you have a robots.txt file (https://example.com/robots.txt) that excludes our robot? (See the example after this list.)
- Are you using a CDN service such as Cloudflare that classifies the dislike404.com robot as 'malicious' and therefore denies it access?
- Is your site actually online, and is the URL correct?
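On the first point: a robots.txt containing rules like the following excludes every robot from the whole site, including ours (shown here as a generic example; our exact user-agent token is not documented on this page):

```
User-agent: *
Disallow: /
```

Removing or narrowing such a rule allows the scan to proceed.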