Frequently Asked Questions

Welcome to our FAQ page, where you can find answers to common questions about our services. If your question isn't covered here, please don't hesitate to contact us for further assistance.

What does the service cost?

Our service is completely free of charge. We're committed to providing a helpful tool that supports webmasters and developers in maintaining the health of their websites without any financial barriers. Enjoy the benefits of regular site scans and error reports at no cost!

What are the current limits for scanning?

For unverified websites, the limits are set at 200 internal pages and 1,000 requests. Verified websites enjoy significantly higher limits, allowing up to 50,000 internal pages and 500,000 total requests.

Why are some external websites reported with 'method not allowed' or 'connection failed'?

Our robot is dedicated to scanning websites for errors, but not all websites accommodate scanning requests. Occasionally, security measures such as those employed by Content Delivery Networks (CDNs) like Cloudflare may misinterpret our robot's intent and block it as a potential threat. If you encounter such issues with specific websites, we recommend adding them to the blacklist feature we provide, so your scan list remains free of these obstructions.

Why are there many 'Connection Failed' errors when scanning my site?

Security mechanisms on your website may have blocked our scanning robot. To ensure accurate and complete scans, please verify that our robot is not being blocked by any security settings, firewalls, or server configurations. Whitelisting our robot's IP address and adjusting your security settings can help prevent these errors. If the problem persists, please contact your hosting provider or check your website's security logs for more details.
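One quick way to check for this kind of blocking yourself is to send a request to your site with an unfamiliar User-Agent string and see whether the server still answers. The sketch below uses only Python's standard library; the User-Agent value is a hypothetical placeholder, not the scanner's actual identifier.

```python
import urllib.request
import urllib.error

def check_access(url, user_agent="ExampleScannerBot/1.0"):
    """Send a HEAD request with a bot-like User-Agent and report the
    HTTP status code, or None if the connection failed outright."""
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": user_agent}
    )
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code  # server answered, but with an error status (e.g. 403)
    except urllib.error.URLError:
        return None    # connection failed entirely
```

A status of 403 or a None result for URLs that load fine in your browser suggests a firewall or security rule is filtering automated clients.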

Why aren't some external websites being scanned?

Our global blacklist may exclude certain domains from being scanned. Some websites prefer not to be scanned and are configured to automatically reject scan requests, whether due to their own privacy policies, security measures, or a general preference for exclusion from external services. If you believe a website has been mistakenly added to our global blacklist, please do not hesitate to contact us so we can review and update the list accordingly.

How does the URL-specific blacklist function?

The URL-specific blacklist is a feature designed to give you more control over your website scanning process. By adding URLs to the blacklist, you can customize your scans and exclude any specific URL—internal or external—from being checked. This could be for reasons such as privacy, security, or simply to skip sections you do not need scanned. To streamline the process, you can use an asterisk '*' as a wildcard that matches any sequence of characters. For instance, a pattern such as 'https://example.com/blog/*' would exclude all subpages under that path, while a pattern such as 'https://example.com/*' would block the entire domain, omitting all links leading to it from scans. Remember, the blacklist is completely optional and fully customizable; you can update it at any time to fit your scanning needs.
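The wildcard behavior described above can be sketched with Python's standard glob-style matching. This is an illustration only — the domains are hypothetical and the service's exact matching rules may differ:

```python
from fnmatch import fnmatch

# Hypothetical blacklist entries using '*' as a glob-style wildcard.
blacklist = [
    "https://example.com/private/*",  # exclude all subpages under /private/
    "https://ads.example.net/*",      # exclude an entire external domain
]

def is_blacklisted(url, patterns=blacklist):
    """Return True if the URL matches any blacklist pattern."""
    return any(fnmatch(url, pattern) for pattern in patterns)
```

With these patterns, 'https://example.com/private/report.html' would be skipped during a scan, while 'https://example.com/public/index.html' would still be checked.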

How is the service funded if it is free?

The service began as a one-person project fueled by a passion for web development and a commitment to improving the web experience for everyone. As a hobbyist venture, it's not about profit but about providing a valuable service to fellow web enthusiasts. To help cover server costs and keep the website operating, we use Google AdSense, a program that allows us to earn revenue through discreet advertising. This keeps the service free for users like you while supporting ongoing maintenance and future development.

Why isn't my website being scanned?

If you notice your website isn't being scanned, it's important to ensure there are no barriers preventing our robot from accessing your site. Common obstacles include rules in your robots.txt file that disallow robotic access, or security features like Cloudflare that block automated scanning activity. Please check your website's configuration to ensure our robot is permitted to perform scans. If you need assistance with this, or believe your site should be accessible, feel free to reach out for support.
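As an illustration, a robots.txt rule granting a scanner access might look like the following. The User-agent value here is a hypothetical placeholder — substitute the scanner's actual identifier:

```
# Hypothetical robots.txt entry permitting a specific crawler.
User-agent: ExampleScannerBot
Allow: /
```

Rules in robots.txt apply to the most specific matching User-agent group, so a broad `Disallow: /` under `User-agent: *` elsewhere in the file will not override an explicit Allow group like this one.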