Check Whether Your Site's URLs Can Be Indexed with the Robots.txt Tester

Robots.txt is an important file that allows or denies search engines access to specific pages on your site.

By default, Google can crawl all of your URLs. However, if we don't want Googlebot to crawl specific pages, we use the "Disallow" rule in this format:

{codecitation}Disallow: /dont-scan-this-url/{/codecitation}
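A Disallow line applies to the User-agent group it belongs to, so in a real robots.txt file the rule usually sits under a User-agent line. Here is a minimal sketch of such a group (the path is only an illustration):

{codecitation}User-agent: *
Disallow: /dont-scan-this-url/{/codecitation}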

In this post, I'll show you how to check whether a URL is blocked or allowed for Googlebot.
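Besides Google's Robots.txt Tester in Search Console, you can also run a quick programmatic check. Below is a small sketch using Python's standard urllib.robotparser module; the domain and path are placeholders, and note that this parser follows the basic robots.txt rules rather than every Google-specific extension:

{codecitation}from urllib.robotparser import RobotFileParser

# Point the parser at the site's robots.txt (placeholder domain)
parser = RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

# Ask whether Googlebot may fetch a given URL
url = "https://www.example.com/dont-scan-this-url/"
if parser.can_fetch("Googlebot", url):
    print(url, "is allowed for Googlebot")
else:
    print(url, "is blocked by robots.txt"){/codecitation}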


