By default, Google can crawl every URL on your site. However, if you don't want Google to crawl specific pages, you can use a robots.txt file.
In your robots.txt file, you can ask Google not to crawl certain pages by using a "Disallow" rule:
{codecitation}User-agent: *
Disallow: /dont-scan-this-url/{/codecitation}
In this post, I'll show you how to use Google Search Console to check whether you have successfully blocked Google from crawling a particular URL.
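If you want a quick local check before opening Search Console, Python's standard urllib.robotparser module can tell you whether a robots.txt rule blocks a given URL. This is a minimal sketch, and the site address and path are placeholders for your own:

{codecitation}# Check whether a URL is blocked by a site's robots.txt file.
# A minimal sketch using Python's standard library; example.com and
# the test path below are placeholders for your own site and URL.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetches and parses the live robots.txt

# can_fetch() returns False when the rules block this URL
# for the given user agent (here, Googlebot).
blocked = not parser.can_fetch("Googlebot", "https://example.com/dont-scan-this-url/")
print("Blocked from crawling:", blocked){/codecitation}

If this prints True, the Disallow rule is matching the URL as intended; Search Console will confirm what Google itself sees.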