The robots.txt file is a simple plain-text (no HTML) file placed in your site's root directory to tell search engine crawlers which pages to index and which to skip. Crawling instructions are given so that web robots understand which pages to crawl and which to leave alone.
Basic instructions are given to web robots in the following format:
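User-agent: [user-agent name]
Disallow: [URL string not to be crawled]

Together these two lines form one rule block; a robots.txt file can contain several such blocks, one per user agent.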
For example, for the www.xyz.com site:
Robots.txt file URL: www.xyz.com/robots.txt
Blocking all web crawlers from all content
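In standard robots.txt syntax, that rule looks like this (the asterisk matches every user agent, and the lone slash disallows every path):

User-agent: *
Disallow: /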
Using this code in a robots.txt file would tell all web crawlers not to crawl any pages on www.xyz.com, including the homepage.
Allowing all web crawlers access to all content
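Here the Disallow directive is left empty, so no path is blocked:

User-agent: *
Disallow: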
Using this code in a robots.txt file tells web crawlers to crawl all pages on www.xyz.com, including the homepage.
How does robots.txt work?
Search engines have two main jobs:
1. Crawling the web to discover content;
2. Indexing that content so it can be served up to searchers who are looking for information.
To crawl sites, search engines follow links to get from one site to the next, ultimately crawling across many billions of links and websites. This crawling behavior is sometimes known as “spidering.”
After arriving at a site but before spidering it, the search crawler will look for a robots.txt file. If it finds one, the crawler reads that file first before continuing through the site. Because the robots.txt file contains information about how the search engine should crawl, what the crawler finds there will instruct its further action on that particular site. If the robots.txt file does not contain any directives that disallow a user agent's activity (or if the site doesn't have a robots.txt file at all), the crawler will proceed to crawl the rest of the site.
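To make that concrete, here is a minimal sketch of the check a well-behaved crawler performs before fetching a page, using Python's standard urllib.robotparser module (the bot name "example-bot" and the page URL are placeholders, not part of the original article):

from urllib.robotparser import RobotFileParser

# Fetch and parse the site's robots.txt before crawling anything else.
rp = RobotFileParser()
rp.set_url("https://www.xyz.com/robots.txt")
rp.read()  # downloads and parses the file; a missing file means no restrictions

# Ask whether this user agent is allowed to fetch a given page.
page = "https://www.xyz.com/some-page"
if rp.can_fetch("example-bot", page):
    print("Allowed to crawl:", page)   # proceed with the request
else:
    print("Disallowed by robots.txt:", page)  # skip this URL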
5 things robots.txt does for SEO performance –