Configure your website's robots.txt file easily

Core tip: robots.txt is an important file; every website should have a properly configured robots.txt.

Before search engine spiders crawl your website's files, they first read your site's robots.txt file to learn what you permit them to crawl: which files and which directories. So where does the robots.txt file go, and how do you configure it?

Let's look at an example:

# Robots.txt File Start
# Exclude Files From All Robots:
User-agent: *
Disallow: /Admin_login/
Disallow: /Admin/
Disallow: /Admin.htm
Disallow: /admin.aspx
# End Robots.txt File

Lines that begin with a # are comments, included to make the file easier to read.

User-agent names the search engine spider that the rules below it apply to; the * used here means the rules apply to all spiders.
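You can also target a single spider by name instead of all of them. A minimal sketch (Googlebot is the name Google's spider identifies itself with; the /Admin/ path is just an illustration):

# Keep Google's spider out of /Admin/; other spiders are unrestricted.
User-agent: Googlebot
Disallow: /Admin/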

Disallow means the spider may not crawl the directory or file that follows it; each Disallow line defines one path that is off limits.
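The reverse also holds: a Disallow line with nothing after it forbids nothing, so a file like the following opens the whole site to every spider:

# Allow all spiders to crawl the entire site.
User-agent: *
Disallow: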

Once you have written this file, save it in the root directory of your website (it must sit in the root directory, though you can of course adjust its contents at any time), so search engines can find it.
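In other words, spiders expect the file at a fixed URL directly under your domain, something like this (example.com stands in for your own domain):

http://www.example.com/robots.txt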

If you have no robots.txt file, your website's access log will show the spiders' failed attempts to fetch it.
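Such a failed fetch typically shows up as a 404 response. A hypothetical entry, assuming an Apache-style combined log format (the IP address, date, and byte count are made up for illustration):

66.249.66.1 - - [10/Oct/2008:13:55:36 +0800] "GET /robots.txt HTTP/1.1" 404 208 "-" "Googlebot/2.1 (+http://www.google.com/bot.html)"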

Good. Now go and set up your own robots.txt file.