Optimizing the use of robots.txt

The robots.txt file has a limited function: it cannot persuade a spider to spend more time on your site or visit more of your pages. But you can still use the file to apply some optimizations to how spiders handle your website.

1. Every time a user requests a URL that does not exist, the server records a 404 error (file not found) in its log. The server records the same 404 error every time a spider looks for a robots.txt file that isn't there. So you should place a robots.txt file under the website's root directory, even if it is just an empty robots.txt file.
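Even an empty file stops the repeated 404s, but a minimal robots.txt usually looks like this (standard directives; this version blocks nothing and simply allows all crawlers everywhere):

```
User-agent: *
Disallow:
```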

2. Keeping spider programs away from certain directories on the server helps protect server performance. Preventing all of your script files from being indexed by spiders saves server resources.
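For example, a robots.txt that keeps crawlers out of script and admin directories might look like this (the directory names here are illustrative, not from the original article; substitute your own):

```
User-agent: *
Disallow: /cgi-bin/
Disallow: /admin/
```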

3. You can also include the link to your Sitemap file directly in the robots.txt file, like this:

Sitemap: http:///Sitemap.xml (this declaration also has a certain effect on Baidu)

Google Webmaster Tools includes a robots.txt analysis tool that can help you check whether your robots.txt successfully blocks the Google spider from visiting specific pages, and whether the file contains any syntax errors.

1. https:///Webmasters/tools/

After signing in, select the website you want to analyze, then choose the "Analyze robots.txt" tool.

2. You will then see basic information about your website's robots.txt file.

3. You can also test a robots.txt file you have written yourself: enter the robots.txt rules you wrote along with the site URLs you want to test (including the addresses you intend to block), and verify that the rules produce no errors and behave as intended.
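You can run the same kind of check locally with Python's standard urllib.robotparser module. The rules, user agent, and URLs below are illustrative assumptions, not output from the Google tool:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content to test (example rules only).
rules = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Ask whether a generic crawler may fetch specific URLs.
print(parser.can_fetch("*", "http://example.com/private/page.html"))  # False (blocked)
print(parser.can_fetch("*", "http://example.com/public/page.html"))   # True (allowed)
```

This mirrors what the webmaster tool does: the parser reads your rules and answers allow/deny per URL, so you can confirm that the addresses you intend to block really are blocked.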

The robots.txt I use:

Sitemap: http:///Sitemap.xml

User-agent: *

Disallow: /wp-content/

robots.txt syntax reference: http:///Search/robots.html