SEO Spiders, Robots and Crawlers


A spider (also called a robot or crawler) is a software program that automatically fetches web pages. Spiders feed those pages to search engines; the name comes from the way the program crawls across the web.

Because most web pages contain links to other pages, a spider can start almost anywhere. As soon as it sees a link to another page, it goes off and fetches that page too.

This process is called crawling or spidering. Search engines use spidering to provide up-to-date data and relevant search results. Web crawlers are mainly used to create a copy of every visited page for later processing by the search engine, which then indexes the downloaded pages to provide fast searches.
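As a rough sketch of that loop (the seed URL https://example.com and the 10-page limit are placeholders, and a real search-engine crawler would also respect robots.txt, crawl delays and re-crawl schedules), a breadth-first crawl can be written in a few lines of Python:

    # Minimal breadth-first crawler sketch: fetch a page, pull out its
    # links, queue them for later visits, and stop after a small limit.
    from collections import deque
    from html.parser import HTMLParser
    from urllib.parse import urljoin
    from urllib.request import urlopen

    class LinkExtractor(HTMLParser):
        """Collects the href value of every <a> tag on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(value for name, value in attrs
                                  if name == "href" and value)

    def crawl(seed, max_pages=10):
        queue, visited = deque([seed]), set()
        while queue and len(visited) < max_pages:
            url = queue.popleft()
            if url in visited:
                continue
            try:
                html = urlopen(url, timeout=10).read().decode("utf-8", "replace")
            except OSError:
                continue  # skip pages that cannot be fetched
            visited.add(url)
            # A real search engine would store and index the page here.
            parser = LinkExtractor()
            parser.feed(html)
            queue.extend(urljoin(url, link) for link in parser.links)
        return visited

    crawl("https://example.com")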

Spiders work by following links, so it is essential that your links are visible to them and not hidden inside Flash or JavaScript code (see Common Mistakes). A spider only traverses the web’s “hypertext” structure.
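To see why hidden links are a problem, the small sketch below (using the standard library’s html.parser and two made-up page fragments) shows what a link extractor actually finds: the plain anchor is reported, while a navigation that exists only inside JavaScript is invisible to it.

    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Collects the href value of every <a> tag."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                self.links.extend(v for k, v in attrs if k == "href" and v)

    visible = '<a href="/products.html">Products</a>'
    hidden = '<span onclick="window.location=\'/products.html\'">Products</span>'

    parser = LinkExtractor()
    parser.feed(visible + hidden)
    print(parser.links)  # ['/products.html'] - only the plain anchor is seen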

Spiders can’t see anything inside JavaScript, graphical elements or Flash. Where possible, keep JavaScript to a minimum by placing it in a separate script file, and use appropriate ALT text for your images.
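As a quick illustration (the file names below are invented), a check like the following lists <img> tags with no ALT text, which is all a crawler has to go on when “reading” an image:

    from html.parser import HTMLParser

    class AltTextChecker(HTMLParser):
        """Records the src of every <img> tag that lacks ALT text."""
        def __init__(self):
            super().__init__()
            self.missing = []

        def handle_starttag(self, tag, attrs):
            if tag == "img":
                attrs = dict(attrs)
                if not attrs.get("alt"):
                    self.missing.append(attrs.get("src", "(no src)"))

    page = '<img src="logo.png" alt="Acme Widgets logo"><img src="banner.png">'
    checker = AltTextChecker()
    checker.feed(page)
    print(checker.missing)  # ['banner.png'] - images the crawler cannot describe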
