Search Engine Optimization (SEO)
Spiders, Robots and Crawlers
A spider (also called a robot or crawler) is a software program that automatically fetches web pages and feeds them to a search engine. It is called a spider because it crawls across the web.
Because most web pages contain links to other pages, a spider can start almost anywhere: whenever it encounters a link to another page, it follows it and fetches that page too.
This process is called crawling or spidering. Search engines use spidering to keep their data up to date and their results relevant. Web crawlers are mainly used to create a copy of every visited page for later processing by the search engine, which indexes the downloaded pages to provide fast searches.