
Spider

Also known as a bot, robot, or crawler. A spider is a program used by a search engine to explore the World Wide Web in an automated manner: it downloads the HTML content (not including graphics) from web sites, strips out whatever it considers superfluous or redundant, and stores the rest in a database (i.e., its index).

Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which indexes the downloaded pages to provide fast searches. Crawlers can also be used to automate maintenance tasks on a web site, such as checking links or validating HTML code. They can also be used to gather specific types of information from web pages, such as harvesting e-mail addresses (usually for spam).
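To illustrate the maintenance use case, here is a minimal broken-link checker sketch using only the Python standard library. The URLs, the check_links name, and the timeout are illustrative assumptions, not part of any particular crawler's implementation.

```python
"""Minimal link-checker sketch (standard library only; URLs are illustrative)."""
from urllib.error import HTTPError, URLError
from urllib.request import Request, urlopen


def check_links(urls):
    """Report the HTTP status of each URL, flagging unreachable ones."""
    for url in urls:
        # A HEAD request asks for headers only, avoiding a full download.
        request = Request(url, method="HEAD")
        try:
            with urlopen(request, timeout=5) as response:
                print(f"{response.status}  {url}")
        except HTTPError as err:
            print(f"{err.code}  {url}  (broken)")
        except URLError as err:
            print(f"ERR  {url}  ({err.reason})")


if __name__ == "__main__":
    check_links(["https://example.com/", "https://example.com/missing"])
```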

A web crawler is one type of bot, or software agent. In general, a crawler starts with a list of seed URLs to visit. As it visits each URL, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, recursively browsing the Web according to a set of crawl policies.
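The following sketch shows that seed-list-plus-hyperlink-extraction loop in Python, using only the standard library. The seed URL, the max_pages limit, and the function names are hypothetical; a real crawler would also honor robots.txt, rate-limit requests, and apply its crawl policies when deciding which links to enqueue.

```python
"""Minimal breadth-first web-crawler sketch (standard library only)."""
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag encountered in a page."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL.
                    self.links.append(urljoin(self.base_url, value))


def crawl(seed_urls, max_pages=10):
    """Visit URLs, harvest their hyperlinks, and enqueue the new ones."""
    to_visit = list(seed_urls)
    visited = set()
    index = {}  # URL -> raw HTML; a stand-in for the search engine's index

    while to_visit and len(visited) < max_pages:
        url = to_visit.pop(0)
        if url in visited:
            continue
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except Exception:
            continue  # Skip unreachable pages; a real crawler would log this.
        visited.add(url)
        index[url] = html

        parser = LinkExtractor(url)
        parser.feed(html)
        # A crawl policy would also filter by domain, depth, robots.txt, etc.
        to_visit.extend(link for link in parser.links if link not in visited)

    return index


if __name__ == "__main__":
    pages = crawl(["https://example.com/"], max_pages=3)
    print(f"Fetched {len(pages)} page(s)")
```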

In short, a spider is a robot sent out by search engines to catalog websites on the internet. When a spider indexes a particular website, the site is said to have been 'spidered'.