A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner; this process is called Web crawling or spidering. Other terms for Web crawlers are ants, automatic indexers, bots, Web spiders, Web robots, or, especially in the FOAF community, Web scutters. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine, which indexes the downloaded pages to provide fast searches. Crawlers can also be used to automate maintenance tasks on a Web site, such as checking links or validating HTML code, and to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for sending spam).
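The fetch-parse-enqueue loop described above can be made concrete with a short sketch. The following is a minimal breadth-first crawler using only the Python standard library; the seed URL, page limit, and politeness delay are illustrative assumptions, not details from the text, and a production crawler would additionally honor robots.txt (e.g. via urllib.robotparser) and handle character encodings properly.

```python
# Minimal sketch of a breadth-first Web crawler, assuming a seed URL,
# a page budget, and a fixed politeness delay (all hypothetical values).
import time
import urllib.request
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collects href targets from anchor tags on a fetched page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    # Resolve relative links against the page's own URL.
                    self.links.append(urljoin(self.base_url, value))

def crawl(seed, max_pages=20, delay=1.0):
    """Fetch a page, store a copy for later indexing, enqueue unseen links."""
    frontier = deque([seed])
    visited = set()
    pages = {}  # url -> raw HTML: the "copy of all visited pages"
    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue  # a link-checking crawler would log these failures
        pages[url] = html
        parser = LinkExtractor(url)
        parser.feed(html)
        for link in parser.links:
            if urlparse(link).scheme in ("http", "https") and link not in visited:
                frontier.append(link)
        time.sleep(delay)  # crude politeness: avoid hammering the server
    return pages

if __name__ == "__main__":
    for url in crawl("https://example.com", max_pages=5):
        print(url)
```

The same skeleton covers the maintenance uses mentioned above: a link checker records the failed fetches instead of skipping them, and an address harvester scans each stored page for patterns instead of indexing it.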