Web harvesting

Web harvesting is an implementation of a Web crawler that uses human expertise or machine guidance to direct the crawler to URLs which compose a specialized collection or set of knowledge. Web harvesting can be thought of as focused or directed Web crawling.

Purpose

Web harvesting allows Web-based search and retrieval applications, commonly referred to as search engines, to index content that is pertinent to the audience for which the harvest is intended. Such content is thus virtually integrated and made searchable as a separate Web application. General-purpose search engines, such as Google and Yahoo!, index all possible links they encounter from the origin of their crawl. In contrast, search engines based on Web harvesting index only the URLs to which they are directed. This strategy yields a searchable application that is faster, because the index is smaller, and that returns higher-quality, more selective results, because the indexed URLs are pre-filtered for the topic or domain of interest. In effect, harvesting makes otherwise isolated islands of information searchable as if they were an integrated whole.

Process

Web harvesting begins with a list of URLs that defines a specialized collection or body of knowledge, identified in advance and supplied as input to a computer program. The program then downloads the pages at those URLs. Embedded hyperlinks that are encountered can be either followed or ignored, depending on human or machine guidance. A key difference between Web harvesting and general-purpose Web crawling is that in Web harvesting the crawl depth is defined in advance, and the crawl need not recursively follow URLs until all links have been exhausted. The downloaded content is then indexed by the search engine application and offered to information customers as a searchable Web application. Information customers can access and search the Web application and follow hyperlinks to the original URLs that meet their search criteria.
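The process above can be sketched as a depth-bounded, breadth-first crawl over a seed list. This is only an illustrative sketch: the fetch_links callable and the toy in-memory link graph below are assumptions standing in for real HTTP fetching and HTML link extraction, which a production harvester would handle with an HTTP client and parser.

```python
from collections import deque


def harvest(seeds, fetch_links, max_depth):
    """Depth-bounded breadth-first crawl starting from a seed URL list.

    fetch_links(url) must return the hyperlinks embedded in the page at url.
    Returns the set of URLs downloaded for indexing.
    """
    harvested = set()
    queue = deque((url, 0) for url in seeds)
    while queue:
        url, depth = queue.popleft()
        if url in harvested:
            continue
        harvested.add(url)
        # Crawl depth is defined up front: links beyond max_depth are ignored
        # rather than followed until exhaustion.
        if depth < max_depth:
            for link in fetch_links(url):
                queue.append((link, depth + 1))
    return harvested


# Hypothetical in-memory link graph standing in for pages on the Web.
GRAPH = {
    "a": ["b", "c"],
    "b": ["d"],
    "c": [],
    "d": ["e"],
    "e": [],
}

if __name__ == "__main__":
    print(sorted(harvest(["a"], lambda u: GRAPH.get(u, []), max_depth=1)))
```

With max_depth=1 the crawl downloads the seed and its direct links but goes no further, which is the "defined crawl depth" the section describes.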

Focused web harvesting

Focused web harvesting is similar to targeted web crawling. Instead of letting a general-purpose crawler harvest the web, the mechanism operates under pre-defined conditions that specify which information to collect[1][2]. In particular, this mechanism is intended to realize an indirect form of data integration. An implementation of this kind of data integration can be found at the Indonesian Scientific Index (ISI)[3], which integrates all information related to science and technology in Indonesia.
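One way to realize the pre-defined conditions is a relevance predicate applied to each downloaded page: off-topic pages are neither indexed nor followed. The sketch below is an assumption about how such a condition might be wired in, not the mechanism of the cited papers; fetch_page, is_relevant, and the toy PAGES data are all hypothetical stand-ins.

```python
from collections import deque


def focused_harvest(seeds, fetch_page, is_relevant, max_depth):
    """Harvest only pages that satisfy a pre-defined relevance condition.

    fetch_page(url) -> (text, links); is_relevant(text) -> bool.
    Links are followed only from pages judged relevant.
    Returns a dict mapping harvested URLs to their text.
    """
    harvested = {}
    seen = set()
    queue = deque((url, 0) for url in seeds)
    while queue:
        url, depth = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        text, links = fetch_page(url)
        # Pre-defined condition: skip off-topic pages entirely.
        if not is_relevant(text):
            continue
        harvested[url] = text
        if depth < max_depth:
            for link in links:
                queue.append((link, depth + 1))
    return harvested


# Hypothetical pages: (text, embedded links).
PAGES = {
    "s": ("science portal", ["p1", "p2"]),
    "p1": ("physics research", ["p3"]),
    "p2": ("cooking recipes", ["p4"]),
    "p3": ("science data", []),
    "p4": ("sports news", []),
}

topic = lambda text: "science" in text or "physics" in text

if __name__ == "__main__":
    print(sorted(focused_harvest(["s"], PAGES.get, topic, max_depth=2)))
```

Because "cooking recipes" fails the condition, neither that page nor anything it links to enters the harvest, keeping the resulting index selective.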

References

  1. L.T. Handoko, "A new approach for scientific data dissemination in developing countries: a case of Indonesia", Earth, Moon, and Planets 104 (2009) 331.
  2. Z. Akbar and L.T. Handoko, "A Simple Mechanism for Focused Web-harvesting", Proceedings of the International Conference on Advanced Computational Intelligence and Its Applications, arXiv:0809.0723 (2008).
  3. Indonesian Scientific Index
