Web archiving
Web archiving is the process of collecting portions of the World Wide Web and ensuring the collection is preserved in an archive, such as an archive site, for future researchers, historians, and the public. Due to the massive size of the Web, web archivists typically employ web crawlers for automated collection. The largest web archiving organization based on a crawling approach is the Internet Archive, which strives to maintain an archive of the entire Web. National libraries, national archives and various consortia of organizations are also involved in archiving culturally important Web content. Commercial web archiving software and services are also available to organizations that need to archive their own web content for legal or regulatory purposes.
Collecting the Web
Web archivists generally archive all types of web content including HTML web pages, style sheets, JavaScript, images, and video. They also archive metadata about the collected resources such as access time, MIME type, and content length.
Methods of collection
The most common web archiving technique uses web crawlers to automate the process of collecting web pages. Web crawlers typically view web pages in the same manner that users with a browser see the Web, and therefore provide a comparatively simple method of remotely harvesting web content (a minimal sketch of this approach appears after the list below). Examples of web crawlers frequently used for web archiving include:
* [http://www.metaproducts.com/mp/Offline_Explorer_Enterprise.htm Offline Explorer]
* [http://webcurator.sourceforge.net/ Web Curator]
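To illustrate the crawling approach, here is a minimal, hypothetical sketch in Python of a breadth-first crawler that fetches pages, keeps the raw bytes, and follows the links it finds. Production archiving crawlers such as Heritrix add politeness delays, robots.txt handling, and standardized WARC output; the seed URL and page cap below are illustrative assumptions.

    import urllib.request
    from urllib.parse import urljoin
    from html.parser import HTMLParser

    class LinkExtractor(HTMLParser):
        """Collect the href targets of anchor tags on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == 'a':
                for name, value in attrs:
                    if name == 'href' and value:
                        self.links.append(value)

    def crawl(seed, max_pages=10):
        """Breadth-first crawl from a seed URL, keeping each page's raw bytes."""
        queue, seen, archived = [seed], {seed}, {}
        while queue and len(archived) < max_pages:  # page cap guards against crawler traps
            url = queue.pop(0)
            try:
                with urllib.request.urlopen(url) as response:
                    html = response.read()
            except OSError:
                continue                            # skip unreachable pages
            archived[url] = html                    # archive the raw bitstream
            extractor = LinkExtractor()
            extractor.feed(html.decode('utf-8', errors='replace'))
            for link in extractor.links:
                absolute = urljoin(url, link)
                if absolute.startswith('http') and absolute not in seen:
                    seen.add(absolute)
                    queue.append(absolute)
        return archived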
There are numerous services that may be used to archive web resources "on-demand", using web crawling techniques:
* WebCite, a service specifically for scholarly authors, journal editors and publishers to permanently archive and retrieve cited Internet references (Eysenbach and Trudel, 2005).
* [http://www.archive-it.org/ Archive-It], a subscription service that allows institutions to build, manage and search their own web archive.
* [http://www.hanzoarchives.com/ Hanzo Archives] offers commercial web archiving tools and services, implementing an archive policy for web content and enabling electronic discovery, litigation support or regulatory compliance.
Database archiving refers to methods for archiving the underlying content of database-driven websites. It typically requires the extraction of the database content into a standard schema, often using XML. Once stored in that standard format, the archived content of multiple databases can then be made available using a single access system. This approach is exemplified by the [http://deeparc.sourceforge.net/ DeepArc] and [http://www.nla.gov.au/xinq/ Xinq] tools developed by the Bibliothèque nationale de France and the National Library of Australia respectively. DeepArc enables the structure of a relational database to be mapped to an XML schema, and the content exported into an XML document. Xinq then allows that content to be delivered online. Although the original layout and behavior of the website cannot be preserved exactly, Xinq does allow the basic querying and retrieval functionality to be replicated.
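As a rough illustration of the general idea (not DeepArc's actual implementation), the Python sketch below exports every row of a relational table into a simple XML document; the SQLite file name, table name, and element names are hypothetical assumptions.

    import sqlite3
    import xml.etree.ElementTree as ET

    # Hypothetical source: a SQLite database with a 'records' table.
    conn = sqlite3.connect('site.db')
    cur = conn.execute('SELECT * FROM records')
    columns = [d[0] for d in cur.description]

    # Map each row to an XML element so multiple databases can share one schema.
    root = ET.Element('table', name='records')
    for row in cur:
        row_el = ET.SubElement(root, 'row')
        for col, value in zip(columns, row):
            field = ET.SubElement(row_el, 'field', name=col)
            field.text = '' if value is None else str(value)

    ET.ElementTree(root).write('records.xml', encoding='utf-8', xml_declaration=True)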
Transactional archiving is an event-driven approach, which collects the actual transactions which take place between a web server and a web browser. It is primarily used as a means of preserving evidence of the content which was actually viewed on a particular website, on a given date. This may be particularly important for organizations which need to comply with legal or regulatory requirements for disclosing and retaining information.
A transactional archiving system typically operates by intercepting every HTTP request to, and response from, the web server, filtering each response to eliminate duplicate content, and permanently storing the responses as bitstreams. A transactional archiving system requires the installation of software on the web server, and cannot therefore be used to collect content from a remote website.
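A minimal sketch of this interception idea, written as Python WSGI middleware (commercial products operate at the web-server level and differ in detail): each response body is stored once, keyed by its content hash to eliminate duplicates, and an index log records which URL returned which bitstream and when. The archive directory layout is a hypothetical assumption.

    import hashlib
    import os
    import time

    class TransactionalArchiver:
        """WSGI middleware sketch: store each distinct response body once,
        keyed by content hash, and log every URL/bitstream/timestamp triple."""

        def __init__(self, app, archive_dir='archive'):
            self.app = app
            self.archive_dir = archive_dir
            os.makedirs(archive_dir, exist_ok=True)

        def __call__(self, environ, start_response):
            body = b''.join(self.app(environ, start_response))
            digest = hashlib.sha256(body).hexdigest()
            path = os.path.join(self.archive_dir, digest)
            if not os.path.exists(path):        # filter out duplicate content
                with open(path, 'wb') as f:
                    f.write(body)               # permanently store the bitstream
            with open(os.path.join(self.archive_dir, 'index.log'), 'a') as log:
                log.write('%f\t%s\t%s\n'
                          % (time.time(), environ.get('PATH_INFO', ''), digest))
            return [body]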
Examples of commercial transactional archiving software include:
* [http://www.projectcomputing.com/products/pageVault/ PageVault]
* [http://www.vignette.com/portal/site/us/menuitem.dcbb524431151aaa32189210180141a0/?vgnextoid=1e0395338521b010VgnVCM1000005610140aRCRD&vgnext-selected-menuitem=4b09bdd80b8ff1e8fb3d8010180141a0 Vignette WebCapture]
Difficulties and limitations
Web archives which rely on web crawling as their primary means of collecting the Web are influenced by the difficulties of web crawling:
* The robots exclusion protocol may request that crawlers not access portions of a website (a sketch of a compliant robots.txt check appears after this list). Some web archivists may ignore the request and crawl those portions anyway.
* Large portions of a web site may be hidden in the deep Web. For example, the results page behind a web form lies in the deep Web because a crawler cannot follow a link to the results page.
* Some web servers may return a different page for a web crawler than it would for a regular browser request. This is typically done to fool search engines into sending more traffic to a website.
* Crawler traps (e.g., calendars) may cause a crawler to download an infinite number of pages, so crawlers are usually configured to limit the number of dynamic pages they crawl.
* The Web is so large that crawling a significant portion of it takes a large amount of technical resources.
* The Web is changing so fast that portions of a website may change before a crawler has even finished crawling it.
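For the robots exclusion protocol mentioned above, a crawler that honours the protocol can use Python's standard urllib.robotparser to decide whether a URL may be fetched; the site and user-agent name below are hypothetical.

    from urllib.robotparser import RobotFileParser

    # Hypothetical site and crawler user-agent.
    rp = RobotFileParser()
    rp.set_url('http://example.org/robots.txt')
    rp.read()

    if rp.can_fetch('MyArchiveBot', 'http://example.org/private/page.html'):
        print('robots.txt permits fetching this page')
    else:
        print('robots.txt asks crawlers to stay out')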
Not only must web archivists deal with the technical challenges of web archiving, they must also contend with intellectual property laws. Peter Lyman (2002) states that "although the Web is popularly regarded as a public domain resource, it is copyrighted; thus, archivists have no legal right to copy the Web". However, national libraries in many countries do have a legal right to copy portions of the web under an extension of a legal deposit.
Some private non-profit web archives that are made publicly accessible, like WebCite or the Internet Archive, allow content owners to hide or remove archived content that they do not want the public to have access to. Other web archives are only accessible from certain locations or have regulated usage. WebCite also cites on its [http://www.webcitation.org/faq FAQ] a recent lawsuit against the caching mechanism, which Google won.
Aspects of Web curation
Web curation, like any digital curation, entails:
*Collecting verifiable Web assets
*Providing Web asset search and retrieval
*Certification of the trustworthiness and integrity of the collection content
*Semantic and ontological continuity and comparability of the collection content
Thus, besides the discussion on methods of collecting the Web, those of providing access, certification, and organizing must be included. A set of popular tools addresses these curation steps:
A suite of tools for Web curation from the International Internet Preservation Consortium:
* [http://crawler.archive.org/ Heritrix] - collecting Web assets
* [http://archive-access.sourceforge.net/projects/nutch/ NutchWAX] - search Web archive collections
* [http://archive-access.sourceforge.net/projects/wayback/ Wayback (open source Wayback Machine)] - search and navigate Web archive collections using NutchWAX
* [http://webcurator.sourceforge.net/manuals.shtml Web Curator Tool] - selection and management of Web collections
Other open source tools for manipulating web archives:
* [http://code.google.com/p/warc-tools/ WARC Tools] - for creating, reading, parsing and manipulating web archives programmatically (see the reading sketch after this list)
* [http://code.google.com/p/search-tools/ Search Tools] - for indexing and searching full-text and metadata within web archives
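As an illustration of programmatic access to web archives, the sketch below uses the open-source Python library warcio (a different tool from those listed above, installable via pip) to read a WARC file, the standard container format for crawled content; the filename is a hypothetical assumption.

    from warcio.archiveiterator import ArchiveIterator  # pip install warcio

    # Print the target URL and size of every archived HTTP response
    # in a (hypothetical) WARC file.
    with open('crawl.warc.gz', 'rb') as stream:
        for record in ArchiveIterator(stream):
            if record.rec_type == 'response':
                url = record.rec_headers.get_header('WARC-Target-URI')
                body = record.content_stream().read()
                print(url, len(body), 'bytes')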
See also
* Library of Congress Digital Library project
* National Digital Information Infrastructure and Preservation Program
* UK Web Archiving Consortium
External links
* [http://www.netpreserve.org/ International Internet Preservation Consortium (IIPC)] - International consortium whose mission is to acquire, preserve, and make accessible knowledge and information from the Internet for future generations
* [http://www.iwaw.net/ International Web Archiving Workshop (IWAW)] - Annual workshop that focuses on web archiving
* [http://www.loc.gov/library/libarch-digital.html The Library of Congress, Digital Collections and Programs]
* [http://www.loc.gov/webcapture/ Library of Congress, Web Capture]
* [http://www.ifs.tuwien.ac.at/~aola/links/WebArchiving.html Web archiving bibliography] - Lengthy list of web-archiving resources
* [http://listes.cru.fr/sympa/info/web-archive Web archiving discussion list] - Used for discussing the technical, legal, and organizational aspects of web archiving
* [http://www.webarchivist.org/ WebArchivist] - Researchers that work with scholars, librarians, and archivists interested in preserving and analyzing Web resources
* Web archiving programmes:
** [http://www.boa-bw.de/ Baden-Württemberg Online Archive]
** [http://govinfo.library.unt.edu/ CyberCemetery]
** [http://www.sino.uni-heidelberg.de/dachs/ Digital Archive of Chinese Studies]
** [http://www.europarchive.org/ European Archive]
** [http://www.hanzoarchives.com/ Hanzo Archives]
** [http://www.archive.org/ Internet Archive]
** [http://www.digitalpreservation.gov The Library of Congress, National Digital Information Infrastructure and Preservation Program]
** [http://www.loc.gov/minerva/ Minerva]
** [http://netarchive.dk/ netarchive.dk]
** [http://pandora.nla.gov.au/ Pandora]
** [http://www.nationalarchives.gov.uk/preservation/archivedwebsites.htm UK Government Web Archive]
** [http://www.webarchive.org.uk/ UK Web Archiving Consortium]
** [http://warp.ndl.go.jp/ WARP]
** [http://www.webcitation.org WebCite]