Compound term processing

Compound term processing is the name given to a category of techniques in information retrieval applications that perform matching on the basis of compound terms. Compound terms are built by combining two or more simple terms; for example, "triple" is a single-word term, but "triple heart bypass" is a compound term.

In August 2003, Concept Searching Limited introduced the idea of using statistical compound term processing in an article published in Information Management and Technology (Vol. 36, Part 4). A British Library Direct catalogue entry can be found here: [1].

The complete original article can also be downloaded from here: [2].

Further discussion of compound term processing can be found here: [3]. CLAMOUR is a European collaborative project which aims to find a better way of classifying industrial information and statistics when they are collected and disseminated. In contrast to the techniques discussed by Concept Searching Limited, CLAMOUR appears to be primarily a linguistic approach, rather than one based on statistical modelling. The final project report (dated March 2002) can be found here: [4].

Compound term processing is important because it allows search (and other information retrieval) applications to perform their matching on the basis of multi-word concepts, rather than on single words in isolation, which can be highly ambiguous.

Most search engines simply look for documents that contain the words the user enters into the search box (so-called "keyword search" engines). Boolean search engines add a degree of sophistication by allowing the user to specify additional requirements, but most users struggle to comprehend and use the necessary syntax (e.g. Tiger NEAR Woods AND (golf OR golfing) NOT Volkswagen). Phrase search is easier to understand, but can cause many useful documents to be missed if they do not contain the exact phrase specified.
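The trade-off between keyword search and phrase search can be illustrated with a minimal sketch (not the implementation of any particular engine) over a toy document collection:

```python
# Toy collection illustrating the two matching styles discussed above.
docs = {
    1: "tiger woods wins the golf tournament",
    2: "a tiger escaped near the woods",
    3: "golfing tips from tiger woods",
}

def keyword_match(query, text):
    # "Keyword search": every query word must appear somewhere in the text,
    # in any order and at any distance.
    return all(word in text.split() for word in query.split())

def phrase_match(query, text):
    # "Phrase search": the exact word sequence must appear.
    return f" {query} " in f" {text} "

query = "tiger woods"
print([d for d, t in docs.items() if keyword_match(query, t)])  # [1, 2, 3]
print([d for d, t in docs.items() if phrase_match(query, t)])   # [1, 3]
```

Keyword matching retrieves document 2 (about an animal, not the golfer), illustrating single-word ambiguity; phrase matching avoids that false hit, but would miss a relevant document that phrased the concept differently.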

Techniques for the probabilistic weighting of single-word terms date back to at least 1976 and the landmark publication by Stephen E. Robertson [5] and Karen Spärck Jones, "Relevance weighting of search terms", originally published in the Journal of the American Society for Information Science. [6] Robertson has stated that the assumption of word independence is not justified and exists simply as a matter of mathematical convenience. The objection to assumptions of term independence is not new, dating back to at least 1964, when J. H. Williams expressed it this way: "The assumption of independence of words in a document is usually made as a matter of mathematical convenience". [7]
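The single-term weighting scheme from the 1976 Robertson and Spärck Jones paper can be written down compactly. The sketch below implements the standard RSJ relevance weight with the usual 0.5 smoothing correction; the variable names are conventional textbook notation, not taken from the paper verbatim:

```python
import math

def rsj_weight(N, R, n, r):
    """Robertson-Sparck Jones relevance weight for a single term.

    N: total documents in the collection
    R: documents known to be relevant to the query
    n: documents containing the term
    r: relevant documents containing the term
    The 0.5 terms are the standard smoothing correction.
    """
    return math.log(((r + 0.5) * (N - n - R + r + 0.5)) /
                    ((n - r + 0.5) * (R - r + 0.5)))

# With no relevance information (R = r = 0) the weight reduces to an
# IDF-like quantity: log((N - n + 0.5) / (n + 0.5)).
print(rsj_weight(N=1000, R=0, n=50, r=0))
```

Note that the weight is computed independently per term, which is exactly the independence assumption Robertson and Williams flagged as a mathematical convenience.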

Compound term processing is a new approach to an old problem: how to improve the relevance of search results without missing anything important whilst maintaining ease of use. By forming compound (i.e. multi-word) terms and placing these in the search engine's index the search can be performed with a higher degree of accuracy because the ambiguity inherent in single words is no longer a problem. A search for survival rates following a triple heart bypass in elderly people will locate documents about this topic even if this precise phrase is not contained in any document. A concept search using "Compound Term Processing" can extract the key concepts automatically (in this case "survival rates", "triple heart bypass" and "elderly people") and use these to select the most relevant documents.
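Concept Searching's actual algorithm is proprietary and not described in detail, but one common statistical technique for finding candidate compound terms is to score adjacent word pairs by pointwise mutual information (PMI): pairs that co-occur far more often than their individual frequencies would predict are promoted to compound terms in the index. A minimal sketch, under that assumption:

```python
import math
from collections import Counter

def compound_candidates(tokens, min_pmi=1.0, min_count=2):
    """Score adjacent word pairs by pointwise mutual information (PMI).

    Pairs that co-occur far more often than chance are candidate
    compound terms for the search engine's index.
    """
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = len(tokens)
    candidates = []
    for (a, b), count in bigrams.items():
        if count < min_count:
            continue
        # PMI = log( P(a, b) / (P(a) * P(b)) ), using token counts as
        # a rough probability estimate (fine for a sketch).
        pmi = math.log((count / total) /
                       ((unigrams[a] / total) * (unigrams[b] / total)))
        if pmi >= min_pmi:
            candidates.append((f"{a} {b}", pmi))
    return sorted(candidates, key=lambda c: -c[1])

tokens = ("triple heart bypass surgery survival after heart bypass "
          "the patient had a heart bypass").split()
print(compound_candidates(tokens))  # "heart bypass" scores as a compound
```

In a real system the thresholds would be tuned on the corpus, longer compounds would be built by iterating the pairing step, and the resulting compound terms would be placed in the index alongside the simple terms.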

In 2004, Anna Lynn Patterson filed a number of patent applications on "phrase-based indexing and retrieval", to which Google subsequently acquired the rights. A full discussion of the patents can be found at Webmaster Woman. The patents themselves can be found online, for example: [8].

Statistical Compound Term Processing is more adaptive than the "phrase based indexing and retrieval" detailed by Anna Lynn Patterson in her patent applications. The "phrase based indexing" is targeted at searching the World Wide Web where an extensive statistical knowledge of common searches can be used to identify candidate phrases. Statistical Compound Term Processing is more suited to Enterprise Search applications where such a priori knowledge is not available.

Statistical Compound Term Processing is also more adaptive than the linguistic approach taken by the CLAMOUR project which considers the syntactic properties of the terms (part of speech, gender, number) and their combination. CLAMOUR is highly language dependent, whereas the statistical approach is language independent.

References

  1. [1] British Library Direct catalogue entry
  2. [2] "Lateral Thinking in Information Retrieval"
  3. [3] National Statistics CLAMOUR project
  4. [4] CLAMOUR Final Report
  5. Stephen E. Robertson, http://www.soi.city.ac.uk/~ser/homepage.html
  6. [5] "Relevance weighting of search terms"
  7. Williams, J. H., "Results of classifying documents with multiple discriminant functions", in Statistical Association Methods for Mechanized Documentation, National Bureau of Standards, Washington, pp. 217-224 (1965).
  8. [6] US Patent application 20060031195

Wikimedia Foundation. 2010.
