Latent semantic analysis

Latent semantic analysis (LSA) is a technique in natural language processing, in particular in vectorial semantics, for analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms.

LSA was patented in 1988 ([http://patft.uspto.gov/netacgi/nph-Parser?patentnumber=4839853 US Patent 4,839,853]) by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer, Karen Lochbaum and Lynn Streeter. In the context of its application to information retrieval, it is sometimes called latent semantic indexing (LSI).

Occurrence matrix

LSA can use a term-document matrix which describes the occurrences of terms in documents; it is a sparse matrix whose rows correspond to terms (typically stemmed words that appear in the documents) and whose columns correspond to documents. A typical example of the weighting of the elements of the matrix is tf-idf (term frequency–inverse document frequency): the element of the matrix is proportional to the number of times the term appears in the document, with rare terms upweighted to reflect their relative importance.
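
As an illustration (not part of the original text), the following minimal sketch builds a tf-idf weighted term-document matrix with scikit-learn; the toy documents and variable names are our assumptions, and get_feature_names_out assumes a recent scikit-learn release.

    # Minimal sketch: build a tf-idf term-document matrix (terms as rows,
    # documents as columns). Toy documents; assumes scikit-learn is installed.
    from sklearn.feature_extraction.text import TfidfVectorizer

    docs = [
        "the doctor examined the patient",
        "the physician treated the patient",
        "the tree shaded the garden",
    ]

    vectorizer = TfidfVectorizer()                    # tokenizes and applies tf-idf weighting
    X_docs_by_terms = vectorizer.fit_transform(docs)  # sparse, shape (n_docs, n_terms)

    # LSA conventionally puts terms on rows and documents on columns,
    # so transpose the scikit-learn output.
    X = X_docs_by_terms.T                             # sparse, shape (n_terms, n_docs)
    print(vectorizer.get_feature_names_out())
    print(X.toarray())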

This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used.

LSA transforms the occurrence matrix into a relation between the terms and some "concepts", and a relation between those concepts and the documents. Thus the terms and documents are now indirectly related through the concepts.

Applications

The new concept space typically can be used to:
* Compare the documents in the concept space (data clustering, document classification).
* Find similar documents across languages, after analyzing a base set of translated documents (cross language retrieval).
* Find relations between terms (synonymy and polysemy).
* Given a query of terms, translate it into the concept space, and find matching documents (information retrieval).

Synonymy and polysemy are fundamental problems in natural language processing:
* Synonymy is the phenomenon where different words describe the same idea. Thus, a query in a search engine may fail to retrieve a relevant document that does not contain the words which appeared in the query. For example, a search for "doctors" may not return a document containing the word "physicians", even though the words have the same meaning.
* Polysemy is the phenomenon where the same word has multiple meanings. So a search may retrieve irrelevant documents containing the desired words in the wrong meaning. For example, a botanist and a computer scientist looking for the word "tree" probably desire different sets of documents.

Rank lowering

After the construction of the occurrence matrix, LSA finds a low-rank approximation to the term-document matrix. There can be various reasons for this approximation:

* The original term-document matrix is presumed too large for the computing resources; in this case, the approximated low-rank matrix is interpreted as an approximation (a "necessary evil").
* The original term-document matrix is presumed "noisy": for example, anecdotal instances of terms are to be eliminated. From this point of view, the approximated matrix is interpreted as a "de-noisified matrix" (a better matrix than the original).
* The original term-document matrix is presumed overly sparse relative to the "true" term-document matrix. That is, the original matrix lists only the words actually "in" each document, whereas we might be interested in all words "related to" each document--generally a much larger set due to synonymy.

The consequence of the rank lowering is that some dimensions are combined and depend on more than one term:

:: {(car), (truck), (flower)} --> {(1.3452 * car + 0.2828 * truck), (flower)}

This mitigates synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also mitigates polysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend to either simply cancel out, or, at worst, to be smaller than components in the directions corresponding to the intended sense.

Derivation

Let X be a matrix where element (i,j) describes the occurrence of term i in document j (this can be, for example, the frequency). X will look like this:

:\begin{matrix} & \textbf{d}_j \\ & \downarrow \\ \textbf{t}_i^T \rightarrow & \begin{bmatrix} x_{1,1} & \dots & x_{1,n} \\ \vdots & \ddots & \vdots \\ x_{m,1} & \dots & x_{m,n} \end{bmatrix} \end{matrix}

Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document:

:\textbf{t}_i^T = \begin{bmatrix} x_{i,1} & \dots & x_{i,n} \end{bmatrix}

Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term:

:\textbf{d}_j = \begin{bmatrix} x_{1,j} \\ \vdots \\ x_{m,j} \end{bmatrix}

Now the dot product \textbf{t}_i^T \textbf{t}_p between two term vectors gives the correlation between the terms over the documents. The matrix product X X^T contains all these dot products. Element (i,p) (which is equal to element (p,i)) contains the dot product \textbf{t}_i^T \textbf{t}_p ( = \textbf{t}_p^T \textbf{t}_i ). Likewise, the matrix X^T X contains the dot products between all the document vectors, giving their correlation over the terms: \textbf{d}_j^T \textbf{d}_q = \textbf{d}_q^T \textbf{d}_j.
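
As a small illustration (our toy numbers, not from the text), the following numpy sketch forms both correlation matrices; the comment about shared non-zero eigenvalues anticipates the SVD argument below.

    # Sketch: with terms as rows and documents as columns, X @ X.T holds all
    # term-term dot products and X.T @ X all document-document dot products.
    import numpy as np

    X = np.array([[1., 1., 0., 0.],   # term 1 across 4 documents (toy counts)
                  [1., 0., 1., 0.],   # term 2
                  [0., 0., 0., 2.]])  # term 3

    term_correlations = X @ X.T       # element (i, p) is t_i^T t_p
    doc_correlations = X.T @ X        # element (j, q) is d_j^T d_q

    # Both products share the same non-zero eigenvalues: the squared singular
    # values of X (this is the fact exploited by the SVD derivation below).
    print(np.linalg.eigvalsh(term_correlations))
    print(np.linalg.eigvalsh(doc_correlations))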

Now assume that there exists a decomposition of X such that U and V are orthogonal matrices and \Sigma is a diagonal matrix. This is called a singular value decomposition (SVD):

:X = U \Sigma V^T

The matrix products giving us the term and document correlations then become

:\begin{matrix} X X^T &=& (U \Sigma V^T) (U \Sigma V^T)^T = (U \Sigma V^T) (V^{T^T} \Sigma^T U^T) = U \Sigma V^T V \Sigma^T U^T = U \Sigma \Sigma^T U^T \\ X^T X &=& (U \Sigma V^T)^T (U \Sigma V^T) = (V^{T^T} \Sigma^T U^T) (U \Sigma V^T) = V \Sigma^T U^T U \Sigma V^T = V \Sigma^T \Sigma V^T \end{matrix}

Since \Sigma \Sigma^T and \Sigma^T \Sigma are diagonal, we see that U must contain the eigenvectors of X X^T, while V must contain the eigenvectors of X^T X. Both products have the same non-zero eigenvalues, given by the non-zero entries of \Sigma \Sigma^T, or equally, by the non-zero entries of \Sigma^T \Sigma. Now the decomposition looks like this:

:\begin{matrix} & X & & & U & & \Sigma & & V^T \\ & (\textbf{d}_j) & & & & & & & (\hat{\textbf{d}}_j) \\ & \downarrow & & & & & & & \downarrow \\ (\textbf{t}_i^T) \rightarrow & \begin{bmatrix} x_{1,1} & \dots & x_{1,n} \\ \vdots & \ddots & \vdots \\ x_{m,1} & \dots & x_{m,n} \end{bmatrix} & = & (\hat{\textbf{t}}_i^T) \rightarrow & \begin{bmatrix} \begin{bmatrix} \, \\ \, \\ \textbf{u}_1 \\ \, \\ \, \end{bmatrix} \dots \begin{bmatrix} \, \\ \, \\ \textbf{u}_l \\ \, \\ \, \end{bmatrix} \end{bmatrix} & \cdot & \begin{bmatrix} \sigma_1 & \dots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \dots & \sigma_l \end{bmatrix} & \cdot & \begin{bmatrix} \begin{bmatrix} & & \textbf{v}_1 & & \end{bmatrix} \\ \vdots \\ \begin{bmatrix} & & \textbf{v}_l & & \end{bmatrix} \end{bmatrix} \end{matrix}

The values \sigma_1, \dots, \sigma_l are called the singular values, and u_1, \dots, u_l and v_1, \dots, v_l the left and right singular vectors. Notice how the only part of U that contributes to \textbf{t}_i is the i\textrm{th} row. Let this row vector be called \hat{\textbf{t}}_i. Likewise, the only part of V^T that contributes to \textbf{d}_j is the j\textrm{th} column, \hat{\textbf{d}}_j. These are "not" the eigenvectors, but "depend" on "all" the eigenvectors.

It turns out that when you select the k largest singular values, and their corresponding singular vectors from U and V, you get the rank k approximation to X with the smallest error (Frobenius norm). The amazing thing about this approximation is that not only does it have a minimal error, but it translates the term and document vectors into a concept space. The vector \hat{\textbf{t}}_i then has k entries, each giving the occurrence of term i in one of the k concepts. Likewise, the vector \hat{\textbf{d}}_j gives the relation between document j and each concept. We write this approximation as

:X_k = U_k \Sigma_k V_k^T
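
As a sketch of this step (our toy matrix and variable names, not part of the original derivation), the rank-k truncation and the concept-space vectors can be computed with numpy as follows: row i of U_k plays the role of \hat{\textbf{t}}_i and column j of V_k^T the role of \hat{\textbf{d}}_j.

    # Sketch: rank-k truncation X_k = U_k Sigma_k V_k^T and the concept-space
    # coordinates of terms and documents (toy term-document counts).
    import numpy as np

    X = np.array([[1., 1., 0., 0.],   # "car"
                  [1., 0., 1., 0.],   # "truck"
                  [0., 0., 0., 2.]])  # "flower"

    U, s, Vt = np.linalg.svd(X, full_matrices=False)

    k = 2                             # number of retained concepts
    U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

    X_k = U_k @ S_k @ Vt_k            # best rank-k approximation in Frobenius norm

    term_hat = U_k                    # row i: term i in the k-dimensional concept space
    doc_hat = Vt_k.T                  # row j: document j in the concept space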

You can now do the following:
* See how related documents j and q are in the concept space by comparing the vectors \hat{\textbf{d}}_j and \hat{\textbf{d}}_q (typically by cosine similarity). This gives you a clustering of the documents.
* Compare terms i and p by comparing the vectors \hat{\textbf{t}}_i and \hat{\textbf{t}}_p, giving you a clustering of the terms in the concept space.
* Given a query, view it as a mini document, and compare it to your documents in the concept space.

To do the latter, you must first translate your query into the concept space. It is then intuitive that you must use the same transformation that you use on your documents:

:\textbf{d}_j = U_k \Sigma_k \hat{\textbf{d}}_j

:\hat{\textbf{d}}_j = \Sigma_k^{-1} U_k^T \textbf{d}_j

This means that if you have a query vector q, you must do the translation \hat{\textbf{q}} = \Sigma_k^{-1} U_k^T \textbf{q} before you compare it with the document vectors in the concept space. You can do the same for pseudo term vectors:

:\textbf{t}_i^T = \hat{\textbf{t}}_i^T \Sigma_k V_k^T

:\hat{\textbf{t}}_i^T = \textbf{t}_i^T V_k^{-T} \Sigma_k^{-1} = \textbf{t}_i^T V_k \Sigma_k^{-1}

:\hat{\textbf{t}}_i = \Sigma_k^{-1} V_k^T \textbf{t}_i
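
Continuing the toy example above (again our own sketch, not the article's code), a query can be folded into the concept space with \hat{\textbf{q}} = \Sigma_k^{-1} U_k^T \textbf{q} and then compared to the documents by cosine similarity:

    # Sketch: fold a raw query vector into the concept space and rank the
    # documents by cosine similarity (same toy matrix as the previous sketch).
    import numpy as np

    X = np.array([[1., 1., 0., 0.],   # "car"
                  [1., 0., 1., 0.],   # "truck"
                  [0., 0., 0., 2.]])  # "flower"
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    k = 2
    U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    q = np.array([1., 1., 0.])                 # raw query counts over the three terms
    q_hat = np.linalg.inv(S_k) @ U_k.T @ q     # q folded into the concept space

    doc_hat = Vt_k.T                           # row j is the concept vector of document j
    scores = [cosine(q_hat, d) for d in doc_hat]
    print(sorted(range(len(scores)), key=lambda j: -scores[j]))  # most similar first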

Implementation

The SVD is typically computed using large matrix methods (for example, Lanczos methods) but may also be computed incrementally and with greatly reduced resources via a neural network-like approach, which does not require the large, full-rank matrix to be held in memory ([http://www.dcs.shef.ac.uk/~genevieve/gorrell_webb.pdf Gorrell and Webb, 2005]).

A fast, incremental, low-memory, large-matrix SVD algorithm has also been developed ([http://www.merl.com/publications/TR2006-059/ Brand, 2006]). Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's (2006) algorithm provides an exact solution.
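
As a hedged note on tooling (our suggestion, not claimed by the sources above): for large sparse term-document matrices only the k largest singular triplets are needed, and these can be obtained with a Lanczos-style sparse solver such as scipy's svds. The matrix below is random and purely illustrative.

    # Sketch: compute only the k largest singular triplets of a sparse
    # term-document matrix (assumes scipy is installed).
    import numpy as np
    from scipy.sparse import random as sparse_random
    from scipy.sparse.linalg import svds

    X_sparse = sparse_random(10000, 2000, density=0.001, random_state=0).tocsr()

    k = 100
    U_k, s_k, Vt_k = svds(X_sparse, k=k)       # ARPACK/Lanczos; k largest triplets
    order = np.argsort(s_k)[::-1]              # svds returns singular values ascending
    U_k, s_k, Vt_k = U_k[:, order], s_k[order], Vt_k[order, :]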

Limitations

LSA has two drawbacks:

* The resulting dimensions might be difficult to interpret. For instance, in
:: {(car), (truck), (flower)} --> {(1.3452 * car + 0.2828 * truck), (flower)}
: the (1.3452 * car + 0.2828 * truck) component could be interpreted as "vehicle". However, it is very likely that cases close to
:: {(car), (bottle), (flower)} --> {(1.3452 * car + 0.2828 * bottle), (flower)}
: will occur. This leads to results which can be justified on the mathematical level, but have no interpretable meaning in natural language.

* The probabilistic model of LSA does not match observed data: LSA assumes that words and documents form a joint Gaussian model (ergodic hypothesis), while a Poisson distribution has been observed. Thus, a newer alternative is probabilistic latent semantic analysis, based on a multinomial model, which is reported to give better results than standard LSA.

Commercial applications

LSA has been used to assist in performing prior art searches for patents. ([http://www.liebertonline.com/doi/abs/10.1089/blr.2007.9896 Gerry Elman, "Automated Patent Examination Support - A proposal", Biotechnology Law Report, October 2007])

See also

* Vectorial semantics
* DSIR model
* Latent Dirichlet allocation
* Spamdexing
* Probabilistic latent semantic analysis
* Latent semantic mapping
* Latent Semantic Structure Indexing
* Principal components analysis
* Compound term processing

External links

* [http://www.seobook.com/lsi/lsa_definition.htm Latent Semantic Indexing], a non-mathematical introduction and explanation of LSI
* [http://knowledgesearch.org/ The Semantic Indexing Project], an open source program for latent semantic indexing
* [http://senseclusters.sourceforge.net SenseClusters], an open source package for Latent Semantic Analysis and other methods for clustering similar contexts

References

* [http://lsa.colorado.edu/ The Latent Semantic Indexing home page]
* Matthew Brand (2006). "[http://www.merl.com/publications/TR2006-059/ Fast Low-Rank Modifications of the Thin Singular Value Decomposition]". Linear Algebra and Its Applications 415: 20–30. doi:10.1016/j.laa.2005.07.021. A [http://www.eecs.umich.edu/~wingated/resources.html MATLAB implementation of Brand's algorithm] is available.
* Thomas Landauer, P. W. Foltz and D. Laham (1998). "[http://lsa.colorado.edu/papers/dp1.LSAintro.pdf Introduction to Latent Semantic Analysis]". Discourse Processes 25: 259–284.
* S. Deerwester, Susan Dumais, G. W. Furnas, T. K. Landauer and R. Harshman (1990). "[http://lsi.research.telcordia.com/lsi/papers/JASIS90.pdf Indexing by Latent Semantic Analysis]". Journal of the American Society for Information Science 41 (6): 391–407. doi:10.1002/(SICI)1097-4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9. Original article where the model was first exposed.
* Michael Berry, S. T. Dumais and G. W. O'Brien (1995). "[http://citeseer.ist.psu.edu/berry95using.html Using Linear Algebra for Intelligent Information Retrieval]" ([http://lsirwww.epfl.ch/courses/dis/2003ws/papers/ut-cs-94-270.pdf PDF]). Illustration of the application of LSA to document retrieval.
* [http://iv.slis.indiana.edu/sw/lsa.html Latent Semantic Analysis]. InfoVis.
* T. Hofmann (1999). "[http://www.cs.brown.edu/people/th/papers/Hofmann-UAI99.pdf Probabilistic Latent Semantic Analysis]". Uncertainty in Artificial Intelligence.
* G. Gorrell and B. Webb (2005). "[http://www.dcs.shef.ac.uk/~genevieve/gorrell_webb.pdf Generalized Hebbian Algorithm for Latent Semantic Analysis]". Interspeech.
* Fridolin Wild (November 23, 2005). [http://cran.at.r-project.org/web/packages/lsa/index.html An Open Source LSA Package for R]. CRAN. Retrieved 2006-11-20.
* Thomas Landauer. [http://www.welchco.com/02/14/01/60/96/02/2901.HTM A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge]. Retrieved 2007-07-02.
* Dimitrios Zeimpekis and E. Gallopoulos (September 11, 2005). [http://scgroup.hpclab.ceid.upatras.gr/scgroup/Projects/TMG/ A MATLAB Toolbox for generating term-document matrices from text collections]. Retrieved 2006-11-20.

