Latent semantic analysis
Latent semantic analysis (LSA) is a technique in natural language processing, in particular in vectorial semantics, for analyzing relationships between a set of documents and the terms they contain by producing a set of concepts related to the documents and terms. LSA was patented in 1988 ([http://patft.uspto.gov/netacgi/nph-Parser?patentnumber=4839853 US Patent 4,839,853]) by Scott Deerwester, Susan Dumais, George Furnas, Richard Harshman, Thomas Landauer, Karen Lochbaum and Lynn Streeter. In the context of its application to information retrieval, it is sometimes called latent semantic indexing (LSI).

Occurrence matrix
LSA can use a term-document matrix which describes the occurrences of terms in documents; it is a sparse matrix whose rows correspond to terms (typically stemmed words that appear in the documents) and whose columns correspond to documents. A typical example of the weighting of the elements of the matrix is tf-idf (term frequency–inverse document frequency): the element of the matrix is proportional to the number of times the term appears in each document, where rare terms are upweighted to reflect their relative importance.

This matrix is also common to standard semantic models, though it is not necessarily explicitly expressed as a matrix, since the mathematical properties of matrices are not always used.
LSA transforms the occurrence matrix into a relation between the terms and some "concepts", and a relation between those concepts and the documents. Thus the terms and documents are now indirectly related through the concepts.
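As an illustration, here is a minimal sketch (Python with NumPy; the toy corpus, the variable names, and the particular tf-idf weighting variant are assumptions made for the example, not part of any standard LSA implementation) of building such a weighted term-document matrix:

<syntaxhighlight lang="python">
import numpy as np

# Toy corpus: each document is a list of (already stemmed) tokens.
docs = [
    ["car", "truck", "road"],
    ["car", "engine", "truck"],
    ["flower", "garden", "rose"],
]

# Vocabulary: rows of the matrix correspond to terms, columns to documents.
terms = sorted({t for d in docs for t in d})
index = {t: i for i, t in enumerate(terms)}

# Raw term-document count matrix X (m terms x n documents).
X = np.zeros((len(terms), len(docs)))
for j, d in enumerate(docs):
    for t in d:
        X[index[t], j] += 1

# tf-idf weighting: term frequency scaled by inverse document frequency,
# so that rare terms are upweighted relative to common ones.
tf = X / X.sum(axis=0, keepdims=True)    # term frequency within each document
df = (X > 0).sum(axis=1, keepdims=True)  # number of documents containing each term
idf = np.log(len(docs) / df)             # inverse document frequency
X_tfidf = tf * idf
</syntaxhighlight>

Several tf-idf variants exist (raw counts, log-scaled counts, smoothed idf); the one above is only one common choice.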
Applications
The new concept space can typically be used to:
* Compare the documents in the concept space (data clustering, document classification).
* Find similar documents across languages, after analyzing a base set of translated documents (cross-language retrieval).
* Find relations between terms (synonymy and polysemy).
* Given a query of terms, translate it into the concept space, and find matching documents (information retrieval).

Synonymy and polysemy are fundamental problems in natural language processing:
* Synonymy is the phenomenon where different words describe the same idea. Thus, a query in a search engine may fail to retrieve a relevant document that does not contain the words which appeared in the query. For example, a search for "doctors" may not return a document containing the word "physicians", even though the words have the same meaning.
* Polysemy is the phenomenon where the same word has multiple meanings. So a search may retrieve irrelevant documents containing the desired words in the wrong meaning. For example, a botanist and a computer scientist looking for the word "tree" probably desire different sets of documents.

Rank lowering
After the construction of the occurrence matrix, LSA finds a low-rank approximation to the term-document matrix. There could be various reasons for this approximation:

* The original term-document matrix is presumed too large for the computing resources; in this case, the approximated low-rank matrix is interpreted as an approximation (a "least and necessary evil").
* The original term-document matrix is presumed "noisy": for example, anecdotal instances of terms are to be eliminated. From this point of view, the approximated matrix is interpreted as a "de-noisified matrix" (a better matrix than the original).
* The original term-document matrix is presumed overly sparse relative to the "true" term-document matrix. That is, the original matrix lists only the words actually "in" each document, whereas we might be interested in all words "related to" each document, generally a much larger set due to synonymy.

The consequence of the rank lowering is that some dimensions are combined and depend on more than one term:

:: {(car), (truck), (flower)} --> {(1.3452 * car + 0.2828 * truck), (flower)}
This mitigates synonymy, as the rank lowering is expected to merge the dimensions associated with terms that have similar meanings. It also mitigates polysemy, since components of polysemous words that point in the "right" direction are added to the components of words that share a similar meaning. Conversely, components that point in other directions tend to either simply cancel out, or, at worst, to be smaller than components in the directions corresponding to the intended sense.
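To make the dimension merging concrete, here is a small sketch (Python with NumPy; the toy matrix and the resulting weights are invented for illustration and do not reproduce the numbers in the example above):

<syntaxhighlight lang="python">
import numpy as np

# Toy term-document count matrix: rows = terms (car, truck, flower),
# columns = documents. "car" and "truck" co-occur, "flower" does not.
terms = ["car", "truck", "flower"]
X = np.array([
    [2.0, 1.0, 0.0, 0.0],   # car
    [1.0, 2.0, 0.0, 0.0],   # truck
    [0.0, 0.0, 1.0, 2.0],   # flower
])

# Full SVD, then keep only the k largest singular values (rank lowering).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Each retained dimension is a weighted combination of the original terms;
# here the first left singular vector mixes "car" and "truck" into one dimension.
for dim in range(k):
    mix = ", ".join(f"{U[t, dim]:+.3f} * {terms[t]}" for t in range(len(terms)))
    print(f"concept {dim}: {mix}")
</syntaxhighlight>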
Derivation
Let <math>X</math> be a matrix where element <math>(i,j)</math> describes the occurrence of term <math>i</math> in document <math>j</math> (this can be, for example, the frequency). <math>X</math> will look like this:

:<math>
X =
\begin{bmatrix}
x_{1,1} & \dots & x_{1,n} \\
\vdots & \ddots & \vdots \\
x_{m,1} & \dots & x_{m,n}
\end{bmatrix}
</math>

Now a row in this matrix will be a vector corresponding to a term, giving its relation to each document:

:<math>\textbf{t}_i^T = \begin{bmatrix} x_{i,1} & \dots & x_{i,n} \end{bmatrix}</math>

Likewise, a column in this matrix will be a vector corresponding to a document, giving its relation to each term:

:<math>\textbf{d}_j = \begin{bmatrix} x_{1,j} \\ \vdots \\ x_{m,j} \end{bmatrix}</math>
Now the dot product <math>\textbf{t}_i^T \textbf{t}_p</math> between two term vectors gives the correlation between the terms over the documents. The matrix product <math>X X^T</math> contains all these dot products. Element <math>(i,p)</math> (which is equal to element <math>(p,i)</math>) contains the dot product <math>\textbf{t}_i^T \textbf{t}_p</math> (<math>= \textbf{t}_p^T \textbf{t}_i</math>). Likewise, the matrix <math>X^T X</math> contains the dot products between all the document vectors, giving their correlation over the terms: <math>\textbf{d}_j^T \textbf{d}_q = \textbf{d}_q^T \textbf{d}_j</math>.

Now assume that there exists a decomposition of <math>X</math> such that <math>U</math> and <math>V</math> are orthonormal matrices and <math>\Sigma</math> is a diagonal matrix. This is called a singular value decomposition (SVD):

:<math>X = U \Sigma V^T</math>

The matrix products giving us the term and document correlations then become

:<math>
\begin{align}
X X^T &= (U \Sigma V^T) (U \Sigma V^T)^T = U \Sigma V^T V \Sigma^T U^T = U \Sigma \Sigma^T U^T \\
X^T X &= (U \Sigma V^T)^T (U \Sigma V^T) = V \Sigma^T U^T U \Sigma V^T = V \Sigma^T \Sigma V^T
\end{align}
</math>

Since <math>\Sigma \Sigma^T</math> and <math>\Sigma^T \Sigma</math> are diagonal we see that <math>U</math> must contain the eigenvectors of <math>X X^T</math>, while <math>V</math> must contain the eigenvectors of <math>X^T X</math>. Both products have the same non-zero eigenvalues, given by the non-zero entries of <math>\Sigma \Sigma^T</math>, or equally, by the non-zero entries of <math>\Sigma^T \Sigma</math>. Now the decomposition looks like this:

:<math>
X = U \Sigma V^T =
\begin{bmatrix} \textbf{u}_1 & \dots & \textbf{u}_l \end{bmatrix}
\begin{bmatrix} \sigma_1 & & \\ & \ddots & \\ & & \sigma_l \end{bmatrix}
\begin{bmatrix} \textbf{v}_1^T \\ \vdots \\ \textbf{v}_l^T \end{bmatrix}
</math>

The values <math>\sigma_1, \dots, \sigma_l</math> are called the singular values, and <math>\textbf{u}_1, \dots, \textbf{u}_l</math> and <math>\textbf{v}_1, \dots, \textbf{v}_l</math> the left and right singular vectors. Notice how the only part of <math>U</math> that contributes to <math>\textbf{t}_i</math> is the <math>i</math>-th row. Let this row vector be called <math>\hat{\textbf{t}}_i</math>. Likewise, the only part of <math>V^T</math> that contributes to <math>\textbf{d}_j</math> is the <math>j</math>-th column, <math>\hat{\textbf{d}}_j</math>. These are "not" the eigenvectors, but "depend" on "all" the eigenvectors.
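The relationships above can be checked numerically; here is a short sketch (Python with NumPy, on random toy data, assuming the thin form of the SVD):

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((5, 4))            # toy 5-term x 4-document matrix

# Thin singular value decomposition: X = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# The columns of U are eigenvectors of X X^T, the rows of Vt (columns of V)
# are eigenvectors of X^T X, and the shared non-zero eigenvalues are the
# squared singular values.
eigvals_term = np.linalg.eigvalsh(X @ X.T)        # eigenvalues of X X^T
assert np.allclose(np.sort(eigvals_term)[::-1][: len(s)], s**2)

# X X^T = U (Sigma Sigma^T) U^T
assert np.allclose(X @ X.T, U @ np.diag(s**2) @ U.T)
</syntaxhighlight>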
It turns out that when you select the <math>k</math> largest singular values, and their corresponding singular vectors from <math>U</math> and <math>V</math>, you get the rank-<math>k</math> approximation to <math>X</math> with the smallest error (Frobenius norm). The remarkable thing about this approximation is that not only does it have a minimal error, but it translates the term and document vectors into a concept space. The vector <math>\hat{\textbf{t}}_i</math> then has <math>k</math> entries, each giving the occurrence of term <math>i</math> in one of the <math>k</math> concepts. Likewise, the vector <math>\hat{\textbf{d}}_j</math> gives the relation between document <math>j</math> and each concept. We write this approximation as

:<math>X_k = U_k \Sigma_k V_k^T</math>
You can now do the following:

* See how related documents <math>j</math> and <math>q</math> are in the concept space by comparing the vectors <math>\hat{\textbf{d}}_j</math> and <math>\hat{\textbf{d}}_q</math> (typically by cosine similarity). This gives you a clustering of the documents.
* Compare terms <math>i</math> and <math>p</math> by comparing the vectors <math>\hat{\textbf{t}}_i</math> and <math>\hat{\textbf{t}}_p</math>, giving you a clustering of the terms in the concept space.
* Given a query, view this as a mini document, and compare it to your documents in the concept space (see the sketch after the equations below).

To do the latter, you must first translate your query into the concept space. It is then intuitive that you must use the same transformation that you use on your documents:
:<math>\textbf{d}_j = U_k \Sigma_k \hat{\textbf{d}}_j</math>

:<math>\hat{\textbf{d}}_j = \Sigma_k^{-1} U_k^T \textbf{d}_j</math>

This means that if you have a query vector <math>\textbf{q}</math>, you must do the translation <math>\hat{\textbf{q}} = \Sigma_k^{-1} U_k^T \textbf{q}</math> before you compare it with the document vectors in the concept space. You can do the same for pseudo term vectors:

:<math>\textbf{t}_i^T = \hat{\textbf{t}}_i^T \Sigma_k V_k^T</math>

:<math>\hat{\textbf{t}}_i^T = \textbf{t}_i^T V_k \Sigma_k^{-1}</math>

:<math>\hat{\textbf{t}}_i = \Sigma_k^{-1} V_k^T \textbf{t}_i</math>
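Putting the pieces together, here is a rough sketch (Python with NumPy; the toy matrix and query are invented for the example) of folding a query into the concept space and ranking documents by cosine similarity:

<syntaxhighlight lang="python">
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents).
X = np.array([
    [2.0, 0.0, 1.0],
    [1.0, 0.0, 0.0],
    [0.0, 3.0, 1.0],
    [0.0, 2.0, 0.0],
])

U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
U_k, S_k, Vt_k = U[:, :k], np.diag(s[:k]), Vt[:k, :]

# Document representations in the concept space: the columns of V_k^T,
# i.e. d_hat_j = Sigma_k^{-1} U_k^T d_j for documents in the collection.
doc_concepts = Vt_k                       # shape (k, n_docs)

# A query is treated as a mini document: q_hat = Sigma_k^{-1} U_k^T q.
q = np.array([1.0, 1.0, 0.0, 0.0])        # query containing terms 0 and 1
q_hat = np.linalg.inv(S_k) @ U_k.T @ q

# Rank documents by cosine similarity to the folded-in query.
def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = [cosine(q_hat, doc_concepts[:, j]) for j in range(X.shape[1])]
print(scores)   # in this toy example, document 0 scores highest
</syntaxhighlight>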
Implementation
The SVD is typically computed using large matrix methods (for example, Lanczos methods) but may also be computed incrementally and with greatly reduced resources via a neural network-like approach, which does not require the large, full-rank matrix to be held in memory ([http://www.dcs.shef.ac.uk/~genevieve/gorrell_webb.pdf Gorrell and Webb, 2005]).

A fast, incremental, low-memory, large-matrix SVD algorithm has recently been developed ([http://www.merl.com/publications/TR2006-059/ Brand, 2006]). Unlike Gorrell and Webb's (2005) stochastic approximation, Brand's (2006) algorithm provides an exact solution.
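For large sparse matrices one usually computes only the leading singular triplets rather than the full SVD. Below is a sketch using SciPy's ARPACK-based (Lanczos-type) partial SVD on toy random data; the matrix dimensions and k are arbitrary choices for the example:

<syntaxhighlight lang="python">
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds     # ARPACK-based (Lanczos-type) partial SVD

# Toy sparse term-document matrix: 10,000 terms x 2,000 documents, ~0.1% filled.
X = sparse_random(10_000, 2_000, density=0.001, format="csr", random_state=0)

# Compute only the k largest singular triplets instead of the full SVD.
k = 100
U_k, s_k, Vt_k = svds(X, k=k)

# svds may return the singular values in ascending order; sort them descending.
order = np.argsort(s_k)[::-1]
U_k, s_k, Vt_k = U_k[:, order], s_k[order], Vt_k[order, :]

# Documents already in the collection have concept-space coordinates Vt_k.
doc_concepts = Vt_k

# A new document (dense term-count vector) can be folded into the same space:
d_new = np.zeros(10_000)
d_new[[5, 42, 977]] = 1.0
d_hat = (U_k.T @ d_new) / s_k            # equivalent to Sigma_k^{-1} U_k^T d_new
</syntaxhighlight>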
Limitations
LSA has two drawbacks:
* The resulting dimensions might be difficult to interpret. For instance, in

:: {(car), (truck), (flower)} --> {(1.3452 * car + 0.2828 * truck), (flower)}

:the (1.3452 * car + 0.2828 * truck) component could be interpreted as "vehicle". However, it is very likely that cases close to

:: {(car), (bottle), (flower)} --> {(1.3452 * car + 0.2828 * bottle), (flower)}

:will occur. This leads to results which can be justified on the mathematical level, but have no interpretable meaning in natural language.
* The probabilistic model of LSA does not match observed data: LSA assumes that words and documents form a joint Gaussian model (ergodic hypothesis), while a Poisson distribution has been observed. Thus, a newer alternative is probabilistic latent semantic analysis, based on a multinomial model, which is reported to give better results than standard LSA.

Commercial applications
LSA has been used to assist in performing prior art searches for patents. [http://www.liebertonline.com/doi/abs/10.1089/blr.2007.9896 Gerry Elman, "Automated Patent Examination Support - A proposal", Biotechnology Law Report, October 2007]

See also
* Vectorial semantics
* DSIR model
* Latent Dirichlet allocation
* Spamdexing
* [http://blog.targetwoman.com/latent-semantic-analysis/ An example of the application of Latent Semantic Analysis in Natural Language Processing]
* Probabilistic latent semantic analysis
* Latent semantic mapping
* Latent Semantic Structure Indexing
* Principal components analysis
* Compound term processing

External links
* [http://www.seobook.com/lsi/lsa_definition.htm Latent Semantic Indexing], a non-mathematical introduction and explanation of LSI
* [http://knowledgesearch.org/ The Semantic Indexing Project] , an open source program for latent semantic indexing
* [http://senseclusters.sourceforge.net SenseClusters] , an open source package for Latent Semantic Analysis and other methods for clustering similar contexts
References
* [http://lsa.colorado.edu/ The Latent Semantic Indexing home page].
* Matthew Brand (2006). "[http://www.merl.com/publications/TR2006-059/ Fast Low-Rank Modifications of the Thin Singular Value Decomposition]". Linear Algebra and Its Applications 415: 20–30. doi:10.1016/j.laa.2005.07.021. A [http://www.eecs.umich.edu/~wingated/resources.html MATLAB implementation of Brand's algorithm] is available.
* Thomas Landauer, P. W. Foltz and D. Laham (1998). "[http://lsa.colorado.edu/papers/dp1.LSAintro.pdf Introduction to Latent Semantic Analysis]". Discourse Processes 25: 259–284.
* S. Deerwester, Susan Dumais, G. W. Furnas, T. K. Landauer and R. Harshman (1990). "[http://lsi.research.telcordia.com/lsi/papers/JASIS90.pdf Indexing by Latent Semantic Analysis]". Journal of the American Society for Information Science 41 (6): 391–407. doi:10.1002/(SICI)1097-4571(199009)41:6<391::AID-ASI1>3.0.CO;2-9. Original article where the model was first presented.
* Michael Berry, S. T. Dumais and G. W. O'Brien (1995). "[http://citeseer.ist.psu.edu/berry95using.html Using Linear Algebra for Intelligent Information Retrieval]" ([http://lsirwww.epfl.ch/courses/dis/2003ws/papers/ut-cs-94-270.pdf PDF]). Illustration of the application of LSA to document retrieval.
* [http://iv.slis.indiana.edu/sw/lsa.html Latent Semantic Analysis]. InfoVis.
* T. Hofmann (1999). "[http://www.cs.brown.edu/people/th/papers/Hofmann-UAI99.pdf Probabilistic Latent Semantic Analysis]". Uncertainty in Artificial Intelligence.
* G. Gorrell and B. Webb (2005). "[http://www.dcs.shef.ac.uk/~genevieve/gorrell_webb.pdf Generalized Hebbian Algorithm for Latent Semantic Analysis]". Interspeech.
* Fridolin Wild (November 23, 2005). "[http://cran.at.r-project.org/web/packages/lsa/index.html An Open Source LSA Package for R]". CRAN. Retrieved 2006-11-20.
* Thomas Landauer. "[http://www.welchco.com/02/14/01/60/96/02/2901.HTM A Solution to Plato's Problem: The Latent Semantic Analysis Theory of Acquisition, Induction, and Representation of Knowledge]". Retrieved 2007-07-02.
* Dimitrios Zeimpekis and E. Gallopoulos (September 11, 2005). "[http://scgroup.hpclab.ceid.upatras.gr/scgroup/Projects/TMG/ A MATLAB Toolbox for generating term-document matrices from text collections]". Retrieved 2006-11-20.