Multilinear principal-component analysis

Multilinear principal-component analysis (MPCA) [1] is a mathematical procedure that uses multiple orthogonal transformations to convert a set of multidimensional objects into another set of multidimensional objects of lower dimensions. There is one orthogonal transformation for each dimension (mode), and each transformation aims to account for as much of the variability in the data as possible, subject to the constraint of mode-wise orthogonality. MPCA is a multilinear extension of principal component analysis (PCA) and a basic algorithm in multilinear subspace learning. Its origin can be traced back to the Tucker decomposition [2] in the 1960s, and it is closely related to higher-order singular value decomposition (HOSVD) [3] and the best rank-(R1, R2, ..., RN) approximation of higher-order tensors [4].
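
In the formulation of [1] (stated here using the mode-n product notation of [3]), a centered tensor sample \mathcal{X} \in \mathbb{R}^{I_1 \times I_2 \times \cdots \times I_N} is mapped to a lower-dimensional tensor

    \mathcal{Y} = \mathcal{X} \times_1 U^{(1)T} \times_2 U^{(2)T} \cdots \times_N U^{(N)T} \in \mathbb{R}^{P_1 \times P_2 \times \cdots \times P_N},

where each projection matrix U^{(n)} \in \mathbb{R}^{I_n \times P_n} (with P_n \leq I_n) has orthonormal columns.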

The algorithm

MPCA performs feature extraction by determining a multilinear projection that captures most of the variation in the original tensorial input. As in PCA, MPCA works on centered data. The MPCA solution follows the alternating least squares (ALS) approach [5]. It is therefore iterative in nature, proceeding by decomposing the original problem into a series of multiple projection subproblems. Each subproblem is a classical PCA problem, which can be solved easily.
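
Below is a minimal sketch of this ALS loop in Python/NumPy. The function names (mpca, unfold), the truncated-identity initialization, and the fixed iteration count are illustrative assumptions, not the reference implementation of [1].

```python
import numpy as np

def unfold(tensor, mode):
    # Mode-n unfolding: bring `mode` to the front, flatten the remaining modes.
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def mpca(samples, ranks, n_iter=5):
    # samples: (M, I_1, ..., I_N) array of M centered tensor samples.
    # ranks:   target mode dimensions (P_1, ..., P_N).
    # Returns one projection matrix of shape (I_n, P_n) per mode n.
    N = samples.ndim - 1
    # Illustrative initialization: truncated identity in every mode.
    Us = [np.eye(samples.shape[n + 1])[:, :ranks[n]] for n in range(N)]
    for _ in range(n_iter):
        for n in range(N):
            # Project every sample in all modes except mode n.
            proj = samples
            for m in range(N):
                if m != n:
                    proj = np.tensordot(proj, Us[m], axes=([m + 1], [0]))
                    proj = np.moveaxis(proj, -1, m + 1)
            # Mode-n scatter matrix, accumulated over all samples.
            S = sum(unfold(x, n) @ unfold(x, n).T for x in proj)
            # Leading eigenvectors give the updated mode-n projection:
            # this is the classical PCA subproblem mentioned above.
            w, V = np.linalg.eigh(S)
            Us[n] = V[:, np.argsort(w)[::-1][:ranks[n]]]
    return Us
```

For example, `Us = mpca(X, ranks=(4, 4, 4))` with X of shape (100, 10, 12, 14) returns three matrices of shapes (10, 4), (12, 4), and (14, 4).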

Note that while PCA with orthogonal transformations produces uncorrelated features/variables, this is not the case for MPCA. Owing to the nature of the tensor-to-tensor transformation, MPCA features are not uncorrelated in general, even though the transformation in each mode is orthogonal [6]. In contrast, uncorrelated MPCA (UMPCA) [6] generates uncorrelated multilinear features.
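
The lack of decorrelation can be checked empirically by reusing the mpca and unfold sketch above (the random data and the reshape-then-corrcoef procedure are purely illustrative):

```python
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8, 8, 8))
X -= X.mean(axis=0)                      # center, as MPCA requires
Us = mpca(X, ranks=(3, 3, 3))
feats = X
for m, U in enumerate(Us):
    # Apply the mode-m projection U^T to every sample.
    feats = np.moveaxis(np.tensordot(feats, U, axes=([m + 1], [0])), -1, m + 1)
feats = feats.reshape(len(X), -1)        # vectorize the 3x3x3 feature tensors
corr = np.corrcoef(feats, rowvar=False)  # off-diagonals nonzero in general,
                                         # unlike in-sample PCA scores
```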

Feature selection

MPCA produces tensorial features. For conventional use, vectorial features are often preferred; for example, most classifiers in the literature take vectors as input. Moreover, because there are correlations among MPCA features, a further selection step often improves performance. Supervised (discriminative) MPCA feature selection is used in [1] for object recognition, while unsupervised MPCA feature selection is employed in a visualization task [7].
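
As an illustration of supervised selection, a simple per-feature Fisher-style ratio (a generic stand-in, not the exact criterion used in [1]) can rank vectorized MPCA features:

```python
import numpy as np

def fisher_scores(features, labels):
    # Per-feature ratio of between-class to within-class scatter.
    # features: (M, D) vectorized MPCA features; labels: (M,) class labels.
    overall = features.mean(axis=0)
    between = np.zeros(features.shape[1])
    within = np.zeros(features.shape[1])
    for c in np.unique(labels):
        fc = features[labels == c]
        between += len(fc) * (fc.mean(axis=0) - overall) ** 2
        within += ((fc - fc.mean(axis=0)) ** 2).sum(axis=0)
    return between / np.maximum(within, 1e-12)

# Keep the k highest-scoring features:
# top_k = np.argsort(fisher_scores(feats, labels))[::-1][:k]
```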

Extensions

Various extensions of MPCA have been developed [8], including:

  • Uncorrelated MPCA (UMPCA) [6]
  • Boosting of discriminant learners on MPCA features [9]
  • Non-negative MPCA (NMPCA) [10]
  • Robust MPCA (RMPCA) [11]

Resources

  • Matlab code: MPCA.

References

  1. ^ a b H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "MPCA: Multilinear principal component analysis of tensor objects," IEEE Trans. Neural Netw., vol. 19, no. 1, pp. 18–39, Jan. 2008.
  2. ^ L. R. Tucker, "Some mathematical notes on three-mode factor analysis," Psychometrika, vol. 31, no. 3, pp. 279–311, Sep. 1966. doi:10.1007/BF02289464.
  3. ^ L. De Lathauwer, B. De Moor, and J. Vandewalle, "A multilinear singular value decomposition," SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1253–1278, 2000.
  4. ^ L. De Lathauwer, B. De Moor, and J. Vandewalle, "On the best rank-1 and rank-(R1, R2, ..., RN) approximation of higher-order tensors," SIAM Journal on Matrix Analysis and Applications, vol. 21, no. 4, pp. 1324–1342, 2000.
  5. ^ P. M. Kroonenberg and J. de Leeuw, Principal component analysis of three-mode data by means of alternating least squares algorithms, Psychometrika, 45 (1980), pp. 69–97.
  6. ^ a b c H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Uncorrelated multilinear principal component analysis for unsupervised multilinear subspace learning," IEEE Trans. Neural Netw., vol. 20, no. 11, pp. 1820–1836, Nov. 2009.
  7. ^ H. Lu, H.-L. Eng, M. Thida, and K. N. Plataniotis, "Visualization and clustering of crowd video content in MPCA subspace," in Proceedings of the 19th ACM Conference on Information and Knowledge Management (CIKM 2010), Toronto, ON, Canada, October 2010.
  8. ^ H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "A survey of multilinear subspace learning for tensor data," Pattern Recognition, vol. 44, no. 7, pp. 1540–1551, 2011. doi:10.1016/j.patcog.2011.01.004. http://www.dsp.utoronto.ca/~haiping/Publication/SurveyMSL_PR2011.pdf
  9. ^ H. Lu, K. N. Plataniotis and A. N. Venetsanopoulos, "Boosting Discriminant Learners for Gait Recognition using MPCA Features", EURASIP Journal on Image and Video Processing, Volume 2009, Article ID 713183, 11 pages, 2009. doi:10.1155/2009/713183.
  10. ^ Y. Panagakis, C. Kotropoulos, G. R. Arce, "Non-negative multilinear principal component analysis of auditory temporal modulations for music genre classification", IEEE Trans. on Audio, Speech, and Language Processing, vol. 18, no. 3, pp. 576–588, 2010.
  11. ^ K. Inoue, K. Hara, K. Urahama, "Robust multilinear principal component analysis", Proc. IEEE Conference on Computer Vision, 2009, pp. 591–597.
