Differences between the PCA function and Karhunen-Loève expansion

In summary, the conversation discusses the use of MATLAB's pca function on a Gaussian process. The function returns three results: Coeff, Score, and Latent. Coeff is the matrix of eigenvectors of the covariance matrix, Score is the representation of the original data in principal-component space, and Latent holds the eigenvalues of the covariance matrix. The Karhunen-Loève expansion is similar to PCA, but MATLAB has no direct implementation of it. The article decomposes the process into random variables and eigenfunctions, and the question is how Score and Coeff relate to the random variables and eigenfunctions. The terminology may be causing confusion, though the MATLAB documentation defines the outputs clearly. The KL expansion is not equivalent to an eigenvector decomposition but is a more general decomposition onto an orthogonal basis.
  • #1
confused_engineer
TL;DR Summary
I am using MATLAB's pca function to generate a Karhunen-Loève expansion. However, I am struggling to understand the results. Function documentation: https://la.mathworks.com/help/stats/pca.html, and the article that I am following: https://arxiv.org/pdf/1509.07526.pdf.
I am also using the book Principal Component Analysis by I.T. Jolliffe
Hello everyone. I am currently using the pca function from MATLAB on a Gaussian process. MATLAB's pca returns three results: Coeff, Score and Latent. Latent holds the eigenvalues of the covariance matrix, Coeff holds the eigenvectors of that matrix, and Score is the representation of the original data in the principal-component space. If the original data is a matrix with n observations of p variables, Coeff is a p×p matrix and Score is n×p, the same size as the original data (see the MathWorks documentation).
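Since the MATLAB script itself is only in an attachment, here is a minimal NumPy sketch (Python rather than MATLAB, purely for illustration) of the relationships just described; the names `coeff`, `score`, and `latent` mirror the pca outputs, and the data matrix is made up:

```python
import numpy as np

# Minimal sketch of what pca computes, for centered data X (n observations x p variables):
#   coeff  = eigenvectors of cov(X)           (p x p)
#   score  = Xc @ coeff                       (n x p)
#   latent = eigenvalues of cov(X), descending
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))  # made-up correlated data

Xc = X - X.mean(axis=0)                  # pca centers the data by default
C = Xc.T @ Xc / (X.shape[0] - 1)         # sample covariance (p x p)
latent, coeff = np.linalg.eigh(C)        # eigendecomposition (ascending order)
order = np.argsort(latent)[::-1]         # reorder to descending eigenvalues
latent, coeff = latent[order], coeff[:, order]
score = Xc @ coeff                       # data in principal-component space

# The scores are uncorrelated, and their variances equal the eigenvalues.
assert np.allclose(np.cov(score.T), np.diag(latent))
assert score.shape == X.shape and coeff.shape == (4, 4)
```

The last assertion is the key fact: projecting centered data onto the eigenvectors decorrelates it, with each score column carrying variance equal to the corresponding element of `latent`.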

The Karhunen-Loève expansion is similar to PCA; unfortunately, MATLAB does not implement it directly. In the article, instead of coefficients and scores, the process is decomposed into random variables and eigenfunctions.

I cannot find the relationship between score-coeff and random variable-eigenfunction.

According to the article, the random variables of a standard Gaussian process are standard Gaussian random variables, which is what I find if I plot the histograms of the coeffs. This is very strange to me, since that way I am comparing an eigenvector with a random variable. The scores would then be the eigenfunctions, which I also find odd, since I expected the eigenfunctions to be the coeffs and the random variables to be the scores.
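For reference, here is a hedged numerical sketch (Python/NumPy, not the thread's MATLAB code) of the usual convention, assuming realizations of the process are stored in rows and grid points in columns. Under that orientation, the eigenvector columns play the role of discretized eigenfunctions and the normalized scores behave like standard Gaussian variables; which interpretation applies to a given dataset depends on how the data matrix is oriented.

```python
import numpy as np

# Discretize a standard Brownian motion (a Gaussian process) on a grid,
# with one realization per ROW and one grid point per COLUMN.
rng = np.random.default_rng(1)
n_paths, n_grid = 2000, 50
dt = 1.0 / n_grid
# Brownian paths: cumulative sums of independent Gaussian increments
X = np.cumsum(rng.normal(scale=np.sqrt(dt), size=(n_paths, n_grid)), axis=1)

Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (n_paths - 1)            # empirical covariance over grid points
latent, coeff = np.linalg.eigh(C)
order = np.argsort(latent)[::-1]
latent, coeff = latent[order], coeff[:, order]
score = Xc @ coeff

# In this orientation, coeff columns ~ discretized KL eigenfunctions, and the
# KL random variables xi_k = score_k / sqrt(latent_k) should be ~ N(0, 1):
xi = score[:, :5] / np.sqrt(latent[:5])
assert np.all(np.abs(xi.mean(axis=0)) < 0.1)       # mean near 0
assert np.all(np.abs(xi.std(axis=0) - 1) < 0.05)   # std near 1
```

This is only a sketch of one convention, not a resolution of the question; transposing the data matrix swaps which output looks like an eigenfunction and which looks like a random variable.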

Can anyone please confirm whether it is right that the coeffs of pca are the random variables of the Karhunen-Loève expansion and the scores are the eigenfunctions? Thanks for reading.

I am also attaching the code that I am using to calculate the pca.
 

Attachments

  • principal_components.txt
  • #2
I am not an expert in this topic, but I will try to answer since no one else has jumped in. The K-L expansion is not equivalent to an eigenvector decomposition; rather, it is a more general decomposition onto an orthogonal basis. Eigenvector decomposition and Fourier analysis are two specific examples of K-L. (This is presumably why MATLAB doesn't have a KL function: it takes very different forms depending on the problem being solved.)
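To illustrate the point that a K-L-type expansion is a decomposition onto an orthonormal basis (with eigenvector and Fourier bases as special cases), here is a small Python sketch using a cosine (DCT-II style) basis; the construction is illustrative only, not code from the thread:

```python
import numpy as np

# Build an orthonormal DCT-II cosine basis (columns are basis vectors).
n = 64
t = np.arange(n)
basis = np.cos(np.pi * (t[:, None] + 0.5) * np.arange(n)[None, :] / n)
basis[:, 0] *= 1 / np.sqrt(2)      # standard DCT-II normalization of the k=0 term
basis *= np.sqrt(2.0 / n)

# Any signal can be expanded onto the basis and perfectly resynthesized.
signal = np.sin(2 * np.pi * t / n) + 0.5 * np.cos(6 * np.pi * t / n)
coeffs = basis.T @ signal          # expansion coefficients
reconstructed = basis @ coeffs     # resynthesis from the basis

assert np.allclose(basis.T @ basis, np.eye(n))   # basis is orthonormal
assert np.allclose(reconstructed, signal)        # perfect reconstruction
```

The PCA eigenvector basis works the same way; what distinguishes the K-L expansion is that its basis is chosen to diagonalize the covariance of the process at hand.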

Given that, I don't really understand your question, and I don't see why you are mixing up the terms; the MATLAB documentation seems very clear in defining what each variable means. Finally, I assume that you are not taking eigenvectors of a random variable but are decomposing its covariance matrix instead?
 

1. What is the main difference between the PCA function and Karhunen-Loève expansion?

The main difference between the PCA function and the Karhunen-Loève expansion is their setting. PCA is a statistical technique for finite-dimensional data, used for dimensionality reduction and data compression, while the Karhunen-Loève expansion is the analogous decomposition for stochastic processes, used in signal processing and data analysis.

2. How do the two methods handle data?

The PCA function works by finding the directions of maximum variance in a dataset and projecting the data onto those directions. The Karhunen-Loève expansion, on the other hand, decomposes a signal into a series of orthogonal components, ordered by their eigenvalues.
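A small NumPy sketch (with made-up data) of the variance ordering described above:

```python
import numpy as np

# Illustrative data with one clearly dominant direction of variance.
rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 3)) * np.array([3.0, 1.0, 0.3])

Xc = X - X.mean(axis=0)
latent = np.sort(np.linalg.eigvalsh(np.cov(Xc.T)))[::-1]  # descending eigenvalues
explained = latent / latent.sum()   # fraction of total variance per component

# Components come out ordered from most to least variance captured.
assert np.all(np.diff(explained) <= 0)
assert explained[0] > 0.8   # the dominant direction carries most of the variance
```

This ordering is what makes truncating the expansion after a few components useful for dimensionality reduction.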

3. Which method is more suitable for image processing?

The Karhunen-Loève expansion is more commonly used in image processing, as it can efficiently capture the underlying statistical structure of an image. The PCA function can also be used for image processing, though it may not be as effective as the Karhunen-Loève expansion in that setting.

4. Are there any limitations to using these methods?

Both the PCA function and the Karhunen-Loève expansion have limitations. PCA may not work well with nonlinear data, since it captures only linear relationships between variables. The Karhunen-Loève expansion may not be suitable for high-dimensional data, as it can lead to overfitting.

5. Can these methods be used together?

Yes, the PCA function and the Karhunen-Loève expansion can be used together. In fact, PCA is often used as a preprocessing step to reduce the dimensionality of the data before applying the expansion.
