Differences between MATLAB's pca function and the Karhunen-Loève expansion

confused_engineer
TL;DR
I am using MATLAB's pca function to generate a Karhunen-Loève expansion, but I am struggling to understand the results. Function documentation: https://la.mathworks.com/help/stats/pca.html; the article I am following: https://arxiv.org/pdf/1509.07526.pdf.
I am also using the book Principal Component Analysis by I.T. Jolliffe.
Hello everyone. I am currently using the pca function from MATLAB on a Gaussian process. MATLAB's pca returns three results: coeff, score and latent. latent contains the eigenvalues of the covariance matrix, coeff the eigenvectors of that matrix, and score the representation of the original data in the principal-component space. If the original data is a matrix with n observations of p variables, coeff is a p-by-p matrix and score is n-by-p, the same size as the original data (see the MathWorks documentation).
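To make those three outputs concrete, here is a small NumPy sketch (Python rather than MATLAB, and not the attached code — an illustration only; variable names mirror MATLAB's outputs) of what pca computes from centered data:

```python
import numpy as np

rng = np.random.default_rng(0)
# n = 200 observations of p = 3 correlated variables (toy data for illustration)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 3))

# Center the data, as MATLAB's pca does by default
Xc = X - X.mean(axis=0)

# Eigendecomposition of the sample covariance matrix
C = Xc.T @ Xc / (X.shape[0] - 1)
latent, coeff = np.linalg.eigh(C)        # eigenvalues ascending, eigenvectors in columns
order = np.argsort(latent)[::-1]         # reorder descending, as pca does
latent, coeff = latent[order], coeff[:, order]

# score: the centered data expressed in the principal-component basis (n-by-p)
score = Xc @ coeff

# Sanity checks: the data is recovered from score and coeff, and the
# variance of each score column equals the corresponding eigenvalue
assert np.allclose(Xc, score @ coeff.T)
assert np.allclose(score.var(axis=0, ddof=1), latent)
```

Note that the sign of each eigenvector column is arbitrary, so coeff may differ from MATLAB's output by per-column sign flips.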

The Karhunen-Loève expansion is similar to PCA, but unfortunately MATLAB doesn't implement it directly. In the article, instead of coeffs and scores, the process is decomposed into random variables and eigenfunctions.
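For reference, the expansion in question has the standard form (my notation, not quoted from the article):

```latex
X(t) = \mu(t) + \sum_{k=1}^{\infty} \sqrt{\lambda_k}\, \xi_k\, \phi_k(t)
```

where the \(\phi_k\) are the orthonormal eigenfunctions of the covariance kernel, the \(\lambda_k\) the corresponding eigenvalues, and the \(\xi_k\) uncorrelated zero-mean, unit-variance random variables (i.i.d. standard Gaussian when the process is Gaussian).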

I cannot find the relationship between score/coeff on one side and the random variables/eigenfunctions on the other.

According to the article, the random variables of a standard Gaussian process are standard Gaussian random variables, which is what I find if I plot the histograms of the coeffs. This is very strange to me, since this way I am comparing an eigenvector with a random variable. The scores would then be the eigenfunctions, which I also find weird, since I expected the eigenfunctions to be the coeffs and the random variables to be the scores.

Can anyone please confirm whether it is right that the coeffs of pca are the random variables of the Karhunen-Loève expansion and the scores are the eigenfunctions? Thanks for reading.

I am also attaching the code that I am using to calculate the pca.
 

I am not an expert in this topic, but I will try to answer since no one else has jumped in. The K-L expansion is not equivalent to an eigenvector decomposition; rather, it is a more general decomposition onto an orthogonal basis. Eigenvector decomposition and Fourier analysis are two specific instances of K-L. (This is presumably why MATLAB doesn't have a KL function: it takes very different forms depending on the problem being solved.)

Given that, I don't really understand your question. I also don't see why you are mixing up the terms; the MATLAB documentation seems very clear about what each variable means. Finally, I assume that you are not taking eigenvectors of a random variable, but are instead decomposing its covariance matrix?
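To illustrate that last point numerically, here is a hedged NumPy sketch (my own toy setup, not from the thread): sample paths of a zero-mean Gaussian process with an exponential covariance kernel are discretized on a grid and run through the same PCA steps. In this discretized reading, the columns of coeff play the role of sampled eigenfunctions of the covariance, while the rescaled score columns behave like the KL random variables:

```python
import numpy as np

rng = np.random.default_rng(1)

# Grid and exponential covariance kernel k(s, t) = exp(-|s - t|)
t = np.linspace(0.0, 1.0, 50)
C = np.exp(-np.abs(t[:, None] - t[None, :]))

# Draw 5000 sample paths of the zero-mean Gaussian process; one path per row
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(t)))
X = rng.normal(size=(5000, len(t))) @ L.T

# PCA of the sample paths (same steps as MATLAB's pca on centered data)
Xc = X - X.mean(axis=0)
latent, coeff = np.linalg.eigh(Xc.T @ Xc / (X.shape[0] - 1))
order = np.argsort(latent)[::-1]
latent, coeff = latent[order], coeff[:, order]
score = Xc @ coeff

# KL reading: coeff[:, k] is the discretized eigenfunction phi_k, and
# score[:, k] / sqrt(latent[k]) are samples of the random variable xi_k,
# which should look like standard normal draws for a Gaussian process
xi1 = score[:, 0] / np.sqrt(latent[0])
print(xi1.mean(), xi1.std())   # approximately 0 and 1
```

The histogram of xi1 (not of the coeff entries) is what should look standard Gaussian under this reading.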
 
