# Differences between the PCA function and Karhunen-Loève expansion

Summary:
I am using MATLAB's pca function to generate a Karhunen-Loève expansion. However, I am struggling to understand the results. Function documentation: https://la.mathworks.com/help/stats/pca.html and the article that I am following: https://arxiv.org/pdf/1509.07526.pdf.
I am also using the book Principal Component Analysis by I.T. Jolliffe.
Hello everyone. I am currently using the pca function from MATLAB on a Gaussian process. MATLAB's pca returns three results: coeff, score and latent. latent contains the eigenvalues of the covariance matrix, coeff the eigenvectors of that matrix, and score the representation of the original data in principal-component space. If the original data is a matrix with n observations of p variables, coeff is a p×p matrix and score is n×p, just like the original data (see the MathWorks documentation).
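To pin down what I believe each output is, here is a minimal Python/NumPy sketch of the same computation (an eigendecomposition of the sample covariance standing in for MATLAB's pca; the signs of the eigenvector columns may differ from MATLAB's, and the toy data is only a placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 500, 4                      # n observations of p variables
X = rng.normal(size=(n, p))        # toy data standing in for the process samples

# MATLAB's pca centers the data, then works with the sample covariance
Xc = X - X.mean(axis=0)
C = Xc.T @ Xc / (n - 1)            # p x p sample covariance matrix

latent, coeff = np.linalg.eigh(C)  # eigenvalues and eigenvectors of C
order = np.argsort(latent)[::-1]   # sort by decreasing eigenvalue, as pca does
latent, coeff = latent[order], coeff[:, order]

score = Xc @ coeff                 # data expressed in principal-component space

print(coeff.shape)  # (4, 4)  i.e. p x p
print(score.shape)  # (500, 4)  i.e. n x p, same as the original data
```

A useful sanity check is that each column of score has sample variance equal to the corresponding entry of latent, and each column of coeff has unit norm.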

The Karhunen-Loève expansion is similar to PCA; unfortunately, MATLAB does not implement it directly. In the article, instead of coefficients and scores, the process is decomposed into random variables and eigenfunctions.
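For reference, my understanding of the (truncated) KL expansion of a zero-mean process $X(t)$ from the article is:

```latex
X(t) \approx \sum_{k=1}^{M} \sqrt{\lambda_k}\,\xi_k\,e_k(t)
```

where $(\lambda_k, e_k)$ are the eigenpairs of the covariance operator and the $\xi_k$ are uncorrelated random variables, which are standard Gaussian when $X$ is a Gaussian process.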

I cannot find the relationship between score/coeff and random variable/eigenfunction.

According to the article, the random variables of a standard Gaussian process are standard Gaussian random variables, which is what I find if I plot the histograms of the coeffs. This is very strange to me, since this way I am comparing an eigenvector with a random variable. The scores would then be the eigenfunctions, which I also find weird, since I expected the eigenfunctions to be the coeffs and the random variables to be the scores.
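To make the comparison concrete, here is a sketch in Python/NumPy of the kind of experiment I am running. Standard Brownian motion is only a stand-in for my actual process, and an eigendecomposition of the sample covariance replaces MATLAB's pca; the properties checked at the end are generic facts about PCA output, not an answer to the question:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 2000, 50                    # n sample paths, each observed at p time points

# Standard Brownian motion paths as a stand-in Gaussian process (assumption)
W = np.cumsum(rng.normal(scale=np.sqrt(1.0 / p), size=(n, p)), axis=1)

Wc = W - W.mean(axis=0)            # center, as MATLAB's pca does
C = Wc.T @ Wc / (n - 1)            # p x p sample covariance of the paths
latent, coeff = np.linalg.eigh(C)
order = np.argsort(latent)[::-1]
latent, coeff = latent[order], coeff[:, order]
score = Wc @ coeff

# Each column of coeff is a unit-norm eigenvector (p entries, one per time
# point); each column of score collects n projections, one per sample path,
# with sample variance equal to the corresponding eigenvalue in latent
print(np.linalg.norm(coeff, axis=0)[:3])   # all close to 1
print(score.var(axis=0, ddof=1)[:3])       # close to latent[:3]
```

One can then plot histograms of the columns of coeff and of score (e.g. with matplotlib's hist) and compare both against a standard normal density.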

Can anyone please confirm that the coeffs of pca really are the random variables of the Karhunen-Loève expansion and that the scores are the eigenfunctions?

Thanks for reading.

I am also attaching the code that I am using to calculate the pca.

#### Attachments

• principal_components.txt

## Answers and Replies

marcusl
I am not an expert in this topic, but I will try to answer since no one else has jumped in. The K-L expansion is not equivalent to an eigenvector decomposition; rather, it is a more general decomposition onto an orthogonal basis. Eigenvector decomposition and Fourier analysis are two specific examples of K-L. (This is presumably why MATLAB doesn't have a KL function: it takes very different forms depending on the problem being solved.)

Given that, I don't really understand your question, and I don't see why you are mixing up the terms. The MATLAB documentation seems very clear in defining what each variable means. Finally, I assume that you are not taking eigenvectors of a random variable, but are instead decomposing its covariance matrix?