Principal Components and the Residual Matrix

Jrb599
I've been reading about principal components and residual matrices.

It's my understanding that if you use every principal component to reconstruct your original data, then the residual matrix should be 0.

Therefore, I created a fake dataset of two random variables and calculated the principal components.


When I compute ##\text{eigenvector}_{1,1}\cdot\text{princomp}_{1,1} + \text{eigenvector}_{1,2}\cdot\text{princomp}_{1,2}## I get var 1, and similarly ##\text{eigenvector}_{2,1}\cdot\text{princomp}_{2,1} + \text{eigenvector}_{2,2}\cdot\text{princomp}_{2,2}## gives var 2,

so the residual matrix is 0, which is what I wanted. However, this is only true when I standardize the data.

If I don't standardize the data, the two formulas I listed above aren't true.

What is throwing me for a loop is that none of the papers I read said anything about standardizing the data, but it looks like the data must be standardized for this to hold. I don't want to make any assumptions, so I thought I would ask. Is this correct?
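For concreteness, the setup described above can be sketched in a few lines of NumPy (the fake two-variable dataset and all variable names here are illustrative, not the original poster's actual program). With every principal component kept, the reconstruction of the *mean-centered* data is exact:

```python
import numpy as np

# Fake dataset of two correlated random variables (illustrative data)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) @ np.array([[2.0, 0.5], [0.5, 1.0]])

# PCA via eigendecomposition of the covariance matrix
Xc = X - X.mean(axis=0)                 # mean-centering
eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))

# Scores (principal components) and reconstruction using ALL components
scores = Xc @ eigvecs
X_hat = scores @ eigvecs.T              # inverse of an orthonormal matrix is its transpose

residual = Xc - X_hat
print(np.allclose(residual, 0))         # True: the residual matrix is zero
```

Note that the exact reconstruction is of the centered data `Xc`, not the raw `X`; that distinction is where the confusion in this thread comes from.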
 
Hey Jrb599 and welcome to the forums.

Did you take into account the eigen-values for the principal component eigen-vectors?

The eigen-values represent the variance components, which are related to the un-standardized random variables' variances.
 
Hi Chiro,

Thanks for the response. Yeah, I've taken the eigenvalues into account, and I still can't get it to work.
 
Just out of curiosity, what eigen-values do you get from PCA for the standardized data? Are they unit length?

Also, you should calculate the PCA matrix and take its inverse to go from PCA space back to the original space, since PCA is a linear transformation from the original space to the new space.

Try this to get the original random variables if you are in the initial PCA space.
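A quick numerical check of this point (a minimal sketch with illustrative names): for the orthonormal eigenvector matrix returned by an eigendecomposition of a covariance matrix, the inverse transform back to the original space is simply the transpose, so no explicit matrix inversion is needed.

```python
import numpy as np

# Illustrative data; W's columns are the (orthonormal) eigenvectors
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 2))
W = np.linalg.eigh(np.cov(A, rowvar=False))[1]

# Because W is orthonormal, its inverse equals its transpose
print(np.allclose(np.linalg.inv(W), W.T))   # True
```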
 
Chiro - I realized the program I was using was still mean-centering the data. It's working now.
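This resolution can be demonstrated directly (a hedged NumPy sketch; the data and variable names are illustrative). If the PCA routine mean-centers internally, reconstructing from all components recovers the centered data, so comparing against the raw data leaves a residual equal to the column means rather than zero:

```python
import numpy as np

# Illustrative data with a clearly nonzero mean
rng = np.random.default_rng(2)
X = rng.normal(loc=5.0, size=(100, 2))

mu = X.mean(axis=0)
Xc = X - mu                               # what the software centers internally
W = np.linalg.eigh(np.cov(Xc, rowvar=False))[1]
scores = Xc @ W

X_hat = scores @ W.T                      # reconstructs the CENTERED data
print(np.allclose(X - X_hat, mu))         # True: residual is the mean, not 0
print(np.allclose(X_hat + mu, X))         # True once the mean is added back
```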
 