Verifying that a matrix T represents a projection operation

Seydlitz
Hello guys,

I want to verify, or rather show, that a given matrix ##T## represents a projection from ##\mathbb{R}^{3}## onto a particular plane, which also lies in ##\mathbb{R}^{3}##. Would it be enough to multiply that matrix by an arbitrary vector ##(x,y,z)## and check that the resulting vector is orthogonal to the normal vector of the given plane, thus implying that the vector is projected onto the plane?

Or do I need to row reduce the matrix ##T## until I can see the basis vectors used to build ##T##, and verify that they all lie on the plane? Or, since I can also find a basis of the kernel, would showing that the kernel basis is parallel to the normal of the plane be enough? Geometrically, I imagine the kernel consists of all vectors orthogonal to the plane, whose projection onto the plane is ##0##.

Thanks
 
Seydlitz said:
Hello guys,

I want to verify, or rather show, that a given matrix ##T## represents a projection from ##\mathbb{R}^{3}## onto a particular plane, which also lies in ##\mathbb{R}^{3}##. Would it be enough to multiply that matrix by an arbitrary vector ##(x,y,z)## and check that the resulting vector is orthogonal to the normal vector of the given plane, thus implying that the vector is projected onto the plane?
This is necessary but not sufficient. If ##n## is normal to the plane and ##\langle n, Tx\rangle = 0## for all ##x##, then the image of ##T## is contained in the plane, but that doesn't necessarily mean that ##T## is a projection onto the plane. For example, consider the matrix
$$T = \begin{pmatrix}0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$
This matrix has the property that ##\langle n, Tx\rangle = 0##, where ##n = \begin{pmatrix}1 \\ 0 \\ 0 \end{pmatrix}## and ##x## is any vector. So the image lies in the plane whose normal vector is ##\begin{pmatrix}1 \\ 0 \\ 0\end{pmatrix}##. But it is not a projection onto that plane because the image only has dimension 1.

For another example, consider the matrix
$$T = \begin{pmatrix}0 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}$$
Once again, we have ##\langle n, Tx\rangle = 0##, where ##n = \begin{pmatrix}1 \\ 0 \\ 0 \end{pmatrix}## and ##x## is any vector. The dimension of the image is correct (2), but this is still not a projection because it stretches vectors lying in the plane, e.g. it maps ##\begin{pmatrix}0 \\ 1 \\ 0\end{pmatrix}## to ##\begin{pmatrix}0 \\ 2 \\ 0\end{pmatrix}##. So you also need a constraint ensuring that no such stretching (and also no rotating) occurs. This is neatly captured by the condition ##T^2 = T##. Indeed, a matrix represents a projection if and only if it satisfies ##T^2 = T##.
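A quick numeric sanity check of this second counterexample can be sketched in numpy (the variable names here are ours, chosen for illustration): the image of ##T## does lie in the plane with normal ##n = (1,0,0)##, yet ##T^2 \neq T##, so ##T## is not a projection.

```python
import numpy as np

# The "stretching" counterexample from above: image lies in the plane,
# but T is not idempotent.
T = np.array([[0.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 2.0]])
n = np.array([1.0, 0.0, 0.0])   # normal to the plane x = 0
x = np.array([3.0, -1.0, 4.0])  # an arbitrary test vector

print(n @ (T @ x))              # 0.0: the image of x lies in the plane
print(np.array_equal(T @ T, T)) # False: T^2 = diag(0, 4, 4) != T
```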

So to summarize, if you want to show that a 3x3 matrix is a projection onto a particular plane, you need to verify all of the following:
  1. ##T^2 = T##, so ##T## is a projection
  2. ##\dim(\ker(T)) = 1## or equivalently, ##\dim(\text{im}(T)) = 2##, so ##T## projects onto a plane
  3. ##\langle n, Tx\rangle = 0## where ##n## is normal to the plane and ##x## is arbitrary, so ##T## projects onto the specified plane
You can find equivalent conditions which will allow you to do less work. [strike]For example, you can replace condition 3 with ##Tn = 0##.[/strike] [correction: If condition 3 is replaced with ##Tn=0## then not only is ##T## a projection, it is in fact an orthogonal projection.] But the basic idea remains the same.
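The three conditions above are easy to test numerically. One useful observation: ##\langle n, Tx\rangle = 0## for all ##x## is equivalent to the single matrix condition ##n^{\mathsf T} T = 0##. A minimal numpy sketch of the full check (the helper name is hypothetical, not from any library):

```python
import numpy as np

def is_projection_onto_plane(T, n, tol=1e-10):
    """Check the three conditions for T to project onto the plane with normal n."""
    idempotent = np.allclose(T @ T, T, atol=tol)        # 1. T^2 = T
    rank_two = np.linalg.matrix_rank(T, tol=tol) == 2   # 2. dim(im T) = 2
    into_plane = np.allclose(n @ T, 0, atol=tol)        # 3. n^T T = 0, i.e. <n, Tx> = 0 for all x
    return idempotent and rank_two and into_plane

# Example: orthogonal projection onto the plane z = 0.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
print(is_projection_onto_plane(P, np.array([0.0, 0.0, 1.0])))  # True
```

Note that ##2P## fails the check: it maps into the same plane with rank 2, but ##(2P)^2 = 4P \neq 2P##.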
 
Thanks for the comprehensive information, jbunniii, I really appreciate it.

By the way, I just realized that the kernel of a projection matrix is orthogonal to the image of the projection. Is this true in general? For example, if I have a subspace ##W## and a linear transformation from a vector space ##V## to ##W##, can we consider the kernel of that transformation to be the orthogonal complement of ##W##?
 
Seydlitz said:
Thanks for the comprehensive information, jbunniii, I really appreciate it.

By the way, I just realized that the kernel of a projection matrix is orthogonal to the image of the projection.
Actually, that's only true of orthogonal projections. For general projections, it need not be true. Consider for example
$$T = \begin{pmatrix}0 & 0 \\ c & 1 \end{pmatrix}$$
This is a projection matrix, because ##T^2 = T##. The image is the subspace consisting of all scalar multiples of ##\begin{pmatrix}0 \\ 1 \end{pmatrix}##. The kernel is the subspace consisting of all ##\begin{pmatrix}x \\ y \end{pmatrix}## satisfying ##cx + y = 0##, or in other words, all scalar multiples of ##\begin{pmatrix}1 \\ -c \end{pmatrix}##.

You can think of ##T## as a source of light aimed in the direction of ##\begin{pmatrix}1 \\ -c \end{pmatrix}##, which projects a given vector onto its "shadow" on the image subspace. If the source of light is directly overhead (##c = 0##) then it's an orthogonal projection, otherwise it's called an oblique projection.
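The light-source picture can be checked numerically as well. A short numpy sketch of this oblique projection (with an arbitrary choice ##c = 3##; the variable names are ours): the kernel direction ##(1, -c)## is annihilated by ##T## but is not perpendicular to the image direction ##(0, 1)##.

```python
import numpy as np

c = 3.0
T = np.array([[0.0, 0.0],
              [c,   1.0]])
assert np.allclose(T @ T, T)     # T is idempotent, hence a projection

k = np.array([1.0, -c])          # spans the kernel
im = np.array([0.0, 1.0])        # spans the image

print(T @ k)                     # [0. 0.]: k is indeed in the kernel
print(k @ im)                    # -3.0: nonzero, so kernel is NOT orthogonal to image
```

Setting ##c = 0## makes ##k \cdot im = 0##, recovering the orthogonal projection case.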
 
