What is Kern and how does it relate to skew Hermitian matrices?

In summary, for a matrix A in Mn(ℂ) one defines the complex vector space C(A), and, when A is similar to A*, the spaces C(A,A*) and H(A,A*). The map T(S) = (S + S*)/2 is then linear as a map between real vector spaces, and Kern(T) is the set of skew Hermitian matrices in C(A,A*). The paper claims that this set equals iH(A,A*), which is proven by showing that any element of Kern(T) lies in iH(A,A*) and vice versa.
  • #1
BrainHurts
I need some help understanding the following definition:

Definition: Let A[itex]\in[/itex]Mn(ℂ). Define the complex vector space

C(A)={X[itex]\in[/itex]Mn(ℂ) : XA=AX}

For A[itex]\in[/itex]Mn(ℂ) which is similar to A*, we define the complex vector space C(A,A*) and the real vector space H(A,A*):

C(A,A*)={S[itex]\in[/itex]Mn(ℂ) : SA=A*S}

H(A,A*)={H[itex]\in[/itex]Mn(ℂ): H is Hermitian and HA=A*H} [itex]\subset[/itex] C(A,A*)

Define a map T:C(A,A*)→H(A,A*) by T(S)=[itex]\frac{1}{2}[/itex]S + [itex]\frac{1}{2}[/itex]S*

As a map between real vector spaces, T is linear and Kern(T)={X[itex]\in[/itex]C(A,A*) : X is skew Hermitian}=iH(A,A*)
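
To get a feel for these definitions I ran a quick numerical sanity check (my own toy example with a Hermitian A, so that A is trivially similar to A*; this assumes NumPy and is not from the paper):

[code]
import numpy as np

# Toy example: a Hermitian A, so A is trivially similar to A* and C(A,A*) = C(A).
A = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(A, A.conj().T)

# Any polynomial in A commutes with A, so S = A^2 + iA lies in C(A, A*).
S = A @ A + 1j * A
assert np.allclose(S @ A, A.conj().T @ S)            # S A = A* S

# T(S) = (S + S*)/2 lands in H(A, A*): Hermitian and intertwines A with A*.
T_S = 0.5 * (S + S.conj().T)
assert np.allclose(T_S, T_S.conj().T)                # T(S) is Hermitian
assert np.allclose(T_S @ A, A.conj().T @ T_S)        # T(S) A = A* T(S)

# A skew Hermitian element of C(A, A*), e.g. iA, is killed by T.
P = 1j * A
assert np.allclose(P @ A, A.conj().T @ P)
assert np.allclose(0.5 * P + 0.5 * P.conj().T, np.zeros_like(P))   # T(P) = 0
print("all checks passed")
[/code]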

I just want to make sure that my understanding is correct, and to ask what "Kern" is short for.
To say that P[itex]\in[/itex]Kern(T) means that P is an element of C(A,A*), so that PA=A*P, and that P is skew Hermitian.

The definition is from the paper I am reading, by J. Vermeer, page 263:

http://www.math.technion.ac.il/iic/ela//ela-articles/articles/vol17_pp258-283.pdf

Thank you for any further comments
 
  • #2
A matrix P is an element of Kern(T) if [itex]P\in C(A,A^*)[/itex] and if [itex]T(P)=0[/itex].
So you know that
[tex]PA=A^*P~\text{and}~\frac{1}{2}P+\frac{1}{2}P^*=0[/tex]

The paper now claims that

[tex]Kern(T)=iH(A,A^*)[/tex]

and that these are exactly the skew Hermitian matrices in C(A,A*). This is not a definition of Kern(T), but a theorem.
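
For example, the second condition can be rewritten as

[tex]\frac{1}{2}P+\frac{1}{2}P^*=0~\Leftrightarrow~P^*=-P,[/tex]

which is precisely the statement that P is skew Hermitian. The content of the theorem is that such a P can moreover be written as i times an element of H(A,A*).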
 
  • #3
So I know we can look at the set of Hermitian matrices as analogous to the real numbers (I think)

so let A be Hermitian. Then A can be written as A=B+iC where B and C are Hermitian.

If we look at that linear map, it's like looking at the real-part identity for complex numbers:

let z=x+iy

x=[itex]\frac{1}{2}[/itex](z+[itex]\overline{z}[/itex])

so if 0=[itex]\frac{1}{2}[/itex](z+[itex]\overline{z}[/itex]), then z is purely imaginary

so if we look at P in the Kern(T)

it's like saying P is skew-Hermitian, which is analogous to a number being purely imaginary. And if A=A*, it's like saying z=[itex]\overline{z}[/itex], so z is real; hence if A=A* and A=B+iC, this implies that C=0.
right?
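
For what it's worth, here is a quick NumPy check of that decomposition (my own toy matrix, not from the paper):

[code]
import numpy as np

# Split a generic complex matrix into Hermitian "real" and "imaginary" parts,
# A = B + iC, mirroring z = x + iy.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

B = 0.5 * (A + A.conj().T)        # "real part": Hermitian
C = (A - A.conj().T) / 2.0j       # "imaginary part": also Hermitian

assert np.allclose(B, B.conj().T)
assert np.allclose(C, C.conj().T)
assert np.allclose(A, B + 1j * C)

# If A is itself Hermitian, the "imaginary part" vanishes,
# just like z = conj(z) forces y = 0.
A_herm = 0.5 * (A + A.conj().T)
assert np.allclose((A_herm - A_herm.conj().T) / 2.0j, 0)
print("decomposition checks passed")
[/code]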
 
  • #4
BrainHurts said:
So I know we can look at the set of Hermitian matrices as analogous to the real numbers (I think)

so let A be Hermitian. Then A can be written as A=B+iC where B and C are Hermitian.

But if A is Hermitian, then C=0.

If we look at that linear map, it's like looking at the real-part identity for complex numbers.

let z=x+iy

x=[itex]\frac{1}{2}[/itex](z+[itex]\overline{z}[/itex])

so if 0=[itex]\frac{1}{2}[/itex](z+[itex]\overline{z}[/itex]), then z is purely imaginary

so if we look at P in the Kern(T)

it's like saying P is skew-Hermitian, which is analogous to a number being purely imaginary


and if A=A*, it's like saying z=[itex]\overline{z}[/itex], so z is real; hence if A=A* and A=B+iC, this implies that C=0
right?

OK, so you have the right analogue statements. But can you now prove the result for P directly?
 
  • #5
So let me start with this assertion he makes

"S[itex]\in[/itex]C(A,A*) implies S*[itex]\in[/itex]C(A,A*)

This is done by one of his "standard propositions" (just take conjugate transposes: SA = A*S gives A*S* = S*A, i.e. S*A = A*S*).

for the proof of Kern(T)=iH(A,A*)

1) Kern(T)[itex]\subseteq[/itex]iH(A,A*)

let P [itex]\in[/itex] Kern(T)

then PA=A*P and [itex]\frac{1}{2}[/itex]P+[itex]\frac{1}{2}[/itex]P*=0

So P=-P*

It follows that P is skew Hermitian. Moreover, writing P=iH with H=-iP, we have H*=iP*=-iP=H and HA=-iPA=-iA*P=A*H, so H[itex]\in[/itex]H(A,A*) and P[itex]\in[/itex]iH(A,A*)

so Kern(T)[itex]\subseteq[/itex]iH(A,A*)

For the other inclusion, let P[itex]\in[/itex]iH(A,A*), so P=iH for some H[itex]\in[/itex]H(A,A*)

Then P is skew Hermitian (if H is Hermitian, then iH is skew Hermitian)

this is the part that I'm getting stuck on: showing that P is actually in Kern(T)
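
Edit: writing it out in the complex-number style from before, I think the rest should go something like this (someone please check): with P=iH, H Hermitian and HA=A*H,

[tex]P^*=-iH^*=-iH=-P,\qquad PA=iHA=iA^*H=A^*P[/tex]

[tex]T(P)=\frac{1}{2}P+\frac{1}{2}P^*=\frac{1}{2}P-\frac{1}{2}P=0[/tex]

so P[itex]\in[/itex]Kern(T) and iH(A,A*)[itex]\subseteq[/itex]Kern(T)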
 

What is the difference between Kern(T) and Ker(T)?

There is no mathematical difference: both Kern(T) and Ker(T) denote the kernel of a linear transformation T, that is, the set of all vectors in the domain that T maps to the zero vector. "Kern" is simply the abbreviation used in the German and Dutch literature, where the kernel is called the Kern; "Ker" is the abbreviation more common in English texts. In either notation, the kernel is a subspace of the domain.

Why is Kern(T) important in linear algebra?

Kern(T) is important in linear algebra because it describes exactly which vectors a linear transformation collapses to zero, and its dimension is the nullity of T. It also appears in the rank-nullity theorem, which relates the dimension of the domain to the dimensions of the kernel and the image of T.

How is Kern(T) related to the null space of a matrix?

Kern(T) and the null space of a matrix are two names for the same object. The null space of a matrix A is the set of all vectors x with Ax = 0, i.e. the vectors that A maps to the zero vector. So if T is the linear transformation defined by x ↦ Ax, then the null space of A is exactly Kern(T).
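
For example, for [tex]A=\begin{pmatrix}1&1\\2&2\end{pmatrix}[/tex], the equation Ax = 0 reduces to x + y = 0, so the null space of A (equivalently, the kernel of the map x ↦ Ax) is the line spanned by [itex](1,-1)^T[/itex].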

Can Kern(T) be empty?

Strictly speaking, no: Kern(T) always contains the zero vector, so it is never empty. It can, however, be trivial, meaning Kern(T) = {0}, so that no nonzero vector is mapped to the zero vector. In that case the linear transformation T is injective (one-to-one), the nullity of T is 0, and by the rank-nullity theorem the dimension of the image of T equals the dimension of the domain.

How can Kern(T) be calculated?

Kern(T) can be calculated by representing T as a matrix A (with respect to chosen bases) and solving the homogeneous system Ax = 0, for example by Gaussian elimination. The solution set is the null space of A, and a basis of that null space, read back through the chosen bases, gives a basis of Kern(T).
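
As a minimal sketch of the last point (assuming NumPy; the matrix and the rank tolerance are just illustrative choices), the null space of the small example above can be read off from the SVD:

[code]
import numpy as np

# Sketch: compute the null space of A from its SVD.
# Rows of Vh belonging to (numerically) zero singular values span Ker(A).
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])

U, s, Vh = np.linalg.svd(A)
tol = max(A.shape) * np.finfo(A.dtype).eps * s.max()   # illustrative tolerance
rank = int((s > tol).sum())
null_basis = Vh[rank:].conj().T      # columns form a basis of the null space

print(null_basis)                       # one column, proportional to (1, -1)
print(np.allclose(A @ null_basis, 0))   # True: these vectors are mapped to 0
[/code]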
