Adjoint of a linear operator: definition

In summary: the adjoint of a linear operator T on an inner product space V is the mapping T* of V into V defined by the equation <T*(v),w> = <v,T(w)> for all v, w in V. This is equivalent to the definition <T(v),w> = <v,T*(w)>, since the adjoint of the adjoint is the operator itself, so either definition may be used, even within the same problem. The adjoint of a linear transformation can be represented by a matrix, and a transformation is self-adjoint if its matrix form is symmetric (real case) or Hermitian (complex case). The choice of variable names used in the computation is immaterial.
  • #1
kingwinner
Definition from my textbook: For each linear operator T on an inner product space V, the adjoint of T is the mapping T* of V into V that is defined by the equation <T*(v),w> = <v,T(w)> for all v, w in V.

My instructor defined it by <T(v),w> = <v,T*(w)> and he said that these 2 definitions are equivalent.

Now, can someone please explain WHY they are equivalent?

Can both definitions be used at the SAME time, or do I have to choose 1 of the 2 definitions and use this chosen definition consistently everywhere?


Thanks for explaining!
 
  • #2
Hey kingwinner,

They are equivalent because you can prove that the adjoint of the adjoint is the operator itself.

Here's a longer explanation: suppose one definition tells us that
<T*(v),w> = <v,T(w)>
and the other that
<T(v),w>=<v,T@(w)>
Note that I've used different symbols (*,@) to denote conjugation, because we are not yet sure it is the same thing.
It is possible to show, using these definitions, that
T@@ = T
and
T** = T
(I won't go into the details here.)
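One way to see, say, T** = T (a sketch, using only the first definition together with the conjugate symmetry of the inner product):
[tex]\langle T^{**}(v), w\rangle = \langle v, T^{*}(w)\rangle = \overline{\langle T^{*}(w), v\rangle} = \overline{\langle w, T(v)\rangle} = \langle T(v), w\rangle[/tex]
for all w, so T**(v) = T(v) for every v; a similar argument works for @.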

Assuming this, we have, on the one hand:
<T(v),w> = <T**(v),w> = <(T*)*(v),w> = <v,T*(w)>
and on the other (by definition!):
<T(v),w> = <v,T@(w)>
Since this is true for all v,w, we must have T@ = T*.

Hope this helps :).

--------
Assaf
http://www.physicallyincorrect.com/
 
  • #3
"Adjoint" is a dual property: If A* is the adjoint of A, the A is the adjoint of A* so either way works.
 
  • #4
kingwinner said:
Now, can someone please explain WHY they are equivalent?

Isn't it obvious? The inner product is conjugate-symmetric, and conjugating both sides of an equality preserves it, so each definition turns into the other.
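Concretely (a sketch), conjugate symmetry turns the textbook definition into the instructor's:
[tex]\langle T(w), v\rangle = \overline{\langle v, T(w)\rangle} = \overline{\langle T^{*}(v), w\rangle} = \langle w, T^{*}(v)\rangle,[/tex]
which is the instructor's definition with the roles of v and w exchanged.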
 
  • #5
Thanks!

So if I am solving a problem, can both definitions be used at the SAME time within the same problem, or do I have to choose 1 of the 2 definitions and use this chosen definition consistently everywhere?
 
  • #6
Since the definitions are equivalent, you may use either at will.

--------
Assaf
http://www.physicallyincorrect.com/
 
  • #7
kingwinner said:
Say, for example, if we consider C^2 with the standard inner product and T: C^2->C^2 is defined by T(x,y)=(x+(1-i)y, (1+i)x+2y), how exactly can I find T*(x,y)?

The definition of T* seems obscure to me...
Since T* is also linear, T*(u,v) must be (au+ bv, cu+ dv) for some complex numbers, a, b, c, and d.

You want [itex]<(x+(1-i)y, (1+i)x+2y), (u,v)>= x\overline{u}+ (1-i)y\overline{u}+ (1+i)x\overline{v}+ 2y\overline{v} [/itex] to be equal to
[itex]< (x,y), (au+ bv,cu+dv)>= \overline{a}x\overline{u}+ \overline{b}x\overline{v}+ \overline{c}y\overline{u}+ \overline{d}y\overline{v}[/itex]. Since this must be true for all x, y, u, v, the coefficients of like products must be equal: [itex]1= \overline{a}, 1- i= \overline{c}, 1+i= \overline{b}, 2= \overline{d}[/itex]. That is, a= 1, b= 1- i, c= 1+i, and d= 2, so T*(x,y)= (x+ (1-i)y, (1+i)x+ 2y). That happens to be exactly the same as T! This T is "self-adjoint".

I did that directly from the definition of adjoint to show how it works.
It is much simpler to just write this as a matrix:
[tex]T= \left[\begin{array}{cc}1 & 1- i \\ 1+i & 2\end{array}\right][/tex]
and take the "Hermitian adjoint": first swap row and columns (the "transpose")
[tex]T= \left[\begin{array}{cc}1 & 1+ i \\ 1-i & 2\end{array}\right][/tex]
then take the complex conjugate of each number.
[tex]T= \left[\begin{array}{cc}1 & 1- i \\ 1+i & 2\end{array}\right][/tex]
again, we see that T is self adjoint.
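As a quick numerical sanity check, here is a minimal sketch using NumPy (an assumption of tooling, not part of the original calculation) that this matrix equals its own Hermitian adjoint:
[code]
import numpy as np

# Matrix of T in the standard basis of C^2
T = np.array([[1, 1 - 1j],
              [1 + 1j, 2]])

# Hermitian adjoint: transpose, then complex-conjugate each entry
T_adjoint = T.T.conj()

# T is self-adjoint exactly when it equals its Hermitian adjoint
print(np.allclose(T, T_adjoint))  # prints: True
[/code]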

For a linear transformation on a finite dimensional inner product space over the real numbers, where the transformation can be written (with respect to an orthonormal basis) as a matrix with real entries, the adjoint is the transpose, and the transformation is "self adjoint" if and only if its matrix is symmetric.

For a linear transformation on a finite dimensional inner product space over the complex numbers, where the transformation can be written as a matrix with complex entries, the adjoint is the "Hermitian adjoint" described above. A transformation is "self adjoint" if and only if its matrix is "Hermitian": [itex]a_{mn}[/itex] is the complex conjugate of [itex]a_{nm}[/itex].

By the way, if a linear transformation maps a vector space U of dimension m to a vector space V of dimension n, then it can be written as an n by m matrix. Its adjoint is a linear transformation from V to U and can be written as an m by n matrix - again the (conjugate) transpose.

Of course, to be "self adjoint", a matrix must be from vector space U to itself and represented by square matrix.
 
  • #8
Thanks a lot!

But what is the relation between (u,v) and (x,y)? The question is looking for T*(x,y), but in your calculations, it seems that (u,v) is used instead of (x,y). I get a little confused here...
 
  • #9
You have two points in C^2. We can specify each point by giving two complex coordinates (e.g. (1, i) or (3+6i, 2-4i)). HallsOfIvy is taking two arbitrary points, and calls the coordinates of one of them x and y, and the coordinates of the other u and v.
Of course, these names are arbitrary. So if the calculation in the end shows that
T*(u,v) = (au+ bv, cu+ dv)
then obviously
T*(x,y) = (ax + by, cx+dy)
since x, y, u and v are just dummy placeholders (variables) for which you can choose any name. But since you are dealing with two different points (one in each slot of the inner product) you need to have at least four coordinates.
 
  • #10
kingwinner said:
Thanks a lot!

But what is the relation between (u,v) and (x,y)? The question is looking for T*(x,y), but in your calculations, it seems that (u,v) is used instead of (x,y). I get a little confused here...
Surely you understand that if I calculate that f(u) = u^2, I can also write f(x) = x^2? It's the same thing.

If T is a linear transformation from inner product space V to inner product space U, then its adjoint is a linear transformation from U to V. If v is any vector in V and u is any vector in U, then the condition for the adjoint is <Tv, u> = <v, T*u>, where the left side is the inner product in U and the right side is the inner product in V.

Your example was a linear transformation from V = C^2 to U = C^2. You had already defined T in terms of x and y, so I took (x,y) to be my vector in V. In order NOT to confuse things by using the same variables to mean something else, I took (u, v) to be my vector in U.
 
  • #11
Thanks!

I have one more question about inner product.

Let u = (u1, u2), v = (v1, v2).
I denote (u1)* as the conjugate of u1.

How can I prove that <u,v> = 2(u1)*v1 + (1-i)(u1)*v2 + (1+i)(u2)*v1 + 3(u2)*v2 defines an inner product on C^2?

Now
[ 2     1-i ]
[ 1+i    3  ]
is a Hermitian matrix, so if I can prove that <v,v> > 0 for v ≠ 0 and <v,v> = 0 iff v = 0, then I'm done, but how can I do so?
 
  • #12
To prove that it is an inner product, you should check that it satisfies the properties of an inner product.

For the question on the last line, you just plug in u1 = v1, u2 = v2 and work out the equation. You will notice that the i's drop out and you can write the result in the form
(... + ...)^2 + ...^2 + ...^2
from which it is easy to complete the proof.
 
  • #13
<v,v>=2(v1)*v1 + (1-i)(v1)*v2 + (1+i)(v2)*v1 + 3(v2)*v2

(v1)*v2 and (v2)*v1 won't cancel, so how can the i's cancel?
 
  • #14
It's not necessary that the entire terms cancel, just the stuff with an i in it
If I work out the brackets on the second and third term, I get
v1 * v2 - i * v1 * v2 + v2 * v1 + i * v2 * v1
And since v1 and v2 are just components of a vector, they are numbers and commute; v1 * v2 = v2 * v1
 
  • #15
CompuChip said:
It's not necessary that the entire terms cancel, just the stuff with an i in it
If I work out the brackets on the second and third term, I get
v1 * v2 - i * v1 * v2 + v2 * v1 + i * v2 * v1
And since v1 and v2 are just components of a vector, they are numbers and commute; v1 * v2 = v2 * v1
Let v1=1+i, v2=1
v1* v2=(1-i)(1)=1-i
v2* v1= 1 (1+i) = 1+i

So in this example, they don't commute...
 
  • #16
Ooohh, confusion over the star :smile:
I used it as a multiplication sign, you used it as the conjugate.
Apparently it still works out then, but not as easily as I thought... probably it is going to involve splitting everything into real and imaginary parts... I'll see if I can find an easier way.
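One route that works (a sketch, completing the square over the complex numbers, with |z| denoting the modulus of z): the two cross terms are complex conjugates of each other, so
[tex]\langle v,v\rangle = 2|v_1|^2 + 2\,\mathrm{Re}\!\left[(1-i)\overline{v_1}\,v_2\right] + 3|v_2|^2 = 2\left|v_1 + \tfrac{1-i}{2}\,v_2\right|^2 + 2|v_2|^2 \ge 0,[/tex]
with equality only if v_2 = 0 and then v_1 = 0, i.e. v = 0.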
 

1. What is the definition of an adjoint of a linear operator?

The adjoint of a linear operator T on an inner product space is the operator T* satisfying <T(v),w> = <v,T*(w)> for all vectors v and w. In matrix terms it corresponds to the conjugate transpose of the operator's matrix (the plain transpose in the real case).

2. How is the adjoint of a linear operator calculated?

The adjoint of a linear operator is calculated by taking the complex conjugate of the operator's matrix (with respect to an orthonormal basis) and then taking the transpose of that result; the two steps can be done in either order. The result is also known as the Hermitian conjugate or conjugate transpose.
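For instance, using the matrix from the discussion above as a small illustration:
[tex]\left[\begin{array}{cc}1 & 1-i \\ 1+i & 2\end{array}\right]^{*} = \overline{\left[\begin{array}{cc}1 & 1-i \\ 1+i & 2\end{array}\right]^{T}} = \left[\begin{array}{cc}1 & 1-i \\ 1+i & 2\end{array}\right],[/tex]
so this particular matrix is its own adjoint (Hermitian).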

3. What is the importance of the adjoint of a linear operator in mathematics?

The adjoint of a linear operator is important in mathematics because it can help to solve complex systems of equations, particularly in quantum mechanics and functional analysis. It also has applications in fields such as signal processing and control theory.

4. How does the adjoint of a linear operator relate to the concept of duality?

The adjoint of a linear operator is closely related to the concept of duality: every linear operator between vector spaces has a dual (transpose) map between the dual spaces. When the inner product is used to identify an inner product space with its dual, the adjoint corresponds to this dual map of the original operator.

5. Can the adjoint of a linear operator be used to prove theorems in mathematics?

Yes, the adjoint of a linear operator can be used to prove theorems in mathematics. It is often used in functional analysis to prove important theorems, such as the Hahn-Banach theorem and the Riesz representation theorem.
