1. Jan 12, 2015

### "Don't panic!"

My question is as it says in the title really. I've been reading Nakahara's book on geometry and topology in physics and I'm slightly stuck on a part concerning adjoint mappings between vector spaces. It is as follows:

Let $W=W(n,\mathbb{R})$ be a vector space with a basis $\lbrace\mathbf{f}_{\alpha}\rbrace$ and a vector space isomorphism $G:W\rightarrow W^{\ast}$.
Given a map $f:V\rightarrow W$ we may define the $\textbf{adjoint}$ of $f$, denoted by $\tilde{f}$, by $$G(\mathbf{w},f\mathbf{v}) = g(\mathbf{v},\tilde{f}\mathbf{w})$$ where $\mathbf{v}\in V$, $\mathbf{w}\in W$, and $g(\cdot,\cdot)$ is the inner product between the two vectors $\mathbf{v}$ and $\tilde{f}\mathbf{w}$.

He then goes on to say that "it is easy to see from this, that $\widetilde{(\tilde{f})}=f$".
I'm having trouble showing that this is true given the definitions above.

Last edited: Jan 12, 2015
2. Jan 12, 2015

### Fredrik

Staff Emeritus
Edit: I started writing this before you posted #2. Our proofs are essentially the same. The main difference is that I assumed that we're working with complex inner product spaces.

The adjoint of a linear map $A:V\to W$ is the unique $A^*:W\to V$ such that
$$\langle w,Av\rangle_W=\langle A^*w,v\rangle_V$$ for all $v\in V$ and all $w\in W$. So (assuming that $A^*$ is linear...this needs to be proved) the adjoint of $A^*$ is the unique $A^{**}:V\to W$ such that
$$\langle v,A^*w\rangle_V=\langle A^{**}v,w\rangle_W$$ for all $w\in W$ and all $v\in V$.

I will use these facts about the adjoint operation and the definition of the inner product to prove that $A^{**}=A$. For all $v\in V$ and all $w\in W$, we have
$$\langle A^{**}v,w\rangle_W = \langle v,A^*w\rangle_V =\langle A^*w,v\rangle_V^* = \langle w,Av\rangle_W^* =\langle Av,w\rangle_W,$$ and therefore
$$\langle A^{**}v-Av,w\rangle_W =0.$$ This implies that for all $v\in V$, we have
$$\langle A^{**}v-Av,A^{**}v-Av\rangle_W=0,$$ and therefore
$$A^{**}v-Av=0.$$ This implies that $A^{**}=A$.
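[Editor's note: not from the thread — just a numerical sanity check of the argument above. With the standard complex inner product on $\mathbb{C}^n$ the adjoint is the conjugate transpose, and `np.vdot` implements the physics convention used here (it conjugates its first argument), so the defining identity and $A^{**}=A$ can be verified directly:]

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 4  # dim W = m, dim V = n
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# With the standard inner product <x, y> = x.conj() @ y,
# the adjoint A* is the conjugate transpose, so A** = (A*)* = A.
A_star = A.conj().T
A_star_star = A_star.conj().T

v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
w = rng.standard_normal(m) + 1j * rng.standard_normal(m)

# <w, A v>_W == <A* w, v>_V  (defining property of the adjoint)
lhs = np.vdot(w, A @ v)      # vdot conjugates its first argument
rhs = np.vdot(A_star @ w, v)

print(np.allclose(lhs, rhs), np.allclose(A_star_star, A))
```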

3. Jan 12, 2015

### "Don't panic!"

Thanks Fredrik :-) ... I got cold feet with my answer and so deleted it.
So is the notation $G(\cdot,\cdot)$ just denoting the inner product for the vector space $W$?
Also, in the notation that I used is it correct to say that the adjoint mapping of $f:V\rightarrow W$ is the linear map $\tilde{f}:W\rightarrow V$, and therefore the adjoint mapping of $\tilde{f}:W\rightarrow V$ is the linear map $\widetilde{(\tilde{f})}:V\rightarrow W$?

To prove that the adjoint map $\tilde{f}:W\rightarrow V$ of $f$ is unique, let $g:W\rightarrow V$ be any other adjoint map satisfying $$\langle \mathbf{w}, f\mathbf{v}\rangle_{W}=\langle \mathbf{v}, g\mathbf{w}\rangle_{V}$$
then $$\langle \mathbf{v}, g\mathbf{w}\rangle_{V}=\langle \mathbf{w}, f\mathbf{v}\rangle_{W}=\langle \mathbf{v}, \tilde{f}\mathbf{w}\rangle_{V}$$ which implies that $g=\tilde{f}$, i.e. the adjoint map of $f$ is unique. (Is this correct?!)

To prove that $\tilde{f}$ is linear, let $\mathbf{w}_{1}, \mathbf{w}_{2}\in W$ and $\mathbf{v}\in V$. Then, as $f$ is linear, we have $$\langle \mathbf{v}, \tilde{f}(\mathbf{w}_{1}+\mathbf{w}_{2})\rangle_{V}= \langle \mathbf{w}_{1}+\mathbf{w}_{2}, f\mathbf{v}\rangle_{W}=\langle \mathbf{w}_{1}, f\mathbf{v}\rangle_{W}+\langle \mathbf{w}_{2}, f\mathbf{v}\rangle_{W}$$
$$\qquad\qquad\qquad\qquad =\langle \mathbf{v}, \tilde{f}\mathbf{w}_{1}\rangle_{V}+\langle \mathbf{v}, \tilde{f}\mathbf{w}_{2}\rangle_{V}$$ where we have also used the linearity of the inner product.
Next, let $\mathbf{v}\in V, \mathbf{w}\in W$ and $c\in \mathbb{R}$. Then,
$$\langle \mathbf{v}, \tilde{f}(c\mathbf{w})\rangle_{V}= \langle c\mathbf{w}, f\mathbf{v}\rangle_{W} =c\langle \mathbf{w}, f\mathbf{v}\rangle_{W}= c\langle \mathbf{v}, \tilde{f}\mathbf{w}\rangle_{V}=\langle \mathbf{v}, c\tilde{f}\mathbf{w}\rangle_{V}$$
Would this be correct?

4. Jan 12, 2015

### Fredrik

Staff Emeritus
I don't fully understand what's going on in post #1. You say that $G:W\to W^*$ is an isomorphism. Is $W^*$ the dual space of $W$? If the domain of $G$ is $W$, then why do you write $G(\mathbf w,f\mathbf v)$ (which suggests that the domain is $W\times W$)? Is the latter $G$ defined from the first by $G(x,y)=G(x)(y)$? And finally, you don't mention that a map must be linear to have an adjoint. I'm not aware of a definition that applies to non-linear maps between vector spaces.

These statements are certainly true if for every linear transformation $A$ you denote its adjoint by $\tilde A$. But if that's how you define the notation, the statements are saying things like "the adjoint of $f$ is the adjoint of $f$".

Your notation is non-standard. The standard notations are $A^*$ or $A^\dagger$ rather than $\tilde A$. Also, linear transformations are usually denoted by uppercase symbols, like $A$ or $T$.

Yes. There are some minor inaccuracies in the language, but you have the right idea.

Yes.

5. Jan 12, 2015

### "Don't panic!"

In the first post I was basically quoting verbatim what Nakahara has written in his book. I was confused by this also; I don't really understand his notation (it was his notation I was using, with the tilde everywhere instead of $\ast$ or $\dagger$, which, like you say, are the standard notations). Hence my difficulty in understanding how one can show from his definitions that $A^{\ast\ast}=A$.

As far as I understand what is written in the book, yes.

6. Jan 12, 2015

### Ben Niehoff

I'm pretty sure this holds only in finite-dimensional vector spaces. Doesn't make much difference in Nakahara, but keep in mind if you try to apply these things to QM that there are subtleties there regarding adjoints.

7. Jan 12, 2015

### Hawkeye18

I think I can clarify a bit the language used in the book. The spaces $V$ and $W$ can be treated as the spaces $\mathbb R^m$ and $\mathbb R^n$ respectively, but with a non-standard inner product. Namely, the inner product in $V$ is defined using a (symmetric) positive definite matrix $g$, $$(x,y)_V=(x,y)_g = (gx, y)_{\mathbb R^m} = \sum_{j,k} g_{j,k}x^k y^j;$$
here $(x,y)_g$ is the inner product in $V$, which is what is denoted in the book as $g(x,y)$, and $(gx, y)_{\mathbb R^m}$ is the standard inner product of the vectors $gx$ and $y$ in $\mathbb R^m$. Note that if we are given an inner product in $V$, then the matrix $g$ is computed by $$g_{j,k} = (e_k, e_j)_V.$$The matrix $g$ is what is often called the metric tensor, and is interpreted as a bilinear form on $V\times V$,
$$g(x,y) = \sum_{j,k} g_{j,k} y^j x^k .$$ For a fixed $x\in V$ the map $y\mapsto g(x,y)$ is a linear functional on $V$, i.e. an element of the dual space $V^*$, so the bilinear form $g$ defines a mapping from $V$ to $V^*$; it is easy to show that this map is linear and invertible; it is denoted in the book by the same letter $g$.

It is possible to start with the isomorphism $g:V\to V^*$, $$(gx)_j = \sum_{k} g_{j,k} x^k,$$ and then define the inner product as above, and that is exactly what is done in the book. Not all isomorphisms $g$ give rise to a "good" inner product: if we want the inner product to be symmetric, the matrix $g$ has to be symmetric, and if we want the inner product to be non-negative, the matrix needs to be positive definite. So, for me it is more natural to start from the bilinear form than from the isomorphism $V\to V^*$.
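[Editor's note: a hedged sketch, not from the thread, of the construction just described — an SPD matrix $g$ defines a non-standard inner product $(x,y)_g = (gx,y)_{\mathbb R^m}$, and each fixed $x$ gives the functional $y\mapsto (x,y)_g$, i.e. the covector $gx$:]

```python
import numpy as np

rng = np.random.default_rng(2)
m = 3

# Build a symmetric positive definite "metric" matrix g.
M = rng.standard_normal((m, m))
g = M.T @ M + m * np.eye(m)

def inner(x, y):
    """Non-standard inner product (x, y)_g = (g x, y)_{R^m}."""
    return (g @ x) @ y

x, y = rng.standard_normal(m), rng.standard_normal(m)

# Symmetry holds because g is symmetric; positivity because g is SPD.
print(np.isclose(inner(x, y), inner(y, x)), inner(x, x) > 0)

# The isomorphism V -> V*: x maps to the functional y |-> (x, y)_g,
# represented in coordinates by the covector g x.
functional = g @ x
print(np.isclose(functional @ y, inner(x, y)))
```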

Note that non-positive inner products are studied in math and physics (see, for example, the Minkowski metric), but I haven't met a non-symmetric inner product (OK, in the case of complex spaces the inner product is not symmetric but conjugate symmetric; I include that in the "symmetric" case).

I agree with Fredrik that the notation $\tilde f$ for the adjoint operator is quite non-standard, and that the adjoint is defined only for linear transformations.

One more detail: for complex inner product spaces the author follows the "physical" convention (more logical, in my opinion), in which the inner product $(x,y)_V$ is linear in the second variable $y$ and conjugate linear in the first variable $x$; in the mathematical literature it is the other way around.

And finally, the definition of an adjoint transformation is usually written $(y, Ax)_W = (A^*y, x)_V$, as Fredrik wrote (or, equivalently, $(Ax, y)_W=(x, A^*y)_V$), which works in both the real and complex cases. In the book the definition is written $(x, Ay)_W = (y, A^*x)_V$, which does not matter in the real case because of symmetry; but in the complex case, as you can see in the book, the ugly complex conjugation sign appears in the definition of the adjoint.

8. Jan 13, 2015

### "Don't panic!"

I'm still a little unsure on some of the notation used in the book though. When he writes $G:W\rightarrow W^{\ast}$ is he meaning that for some fixed $\mathbf{w}$ the isomorphism $G$ takes $\mathbf{w}\in W$ to $G(\mathbf{w},\cdot)\in W^{\ast}$, i.e. $$G:\mathbf{w}\mapsto \tilde{w}=G(\mathbf{w},\cdot)\in W^{\ast}$$ If so, is it correct to say that the inner product in $W$ is defined as $$G(\mathbf{w},f \mathbf{v})=\langle\mathbf{w},f \mathbf{v}\rangle_{W}, \qquad\mathbf{w},f \mathbf{v}\in W$$ and the inner product in $V$ defined as $$g(\cdot,\cdot)=\langle\cdot , \cdot\rangle_{V}$$ Then the adjoint mapping $f^{\ast}$ is defined such that $\langle\mathbf{w},f \mathbf{v}\rangle_{W}= \langle\mathbf{v},f ^{\ast}\mathbf{w}\rangle_{V}$. Would this be correct (using the notation from the book anyway)?!

9. Jan 13, 2015

### Hawkeye18

Yes, it is correct. He uses the same letter for the isomorphism and for the bilinear form, but he uses $G(x,y)$ when he treats $G$ as a bilinear form and $Gx$ when he treats $G$ as an isomorphism; having this in mind will help you avoid the confusion.

What could be confusing is that in formula (2.19) the symbols $f$, $g$ and $G$ stand not for the respective objects, but for their matrices. For example, $g$ stands for the matrix of the isomorphism $g$ with respect to the bases $e_k: 1\le k\le m$ and $e_k': 1\le k\le m$ in the domain and in the target space respectively (or, equivalently, for the matrix of the bilinear form in the basis $e_k: 1\le k\le m$). The symbol $f$ in this formula stands for the matrix of the transformation $f$ in the basis $e_k: 1\le k\le m$.

Yes, you are correct.
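[Editor's note: I can't quote formula (2.19) itself, but in the real case the defining relation $G(\mathbf w, f\mathbf v) = g(\mathbf v, \tilde f\mathbf w)$, written out with matrices, forces $\tilde f = g^{-1} f^{T} G$, and applying the same recipe twice returns $f$. A quick check of this sketch:]

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 3, 4   # dim V = m, dim W = n

def spd(k, seed):
    """A random symmetric positive definite k x k matrix."""
    A = np.random.default_rng(seed).standard_normal((k, k))
    return A.T @ A + k * np.eye(k)

g, G = spd(m, 10), spd(n, 11)       # metrics on V and W
f = rng.standard_normal((n, m))     # f : V -> W

# Real case: G(w, f v) = w^T G f v and g(v, f~ w) = v^T g f~ w
# agree for all v, w exactly when  f~ = g^{-1} f^T G.
f_adj = np.linalg.solve(g, f.T @ G)

v, w = rng.standard_normal(m), rng.standard_normal(n)
lhs = w @ G @ (f @ v)      # G(w, f v)
rhs = v @ g @ (f_adj @ w)  # g(v, f~ w)
print(np.isclose(lhs, rhs))

# The double adjoint, computed the same way with roles swapped,
# returns f itself: G^{-1} (g^{-1} f^T G)^T g = f.
f_adj_adj = np.linalg.solve(G, f_adj.T @ g)
print(np.allclose(f_adj_adj, f))
```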

10. Jan 14, 2015

### "Don't panic!"

Ah ok, thank you very much for your help!

So, using the notation from the book and following Fredrik's input, I assume the following is correct (for the real case at least)?!

Let $f:V\rightarrow W$ be a linear map between $V$ and $W$ and $G:W\rightarrow W^{\ast}$ be an isomorphism between $W$ and $W^{\ast}$. Then the adjoint of $f$, denoted $\tilde{f}$ is defined such that $$G(\mathbf{w},f \mathbf{v})= g(\mathbf{v},\tilde{f} \mathbf{w})$$
Then, as $\tilde{f}$ is itself a linear map, we may define the adjoint of this map, denoted $\widetilde{(\tilde{f})}$ such that $$g(\mathbf{v},\tilde{f} \mathbf{w})= G(\mathbf{w},\widetilde{(\tilde{f})} \mathbf{v})$$
It follows then, that $$G(\mathbf{w},\widetilde{(\tilde{f})} \mathbf{v})=g(\mathbf{v},\tilde{f} \mathbf{w})=G(\mathbf{w},f \mathbf{v})$$
and as the inner product is linear in its arguments, we may write this as $$G(\mathbf{w},\widetilde{(\tilde{f})} \mathbf{v}-f \mathbf{v}) =0$$
As $\mathbf{v}\in V$ is arbitrary this implies that $$G(\widetilde{(\tilde{f})} \mathbf{v}-f \mathbf{v},\widetilde{(\tilde{f})} \mathbf{v}-f \mathbf{v})=0 \quad\forall \mathbf{v}\in V$$ which further implies that $$\widetilde{(\tilde{f})} \mathbf{v}-f \mathbf{v}=\mathbf{0}\Rightarrow \widetilde{(\tilde{f})}=f$$

Last edited: Jan 14, 2015
11. Jan 14, 2015

### mathwonk

If T is a matrix, then the associativity of matrix multiplication implies that for a row vector v and a column vector w, we have (vT)w = v(Tw). If you think about it (i.e. since (vT)w = < (vT)^t, w> = < T^t(v^t), w>), this says that for column vectors v, w, we have < T^t(v), w> = <v, T(w)>. I.e. the transpose of T acts like an adjoint. I.e. in matrix terms, the adjoint is just the transpose.

Since (T^t)^t = T, i.e. doing the transpose twice takes us back to T (the transpose of the transpose is the original matrix), the adjoint of the adjoint is the original map.
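[Editor's note: mathwonk's two observations in one tiny check, not from the thread — the transpose satisfies the adjoint identity for the standard dot product, and transposing twice is the identity:]

```python
import numpy as np

rng = np.random.default_rng(4)
T = rng.standard_normal((3, 3))
v, w = rng.standard_normal(3), rng.standard_normal(3)

# <T^t v, w> = <v, T w>: the transpose acts as the adjoint.
print(np.isclose((T.T @ v) @ w, v @ (T @ w)))

# Transposing twice returns the original matrix exactly.
print(np.array_equal(T.T.T, T))
```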

12. Jan 15, 2015

### "Don't panic!"

Thanks for your input. Yes, I understand it conceptually, but I was just trying to show it rigorously in general (as the transpose is just a special case of an adjoint mapping, as far as I understand).

13. Jan 16, 2015

### mathwonk

well i would say my argument is rigorous, and yours is abstract. but every finite dimensional adjoint can be described as a transpose of a matrix (or in the complex case, a transpose conjugate), so the matrix argument is also general.

however you may be of the mind of e. artin who said matrices should not be used in vector space arguments. his point however was that they usually make the argument longer and less insightful. I thought this case was an exception where they make it simpler and more obvious.

Actually the problem you are considering has nothing to do with dot products, since the double adjoint of a map from V-->W is a map from V**-->W**, and these spaces are always (in finite dimensions) naturally isomorphic to V and W.

The word "naturally" here refers to the problem you are solving. I.e. given the "obvious" isomorphisms between V and V**, and W and W** (defined by evaluation), show that for any linear map f:V-->W the corresponding map f**:V**-->W** corresponds to f under those isomorphisms.

I.e. "the double dual functor, is naturally equivalent to the identity functor, on finite dimensional vector spaces."

but forgive me, as this is almost certainly not helpful right now, and you are already getting very fine assistance. I just couldn't help writing something about the "natural" way one usually looks at these things. Of course it is then incumbent to relate it to the somewhat "less natural", i.e. dot product oriented, way your book is doing it.

And of course I am now making it even more abstract, after claiming to make it simpler.

To sum up, the "natural" point of view is that the adjoint of a map f:V-->W is a map f*:W*-->V*, and the double adjoint is then a map f**:V**-->W**. There is always an evaluation map V-->V** taking x to "evaluation at x", which is an isomorphism in finite dimensions. Then we can ask whether under this isomorphism, f becomes f**, and it does.

An inner product on V is equivalent to an isomorphism V-->V*, since both things let us evaluate a vector of V on another vector from V. Thus given a dot product we have even more isomorphisms among duals, and it is a (bit of a messy) job to relate the two isomorphisms we get of the double duals, the natural one with the dot product one.

Last edited: Jan 16, 2015
14. Jan 17, 2015

### "Don't panic!"

Sorry, I probably didn't word my post very well, I hadn't meant to 'have a dig'. I guess I was just trying to understand it in the terms used in the book really.
Thanks very much for all the extra information though, it is interesting for insight into the area.
Would what I put in my earlier post be correct though (the one where I show that $\widetilde{(\tilde{f})}=f$)?

Last edited: Jan 17, 2015
15. Jan 17, 2015

### mathwonk

well i find this stuff highly confusing myself and decline to critique again what is above. Fredrik's proof in #2 cannot be improved upon in the desired language.

Let me make one point about “naturality” and compatibility (always in finite dimensions, and over real scalars, with symmetric dot products).

A dot product on V defines an isomorphism *:V-->V*, sending x to x*, where for all y in V, the value of x* on y is x*(y) = <x,y> = y*(x), since this product is symmetric.

A different dot product on V will give a different isomorphism of V with V*.

The adjoint of this isomorphism gives an isomorphism V**-->V*, taking a functional f on V* to the functional (f o *) on V. Now composing this with the inverse isomorphism

*^(-1):V*-->V, gives an isomorphism V**-->V, taking f in V** above to the unique vector x in V such that x* = (f o *), i.e. such that for all y in V, x*(y) = f(y*).

Now this says that f(y*) = x*(y) = <x,y> = <y,x> = y*(x). Thus the functional f corresponding to x, acting on y*, is merely “evaluation of y* at x”. I.e. for all functionals t in V*, f(t) = t(x).

Note that this last description of f does not use the isomorphism * from V to V*, (since there is no * in the equation f(t) = t(x)), hence does not use the dot product. Thus in fact all dot products give the SAME isomorphism from V to V**, even though they all give different isomorphisms from V to V*, and these isomorphisms were used in the composite isomorphism V-->V**.

This is ultimately why, for a linear map A, although A* changes when the dot product changes, A** does not, indeed it is always equal to A.
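[Editor's note: a coordinate sketch, not from the thread, of the argument just made. Identifying $V$, $V^*$, $V^{**}$ with $\mathbb R^m$ via a basis, a metric $g$ gives $*: x \mapsto gx$; the composite $V^{**}\to V^*\to V$ described above sends the coordinate vector $c$ of a functional in $V^{**}$ to $g^{-1}(gc) = c$, the same answer for every metric:]

```python
import numpy as np

m = 3

def spd(seed):
    """A random symmetric positive definite m x m metric."""
    A = np.random.default_rng(seed).standard_normal((m, m))
    return A.T @ A + m * np.eye(m)

# An element of V** in coordinates: f(t) = t . c for covectors t.
c = np.random.default_rng(7).standard_normal(m)

results = []
for seed in (20, 21):          # two different metrics on V
    g = spd(seed)
    covector = g @ c           # f o * in coordinates (g symmetric)
    x = np.linalg.solve(g, covector)   # apply *^{-1}
    results.append(np.allclose(x, c))  # same x for every metric

print(results)
```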

16. Jan 17, 2015

### mathwonk

Well I knew I would regret this, but I might as well finish up the abstract way of doing it with no dot products. Now that we know that the natural isomorphism from V to V** takes a vector to "evaluation at that vector", to show that A = A** under these isomorphisms, amounts to showing that given A:V-->W, then for any vector x in V,
A**(evaluation at x ) = evaluation at A(x).

So denote evaluation at x by x**, and then we want to show that A**(x**) = (A(x))**, as elements of W**.

I.e. for every t in W*, we have (A(x))**(t) = t(A(x)) = (toA)(x) = A*(t)(x) (since by defn A* means "precede by A"), = x**(A*(t)) = (x** o A*)(t) = A**(x**)(t).

I.e. (A(x))** = A**(x**). taadaa!. oink.

By the way, this shows the compositions x-->A(x)-->(A(x))**, and x-->x**-->A**(x**), are equal as maps from V to W**, even when the maps V-->V** and W-->W** are not isomorphisms, i.e. even in infinite dimensions.

But for a general element s of V**, and t in W*, all we get is that (A**(s))(t) = (soA*)(t) = s(A*(t)) = s(toA), which is really just the definition of A**. On the other hand, given that fact, and setting s = x**, one (slightly more briefly) gets (A**(x**))(t) = x**(toA) = (toA)(x) = t(A(x)) = (A(x))**(t).
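[Editor's note: the dot-product-free computation above can be modelled directly, not from the thread, by representing functionals as Python callables: the dual of a map precomposes with it, evaluation at x is the functional t ↦ t(x), and the chain (A(x))**(t) = t(A(x)) = A**(x**)(t) checks out numerically:]

```python
import numpy as np

rng = np.random.default_rng(5)
A_mat = rng.standard_normal((3, 2))
A = lambda x: A_mat @ x            # A : V -> W

def dual(B):
    """The dual map of B: takes a functional t and returns t o B."""
    return lambda t: (lambda u: t(B(u)))

A_star = dual(A)                   # A*  : W* -> V*,   t |-> t o A
A_star_star = dual(A_star)         # A** : V** -> W**, s |-> s o A*

def ev(x):
    """Evaluation map x |-> x**: the functional t |-> t(x)."""
    return lambda t: t(x)

x = rng.standard_normal(2)
t = lambda w: np.array([1.0, -2.0, 0.5]) @ w   # some t in W*

# (A(x))**(t) = t(A(x)) = A**(x**)(t): the square commutes.
print(np.isclose(ev(A(x))(t), A_star_star(ev(x))(t)))
```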

Last edited: Jan 17, 2015
17. Jan 18, 2015

### "Don't panic!"

Thanks, really appreciate the effort that you've gone to to help me with this. I think I'll have to go and have a think about it all a bit more now.