MHB Question about proof of the linear independence of a dual basis

lllllll
This is from Kreyszig's Introductory Functional Analysis Theorem 2.9-1.

Let $X$ be an n-dimensional vector space and $E=\{e_1, \cdots, e_n \}$ a basis for $X$. Then $F = \{f_1, \cdots, f_n\}$ given by (6) is a basis for the algebraic dual $X^*$ of $X$, and $\text{dim}X^* = \text{dim}X=n$.

Other background info:

\[
f_k(e_j) = \delta_{jk} =
\begin{cases}
1 & \text{if } j = k,\\
0 & \text{if } j \neq k
\end{cases}
\qquad \text{(Kronecker delta).}
\]
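Equivalently (this is just the delta relation combined with linearity, spelled out): writing an arbitrary $x \in X$ in the basis $E$ as $x = \sum_{j=1}^n \xi_j e_j$, each functional acts by

\[
f_k(x) = \sum_{j=1}^n \xi_j f_k(e_j) = \xi_k,
\]

so $f_k$ picks out the $k$-th coordinate of $x$ with respect to $E$.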

Proof. $F$ is a linearly independent set since

\[\sum_{k=1}^n \beta_k f_k (x) = 0 \qquad \text{for all } x \in X\]

with $x = e_j$ gives

\[\sum_{k=1}^n \beta_k f_k (e_j) = \sum_{k=1}^n \beta_k \delta_{jk} = \beta_j = 0\]

so that all the $\beta_k$'s are zero, as we go through each $e_j$.

And it continues to show that $F$ spans $X^*$.

So we show that when we use some specific $x$'s, namely $x = e_j$, then $\beta_j = 0$. But don't we want to show that the $\beta_k$'s are all zero for any $x \in X$? Why does showing that the scalars are zero one at a time, by applying the sum to the basis elements of $X$, suffice?

I think I'm missing something really simple here..
 
lllllll said:
So we show that when we use some specific $x$'s, namely $x = e_j$, then $\beta_j = 0$. But don't we want to show that the $\beta_k$'s are all zero for any $x \in X$? Why does showing that the scalars are zero one at a time, by applying the sum to the basis elements of $X$, suffice?

I think I'm missing something really simple here..

Do you agree that by the very definition of linear independence in $X^{\ast}$ we need to show: If
\[
\sum_{k=1}^n \beta_k f_k = 0 \qquad (*)
\]
then $\beta_k = 0$ for all $k=1,\ldots,n$.

If you do agree, then let's assume that $(*)$ holds. Act with $(*)$ on $x = e_j$. Then it follows as in your post that $\beta_j$ vanishes, for arbitrary $j$. There is indeed nothing more to it.
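For concreteness (this is the same computation written out, not extra machinery), take $n = 2$. Acting with $(*)$ on $e_1$ and on $e_2$ gives

\[
(\beta_1 f_1 + \beta_2 f_2)(e_1) = \beta_1 \cdot 1 + \beta_2 \cdot 0 = \beta_1 = 0,
\qquad
(\beta_1 f_1 + \beta_2 f_2)(e_2) = \beta_1 \cdot 0 + \beta_2 \cdot 1 = \beta_2 = 0.
\]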
 
Krylov said:
Do you agree that by the very definition of linear independence in $X^{\ast}$ we need to show: If
\[
\sum_{k=1}^n \beta_k f_k = 0 \qquad (*)
\]
then $\beta_k = 0$ for all $k=1,\ldots,n$.

If you do agree, then let's assume that $(*)$ holds. Act with $(*)$ on $x = e_j$. Then it follows as in your post that $\beta_j$ vanishes, for arbitrary $j$. There is indeed nothing more to it.

Thanks for the post.

I'm still not convinced, I guess. Here's another way to ask my question:
Let's say $x = \sum_{j=1}^n \xi_j e_j$ and then act with $(*)$ on $x$. Assume some of the $\xi_j \neq 1$, so that $x$ is not just one of the basis vectors.

So we assume $\sum_{k=1}^n \beta_k f_k(x) = 0$.

$f_k(x) = \sum_{j=1}^n \xi_j f_k(e_j) = \xi_k$

Then $\sum_{k=1}^n \beta_k f_k(x) = \sum_{k=1}^n \beta_k \xi_k = 0$

From here, how would we conclude that $\beta_1 = \cdots = \beta_n = 0$?
 
lllllll said:
From here, how would we conclude that $\beta_1 = \cdots = \beta_n = 0$?
I don't know how we could conclude it from here, but we can conclude it in another way, namely as Kreyszig does it.

Perhaps the difficulty is not so much with this particular theorem, but more with logic: The definition of linear independence of $f_1,\ldots,f_n$ in $X^*$ involves an implication: If $\sum_{k=1}^n \beta_k f_k = 0$ (premise) then $\beta_k = 0$ for all $k=1,\ldots,n$ (conclusion). If we can somehow, in any way, show that the conclusion is true, then the implication itself is true. We are not even obliged to use the information from the premise.

It seems to me that you think that the required conclusion is something other than what it actually is. The only thing that we somehow need to show is that all $\beta_k$ are zero. If we can do that one way or the other (for example, by acting with $\sum_{k=1}^n \beta_k f_k = 0$ on a well-chosen element of $X$), then we are done.
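To connect this with your computation above: choosing $x = e_j$ amounts to taking $\xi_i = \delta_{ij}$, so your equation $\sum_{k=1}^n \beta_k \xi_k = 0$ specializes to

\[
\sum_{k=1}^n \beta_k \delta_{jk} = \beta_j = 0,
\]

and letting $j$ run from $1$ to $n$ forces every coefficient to vanish. A single generic $x$ gives only one linear equation in the $\beta_k$, which is why that one equation alone cannot determine them.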
 
Krylov said:
I don't know how we could conclude it from here, but we can conclude it in another way, namely as Kreyszig does it.

Perhaps the difficulty is not so much with this particular theorem, but more with logic: The definition of linear independence of $f_1,\ldots,f_n$ in $X^*$ involves an implication: If $\sum_{k=1}^n \beta_k f_k = 0$ (premise) then $\beta_k = 0$ for all $k=1,\ldots,n$ (conclusion). If we can somehow, in any way, show that the conclusion is true, then the implication itself is true. We are not even obliged to use the information from the premise.

It seems to me that you think that the required conclusion is something other than what it actually is. The only thing that we somehow need to show is that all $\beta_k$ are zero. If we can do that one way or the other (for example, by acting with $\sum_{k=1}^n \beta_k f_k = 0$ on a well-chosen element of $X$), then we are done.

Okay, I think I finally get it. I think my confusion came from the quantification on $x \in X$. Please correct me if I'm still not getting it!

We are assuming $\sum_{k=1}^n \beta_k f_k(x)=0$ for every $x \in X$ (since $X$ is the domain of each $f_k$).
Now, we choose $x = e_j \in X$, and this leads us to conclude that $\beta_j = 0$, while the quantification "for every $x \in X$" still holds.

So by choosing specific $x \in X$ to show $\beta_j = 0$, we aren't excluding the possibility that for some other $x' \in X$, $\beta_j$ might be non-zero. The conclusion from taking a "well-chosen" $x \in X$ and acting with $\sum_{k=1}^n \beta_k f_k(x)=0$ still holds for all $x$.
 
lllllll said:
Okay, I think I finally get it. I think my confusion came from the quantification on $x \in X$. Please correct me if I'm still not getting it!

We are assuming $\sum_{k=1}^n \beta_k f_k(x)=0$ for every $x \in X$ (since $X$ is the domain of each $f_k$).
Now, we choose $x = e_j \in X$, and this leads us to conclude that $\beta_j = 0$, while the quantification "for every $x \in X$" still holds.

Yes, but the quantifier "for every $x \in X$" does not really matter anymore once you have proven that $\beta_k = 0$ for all $k$, since the $\beta$s do not depend on $x$ in any way.

lllllll said:
So by choosing specific $x \in X$ to show $\beta_j = 0$, we aren't excluding the possibility that for some other $x' \in X$, $\beta_j$ might be non-zero.

We are excluding that. Once we have proven that the $\beta$s are all zero, there is no way some or all of them can become non-zero by choosing another $x$.

lllllll said:
The conclusion from taking a "well-chosen" $x \in X$ and acting with $\sum_{k=1}^n \beta_k f_k(x)=0$ still holds for all $x$.

Yes, the latter equality was the premise we used to deduce that all $\beta$s should vanish. (How? By cleverly picking certain $x$s.)
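If it helps, the whole argument compresses into one chain of implications:

\[
\sum_{k=1}^n \beta_k f_k = 0 \ \text{in } X^*
\;\Longrightarrow\;
\sum_{k=1}^n \beta_k f_k(x) = 0 \ \text{for every } x \in X
\;\Longrightarrow\;
\beta_j = \sum_{k=1}^n \beta_k f_k(e_j) = 0 \ \text{for each } j = 1, \ldots, n.
\]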
 
