Question about proof of the linear independence of a dual basis


Discussion Overview

The discussion revolves around the proof of the linear independence of a dual basis in functional analysis, specifically Theorem 2.9-1 of Kreyszig's Introductory Functional Analysis. Participants explore what linear independence means in the algebraic dual of a vector space and whether evaluating at specific basis elements suffices to demonstrate it.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • Some participants express confusion about why showing that the coefficients vanish when the functional is evaluated at the basis vectors suffices to prove linear independence, rather than having to check every element of the vector space.
  • Others argue that by the definition of linear independence, if a linear combination of the dual basis functions equals zero, then all coefficients must be zero, which can be shown by evaluating the combination at specific basis vectors.
  • A participant suggests that the logic of linear independence involves showing that if the linear combination is zero, then the coefficients must vanish, regardless of the choice of vector.
  • Another participant proposes an alternative approach by considering a linear combination evaluated at a general vector expressed as a linear combination of basis vectors, questioning how to conclude that all coefficients are zero from this evaluation.
  • Some participants clarify that once it is established that the coefficients are zero for specific basis vectors, it follows that they must be zero for all vectors in the space, as the coefficients do not depend on the choice of vector.

Areas of Agreement / Disagreement

Participants generally agree on the definition of linear independence and the necessity of demonstrating that coefficients are zero. However, there remains some disagreement regarding the sufficiency of using specific vectors to establish this conclusion, with some participants still expressing uncertainty about the implications of their reasoning.

Contextual Notes

Participants highlight the importance of understanding the implications of linear independence and the role of specific choices of vectors in proving general statements. There is an ongoing exploration of the logical structure underlying the proof, indicating potential gaps in understanding.

lllllll
This is from Kreyszig's Introductory Functional Analysis Theorem 2.9-1.

Let $X$ be an $n$-dimensional vector space and $E=\{e_1, \cdots, e_n \}$ a basis for $X$. Then $F = \{f_1, \cdots, f_n\}$ given by (6) is a basis for the algebraic dual $X^*$ of $X$, and $\dim X^* = \dim X = n$.

Other background info:

$f_k(e_j)=\delta_{jk}$, where $\delta_{jk} = 0$ if $j \neq k$ and $\delta_{jk} = 1$ if $j = k$ (the Kronecker delta).

Proof. $F$ is a linearly independent set since

\[\sum_{k=1}^n \beta_k f_k (x) = 0\]

with $x = e_j$ gives

\[\sum_{k=1}^n \beta_k f_k (e_j) = \sum_{k=1}^n \beta_k \delta_{jk} = \beta_j = 0\]

so that all the $\beta_k$'s are zero, as we go through each $e_j$.

And the proof continues by showing that $F$ spans $X^*$.

So we show that, when we use some specific $x$'s, namely $x=e_j$, the $\beta_k$'s are zero. But don't we want to show that the $\beta_k$'s are all zero for any $x \in X$? Why does showing that the scalars are zero one at a time, by applying the basis elements of $X$, suffice?

I think I'm missing something really simple here..
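
Here is a minimal numerical sketch of the step in question, under the illustrative assumption $X = \mathbb{R}^3$ with a concrete non-standard basis (the names `E`, `F_dual` and `beta` below are made up for the example). The dual functionals are the rows of the inverse of the basis matrix, and evaluating $\sum_k \beta_k f_k$ at $e_j$ returns exactly $\beta_j$:

```python
import numpy as np

# Columns of E are the basis vectors e_1, e_2, e_3 of X = R^3
# (a deliberately non-standard basis).
E = np.array([[1., 1., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])

# Rows of F_dual are the dual functionals f_1, f_2, f_3, acting by f_k(x) = F_dual[k] @ x.
# Taking F_dual = E^{-1} gives f_k(e_j) = delta_{jk}, since F_dual @ E is the identity.
F_dual = np.linalg.inv(E)
assert np.allclose(F_dual @ E, np.eye(3))

# Suppose g = sum_k beta_k f_k.  Evaluating g at x = e_j returns beta_j,
# so each coefficient is pinned down one basis vector at a time.
beta = np.array([0.7, -1.3, 2.0])      # arbitrary coefficients
g = beta @ F_dual                      # row vector representing sum_k beta_k f_k
for j in range(3):
    print(f"g(e_{j+1}) = {g @ E[:, j]:+.3f}  (this is beta_{j+1} = {beta[j]:+.3f})")
```

So if $\sum_k \beta_k f_k$ really were the zero functional, every value $g(e_j) = \beta_j$ would have to be $0$, which is exactly Kreyszig's argument.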
 
lllllll said:
So we show that, when we use some specific $x$'s, namely $x=e_j$, the $\beta_k$'s are zero. But don't we want to show that the $\beta_k$'s are all zero for any $x \in X$? Why does showing that the scalars are zero one at a time, by applying the basis elements of $X$, suffice?

I think I'm missing something really simple here..

Do you agree that by the very definition of linear independence in $X^{\ast}$ we need to show: If
\[
\sum_{k=1}^n \beta_k f_k = 0 \qquad (*)
\]
then $\beta_k = 0$ for all $k=1,\ldots,n$.

If you do agree, then let's assume that $(*)$ holds. Act with $(*)$ on $x = e_j$. Then it follows as in your post that $\beta_j$ vanishes, for arbitrary $j$. There is indeed nothing more to it.
 
Krylov said:
Do you agree that by the very definition of linear independence in $X^{\ast}$ we need to show: If
\[
\sum_{k=1}^n \beta_k f_k = 0 \qquad (*)
\]
then $\beta_k = 0$ for all $k=1,\ldots,n$.

If you do agree, then let's assume that $(*)$ holds. Act with $(*)$ on $x = e_j$. Then it follows as in your post that $\beta_j$ vanishes, for arbitrary $j$. There is indeed nothing more to it.

Thanks for the post.

I'm still not convinced I guess. Here's another way to ask my question:
Let's say $x = \sum_{j=1}^n \xi_j e_j$ and then act with $(*)$ on $x$. Assume some of the $\xi_j \neq 1$.

So we assume $\sum_{k=1}^n \beta_k f_k(x) = 0$.

$f_k(x) = \sum_{j=1}^n \xi_j f_k(e_j) = \xi_k$

Then $\sum_{k=1}^n \beta_k f_k(x) = \sum_{k=1}^n \beta_k \xi_k = 0$.

From here, how would we conclude that $\beta_1 = \cdots = \beta_n = 0$?
 
lllllll said:
From here, how would we conclude that $\beta_1 = \cdots = \beta_n = 0$?
I don't know how we could conclude it from here, but we can conclude it in another way, namely as Kreyszig does it.

Perhaps the difficulty is not so much with this particular theorem, but more with logic: The definition of linear independence of $f_1,\ldots,f_n$ in $X^*$ involves an implication: If $\sum_{k=1}^n \beta_k f_k = 0$ (premise) then $\beta_k = 0$ for all $k=1,\ldots,n$ (conclusion). If we can somehow, in any way, show that the conclusion is true, then the implication itself is true. We are not even obliged to use the information from the premise.

It seems to me that you think that the required conclusion is something else than what it actually is. The only thing that we somehow need to show is that all $\beta_k$ are zero. If we can do that one way or the other (for example, by acting with $\sum_{k=1}^n \beta_k f_k = 0$ on a well-chosen element of $X$), then we are done.
 
Krylov said:
I don't know how we could conclude it from here, but we can conclude it in another way, namely as Kreyszig does it.

Perhaps the difficulty is not so much with this particular theorem, but more with logic: The definition of linear independence of $f_1,\ldots,f_n$ in $X^*$ involves an implication: If $\sum_{k=1}^n \beta_k f_k = 0$ (premise) then $\beta_k = 0$ for all $k=1,\ldots,n$ (conclusion). If we can somehow, in any way, show that the conclusion is true, then the implication itself is true. We are not even obliged to use the information from the premise.

It seems to me that you think that the required conclusion is something else than what it actually is. The only thing that we somehow need to show is that all $\beta_k$ are zero. If we can do that one way or the other (for example, by acting with $\sum_{k=1}^n \beta_k f_k = 0$ on a well-chosen element of $X$), then we are done.

Okay, I think I finally get it. I think my confusion came from the quantification on $x \in X$. Please correct me if I'm still not getting it!

We are assuming $\sum_{k=1}^n \beta_k f_k(x)=0$ for every $x \in X$ (since $X$ is the domain of the $f_k$).
Now, we choose $x = e_j \in X$, and this leads us to conclude that $\beta_j = 0$, and the quantification "for every $x \in X$" still holds.

So by choosing specific $x \in X$ to show $\beta_k = 0$, we aren't excluding the possibility that for some other $x' \in X$, $\beta_k$ might be non-zero. The conclusion of taking a "well-chosen" $x \in X$ and acting with $\sum_{k=1}^n \beta_k f_k(x)=0$ still holds for all $x$.
 
lllllll said:
Okay, I think I finally get it. I think my confusion came from the quantification on $x \in X$. Please correct me if I'm still not getting it!

We are assuming $\sum_{k=1}^n \beta_k f_k(x)=0$ for every $x \in X$ (since $X$ is the domain of the $f_k$).
Now, we choose $x = e_j \in X$, and this leads us to conclude that $\beta_j = 0$, and the quantification "for every $x \in X$" still holds.

Yes, but the quantifier "for every $x \in X$" does not really matter anymore once you have proven that $\beta_k = 0$ for all $k$, since the $\beta$s do not depend on $x$ in any way.

lllllll said:
So by choosing specific $x \in X$ to show $\beta_k = 0$, we aren't excluding the possibility that for some other $x' \in X$, $\beta_k$ might be non-zero.

We are excluding that. Once we have proven that the $\beta$s are all zero, there is no way some or all of them can become non-zero by choosing another $x$.

lllllll said:
The conclusion of taking a "well-chosen" $x \in X$ and acting with $\sum_{k=1}^n \beta_k f_k(x)=0$ still holds for all $x$.

Yes, the latter equality was the premise we used to deduce that all $\beta$s should vanish. (How? By cleverly picking certain $x$s.)
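
Written out with the quantifier explicit, the exchange above amounts to the following (just a restatement of the argument, nothing new):
\[
\Bigl( \forall x \in X:\ \sum_{k=1}^n \beta_k f_k(x) = 0 \Bigr)
\;\Longrightarrow\;
\sum_{k=1}^n \beta_k f_k(e_j) = \beta_j = 0 \quad \text{for each } j = 1,\ldots,n,
\]
and since the $\beta_j$ are fixed scalars that do not depend on $x$, this settles the conclusion "$\beta_k = 0$ for all $k$" once and for all.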
 
