MHB Question about proof of the linear independence of a dual basis

Summary
The discussion centers on proving the linear independence of a dual basis for a finite-dimensional vector space. It begins with the observation that if a linear combination of the dual basis functionals is the zero functional, then each coefficient must be zero. Participants discuss whether evaluating that combination only at the basis elements of the space suffices to draw this conclusion. The consensus is that it does: applying the combination to each basis element $e_j$ forces the coefficient $\beta_j$ to vanish, and since the coefficients do not depend on the point of evaluation, all of them are zero. The logic of the definition of linear independence is then clarified: to prove the implication, it is enough to establish the conclusion ($\beta_k = 0$ for all $k$) by evaluating at well-chosen elements.
lllllll
This is from Kreyszig's Introductory Functional Analysis, Theorem 2.9-1.

Let $X$ be an n-dimensional vector space and $E=\{e_1, \cdots, e_n \}$ a basis for $X$. Then $F = \{f_1, \cdots, f_n\}$ given by (6) is a basis for the algebraic dual $X^*$ of $X$, and $\text{dim}X^* = \text{dim}X=n$.

Other background info:

$f_k(e_j)=\delta_{jk}$, where $\delta_{jk} = 0$ if $j \neq k$ and $\delta_{jk} = 1$ if $j = k$ (Kronecker delta).

Proof. $F$ is a linearly independent set since

\[\sum_{k=1}^n \beta_k f_k (x) = 0\]

with $x = e_j$ gives

\[\sum_{k=1}^n \beta_k f_k (e_j) = \sum_{k=1}^n \beta_k \delta_{jk} = \beta_j = 0\]

so that, as $j$ runs through $1, \ldots, n$, all the $\beta_k$ are zero.

And it continues to show that $F$ spans $X^*$.

So we show that when we use some specific $x$'s, namely $x=e_j$, then $\beta_k = 0$. But don't we want to show that the $\beta_k$'s are all zero for any $x \in X$? Why does showing that the scalars are zero one at a time, by applying the basis elements of $X$, suffice?

I think I'm missing something really simple here..
 
lllllll said:
So we show that when we use some specific $x$'s, namely $x=e_j$, then $\beta_k = 0$. But don't we want to show that the $\beta_k$'s are all zero for any $x \in X$? Why does showing that the scalars are zero one at a time, by applying the basis elements of $X$, suffice?

I think I'm missing something really simple here..

Do you agree that by the very definition of linear independence in $X^{\ast}$ we need to show: If
\[
\sum_{k=1}^n \beta_k f_k = 0 \qquad (*)
\]
then $\beta_k = 0$ for all $k=1,\ldots,n$.

If you do agree, then let's assume that $(*)$ holds. Act with $(*)$ on $x = e_j$. Then it follows as in your post that $\beta_j$ vanishes, for arbitrary $j$. There is indeed nothing more to it.
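
For concreteness, here is that step written out for $n = 2$ (a minimal worked instance; the general case works the same way). Assuming $\beta_1 f_1 + \beta_2 f_2 = 0$ as a functional on $X$, evaluating at $e_1$ and $e_2$ gives
\[
(\beta_1 f_1 + \beta_2 f_2)(e_1) = \beta_1 \cdot 1 + \beta_2 \cdot 0 = \beta_1 = 0,
\qquad
(\beta_1 f_1 + \beta_2 f_2)(e_2) = \beta_1 \cdot 0 + \beta_2 \cdot 1 = \beta_2 = 0,
\]
so both coefficients vanish.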
 
Krylov said:
Do you agree that by the very definition of linear independence in $X^{\ast}$ we need to show: If
\[
\sum_{k=1}^n \beta_k f_k = 0 \qquad (*)
\]
then $\beta_k = 0$ for all $k=1,\ldots,n$.

If you do agree, then let's assume that $(*)$ holds. Act with $(*)$ on $x = e_j$. Then it follows as in your post that $\beta_j$ vanishes, for arbitrary $j$. There is indeed nothing more to it.

Thanks for the post.

I'm still not convinced I guess. Here's another way to ask my question:
Let's say $x = \sum_{j=1}^n \xi_j e_j$ and then act with $(*)$ on $x$. Assume some of the $\xi_j \neq 1$.

So we assume $\sum_{k=1}^n \beta_k f_k(x) = 0$.

$f_k(x) = \sum_{j=1}^n \xi_j f_k(e_j) = \xi_k$

Then $\sum_{k=1}^n \beta_k f_k(x) = \sum_{k=1}^n \beta_k \xi_k = 0$

From here, how would we conclude that $\beta_1 = \cdots = \beta_n = 0$?
 
lllllll said:
From here, how would we conclude that $\beta_1 = \cdots = \beta_n = 0$?
I don't know how we could conclude it from here, but we can conclude it in another way, namely as Kreyszig does it.

Perhaps the difficulty is not so much with this particular theorem, but more with logic: The definition of linear independence of $f_1,\ldots,f_n$ in $X^*$ involves an implication: If $\sum_{k=1}^n \beta_k f_k = 0$ (premise) then $\beta_k = 0$ for all $k=1,\ldots,n$ (conclusion). If we can somehow, in any way, show that the conclusion is true, then the implication itself is true. We are not even obliged to use the information from the premise.

It seems to me that you think that the required conclusion is something other than what it actually is. The only thing that we somehow need to show is that all the $\beta_k$ are zero. If we can do that one way or the other (for example, by acting with $\sum_{k=1}^n \beta_k f_k = 0$ on a well-chosen element of $X$), then we are done.
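
To connect this with the computation in the previous post (a sketch, using the same notation $x = \sum_{j=1}^n \xi_j e_j$): the premise $\sum_{k=1}^n \beta_k f_k = 0$ means
\[
\sum_{k=1}^n \beta_k f_k(x) = \sum_{k=1}^n \beta_k \xi_k = 0 \quad \text{for every choice of coordinates } \xi_1, \ldots, \xi_n,
\]
so we are free to pick $\xi_j = 1$ and $\xi_k = 0$ for $k \neq j$, i.e. $x = e_j$, and this choice yields $\beta_j = 0$ for each $j$. That is exactly the "well-chosen element" in Kreyszig's proof.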
 
Krylov said:
I don't know how we could conclude it from here, but we can conclude it in another way, namely as Kreyszig does it.

Perhaps the difficulty is not so much with this particular theorem, but more with logic: The definition of linear independence of $f_1,\ldots,f_n$ in $X^*$ involves an implication: If $\sum_{k=1}^n \beta_k f_k = 0$ (premise) then $\beta_k = 0$ for all $k=1,\ldots,n$ (conclusion). If we can somehow, in any way, show that the conclusion is true, then the implication itself is true. We are not even obliged to use the information from the premise.

It seems to me that you think that the required conclusion is something other than what it actually is. The only thing that we somehow need to show is that all the $\beta_k$ are zero. If we can do that one way or the other (for example, by acting with $\sum_{k=1}^n \beta_k f_k = 0$ on a well-chosen element of $X$), then we are done.

Okay, I think I finally get it. I think my confusion came from the quantification on $x \in X$. Please correct me if I'm still not getting it!

We are assuming $\sum_{k=1}^n \beta_k f_k(x)=0$ for every $x \in X$ (since $X$ is the domain of $f_k$).
Now, we choose $x = e_k \in X$, and this leads us to conclude that $\beta_k = 0$, and the quantification "for every $x \in X$" still holds.

So by choosing specific $x \in X$ to show $\beta_k = 0$, we aren't excluding the possibility that for some other $x' \in X$, $\beta_k$ might be non-zero. The conclusion of taking a "well-chosen" $x \in X$ and acting with $\sum_{k=1}^n \beta_k f_k(x)=0$ still holds for all $x$.
 
lllllll said:
Okay, I think I finally get it. I think my confusion came from the quantification on $x \in X$. Please correct me if I'm still not getting it!

We are assuming $\sum_{k=1}^n \beta_k f_k(x)=0$ for every $x \in X$ (since $X$ is the domain of $f_k$).
Now, we choose $x = e_k \in X$, and this leads us to conclude that $\beta_k = 0$, and the quantification "for every $x \in X$" still holds.

Yes, but the quantifier "for every $x \in X$" does not really matter anymore once you have proven that $\beta_k = 0$ for all $k$, since the $\beta$s do not depend on $x$ in any way.

lllllll said:
So by choosing specific $x \in X$ to show $\beta_k = 0$, we aren't excluding the possibility that for some other $x' \in X$, $\beta_k$ might be non-zero.

We are excluding that. Once we have proven that the $\beta$s are all zero, there is no way some or all of them can become non-zero by choosing another $x$.

lllllll said:
The conclusion of taking a "well-chosen" $x \in X$ and acting with $\sum_{k=1}^n \beta_k f_k(x)=0$ still holds for all $x$.

Yes, the latter equality was the premise we used to deduce that all $\beta$s should vanish. (How? By cleverly picking certain $x$s.)
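
For reference, the whole linear-independence argument condensed into one line: assume $\sum_{k=1}^n \beta_k f_k = 0$ in $X^*$; then for each $j = 1, \ldots, n$,
\[
0 = \Big( \sum_{k=1}^n \beta_k f_k \Big)(e_j) = \sum_{k=1}^n \beta_k \delta_{jk} = \beta_j ,
\]
so all the coefficients vanish and $F$ is linearly independent.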
 