Question about proof of the linear independence of a dual basis

In summary: Kreyszig's Introductory Functional Analysis Theorem 2.9-1 states that the dual basis $F$ is a linearly independent set (and hence a basis for the algebraic dual $X^*$). To prove the independence, the only thing that needs to be shown is that all $\beta_k$ are zero, and that can be done by acting with $\sum_{k=1}^n \beta_k f_k = 0$ on well-chosen elements of $X$, namely the basis vectors $e_j$. The original poster's confusion came from the quantification over $x \in X$.
  • #1
lllllll
This is from Kreyszig's Introductory Functional Analysis Theorem 2.9-1.

Let $X$ be an $n$-dimensional vector space and $E=\{e_1, \cdots, e_n \}$ a basis for $X$. Then $F = \{f_1, \cdots, f_n\}$ given by (6) is a basis for the algebraic dual $X^*$ of $X$, and $\dim X^* = \dim X = n$.

Other background info:

$f_k(e_j)=\delta_{jk} = \begin{cases} 0 & \text{if } j \neq k, \\ 1 & \text{if } j = k \end{cases}$ (Kronecker delta).
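
For context: the original post does not reproduce Kreyszig's (6), but, consistent with the computation $f_k(x) = \xi_k$ in post #3 below, each $f_k$ is the $k$-th coordinate functional with respect to $E$ (a paraphrase, not a verbatim quote of the book):

\[f_k(x) = \xi_k \qquad \text{whenever } x = \sum_{j=1}^n \xi_j e_j,\]

which is exactly what the Kronecker-delta relation above encodes on the basis vectors.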

Proof. $F$ is a linearly independent set since

\[\sum_{k=1}^n \beta_k f_k (x) = 0\]

with $x = e_j$ gives

\[\sum_{k=1}^n \beta_k f_k (e_j) = \sum_{k=1}^n \beta_k \delta_{jk} = \beta_j = 0\]

so that all the $\beta_k$'s are zero, as we go through each $e_j$.

And it continues to show that $F$ spans $X^*$.

So we show that when we use some specific $x$'s, namely $x = e_j$, then $\beta_k = 0$. But don't we want to show that the $\beta_k$'s are all zero for any $x \in X$? Why does it suffice to show that the scalars are zero one at a time, by applying the sum to the basis elements of $X$?

I think I'm missing something really simple here...
 
  • #2
lllllll said:
So we show that when we use some specific $x$'s, namely $x = e_j$, then $\beta_k = 0$. But don't we want to show that the $\beta_k$'s are all zero for any $x \in X$? Why does it suffice to show that the scalars are zero one at a time, by applying the sum to the basis elements of $X$?

I think I'm missing something really simple here...

Do you agree that by the very definition of linear independence in $X^{\ast}$ we need to show: If
\[
\sum_{k=1}^n \beta_k f_k = 0 \qquad (*)
\]
then $\beta_k = 0$ for all $k=1,\ldots,n$?

If you do agree, then let's assume that $(*)$ holds. Act with $(*)$ on $x = e_j$. Then it follows as in your post that $\beta_j$ vanishes, for arbitrary $j$. There is indeed nothing more to it.
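
To make the step concrete, here is the same argument written out for $n = 2$ (an added worked illustration, not part of Kreyszig's text or the original post). Suppose $\beta_1 f_1 + \beta_2 f_2 = 0$ as a functional on $X$. Acting on $e_1$ and on $e_2$ gives

\[0 = \beta_1 f_1(e_1) + \beta_2 f_2(e_1) = \beta_1 \cdot 1 + \beta_2 \cdot 0 = \beta_1, \qquad 0 = \beta_1 f_1(e_2) + \beta_2 f_2(e_2) = \beta_1 \cdot 0 + \beta_2 \cdot 1 = \beta_2,\]

so both coefficients vanish, which is exactly the required conclusion.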
 
  • #3
Krylov said:
Do you agree that by the very definition of linear independence in $X^{\ast}$ we need to show: If
\[
\sum_{k=1}^n \beta_k f_k = 0 \qquad (*)
\]
then $\beta_k = 0$ for all $k=1,\ldots,n$?

If you do agree, then let's assume that $(*)$ holds. Act with $(*)$ on $x = e_j$. Then it follows as in your post that $\beta_j$ vanishes, for arbitrary $j$. There is indeed nothing more to it.

Thanks for the post.

I'm still not convinced, I guess. Here's another way to ask my question:
Let's say $x = \sum_{j=1}^n \xi_j e_j$ and then act with $(*)$ on $x$, where the $\xi_j$ are arbitrary scalars (so $x$ is not necessarily one of the $e_j$).

So we assume $\sum_{k=1}^n \beta_k f_k(x) = 0$.

$f_k(x) = \sum_{j=1}^n \xi_j f_k(e_j) = \xi_k$

Then $\sum_{k=1}^n \beta_k f_k(x) = \sum_{k=1}^n \beta_k \xi_k = 0$

From here, how would we conclude that $\beta_1 = \cdots = \beta_n = 0$?
 
  • #4
lllllll said:
From here, how would we conclude that $\beta_1 = \cdots = \beta_n = 0$?
I don't know how we could conclude it from here, but we can conclude it in another way, namely as Kreyszig does it.

Perhaps the difficulty is not so much with this particular theorem, but more with logic: The definition of linear independence of $f_1,\ldots,f_n$ in $X^*$ involves an implication: If $\sum_{k=1}^n \beta_k f_k = 0$ (premise) then $\beta_k = 0$ for all $k=1,\ldots,n$ (conclusion). If we can somehow, in any way, show that the conclusion is true, then the implication itself is true. We are not even obliged to use the information from the premise.

It seems to me that you think that the required conclusion is something other than what it actually is. The only thing that we need to show is that all $\beta_k$ are zero. If we can do that one way or the other (for example, by acting with $\sum_{k=1}^n \beta_k f_k = 0$ on a well-chosen element of $X$), then we are done.
 
  • #5
Krylov said:
I don't know how we could conclude it from here, but we can conclude it in another way, namely as Kreyszig does it.

Perhaps the difficulty is not so much with this particular theorem, but more with logic: The definition of linear independence of $f_1,\ldots,f_n$ in $X^*$ involves an implication: If $\sum_{k=1}^n \beta_k f_k = 0$ (premise) then $\beta_k = 0$ for all $k=1,\ldots,n$ (conclusion). If we can somehow, in any way, show that the conclusion is true, then the implication itself is true. We are not even obliged to use the information from the premise.

It seems to me that you think that the required conclusion is something other than what it actually is. The only thing that we need to show is that all $\beta_k$ are zero. If we can do that one way or the other (for example, by acting with $\sum_{k=1}^n \beta_k f_k = 0$ on a well-chosen element of $X$), then we are done.

Okay, I think I finally get it. I think my confusion came from the quantification on $x \in X$. Please correct me if I'm still not getting it!

We are assuming $\sum_{k=1}^n \beta_k f_k(x)=0$ for every $x \in X$ (since $X$ is the domain of $f_k$).
Now, we choose $x = e_k \in X$, and this leads us to conclude that $\beta_k = 0$, and the quantification "for every $x \in X$" still holds.

So by choosing specific $x \in X$ to show $\beta_k = 0$, we aren't excluding the possibility that for some other $x' \in X$, $\beta_k$ might be non-zero. The conclusion of taking a "well-chosen" $x \in X$ and acting with $\sum_{k=1}^n \beta_k f_k(x)=0$ still holds for all $x$.
 
  • #6
lllllll said:
Okay, I think I finally get it. I think my confusion came from the quantification on $x \in X$. Please correct me if I'm still not getting it!

We are assuming $\sum_{k=1}^n \beta_k f_k(x)=0$ for every $x \in X$ (since $X$ is the domain of $f_k$).
Now, we choose $x = e_k \in X$, and this leads us to conclude that $\beta_k = 0$, and the quantification "for every $x \in X$" still holds.

Yes, but the quantifier "for every $x \in X$" does not really matter anymore once you have proven that $\beta_k = 0$ for all $k$, since the $\beta$s do not depend on $x$ in any way.

lllllll said:
So by choosing specific $x \in X$ to show $\beta_k = 0$, we aren't excluding the possibility that for some other $x' \in X$, $\beta_k$ might be non-zero.

We are excluding that. Once we have proven that the $\beta$s are all zero, there is no way some or all of them can become non-zero by choosing another $x$.

lllllll said:
The conclusion of taking a "well-chosen" $x \in X$ and acting with $\sum_{k=1}^n \beta_k f_k(x)=0$ still holds for all $x$.

Yes, the latter equality was the premise we used to deduce that all $\beta$s should vanish. (How? By cleverly picking certain $x$s.)
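
To tie this back to the general expansion in post #3 (an added remark, not from the original thread): the premise holds for every $x$, hence for every choice of coordinates $(\xi_1, \ldots, \xi_n)$, so

\[0 = \sum_{k=1}^n \beta_k f_k(x) = \sum_{k=1}^n \beta_k \xi_k \qquad \text{for all } \xi_1, \ldots, \xi_n,\]

and choosing $\xi_k = \delta_{jk}$ (that is, $x = e_j$) recovers $\beta_j = 0$ for each $j$, which is exactly Kreyszig's choice.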
 

1. What is a dual basis?

A dual basis is a basis for the dual space of a vector space. It is made up of linear functionals $f_1, \ldots, f_n$ that are determined by a given basis $e_1, \ldots, e_n$ of the original space through the conditions $f_k(e_j) = \delta_{jk}$; in other words, each $f_k$ returns the $k$-th coordinate of a vector with respect to that basis.
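
As a concrete illustration (an added example, using the standard basis of $\mathbb{R}^2$):

\[e_1 = (1, 0),\ e_2 = (0, 1), \qquad f_1(x_1, x_2) = x_1,\ f_2(x_1, x_2) = x_2,\]

so that indeed $f_k(e_j) = \delta_{jk}$.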

2. Why is the linear independence of a dual basis important?

The linear independence of the dual basis is important because it is half of what is needed for the functionals $f_1, \ldots, f_n$ to form a basis of the dual space, the other half being that they span $X^*$. Together these facts give $\dim X^* = \dim X = n$ and guarantee that every functional in $X^*$ has a unique representation as a linear combination of the $f_k$.

3. How is the proof of linear independence of a dual basis carried out?

The proof of linear independence of a dual basis is carried out by assuming that a linear combination $\sum_{k=1}^n \beta_k f_k$ of the functionals equals the zero functional and showing that all the coefficients $\beta_k$ must be zero. Evaluating the combination at each basis vector $e_j$ and using $f_k(e_j) = \delta_{jk}$ gives $\beta_j = 0$ directly, as in the proof quoted above.

4. What is the relationship between a dual basis and a basis of a vector space?

A dual basis is closely related to a basis of a vector space. While a basis spans the vector space, a dual basis spans the dual space. Concretely, the functionals $f_k$ read off the coordinates of a vector with respect to the basis $e_1, \ldots, e_n$, and every functional on the space can in turn be expanded in terms of the $f_k$.
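
In symbols (an added summary of these expansions):

\[x = \sum_{k=1}^n f_k(x)\, e_k \quad \text{for every } x \in X, \qquad f = \sum_{k=1}^n f(e_k)\, f_k \quad \text{for every } f \in X^*,\]

the second identity being the spanning statement mentioned at the end of the excerpt in post #1.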

5. Can a dual basis be unique?

Yes. For a given basis of the vector space, the dual basis is uniquely determined, since each $f_k$ is fixed by the conditions $f_k(e_j) = \delta_{jk}$. Different bases of the vector space, however, give rise to different dual bases.
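
For example (an added illustration in $\mathbb{R}^2$): for the basis $e_1 = (1, 0)$, $e_2 = (1, 1)$, the dual basis is

\[f_1(x_1, x_2) = x_1 - x_2, \qquad f_2(x_1, x_2) = x_2,\]

since these are the unique functionals satisfying $f_k(e_j) = \delta_{jk}$, and they differ from the coordinate functionals dual to the standard basis.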
