Linear Space Shilov: Subspace Basis & Contradictions

  • Thread starter: Locrianz
  • Tags: Linear Space

Homework Help Overview

The discussion revolves around a concept in linear algebra related to the dimensions of vector spaces and their subspaces, specifically addressing a scenario presented in Shilov's text regarding bases and linear dependence. The original poster is grappling with a contradiction involving the linear dependence of vectors in a subspace when additional vectors are introduced from a larger vector space.

Discussion Character

  • Conceptual clarification, Assumption checking

Approaches and Questions Raised

  • Participants are exploring the implications of adding an additional vector to a basis of a subspace and questioning the conditions under which linear dependence arises. There is a focus on understanding the relationship between the coefficients of the vectors and their implications for independence.

Discussion Status

Some participants are offering insights into the reasoning behind the contradiction mentioned in the text, while others are questioning the clarity of the original statement and the assumptions that may not have been explicitly stated. There appears to be a productive exchange of ideas, with some participants expressing confusion and seeking further clarification.

Contextual Notes

There is mention of potential translation issues with the text, which may contribute to the confusion surrounding the problem. Participants are also reflecting on the completeness of the information provided in the original problem statement.

Locrianz
Hi guys, I just had a quick question about Shilov's Linear Algebra, page 44. Here he shows that for an n-dimensional linear (vector) space K, a subspace, say L, must have dimension no greater than n.
He then goes on to say that if a basis is chosen in the subspace L, say f_1, f_2, ..., f_r, then additional vectors can be chosen from the linear space K, say f_{r+1}, ..., f_n, such that f_1, f_2, ..., f_r, ..., f_n form a basis for K. Which makes perfect sense. But here is the part that throws me:
Suppose we have the relation

c_1f_1+c_2f_2+...+c_rf_r+c_{r+1}f_{r+1}=0

where the c terms are constants and f_{r+1} is an additional vector from K added to the basis of L.
He says that if c_{r+1} does not equal zero then L is K, which is true. But then he says that if c_{r+1}=0 then the vectors f_1, f_2, ..., f_r are linearly dependent, which would be a contradiction. How can this be true? If these form the basis for L and we set their combination equal to zero, then all the constants must equal zero, hence they will be linearly independent. I checked the errata and there was no mention of this.
Any help will be appreciated, thanks guys.
 
Locrianz said:
He says that if c_{r+1} does not equal zero then L is K, which is true. But then he says that if c_{r+1}=0 then the vectors f_1, f_2, ..., f_r are linearly dependent, which would be a contradiction. How can this be true? If these form the basis for L and we set their combination equal to zero, then all the constants must equal zero, hence they will be linearly independent. I checked the errata and there was no mention of this.
Any help will be appreciated, thanks guys.
Read this line again: f_{r+1} is not included in the basis for L:
Locrianz said:
where the c terms are constants and f_{r+1} is an additional vector from K added to the basis of L.
 
Yes, but he says that if you set f_{r+1}'s constant equal to zero then the remaining vectors would be linearly dependent? If you set this constant c_{r+1} equal to zero, wouldn't f_{r+1} disappear, since its coefficient is now zero, and you are just left with the basis for L set equal to zero? Maybe I am missing something, but that's the way I see it...
 
Locrianz said:
Yes, but he says that if you set f_{r+1}'s constant equal to zero then the remaining vectors would be linearly dependent? If you set this constant c_{r+1} equal to zero, wouldn't f_{r+1} disappear, since its coefficient is now zero, and you are just left with the basis for L set equal to zero? Maybe I am missing something, but that's the way I see it...
Your description is a bit hard to read; some details are missing, so maybe I assumed some things that weren't stated. I don't see how the first statement is true with the information you gave (just pick a vector of L in K), so there is something missing. Or I am just dumb and missing something you wrote.
 
I think it may just be an error; the book was translated from Russian, so maybe that's an issue. He usually doesn't give much detail, but here he is trying to construct a basis for the vector space K from the basis of the subspace L by adding vectors that are not in the span of the basis of L. He uses the reasoning above as a step in his proof, but no matter how I think of it, it doesn't make sense.
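(As an aside, here is a small numerical sketch of that basis-extension idea, my own illustration rather than anything from the book; the helper name extend_to_basis is made up for this example.)

import numpy as np

def extend_to_basis(subspace_basis, candidates):
    # Greedily extend an independent set: keep a candidate only if it raises the rank,
    # i.e. only if it is not already in the span of the vectors collected so far.
    basis = list(subspace_basis)
    for v in candidates:
        if np.linalg.matrix_rank(np.column_stack(basis + [v])) > len(basis):
            basis.append(v)
    return basis

# L is the plane spanned by f1 and f2 inside K = R^3; try to extend with the standard basis.
f1 = np.array([1.0, 0.0, 0.0])
f2 = np.array([0.0, 1.0, 0.0])
e = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]

full_basis = extend_to_basis([f1, f2], e)
print(len(full_basis))  # 3: only e3 is added, since e1 and e2 already lie in L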
 
Is he trying to show that \{f_1,...,f_{r+1}\} is linearly independent?

In that case he supposes that

c_1f_1+...+c_rf_r+c_{r+1}f_{r+1}=0

and that not all c_k are 0. The aim is to get a contradiction.

Now suppose that c_{r+1}=0. Then

c_1f_1+...+c_rf_r=0

Since \{f_1,...,f_r\} forms a basis, we see that c_1=...=c_r=0, so all the c_k are 0, which is a contradiction.
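
(To spell out both cases in this notation; the c_{r+1} \neq 0 case below is my reconstruction of what the book presumably does, not a quote from the text.)

Suppose c_1f_1+...+c_rf_r+c_{r+1}f_{r+1}=0 with not all c_k equal to 0.

Case 1: c_{r+1} \neq 0. Then

f_{r+1} = -\frac{1}{c_{r+1}}(c_1f_1+...+c_rf_r)

so f_{r+1} would lie in L, contradicting the fact that f_{r+1} was chosen from K outside L (this seems to be the case the text summarizes by saying L would be K).

Case 2: c_{r+1} = 0. Then some c_k with k \leq r is nonzero and

c_1f_1+...+c_rf_r=0

so f_1,...,f_r would be linearly dependent, contradicting the fact that they form a basis of L.

Both cases are impossible, so all the c_k are 0 and \{f_1,...,f_{r+1}\} is linearly independent.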
 
Ah yes, that would make sense! You're a genius! He didn't mention explicitly that we should assume that not all the coefficients are zero, but it's implicit there since he draws a contradiction, and your explanation is the only way this makes sense! Thanks a lot! I can't believe you figured that out with so little information! Cheers.
 
