Linear algebra proof 
#1
Dec 26, 2012, 08:49 PM

P: 232

1. The problem statement, all variables and given/known data
Let E_{1} = (1, 0, ... ,0), E_{2} = (0, 1, 0, ... ,0), ... , E_{n} = (0, ... ,0, 1) be the standard unit vectors of R^{n}. Let x_{1} ... ,x_{n} be numbers. Show that if x_{1}E_{1}+...+x_{n}E_{n}=0 then x_{i}=0 for all i. 2. Relevant equations 3. The attempt at a solution Proof By contradiction Assume to the contrary that x_{1}E_{1}+...+x_{n}E_{n}=0 then x_{i}≠0 for some i. We also assume that x_{1}...x_{i1} and x_{i+1}...x_{n} are zero. Rewriting the equation we get x_{1}E_{1}+.x_{p}E_{p}+...+x_{n}E_{n}=0 where x_{p}E_{p} is a nonzero scalar. x_{p}E_{p}=x_{1}E_{1}...x_{p}E_{p1}x_{p}E_{p+1}..x_{n}E_{n}. But this leads to a contradiction since we assumed earlier that x_{1}...x_{i1} and x_{i+1}...x_{n} are zero. Thus x_{1}E_{1}+...+x_{n}E_{n}=0 x_{i}=0 for all i. Let me know where my proof begins to fall apart? And how do I go about it? 


#3
Dec 26, 2012, 08:58 PM

Mentor
P: 18,345

Hint: making subscripts like you did in your post is very time-intensive. It is much easier for you to learn LaTeX. I will rewrite the first part of your post using LaTeX here; just push on "QUOTE" to see what I did.
Also see http://www.physicsforums.com/showpos...17&postcount=3 for a guide.

1. The problem statement, all variables and given/known data
Let [itex]E_1 = (1, 0, ... ,0), E_2 = (0, 1, 0, ... ,0), ... , E_n = (0, ... ,0, 1)[/itex] be the standard unit vectors of [itex]R^n[/itex]. Let [itex]x_1,...,x_n[/itex] be numbers. Show that if [tex]x_1E_1 + ... + x_nE_n=0[/tex] then [itex]x_i=0[/itex] for all i.


#4
Dec 26, 2012, 09:32 PM

P: 232

Linear algebra proof
Oh, thanks. Yeah, my proof is bad; I just realized I used the conclusion as my assumption.
I don't think I even need to use a proof by contradiction. Isn't it just obvious that if one of the scalars is nonzero then the sum is not the zero vector? Wouldn't that suffice as my proof?


#5
Dec 26, 2012, 09:36 PM

Mentor
P: 18,345

If not, try to calculate [tex]x_1E_1+...+x_nE_n.[/tex] By definition, we know that [itex]E_1=(1,0,...,0)[/itex]. So what is [itex]x_1E_1[/itex]? What is [itex]x_2E_2[/itex]? What happens if you add them?


#6
Dec 26, 2012, 09:43 PM

P: 232

Oh, x1E1=(x1,0,...,0), x2E2=(0,x2,0,...,0), all the way to xnEn=(0,...,0,xn). So by adding them together you get (x1,x2,...,xn). And the only way to get the zero vector is when x1, ..., xn are all zero? Would that be a way to explain it?
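The componentwise computation above can also be checked with a quick numerical sketch (plain Python; the function names are my own illustration, not from the thread):

```python
def standard_basis(n):
    # E_i has a 1 in coordinate i and 0 everywhere else
    return [[1 if j == i else 0 for j in range(n)] for i in range(n)]

def linear_combination(coeffs, vectors):
    # Compute x_1*E_1 + ... + x_n*E_n coordinate by coordinate
    n = len(vectors[0])
    return [sum(x * v[j] for x, v in zip(coeffs, vectors)) for j in range(n)]

x = [3, -1, 0, 2]
E = standard_basis(len(x))
print(linear_combination(x, E))  # [3, -1, 0, 2] -- the coefficients come straight back
```

The combination reproduces the coefficient list itself, so it is the zero vector exactly when every coefficient is zero.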



#8
Dec 26, 2012, 09:52 PM

P: 232

Alright, thanks.



#9
Dec 27, 2012, 12:10 AM

P: 428

Think about what relations the basis vectors satisfy; if you notice the right thing, the proof is pretty swift.



#10
Dec 27, 2012, 12:13 AM

P: 428

The problem with the above proof is that it doesn't seem to use the fact that the basis is orthonormal. You could potentially "prove something false".



#12
Dec 28, 2012, 02:04 AM

P: 428

So I was just guessing that it relied on some qualities of the basis vectors, but maybe the real mistake would be not referring to the fact that they are linearly independent. We do have to mention that, right? But perhaps orthogonality was not relevant, as you say.
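To summarize the direct argument from earlier in the thread (a sketch; nothing beyond the definition of the [itex]E_i[/itex] is needed):

[tex]x_1E_1 + x_2E_2 + \cdots + x_nE_n = (x_1, x_2, \ldots, x_n)[/tex]

so if this sum equals the zero vector, comparing the i-th coordinate on both sides gives [itex]x_i = 0[/itex] for every i. Note that this conclusion is exactly the statement that [itex]E_1, \ldots, E_n[/itex] are linearly independent, so we cannot cite linear independence in the proof without being circular: it is what the problem asks us to establish.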

