# Independently & Linearly Confused!

Hi, first off - go easy on me, I'm only learning. My book is talking about linear independence.

As I understand the concept, it means that a vector has to point in a different direction from the vector it's compared with, i.e. the two are non-collinear.

Mathematically, my book has defined linearly independent vectors as:

$$a \overline{u} + b \overline{v} = 0 \quad \text{with} \quad a = 0 \ \text{and} \ b = 0$$

The book says that linearly dependent vectors are those for which the scalars a & b in the above are not both equal to zero.

The reason is that, if either scalar were non-zero, you could solve for one of the vectors and describe it as a scalar multiple of the other.

Okay, that seems about right.
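To make the "scalar multiple" idea concrete, here is a minimal sketch in Python (my own illustration, not from the book): in 2D, two vectors are collinear, hence linearly dependent, exactly when the determinant u[0]*v[1] - u[1]*v[0] is zero.

```python
def linearly_independent_2d(u, v):
    """In 2D, u and v are independent iff the determinant
    u[0]*v[1] - u[1]*v[0] is nonzero, i.e. they are not collinear."""
    return u[0] * v[1] - u[1] * v[0] != 0

print(linearly_independent_2d((1, 0), (0, 1)))  # True: different directions
print(linearly_independent_2d((1, 2), (4, 8)))  # False: (4,8) = 4*(1,2)
```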

My problem comes from the following;

Assume $$\overline{u}$$ and $$\overline{v}$$ to be linearly independent:

$$a \overline{u} \ + \ b \overline{v} \ = \ \alpha \overline{u} \ + \ \beta \overline{v}$$

$$( \ a \ - \ \alpha \ ) \overline{u} \ + \ ( \ b \ - \ \beta \ ) \overline{v} \ = \ 0$$

This must imply that a-α=0 & that b-β=0.
This must in turn imply that a=α & that b=β.

But shouldn't a & α, b & β, all have to be equal to zero anyway?

Isn't the above saying that zero minus zero equals zero?

If the scalar coefficients are not all equal to zero then the vectors must be linearly dependent. I get the feeling that we are trying to define a way to say that you can give a vector non-zero coefficients if you set it equal to itself & reference it to itself. But if you do this, aren't you defining linearly dependent vectors simply by the fact that the scalars are non-zero?

What am I not realizing?

Please go easy, I don't know anything about basis or anything as the book is trying to define the concept using the above principle.

## Answers and Replies

dacruick
If 0 is the only solution to the equation, that means that they are linearly independent. It means that there are no other a and b values which can make the combination equal to 0.

The thing is, a = b = 0 is always a solution to that equation, but if 0 is the only solution then they are independent. For example, the vectors {1,0} and {0,1} are always linearly independent: their directions are separated by 90 degrees, so they do not share similar directional components.
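One way to see "zero is always a solution, but independence means it's the only one" is to brute-force small integer scalar pairs. This is a hypothetical check of my own, not a general algorithm:

```python
def zero_combinations(u, v, r=3):
    """Return all integer pairs (a, b) in [-r, r] with a*u + b*v = 0."""
    hits = []
    for a in range(-r, r + 1):
        for b in range(-r, r + 1):
            if a * u[0] + b * v[0] == 0 and a * u[1] + b * v[1] == 0:
                hits.append((a, b))
    return hits

print(zero_combinations((1, 0), (0, 1)))  # [(0, 0)] -> independent
print(zero_combinations((1, 2), (2, 4)))  # extra pairs -> dependent
```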

I'm not sure if I've answered your question, but if I haven't please tell me.

I intuitively understand what you mean by that. I understand that if zero is the only solution then they are independent. I just don't rigorously get what's going on.

It might help if I give the next sentence in the book, as I do not understand it.

An expression $$a \overline{u} \ + \ b \overline{v}$$ is called a linear combination of $$\overline{u} \ , \ \overline{v}$$. We say that $$\overline{u} \ , \ \overline{v}$$ form a basis if every vector $$\overline{w}$$ in the plane can be expressed as a linear combination of $$\overline{u} \ , \ \overline{v}$$ . Two linearly dependent vectors $$\overline{u} \ , \ \overline{v}$$ can never form a basis.

I don't understand how you can add scalars to vectors and not have them be linearly dependent, going off the information in the original post anyway...

This will lead to proving this stuff; I've never done that, so I'd like to consolidate my knowledge & iron out the kinks (of which there are many).

edit: Why does setting the equation equal to itself have to be done (albeit with Greek-lettered versions :p)? It seems as though you are doing this to find an escape from the whole not-having-scalars-equal-to-anything-but-zero thing, but if the scalars are anything but zero aren't they automatically linearly dependent?

dacruick
Once again, if the only solution to the equation brings the scalars to 0, the vectors are automatically independent. If there is a solution where the scalars are not 0, they are linearly dependent. You might be looking too hard at this; it seems like you understand. What they are doing here is setting a basis (pun intended) for you moving to three-space. They give you the concept, which is somewhat trivial in two dimensions, and they will build upon that.

And I don't quite understand fully what they are doing with the Greek letter thing. I actually despise linear algebra; I think it is ill explained. Linear algebra teachers don't give examples, I don't understand it... anyways.

What you will have to do soon is take a matrix of 3 or 4 vectors and determine whether they are linearly dependent and whether they form a basis, and so forth. For example, if you have {1,0,0}, {0,1,0}, and {0,0,1}, that forms a basis for 3 dimensions: every vector in 3-space can be written as scalar multiples of those 3 vectors added together.
{0,1,0}, {0,0,2}, and {0,0,4} do not form a basis for 3-space; together they only span the y-z plane. This is because the last two vectors are linearly dependent ({0,0,4} is twice {0,0,2}), so the set can be simplified to {0,1,0} and the rescaled vector {0,0,1}. Notice that with the two vectors {0,0,1} and {0,1,0} you can form any vector that lies in the y-z plane, because the x value will always be 0.
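A quick way to check whether three 3D vectors are independent, and so form a basis of 3-space, is the 3x3 determinant. This determinant test is my own addition here; the book may introduce it later:

```python
def det3(u, v, w):
    """3x3 determinant of the rows u, v, w; nonzero iff the three
    vectors are linearly independent, i.e. form a basis of 3-space."""
    return (u[0] * (v[1] * w[2] - v[2] * w[1])
            - u[1] * (v[0] * w[2] - v[2] * w[0])
            + u[2] * (v[0] * w[1] - v[1] * w[0]))

print(det3((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # 1 -> basis of 3-space
print(det3((0, 1, 0), (0, 0, 2), (0, 0, 4)))  # 0 -> dependent set
```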

I think you understand the concepts, but I also think that it is important for you at this point to just think about these ideas and to talk about them. So I encourage you to keep this thread going and ask questions. I myself have done the exact opposite of what I'm telling you to do now: I had a crappy linear algebra teacher in first year university, then I just basically shoved it off and got mediocre marks. Now I have a poor foundation and it's costing me time in my schooling. So I wouldn't mind continuing to talk about this with you just so I can grasp it a little better.

And I don't think we are adding scalars to vectors here; we are multiplying vectors by scalars, then adding the vectors. An example of linearly dependent vectors in 2 dimensions is {1,2} and {4,8}, because they are parallel. I might be wrong here, but I think that any two vectors in 2-space that are not parallel are linearly independent.

Hey listen, thanks a lot for the response. I too have studied some linear algebra before but I quit simply because I was studying material of which I hadn't a clue. No good examples either (yeah I noticed that problem too unfortunately) so I took a break & am re-attempting it now.

Yeah I do mean multiplication by a scalar, sorry about the lingo there.

I think the book has just confused me. It's talking about vectors as u or v and not mentioning any components. Then I think "well, how do you know if a component is in the same plane as that of the other vector?". I am fine with dealing with vectors when they're specified, i.e. by using unit vectors or x's and y's to signify which dimension you're in. The book hasn't mentioned these things yet & it doesn't seem like it will in the near future.

I've skipped ahead and they've given an example that kind of clears things up, I'll give it and ask a question or two;

Let $$\overline{u}$$ and $$\overline{v}$$ be linearly independent vectors. Show that:

$$\overline{w} \ = \ 3 \overline{u} \ + \ \overline{v} ,$$
$$\overline{z} \ = \ 2 \overline{u} \ - \ \overline{v} ,$$

also form a basis.

Before I give the answer I'll say what I'm thinking.

1. To test these equations, you have to put the vectors into the form $$a \overline{u} \ + \ b \overline{v} \ = 0$$ and show a & b must be zero. Is this what you would actually do? Like, is this the method to test for linear independence, end of!?

2. By a basis, do they mean vectors that are not collinear? Like, every time the word basis is mentioned, you're talking about vectors that can be separated by an angle of even 1 degree & it's still a basis. A basis is not exclusively like the 90 degree angle between the x & y axis directions (or i & j axes etc...).

3. Is a basis like a plane? Like, I don't understand - is it the x & y plane, the x, y & z space, etc...? Like, is it the n-tuple plane or something...?
What I mean is, [cue authoritative voice] "when dealing with vectors u & v that have two components, the basis vectors u & v span the whole x & y plane of real numbers" <---is that a logical and true statement or am I confused?

4. Do you see the way they've pre-specified that the vectors are linearly independent?
The way of determining whether these initial vectors are linearly independent rests upon their components, right? Like, you can determine it geometrically via a picture (obviously) or by them having certain properties in their components, like in your examples of unit vectors.

Anyway, I'll give the answer now. We must show that w & z are also linearly independent.

$$a \overline{w} + b \ \overline{z} \ = \ 0$$

$$a ( 3 \overline{u} \ + \ \overline{v} ) \ + b ( 2 \overline{u} \ - \ \overline{v} ) \ = \ 0$$

$$3a \overline{u} \ + \ a \overline{v} \ + \ 2b \overline{u} \ - \ b \overline{v} \ = \ 0$$

$$(3a \ + \ 2b) \overline{u} \ + \ (a \ - \ b) \overline{v} \ = \ 0$$

We know that u & v are linearly independent, so each coefficient must be zero, giving a system of two equations in two unknowns: 3a + 2b = 0 and a - b = 0.

If we solve by any of the many methods, we'll find that a & b are both zero.
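The system 3a + 2b = 0, a - b = 0 can be checked with a small sketch using Cramer's rule. This is my own illustration of one of the "many methods", not the book's:

```python
def solve_2x2(a11, a12, a21, a22, r1, r2):
    """Cramer's rule for the system a11*x + a12*y = r1, a21*x + a22*y = r2."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        return None  # no unique solution
    return ((r1 * a22 - a12 * r2) / det, (a11 * r2 - r1 * a21) / det)

# The system from the example: 3a + 2b = 0 and a - b = 0.
a, b = solve_2x2(3, 2, 1, -1, 0, 0)
print(a == 0 and b == 0)  # True: only the trivial solution
```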

To me, that makes sense. If that is what needs to be done then everything is fine.

I think that in the above 4 questions, if you can clear up my crazy thoughts illustrated there I'll be able to move on to greener pastures.

dacruick
You've said a lot of right things here. Whether or not vectors are linearly dependent is based on their components; since they have not given you the components, they have to tell you whether they are dependent or independent.

1) Yes, that is the way you test.

2) and 3): You're right, a basis can span a plane, a 3-space, a 4-space, it doesn't matter. When I think about a basis, I think about any two vectors in 3-space. A plane is defined by two vectors, right? So those two vectors (if they aren't parallel) will always form a basis for a plane in 3-space. And in 4-space, 3 vectors span what is called a hyperplane, I believe, and so on.

And I think I may have answered question 4.

Another question I'd like to present to you is something that I just thought about and am not positive about. It seems like if you have vectors made from two linearly independent vectors, such as your w-bar and z-bar, those two vectors will be linearly independent as well, unless of course they are the same (or one is a scalar multiple of the other).
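For what it's worth, whether combinations like w = 3u + v and z = 2u - v stay independent comes down to the 2x2 determinant of their coefficients in the basis {u, v}. This sketch is my own, and it shows that dependence can occur even when z is not equal to w, e.g. z = 2w:

```python
def coeff_det(w_coeffs, z_coeffs):
    """Determinant of the coefficients of w and z in the basis {u, v};
    nonzero iff w and z are themselves linearly independent."""
    return w_coeffs[0] * z_coeffs[1] - w_coeffs[1] * z_coeffs[0]

print(coeff_det((3, 1), (2, -1)))  # -5 -> w = 3u + v, z = 2u - v independent
print(coeff_det((1, 1), (2, 2)))   # 0 -> z = 2u + 2v = 2w is dependent, yet z != w
```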

Woo!!! That's great. Thanks a lot for sticking with me through this.

I think one of the biggest problems I encounter with math is just the terminology; it can be so vague sometimes. I always feel my intuition is right, but then I'll see some concept I thought I understood being applied in such a different way from what I had assumed & get all flustered... :tongue2:

As for the question you put to me, that is what the question I typed out (albeit stretched out throughout my response) was proving. Vectors composed of vectors composed of vectors, scary stuff. I sincerely know that setting the vectors (3u + v) & (2u - v) equal to w & z respectively will be the biggest source of confusion for me from now on when trying to prove anything. Well, we'll see how things go.

Again, thanks a lot for giving your time. I may be back pending any problems with related material. Your patience is a credit to physicsforums, dacruick.
Right on. If you have any questions, send me a personal message or let me know the post that you put up and I'll work through them with you. It's been a while since I've done any linear algebra and I have quantum mechanics coming up.

I'd say these videos would help you out a lot. I remember watching them way back when, & Susskind spends the first two lectures (at least) of the top link just setting up the linear algebra required for the following QM.

I'm pretty sure all that's required is multivariable calc and a little linear algebra; he builds the rest up from there. Hopefully they may help somewhat...

Thanks for the offer, I very may well take you up on it someday ;)

dacruick
Thanks for the links, I'll check them out today.