# Homework Help: Linear combinations of 3 non-parallel vectors in 2D Euc.-space

1. Sep 11, 2014

### Alephu5

1. The problem statement, all variables and given/known data
Suppose $\mathbf{u,v,w} \in {\rm I\!R}^2$ are noncollinear points, and let $\mathbf{x} \in {\rm I\!R}^2$.

Show that we can write $\mathbf{x}$ uniquely in the form $\mathbf{x} = r\mathbf{u} + s\mathbf{v} + t\mathbf{w}$, where $r + s + t = 1$.

2. Relevant equations
Suppose $\mathbf{a,b} \in {\rm I\!R}^2$ are non-parallel.
Where $$A = \begin{bmatrix} a_1 & b_1 \\ a_2 & b_2 \end{bmatrix}$$
We know $A^{-1}$ exists because $\mathbf{a}$ and $\mathbf{b}$ are non-parallel.

If $\mathbf{c} = A^{-1}\mathbf{x} = \begin{bmatrix} c_1 \\ c_2\end{bmatrix}$ for some $\mathbf{x} \in {\rm I\!R}^2$ then $\mathbf{x} = A\mathbf{c} = c_1\mathbf{a} + c_2\mathbf{b}$.

Thus any $\mathbf{x}$ can be represented as a unique linear combination of $\mathbf{a}$ and $\mathbf{b}$.
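The theorem above can be sketched numerically. This is a quick NumPy check (not part of the original post); the specific vectors $\mathbf{a}, \mathbf{b}, \mathbf{x}$ are arbitrary examples:

```python
# Numerical check: any x in R^2 is a unique linear combination of two
# non-parallel vectors a and b, with coefficients c = A^{-1} x.
# The vectors below are arbitrary examples, not from the problem statement.
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, -1.0])
x = np.array([4.0, 5.0])

A = np.column_stack([a, b])          # A = [a | b], invertible since a, b non-parallel
c = np.linalg.solve(A, x)            # c = A^{-1} x
reconstructed = c[0] * a + c[1] * b  # x = c1*a + c2*b

assert np.allclose(reconstructed, x)
```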

3. The attempt at a solution
$\mathbf{u-w}$ and $\mathbf{v-w}$ must be non-parallel, so let $\mathbf{a} = \mathbf{u-w}$ and $\mathbf{b} = \mathbf{v-w}$ (substituting into the theorem above) so that $$\mathbf{x} = c_1\mathbf{u} + c_2\mathbf{v} - (c_1 + c_2)\mathbf{w}.$$

But substituting $r,s$ and $t$ from the hypothesis of the question gives $$r + s = -t$$
or
$$r + s +t = 0.$$

I have been wondering whether there is an important distinction between 'noncollinear points' and the vectors from the origin to those points. If we don't allow ourselves to treat the points as vectors, then it only makes sense to talk about linear combinations of differences, i.e. $c_1(\mathbf{u-w}) + c_2(\mathbf{v-w})$, but not $c_1\mathbf{u} - c_2\mathbf{w}$ or $c\mathbf{v}$ (what is a scalar multiple of a point?). Under that interpretation my approach no longer applies to this problem, but then the problem itself would not make sense either.

So by treating the points as heads of vectors I must be working with the author's intended mindset. I have been fiddling with this for over an hour and cannot see what I'm doing wrong. Can anyone help?

Last edited: Sep 11, 2014
2. Sep 11, 2014

### Staff: Mentor

Why?
Let u=(1,0), w=(0,-1), v=(2,1). No pair of them is parallel, but u-w = (1,1) and v-w = (2,2) are parallel.

I don't understand your argument afterwards.

3. Sep 11, 2014

### D_Tr

In $\mathbb{R}^3$, pick any 3 non-collinear points (position vectors, taking the origin to be $[0, 0, 0]$) on the plane $z = 1$. These three points will be of the form $[x, y, 1]$. Every point (position vector) on the plane can be written uniquely as a linear combination of the position vectors of these 3 points (since they form a basis of $\mathbb{R}^3$), and the sum of the coefficients will be one (see the plane equation in vector form). So to each vector of the form $[x, y, 1]$ we have mapped exactly one triplet $[r, s, t]$ with $r+s+t=1$. If you just ignore the third component of the vectors, the mapping is still valid for the first two components, so we have the unique combination we want for any vector $\mathbf{x}$. I am sure someone more experienced could write this rigorously, but I think the idea is valid (map your 2D situation to 3D, then map back to 2D).
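The lifting idea above can be checked numerically with NumPy (a sketch, not from the original post; the points chosen are arbitrary non-collinear examples):

```python
# Sketch of the 3-D embedding idea: lift u, v, w and x to the plane z = 1,
# solve the resulting 3x3 system, and check the coefficients sum to 1.
# The points below are arbitrary non-collinear examples.
import numpy as np

u, v, w = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])
x = np.array([0.3, 0.5])

# Columns are the lifted position vectors [u_1, u_2, 1], etc.
M = np.column_stack([np.append(u, 1.0), np.append(v, 1.0), np.append(w, 1.0)])
r, s, t = np.linalg.solve(M, np.append(x, 1.0))

assert np.isclose(r + s + t, 1.0)
assert np.allclose(r * u + s * v + t * w, x)
```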

@mfb: I think you cannot use those 3 as an example because they represent collinear points. If three points are non-collinear, they form a triangle, so the sides of the triangle, which represent the differences, have different directions.

Last edited: Sep 11, 2014
4. Sep 11, 2014

### Staff: Mentor

Oops, you are right. I first had different points in mind, "fixed" them to get the parallel results and then forgot about the collinearity.

A very nice solution with the third dimension.

5. Sep 12, 2014

### Alephu5

@D_Tr: Thank you for the elegant demonstration of this fact. I knew I could demonstrate it by thinking about the lines through the heads of $\mathbf{u}$ and $\mathbf{w}$, and of $\mathbf{v}$ and $\mathbf{w}$. Explicitly these are $\mathbf{w} + r\mathbf{(u-w)}$ and $\mathbf{w} + s\mathbf{(v-w)}$.

Having worked with these expressions before, I have seen that they span the plane with coefficients $r, s, 1-(r+s)$, but I was really trying to understand how I was able to derive a set of coefficients that add up to 0 if $r, s$ and $t$ really are unique.

My error was that I falsely believed I was being asked:
Show that there is a unique linear combination of $\mathbf{u,v,w}$ for any given $\mathbf{x}$. Moreover show that these add to 1.

Which would incorrectly rule out the possibility of there being a linear combination with coefficients that add up to something other than one.

Consider the following:
Having shown that there exists a linear combination whose coefficients add up to 1, we can find scalars $\{r,s,t\}$ and $\{r',s',t'\}$ such that
$$\mathbf{x} = r\mathbf{u} + s\mathbf{v} + t\mathbf{w} \\ \mathbf{x} - c\mathbf{u} = r'\mathbf{u} + s'\mathbf{v} + t'\mathbf{w}$$
with $r + s + t = 1$ and $r' + s' + t' = 1$.
It follows that
$$\mathbf{x} = (r'+c)\mathbf{u} + s'\mathbf{v} + t'\mathbf{w}$$
with $$(r'+c) + s' + t' = 1 + c,$$
so we can also find linear combinations with coefficients adding to $1+c$. Taking $c = -1$ gives my case, where $$r + s + t = 0.$$
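The shift argument can be illustrated numerically (a NumPy sketch, not part of the original post; the points and $c = -1$ are arbitrary choices):

```python
# Illustration of the shift argument: solve for x - c*u with coefficients
# summing to 1, then move c back onto the u-coefficient to get a
# combination of x whose coefficients sum to 1 + c.
# The points below are arbitrary non-collinear examples.
import numpy as np

u, v, w = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
x = np.array([0.3, 0.5])
c = -1.0  # the case r + s + t = 0 from the original attempt

# Solve for x - c*u with coefficients summing to 1 (the established result).
M = np.column_stack([np.append(u, 1.0), np.append(v, 1.0), np.append(w, 1.0)])
rp, sp, tp = np.linalg.solve(M, np.append(x - c * u, 1.0))

# Shift the u-coefficient: x = (r'+c)u + s'v + t'w, coefficients sum to 1 + c.
r, s, t = rp + c, sp, tp
assert np.allclose(r * u + s * v + t * w, x)
assert np.isclose(r + s + t, 1.0 + c)
```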

6. Sep 12, 2014

### D_Tr

For the case $r+s+t=0$, however, the combination is not unique. The three vectors in 3-D space I used in my post become coplanar if the plane passes through the origin, so they are no longer linearly independent in 3-D space. This can also be demonstrated in matrix form. The problem we want to solve is essentially this:
$$\begin{bmatrix} u_1 & v_1 & w_1 \\[0.1em] u_2 & v_2 & w_2 \\[0.1em] 1 & 1 & 1 \end{bmatrix} \begin{bmatrix} r \\[0.1em] s \\[0.1em] t \end{bmatrix}= \begin{bmatrix} x_1 \\[0.1em] x_2 \\[0.1em] 1 \end{bmatrix}$$
The rank of this matrix will be 3 if $\mathbf{u, v, w}$ are not collinear, but would become 2 if the last row became all zeroes, so there would no longer be a unique solution.
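The rank claim can be checked quickly with NumPy (a sketch, not part of the original post; the example points are arbitrary):

```python
# Rank of the 3x3 coefficient matrix: 3 for non-collinear u, v, w,
# dropping to 2 when the points are collinear.
# The example points below are arbitrary.
import numpy as np

def coeff_matrix(u, v, w):
    """Build the matrix with columns [u_1, u_2, 1], [v_1, v_2, 1], [w_1, w_2, 1]."""
    return np.column_stack([np.append(u, 1.0),
                            np.append(v, 1.0),
                            np.append(w, 1.0)])

noncollinear = coeff_matrix(np.array([1.0, 0.0]),
                            np.array([0.0, 1.0]),
                            np.array([1.0, 1.0]))
collinear = coeff_matrix(np.array([0.0, 0.0]),
                         np.array([1.0, 1.0]),
                         np.array([2.0, 2.0]))

assert np.linalg.matrix_rank(noncollinear) == 3
assert np.linalg.matrix_rank(collinear) == 2
```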

7. Sep 12, 2014

### Staff: Mentor

Just the 1 on the right-hand side should become zero; the matrix stays the same, so you keep the unique solutions.

If there were two ways to express an arbitrary vector with coefficients satisfying $r+s+t=0$, then you could express the zero vector as the difference between the two, and add it to any solution with a fixed $r+s+t$, contradicting the uniqueness property shown earlier.

8. Sep 12, 2014

### D_Tr

Sorry Alephu5 and mfb, your explanations make perfect sense! I should be more careful :)