# Vector Analysis using Basis Vectors

1. Dec 23, 2015

### Jimmy87

Hi pf,

Having some trouble with basis vectors for expanding a given vector in 3-D space.

Any given vector in 3-D space can be written as a sum of component vectors in the form:
$$\vec{V} = \hat{e}_1 V_1 + \hat{e}_2 V_2 + \hat{e}_3 V_3 \qquad \text{(Equation 1)}$$ where $\hat{e}_1$, $\hat{e}_2$ and $\hat{e}_3$ are the same as the $\hat{i}$, $\hat{j}$ and $\hat{k}$ unit vectors.

I am happy with this.

If you want to find the coefficient V2 you can do the following:

$$\hat{e}_2 \cdot \vec{V} = (\hat{e}_i \cdot \hat{e}_2)\, V_i = \delta_{i2}\, V_i \qquad \text{(Equation 2)}$$ where $\delta_{i2}$ is the Kronecker delta and the repeated index $i$ is summed over.

When $i$ is two, $\delta_{i2}$ is one, so that term is $1 \times V_2 = V_2$; every other term in the sum vanishes.

Equation 2 is what I am not happy with. I understand the equation and I understand what the Kronecker delta is, but how would this ever help you find the coefficient $V_2$?

Say you have a vector of magnitude 5 in 2-D space, so we know the components are 4 (x component) and 3 (y component). If you didn't know the y component was 3, how would equation 2 be of any use to you? Equation 2 seems like nonsense to me. It just looks like it's saying $(\hat{e}_2 \cdot \hat{e}_2) V_2 = (1) \times V_2 = V_2$, which is obvious, but I don't see how you can use it to get $V_2$ when you don't know what $V_2$ is, which is what my book seems to be implying. Could somebody give me an example of how you would use equation 2 to find a coefficient of a vector?

Thanks in advance for any help offered.

2. Dec 23, 2015

### mathman

You can't get V2 out of equation 2. Equation 2 is a description of the relationship between the components of an arbitrary vector and the unit vectors. It doesn't tell you how to get the components unless you know them in advance.

3. Dec 23, 2015

### Jimmy87

I found this lecture which pretty much contains what it says in my book.

Look at 34 mins 30 secs. He says, "How much e1 do I need? There is a very simple trick for that," and then writes down essentially my equation 2. "How much e1 do I need" sounds to me like "what is the coefficient V1?". So what is the point of the equation he writes down (my equation 2)? How is it useful if you can't use it to calculate anything?

4. Dec 24, 2015

### mathman

I don't fully understand what he is doing. He must have some definition of this vector V to begin with.

5. Dec 30, 2015

### croad

I guess you need to think of e2 as e2 = 0e1 + 1e2 + 0e3. Now take the dot product of e2 with any other vector: you will pick out only the coefficient of e2 in that vector. Try matrix multiplication if you are familiar with it; that would help. Matrices are essentially numerical representations of vectors and get rid of the somewhat pesky basis vectors, which is useful in simple cases for visualizing the more abstract vectors.
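To make this concrete, here is a minimal sketch in Python using the 3-4-5 vector from the original question (the values are purely illustrative). Note that, as mathman said, this only works once you already have the vector's components in some representation:

```python
# Dot product with a basis vector picks out one component.
# Using the 2-D example from the question: v = 4*x-hat + 3*y-hat.

def dot(a, b):
    """Euclidean dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

v = [4.0, 3.0]       # components of v in the standard basis
e_x = [1.0, 0.0]     # x-hat, i.e. 1*e1 + 0*e2
e_y = [0.0, 1.0]     # y-hat, i.e. 0*e1 + 1*e2

v_x = dot(v, e_x)    # the zeros kill every term except the x one -> 4.0
v_y = dot(v, e_y)    # likewise, only the y term survives -> 3.0
```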

6. Dec 30, 2015

### Fightfish

The result appears "trivial" to you because you have been given a toy model for illustrative purposes. If you work purely in a single coordinate basis (and everything is only represented in that basis), then of course the orthogonality relation doesn't tell you anything new. However, vectors exist independently of their representations in a coordinate basis.
As another toy (but possibly more useful) example, let's say we have a vector with the representation $$\vec{v} \equiv v_{x} \hat{e}_{x} + v_{y}\hat{e}_{y} + v_{z} \hat{e}_{z}.$$ We might want to work in a slightly different coordinate basis, say $$\{\hat{e}_{x}' = \left(\hat{e}_{x} + \hat{e}_{y}\right)/\sqrt{2}\quad,\quad \hat{e}_{y}' = \left(\hat{e}_{x} - \hat{e}_{y}\right)/\sqrt{2}\quad,\quad \hat{e}_{z}' = \hat{e}_{z}\},$$ and the components of $\vec{v}$ in this new basis can simply be obtained as $$v_{x}' = \vec{v}\cdot \hat{e}_{x}'\quad,\quad v_{y}' = \vec{v}\cdot \hat{e}_{y}'\quad,\quad v_{z}' = \vec{v}\cdot \hat{e}_{z}'$$
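A quick numerical check of this change of basis, sketched in Python (the vector $(2, 4, 1)$ is an arbitrary choice for illustration):

```python
import math

def dot(a, b):
    """Euclidean dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

# An arbitrary vector, written in the standard basis:
v = [2.0, 4.0, 1.0]

# The rotated basis from the post above:
s = 1.0 / math.sqrt(2.0)
e_x_p = [s,  s, 0.0]     # (e_x + e_y)/sqrt(2)
e_y_p = [s, -s, 0.0]     # (e_x - e_y)/sqrt(2)
e_z_p = [0.0, 0.0, 1.0]  # e_z unchanged

# Components in the primed basis via dot products:
v_x_p = dot(v, e_x_p)    # (2 + 4)/sqrt(2) = 3*sqrt(2)
v_y_p = dot(v, e_y_p)    # (2 - 4)/sqrt(2) = -sqrt(2)
v_z_p = dot(v, e_z_p)    # 1.0

# Rebuilding v from the primed components recovers the original vector,
# confirming that the dot products really are the new components:
v_rebuilt = [v_x_p * a + v_y_p * b + v_z_p * c
             for a, b, c in zip(e_x_p, e_y_p, e_z_p)]
```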
On a more abstract level, this idea becomes much more important when dealing with more general vector spaces. Let's consider the familiar Fourier expansion for instance: $$f(x) = \frac{a_{0}}{2} + \sum_{n = 1}^{\infty} a_{n} \cos(nx) + b_{n} \sin(nx)$$ How might we recover the coefficients $a_{n}$ and $b_{n}$? Well, because $\{1,\cos(nx),\sin(nx)\}$ is an orthogonal basis with respect to integration over the periodic interval $[-\pi,\pi]$, we can apply a similar idea as before to arrive at $$a_{n} = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \cos (nx) \mathrm{d}x \qquad \qquad b_{n} = \frac{1}{\pi} \int_{-\pi}^{\pi} f(x) \sin (nx) \mathrm{d}x$$
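The same "dot with a basis element" trick can be checked numerically. A sketch in Python, using a made-up test function whose Fourier coefficients are known in advance (the midpoint rule stands in for the exact integral):

```python
import math

def f(x):
    # A test function with known Fourier coefficients:
    # a_3 = 2, b_1 = 0.5, and every other a_n, b_n (n >= 1) is zero.
    return 2.0 * math.cos(3 * x) + 0.5 * math.sin(x)

def fourier_a(n, num=4096):
    """a_n = (1/pi) * integral over [-pi, pi] of f(x) cos(nx) dx, midpoint rule."""
    h = 2.0 * math.pi / num
    total = sum(f(-math.pi + (k + 0.5) * h) * math.cos(n * (-math.pi + (k + 0.5) * h))
                for k in range(num))
    return total * h / math.pi

def fourier_b(n, num=4096):
    """b_n = (1/pi) * integral over [-pi, pi] of f(x) sin(nx) dx, midpoint rule."""
    h = 2.0 * math.pi / num
    total = sum(f(-math.pi + (k + 0.5) * h) * math.sin(n * (-math.pi + (k + 0.5) * h))
                for k in range(num))
    return total * h / math.pi
```

Projecting onto $\cos(3x)$ recovers $a_3 = 2$ and projecting onto $\sin(x)$ recovers $b_1 = 0.5$, while projections onto the other basis functions come out (numerically) zero, exactly as orthogonality predicts.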
The same concept also occurs in quantum mechanics when you decompose a wavefunction into a complete eigenbasis set.
