# A few questions from definitions

As is often the case with mathematics, definitions are usually NOT unique.

Take for example the dot or scalar product.

Two nice, and perfectly reasonable, ways to define the dot product in two-dimensional space are as follows.
Let A=(a1, a2) and B=(b1, b2), and let ω be the angle between the two vectors.

1) A.B=|A||B|cos(ω)
2) A.B=a1*b1+a2*b2

Now, if we accept one of these definitions as 'law' then it should be possible to deduce the other definition and vice versa.

Let us take 2) as our definition.

Then it must be possible to deduce |A||B|cos(ω)=a1*b1+a2*b2.

I have tried doing this by setting |A|=(a1^2+a2^2)^0.5, but then you get stuck with cos(ω), and I cannot see a way of relating cos(ω) to the components of A and B without using the definition I'm trying to deduce!

Both definitions of the dot product are used happily in n-dimensional space. I find this quite unsettling. It is possible to prove geometrically that 1) and 2) are equal in 2D, and possible, albeit slightly harder, to prove equality in 3D as well. When we extend this theorem to 4D there seems to be no way of proving equality, so how is it that it is used so freely in n-dimensional space? There isn't even a concept of cos(ω) in 4D and beyond. Does anyone else have these issues, or is it just me not understanding something fundamental?

I have another question that is kind of unrelated. Rather than posting a new thread I shall just make an unusually large post.

Ellipses, hyperbolas and parabolas are all cases of conic sections: take a plane and intersect it with a cone, and the different possibilities for this intersection give rise to the different curves. My question is: taking the definition of these curves to be slices through a cone, how does one deduce the general equations of the curves? It must be possible to deduce, for example, the equation of an ellipse from this method. It must also give rise to the eccentricity relationship between the shapes (e.g. a circle has eccentricity 0), a relationship which I have never truly understood. I have tried to do this and have failed to make any real progress. I'd appreciate it if someone gave me a helping hand!

I apologise for the essay ^^

You may not find this satisfactory, but once the dot product is defined, the angle between two vectors a and b is usually defined to be $\arccos\left(\frac{a \bullet b}{\left| a \right| \left| b \right|}\right),$ which should seem reasonable from the fact that this can be proved geometrically in two and three dimensions. One benefit of this definition of the angle between vectors is that it also makes sense for the more general definition of a dot product. In general, if we have a rule $\bullet$ which assigns exactly one number to each pair of vectors (that number being denoted $a \bullet b$ for vectors a and b), and if that rule satisfies the following for all vectors a, b, and c and all numbers r:

1) $(a+c) \bullet b = (a \bullet b) + (c \bullet b)$ and $a \bullet (b+c) = (a \bullet b) + (a \bullet c)$

2) $(ra) \bullet b = a \bullet (rb) = r(a \bullet b)$

3) $a \bullet b = b \bullet a$ and

4) $a \bullet a > 0$ if $a \neq 0$ and $0 \bullet 0 = 0,$

then we call $\bullet$ a dot product, and we define the angle between vectors as above.
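To make this concrete, here is a quick numerical sketch (Python, assuming the standard componentwise product on ℝ^n; the helper names are my own) checking the four properties on random vectors:

```python
import random

def dot(a, b):
    # the standard componentwise dot product on R^n
    return sum(x * y for x, y in zip(a, b))

def add(u, v):
    return [x + y for x, y in zip(u, v)]

def scale(r, u):
    return [r * x for x in u]

random.seed(0)
n = 5
a = [random.uniform(-10, 10) for _ in range(n)]
b = [random.uniform(-10, 10) for _ in range(n)]
c = [random.uniform(-10, 10) for _ in range(n)]
r = random.uniform(-10, 10)

# 1) additivity: (a+c).b = a.b + c.b
assert abs(dot(add(a, c), b) - (dot(a, b) + dot(c, b))) < 1e-9
# 2) homogeneity: (ra).b = a.(rb) = r(a.b)
assert abs(dot(scale(r, a), b) - r * dot(a, b)) < 1e-9
assert abs(dot(a, scale(r, b)) - r * dot(a, b)) < 1e-9
# 3) symmetry: a.b = b.a
assert abs(dot(a, b) - dot(b, a)) < 1e-9
# 4) positivity: a.a > 0 for a != 0
assert dot(a, a) > 0
```

Any rule passing checks like these qualifies as a dot product under the definition above.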

uart
Two nice, and perfectly reasonable, ways to define the dot product in two-dimensional space are as follows.
Let A=(a1, a2) and B=(b1, b2), and let ω be the angle between the two vectors.

1) A.B=|A||B|cos(ω)
2) A.B=a1*b1+a2*b2

Now, if we accept one of these definitions as 'law' then it should be possible to deduce the other definition and vice versa.
Yes, those two definitions are equivalent. This is easily proved via the cosine-difference formula: $\cos (x-y) = \cos x \cos y + \sin x \sin y$.

For convenience (to save typing) let me write A and B for the vector magnitudes, and (a1, a2) and (b1, b2) for the vector components.

Let $u$ be the angle of the first vector and $u+w$ be the angle of the second vector (so that the angle difference is $w$).

Now $a_1=A \cos u$, $a_2= A \sin u$, $b_1 = B \cos(u+w)$ and $b_2 = B \sin(u+w)$.

Forming the dot product from the second of your definitions gives,

$$\mathbf{A} \cdot \mathbf{B} = AB (\cos(u+w) \cos(u) + \sin(u+w) \sin(u)) = AB \cos(w)$$
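A quick numerical check of this identity (a Python sketch; the particular magnitudes and angles are arbitrary picks of mine):

```python
import math

A, B = 3.0, 5.0   # vector magnitudes
u, w = 0.7, 1.1   # angle of the first vector, and the angle between them

a1, a2 = A * math.cos(u), A * math.sin(u)
b1, b2 = B * math.cos(u + w), B * math.sin(u + w)

# the component form of the dot product ...
component = a1 * b1 + a2 * b2
# ... agrees with |A||B|cos(w), by the cosine-difference formula
geometric = A * B * math.cos(w)

assert abs(component - geometric) < 1e-12
```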

Thanks for your responses. They have stimulated this, which is roughly my attempt to justify extending the scalar product to n dimensional space.

From my lecture notes and laughingebony's response: the most abstract and therefore most general definition of a dot product is:

1) (a+c)∙b=(a∙b)+(c∙b) and a∙(b+c)=(a∙b)+(a∙c)

2) (ra)∙b=a∙(rb)=r(a∙b)

3) a∙b=b∙a and

4) a∙a>0 if a≠0 and 0∙0=0,

From this we can see that the standard scalar product that we know and love in 2 and 3 dimensions respects these axioms and is thus a dot product. Now this is the part where I kind of start to get lost...

We can then extend this notion of a standard inner product to n dimensions, once we have proven that
1) A.B=|A||B|cos(ω)
2) A.B=a1*b1+a2*b2
are equivalent, by using the definition given by 2) on n-tuples of numbers.

The 'angle' between these vectors of ℝ^(n) is then defined to be arccos(a∙b / (∣∣a∣∣ ∣∣b∣∣)). Right?
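In code this definition of the angle reads roughly as follows (a Python sketch; the clamp is my addition, guarding against floating-point round-off pushing the ratio just outside [-1, 1]):

```python
import math

def angle(a, b):
    # angle between two vectors of R^n, defined as arccos(a.b / (|a||b|))
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    # clamp to [-1, 1] to guard against round-off
    return math.acos(max(-1.0, min(1.0, dot / (na * nb))))

# orthogonal vectors in R^4 come out at pi/2, as we'd hope
print(angle([1, 0, 0, 0], [0, 1, 0, 0]))  # -> 1.5707963... (pi/2)
```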

I've found a book I've had for a while with a good section on conics; however, my question still stands. I'd love to see an algebraic deduction of the equations of ellipses, parabolas and hyperbolas from conic sections.

It has occurred to me that my question about conics needs some refining.

Firstly, conic sections such as the hyperbola are really just projections of a circle onto a plane.

If we start with a circle (C) and project it from a point O (place O above the centre of C for convenience, although this is not necessary to generate the conic sections) then the resulting shape is a double cone with their tips touching and extending infinitely in both directions. Now a conic section is the intersection of a plane with this double cone. Thus as the double cone is a projection of a circle the whole problem boils down to projections of a circle onto a plane.
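As a concrete sketch of this (Python; I've chosen the simplest double cone z² = x² + y² and a slicing plane z = mx + k, both my own picks), substituting the plane into the cone gives a quadratic curve in x and y, and sampled intersection points do satisfy it:

```python
import math

# Double cone z^2 = x^2 + y^2 (a circle projected from the origin),
# sliced by the plane z = m*x + k. Substituting the plane into the cone
# gives the plane curve
#   (1 - m**2) * x**2 + y**2 - 2*m*k*x - k**2 = 0
# (the xy "shadow" of the intersection): |m| < 1 gives an ellipse,
# |m| = 1 a parabola, |m| > 1 a hyperbola.

m, k = 0.5, 2.0  # a shallow slice: expect an ellipse

def on_curve(x, y):
    return abs((1 - m * m) * x * x + y * y - 2 * m * k * x - k * k) < 1e-9

# sample intersection points: pick x, use the plane for z, the cone for y
for x in [-1.0, 0.0, 1.0, 2.0]:
    z = m * x + k
    y2 = z * z - x * x
    if y2 >= 0:
        y = math.sqrt(y2)
        assert on_curve(x, y) and on_curve(x, -y)
```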

Now, all the conic sections have equations of the form:

3) ax^2 + by^2 +cxy + dx + ey + f = 0 with a,b,c,d,e,f being elements of R.

So what my original problem amounts to is proving that any curve defined by 3) can equally be generated by one of the conic sections. I hope this clears up what I was asking.
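For what it's worth, a standard fact (degenerate cases such as point pairs and line pairs aside) is that the type of curve in 3) can be read off from the discriminant c² − 4ab. A small sketch:

```python
def conic_type(a, b, c):
    """Classify ax^2 + by^2 + cxy + dx + ey + f = 0 by its discriminant.

    Degenerate cases (a point, a pair of lines, the empty set) are
    ignored here.
    """
    disc = c * c - 4 * a * b
    if disc < 0:
        return "ellipse"   # a circle when c == 0 and a == b
    if disc == 0:
        return "parabola"
    return "hyperbola"

print(conic_type(1, 1, 0))   # x^2 + y^2 + ...        -> ellipse
print(conic_type(1, 0, 0))   # x^2 + ey + ...         -> parabola
print(conic_type(1, -1, 0))  # x^2 - y^2 + ...        -> hyperbola
```

This matches the geometric picture above: tilting the slicing plane past the slope of the cone's side is exactly what flips the sign of the discriminant.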