Help Understanding Dot Product

Opus_723
I'm learning about dot products, and I'm having a bit of trouble grasping why axbx + ayby + azbz = ab*cosΘ. I understand how it works in two dimensions, I think, but three is still fuzzy.

This is what I came up with for two dimensions. The angle between the vectors is simply the difference between the angle between each vector and the x-axis.

ab*cosΘ
= ab*cos(Θa − Θb)
= ab*(cosΘa*cosΘb + sinΘa*sinΘb)
= (a*cosΘa)(b*cosΘb) + (a*sinΘa)(b*sinΘb)
= axbx + ayby

Now I understand intuitively that the dot product rule should work in three dimensions, since you could always orient the axes so that the two vectors lie on the x-y plane, in which case the above applies. But I'd like to see an analytical proof like the one above, except including azbz. I would feel a lot better about this if I could see that.
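Not a proof, but a quick numerical sanity check (a Python sketch, not from the thread): compute the angle between two random 3D vectors using side lengths alone (Pythagoras plus the law of cosines), then confirm the component formula axbx + ayby + azbz matches ab*cosΘ.

```python
import math
import random

def dot(u, v):
    # component form: axbx + ayby + azbz
    return sum(x * y for x, y in zip(u, v))

def length(u):
    # Pythagoras in 3D; uses only squared components, no angle formula
    return math.sqrt(sum(x * x for x in u))

random.seed(0)
a = [random.uniform(-1, 1) for _ in range(3)]
b = [random.uniform(-1, 1) for _ in range(3)]
diff = [x - y for x, y in zip(a, b)]

# angle from side lengths alone (law of cosines on the triangle a, b, a-b)
cos_theta = (length(a)**2 + length(b)**2 - length(diff)**2) / (2 * length(a) * length(b))

# the component formula matches |a||b|cos(theta) in 3D
assert math.isclose(dot(a, b), length(a) * length(b) * cos_theta, abs_tol=1e-12)
```

The key point is that cos_theta here is computed purely from lengths, so the check is not circular.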
 
If you rotate the vectors a and b so that a is aligned along the z axis, the expression for the scalar product reduces to a·b = a*b_z = ab*cosΘ. The last step follows from the change of coordinate system (from Cartesian to spherical).
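This rotation argument can be checked numerically. The sketch below (my own illustration, not from the thread) builds an orthonormal basis with its third axis along a, re-expresses both vectors in that basis, and confirms that the dot product collapses to |a|*b_z.

```python
import math
import random

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def normalize(u):
    n = math.sqrt(dot(u, u))
    return [x / n for x in u]

random.seed(2)
a = [random.uniform(-1, 1) for _ in range(3)]
b = [random.uniform(-1, 1) for _ in range(3)]

# build an orthonormal basis (e1, e2, e3) with e3 along a (Gram-Schmidt)
e3 = normalize(a)
t = [1.0, 0.0, 0.0]  # any vector not parallel to a
e1 = normalize([t[i] - dot(t, e3) * e3[i] for i in range(3)])
e2 = [e3[1]*e1[2] - e3[2]*e1[1],   # cross product e3 x e1
      e3[2]*e1[0] - e3[0]*e1[2],
      e3[0]*e1[1] - e3[1]*e1[0]]

# coordinates of a and b in the rotated frame
a_rot = [dot(a, e) for e in (e1, e2, e3)]
b_rot = [dot(b, e) for e in (e1, e2, e3)]

# a now lies on the z axis, so a.b reduces to |a| * b_z
assert math.isclose(a_rot[0], 0.0, abs_tol=1e-12)
assert math.isclose(a_rot[1], 0.0, abs_tol=1e-12)
assert math.isclose(dot(a, b), a_rot[2] * b_rot[2])
```

Because the change of basis is a rotation, it leaves all lengths and angles, and hence the dot product, unchanged.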
 
Opus_723 said:
I'm learning about dot products, and I'm having a bit of trouble grasping why axbx + ayby + azbz = ab*cosΘ. I understand how it works in two dimensions, I think, but three is still fuzzy.

This is what I came up with for two dimensions. The angle between the vectors is simply the difference between the angle between each vector and the x-axis.

ab*cosΘ
= ab*cos(Θa − Θb)
= ab*(cosΘa*cosΘb + sinΘa*sinΘb)
= (a*cosΘa)(b*cosΘb) + (a*sinΘa)(b*sinΘb)
= axbx + ayby

Now I understand intuitively that the dot product rule should work in three dimensions, since you could always orient the axes so that the two vectors lie on the x-y plane, in which case the above applies. But I'd like to see an analytical proof like the one above, except including azbz. I would feel a lot better about this if I could see that.

Hey there Opus_723.

Dot products (and general inner products, of which the dot product is one example) provide a description of the geometry of the vector space they live in.

The best way to think about this is to think first about the notion of length. This is captured by what is called a norm.

One norm you should be familiar with comes from the Pythagorean theorem: c^2 = a^2 + b^2, where a and b are the legs of a right triangle and c is the hypotenuse. The legs a and b are perpendicular to each other, and the length of c is SQRT(a^2 + b^2).

Now it turns out that an inner product (provided it satisfies the axioms) can be recovered from its norm, via what is called the polarization identity. Since the norm is known in Euclidean geometry (the Pythagorean theorem, in any dimension), we can use this to get an expression for the inner product in Euclidean spaces (the right-angle geometry you are used to).
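Here is a quick check of that claim (my own sketch). The polarization identity states that a·b = (‖a+b‖² − ‖a−b‖²)/4, so the inner product really is determined by lengths alone:

```python
import math
import random

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm_sq(u):
    # squared length: Pythagoras, no reference to angles
    return sum(x * x for x in u)

random.seed(3)
a = [random.uniform(-5, 5) for _ in range(3)]
b = [random.uniform(-5, 5) for _ in range(3)]

plus = [x + y for x, y in zip(a, b)]
minus = [x - y for x, y in zip(a, b)]

# polarization identity: the inner product recovered from lengths alone
recovered = (norm_sq(plus) - norm_sq(minus)) / 4.0
assert math.isclose(recovered, dot(a, b))
```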

In geometries that are not "flat" (i.e. curved geometries), we have to use more advanced mathematics, but the idea is the same. If the conditions for an inner product are satisfied, you can find ways of calculating the inner product in that geometry.

Hermann Grassmann developed a theory of geometric calculus using inner and outer products, and if you read modern accounts of geometric calculus you'll find that these come in at a very abstract level. For linear spaces these are fairly straightforward, but for general (i.e. curved) spaces you are dealing with differentials and it can get hairy.

Also, with regard to |a||b|cos(theta), the best way to understand it is through the Cauchy-Schwarz inequality. It says that the inner product (in your case the dot product) satisfies |<a,b>| = |a · b| <= ||a|| ||b||, where |x| is the absolute value of x and ||a|| is the length of the vector a. This inequality guarantees that a·b / (||a|| ||b||) always lies in [-1, 1], so it can legitimately be interpreted as the cosine of an angle, and that is why we associate geometry with inner products.
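A brute-force illustration of Cauchy-Schwarz (my own sketch, random vectors in random dimensions):

```python
import math
import random

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

random.seed(4)
for _ in range(1000):
    n = random.randint(1, 5)          # the inequality holds in any dimension
    a = [random.uniform(-10, 10) for _ in range(n)]
    b = [random.uniform(-10, 10) for _ in range(n)]
    # |a.b| <= |a||b|, so a.b/(|a||b|) is a valid cosine in [-1, 1]
    assert abs(dot(a, b)) <= norm(a) * norm(b) + 1e-9
```

The small epsilon only absorbs floating-point round-off; the inequality itself is exact.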
 
I use the cosine law for the intuition. Say you have vectors a and b. Their difference is a − b, and these three vectors form a triangle. Let θ be the angle between a and b. By the cosine law, we have
|\mathbf{a}-\mathbf{b}|^2=|\mathbf{a}|^2+|\mathbf{b}|^2-2|\mathbf{a}||\mathbf{b}|\cos\theta.
Since the norm of the vector squared is equal to the vector dot itself, we have
(\mathbf{a}-\mathbf{b})\cdot(\mathbf{a}-\mathbf{b})=\mathbf{a}\cdot \mathbf{a}+\mathbf{b}\cdot \mathbf{b} -2|\mathbf{a}||\mathbf{b}|\cos\theta.
Since the dot product is distributive, we have
\mathbf{a}\cdot \mathbf{a}-2\mathbf{a}\cdot\mathbf{b}+\mathbf{b}\cdot \mathbf{b}=\mathbf{a}\cdot \mathbf{a}+\mathbf{b}\cdot\mathbf{b} -2|\mathbf{a}||\mathbf{b}|\cos\theta.
Simplifying yields
-2\mathbf{a}\cdot \mathbf{b}=-2|\mathbf{a}||\mathbf{b}|\cos\theta.
Further simplifying yields
\mathbf{a}\cdot \mathbf{b}=|\mathbf{a}||\mathbf{b}|\cos\theta.
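Each step of this derivation can be verified numerically. The sketch below (my own check, not part of the thread) confirms, for random vectors, that the squared norm equals the self-dot, that the distributive expansion holds, and that the final identity comes out:

```python
import math
import random

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

random.seed(1)
a = [random.uniform(-2, 2) for _ in range(3)]
b = [random.uniform(-2, 2) for _ in range(3)]
d = [x - y for x, y in zip(a, b)]   # the difference a - b

# step 1: |a-b|^2 equals (a-b).(a-b)
assert math.isclose(norm(d)**2, dot(d, d))

# step 2: distributivity: (a-b).(a-b) = a.a - 2 a.b + b.b
assert math.isclose(dot(d, d), dot(a, a) - 2*dot(a, b) + dot(b, b))

# step 3: the cosine law then forces a.b = |a||b|cos(theta)
cos_theta = (dot(a, a) + dot(b, b) - dot(d, d)) / (2 * norm(a) * norm(b))
assert math.isclose(dot(a, b), norm(a) * norm(b) * cos_theta)
```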
 
dalcde, thanks, that makes the most sense so far. That satisfies me enough to start using it. As a side note, does anyone know where I could find some history on how the dot and cross products were originally developed?
 