Shedding some light on the dot product

cytochrome
The dot product A . B is the magnitude of vector A times the projection of B onto A.

B . A is the magnitude of vector B times the projection of A onto B.

Correct?

A . B = B . A and this makes sense. But say you're trying to find the components of a vector V in the direction of a vector W. Would it matter whether you wrote V . W or W . V?

EDIT: Also, does anyone know what it means (geometrically speaking) to find the components of a vector in the direction of another vector? I can give an example from a book if needed.
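(For reference, both statements describe the same quantity, since the scalar projection brings in the cosine of the angle between the vectors:
$$\vec{A} \cdot \vec{B} = |\vec{A}|\,\big(|\vec{B}|\cos\theta\big) = |\vec{B}|\,\big(|\vec{A}|\cos\theta\big),$$
where ##\theta## is the angle between ##\vec{A}## and ##\vec{B}##.)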
 
To the first question: no, it wouldn't matter. The dot product is commutative, so ##\vec{V} \cdot \vec{W} = \vec{W} \cdot \vec{V}## for any vectors V and W.

To the edit: imagine your vector lying in the plane. Now imagine it as the hypotenuse of a right triangle where one side of the triangle is parallel to the x-axis and the other side is parallel to the y-axis. The component of the main vector (which, remember, is the hypotenuse) in the x direction is the length of the side of the triangle parallel to the x-axis, and similarly for the y direction.
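As a concrete sketch (the numbers are chosen purely for illustration): take ##\vec{V} = (3, 4)## in the plane, with ##\hat{x}## and ##\hat{y}## the unit vectors along the axes. Then
$$\vec{V} \cdot \hat{x} = 3, \qquad \vec{V} \cdot \hat{y} = 4,$$
so the right triangle with ##\vec{V}## as hypotenuse has legs of length 3 and 4 along the x- and y-directions, and indeed ##|\vec{V}| = \sqrt{3^2 + 4^2} = 5##.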
 
cytochrome said:
The dot product A . B is the magnitude of vector A times the projection of B onto A.
A . B = B . A and this makes sense. But, say you're trying to find the components of a vector V in the direction of a vector W. Would it matter whether or not you wrote V . W or W . V?

It does not matter which way you write it, but neither product by itself gives you the component of V along the direction of W.
You need to take the dot product of V with the unit vector along W, i.e. ##\hat{W} = \vec{W}/|\vec{W}|##.
So the magnitude of the projection is given by ##\vec{V} \cdot \vec{W} / |\vec{W}|##.
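For instance (arbitrary numbers, just to illustrate the formula): with ##\vec{V} = (3, 4)## and ##\vec{W} = (1, 1)##,
$$\frac{\vec{V} \cdot \vec{W}}{|\vec{W}|} = \frac{3 + 4}{\sqrt{2}} = \frac{7}{\sqrt{2}} \approx 4.95,$$
which is the length of the component of ##\vec{V}## along ##\vec{W}##, and it comes out the same whether you compute ##\vec{V} \cdot \vec{W}## or ##\vec{W} \cdot \vec{V}##.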
 