Trying to understand dot product of two DIFFERENT vectors

SUMMARY

The discussion focuses on the dot product of two different vectors, emphasizing its geometric interpretation and algebraic properties. The dot product equals the product of the vectors' lengths and the cosine of the angle between them, so it captures both their magnitudes and their relative orientation. Key properties include bilinearity and the component formula ##v \cdot w = v_1 w_1 + \dotsb + v_n w_n##, which is generalized to inner products in linear algebra. The discussion also highlights the projection of one vector onto another as a practical use of the dot product.

PREREQUISITES
  • Understanding of vector mathematics and geometry
  • Familiarity with linear algebra concepts, particularly bilinear forms
  • Knowledge of the Pythagorean theorem as it applies to vectors
  • Basic grasp of matrix operations and identities
NEXT STEPS
  • Study the geometric interpretation of vector projections in depth
  • Learn about inner products and their applications in linear algebra
  • Explore the properties of bilinear forms and their significance
  • Investigate the relationship between dot products and cosine similarity in machine learning
USEFUL FOR

Mathematicians, physics students, computer scientists, and anyone interested in understanding vector operations and their applications in various fields.

symbolipoint (Homework Helper, Education Advisor, Gold Member)
Understanding the use of the Pythagorean theorem for the length of a vector, and how the square of that length is the dot product of a vector with itself, is no problem. I'm trying to look inside the meaning of the dot product of two different vectors and understand it. I can also accept (just by following the definition) the appearance of the cosine of the angle between the two vectors and its derivation as part of their dot product; but I still cannot understand the meaning of the dot product of two different vectors in a clear way.

I have already looked at some YouTube videos and read a couple of Wikipedia articles.

Is this just supposed to be unintuitive, and we must just accept it?
##v \cdot v##: okay, no trouble.
##u \cdot w##: not understood; I can only blindly follow the definition.
 
We can define the projection of a vector ##v## onto another vector ##w## as ##\text{proj}_{w}v = \frac{v\cdot w}{\left \| w \right \|^{2}}w##. So we can view ##v \cdot w## as a way of picking out the part of ##v## along ##w## or vice versa (if we wanted the projection of ##w## onto ##v##).
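Here is a minimal numerical sketch of that projection formula (the vectors ##v## and ##w## are arbitrary examples of my own choosing, not from the thread):

```python
import numpy as np

# Arbitrary example vectors; any nonzero w works.
v = np.array([3.0, 4.0])
w = np.array([1.0, 0.0])

# proj_w(v) = (v . w / ||w||^2) w
proj = (np.dot(v, w) / np.dot(w, w)) * w

print(proj)                  # [3. 0.]  the part of v along w
print(v - proj)              # [0. 4.]  the remainder
print(np.dot(v - proj, w))   # 0.0      the remainder is orthogonal to w
```

So the dot product ##v \cdot w## is exactly the ingredient that determines how much of ##v## points along ##w##.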
 
First, the dot product is a product, so it is induced by a linear operator. Second, the dot product should be coordinate independent, so that operator must be the identity. We can also note that it is reasonable for the dot product to be expressible in terms of the lengths and the angle, since those determine the coordinate-free relationship between the two vectors.
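A quick sketch of the coordinate-independence point, assuming only NumPy (the vectors and the rotation angle are arbitrary choices): rotating both vectors by the same rotation leaves the dot product unchanged, because only the lengths and the angle between the vectors matter.

```python
import numpy as np

v = np.array([2.0, 1.0])
w = np.array([-1.0, 3.0])

theta = 0.7  # any rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

# The dot product depends only on lengths and the angle between the vectors,
# so rotating both vectors together does not change it.
print(np.dot(v, w))          # 1.0
print(np.dot(R @ v, R @ w))  # 1.0 (up to floating-point rounding)
```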
 
It's a relation between two vectors in terms of their length and orientation.

If you normalize both vectors, then it's purely in terms of orientation; if you don't, then it's a measure that represents the intertwined relationship of length and orientation together.

It also does this within the plane spanned by those vectors, so there is a geometric aspect with regard to this subspace: the relation is made relative to the vectors and the space they span.

Relating two vectors is much like relating two scalars, except that vectors carry orientation along with length, so if you are going to use a single number, the relation must take both into account rather than just the length (or magnitude).
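To make the normalization remark above concrete, here is a small illustrative sketch (the vectors are arbitrary choices of mine): dividing each vector by its length removes the length information, and the dot product of the resulting unit vectors is the cosine of the angle between them, which is what cosine similarity computes.

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
w = np.array([2.0, 0.0, 0.0])

# Raw dot product mixes length and orientation.
raw = np.dot(u, w)                                    # 2.0

# Normalizing both vectors leaves only orientation: cos(angle).
cos_angle = np.dot(u / np.linalg.norm(u), w / np.linalg.norm(w))
print(raw, cos_angle)                                 # 2.0 0.333...

# Cross-check: ||u|| ||w|| cos(angle) recovers the raw dot product.
angle = np.arccos(cos_angle)
print(np.linalg.norm(u) * np.linalg.norm(w) * np.cos(angle))  # 2.0
```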
 
An important property of the dot product is that it is bilinear (linear in both the first and second argument). This means
$$(c\mathbf{v}) \cdot \mathbf{w} = c(\mathbf{v} \cdot \mathbf{w})$$
$$(\mathbf{v}_1 + \mathbf{v}_2) \cdot \mathbf{w} = \mathbf{v}_1 \cdot \mathbf{w} + \mathbf{v}_2 \cdot \mathbf{w}$$
and similarly when you have a scalar multiple or a sum of vectors on the right side. But these properties still don't make the dot product unique; there are other possible bilinear operators. For those other operators (at least the symmetric, positive-definite ones), there is an appropriate basis ##\beta## such that, if you write the vectors in terms of ##\beta##, the bilinear operator reduces to the dot product. In fact, once you have defined one such bilinear operator, any other can be written in terms of it by choosing a different basis. So why is the dot product special among these bilinear operators? It's because, in terms of the components of the vectors, the dot product is simply
$$\mathbf{v} \cdot \mathbf{w} = (v_1, \dotsc, v_n) \cdot (w_1, \dotsc, w_n) = v_1 w_1 + \dotsb + v_n w_n = \mathbf{v}\, I\, \mathbf{w}^t,$$
where on the far right we view ##\mathbf{v}## as a row matrix, ##\mathbf{w}^t## as a column matrix, and ##I## is the identity matrix. It turns out that every other bilinear operator can be written in the same way, just with some other square matrix in place of ##I##. Because of these nice properties, the dot product is generalized to inner products in linear algebra.
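As an illustrative sketch of that matrix viewpoint (the matrix ##A## below is just an arbitrary example of another square matrix, not anything canonical): writing the dot product as ##\mathbf{v} I \mathbf{w}^t## and swapping ##I## for a different matrix gives a different bilinear form, while bilinearity itself is easy to verify numerically.

```python
import numpy as np

v = np.array([1.0, 2.0])
w = np.array([3.0, -1.0])

I = np.eye(2)
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])  # an arbitrary other square matrix

# Dot product written as a matrix product: v I w^t
print(v @ I @ w, np.dot(v, w))   # 1.0 1.0

# Replacing I by another matrix gives a different bilinear form.
print(v @ A @ w)                 # 5.0

# Bilinearity in the first argument: (c v + v2) . w = c (v . w) + v2 . w
c, v2 = 4.0, np.array([1.0, 0.0])
print(np.dot(c * v + v2, w), c * np.dot(v, w) + np.dot(v2, w))  # 7.0 7.0
```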
 
