
Dot product clarification

  1. Sep 4, 2014 #1
    I'm just trying to understand from a linear algebra standpoint how they define dot product from the inner product and how this gives rise to a definition of length and angle. somehow there is a way to combine points in space to a scalar value that unambiguously determines length and angle? Is that all it is, or what else does the inner product account for that the dot product doesn't?

    More or less: I'm trying to understand how a notion of space from linear algebra gives rise to an inner product.
  3. Sep 4, 2014 #2
    Dot product is an inner product.

    Read the relevant parts of this article. Large parts of it were written by me, and Ī won’t repeat them here.

    An inner product is a symmetric bilinear form (one may also say “quadratic form” – over the real numbers there is no difference) that is positive definite. It is one of the possible additional structures on linear and affine spaces.
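    As a quick numerical sanity check (my own Python sketch, not part of the article): the standard dot product on ##\mathbb R^3## satisfies all three defining properties.

```python
# Numerical sanity check (illustration only): the standard dot product
# on R^3 is symmetric, bilinear, and positive definite.
import random

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

random.seed(0)
u = [random.uniform(-1, 1) for _ in range(3)]
v = [random.uniform(-1, 1) for _ in range(3)]
w = [random.uniform(-1, 1) for _ in range(3)]
a, b = 2.0, -3.0

# Symmetry: <u, v> = <v, u>
assert abs(dot(u, v) - dot(v, u)) < 1e-12

# Bilinearity in the first slot: <a*u + b*w, v> = a<u,v> + b<w,v>
lhs = dot([a * ui + b * wi for ui, wi in zip(u, w)], v)
assert abs(lhs - (a * dot(u, v) + b * dot(w, v))) < 1e-12

# Positive definiteness: <u, u> > 0 for u != 0
assert dot(u, u) > 0
```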
    Last edited: Sep 4, 2014
  4. Sep 4, 2014 #3



    Once you have an inner product defined, the Cauchy-Schwarz inequality gives rise to angles (or angle-equivalents). Length is given by
    $$L(v):=\sqrt{\langle v,v\rangle}.$$
    Now, L(v) defines a norm ||.|| on V, and a norm gives rise to a metric defined by d(a,b):=||a-b||. Angles (more precisely, their respective cosines) result from Cauchy-Schwarz:
    $$\cos\theta=\frac{\langle a,b\rangle}{\|a\|\,\|b\|}.$$
    In a finite-dimensional space, you can always define a norm by using the isomorphism between spaces of the same dimension to pull back the standard Euclidean norm; the norm of Euclidean space.
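    To make the construction concrete, here is a small Python sketch (my own illustration) deriving the norm, the metric and the angle from nothing but the inner product:

```python
import math

def inner(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(v):                      # L(v) = sqrt(<v, v>)
    return math.sqrt(inner(v, v))

def dist(a, b):                   # d(a, b) = ||a - b||
    return norm([x - y for x, y in zip(a, b)])

def angle(u, v):                  # cos(theta) = <u, v> / (||u|| ||v||)
    return math.acos(inner(u, v) / (norm(u) * norm(v)))

u, v = [1.0, 0.0], [0.0, 2.0]
print(norm(v))        # 2.0
print(dist(u, v))     # sqrt(5) = 2.23606...
print(angle(u, v))    # pi/2 = 1.57079...
```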
  5. Sep 4, 2014 #4



    Most of what I was going to say has been said already, so I will just add a clarification about the Cauchy-Schwarz inequality. If x and y are vectors in ##\mathbb R^2##, we have ##x\cdot y=|x||y|\cos\theta##, where ##\theta## is the angle between the vectors. This is a theorem. Note that since ##|\cos\theta|\leq 1## for all ##\theta##, this result implies that
    $$\frac{|x\cdot y|}{|x||y|}\leq 1.$$
    The Cauchy-Schwarz inequality is the generalization of this result to arbitrary inner product spaces. If x and y are vectors in an inner product space, we can prove that ##|\langle x,y\rangle|\leq\|x\|\|y\|##. The proof does not involve angles. When we're dealing with a real vector space, this result allows us to define an angle ##\theta## between x and y by
    $$\cos\theta=\frac{\langle x,y\rangle}{\|x\|\|y\|}.$$
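    A quick empirical check of the inequality (my own Python sketch, not a proof):

```python
import math
import random

random.seed(1)

def inner(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(inner(x, x))

# |<x, y>| <= ||x|| ||y|| for many random vectors in R^5
for _ in range(1000):
    x = [random.gauss(0, 1) for _ in range(5)]
    y = [random.gauss(0, 1) for _ in range(5)]
    assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12

# ... so the ratio defines a cosine in [-1, 1]
x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
cos_theta = inner(x, y) / (norm(x) * norm(y))
assert -1.0 <= cos_theta <= 1.0
```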
    Ask if you want to see any of these claims proved.
  6. Sep 4, 2014 #5
    Ok, well, I pretty much knew the essentials of what has been mentioned already. It just seems strange to me that once you arrive at this scalar value that comes from the space, it doesn't actually have a position in space. It's not hard to imagine it's possible, it just seems like a strange thing to do... but I suppose if you want a notion of length (or angle if necessary) within linear algebra you have to define a dot product - and the Cauchy-Schwarz inequality reminded me of this, so thanks.

    How about a direct product then - how does this compare/contrast with the dot product, and how is the inner product more general than either one? And sure, Fredrik, proving the claims won't hurt at all and may even help.

  7. Sep 5, 2014 #6



    To prove that ##x\cdot y=|x||y|\cos\theta##, you need to know the law of cosines. I'm sure you know the Pythagorean theorem ##a^2=b^2+c^2##. The law of cosines is a generalization of this that holds for arbitrary triangles: ##a^2=b^2+c^2-2bc\cos\theta##, where ##\theta## is the angle opposite to the edge of length a. Wikipedia has several nice proofs of this. I think the one labeled "another proof in the acute case" is especially easy to follow. http://en.wikipedia.org/wiki/Law_of_cosines

    Now let x and y be two non-zero vectors. You should be able to prove that ##x\cdot y=|x||y|\cos\theta## when they're parallel, so I will only consider the case when they're not. We have ##|x-y|^2=(x-y)\cdot(x-y)=|x|^2+|y|^2-2x\cdot y##. The arrows representing x and y can be interpreted as two edges of a triangle. The third edge has length ##|x-y|##. If you apply the law of cosines to this triangle, you get ##|x-y|^2=|x|^2+|y|^2-2|x||y|\cos\theta##, where ##\theta## is the angle opposite to the edge with length |x-y|. These results imply that ##x\cdot y=|x||y|\cos\theta##.
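    Both identities can be checked numerically; here is a small Python sketch (my own, for illustration) using a pair of vectors with a known angle:

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def mag(v):
    return math.sqrt(dot(v, v))

# x along the positive axis, y at a known angle theta from x
theta = 0.7
x = [3.0, 0.0]
y = [2.0 * math.cos(theta), 2.0 * math.sin(theta)]

# x . y = |x| |y| cos(theta)
assert abs(dot(x, y) - mag(x) * mag(y) * math.cos(theta)) < 1e-12

# the law-of-cosines identity used in the proof
d = [x[0] - y[0], x[1] - y[1]]
assert abs(mag(d)**2 - (mag(x)**2 + mag(y)**2 - 2 * dot(x, y))) < 1e-12
```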

    Cauchy-Schwarz (for real inner product spaces): For all real numbers t and all vectors x,y, we have
    $$0 \leq \langle x+ty,x+ty\rangle=\langle x,x\rangle+2t\langle x,y\rangle+t^2\langle y,y\rangle =\|x\|^2+2t\langle x,y\rangle +t^2\|y\|^2.$$ In particular, this holds when ##t=-\frac{\langle x,y\rangle}{\|y\|^2}##. So for all x,y, we have
    $$0\leq\|x\|^2-2\frac{\langle x,y\rangle^2}{\|y\|^2}+\frac{\langle x,y\rangle^2}{\|y\|^2} =\|x\|^2-\frac{\langle x,y\rangle^2}{\|y\|^2}.$$ This implies that ##|\langle x,y\rangle|\leq\|x\|\|y\|##.
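    The substitution step can also be checked numerically; a small Python sketch (my own illustration) of the quadratic in t:

```python
def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

def nsq(v):                       # ||v||^2 = <v, v>
    return inner(v, v)

x = [1.0, -2.0, 0.5]
y = [3.0, 1.0, -1.0]

# q(t) = <x + t y, x + t y> is nonnegative for every t ...
def q(t):
    return nsq([xi + t * yi for xi, yi in zip(x, y)])

# ... and at t = -<x,y>/||y||^2 it equals ||x||^2 - <x,y>^2/||y||^2
t0 = -inner(x, y) / nsq(y)
assert q(t0) >= 0
assert abs(q(t0) - (nsq(x) - inner(x, y)**2 / nsq(y))) < 1e-12

# which forces <x,y>^2 <= ||x||^2 ||y||^2
assert inner(x, y)**2 <= nsq(x) * nsq(y)
```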
  8. Sep 5, 2014 #7



    Are you sure you meant "direct product"? Did you read about that somewhere? That's at a level that's way more advanced than this. It's not even about combining vectors into something new. It's about combining vector spaces into something new.
  9. Sep 5, 2014 #8
    It was mentioned in a discussion about tensors, and tensors are a generalization of vectors - just sets of numbers subject to the same linearity rules as vectors. A direct product is also known as a tensor product - so does the direct product have any meaning in linear algebra? Or can it not make much sense within a well-defined vector space?
  10. Sep 5, 2014 #9
    It is not clear what you are going to learn, but you can hardly learn many things simultaneously.
    • The direct product and the tensor product are as different as addition and multiplication.
    • The direct product is an operation applicable to vector spaces and, as such, has a certain meaning in linear algebra.
    Don’t learn tensors before linear operators and bilinear forms. One can’t understand tensors without a motivation to “generalize vectors”.
  11. Sep 5, 2014 #10



    Only if we use an old and terrible definition of "vector". The modern definitions say that a vector is an element of a vector space, and a tensor is a multilinear map
    $$T:V^*\times\cdots\times V^*\times V\times\cdots\times V\to\mathbb R,$$ where V is a vector space and V* is its dual (i.e. the set of all linear maps from V into ##\mathbb R##).
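    As a concrete illustration (mine, not part of the definition above), a (0,2)-tensor on ##V=\mathbb R^2## can be represented by a 2×2 matrix, and multilinearity can be checked directly:

```python
# A (0,2)-tensor on R^2 is a bilinear map T: V x V -> R;
# here it is represented by an arbitrary 2x2 matrix A.
A = [[1.0, 2.0],
     [0.0, -1.0]]

def T(u, v):
    return sum(A[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

u, v, w = [1.0, 2.0], [3.0, -1.0], [0.5, 0.5]
a = 4.0

# Linearity in each slot separately (that is what "multilinear" means):
assert abs(T([a*u[0] + w[0], a*u[1] + w[1]], v) - (a*T(u, v) + T(w, v))) < 1e-12
assert abs(T(u, [a*v[0] + w[0], a*v[1] + w[1]]) - (a*T(u, v) + T(u, w))) < 1e-12
```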

    They're not the same. I actually don't even know what a direct product of vector spaces is. I understand direct products of groups, and direct sums of vector spaces, but not direct products of vector spaces. For groups, direct products and direct sums are the same thing. The term "direct sum" is preferred over "direct product" in the special case when both groups are abelian and their binary operations are referred to as "addition". Since vector spaces are abelian groups whose binary operation is called "addition", I had expected that a direct "product" of vector spaces would actually be the same thing as a direct sum of vector spaces, but I see now that Wikipedia is saying that they're not the same.

    A vector space X is said to be a direct sum of two of its subspaces Y and Z, if for each x in X, there's a unique pair (y,z) such that ##y\in Y##, ##z\in Z## and ##x=y+z##.

    This is the "internal" direct sum. There's also an "external" kind. If Y and Z are vector spaces, we can define ##X=Y\times Z## and define a vector space structure on X in an obvious way. This X is said to be the (external) direct sum of Y and Z. The reason is that X is the (internal) direct sum of the sets ##\{(y,0)|y\in Y\}## and ##\{(0,z)|z\in Z\}##. These subspaces are clearly isomorphic to Y and Z respectively.

    I don't know what a direct product of vector spaces is, but this is the definition of "tensor product": A bilinear function ##\tau:X\times Y\to Z##, where X,Y and Z are vector spaces, is said to be a tensor product if, for each bilinear function ##\sigma:X\times Y\to W##, where W is a vector space, there's a unique linear function ##\sigma_*:Z\to W## such that ##\sigma=\sigma_*\circ\tau##.
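    One concrete model of that definition (an illustration, not the abstract definition itself): for ##X=\mathbb R^m## and ##Y=\mathbb R^n##, the outer-product map is bilinear and lands in the mn-dimensional space of m×n matrices:

```python
# tau(x, y) = outer product of x and y: bilinear, and every bilinear map
# out of R^m x R^n factors through it, so it is a tensor product.
def outer(x, y):
    return [[xi * yj for yj in y] for xi in x]

x = [1.0, 2.0]        # m = 2
y = [3.0, 4.0, 5.0]   # n = 3
M = outer(x, y)       # a 2 x 3 matrix: dim = 2 * 3 = 6
print(M)  # [[3.0, 4.0, 5.0], [6.0, 8.0, 10.0]]
```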

    Direct products, direct sums and tensor products are all about combining vector spaces into new vector spaces. They're not about combining vectors in some space V.

    I agree with Incis Mrsi that you shouldn't be worrying about these things now. They are things that you can return to when you already know linear algebra pretty well.

    Do you understand the proofs I posted?
  12. Sep 5, 2014 #11
    I think everyone can only really learn by understanding how things are related - clearly the inner product, the dot product and the direct product are related. But beyond this there is wide variation between individuals in the way this is done.

    However, it does seem that understanding the direct product means stepping slightly outside the scope of linear algebra. Reviewing bilinear maps and bilinear forms was instructive, so thanks. Although the way I'm asking these questions may make it seem like I don't know very much about linear algebra (especially if I indicated such a conceptual disparity about the dot product), I'm actually just pushing for a way to clarify and distinguish between these three kinds of products. I'm beginning to study tensors more closely in general, and I already understand the motivation to not have to assign every element an exact position in space but instead relate their positions exclusively to each other.

    Incnis Mrsi: if you can explain the distinction between the tensor product and the direct product instead of just claiming there is one - albeit in an interesting way - that would be great, because I have a direct quote from the textbook that a direct product is sometimes called a tensor product - but that is as far as it goes.

    Fredrik: I believe so - the result of following the rules of dotting the difference of two vectors with itself is equated with the law of cosines for this resulting vector, from which the Cauchy-Schwarz inequality follows.

    The second proof starts with the inner product of a vector with itself - the vector being a sum of two vectors, one subject to a parameter - set as being greater than or equal to zero, i.e. stating that this operation by definition must be positive definite. Then the rules of the inner product operation are carried out and the parameter is expressed in terms of the vectors. Once this is substituted back in, there is an expression bounding the inner product of the two vectors.
    ---In summary: starting with the inner product of a vector (a sum of two) with itself being nonnegative, you can arrive at a bound on the inner product of the two.

  13. Sep 5, 2014 #12

    D H


    That isn't a recipe for learning. It's a recipe for not learning anything.

    We teach young children how to compute 3.5*1.4=4.9. We teach slightly older children how to compute √2=1.414213562373... What makes these calculations valid is quite deep, and also quite recent (end of the 19th century). Along with those young children, mathematicians, accountants, scientists, and engineers did quite nicely using the reals for centuries without learning about Dedekind cuts or Cauchy sequences.

    Your recipe would require students to learn abstract algebra before they started adding.

    The inner product and dot product, yes. The direct product? The only relation is that all three use the word "product".

    As far as the relation between the inner product and dot product, the dot product came first. The inner product is a generalization of the concept of the dot product.

    The three dimensional vectors, the three dimensional dot product, and the three dimensional cross product are offshoots from Hamilton's quaternions. Gibbs and Heaviside liked the vectorial part of Hamilton's quaternions but not the quaternions themselves. They developed the concept of vectors as taught in introductory physics in the late 19th century.

    Meanwhile, Grassmann developed a very abstract concept that eventually became the basis for vector spaces (and a bunch of other concepts) at about the same time Hamilton was developing his quaternions. Grassmann was perhaps a bit ahead of his time; very few paid much attention to Grassmann's work. Clifford did. He unified Hamilton's quaternions and Grassmann's algebra into even more abstract concepts. Clifford, too, was a bit ahead of his time.

    Later mathematicians finally paid attention to Grassmann's and Clifford's work, and also looked to the Gibbs/Heaviside vectors. All of them were addressing the same base concept. How could this be unified and generalized? That's when the inner product was born.

    The dot product obviously generalizes beyond the Gibbs/Heaviside three dimensional vectors. The inner product generalizes even further. There are inner products that at face value look nothing at all like the three dimensional dot product.
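    One standard example of such an inner product (my own illustration, computed exactly in Python): polynomials on [0,1] with ##\langle p,q\rangle=\int_0^1 p(x)q(x)\,dx##:

```python
from fractions import Fraction

# <p, q> = integral_0^1 p(x) q(x) dx, computed exactly from coefficient
# lists (p[i] is the coefficient of x^i), using integral_0^1 x^k dx = 1/(k+1).
def inner(p, q):
    return sum(Fraction(pi) * Fraction(qj) / (i + j + 1)
               for i, pi in enumerate(p) for j, qj in enumerate(q))

p = [0, 1]        # p(x) = x
q = [1, 0, 1]     # q(x) = 1 + x^2

print(inner(p, q))        # integral of x + x^3 = 1/2 + 1/4 = 3/4
print(inner(p, p) > 0)    # positive definiteness: True for p != 0
```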

    The Gibbs/Heaviside three dimensional vectors (and their two dimensional counterparts) are simple enough to be taught to students before they enter college. The more abstract linear algebra requires a lot more sophistication. It isn't taught until after students have learned about those simple vectors, and then have taken multiple introductory calculus classes.

    This kind of thing happened time after time in the history of science and mathematics. Different people developed different concepts, and later on yet others saw that these apparently distinct concepts were deeply related. Then science and mathematics instructors tried to make sense of it all and determine how and in which order to teach these concepts to students.
  14. Sep 5, 2014 #13

    Well, what you say would make more sense if I were learning the direct product before anything else, which I'm not. But in any case, just because material has an underlying sequence to it doesn't necessitate having it be learned in a particular conceptual order. And I'm not even asking to know everything about the direct product in this context; there are properties of irrational numbers that can still be appreciated in the way they relate to rational numbers, for instance taking the square root of 2 - so your example kind of falls short....

    But if you're saying the direct product has nothing to do with them, then fine. I don't know why they would use the same word to describe all three, but it's possible I suppose. But you're being a little presumptuous and dramatic if you think I will "unlearn" just because I would have in the back of my mind how a direct product could be related to other products. My motto is: as long as I'm being logical I have nothing to fear - so any assumption that would give me the wrong idea would not come from being handed the wrong information. And is it possible for someone to have an understanding of a highly profound mathematical idea without having ever come across it in math? Of course - there are some jazz musicians that I sometimes believe must know a huge set of undiscovered and profound mathematical ideas.

    Thanks for the interesting history lesson about dot product and inner product though, I enjoyed it.
    Last edited: Sep 5, 2014
  15. Sep 6, 2014 #14
    direct vs tensor

    As Fredrik said, one does not speak about direct products in linear algebra. But the operation known as the “direct sum” of vector spaces (as well as of modules and abelian groups) leads to the direct product of groups (under addition). Hence Ī ignored the terminological aberration.

    For (different) vector spaces, the direct sum is something like addition. In particular, the dimensions of the spaces add. The tensor product is something like multiplication. In particular, the dimensions of the spaces multiply.
    Last edited: Sep 6, 2014
  16. Sep 6, 2014 #15



    I said that that's what I thought, but Wikipedia says that there's a concept of direct product of modules (and therefore vector spaces), that's different from the concept of direct sum. They're the same when we're dealing with finite collections of vector spaces, but not when we're dealing with infinite collections.

  17. Sep 6, 2014 #16
    Yes, infinitary operations have their traps. That “sum–product” terminology tries to distinguish different types of generalization, but at the same time it becomes confusing (even for us).