Cool Beginner Vector Problem from K&K


Discussion Overview

The discussion revolves around a vector problem from "An Introduction to Classical Mechanics" by K&K, specifically finding a unit vector in the x-y plane that is perpendicular to a given vector A = (3, 5, 1). Participants explore the mathematical setup involving the dot product and the conditions for orthogonality, as well as the implications of using orthogonal coordinates.

Discussion Character

  • Exploratory
  • Technical explanation
  • Mathematical reasoning
  • Debate/contested

Main Points Raised

  • One participant describes the initial setup of the problem, noting that the dot product of vectors A and B must equal zero to find a perpendicular vector.
  • Another participant points out that the expression for the dot product should include a term for the z-component, although it drops out since Bz is zero.
  • A different participant emphasizes that the form of the dot product used is valid only when the axes are orthogonal.
  • One participant expresses interest in deriving the dot product expression from its definition, suggesting that it can be shown using the cosine of the angle between the vectors.
  • Another participant shares their attempt at writing a proof and mentions hiding it under spoiler tags to avoid unnecessary LaTeX display.
  • There is a suggestion to prove the dot product without relying on the law of cosines, with one participant indicating difficulty in finding such a proof.
  • One participant proposes using the abstract definition of a scalar product and mentions the Schmidt algorithm for orthonormalizing a basis as a method to derive the dot product representation.
  • A participant expresses concern about using advanced concepts to prove basic results, fearing it may lead to convoluted reasoning.

Areas of Agreement / Disagreement

Participants do not reach a consensus on the best approach to prove the properties of the dot product or whether advanced methods should be applied to basic concepts. Multiple competing views and methods are presented.

Contextual Notes

There are unresolved assumptions regarding the applicability of the dot product in non-orthogonal systems and the implications of using advanced mathematical concepts in foundational proofs.

logan3
I thought this was a super cool vector example problem from An Introduction to Classical Mechanics by K&K. It says:
The problem is to find a unit vector lying in the x−y plane that is perpendicular to the vector A = (3, 5, 1).
The solution begins by recognizing that since the problem asks for a unit vector B perpendicular to A, we can equivalently say that the dot product of A and B must equal zero. Using the vector component definition of the dot product, A \cdot B = A_x B_x + A_y B_y = 0, we can set up our first equation: 3B_x + 5B_y = 0.

Next, we can say that since it's a unit vector, then the magnitude must be equal to one, and hence B_x^2 + B_y^2 = 1^2.

Now we have two equations to solve for B_x and B_y!

I thought this was a great example, neatly combining all the definitions and ideas we'd learned so far.
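Not part of the book's solution, but as a quick sanity check the two equations can be solved by substitution and verified numerically. A minimal Python sketch (variable names are my own):

```python
import math

# Solve 3*Bx + 5*By = 0 together with Bx**2 + By**2 = 1.
# From the first equation, By = -3*Bx/5; substituting into the second:
# Bx**2 * (1 + 9/25) = 1  =>  Bx = ±5/sqrt(34), By = ∓3/sqrt(34)
bx = 5 / math.sqrt(34)
by = -3 / math.sqrt(34)

A = (3, 5, 1)
B = (bx, by, 0)  # lies in the x-y plane, so Bz = 0

dot = sum(a * b for a, b in zip(A, B))          # should be 0 (perpendicular)
norm = math.sqrt(sum(b * b for b in B))          # should be 1 (unit vector)
print(dot, norm)
```

Note the sign ambiguity: (-5/√34, 3/√34, 0) works just as well, since negating a vector preserves both its magnitude and its perpendicularity to A.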
 
Your A.B expression is missing a term, namely Az.Bz, but since Bz is zero it drops out. I mention it because the problem is being solved in x, y and z space, and you should show it to complete your solution.
 
Thanks. The book didn't include it, so I didn't include it either.
 
The other thing they may not have mentioned is that this form of A.B works only when the x, y and z axes are orthogonal.

To me the really cool proof was to show that, from the definition of the dot product, namely

A.B = |A||B|cos(ab-angle)

one can derive the AxBx + AyBy + AzBz expression, given that the axes are orthogonal.
 
I had never written the proof myself. Inspired by what jedishrfu wrote, my attempt follows.

To avoid the problem of showing unnecessary \LaTeX, the proof is hidden under spoiler tags.

\begin{array}{rcl}
A \cdot B &=& |A||B|\cos\theta \\
&=& |A||B| \left( \frac{|A|^2 + |B|^2 - |C|^2}{2|A||B|} \right) \\
&=& \frac{|A|^2 + |B|^2 - |C|^2}{2} \\
&=& \frac{|A|^2 + |B|^2 - |B - A|^2}{2} \\
&=& \frac{A_x^2 + A_y^2 + A_z^2 + B_x^2 + B_y^2 + B_z^2 - (B_x - A_x)^2 - (B_y - A_y)^2 - (B_z - A_z)^2}{2} \\
&=& \frac{A_x^2 + A_y^2 + A_z^2 + B_x^2 + B_y^2 + B_z^2 - B_x^2 + 2 A_x B_x - A_x^2 - B_y^2 + 2 A_y B_y - A_y^2 - B_z^2 + 2 A_z B_z - A_z^2}{2} \\
&=& \frac{2 A_x B_x + 2 A_y B_y + 2 A_z B_z}{2} \\
&=& A_x B_x + A_y B_y + A_z B_z
\end{array}
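The key step in the proof is the law-of-cosines identity (|A|² + |B|² − |C|²)/2 = A·B with C = B − A, and it is easy to check numerically. A small Python sketch (the sample vector B is my own choice):

```python
# Check that (|A|^2 + |B|^2 - |C|^2) / 2 with C = B - A
# equals the component sum Ax*Bx + Ay*By + Az*Bz.
A = (3.0, 5.0, 1.0)
B = (2.0, -1.0, 4.0)  # arbitrary test vector
C = tuple(b - a for a, b in zip(A, B))

sq = lambda v: sum(x * x for x in v)  # squared magnitude
law_of_cosines_side = (sq(A) + sq(B) - sq(C)) / 2
component_sum = sum(a * b for a, b in zip(A, B))
print(law_of_cosines_side, component_sum)
```

Both expressions come out to 5.0 for this choice of A and B, as the algebra above guarantees for any pair of vectors.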
 
jedishrfu said:
The other thing they may not have mentioned is that this form of A.B works only when the x, y and z axes are orthogonal.

To me the really cool proof was to show that, from the definition of the dot product, namely

A.B = |A||B|cos(ab-angle)

one can derive the AxBx + AyBy + AzBz expression, given that the axes are orthogonal.
Is there a way to prove it without relying on the law of cosines?
 
Well, you can just use the abstract definition of a scalar product as a positive definite bilinear form on a real vector space. Then you prove that there always exists an orthonormal basis (and thus arbitrarily many), e.g., by using the Schmidt algorithm for orthonormalizing a given basis. (That every vector space has a (Hamel) basis is equivalent to the axiom of choice, by the way.) The formula for the component representation of the scalar product then follows.

Then you can define angles by taking the cosine rule as the definition. The cosine itself is, of course, defined by calculus, e.g., via its Taylor series. In this way you can derive all the theorems of Euclidean geometry in a completely analytic way. It's just beautiful!
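The Schmidt (Gram-Schmidt) orthonormalization step mentioned above can be sketched in a few lines. This is only an illustration, not the abstract proof; the `euclidean` scalar product and the sample basis are my own choices:

```python
import math

def gram_schmidt(basis, dot):
    """Orthonormalize `basis` with respect to the scalar product `dot`
    using the classical Gram-Schmidt procedure."""
    ortho = []
    for v in basis:
        w = list(v)
        for e in ortho:
            c = dot(w, e)  # component of w along the unit vector e
            w = [wi - c * ei for wi, ei in zip(w, e)]
        norm = math.sqrt(dot(w, w))  # positive-definiteness guarantees norm > 0
        ortho.append([wi / norm for wi in w])
    return ortho

euclidean = lambda u, v: sum(a * b for a, b in zip(u, v))
basis = [[3, 5, 1], [1, 0, 0], [0, 1, 0]]  # any linearly independent set
e = gram_schmidt(basis, euclidean)
```

In the resulting basis, dot(e_i, e_j) = δ_ij, which is exactly the property that makes the scalar product collapse to the familiar component sum.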
 
Can you provide a reference for the alternate strategy? It sounds very cool.

The one thing I worry about is using more advanced machinery to prove basic results and thus getting caught in a kind of circular mathematical reasoning.
 
