Cool Beginner Vector Problem from K&K

AI Thread Summary
The discussion centers on a problem from "An Introduction to Mechanics" that involves finding a unit vector in the x-y plane perpendicular to a given vector A = (3, 5, 1). The solution requires the dot product, establishing two equations: one setting the dot product to zero and another fixing the unit vector's magnitude. Participants highlight that the component formula for the dot product is valid only in orthogonal coordinates and discuss deriving it from the geometric definition. There is also a conversation about proving mathematical concepts without relying on advanced methods, with concerns about circularity in basic proofs. The exchange emphasizes the beauty of mathematical derivations and the pursuit of understanding foundational principles.
logan3
I thought this was a super cool vector example problem from An Introduction to Mechanics by K&K. It says:
The problem is to find a unit vector lying in the x−y plane that is perpendicular to the vector A = (3, 5, 1).
The solution begins by recognizing that since the problem asks for a unit vector B perpendicular to A, the dot product of A and B must equal zero. By the vector component definition of the dot product, A \cdot B = A_x B_x + A_y B_y = 0, we can set up our first equation: 3B_x + 5B_y = 0.

Next, since B is a unit vector, its magnitude must equal one, and hence B_x^2 + B_y^2 = 1.

Now we have two equations to solve for B_x and B_y!
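For completeness, the two equations can be solved as follows (this step is left to the reader in the thread; the working is mine, not quoted from the book):

\begin{array}{rcl}
3 B_x + 5 B_y &=& 0 \quad \Rightarrow \quad B_y = -\frac{3}{5} B_x \\
B_x^2 + \frac{9}{25} B_x^2 &=& 1 \quad \Rightarrow \quad B_x = \pm\frac{5}{\sqrt{34}}, \quad B_y = \mp\frac{3}{\sqrt{34}}
\end{array}

so B = \pm \frac{1}{\sqrt{34}} \left( 5, -3, 0 \right), and either sign gives a valid answer.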

I thought this was a great example, neatly combining all the definitions and ideas we'd learned so far.
 
Your A \cdot B expression is missing a term, namely A_z B_z, but since B_z is zero it drops out. I mention it because the problem is posed in x, y, z space, and you should show the term to complete your solution.
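Written out in full (my rendering of this point, with B_z = 0 because B lies in the x-y plane):

A \cdot B = A_x B_x + A_y B_y + A_z B_z = 3 B_x + 5 B_y + (1)(0) = 0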
 
Thanks. The book didn't include it, so I didn't include it either.
 
The other thing they may not have mentioned is that this form of A \cdot B works only when the x, y, and z axes are orthogonal.

To me the really cool proof was to show that from the geometric definition of the dot product, namely

A \cdot B = |A||B| \cos\theta,

where \theta is the angle between A and B, one can derive the A_x B_x + A_y B_y + A_z B_z expression, given that the axes are orthogonal.
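Where the orthogonality assumption enters can be made explicit by expanding both vectors over basis vectors \hat{e}_i and using bilinearity (a standard argument, added here for illustration rather than quoted from the thread):

A \cdot B = \left( \sum_i A_i \hat{e}_i \right) \cdot \left( \sum_j B_j \hat{e}_j \right) = \sum_{i,j} A_i B_j \left( \hat{e}_i \cdot \hat{e}_j \right),

which reduces to A_x B_x + A_y B_y + A_z B_z exactly when \hat{e}_i \cdot \hat{e}_j = \delta_{ij}.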
 
I had never written the proof myself. Inspired by what jedishrfu wrote, my attempt follows.

To avoid a wall of \LaTeX, the original post hid the proof under spoiler tags; it is reproduced below.

Here C = B - A is the side of the triangle opposite the angle \theta between A and B, so the law of cosines gives \cos\theta = \frac{|A|^2 + |B|^2 - |C|^2}{2|A||B|}:

\begin{array}{rcl}
A \cdot B &=& |A||B| \cos{\theta} \\
&=& |A||B| \left( \frac{|A|^2 + |B|^2 - |C|^2}{2|A||B|} \right) \\
&=& \frac{|A|^2 + |B|^2 - |C|^2}{2} \\
&=& \frac{|A|^2 + |B|^2 - |B - A|^2}{2} \\
&=& \frac{A_x^2 + A_y^2 + A_z^2 + B_x^2 + B_y^2 + B_z^2 - (B_x - A_x)^2 - (B_y - A_y)^2 - (B_z - A_z)^2}{2} \\
&=& \frac{A_x^2 + A_y^2 + A_z^2 + B_x^2 + B_y^2 + B_z^2 - B_x^2 + 2 A_x B_x - A_x^2 - B_y^2 + 2 A_y B_y - A_y^2 - B_z^2 + 2 A_z B_z - A_z^2}{2} \\
&=& \frac{2 A_x B_x + 2 A_y B_y + 2 A_z B_z}{2} \\
&=& A_x B_x + A_y B_y + A_z B_z
\end{array}
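As a quick numerical sanity check on this identity (not part of the thread; a minimal sketch in Python using only the standard library, with function names of my own choosing):

```python
import math
import random

def dot_components(a, b):
    """Component form: A_x B_x + A_y B_y + A_z B_z."""
    return sum(ai * bi for ai, bi in zip(a, b))

def dot_geometric(a, b):
    """Geometric form |A||B|cos(theta), with cos(theta) taken
    from the law of cosines on the triangle A, B, C = B - A."""
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    norm_c_sq = sum((bi - ai) ** 2 for ai, bi in zip(a, b))
    cos_theta = (norm_a ** 2 + norm_b ** 2 - norm_c_sq) / (2 * norm_a * norm_b)
    return norm_a * norm_b * cos_theta

random.seed(0)
for _ in range(1000):
    a = [random.uniform(-10, 10) for _ in range(3)]
    b = [random.uniform(-10, 10) for _ in range(3)]
    assert math.isclose(dot_components(a, b), dot_geometric(a, b), abs_tol=1e-9)
print("component and geometric forms agree on 1000 random pairs")
```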
 
jedishrfu said:
The other thing they may not have mentioned is that this form of A \cdot B works only when the x, y, and z axes are orthogonal.

To me the really cool proof was to show that from the geometric definition of the dot product, namely

A \cdot B = |A||B| \cos\theta,

where \theta is the angle between A and B, one can derive the A_x B_x + A_y B_y + A_z B_z expression, given that the axes are orthogonal.
Is there a way to prove it without relying on the law of cosines?
 
Well, you can just use the abstract definition of a scalar product as a positive-definite bilinear form on a real vector space. Then you prove that an orthonormal basis always exists (and thus arbitrarily many of them), e.g., by applying the Gram-Schmidt algorithm to orthonormalize a given basis (that every vector space has a (Hamel) basis is, by the way, equivalent to the axiom of choice), and then the formula for the representation of the scalar product follows.
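To make the orthonormalization step concrete, here is a minimal Gram-Schmidt sketch in Python (my illustration, not code from the thread), seeded with the thread's vector A = (3, 5, 1) as the first basis vector:

```python
import math

def gram_schmidt(basis):
    """Orthonormalize linearly independent vectors with respect
    to the standard scalar product (modified Gram-Schmidt)."""
    ortho = []
    for v in basis:
        w = list(v)
        for e in ortho:
            # Subtract the projection of w onto each finished unit vector
            proj = sum(wi * ei for wi, ei in zip(w, e))
            w = [wi - proj * ei for wi, ei in zip(w, e)]
        norm = math.sqrt(sum(x * x for x in w))
        ortho.append([x / norm for x in w])
    return ortho

# A = (3, 5, 1) plus two standard basis vectors form a basis of R^3
basis = [[3.0, 5.0, 1.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
for e in gram_schmidt(basis):
    print([round(x, 4) for x in e])
```

The first output vector is A/|A| = (3, 5, 1)/\sqrt{35}; the other two are unit vectors orthogonal to it and to each other.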

Then you can define angles by using the cosine rule as a definition. The cosine itself is, of course, defined by calculus, e.g., via its Taylor series. In this way you can derive all the theorems of Euclidean geometry in a completely analytic way. It's just beautiful!
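Concretely, in this approach the angle is defined from the scalar product rather than the other way around (my paraphrase; the post does not write the formula out):

\cos\theta \equiv \frac{A \cdot B}{|A||B|}, \qquad \cos x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!} x^{2n}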
 
Can you provide a reference for this alternative strategy? It sounds very cool.

The one thing I worry about is using more advanced machinery to prove basic results, and thus getting caught in a kind of circular mathematical reasoning.
 
