Cool Beginner Vector Problem from K&K

  • #1
I thought this was a super cool vector example problem from An Introduction to Mechanics by K&K. It says:
The problem is to find a unit vector lying in the x−y plane that is perpendicular to the vector A = (3, 5, 1).
The solution begins by recognizing that since the problem asks for a unit vector B perpendicular to A, the dot product of A and B must equal zero. Using the component definition of the dot product, [itex]A \cdot B = A_x B_x + A_y B_y = 0[/itex], we can set up our first equation: [itex]3B_x + 5B_y = 0[/itex].

Next, we can say that since it's a unit vector, then the magnitude must be equal to one, and hence [itex]B_x^2 + B_y^2 = 1^2[/itex].

Now we have two equations to solve for [itex]B_x[/itex] and [itex]B_y[/itex]!
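
Solving these (a quick sketch of the algebra; the book may arrange it differently): from the first equation, [itex]B_x = -\tfrac{5}{3} B_y[/itex]; substituting into the second gives [itex]\tfrac{34}{9} B_y^2 = 1[/itex], so
[tex]
B_y = \pm\frac{3}{\sqrt{34}}, \qquad B_x = \mp\frac{5}{\sqrt{34}}, \qquad
B = \pm\frac{1}{\sqrt{34}}\left(-5, 3, 0\right).
[/tex]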

I thought this was a great example, neatly combining all the definitions and ideas we'd learned so far.
 
  • #2
Your A·B expression is missing a term, namely [itex]A_z B_z[/itex], but since [itex]B_z[/itex] is zero it drops out. I mention it because the problem is being solved in x, y, z space, and you should show that term to complete your solution.
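
For completeness, with [itex]A = (3, 5, 1)[/itex] and [itex]B_z = 0[/itex] the full condition reads
[tex]
A \cdot B = A_x B_x + A_y B_y + A_z B_z = 3B_x + 5B_y + (1)(0) = 3B_x + 5B_y = 0.
[/tex]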
 
  • #3
Thanks. The book didn't include it, so I didn't include it either.
 
  • #4
The other thing they may not have mentioned is that the form of A·B used here works only when the x, y, and z axes are orthogonal.

To me the really cool proof was to show that from the definition of the dot product, namely

[itex]A \cdot B = |A||B|\cos\theta[/itex], where [itex]\theta[/itex] is the angle between A and B,

one can derive the [itex]A_x B_x + A_y B_y + A_z B_z[/itex] expression, given that the axes are orthogonal.
 
  • #5
I had never written the proof myself. Inspired by what jedishrfu wrote, my attempt follows.

To avoid cluttering the thread with a long block of [itex]\LaTeX[/itex], the proof is hidden under spoiler tags. (The second line below uses the law of cosines on the triangle with sides [itex]A[/itex], [itex]B[/itex], and [itex]C = B - A[/itex].)

[tex]
\begin{array}{rcl}
A \cdot B &=& |A||B| \cos{\theta} \\
&=& |A||B| \left( \frac{|A|^2+|B|^2-|C|^2}{2|A||B|} \right) \\
&=& \frac{|A|^2+|B|^2-|C|^2}{2} \\
&=& \frac{|A|^2+|B|^2-|B-A|^2}{2} \\
&=& \frac{\left( A_x \right) ^ 2 + \left( A_y \right) ^ 2 + \left( A_z \right) ^ 2
+ \left( B_x \right) ^ 2 + \left( B_y \right) ^ 2 + \left( B_z \right) ^ 2
- \left(B_x - A_x \right) ^ 2 - \left(B_y - A_y \right) ^ 2 - \left(B_z - A_z \right) ^ 2}{2} \\
&=& \frac{\left( A_x \right) ^ 2 + \left( A_y \right) ^ 2 + \left( A_z \right) ^ 2
+ \left( B_x \right) ^ 2 + \left( B_y \right) ^ 2 + \left( B_z \right) ^ 2
- \left(B_x \right)^2 + 2 \left(A_x B_x \right) - \left(A_x \right) ^ 2
- \left(B_y \right)^2 + 2 \left(A_y B_y \right) - \left(A_y \right) ^ 2
- \left(B_z \right)^2 + 2 \left(A_z B_z \right) - \left(A_z \right) ^ 2}{2} \\
&=& \frac{2 \left(A_x B_x \right) + 2 \left(A_y B_y \right) + 2 \left(A_z B_z \right)}{2} \\
&=& A_x B_x + A_y B_y + A_z B_z
\end{array}
[/tex]
 
  • #6
jedishrfu said:
The other thing they may not have mentioned is that the form of A·B used here works only when the x, y, and z axes are orthogonal.

To me the really cool proof was to show that from the definition of the dot product, namely

[itex]A \cdot B = |A||B|\cos\theta[/itex], where [itex]\theta[/itex] is the angle between A and B,

one can derive the [itex]A_x B_x + A_y B_y + A_z B_z[/itex] expression, given that the axes are orthogonal.

Is there a way to prove it without relying on the law of cosines?
 
  • #8
Well, you can just use the abstract definition of a scalar product as a positive-definite bilinear form on a real vector space. Then you prove that there always exists an orthonormal basis (and thus arbitrarily many), e.g., by using the Gram–Schmidt algorithm to orthonormalize a given basis (that every vector space has a (Hamel) basis is equivalent to the axiom of choice, by the way), and then the formula for the component representation of the scalar product follows.
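
To spell out that last step (a sketch, assuming an orthonormal basis [itex]e_1, e_2, e_3[/itex] with [itex]e_i \cdot e_j = \delta_{ij}[/itex] and the expansions [itex]A = \sum_i A_i e_i[/itex], [itex]B = \sum_j B_j e_j[/itex]): bilinearity gives
[tex]
A \cdot B = \Big( \sum_i A_i e_i \Big) \cdot \Big( \sum_j B_j e_j \Big)
= \sum_{i,j} A_i B_j \, (e_i \cdot e_j)
= \sum_{i,j} A_i B_j \, \delta_{ij}
= A_x B_x + A_y B_y + A_z B_z .
[/tex]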

Then you can define angles by using the cosine rule as definition. The cosine is, of course, defined by calculus, e.g., via its Taylor series. In this way you can derive all theorems about Euclidean geometry in a completely analytic way. It's just beautiful!
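
As a concrete illustration of the orthonormalization step, here is a minimal Gram–Schmidt sketch in Python/NumPy (my own example, not from any particular textbook; it assumes the input vectors are linearly independent):

[code]
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        # Subtract the components along the orthonormal vectors built so far.
        w = v - sum(np.dot(v, e) * e for e in basis)
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

# Start from A = (3, 5, 1) plus two helper directions.
vecs = np.array([[3.0, 5.0, 1.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
E = gram_schmidt(vecs)
print(np.round(E @ E.T, 12))  # approximately the 3x3 identity matrix
[/code]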
 
  • #9
Can you provide a reference for the alternate strategy? It sounds very cool.

The one thing I worry about is using more advanced stuff to proof basic stuff and thus get caught in a kind of mathematical reasoning loop.
 
