B Error in Vector Addition: U & V Perpendicular

AI Thread Summary
The discussion centers on an error in a vector calculation involving two perpendicular unit vectors, U and V, and a vector W. The user is trying to solve the equation W × V = U − W but encounters inconsistencies in their component calculations. Clarification is provided regarding notation, specifically the use of the cross product, and the correct application of the vector product rules. The key error identified is a sign mistake and a misplacement of components in the calculation of W × V. The resolution emphasizes the importance of correctly matching the coefficients of the orthonormal basis vectors to derive accurate equations.
Amine_prince
I have two unit vectors in space, U and V, and U and V are perpendicular.
W is a vector that satisfies W ^ V = U - W.
The following resolution is incorrect, and I want to understand why:

We use the frame (O, U, V, U ^ V). Components of U: (1, 0, 0), V: (0, 1, 0), W: (a, b, c), where a, b and c are real numbers.
Components of W ^ V: (c, 0, a), and of U - W: (1 - a, -b, -c).
Where is the error here?
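
One way to locate the error is to plug concrete numbers into both sides. Below is a minimal sketch, assuming NumPy is available, using the arbitrary values a = 2, b = 3, c = 5:

Code:
import numpy as np

# Arbitrary illustrative values for the components of W
a, b, c = 2.0, 3.0, 5.0

# Components in the orthonormal frame (U, V, U ^ V)
U = np.array([1.0, 0.0, 0.0])
V = np.array([0.0, 1.0, 0.0])
W = np.array([a, b, c])

print(np.cross(W, V))   # [-5.  0.  2.]  i.e. (-c, 0, a)
print(U - W)            # [-1. -3. -5.]  i.e. (1 - a, -b, -c)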
 
Amine_prince said:
I have two unit vectors in space, U and V, and U and V are perpendicular.
W is a vector that satisfies W ^ V = U - W.
The following resolution is incorrect, and I want to understand why:

We use the frame (O, U, V, U ^ V). Components of U: (1, 0, 0), V: (0, 1, 0), W: (a, b, c), where a, b and c are real numbers.
Components of W ^ V: (c, 0, a), and of U - W: (1 - a, -b, -c).
Where is the error here?

I'm not sure about your notation: does ^ mean the vector cross product? I've always used \times.

If so, then you are on the right track. U, V, U \times V can be used as an orthonormal basis. So we can write W as a linear combination:

W = a U + b V + c (U \times V)

Then W \times V = U - W becomes:

(a U + b V + c (U \times V)) \times V = U - a U - b V - c (U \times V)

Now, we use the rules:
X \times X = 0
(X \times Y) \times Z = (X \cdot Z) Y - (Y \cdot Z) X

where \cdot is the vector scalar product.
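
The second rule (the vector triple product expansion) can be spot-checked numerically. A minimal sketch, assuming NumPy is available:

Code:
import numpy as np

rng = np.random.default_rng(0)
X, Y, Z = rng.standard_normal((3, 3))   # three random 3-vectors

lhs = np.cross(np.cross(X, Y), Z)
rhs = np.dot(X, Z) * Y - np.dot(Y, Z) * X
print(np.allclose(lhs, rhs))            # True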

Applying these rules gives us:
a (U \times V) + 0 + c (U \cdot V) V - c (V \cdot V) U = U - a U - b V - c (U \times V)

This simplifies to:
a (U \times V) - c U = (1 - a) U - b V - c (U \times V)

So if you just match up the coefficients of the corresponding basis vectors on each side, this gives three equations:
  1. a = -c (from the U \times V components)
  2. 0 = -b (from the V components)
  3. -c = 1 - a (from the U components)
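
Solving these: equation 1 gives c = -a, and substituting into equation 3 gives a = 1 - a, so a = 1/2, b = 0, c = -1/2. A minimal numerical check of that solution, assuming NumPy is available:

Code:
import numpy as np

U = np.array([1.0, 0.0, 0.0])
V = np.array([0.0, 1.0, 0.0])

# Solution of the three equations above: a = 1/2, b = 0, c = -1/2
a, b, c = 0.5, 0.0, -0.5
W = a * U + b * V + c * np.cross(U, V)

print(np.allclose(np.cross(W, V), U - W))   # True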
 
Thank you, sir :)
 
Amine_prince said:
Thank you, sir :)
Looking at what you wrote, I think that your problem is that

(a, b, c) \times (0,1,0) = (-c, 0, a)
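
That corrected component formula can also be confirmed symbolically. A minimal sketch, assuming SymPy is available:

Code:
import sympy as sp

a, b, c = sp.symbols('a b c', real=True)
W = sp.Matrix([a, b, c])
V = sp.Matrix([0, 1, 0])

print(W.cross(V).T)   # Matrix([[-c, 0, a]])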
 
Yes, I missed the sign and flipped the components by mistake.
 