The discussion revolves around proving the equality of dot products in an orthonormal basis, specifically that for vectors v_1 and v_2 in a k-dimensional subspace V of ℝ^n, the equation v_1 · v_2 = [v_1]_B · [v_2]_B holds true. Participants clarify the notation and steps involved in expressing v_1 and v_2 as linear combinations of basis vectors b_1, b_2, ..., b_k. The final conclusion confirms that the dot product of the vectors in the original space equals the dot product of their representations in the orthonormal basis.
PREREQUISITES: Students and educators in mathematics, particularly those focusing on linear algebra, vector calculus, and related fields. This discussion is beneficial for anyone seeking to deepen their understanding of vector operations in orthonormal spaces.
I don't think this shortcut is helpful to the OP's understanding. I am not convinced that the OP would be able to distinguish between, say, ##c_1^2## and ##\mathbf x^2##; i.e., that there are different kinds of multiplication occurring. Fredrik said:I think the notation ##\mathbf x^2=\mathbf x\cdot\mathbf x## is not uncommon.
Fredrik said:My objection to it here is that it's just the same thing in a different notation, so it doesn't bring us closer to the final answer.
Right. LosTacos said:##(u_1 v_1) + (u_2 v_2) + (u_3 v_3)##
The calculation is correct, but the second equality (the step where you go from the first line to the second) looks very strange. Can you please make another attempt to understand my post #43? I think I explained it there. LosTacos said:Problem: Let B be an ordered orthonormal basis for a k-dimensional subspace V of ##\mathbb{R}^n##. Prove that for all ##v_1, v_2 \in V##, ##v_1 \cdot v_2 = [v_1]_B \cdot [v_2]_B##, where the first dot product takes place in ##\mathbb{R}^n## and the second takes place in ##\mathbb{R}^k##.
Okay so:
##v_1 \cdot v_2 = (a_1 b_1 + a_2 b_2 + \dots + a_k b_k) \cdot (c_1 b_1 + c_2 b_2 + \dots + c_k b_k)##
##= a_1 c_1 (b_1 \cdot b_1) + a_2 c_2 (b_2 \cdot b_2) + \dots + a_k c_k (b_k \cdot b_k)##
##= a_1 c_1 + a_2 c_2 + \dots + a_k c_k##
##= [v_1]_B \cdot [v_2]_B##
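For reference, the intermediate step being skipped here — the one flagged as strange above — can be written out in full. This is a sketch of the standard expansion, assuming only the orthonormality relation ##b_i \cdot b_j = \delta_{ij}##:
$$v_1 \cdot v_2 = \Bigl(\sum_{i=1}^{k} a_i b_i\Bigr) \cdot \Bigl(\sum_{j=1}^{k} c_j b_j\Bigr) = \sum_{i=1}^{k}\sum_{j=1}^{k} a_i c_j \,(b_i \cdot b_j) = \sum_{i=1}^{k} a_i c_i = [v_1]_B \cdot [v_2]_B,$$
where the second-to-last equality holds because every term with ##i \neq j## vanishes.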
This part would require more explanation. (Edit: Actually, it's wrong. See below.) Of course if you do the calculation one step at a time, then all you have to do to "go in the other direction" is to read the string of equalities from right to left. LosTacos said:So,
##[v_1]_B \cdot [v_2]_B = [a_1 c_1, a_2 c_2, \dots, a_k c_k]##
Since each v is expressed as the coordinatization with respect to basis B, these can just be expanded to the linear combination of each and therefore ##= v_1 \cdot v_2##.
The equation above doesn't make sense to me. I'm reading the right side as a vector, which doesn't make sense as the output of a dot product. Fredrik said:This part would require more explanation. Of course if you do the calculation one step at a time, then all you have to do to "go in the other direction" is to read the string of equalities from right to left.
Ah, yes you're right. I was too fast on the trigger there. Mark44 said:The equation above doesn't make sense to me. I'm reading the right side as a vector, which doesn't make sense as the output of a dot product.
I can say from personal experience, it happens. Fredrik said:Ah, yes you're right. I was too fast on the trigger there.
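For clarity, the corrected version of the quoted line — as the exchange above implies — is a scalar rather than a vector:
$$[v_1]_B \cdot [v_2]_B = [a_1, a_2, \dots, a_k] \cdot [c_1, c_2, \dots, c_k] = a_1 c_1 + a_2 c_2 + \dots + a_k c_k.$$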
This is fine, but it's much harder to see what the next step is when you start at this end. If you want to see what the next step is, all you have to do is to write out all the steps of the previous calculation, the one that started with ##v_1 \cdot v_2##. LosTacos said:I am confused as to what is correct for the reverse direction.
From the definition of coordinatization,
Let B = (b_1, b_2, ..., b_k) be an ordered basis. Suppose ##v_1 = a_1 b_1 + a_2 b_2 + \dots + a_k b_k##. Then ##[v_1]_B##, the coordinatization of ##v_1## with respect to B, is the k-vector ##[a_1, a_2, \dots, a_k]##.
So if this holds for ##[v_2]_B## as well, the dot product will give me
##[v_1]_B \cdot [v_2]_B = [a_1, a_2, \dots, a_k] \cdot [c_1, c_2, \dots, c_k]##.
So, when doing the dot product, why do the non-identical terms cancel out? Or is this not the right approach?
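A quick numerical sketch (not from the thread; it assumes NumPy, and the dimensions, seed, and use of a QR factorization to build an orthonormal basis are arbitrary illustrative choices) makes the cancellation concrete:

import numpy as np

# Illustrative check of v1 . v2 = [v1]_B . [v2]_B for an orthonormal basis
# of a k-dimensional subspace of R^n.
rng = np.random.default_rng(0)
n, k = 5, 3

# QR factorization of a random n x k matrix; the columns of Q are
# orthonormal vectors b_1, ..., b_k spanning a k-dimensional subspace of R^n.
Q, _ = np.linalg.qr(rng.standard_normal((n, k)))

# Coordinate vectors [v1]_B = a and [v2]_B = c, chosen at random in R^k.
a = rng.standard_normal(k)
c = rng.standard_normal(k)

# The vectors themselves: v1 = a_1 b_1 + ... + a_k b_k, similarly v2.
v1 = Q @ a
v2 = Q @ c

# The two values agree up to floating-point error, because b_i . b_j is
# 1 when i = j and 0 otherwise, so the non-identical (cross) terms drop out.
print(np.dot(v1, v2))  # dot product in R^n
print(np.dot(a, c))    # dot product of the coordinate vectors in R^k

Printing Q.T @ Q and comparing it with the identity matrix is the same statement as ##b_i \cdot b_j = \delta_{ij}##, which is exactly why the non-identical terms cancel.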
I was wondering why you were talking about "both ways". You don't have to do anything like that here. It's true that every equality in mathematics (at least in the branch of mathematics defined by ZFC set theory) is an equality between sets, and that this means that the equality LosTacos said:Well to prove that each side is equal to one another, I have to prove that each is a subset of the other. In essence, prove it both ways.