Proving Orthogonal Bases Homework Statement

  • Thread starter: LosTacos
  • Tags: Bases, Orthogonal
  • #51
(u1x1)⋅(v1x1) +
(u1x1)⋅(v2x2) +
(u1x1)⋅(v3x3) +
(u2x2)⋅(v1x1) +
(u2x2)⋅(v2x2) +
(u2x2)⋅(v3x3) +
(u3x3)⋅(v1x1) +
(u3x3)⋅(v2x2) +
(u3x3)⋅(v3x3)
 
  • #52
LosTacos said:
(u1x1)⋅(v1x1) +
(u1x1)⋅(v2x2) +
(u1x1)⋅(v3x3) +
(u2x2)⋅(v1x1) +
(u2x2)⋅(v2x2) +
(u2x2)⋅(v3x3) +
(u3x3)⋅(v1x1) +
(u3x3)⋅(v2x2) +
(u3x3)⋅(v3x3)

Good - now keep going, but let's go at it in baby steps. To help you along, the first product above can be written as
u1v1x1x1
This is NOT u1v1x1!
What's the simplest way to write this product?

What is the second product in the list above, in simplest form?
 
  • #53
u1v1x1x1 = (u1v1)(x1)^2

u1v2x1x2 = (u1v2)(0)
 
  • #54
LosTacos said:
u1v1x1x1 = (u1v1)(x1)^2

u1v2x1x2 = (u1v2)(0)

Why is ##\mathbf{x}_1 \cdot \mathbf{x}_2 = 0##?
 
  • #55
Because it is an orthogonal set.
 
  • #56
LosTacos said:
Because it is an orthogonal set.

I think you mean orthonormal. What is the definition of an orthonormal set?
 
  • #57
LosTacos said:
u1v1x1x1 = (u1v1)(x1)^2
No, you can't square a vector. What is x1x1? This question gets to the heart of the problems you're having.
LosTacos said:
u1v2x1x2 = (u1v2)(0)
Yes.
 
  • #58
Mark44 said:
No, you can't square a vector.
I think the notation ##\mathbf x^2=\mathbf x\cdot\mathbf x## is not uncommon. My objection to it here is that it's just the same thing in a different notation, so it doesn't bring us closer to the final answer.
 
  • #59
An orthonormal set is a set of vectors that are all unit vectors.
 
  • #60
LosTacos said:
An orthonormal set is a set of vectors that are all unit vectors.
There's more.
 
  • #61
x1 ⋅ x1 = 1
 
  • #62
Fredrik said:
I think the notation ##\mathbf x^2=\mathbf x\cdot\mathbf x## is not uncommon.
I don't think this shortcut is helpful to the OP's understanding. I am not convinced that the OP would be able to distinguish between, say, ##c_1^2## and ##\mathbf x^2##; i.e., that there are different kinds of multiplication occurring.
Fredrik said:
My objection to it here is that it's just the same thing in a different notation, so it doesn't bring us closer to the final answer.
 
  • #63
So what does this simplify to?
(u1x1)⋅(v1x1) +
(u1x1)⋅(v2x2) +
(u1x1)⋅(v3x3) +
(u2x2)⋅(v1x1) +
(u2x2)⋅(v2x2) +
(u2x2)⋅(v3x3) +
(u3x3)⋅(v1x1) +
(u3x3)⋅(v2x2) +
(u3x3)⋅(v3x3)
 
  • #64
(u1v1) + (u2v2) + (u3v3)
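The simplification above can be sanity-checked numerically. A minimal sketch (added here, not part of the thread): take x1, x2, x3 to be the standard orthonormal basis of ℝ3, pick arbitrary coefficients for u and v, and sum all nine products (uixi) ⋅ (vjxj). The six cross terms vanish and only u1v1 + u2v2 + u3v3 survives.

```python
# Sketch: with x1, x2, x3 orthonormal, the nine terms (ui*xi) . (vj*xj)
# sum to u1*v1 + u2*v2 + u3*v3. The basis and coefficients below are
# arbitrary illustrative choices.

def dot(a, b):
    """Ordinary dot product of two same-length vectors."""
    return sum(ai * bi for ai, bi in zip(a, b))

def scale(c, v):
    """Scalar multiple c*v of a vector v."""
    return [c * vi for vi in v]

# Standard orthonormal basis of R^3: xi . xi = 1, xi . xj = 0 for i != j.
x = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
u = [2.0, -1.0, 3.0]   # coefficients u1, u2, u3
v = [0.5, 4.0, -2.0]   # coefficients v1, v2, v3

# All nine terms (ui*xi) . (vj*xj); six vanish because xi . xj = 0 for i != j.
nine_terms = sum(dot(scale(u[i], x[i]), scale(v[j], x[j]))
                 for i in range(3) for j in range(3))

simplified = u[0]*v[0] + u[1]*v[1] + u[2]*v[2]  # u1v1 + u2v2 + u3v3

print(nine_terms, simplified)  # both are -9.0
```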
 
  • #65
LosTacos said:
(u1v1) + (u2v2) + (u3v3)
Right.

Now back to the original problem:

So v1 ##\cdot## v2 = (a1b1 + a2b2 + ... + akbk) ##\cdot## (c1b1 + c2b2 + ... + ckbk)

When the dot product on the right is calculated, how many terms will there be before simplification?
How many terms will there be after simplification?
 
  • #66
v1 ⋅ v2 = (a1b1 + a2b2 + ... + akbk) ⋅ (c1b1 + c2b2 + ... + ckbk)

= (a1c1b1b1) + (a2c2b2b2) + ... + (akckbkbk)

= a1c1 + a2c2 + ... + akck

The only terms that survive are the ones of the form bi ⋅ bi. If i ≠ j, then bi ⋅ bj = 0, so that term cancels.
 
  • #67
OK, good. We've been struggling to get you to realize that bi ##\cdot## bi = 1 and bi ##\cdot## bj = 0 if i ≠ j.

Now you're in a better position to tackle the problem you posted in #1. It would be nice if you restated the problem.
 
  • #68
Problem: Let B be an ordered orthonormal basis for a k-dimensional subspace V of ℝn. Prove that for all v1,v2 ∈ V, v1·v2 = [v1]B · [v2]B, where the first dot product takes place in ℝn and the second takes place in ℝk.

Okay so:

v1 ⋅ v2 = (a1b1 + a2b2 + ... + akbk) ⋅ (c1b1 + c2b2 + ... + ckbk)

= (a1c1b1b1) + (a2c2b2b2) + ... + (akckbkbk)

= a1c1 + a2c2 + ... + akck

= [v1]B ⋅ [v2]B
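The equality just derived can also be checked numerically. A sketch (added here, not from the thread; the orthonormal basis b1, b2 and the coordinate values are arbitrary illustrative choices, with k = 2 and n = 3): the dot product of v1 and v2 computed in ℝ3 matches the dot product of their coordinate vectors computed in ℝ2.

```python
import math

# An orthonormal basis (b1, b2) of a 2-dimensional subspace of R^3
# (arbitrary illustrative choice; any orthonormal pair would do).
b1 = [1/math.sqrt(2),  1/math.sqrt(2), 0.0]
b2 = [1/math.sqrt(2), -1/math.sqrt(2), 0.0]

def dot(a, b):
    return sum(x*y for x, y in zip(a, b))

a = [3.0, -1.0]  # [v1]_B, coordinates of v1 with respect to B
c = [2.0,  5.0]  # [v2]_B, coordinates of v2 with respect to B

# Expand v1 = a1*b1 + a2*b2 and v2 = c1*b1 + c2*b2 as vectors in R^3.
v1 = [a[0]*b1[i] + a[1]*b2[i] for i in range(3)]
v2 = [c[0]*b1[i] + c[1]*b2[i] for i in range(3)]

lhs = dot(v1, v2)  # dot product in R^3
rhs = dot(a, c)    # dot product of the coordinate vectors in R^2
print(lhs, rhs)    # both equal a1c1 + a2c2 = 1, up to rounding
```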
 
  • #69
Now for the opposite direction:

Let v1 = a1b1 + ... + akbk

Then,
[v1]B = [a1, a2, ... , ak]

Let v2 = c1b1 + ... + ckbk

Then,
[v2]B = [c1, c2, ... , ck]

So,

[v1]B ⋅ [v2]B = [a1c1, a2c2, ... , akck]

Since each v is expressed as the coordinatization with respect to basis B, these can just be expanded to the linear combination of each and therefore = v1 ⋅ v2
 
  • #70
LosTacos said:
Problem: Let B be an ordered orthonormal basis for a k-dimensional subspace V of ℝn. Prove that for all v1,v2 ∈ V, v1·v2 = [v1]B · [v2]B, where the first dot product takes place in ℝn and the second takes place in ℝk.

Okay so:

v1 ⋅ v2 = (a1b1 + a2b2 + ... + akbk) ⋅ (c1b1 + c2b2 + ... + ckbk)

= (a1c1b1b1) + (a2c2b2b2) + ... + (akckbkbk)

= a1c1 + a2c2 + ... + akck

= [v1]B ⋅ [v2]B
The calculation is correct, but the second equality (the step where you go from the first line to the second) looks very strange. Can you please make another attempt to understand my post #43? I think I explained it there.
 
  • #71
LosTacos said:
Problem: Let B be an ordered orthonormal basis for a k-dimensional subspace V of ℝn. Prove that for all v1,v2 ∈ V, v1·v2 = [v1]B · [v2]B, where the first dot product takes place in ℝn and the second takes place in ℝk.

Okay so:

v1 ⋅ v2 = (a1b1 + a2b2 + ... + akbk) ⋅ (c1b1 + c2b2 + ... + ckbk)

= (a1c1b1b1) + (a2c2b2b2) + ... + (akckbkbk)

= a1c1 + a2c2 + ... + akck

= [v1]B ⋅ [v2]B

What you have here is the right side of the equation you're trying to prove. IOW, what you have shown is that [v1]B ⋅ [v2]B = a1c1 + a2c2 + ... + akck.

What you need to do is work with the other side of the equation - v1 ⋅ v2 - taking into consideration that, although v1 and v2 are vectors in the k-dimensional subspace V, they are also vectors in the n-dimensional vector space ℝn.
 
  • #72
LosTacos said:
So,

[v1]B ⋅ [v2]B = [a1c1, a2c2, ... , akck]

Since each v is expressed as the coordinatization with respect to basis B, these can just be expanded to the linear combination of each and therefore = v1 ⋅ v2
This part would require more explanation. (Edit: Actually, it's wrong. See below). Of course if you do the calculation one step at a time, then all you have to do to "go in the other direction" is to read the string of equalities from right to left.
 
  • #73
LosTacos said:
So,

[v1]B ⋅ [v2]B = [a1c1, a2c2, ... , akck]

Since each v is expressed as the coordinatization with respect to basis B, these can just be expanded to the linear combination of each and therefore = v1 ⋅ v2
Fredrik said:
This part would require more explanation. Of course if you do the calculation one step at a time, then all you have to do to "go in the other direction" is to read the string of equalities from right to left.
The equation above doesn't make sense to me. I'm reading the right side as a vector, which doesn't make sense as the output of a dot product.
 
  • #74
Mark44 said:
The equation above doesn't make sense to me. I'm reading the right side as a vector, which doesn't make sense as the output of a dot product.
Ah, yes you're right. I was too fast on the trigger there.
 
  • #75
Fredrik said:
Ah, yes you're right. I was too fast on the trigger there.
I can say from personal experience, it happens.
 
  • #76
I am confused as to what is correct for the reverse direction.

From the definition of coordinatization,
Let B = (b1, b2, ..., bk) be an ordered basis. Suppose v1 = a1b1 + a2b2 + ... + akbk. Then [v1]B, the coordinatization of v1 with respect to B, is the k-vector [a1, a2, ..., ak].

So if this follows true for [v2]B as well, the dot product will give me

[v1]B ⋅ [v2]B = [a1, a2, ..., ak] ⋅ [c1, c2, ..., ck].

So, when doing the dot product, why do the non-identical terms cancel out? Or is this not the right approach?
 
  • #77
LosTacos said:
I am confused as to what is correct for the reverse direction.

From the definition of coordinatization,
Let B = (b1, b2, ..., bk) be an ordered basis. Suppose v1 = a1b1 + a2b2 + ... + akbk. Then [v1]B, the coordinatization of v1 with respect to B, is the k-vector [a1, a2, ..., ak].

So if this follows true for [v2]B as well, the dot product will give me

[v1]B ⋅ [v2]B = [a1, a2, ..., ak] ⋅ [c1, c2, ..., ck].

So, when doing the dot product, why do the non-identical terms cancel out? Or is this not the right approach?
This is fine, but it's much harder to see what the next step is when you start at this end. If you want to see what the next step is, all you have to do is to write out all the steps of the previous calculation, the one that started with

v1 ⋅ v2 = ...

and ended with

= [v1]B ⋅ [v2]B. Now you can just read this string of equalities from right to left, and you should see it. (It might still be somewhat hard to see it, because of the strange way you have continued to write the calculation, in spite of what I said in #43).
 
  • #78
Well to prove that each side is equal to one another, I have to prove that each is a subset of the other. In essence, prove it both ways. So when doing the dot product of [a1, a2, ..., ak] ⋅ [c1, c2, ..., ck], are you saying that I need to create 9 different products for this example? And then, because it is orthonormal, 6 of them cancel out and the other 3 are left, which then shows that this equals v1 ⋅ v2?
 
  • #79
LosTacos said:
Well to prove that each side is equal to one another, I have to prove that each is a subset of the other. In essence, prove it both ways.
I was wondering why you were talking about "both ways". You don't have to do anything like that here. It's true that every equality in mathematics (at least in the branch of mathematics defined by ZFC set theory) is an equality between sets, and that this means that the equality
v1 ⋅ v2 = [v1]B ⋅ [v2]B
holds if and only if every element of the left-hand side is an element of the right-hand side, and vice versa.

This doesn't mean that you haven't already proved the equality above. You have. You did it by proving a string of equalities
v1 ⋅ v2 = X = Y = Z = [v1]B ⋅ [v2]B.
This is sufficient since equality is transitive. (If x=y and y=z, then x=z.)

Another way of looking at it is that the first of these equalities means that every element of v1 ⋅ v2 is an element of X and that every element of X is an element of v1 ⋅ v2. That's just what the equality sign means. The other equalities can be interpreted similarly. Because of this, there's no need to do anything more than to prove this string of equalities.

I thought that what you wanted to do was just to start with [v1]B ⋅ [v2]B, and here use the definitions of [v1]B, [v2]B and the dot product, to discover each step in the string of equalities in the opposite order compared to before. This is why I said that all you need to do to see the steps is to read the string of equalities from right to left.
 
  • #80
Did you understand that explanation? Is everything clear now? Do you understand why we found the way you wrote the solution confusing?

Since you solved the problem, I'm going to show you how I would have done it. I typed this up a week ago. I just didn't want to post it until you had worked out the solution for yourself.

Define ##v_1,\cdots,v_k\in V## by ##B=(v_1,\cdots,v_k)##. Let ##x,y\in V## be arbitrary. Let ##x_1,\dots,x_k## and ##y_1,\dots,y_k## be the unique real numbers such that ##x=\sum_{i=1}^k x_i v_i## and ##y=\sum_{i=1}^k y_i v_i##. We have
$$x\cdot y=\bigg(\sum_{i=1}^k x_i v_i\bigg)\cdot\bigg(\sum_{j=1}^k y_j v_j\bigg)=\sum_{i=1}^k\sum_{j=1}^k x_i y_j \underbrace{v_i\cdot v_j}_{=\delta_{ij}}=\sum_{i=1}^k x_i y_i =[x]_B\cdot[y]_B.$$
In case you're not familiar with the Kronecker delta, it's defined by
$$\delta_{ij}=\begin{cases}1 &\text{if }i=j\\0 &\text{if }i\neq j.\end{cases}$$
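The way the Kronecker delta collapses the double sum can be illustrated numerically. A small sketch (added here, not part of the thread; the coordinates and k = 3 are arbitrary choices):

```python
def delta(i, j):
    """Kronecker delta: 1 if i == j, else 0."""
    return 1.0 if i == j else 0.0

# Arbitrary illustrative coordinates x_i, y_i (here k = 3).
xs = [2.0, -1.0, 4.0]
ys = [0.5, 3.0, -2.0]
k = len(xs)

# Double sum over i and j with the delta, as in the displayed equation.
double_sum = sum(xs[i] * ys[j] * delta(i, j)
                 for i in range(k) for j in range(k))

# Single sum that remains after the delta kills every i != j term.
single_sum = sum(xs[i] * ys[i] for i in range(k))

print(double_sum, single_sum)  # both are -10.0
```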
 