# Question about implication from scalar product

In summary: where is the connection between lattices of rank n-1 and those of rank n, and why can we assume the validity of the assertion for n-1? The connection is that the Gram-Schmidt orthogonalization is unique for ##d_n,\dots,d_2##.

#### Peter_Newman

Hi,

Let's say we have the Gram-Schmidt vectors ##b_i^*##, and let ##d_n^*,\dots,d_1^*## be the Gram-Schmidt version of the dual lattice vectors ##d_n,\dots,d_1##. Let further ##b_1^* = b_1##, and let ##d_1^*## be the projection of ##d_1## onto ##span(d_2,\dots,d_n)^{\bot} = span(b_1)##. We have ##d_1^* \in span(b_1)## and ##\langle d_1^*,b_1 \rangle = \langle d_1 , b_1 \rangle = 1##. Why does this imply that ##d_1^* = b_1 / ||b_1||^2 = b_1^* / ||b_1^*||^2##?

What I have proven so far is that ##||d_1^*||\,||b_1^*|| = 1##, but I cannot see from the scalar product that this implies ##d_1^* = b_1^* / ||b_1^*||^2##. From what I have proven I can rewrite ##||d_1^*|| = 1/||b_1^*|| = ||b_1^*||/||b_1^*||^2##, but I see no way to deduce from this that ##d_1^* = b_1^* / ||b_1^*||^2## holds... I suppose you can't just drop the norms here.
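For concreteness, the claimed identity can at least be sanity-checked numerically (this is a check, not a proof). The rank-2 basis below and its dual are invented for illustration; Gram-Schmidt only orthogonalizes, without normalizing, and the dual is run in reverse order ##d_2, d_1## as in the statement:

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(vectors):
    """Orthogonalize (no normalization), in the given order."""
    out = []
    for v in vectors:
        for w in out:
            c = dot(v, w) / dot(w, w)
            v = [vi - c * wi for vi, wi in zip(v, w)]
        out.append(v)
    return out

# hypothetical example basis and its dual (exact rationals)
b1, b2 = [F(2), F(0)], [F(1), F(3)]
d1, d2 = [F(1, 2), F(-1, 6)], [F(0), F(1, 3)]

# dual property <d_i, b_j> = delta_ij
assert dot(d1, b1) == 1 and dot(d1, b2) == 0
assert dot(d2, b2) == 1 and dot(d2, b1) == 0

b1s, b2s = gram_schmidt([b1, b2])   # primal order b_1, b_2
d2s, d1s = gram_schmidt([d2, d1])   # dual in REVERSE order d_2, d_1

# the claimed identity: d_1^* = b_1^* / ||b_1^*||^2
assert d1s == [x / dot(b1s, b1s) for x in b1s]
```

Here ##d_1^* = (1/2, 0) = b_1^*/||b_1^*||^2##, consistent with the claim; the norms alone indeed only give ##||d_1^*||\,||b_1^*|| = 1##, so the direction information has to come from ##d_1^* \in span(b_1)##.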

If anyone here has any ideas, it would help me a lot!

##b_1/||b_1||^2## and ##b_1^*/||b_1^*||^2## have norm ##1/||b_1||## and ##1## respectively. They are not equal.

Why is it even possible to say ##\langle d_1^*,b_1 \rangle = \langle d_1 , b_1 \rangle = 1##? I mean, if the dual basis is ##d_n,\dots,d_1## and ##d_n^*,\dots,d_1^*## are its Gram-Schmidt vectors, then ##d_1 = d_1^*## isn't valid anymore, is it? Or is the equality only valid because ##d_1^* \in span(b_1)##?

Last edited:
Sorry, I realized maybe I'm being dumb. When you do Gram-Schmidt, do you normalize the vectors or only orthogonalize?

Hey @Office_Shredder, I will quote the source:

I would say we are just orthogonalizing.

In the meantime I have found out why this scalar product and its implication are valid here (in the case ##i=1##). But how does one generalize this to the other vectors?

But I don't quite understand the last paragraph: what exactly is the hypothesis here, and how do we use it for the induction step?

Last edited:
Can someone perhaps explain to me in more detail how the generalization is carried out here? I have difficulty seeing the transition from rank n-1 bases to general rank n bases.

As you saw in another thread, ##\pi(b_2),\dots,\pi(b_n)## is a lattice in the span orthogonal to ##b_1##. ##d_j## for ##j\geq 2## still satisfies ##\langle d_j,\pi(b_k)\rangle=\delta_{j,k}## (do you know why?). So ##d_n,\dots,d_2## is a dual lattice to ##\pi(b_2),\pi(b_3),\dots##. Since it's the same list of vectors in the same order as the first time, Gram-Schmidt gives the same vectors as well. So by induction ##d_2^*=\pi(b_2)/||\pi(b_2)||^2##. But ##\pi(b_2)=b_2^*## by definition.
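The key fact ##\langle d_j,\pi(b_k)\rangle=\delta_{j,k}## for ##j,k\geq 2## can be verified on a small concrete example (the rank-3 basis and its dual below are invented for illustration; the dual rows were computed by hand and are re-checked in the code):

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def proj_away(v, b):
    """Project v onto the orthogonal complement of b (the pi of the thread)."""
    c = dot(v, b) / dot(b, b)
    return [vi - c * bi for vi, bi in zip(v, b)]

# hypothetical rank-3 basis (rows) and its dual basis, in exact rationals
b = [[F(2), F(0), F(0)], [F(1), F(3), F(0)], [F(1), F(1), F(4)]]
d = [[F(1, 2), F(-1, 6), F(-1, 12)],
     [F(0), F(1, 3), F(-1, 12)],
     [F(0), F(0), F(1, 4)]]

# sanity: <d_i, b_j> = delta_ij (so d really is the dual basis)
for i in range(3):
    for j in range(3):
        assert dot(d[i], b[j]) == (1 if i == j else 0)

# <d_j, pi(b_k)> = <d_j, b_k> = delta_{jk} for j, k >= 2 (0-based indices 1, 2),
# because pi(b_k) = b_k + alpha*b_1 and <d_j, b_1> = 0 for j >= 2
for j in range(1, 3):
    for k in range(1, 3):
        assert dot(d[j], proj_away(b[k], b[0])) == (1 if j == k else 0)
```

So after projecting along ##b_1##, the vectors ##d_n,\dots,d_2## really do play the role of a dual basis for the projected lattice.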

Peter_Newman
Hey @Office_Shredder, that ##\langle d_j,\pi(b_k)\rangle=\delta_{j,k}## holds I have proved via a (longer) calculation, which ends up in ##\langle d_j,\pi(b_k)\rangle = \langle d_j,b_k\rangle##, and the last part of that can be interpreted as the delta. I can argue based on my calculation, but not in a geometrical way...

What do you mean by "the Gram-Schmidt gives the same vectors as well" — the Gram-Schmidt of what? Is this what we assume in the claim?

The last step "so by induction ##d_2^* = \dots##" comes because we now view ##d_n,\dots,d_2## together with ##\pi(b_2),\pi(b_3),\dots##? Or what else do we consider for this step?

Remember ##\pi(b_k)=b_j+\alpha b_1## for some ##\alpha##. So ##\langle d_j,\pi(b_k)\rangle=\langle d_j,b_k\rangle+\alpha\langle d_j,b_1\rangle##, and we know ##j>1##, so the second term is zero.

When I say Gram-Schmidt gives the same vectors, consider the lists ##d_1,\dots,d_n## and ##d_2,\dots,d_n##, in that order. ##d_2^*## is ambiguous: the two lists give different results for it. You don't have this problem with ##d_n,\dots,d_2## and ##d_n,\dots,d_1##.

The "by induction" step is just: you have ##n-1## vectors that form the basis of a lattice, and you have its dual lattice. The inductive hypothesis is applied directly to them.

Thanks for your answer! I have the following question regarding your last statement: for which ##\pi## is ##\pi(b_k)=b_j+\alpha b_1## valid?

Peter_Newman said:
Thanks for your answer! I have the following question regarding your last statement: for which ##\pi## is ##\pi(b_k)=b_j+\alpha b_1## valid?

##\pi## is the projection along ##b_1##. ##\pi(v)=v-\frac{<v,b_1>}{||b_1||^2}b_1## for any vector ##v##.
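This projection formula is easy to state as code. A minimal sketch (the vectors are made up; only the formula above is from the thread) showing that ##\pi(b_1)=0## and that ##\pi(v)## is always orthogonal to ##b_1##:

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def pi(v, b1):
    """pi(v) = v - (<v,b1>/||b1||^2) b1: the component of v orthogonal to b1."""
    c = dot(v, b1) / dot(b1, b1)
    return [vi - c * b1i for vi, b1i in zip(v, b1)]

b1 = [F(2), F(0), F(0)]
v = [F(1), F(3), F(5)]

assert pi(b1, b1) == [0, 0, 0]   # b1 itself projects to zero
assert dot(pi(v, b1), b1) == 0   # pi(v) is orthogonal to b1
assert pi(v, b1) == [F(0), F(3), F(5)]
```

Note that ##\pi(v)## differs from ##v## only by a multiple of ##b_1##, which is exactly the ##\pi(b_k)=b_k+\alpha b_1## form used above.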

Office_Shredder said:
##\pi(b_k)=b_j+\alpha b_1##
Is there a little typo? You mean ##\pi(b_k)=b_k+\alpha b_1## right? Instead of ##b_j##.

Sorry yes, typo on the index :)

Peter_Newman

I still have one question. So for the proof, we assume that the assertion holds; we have ##d_n,\dots,d_1## and ##b_1^*,\dots,b_n^*##, and we show that the relation ##d_1^* = \frac{b_1^*}{||b_1^*||^2}## holds. OK so far. Then we take ##d_n,\dots,d_2## and ##b_2^*,\dots,b_n^*## and apply the induction assumption; since the Gram-Schmidt orthogonalization is unique for these ##d_n## to ##d_2##, we can again say that the relation ##d_2^* = \frac{b_2^*}{||b_2^*||^2}## holds. In this way we could continue for the further vectors...

My question now is: where is the connection between lattices of rank ##n-1## and those of rank ##n##? And why can we assume the validity of the assertion for ##n-1##?

Last edited:
Note ##\pi(b_k)=b_k^*## only when ##k=2## here. On ##b_3## it subtracts out the ##b_1## component, but not the ##b_2## component.

As far as the connection between n and n-1, this is how induction always works: prove the base case n=1, then prove that if it's true for arbitrary n-1, it's also true for n. It's equivalent to just repeatedly applying the computation for ##b_1## to the other vectors after projecting.
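The statement the induction ultimately proves is ##d_i^* = b_i^*/||b_i^*||^2## for every ##i##. A quick numeric check of that full statement (not the proof itself), reusing an invented rank-3 basis and its hand-computed dual, with the dual orthogonalized in reverse order as the thread requires:

```python
from fractions import Fraction as F

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(vectors):
    """Orthogonalize only (no normalization), in the given order."""
    out = []
    for v in vectors:
        for w in out:
            c = dot(v, w) / dot(w, w)
            v = [vi - c * wi for vi, wi in zip(v, w)]
        out.append(v)
    return out

# hypothetical rank-3 basis (rows) and its dual basis, exact rationals
b = [[F(2), F(0), F(0)], [F(1), F(3), F(0)], [F(1), F(1), F(4)]]
d = [[F(1, 2), F(-1, 6), F(-1, 12)],
     [F(0), F(1, 3), F(-1, 12)],
     [F(0), F(0), F(1, 4)]]

bs = gram_schmidt(b)  # b_1^*, b_2^*, b_3^*
# dual is orthogonalized in the order d_3, d_2, d_1, then flipped back
ds = list(reversed(gram_schmidt(list(reversed(d)))))

# the statement the induction proves, for every index i
for bsi, dsi in zip(bs, ds):
    assert dsi == [x / dot(bsi, bsi) for x in bsi]
```

Each pass of the induction peels off one ##b_1## and projects the rest, which is exactly what the inner Gram-Schmidt loop does step by step.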

Yes exactly, this is the procedure of induction. However, I don't see exactly where the base case and the induction step (the transition from n-1 to n) take place. For me these two parts are a bit hidden...

Peter_Newman said:
Yes exactly, this is the procedure of induction. However, I don't see exactly where the base case and the induction step (the transition from n-1 to n) take place. For me these two parts are a bit hidden...

The base case is a 1-dimensional lattice. The induction step is: if you have an ##n##-dimensional lattice, you can show that ##d_1^*=b_1^*/||b_1^*||^2##, and you then get an ##(n-1)##-dimensional lattice by projecting ##b_2,\dots,b_n##. You can apply the inductive hypothesis to this lattice, but you have to prove that this is useful (e.g. that the dual lattice is still ##d_n,\dots,d_2## and that ##\pi(b_k)^*=b_k^*##).

What exactly does the base case look like? I would proceed like this: if ##n = 1##, then we have ##b_1## and ##b_1^*##, as well as ##d_1## and ##d_1^*##, where by Gram-Schmidt the relations ##b_1 = b_1^*## and ##d_1 = d_1^*## hold. But now I have the problem that ##d_1^*## is not the projection of ##d_1## onto ##span(d_1)^{\perp}## (this last statement is a modified version of what is stated in the source). So I wouldn't know here whether ##d_1^* \in span(b_1)## is true. Maybe this is also trivial, but I don't see it.

Last edited:
Peter_Newman said:
What exactly does the base case look like? I would proceed like this: if ##n = 1##, then we have ##b_1## and ##b_1^*##, as well as ##d_1## and ##d_1^*##, where by Gram-Schmidt the relations ##b_1 = b_1^*## and ##d_1 = d_1^*## hold. But now I have the problem that ##d_1^*## is not the projection of ##d_1## onto ##span(d_1)^{\perp}## (this last statement is a modified version of what is stated in the source). So I wouldn't know here whether ##d_1^* \in span(b_1)## is true. Maybe this is also trivial, but I don't see it.

I'm going to take a step back here and ask a very generic question.

Let ##v,w\in V##, where ##V## is a 1-dimensional vector space and ##w \neq 0##. Prove ##v\in span(w)##.

Any amount of linear algebra intuition makes this so blindingly obvious that you wouldn't even think to prove it. Based on a couple of your threads, you obviously know the machinery of linear algebra, but it seems to me like it's all abstract logic that you're trying to piece together. It might be worth re-studying the subject - a second pass from a different textbook can often lead to a deeper understanding.

If ##v## and ##w## belong to the same one-dimensional vector space, then there must be a corresponding linear combination, so that one vector can be represented in terms of the other.

Good, then I understand what you are trying to say. Of course, the ##d##'s and ##b##'s must also be in the same one-dimensional space.

Last edited: