Question about implication from scalar product


Discussion Overview

The discussion revolves around the implications of scalar products in the context of Gram-Schmidt orthogonalization and dual lattices. Participants explore the relationships between Gram-Schmidt vectors, dual lattice vectors, and the conditions under which certain equalities hold. The scope includes theoretical aspects of linear algebra and vector spaces.

Discussion Character

  • Exploratory
  • Technical explanation
  • Conceptual clarification
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • One participant questions the implication that ##d_1^* = b_1^* / ||b_1^*||^2## follows from the scalar product conditions given, despite having proven that ##||d_1^*|| ||b_1^*|| = 1##.
  • Another participant points out that the norms of ##b_1/||b_1||^2## and ##b_1^*/||b_1^*||^2## are different, suggesting they cannot be equal.
  • Concerns are raised about the validity of stating ##\langle d_1^*,b_1 \rangle = \langle d_1 , b_1 \rangle = 1##, with questions about the implications of the dual basis and the relationship between ##d_1## and ##d_1^*##.
  • Participants discuss whether Gram-Schmidt involves normalization or just orthogonalization, with one confirming that only orthogonalization is performed.
  • There is a request for clarification on how to generalize findings from rank n-1 bases to rank n bases, indicating a need for understanding the transition in the context of induction.
  • One participant provides a proof regarding the relationship between the Gram-Schmidt vectors and the dual lattice, suggesting that the same vectors are obtained through induction.
  • Another participant expresses confusion about the connection between the bases of rank n-1 and n, seeking clarity on the inductive hypothesis and its application.
  • There is a discussion about the base case of induction, with participants exploring how it applies to one-dimensional lattices and the implications for the projection of vectors.
  • One participant reflects on the abstract nature of linear algebra concepts, questioning the intuitive understanding of span in one-dimensional vector spaces.

Areas of Agreement / Disagreement

Participants express differing views on the implications of scalar products and the relationships between the Gram-Schmidt vectors and dual lattice vectors. There is no consensus on the validity of certain implications or the generalization of results, indicating ongoing debate and exploration of the topic.

Contextual Notes

Participants highlight the need for clarity on assumptions regarding the projection of vectors and the definitions of dual bases. The discussion reveals complexities in the relationships between the various vectors involved, as well as the conditions under which certain properties hold.

Peter_Newman
Hi,

Let's say we have the Gram-Schmidt vectors ##b_i^*##, and let ##d_n^*,...,d_1^*## be the Gram-Schmidt version of the dual lattice vectors ##d_n,...,d_1##. Let further ##b_1^* = b_1##, and let ##d_1^*## be the projection of ##d_1## onto ##span(d_2,...,d_n)^{\bot} = span(b_1)##. We have ##d_1^* \in span(b_1)## and ##\langle d_1^*,b_1 \rangle = \langle d_1 , b_1 \rangle = 1##. Why does this imply that ##d_1^* = b_1 / ||b_1||^2 = b_1^* / ||b_1^*||^2##?

What I have proven so far is that ##||d_1^*|| \, ||b_1^*|| = 1##, but I cannot see from the scalar product that this implies ##d_1^* = b_1^* / ||b_1^*||^2##. From what I have proven I can rewrite ##||d_1^*|| = 1/||b_1^*|| = ||b_1^*||/||b_1^*||^2##, but I see no way to deduce from this that ##d_1^* = b_1^* / ||b_1^*||^2## holds... I suppose you can't just drop the norms here.

If anyone here has any ideas, it would help me a lot!
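For concreteness, the claimed identity can be checked numerically. The sketch below uses a hypothetical basis ##B## (not from the source); the dual basis is computed as the rows of ##(B^T)^{-1}##, so that ##\langle d_i, b_j \rangle = \delta_{ij}##, and Gram-Schmidt is applied to the dual vectors in reverse order, as in the question:

```python
import numpy as np

def gram_schmidt(rows):
    """Orthogonalize only (no normalization), in the given order."""
    out = []
    for r in rows:
        v = np.array(r, dtype=float)
        for u in out:
            v = v - (v @ u) / (u @ u) * u   # subtract projection onto earlier u
        out.append(v)
    return out

# Hypothetical example basis: rows are b_1, b_2, b_3.
B = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [1.0, 1.0, 4.0]])
D = np.linalg.inv(B.T)                # rows d_i satisfy <d_i, b_j> = delta_ij
bs = gram_schmidt(B)                  # b_1^*, ..., b_3^* in forward order
ds = gram_schmidt(D[::-1])[::-1]      # Gram-Schmidt of d_3, d_2, d_1, re-indexed

print(np.allclose(ds[0], bs[0] / (bs[0] @ bs[0])))   # True: d_1^* = b_1^*/||b_1^*||^2
```

Here ##b_1^* = b_1## comes out automatically, since the first Gram-Schmidt vector is unchanged.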
 
##b_1/||b_1||^2## and ##b_1^*/||b_1^*||^2## have norm ##1/||b_1||## and ##1## respectively. They are not equal.
 
Why is it even possible to say ##\langle d_1^*,b_1 \rangle = \langle d_1 , b_1 \rangle = 1##? I mean, if the dual basis is ##d_n,...,d_1## and ##d_n^*,...,d_1^*## are its Gram-Schmidt vectors, then ##d_1 = d_1^*## is no longer valid, is it? Does the equality hold only because ##d_1^* \in span(b_1)##?
 
Sorry, I realized maybe I'm being dumb. When you do Gram-Schmidt, do you normalize the vectors or only orthogonalize?
 
Hey @Office_Shredder, I will quote the source:

[attached image: quoted passage from the source]

I would say we are just orthogonalizing.
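The distinction matters: orthogonalization alone leaves the resulting vectors mutually orthogonal but generally not of unit length. A minimal sketch (the basis is a hypothetical example):

```python
import numpy as np

# Gram-Schmidt as used here orthogonalizes only -- no division by the norm.
def orthogonalize(rows):
    out = []
    for r in rows:
        v = np.array(r, dtype=float)
        for u in out:
            v = v - (v @ u) / (u @ u) * u   # subtract projection onto earlier u
        out.append(v)
    return out

B = np.array([[3.0, 0.0], [1.0, 2.0]])      # hypothetical basis
b1s, b2s = orthogonalize(B)
print(b1s @ b2s)                             # ~0: orthogonal
print(np.linalg.norm(b1s), np.linalg.norm(b2s))  # 3.0 2.0 -- not normalized
```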

In the meantime I have found out why this scalar product and its implication are valid here (in the case ##i=1##). But how do we generalize this to the other vectors?

But I don't quite understand the last paragraph: what exactly is the hypothesis here, and how do we use it for the induction step?
 
Can someone perhaps explain to me in more detail how the generalization is carried out here, because I have difficulty seeing the transition from rank n-1 bases to general rank n bases.
 
As you saw in another thread, ##\pi(b_2),...,\pi(b_n)## is a lattice in the span orthogonal to ##b_1##. For ##j\geq 2##, ##d_j## still satisfies ##\langle d_j,\pi(b_k)\rangle=\delta_{j,k}## (do you know why?). So ##d_n,...,d_2## is a dual lattice to ##\pi(b_2),...,\pi(b_n)##. Since it's the same list of vectors in the same order as the first time, Gram-Schmidt gives the same vectors as well. So by induction ##d_2^*=\pi(b_2)/||\pi(b_2)||^2##. But ##\pi(b_2)=b_2^*## by definition.
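The duality relation ##\langle d_j,\pi(b_k)\rangle=\delta_{j,k}## for ##j,k\geq 2## can be checked numerically; a minimal sketch with a hypothetical basis:

```python
import numpy as np

B = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [1.0, 1.0, 4.0]])          # rows b_1, b_2, b_3 (hypothetical)
D = np.linalg.inv(B.T)                    # rows d_i with <d_i, b_j> = delta_ij

b1 = B[0]
def pi(v):                                # projection orthogonal to b_1
    return v - (v @ b1) / (b1 @ b1) * b1

# <d_j, pi(b_k)> = delta_jk for j, k >= 2, because pi(b_k) = b_k + alpha*b_1
# and <d_j, b_1> = 0 for j >= 2.
G = np.array([[D[j] @ pi(B[k]) for k in (1, 2)] for j in (1, 2)])
print(np.allclose(G, np.eye(2)))          # True
```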
 
Hey @Office_Shredder, that ##\langle d_j,\pi(b_k)\rangle=\delta_{j,k}## holds I have proved via a (longer) calculation, which ends up in ##\langle d_j,\pi(b_k)\rangle = \langle d_j,b_k\rangle##, and the last part can be interpreted as the delta. So I can argue based on my calculation, but not in a geometric way...

What do you mean by "Gram-Schmidt gives the same vectors as well" - the Gram-Schmidt of what? Is this what we assume in the claim?

The last step "so by induction ##d_2^*=...##" comes because we now consider ##d_n,...,d_2## and ##\pi(b_2),\pi(b_3),...##? Or what else do we consider for this step?
 
Remember ##\pi(b_k)=b_j+\alpha b_1## for some ##\alpha##. So ##\langle d_j,\pi(b_k)\rangle=\langle d_j,b_k\rangle+\alpha\langle d_j,b_1\rangle##, and we know ##j>1##, so the second term is zero.

When I say Gram-Schmidt gives the same vectors, consider the sequences ##d_1,...,d_n## and ##d_2,...,d_n## in that order. ##d_2^*## is ambiguous: the two sequences give different results for it. You don't have this problem with ##d_n,...,d_2## and ##d_n,...,d_1##.

The "by induction" step is just: you have ##n-1## vectors that are the basis of a lattice, and you have its dual lattice. The inductive hypothesis is applied directly to them.
 
  • #10
Thanks for your answer! I have the following question regarding your last statement, for which index ##\pi_?## is ##\pi(b_k)=b_j+\alpha b_1## valid?
 
  • #11
Peter_Newman said:
Thanks for your answer! I have the following question regarding your last statement, for which index ##\pi_?## is ##\pi(b_k)=b_j+\alpha b_1## valid?

##\pi## is the projection along ##b_1##. ##\pi(v)=v-\frac{<v,b_1>}{||b_1||^2}b_1## for any vector ##v##.
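A quick sketch of this projection (the vectors are hypothetical; `pi` implements the formula above):

```python
import numpy as np

b1 = np.array([2.0, 1.0, 0.0])            # hypothetical b_1

def pi(v):
    """Projection along b_1: pi(v) = v - (<v, b_1> / ||b_1||^2) * b_1."""
    return v - (v @ b1) / (b1 @ b1) * b1

v = np.array([3.0, -1.0, 5.0])
print(abs(pi(v) @ b1) < 1e-12)            # True: pi(v) is orthogonal to b_1
print(np.allclose(pi(pi(v)), pi(v)))      # True: pi is idempotent
```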
 
  • #12
Office_Shredder said:
##\pi(b_k)=b_j+\alpha b_1##
Is there a little typo? You mean ##\pi(b_k)=b_k+\alpha b_1## right? Instead of ##b_j##.
 
  • #13
Sorry yes, typo on the index :)
 
  • #14
Thanks for your help, your comment about the order of the Gram-Schmidt vectors is very helpful!

I still have one question. For the proof we assume the assertion holds, so we have ##d_n,...,d_1## and ##b_1^*,...,b_n^*##, and we show that the relation ##d_1^* = \frac{b_1^*}{||b_1^*||^2}## holds. OK so far. Next we take ##d_n,...,d_2## and ##b_2^*,...,b_n^*## and apply the induction assumption; since the Gram-Schmidt orthogonalization is unique for these ##d_n## to ##d_2##, we can again say that the relation ##d_2^* = \frac{b_2^*}{||b_2^*||^2}## holds. In this way we could continue for the further vectors...

My question now is: where is the connection between lattices of rank ##n-1## and those of rank ##n##? And why can we assume the validity of the assertion for ##n-1##?
 
  • #15
Note ##\pi(b_k)=b_k^*## only when ##k=2## here. On ##b_3## it subtracts out the ##b_1## component, but not the ##b_2## component.

As for the connection between ##n## and ##n-1##, this is how induction always works: prove it for the base case ##n=1##, then prove that if it's true for arbitrary ##n-1##, it's also true for ##n##. It's equivalent to just repeatedly applying the computation for ##b_1## to the other vectors after projecting.
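The "repeatedly apply and project" structure can be sketched as a recursion: handle ##b_1## as in the base computation, project the remaining vectors orthogonal to ##b_1##, and recurse on the resulting rank-##n-1## lattice. The basis ##B## below is hypothetical, and the result is compared against the direct computation (Gram-Schmidt of ##d_n,...,d_1## with ##D = (B^T)^{-1}##):

```python
import numpy as np

def star_duals(B):
    """Recursive sketch of the induction: returns b_i^*/||b_i^*||^2 for each i,
    which the argument identifies with the reversed-dual Gram-Schmidt vectors d_i^*."""
    b1 = np.array(B[0], dtype=float)
    first = b1 / (b1 @ b1)                       # base computation for b_1
    if len(B) == 1:
        return [first]                           # base case: 1-dimensional lattice
    proj = [v - (v @ b1) / (b1 @ b1) * b1 for v in B[1:]]  # project out b_1
    return [first] + star_duals(np.array(proj))  # inductive step on rank n-1

def gram_schmidt(rows):
    out = []
    for r in rows:
        v = np.array(r, dtype=float)
        for u in out:
            v = v - (v @ u) / (u @ u) * u
        out.append(v)
    return out

B = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [1.0, 1.0, 4.0]])                  # hypothetical basis
D = np.linalg.inv(B.T)                           # dual basis, rows d_i
ds = gram_schmidt(D[::-1])[::-1]                 # d_i^* via d_n, ..., d_1

print(all(np.allclose(a, b) for a, b in zip(star_duals(B), ds)))  # True
```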
 
  • #16
Yes, exactly, this is the procedure of induction. However, I don't see exactly where the base case and the induction step (the transition from ##n-1## to ##n##) take place here. For me these two parts are a bit hidden...
 
  • #17
Peter_Newman said:
Yes, exactly, this is the procedure of induction. However, I don't see exactly where the base case and the induction step (the transition from ##n-1## to ##n##) take place here. For me these two parts are a bit hidden...

The base case is a 1-dimensional lattice. The induction step is: if you have an ##n##-dimensional lattice, you can show that ##d_1^*=b_1^*/||b_1^*||^2##, and you then get an ##n-1##-dimensional lattice by projecting ##b_2,...,b_n##. You can apply the inductive hypothesis to this lattice, but you have to prove that this is useful (e.g. that the dual lattice is still ##d_n,...,d_2## and that ##\pi(b_k)^*=b_k^*##).
 
  • #18
What exactly does the base case look like? I would proceed like this: if ##n = 1##, then we have ##b_1## and ##b_1^*##, as well as ##d_1## and ##d_1^*##, where by Gram-Schmidt the relations ##b_1 = b_1^*## and ##d_1 = d_1^*## hold. But now I have the problem that ##d_1^*## is not the projection of ##d_1## onto ##span(d_1)^{\perp}## (this last statement is a modified version of what is stated in the source). So I wouldn't know here whether ##d_1^* \in span(b_1)## is true. Maybe this is trivial, but I don't see it.
 
  • #19
Peter_Newman said:
What exactly does the base case look like? I would proceed like this: if ##n = 1##, then we have ##b_1## and ##b_1^*##, as well as ##d_1## and ##d_1^*##, where by Gram-Schmidt the relations ##b_1 = b_1^*## and ##d_1 = d_1^*## hold. But now I have the problem that ##d_1^*## is not the projection of ##d_1## onto ##span(d_1)^{\perp}## (this last statement is a modified version of what is stated in the source). So I wouldn't know here whether ##d_1^* \in span(b_1)## is true. Maybe this is trivial, but I don't see it.

I'm going to take a step back here and ask a very generic question.

##v,w\in V## where ##V## is a 1 dimensional vector space. Prove ##v\in span(w)##

Any amount of linear algebra intuition makes this so blindingly obvious that you wouldn't even think to prove it. Based on a couple of your threads, you obviously know the machinery of linear algebra, but it seems to me like it's all abstract logic that you're trying to piece together. It might be worth re-studying the subject - a second pass through from a different textbook can often lead to a deeper understanding.
 
  • #20
If ##v## and ##w## belong to the same one-dimensional vector space (with ##w \neq 0##), then there is a scalar ##\lambda## with ##v = \lambda w##, so one vector can be represented by the other.

Good, then I understand what you are trying to say. Of course, the ##d##'s and ##b##'s must also be in the same one-dimensional space.
 
