Question about implication from scalar product

In summary: where is the connection between lattices of rank ##n-1## and those of rank ##n##, and why can we assume the validity of the assertion for rank ##n-1##? The connection is that the Gram-Schmidt orthogonalization of ##d_n,\dots,d_2## is the same in both cases, so the inductive hypothesis applies directly.
  • #1
Peter_Newman
Hi,

Let's say we have the Gram-Schmidt vectors ##b_i^*##, and let's say ##d_n^*,...,d_1^*## is the Gram-Schmidt version of the dual lattice vectors ##d_n,...,d_1##. Further, let ##b_1^* = b_1## and let ##d_1^*## be the projection of ##d_1## onto ##span(d_2,...,d_n)^{\bot} = span(b_1)##. We have ##d_1^* \in span(b_1)## and ##\langle d_1^*,b_1 \rangle = \langle d_1 , b_1 \rangle = 1##. Why does this imply that ##d_1^* = b_1 / ||b_1||^2 = b_1^* / ||b_1^*||^2##?

What I have proven so far is that ##||d_1^*|| \, ||b_1^*|| = 1##, but I cannot see from the scalar product that this implies ##d_1^* = b_1^* / ||b_1^*||^2##. From what I have proven I can rewrite ##||d_1^*|| = 1/||b_1^*|| = ||b_1^*||/||b_1^*||^2##, but I see no way to deduce from this that ##d_1^* = b_1^* / ||b_1^*||^2## holds... I suppose you can't just drop the norms here.

If anyone here has any ideas, it would help me a lot!
 
  • #2
##b_1/||b_1||^2## and ##b_1^*/||b_1^*||^2## have norm ##1/||b_1||## and ##1## respectively (assuming normalized Gram-Schmidt vectors). They are not equal.
 
  • #3
Why is it even possible to say ##\langle d_1^*,b_1 \rangle = \langle d_1 , b_1 \rangle = 1##? I mean, if the dual basis is ##d_n,...,d_1## and ##d_n^*,...,d_1^*## are its Gram-Schmidt vectors, then ##d_1 = d_1^*## is no longer valid in general, is it? Or is the equality only valid because ##d_1^* \in span(b_1)##?
 
  • #4
Sorry, I realized maybe I'm being dumb. When you do Gram-Schmidt, do you normalize the vectors or only orthogonalize?
 
  • #5
Hey @Office_Shredder, I will quote the source:

[Attached image: the quoted passage from the source, stating the claim and its proof by induction.]

I would say we are just orthogonalizing.

In the meantime I have found out why this scalar product and its implication are valid here (in the case ##i=1##); the argument is spelled out below. But how does one generalize this to the other vectors?
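For the record, here is the argument as I understand it (using ##d_1^* \in span(b_1)## and ##b_1 = b_1^*## from post #1): write ##d_1^* = c\, b_1^*## for some scalar ##c##; then
$$1 = \langle d_1^*, b_1 \rangle = c\, \langle b_1^*, b_1^* \rangle = c\, ||b_1^*||^2 \quad\Rightarrow\quad c = \frac{1}{||b_1^*||^2},$$
so ##d_1^* = b_1^*/||b_1^*||^2## follows.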

But I don't quite understand the last paragraph: what exactly is the hypothesis here, and how do we use it for the induction step?
 
  • #6
Can someone perhaps explain to me in more detail how the generalization is carried out here? I have difficulty seeing the transition from rank ##n-1## bases to general rank ##n## bases.
 
  • #7
As you saw in another thread, ##\pi(b_2),...,\pi(b_n)## is a lattice in the span orthogonal to ##b_1##. For ##j\geq 2##, ##d_j## still satisfies ##\langle d_j,\pi(b_k)\rangle=\delta_{j,k}## (do you know why?). So ##d_n,...,d_2## is a dual lattice basis for ##\pi(b_2),\pi(b_3),...##. Since it's the same list of vectors in the same order as the first time, Gram-Schmidt gives the same vectors as well. So by induction ##d_2^*=\pi(b_2)/||\pi(b_2)||^2##. But ##\pi(b_2)=b_2^*## by definition.
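A quick numeric sanity check of both claims (a sketch in Python; the example basis is an arbitrary one of my own, ##\pi## is the projection orthogonal to ##b_1## as defined explicitly in post #11, and Gram-Schmidt orthogonalizes without normalizing, as settled in post #5):

```python
import numpy as np

# Arbitrary full-rank example basis (my own choice); rows are b_1, b_2, b_3.
B = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 1.0, 4.0]])

# Dual basis: rows d_i satisfy <d_i, b_j> = delta_{ij}, i.e. D = (B^{-1})^T.
D = np.linalg.inv(B).T

def gram_schmidt(V):
    """Orthogonalize the rows of V in order, without normalizing."""
    U = []
    for v in V:
        w = v - sum((v @ u) / (u @ u) * u for u in U)
        U.append(w)
    return np.array(U)

Bstar = gram_schmidt(B)              # b_1^*, ..., b_n^*
Dstar = gram_schmidt(D[::-1])[::-1]  # orthogonalize d_n, ..., d_1, then restore index order

# Projection orthogonal to b_1 (defined explicitly in post #11).
pi = lambda v: v - (v @ B[0]) / (B[0] @ B[0]) * B[0]
P = np.array([pi(b) for b in B[1:]])  # pi(b_2), ..., pi(b_n)

# Claim 1: <d_j, pi(b_k)> = delta_{jk} for j, k >= 2.
print(np.allclose(D[1:] @ P.T, np.eye(len(P))))                           # True

# Claim 2: d_i^* = b_i^* / ||b_i^*||^2 for every i.
print(np.allclose(Dstar, Bstar / (Bstar**2).sum(axis=1, keepdims=True)))  # True
```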
 
  • #8
Hey @Office_Shredder, that ##\langle d_j,\pi(b_k)\rangle=\delta_{j,k}## holds I have proved via a (longer) calculation, which ultimately reduces to ##\langle d_j,\pi(b_k)\rangle = \langle d_j,b_k\rangle##, and the last expression can be interpreted as the delta. I can argue based on my calculation, but not in a geometrical way...

What do you mean by "the Gram-Schmidt gives the same vectors as well", the Gram-Schmidt of what? Is this what we assume in the claim?

The last step "so by induction ##d_2^* = ...##" comes because we now look at ##d_n,...,d_2## and ##\pi(b_2),\pi(b_3),...##? Or what else do we look at for this step?
 
  • #9
Remember ##\pi(b_k)=b_j+\alpha b_1## for some ##\alpha##. So ##\langle d_j,\pi(b_k)\rangle=\langle d_j,b_k\rangle+\alpha\langle d_j,b_1\rangle##, and we know ##j>1##, so the second term is zero.

When I say Gram-Schmidt gives the same vectors, consider the lists ##d_1,...,d_n## and ##d_2,...,d_n##, in that order. ##d_2^*## is ambiguous: the two lists give different results for it. You don't have this problem with ##d_n,...,d_2## and ##d_n,...,d_1##.

The "by induction" step is just: you have ##n-1## vectors that form the basis of a lattice, and you have its dual lattice. The inductive hypothesis is applied directly to them.
 
  • #10
Thanks for your answer! I have the following question regarding your last statement: for which index is the projection ##\pi_?## taken, so that ##\pi(b_k)=b_j+\alpha b_1## is valid?
 
  • #11
Peter_Newman said:
Thanks for your answer! I have the following question regarding your last statement: for which index is the projection ##\pi_?## taken, so that ##\pi(b_k)=b_j+\alpha b_1## is valid?

##\pi## is the projection orthogonal to ##b_1##: ##\pi(v)=v-\frac{\langle v,b_1\rangle}{||b_1||^2}b_1## for any vector ##v##.
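Matching this with the expression ##\pi(b_k) = b_k + \alpha b_1## from post #9, the coefficient is simply
$$\alpha = -\frac{\langle b_k, b_1 \rangle}{||b_1||^2}.$$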
 
  • #12
Office_Shredder said:
##\pi(b_k)=b_j+\alpha b_1##
Is there a little typo? You mean ##\pi(b_k)=b_k+\alpha b_1##, right? Instead of ##b_j##.
 
  • #13
Sorry yes, typo on the index :)
 
  • #14
Thanks for your help, your comment about the order of the Gram-Schmidt vectors is very helpful!

I still have one question. For the proof, we assume that the assertion holds; so we have ##d_n,...,d_1## and ##b_1^*,...,b_n^*##, and we show that the relation ##d_1^* = \frac{b_1^*}{||b_1^*||^2}## holds. OK so far. Next we take ##d_n,...,d_2## and ##b_2^*,...,b_n^*## and apply the induction assumption; since the Gram-Schmidt orthogonalization is the same for these ##d_n## to ##d_2##, we can again say that the relation ##d_2^* = \frac{b_2^*}{||b_2^*||^2}## holds. In this way we could now continue for the further vectors...

My question now is: where is the connection between lattices of rank ##n-1## and those of rank ##n##? And why can we assume the validity of the assertion for ##n-1##?
 
  • #15
Note that ##\pi(b_k)=b_k^*## only when ##k=2## here. On ##b_3## it subtracts out the ##b_1## component, but not the ##b_2## component.

As far as the connection between ##n## and ##n-1## goes, this is how induction always works. Prove it for the base case ##n=1##, then prove that if it's true for arbitrary ##n-1##, it's also true for ##n##. It's equivalent to just repeatedly applying the computation for ##b_1## to the other vectors after projecting.
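Continuing the numeric sketch from post #7 (same assumed example basis and helpers), the fact needed for the projected lattice, ##\pi(b_k)^* = b_k^*##, also checks out:

```python
# Gram-Schmidt of the projected basis pi(b_2), ..., pi(b_n)
# reproduces b_2^*, ..., b_n^*, i.e. pi(b_k)^* = b_k^*.
print(np.allclose(gram_schmidt(P), Bstar[1:]))  # True
```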
 
  • #16
Yes exactly, this is the procedure of induction. However, I don't see exactly where the base case and the induction step (the transition from ##n-1## to ##n##) take place here. For me these two parts are a bit hidden...
 
  • #17
Peter_Newman said:
Yes exactly, this is the procedure of induction. However, I don't see exactly where the base case and the induction step (the transition from ##n-1## to ##n##) take place here. For me these two parts are a bit hidden...

The base case is a one-dimensional lattice. The induction step is: if you have an ##n##-dimensional lattice, you can show that ##d_1^*=b_1^*/||b_1^*||^2##, plus you then get an ##(n-1)##-dimensional lattice by projecting ##b_2,...,b_n##. You can apply the inductive hypothesis to this lattice, but you have to prove that this is useful (e.g. that the dual lattice is still ##d_n,...,d_2## and that ##\pi(b_k)^*=b_k^*##).
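Written out (this is my reading of the source), the statement being inducted on is
$$P(n):\quad \text{for every rank-}n\text{ basis } b_1,\dots,b_n \text{ with dual basis } d_1,\dots,d_n:\ \ d_i^* = \frac{b_i^*}{||b_i^*||^2} \text{ for all } i,$$
where the ##d_i^*## come from Gram-Schmidt in the order ##d_n,\dots,d_1##. The step from ##n-1## to ##n## proves the ##i=1## case directly and gets the cases ##i\geq 2## from ##P(n-1)## applied to the projected lattice and its dual.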
 
  • #18
What exactly does the base case look like? I would proceed like this: if ##n = 1##, then we have ##b_1## and ##b_1^*##, as well as ##d_1## and ##d_1^*##, where by Gram-Schmidt the relations ##b_1 = b_1^*## and ##d_1 = d_1^*## hold. But now I have the problem that ##d_1^*## is not the projection of ##d_1## onto ##span(d_1)^{\perp}## (this last statement is a modified version of the one in the source). So I wouldn't know here whether ##d_1^* \in span(b_1)## is true. Maybe this is trivial, but I don't see it.
 
  • #19
Peter_Newman said:
What exactly does the base case look like? I would proceed like this: if ##n = 1##, then we have ##b_1## and ##b_1^*##, as well as ##d_1## and ##d_1^*##, where by Gram-Schmidt the relations ##b_1 = b_1^*## and ##d_1 = d_1^*## hold. But now I have the problem that ##d_1^*## is not the projection of ##d_1## onto ##span(d_1)^{\perp}## (this last statement is a modified version of the one in the source). So I wouldn't know here whether ##d_1^* \in span(b_1)## is true. Maybe this is trivial, but I don't see it.

I'm going to take a step back here and ask a very generic question.

##v,w\in V##, ##w\neq 0##, where ##V## is a one-dimensional vector space. Prove ##v\in span(w)##.

Any amount of linear algebra intuition makes this so blindingly obvious that you wouldn't even think to prove it. Based on a couple of your threads, you obviously know the machinery of linear algebra, but it seems to me like it's all abstract logic that you're trying to piece together. It might be worth re-studying the subject: a second pass through a different textbook can often lead to a deeper understanding.
 
  • #20
If ##v## and ##w## belong to the same one-dimensional vector space (with ##w## nonzero), then there must be a corresponding linear combination, so that the one vector can be represented by the other.

Good, then I understand what you are trying to say. Of course, the ##d##'s and ##b##'s must also be in the same one-dimensional space.
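For the record, the base case then works out in one line (assuming ##b_1 \neq 0##): since ##d_1 \in span(b_1)##, write ##d_1 = c\, b_1##; then
$$\langle d_1, b_1 \rangle = c\, ||b_1||^2 = 1 \quad\Rightarrow\quad d_1^* = d_1 = \frac{b_1}{||b_1||^2} = \frac{b_1^*}{||b_1^*||^2}.$$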
 

