# Question regarding sublattice

In summary, the conversation discusses projecting the points of a lattice onto the hyperplane orthogonal to one of its basis vectors and asks whether the result is again a lattice. The 2-dimensional case is worked out first, with an explicit formula for the projection onto a line, followed by the n-dimensional case and the difficulty of showing that the projection forms a lattice. The discussion also covers the role of the Gram-Schmidt vectors in the projection and the question of whether the resulting coefficients are integers.

#### Peter_Newman

Hello,

let's say that we have a lattice ##\Lambda##. Now I take one of its basis vectors, say ##b_1##, and project all the lattice points onto the line orthogonal to ##b_1##; this line could be calculated via Gram-Schmidt, for instance. But does this then form a lattice again? Is it possible to formalize this a bit more? With my definition of a lattice, ##\Lambda = \{\sum_i \lambda_i b_i : \lambda_i \in \mathbb{Z}\}##, I cannot prove that this projection onto the line again yields a lattice.

Some questions: do the ##b_i## span a finite- or infinite-dimensional space? Are all the ##b_i## linearly independent?

Is your lattice 2-dimensional, or is there more than one choice of line to pick? Or did you mean you would project onto the orthogonal hyperplane?

Thank you for your questions. This is the question I asked myself for an ##n##-dimensional lattice. The ##b_i## are all linearly independent, by the definition of an ##n##-dimensional lattice. I would then project the ##n##-dimensional lattice ##\Lambda## onto the hyperplane orthogonal to ##b_1## (of dimension ##n-1##), if I have thought this through correctly. In the ##2##-dimensional case it is a line, like this:

I tried to represent this graphically in 2D. The black points are the lattice points and the green ones are their projections onto the line/plane/space orthogonal to ##b_1##; the point at the origin belongs to both...

In the 2d case it's not hard to see it's a lattice. Let ##c## be orthogonal to ##b_1## with unit length.

##\alpha b_1 + \beta b_2 \to \beta \langle b_2, c\rangle c##, so the image is a 1-dimensional lattice generated by ##\langle b_2, c\rangle c##.
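As a quick numerical illustration of this map (a minimal sketch in Python/NumPy; the basis vectors here are an arbitrary example, not from the thread):

```python
import numpy as np

# Arbitrary example basis for a 2D lattice
b1 = np.array([2.0, 1.0])
b2 = np.array([1.0, 3.0])

# Unit vector c orthogonal to b1
c = np.array([-b1[1], b1[0]])
c = c / np.linalg.norm(c)

# Project lattice points alpha*b1 + beta*b2 onto the line spanned by c
for alpha in range(-2, 3):
    for beta in range(-2, 3):
        x = alpha * b1 + beta * b2
        proj = np.dot(x, c) * c          # orthogonal projection onto span(c)
        # the alpha part vanishes because <b1, c> = 0
        assert np.allclose(proj, beta * np.dot(b2, c) * c)

print("image generated by <b2,c> c =", np.dot(b2, c) * c)
```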

#### Peter_Newman
Office_Shredder said:
In the 2d case it's not hard to see it's a lattice. Let ##c## be orthogonal to ##b_1## with unit length.

##\alpha b_1 + \beta b_2 \to \beta \langle b_2, c\rangle c##, so the image is a 1-dimensional lattice generated by ##\langle b_2, c\rangle c##.

How you came up with this (##\alpha b_1 + \beta b_2 \to \beta \langle b_2, c\rangle c## ...) so quickly is something I have yet to understand... If I understand it right, ##\langle b_2, c\rangle## expresses the ##b_2## component of a lattice point; I am now just wondering what ##\langle b_2, c\rangle c## means then...

What does this look like generally, in the ##n##-dimensional case?

Peter_Newman said:
How you came up with this (##\alpha b_1 + \beta b_2 \to \beta \langle b_2, c\rangle c## ...) so quickly is something I have yet to understand... If I understand it right, ##\langle b_2, c\rangle## expresses the ##b_2## component of a lattice point; I am now just wondering what ##\langle b_2, c\rangle c## means then...

What does this look like generally, in the ##n##-dimensional case?

The inner product measures the length of the projection, and the multiplication by ##c## gives the direction of the new post-projection vector. It probably looks too obvious because you already knew it was in the ##c## direction.

#### Peter_Newman
Office_Shredder said:
The inner product measures the length of the projection
Exactly, that's what I wanted to say in my previous post :)

For the ##n##-dimensional case, you basically get the same result. Let ##P## be the projection. Obviously the projected lattice is spanned by integer combinations of ##Pb_2,\dots,Pb_n##. We need to prove that these are linearly independent.

Suppose ##\sum_{i\geq 2} \alpha_i Pb_i=0##. Then ##\sum_{i\geq 2} \alpha_i b_i=\alpha_1 b_1## for some ##\alpha_1##, since it has to live in the kernel of ##P##, which is the span of ##b_1##.
Rearranged, ##\sum_{i\geq 2} \alpha_i b_i - \alpha_1 b_1 = 0##; but the original basis is linearly independent, so all the coefficients must be zero.
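This argument can be sanity-checked numerically (a minimal sketch in Python/NumPy; the random integer basis is an arbitrary example, not from the thread):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random integer basis; retry until the rows b_1, ..., b_n are independent
B = rng.integers(-3, 4, size=(n, n)).astype(float)
while abs(np.linalg.det(B)) < 1e-9:
    B = rng.integers(-3, 4, size=(n, n)).astype(float)

# Orthogonal projection P onto the hyperplane orthogonal to b_1
b1 = B[0]
P = np.eye(n) - np.outer(b1, b1) / np.dot(b1, b1)

# The projected vectors P b_2, ..., P b_n should be linearly independent
PB = B[1:] @ P                       # P is symmetric, so rows are P b_i
print(np.linalg.matrix_rank(PB))     # expected: n - 1 = 3
```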

Hey, so I would have started like this. The projection of a vector from the lattice onto the span orthogonal to ##b_1 = b_1^*## would be, to my knowledge:

##\pi_{i}(x) = \sum_{j=i}^n \frac{\langle x, b_j^*\rangle}{\langle b_j^*, b_j^*\rangle}b_j^*, \quad i=2 \quad\text{(1)}##

But now one would have to show that this is a lattice, which I think is difficult here, because the fraction would have to be an integer; compare the lattice definition from above:

##\Lambda = \{\sum_i \lambda_i b_i : \lambda_i \in \mathbb{Z}\}##

Now I am not sure how to show that equation (1) is, or builds, a lattice... The ##b_j^*## are not a problem because they are in ##\mathbb{R}^n##, but the factor has to be in ##\mathbb{Z}## to build a lattice...
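For concreteness, equation (1) could be computed like this (a hedged sketch in Python/NumPy; `gram_schmidt`, `pi`, and the example basis `B` are hypothetical helpers for illustration, not anything from the thread):

```python
import numpy as np

def gram_schmidt(B):
    """Return the (unnormalized) Gram-Schmidt vectors b_j* of the rows of B."""
    Bstar = np.zeros_like(B, dtype=float)
    for j, b in enumerate(B):
        v = b.astype(float)
        for k in range(j):
            v -= np.dot(b, Bstar[k]) / np.dot(Bstar[k], Bstar[k]) * Bstar[k]
        Bstar[j] = v
    return Bstar

def pi(i, x, Bstar):
    """Equation (1): project x onto span(b_i*, ..., b_n*), with 1-based i."""
    return sum(np.dot(x, bs) / np.dot(bs, bs) * bs for bs in Bstar[i - 1:])

# Arbitrary example basis; rows are b_1, b_2, b_3
B = np.array([[2.0, 0.0, 1.0],
              [1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
Bstar = gram_schmidt(B)

x = 2 * B[0] - 1 * B[1] + 3 * B[2]   # a lattice point
print(pi(2, x, Bstar))               # its projection orthogonal to b_1 = b_1*
```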

What is your definition of ##b_2^*##? For that matter, what is ##b_1^*##?

I suspect your issue is just that those denominators are absorbed into your new lattice basis, which you haven't correctly identified, but I'm not really sure what's going on.

Hey, sorry, I forgot to mention this: with the asterisks I denote the Gram-Schmidt vectors. So e.g. ##\pi_{2}(x)## is composed of all components orthogonal to ##b_1 = b_1^*##.

I believe, without having checked or proved it, that what you say concerning the denominator could fit: the denominator together with the ##b_j^*## is my new lattice basis vector. It then remains to show that ##\langle x, b_j^* \rangle## is an integer, but I'm not sure how to prove this. We could express ##x## in the Gram-Schmidt basis, something like ##x = \sum a_i b_i^*##. Then ##\langle x, b_j^* \rangle = \langle \sum a_i b_i^*, b_j^* \rangle = a_j \langle b_j^*, b_j^* \rangle## remains, and here the question would be whether ##a_j \langle b_j^*, b_j^* \rangle## is an integer...
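Numerically, the coefficient ##\frac{\langle x, b_j^*\rangle}{\langle b_j^*, b_j^*\rangle}## in equation (1) is indeed generally not an integer. Continuing the hypothetical sketch above (reusing `B` and `Bstar` from there):

```python
# Gram-Schmidt coefficients of a lattice point are generally NOT integers
x = B[1] + B[2]                      # the lattice point b_2 + b_3
for j, bs in enumerate(Bstar, start=1):
    print(f"coefficient on b_{j}* =", np.dot(x, bs) / np.dot(bs, bs))
# Here the b_2* coefficient comes out as 25/21, not an integer, so the
# b_j* themselves cannot serve as a basis of the projected lattice.
```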

Got it, I think those are not a basis. In general there is no reason for your projected lattice to contain an orthogonal basis.

OK, and yes, I would agree with that. But with my question I actually meant whether ##\pi_{i}(\Lambda)## again forms a lattice.

Yes, the projection forms a lattice, and my proof gives the right basis.

##\pi_i(\Lambda)## is spanned by integer combinations of ##\pi_i(b_j)## for ##j\neq i##. These vectors are also linearly independent: if ##\sum_j \alpha_j \pi_i(b_j)=0##, then ##\pi_i(\sum_j \alpha_j b_j)=0##. But this means ##\sum_j \alpha_j b_j## lies in the span of ##b_i##, which by linear independence of the original basis is impossible unless all the coefficients are zero. Hence they are linearly independent.

And any integer span of linearly independent vectors forms a lattice (note the integer span of linearly dependent vectors may not form a lattice, unless you happen to get lucky)
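Continuing the hypothetical sketch from above (reusing `B`, `Bstar`, and `pi`), one can check numerically that a projected lattice point has integer coordinates in the basis ##\pi_i(b_j)##, i.e. the awkward denominators are absorbed into the new basis vectors:

```python
# Basis of the projected lattice: pi(b_j) for j >= 2, projecting out b_1
Pbasis = np.array([pi(2, B[j], Bstar) for j in range(1, len(B))])

x = 2 * B[0] - 1 * B[1] + 3 * B[2]   # a lattice point
px = pi(2, x, Bstar)                 # its projection

# Solve px = sum_j coeffs[j] * Pbasis[j]; the solution is exact here
coeffs, *_ = np.linalg.lstsq(Pbasis.T, px, rcond=None)
print(coeffs)                        # expected: the integers [-1.  3.]
```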

#### Peter_Newman
Office_Shredder said:
Suppose ##\sum_{i\geq 2} \alpha_i Pb_i=0##. Then ##\sum_{i\geq 2} \alpha_i b_i=\alpha_1 b_1## for some ##\alpha_1##, since it has to live in the kernel of ##P##, which is the span of ##b_1##. But the original basis is linearly independent, so all the coefficients must be zero.
Read in reverse order, I understand it. But not quite as it stands there.
the original basis is linearly independent, so all the coefficients must be zero
Works! This is exactly the definition of a basis.
##\sum_{i\geq 2} \alpha_i b_i=\alpha_1 b_1## for some ##\alpha_1##, since it has to live in the kernel of ##P##
The kernel of ##P## is, or better satisfies, ##\sum_{i=1}^n \alpha_i b_i = 0##, so you have simply rewritten this statement?
Suppose ##\sum_{i\geq 2} \alpha_i Pb_i=0##
Why can we assume that this is true? I would rather conclude that by reading the proof backwards?!

So the goal is to prove that ##\pi_i(\Lambda)## is a lattice. Let's start real slow. Do you agree that ##\pi_i(\Lambda)## is spanned by integer combinations of ##\pi_i(b_j)## for ##j\neq i##, and also that if those are linearly independent, you have proven that this is an ##(n-1)##-dimensional lattice?

#### Peter_Newman
Office_Shredder said:
Do you agree that ##\pi_i(\Lambda)## is spanned by integer combinations of ##\pi_i(b_j)## for ##j\neq i##
Yes, that could fit. A point of ##\Lambda## is a linear combination of the lattice basis vectors in which the scalar in front of each basis vector is an integer, by definition. The span of the projection then indeed comes down to integer linear combinations of ##\pi_i(b_j)##. The projection subtracts a component from ##\Lambda##, so we can't have full rank anymore, right? So only the components that lie in the space onto which we project remain.

Last edited:
So all that's left is to prove that they are linearly independent. Suppose a linear combination ##\sum_j \alpha_j \pi_i(b_j)=0##. Use linearity of ##\pi_i## to write down a vector in the kernel of ##\pi_i##. But what is the kernel of ##\pi_i##?

To give a non-example to see what the point is, suppose I have a "lattice" with basis vectors ##(1,0,0)##, ##(1,1,0)## and ##(1,\pi,0)##. I project the x-axis to 0: ##\pi_1(b_2)=(0,1,0)## and ##\pi_1(b_3)=(0,\pi,0)##. This does *not* form a lattice, because integer combinations of those two vectors form a dense subset of the y-axis. It's a non-example because the original thing was not a lattice either (the three vectors are linearly dependent), but this is the kind of situation you need to watch out for.
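The density in this non-example is easy to see numerically (a minimal sketch in Python/NumPy): the integer combinations land at heights ##m + n\pi## on the y-axis, and these come arbitrarily close together:

```python
import numpy as np

# Integer combinations m*(0,1,0) + n*(0,pi,0) sit at heights m + n*pi
vals = sorted(m + n * np.pi for m in range(-50, 51) for n in range(-15, 16))
print(min(np.diff(vals)))  # ~0.0089 already; shrinks toward 0 as ranges grow
```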

Office_Shredder said:
But what is the kernel of ##\pi_i##?
So the image of ##\pi_i## looks like this: ##\pi_{i}(x) = \sum_{j=i}^n \frac{\langle x, b_j^*\rangle}{\langle b_j^*, b_j^*\rangle}b_j^*##. Now we know that through the projection the components along ##b_1^*## to ##b_{i-1}^*##, respectively ##b_1## to ##b_{i-1}##, are lost.

The kernel (##\ker f := \{a \in A \mid f(a) = 0\}##) of a linear mapping ##f## consists of the solutions of the system ##f(a) = 0##. So if I pick any vector ##(a_1,\dots,a_n)^T##, then what is left after the projection ##\pi_{i}## is of the form ##(0_{1},\dots,0_{i-1},\beta_{i},\dots,\beta_n)^T## (only the indices are important here). So that means in the kernel of ##\pi_i## are ##\sum_{j=1}^{i-1}a_j b_j = 0##. Is this possible to say?
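This kernel description can be checked against the earlier hypothetical sketch (reusing `B`, `Bstar`, and `pi` from there): for ##i = 2##, ##\pi_2## sends the multiples of ##b_1## to zero and nothing else among the basis vectors:

```python
# With i = 2, the kernel of pi_2 should be span(b_1)
print(np.allclose(pi(2, B[0], Bstar), 0))         # b_1 is in the kernel: True
print(np.allclose(pi(2, -2.5 * B[0], Bstar), 0))  # so is any multiple: True
print(np.allclose(pi(2, B[1], Bstar), 0))         # b_2 is not: False
```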

Peter_Newman said:
So that means in the kernel of ##\pi_i## are ##\sum_{j=1}^{i-1}a_j b_j = 0##. Is this possible to say?

Why did you write =0? The kernel is a set of vectors, not an equation.

#### Peter_Newman
With this I wanted to express that the kernel consists of the solutions of the homogeneous system of equations. But was the rest right so far?

Peter_Newman said:
With this I wanted to express that the kernel consists of the solutions of the homogeneous system of equations. But was the rest right so far?

The equation ##\sum_{j<i} a_j b_j = 0## has only one solution, namely ##a_j = 0## for every ##j##.
We know this because the ##b##'s are linearly independent.

So I'm not sure what you mean here.

#### Peter_Newman
Hey @Office_Shredder, the "rest" was related to my post #20.

Yes, the linear independence of the vectors then implies that there can be only this solution. That makes sense!
