# Looking for a nice basis for the general linear group

1. Jan 26, 2016

### Strilanc

I'm looking for a nice set of basis matrices $B_{i,j}$ that span the $n \times n$ matrices under linear combination. The nice property I want them to satisfy is something like $B_{i,j} \cdot B_{a,b} = B_{i+a, j+b}$, i.e. I want multiplication of two basis matrices to turn into a simple computation inside the indices (using a single simple binary operator).

For example, define $U^{\oplus x} = \otimes_k^{\lg_2 n} U^{2^k \land x}$, where $\otimes_k^b f(k) = f(0) \otimes f(1) \otimes ... \otimes f(b-1)$ is a Kronecker product expression and $a \land b$ returns the number of bit positions where both $a$ and $b$ have a 1-bit in binary (so $2^k \land x$ picks out the $k$-th bit of $x$). Then an almost-nice basis, for a power-of-2 sized $n$, is $B_{i,j} = X^{\oplus i} Z^{\oplus j}$, where $X$ and $Z$ are the usual Pauli matrices. This is the basis of observables that quantum superdense coding is typically done in. Consider:

$B_{i,j} \cdot B_{a,b}$
$= X^{\oplus i} Z^{\oplus j} X^{\oplus a} Z^{\oplus b}$
$= X^{\oplus i} X^{\oplus a} Z^{\oplus j} Z^{\oplus b} (-1)^{a \land j}$
$= X^{\oplus i \oplus a} Z^{\oplus j \oplus b}(-1)^{a \land j}$
$= B_{i \oplus a, j \oplus b} (-1)^{a \land j}$

Which is pretty close to having the operation happen entirely inside the indices with a single operator. There's just that nasty extra sign factor. (The same thing happens if you work from the proper Pauli set $I, X, Y, Z$ instead of $I, X, Z$, and $XZ$; the factor just gets more complicated.)
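Here's a minimal numpy sketch of the construction above, checking the relation $B_{i,j} \cdot B_{a,b} = (-1)^{a \land j} B_{i \oplus a, j \oplus b}$ exhaustively for $n = 4$. The helper names `popcount_and` and `tensor_power` are mine, not standard:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def popcount_and(a, b):
    """The wedge operator: number of bit positions where a and b both have a 1."""
    return bin(a & b).count('1')

def tensor_power(U, x, num_bits):
    """U^{⊕x}: Kronecker product over bit positions, using U where x has a 1-bit."""
    result = np.array([[1]])
    for k in range(num_bits):
        factor = U if (x >> k) & 1 else np.eye(2, dtype=int)
        result = np.kron(result, factor)
    return result

def B(i, j, num_bits=2):
    return tensor_power(X, i, num_bits) @ tensor_power(Z, j, num_bits)

# Check B_{i,j} B_{a,b} = (-1)^{a∧j} B_{i⊕a, j⊕b} for n = 4 (num_bits = 2).
n = 4
for i in range(n):
    for j in range(n):
        for a in range(n):
            for b in range(n):
                lhs = B(i, j) @ B(a, b)
                rhs = (-1) ** popcount_and(a, j) * B(i ^ a, j ^ b)
                assert np.array_equal(lhs, rhs)
```

The sign factor falls out of commuting $Z^{\oplus j}$ past $X^{\oplus a}$: each bit position where both $a$ and $j$ are set contributes one $XZ = -ZX$ swap.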

Another almost-nice example is the combination of "offset" and "stride" matrices. To make an offset matrix, take the identity matrix and cycle its entries vertically by the given offset. The stride matrices work similarly, except that instead of setting every $(i, j)$ entry satisfying $i = j + d \pmod{n}$ to 1 for a given offset $d$, you set every entry satisfying $i = cj \pmod{n}$ to 1 for a given stride $c$.

For example, $\text{Offset}_1 = \begin{bmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}$ and $\text{Stride}_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{bmatrix}$ when working with $n = 3$. Offset matrices add their offsets when combined, whereas stride matrices multiply their strides.
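As a quick sketch, the two families can be built in numpy directly from the index conditions above (the function names `offset` and `stride` are mine):

```python
import numpy as np

def offset(d, n):
    """Offset_d: entry (i, j) is 1 iff i ≡ j + d (mod n)."""
    M = np.zeros((n, n), dtype=int)
    for j in range(n):
        M[(j + d) % n, j] = 1
    return M

def stride(c, n):
    """Stride_c: entry (i, j) is 1 iff i ≡ c·j (mod n)."""
    M = np.zeros((n, n), dtype=int)
    for j in range(n):
        M[(c * j) % n, j] = 1
    return M

n = 3
# Offsets add when combined; strides multiply.
assert np.array_equal(offset(1, n) @ offset(2, n), offset((1 + 2) % n, n))
assert np.array_equal(stride(2, n) @ stride(2, n), stride((2 * 2) % n, n))
```

Acting on basis vectors, $\text{Offset}_d$ sends $e_j \mapsto e_{j+d}$ and $\text{Stride}_c$ sends $e_j \mapsto e_{cj}$, which is where the add/multiply composition rules come from.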

If you define $B_{i,j} = \text{Offset}_i \cdot \text{Stride}_{g^j}$ and $n$ is a prime with generator $g$, then:

$B_{i,j} \cdot B_{a,b}$
$= \text{Offset}_i \cdot \text{Stride}_{g^j} \cdot \text{Offset}_a \cdot \text{Stride}_{g^b}$
$= \text{Offset}_i \cdot \text{Offset}_{a g^j} \cdot \text{Stride}_{g^j} \cdot \text{Stride}_{g^b}$
$= B_{i + a g^j, j+b}$

Which does do all the computation inside the indices, but doesn't manage to do it with one operator. Also it misses the zero-stride matrices, since powers of the generator never hit zero.
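The derivation above can be checked numerically. This sketch repeats the offset/stride constructors so it's self-contained, and uses the (easily verified) fact that 2 generates the multiplicative group mod 5:

```python
import numpy as np

def offset(d, n):
    """Offset_d: entry (i, j) is 1 iff i ≡ j + d (mod n)."""
    M = np.zeros((n, n), dtype=int)
    for j in range(n):
        M[(j + d) % n, j] = 1
    return M

def stride(c, n):
    """Stride_c: entry (i, j) is 1 iff i ≡ c·j (mod n)."""
    M = np.zeros((n, n), dtype=int)
    for j in range(n):
        M[(c * j) % n, j] = 1
    return M

n, g = 5, 2  # n prime; g a generator of the multiplicative group mod n

def B(i, j):
    return offset(i, n) @ stride(pow(g, j, n), n)

# Check B_{i,j} B_{a,b} = B_{i + a g^j, j + b}. The first index lives mod n;
# the second is an exponent of g, so it lives mod n - 1.
for i in range(n):
    for j in range(n - 1):
        for a in range(n):
            for b in range(n - 1):
                lhs = B(i, j) @ B(a, b)
                rhs = B((i + a * pow(g, j, n)) % n, (j + b) % (n - 1))
                assert np.array_equal(lhs, rhs)
```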

Last edited: Jan 26, 2016
2. Jan 27, 2016

### Strilanc

Apparently the standard name for what I called "offset" matrices is "shift" matrices. And they're more typically combined with "clock" matrices. That gives you a system where $B_{i,j} \cdot B_{a,b} = B_{i+a, j+b} \omega^{aj}$, whose extra term is directly analogous to the sign error from the Pauli-matrix-XZ-tensor basis. So still not as simple as desired.
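A small numpy check of the shift/clock version, for $n = 3$ with $\omega = e^{2\pi i/n}$ (this is the standard generalized-Pauli construction; the variable names are mine):

```python
import numpy as np

n = 3
w = np.exp(2j * np.pi / n)  # primitive n-th root of unity

# Shift cycles the basis vectors (same as Offset_1); clock scales e_j by ω^j.
S = np.roll(np.eye(n), 1, axis=0)
C = np.diag(w ** np.arange(n))

def B(i, j):
    return np.linalg.matrix_power(S, i) @ np.linalg.matrix_power(C, j)

# Check B_{i,j} B_{a,b} = ω^{aj} B_{i+a, j+b}, the relation quoted above.
for i in range(n):
    for j in range(n):
        for a in range(n):
            for b in range(n):
                lhs = B(i, j) @ B(a, b)
                rhs = w ** (a * j) * B((i + a) % n, (j + b) % n)
                assert np.allclose(lhs, rhs)
```

The $\omega^{aj}$ phase comes from commuting $C^j$ past $S^a$ via $CS = \omega SC$, exactly mirroring the $(-1)^{a \land j}$ sign in the qubit case.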

3. Jan 28, 2016

### Staff: Mentor

Wouldn't this imply you could perform matrix multiplication in $O(n^2)$ essential multiplications, which cannot be done?

4. Jan 28, 2016

### Strilanc

It depends on how expensive the basis transformation is. If it takes $n^3$ time to get into the nice multiplication basis, or if the basis has $n^{1.5}$ elements due to necessary redundancy, then it wouldn't help much with multiplication. The basis examples I gave are in fact cheap to transform into, though, and applying them to fast multiplication is what I had in mind.