Local decomposition of unitary matrices

chiyosdad
I know that an arbitrary unitary matrix A can be written as a product of several "local" unitary matrices ##A_i##, local in the sense that each acts nontrivially on only a small, constant number of vector components; in fact it suffices to take that constant to be 2, which is best possible. For example,
$$A=\left(\begin{array}{cccc} a & 0 & b & 0 \\ 0 & 1 & 0 & 0 \\ c & 0 & d & 0 \\ 0 & 0 & 0 & 1 \end{array}\right)$$
is such a local matrix, because it acts only on the basis elements ##e_1## and ##e_3##. Another way to think of it is that it is nontrivial only on a subspace of dimension 2.
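As a quick sanity check, here is a minimal numpy sketch of my own (not from any reference) that embeds a 2x2 unitary block on the ##(e_1, e_3)## coordinates; the rotation angle and variable names are purely illustrative stand-ins for the abstract ##a, b, c, d##:

```python
import numpy as np

# Illustrative 2x2 unitary block for the (e1, e3) subspace.
# Any unitary 2x2 block would do; this one is a plain rotation.
theta = 0.3
a, b = np.cos(theta), -np.sin(theta)
c, d = np.sin(theta), np.cos(theta)

# Embed the 2x2 block into a 4x4 matrix that is trivial elsewhere.
# Indices 0 and 2 correspond to the basis vectors e1 and e3.
A = np.eye(4, dtype=complex)
A[np.ix_([0, 2], [0, 2])] = [[a, b], [c, d]]

# A is unitary and differs from the identity only on span{e1, e3}.
assert np.allclose(A.conj().T @ A, np.eye(4))
nontrivial_rows = [i for i in range(4) if not np.allclose(A[i], np.eye(4)[i])]
print(nontrivial_rows)   # [0, 2] -> a "local" (two-level) unitary
```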

So the picture is that these gigantic linear transformations on ##\mathbb{C}^n## (##n## very large) are built out of smaller transformations, each acting on a small, constant-dimension subspace. As ##n## grows, the number of local ##A_i##'s needed grows too. I have an upper bound, in terms of ##n##, on the number of ##A_i## needed in the decomposition, but that is not what I am interested in. What I would like to know is the least number of ##A_i##'s needed to decompose a specific matrix, or a class of matrices. I know of no machinery that makes this easy, and looking around online for a bit didn't help.
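For concreteness, here is a hedged numpy sketch of the standard elimination-style construction behind the roughly ##n(n-1)/2## upper bound (two-level factors plus a diagonal of phases); the function name `two_level_factors` and all details are my own illustration, not a library routine:

```python
import numpy as np

def two_level_factors(U, tol=1e-12):
    """Eliminate the below-diagonal entries of a unitary U with 2x2
    rotations acting on pairs of rows.  Returns a list Gs of two-level
    unitaries and a diagonal phase matrix D such that
        U = Gs[0]^H @ Gs[1]^H @ ... @ Gs[-1]^H @ D.
    """
    U = np.array(U, dtype=complex)
    n = U.shape[0]
    Gs = []
    for j in range(n - 1):
        for i in range(j + 1, n):
            if abs(U[i, j]) < tol:
                continue
            u, v = U[j, j], U[i, j]
            r = np.hypot(abs(u), abs(v))
            # 2x2 unitary that sends the column pair (u, v) to (r, 0).
            block = np.array([[np.conj(u), np.conj(v)],
                              [v,          -u        ]]) / r
            G = np.eye(n, dtype=complex)
            G[np.ix_([j, i], [j, i])] = block
            U = G @ U            # zero out entry (i, j)
            Gs.append(G)
    return Gs, U                 # U is now diagonal (unit-modulus phases)

# Example: a random unitary obtained from a QR decomposition.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
Q, _ = np.linalg.qr(X)
Gs, D = two_level_factors(Q)
print(len(Gs))                   # at most n(n-1)/2 = 6 factors here

# Reconstruct Q from the factors to check the decomposition.
R = np.eye(4, dtype=complex)
for G in Gs:
    R = R @ G.conj().T
assert np.allclose(R @ D, Q)
```

Counting `len(Gs)` for a given matrix only gives an upper bound on the minimum; it says nothing about lower bounds, which is the part I find hard.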

So the question is: given a specific unitary matrix, how do I go about calculating the minimum number of ##A_i##'s needed in the decomposition, and given a class of matrices, how do I figure that out? Is there some reading on this subject I can find online?
 
Proving lower bounds is generically difficult. You mentioned a specific matrix, which means this could be done more easily depending on the properties of that matrix; e.g. it could just be ##1##. In the general case one needs to find a subsystem of values which have to be calculated regardless of these properties and count the steps for those calculations. The bigger this subsystem, the better the lower bound. However, it is not clear what such a subsystem must look like.
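For the trivial cases of a specific matrix the minimum can be read off directly; here is an illustrative sketch (the function name and return convention are mine, and it deliberately gives up on the genuinely hard cases):

```python
import numpy as np

def min_two_level_count_trivial_cases(U, tol=1e-12):
    """Cheap check for a *specific* unitary U: returns 0 if U is the
    identity, 1 if U differs from the identity on at most two
    coordinates (i.e. U is itself 'local'), and None otherwise,
    where the general lower-bound question is much harder."""
    n = U.shape[0]
    I = np.eye(n)
    # Coordinates where either the row or the column differs from identity.
    moved = [k for k in range(n)
             if not (np.allclose(U[k], I[k], atol=tol)
                     and np.allclose(U[:, k], I[:, k], atol=tol))]
    if len(moved) == 0:
        return 0          # U = 1, no factors needed
    if len(moved) <= 2:
        return 1          # U is already a single two-level unitary
    return None           # at least 2 factors; exact minimum unknown here
```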
 