# Direct sum decomposition into orthogonal subspaces

sindhuja
Hello All, I am trying to understand quantum information processing. I am reading the book "Quantum Computing: A Gentle Introduction" by Eleanor Rieffel and Wolfgang Polak, and I would like to understand the following passage better:

"Let V be the N = 2^n dimensional vector space associated with an n-qubit system. Any device that measures this system has an associated direct sum decomposition into orthogonal subspaces V = S1 ⊕ · · · ⊕ Sk for some k ≤ N. The number k corresponds to the maximum number of possible measurement outcomes for a state measured with that particular device."

Could anyone explain the intuition behind this statement? I think it is a quite simple beginner-level concept, but I have not been able to find a satisfactory explanation for it. Thank you!

I don't know this book, but I guess what's meant is the following: if you measure some observable (in this case on a system of ##n## qubits), this observable is described by a self-adjoint operator on the ##2^n##-dimensional Hilbert space describing the ##n##-qubit system. You can think of it as a matrix ##\hat{A}## acting on ##\mathbb{C}^{2^n}## column vectors, which are the components of a vector with respect to an arbitrary orthonormal basis (e.g., the product basis of the ##n## qubits). The possible outcomes of a measurement are the eigenvalues of this operator/matrix. To each eigenvalue ##a## there is at least one eigenvector, there is always a basis of eigenvectors, and this basis can always be chosen to be orthonormal. The eigenvectors belonging to each eigenvalue ##a_i## span a subspace ##S_i=\mathrm{Eig}(a_i)##. Eigenvectors belonging to different eigenvalues are always orthogonal to each other (again because the matrix is self-adjoint). Thus the entire vector space decomposes into the orthogonal sum of these eigenspaces, ##V=S_1 \oplus S_2 \oplus \cdots \oplus S_k##, where the ##a_i## with ##i \in \{1,\ldots,k\}## are the distinct eigenvalues. Of course the dimensions of these subspaces are such that
$$\sum_{i=1}^k \dim \mathrm{Eig}(a_i) = \dim V = 2^n.$$
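This eigenspace bookkeeping is easy to check numerically. Below is a minimal NumPy sketch; the observable ##Z \otimes Z## on a 2-qubit space (##n=2##, ##N=4##) is my own illustrative choice, picked because it has degenerate eigenvalues, so that ##k < N##:

```python
import numpy as np

# Illustrative 2-qubit observable: Z (x) Z has eigenvalues +1 and -1,
# each doubly degenerate, so k = 2 distinct outcomes while N = 2^n = 4.
Z = np.array([[1, 0], [0, -1]])
A = np.kron(Z, Z)  # self-adjoint (here real symmetric) 4x4 matrix

# eigh is for Hermitian matrices: real eigenvalues, orthonormal eigenvectors
eigvals, eigvecs = np.linalg.eigh(A)

# Group the eigenvalues: each group of equal eigenvalues labels one S_i = Eig(a_i)
distinct = np.unique(np.round(eigvals, 10))
dims = [int(np.sum(np.isclose(eigvals, a))) for a in distinct]

print(distinct)        # the k distinct measurement outcomes a_1, ..., a_k
print(dims, sum(dims)) # dim Eig(a_i) for each i, summing to dim V = 4
```

The columns of `eigvecs` returned by `eigh` already form an orthonormal basis of eigenvectors, so the grouping above is exactly the decomposition ##V = S_1 \oplus \cdots \oplus S_k##.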

Haborix
You can also think of it as saying that when there are degenerate eigenvalues, a measuring device capable of measuring only the associated observable cannot give complete state information. The measuring device is incapable of resolving the decomposition of the state within the degenerate subspace, ##S_i##.
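As a concrete sketch of this point (the observable ##Z \otimes Z## and the two states below are my own illustrative choices, not from the thread): the 2-qubit observable ##Z \otimes Z## has a doubly degenerate outcome ##+1## with eigenspace spanned by ##|00\rangle## and ##|11\rangle##, and two quite different states inside that subspace give identical statistics for this measurement:

```python
import numpy as np

# Projector onto the degenerate +1 eigenspace of Z (x) Z, spanned by |00> and |11>
e00 = np.array([1, 0, 0, 0], dtype=complex)  # |00>
e11 = np.array([0, 0, 0, 1], dtype=complex)  # |11>
P_plus = np.outer(e00, e00.conj()) + np.outer(e11, e11.conj())

psi1 = e00                       # product state |00>
psi2 = (e00 + e11) / np.sqrt(2)  # entangled Bell state in the same eigenspace

# Born rule: probability of outcome +1 is <psi| P |psi>
p1 = np.vdot(psi1, P_plus @ psi1).real
p2 = np.vdot(psi2, P_plus @ psi2).real
print(p1, p2)  # both are (numerically) 1.0: this device cannot tell psi1 from psi2
```

Distinguishing the two states would require measuring a second observable that does not commute trivially on that subspace.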


## What is direct sum decomposition?

Direct sum decomposition is a way to express a vector space as a direct sum of its subspaces. This means that every vector in the original vector space can be written uniquely as a sum of vectors, one taken from each of these subspaces. Mathematically, if $$V$$ is a vector space and $$V_1, V_2, ..., V_n$$ are subspaces, then $$V = V_1 \oplus V_2 \oplus ... \oplus V_n$$.

## What does it mean for subspaces to be orthogonal?

Subspaces are orthogonal if every vector in one subspace is orthogonal to every vector in the other subspaces. Orthogonality is usually defined using an inner product, such that if $$V_1$$ and $$V_2$$ are orthogonal subspaces, for any $$v_1 \in V_1$$ and $$v_2 \in V_2$$, the inner product $$\langle v_1, v_2 \rangle = 0$$.
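Because the inner product is linear in each argument, it suffices to check this condition on spanning sets: if every basis vector of one subspace is orthogonal to every basis vector of the other, the subspaces are orthogonal. A small NumPy check (the particular bases are illustrative):

```python
import numpy as np

# Bases of two subspaces of R^3: V1 = xy-plane, V2 = z-axis
B1 = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
B2 = [np.array([0.0, 0.0, 1.0])]

# By bilinearity, <v1, v2> = 0 for all v1 in V1, v2 in V2
# iff it holds pairwise on the basis vectors.
orthogonal = all(np.isclose(np.dot(u, w), 0.0) for u in B1 for w in B2)
print(orthogonal)  # True
```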

## Why is orthogonal decomposition important?

Orthogonal decomposition is important because it simplifies many problems in linear algebra and functional analysis. It allows for the separation of a vector space into simpler, non-overlapping components, making it easier to study and solve linear equations, perform projections, and analyze properties like eigenvalues and eigenvectors.

## How do you find an orthogonal decomposition of a vector space?

To find an orthogonal decomposition, one typically starts with a basis for the vector space and uses methods like the Gram-Schmidt process to generate an orthogonal (or orthonormal) basis. These basis vectors then span orthogonal subspaces, and the original vector space can be decomposed as a direct sum of these orthogonal subspaces.
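The Gram-Schmidt step can be sketched in a few lines of NumPy. This is the classical (textbook) variant, assuming the input vectors are linearly independent; the example vectors are my own:

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of linearly independent vectors (classical Gram-Schmidt)."""
    basis = []
    for v in vectors:
        w = v.astype(float).copy()
        for u in basis:
            w -= np.dot(u, w) * u  # subtract the projection of w onto each earlier u
        basis.append(w / np.linalg.norm(w))
    return basis

# Example in R^3: the first two vectors span a plane, the third is off that plane
vecs = [np.array([1.0, 1.0, 0.0]),
        np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 0.0, 2.0])]
q = gram_schmidt(vecs)

# All pairwise inner products are (numerically) zero, so span{q[0], q[1]}
# and span{q[2]} give an orthogonal decomposition of R^3.
print(np.round([np.dot(q[0], q[1]), np.dot(q[0], q[2]), np.dot(q[1], q[2])], 12))
```

In numerical practice one would use a QR factorization (e.g. `np.linalg.qr`) instead, since classical Gram-Schmidt loses orthogonality in floating point for ill-conditioned inputs.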

## Can you give an example of direct sum decomposition into orthogonal subspaces?

Consider the vector space $$\mathbb{R}^3$$ with the standard inner product. Let $$V_1$$ be the subspace spanned by $$\mathbf{e}_1 = (1, 0, 0)$$ and $$\mathbf{e}_2 = (0, 1, 0)$$, and let $$V_2$$ be the subspace spanned by $$\mathbf{e}_3 = (0, 0, 1)$$. These subspaces are orthogonal because the inner product of any vector in $$V_1$$ with any vector in $$V_2$$ is zero. Therefore, $$\mathbb{R}^3 = V_1 \oplus V_2$$ is a direct sum decomposition into orthogonal subspaces.
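This example can be verified with orthogonal projectors: projecting onto each subspace splits any vector into its unique components, which are orthogonal and sum back to the original vector (a minimal sketch, with an arbitrarily chosen test vector):

```python
import numpy as np

# Orthogonal projectors for the example: V1 = span{e1, e2}, V2 = span{e3}
P1 = np.diag([1.0, 1.0, 0.0])  # projector onto V1 (the xy-plane)
P2 = np.diag([0.0, 0.0, 1.0])  # projector onto V2 (the z-axis)

v = np.array([3.0, -1.0, 4.0])
v1, v2 = P1 @ v, P2 @ v        # the unique components of v in V1 and V2

print(v1, v2)                  # (3, -1, 0) and (0, 0, 4)
print(np.dot(v1, v2))          # 0.0: the components are orthogonal
print(np.allclose(v1 + v2, v)) # True: v is recovered as v1 + v2
```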
