- #1

ARoyC

In summary, the thread explains how an operator ##\hat{A}## written in bra-ket form corresponds to its matrix representation, with matrix elements ##A_{jk}=\langle j|\hat{A}|k \rangle##, and how the completeness relation lets one pass back and forth between the two.


- #2

vanhees71

$$\hat{A}=v_3 |0 \rangle \langle 0| + (v_1-\mathrm{i} v_2) |0 \rangle \langle 1| + (v_1+\mathrm{i} v_2) |1 \rangle \langle 0| - v_3 |1 \rangle \langle 1|.$$

The matrix elements in your matrix are then taken with respect to the basis ##(|0 \rangle,|1 \rangle)##.

$$(A_{jk})=\langle j|\hat{A}|k \rangle, \quad j,k \in \{0,1 \}.$$

To see this, simply use ##\langle j|k \rangle=\delta_{jk}##. Then you get, e.g.,

$$A_{01}=\langle 0|\hat{A}|1 \rangle=v_1-\mathrm{i} v_2.$$
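The calculation above can be checked numerically. The following is a minimal NumPy sketch (sample values for ##v_1, v_2, v_3## are arbitrary, and the variable names are illustrative): build ##\hat{A}## from its bra-ket expansion, then take matrix elements ##A_{jk}=\langle j|\hat{A}|k \rangle##.

```python
import numpy as np

# Basis kets |0>, |1> as column vectors (an illustrative basis choice).
ket0 = np.array([[1.0], [0.0]], dtype=complex)
ket1 = np.array([[0.0], [1.0]], dtype=complex)

# Arbitrary sample values for v1, v2, v3.
v1, v2, v3 = 0.5, -1.0, 2.0

# A = v3|0><0| + (v1 - i v2)|0><1| + (v1 + i v2)|1><0| - v3|1><1|
A = (v3 * ket0 @ ket0.conj().T
     + (v1 - 1j * v2) * ket0 @ ket1.conj().T
     + (v1 + 1j * v2) * ket1 @ ket0.conj().T
     - v3 * ket1 @ ket1.conj().T)

# Matrix elements A_jk = <j|A|k>, using <j|k> = delta_jk.
basis = [ket0, ket1]
A_elems = np.array([[(bj.conj().T @ A @ bk).item() for bk in basis]
                    for bj in basis])

print(A_elems)  # A_elems[0, 1] reproduces v1 - i*v2
```

Since ##(|0\rangle, |1\rangle)## is orthonormal, each cross term of the expansion contributes exactly one entry of the matrix.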

- #3

ARoyC

vanhees71 said:

$$\hat{A}=v_3 |0 \rangle \langle 0| + (v_1-\mathrm{i} v_2) |0 \rangle \langle 1| + (v_1+\mathrm{i} v_2) |1 \rangle \langle 0| - v_3 |1 \rangle \langle 1|.$$

The matrix elements in your matrix are then taken with respect to the basis ##(|0 \rangle,|1 \rangle)##.

$$(A_{jk})=\langle j|\hat{A}|k \rangle, \quad j,k \in \{0,1 \}.$$

To see this, simply use ##\langle j|k \rangle=\delta_{jk}##. Then you get, e.g.,

$$A_{01}=\langle 0|\hat{A}|1 \rangle=v_1-\mathrm{i} v_2.$$

Oh! So we can go from the RHS of the equation to the LHS. Can't we do the reverse?

- #4

vanhees71

$$\hat{A}=\sum_{j,k} |j \rangle \langle j|\hat{A}|k \rangle \langle k| = \sum_{jk} A_{jk} |j \rangle \langle k|.$$

The mapping from operators to matrix elements with respect to a complete orthonormal system is one-to-one. As with very many formal manipulations in QT, it just uses the completeness relation,

$$\sum_j |j \rangle \langle j|=\hat{1}.$$
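The reverse direction can also be checked numerically. Here is a minimal NumPy sketch (sample matrix elements are hypothetical): starting from a table of matrix elements ##A_{jk}##, rebuild the operator as ##\hat{A} = \sum_{jk} A_{jk} |j\rangle\langle k|##, then verify that taking matrix elements again recovers the same table, illustrating that the mapping is one-to-one.

```python
import numpy as np

# Orthonormal basis |0>, |1>; completeness: sum_j |j><j| = 1.
basis = [np.array([1.0, 0.0], dtype=complex),
         np.array([0.0, 1.0], dtype=complex)]

# Hypothetical sample matrix elements A_jk.
A_jk = np.array([[2.0, 0.5 - 1.0j],
                 [0.5 + 1.0j, -2.0]])

# Rebuild the operator: A = sum_jk A_jk |j><k|.
A = sum(A_jk[j, k] * np.outer(basis[j], basis[k].conj())
        for j in range(2) for k in range(2))

# Taking matrix elements again returns the original table.
recovered = np.array([[basis[j].conj() @ A @ basis[k]
                       for k in range(2)] for j in range(2)])

print(np.allclose(recovered, A_jk))
```

In the standard basis the rebuilt operator is literally the matrix ##(A_{jk})##; in any other orthonormal basis the same construction gives the same operator expressed differently.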

- #5

ARoyC

vanhees71 said:

$$\hat{A}=\sum_{j,k} |j \rangle \langle j|\hat{A}|k \rangle \langle k| = \sum_{jk} A_{jk} |j \rangle \langle k|.$$

The mapping from operators to matrix elements with respect to a complete orthonormal system is one-to-one. As with very many formal manipulations in QT, it just uses the completeness relation,

$$\sum_j |j \rangle \langle j|=\hat{1}.$$

How are we getting the very first equality, that is, ##\hat{A}=\sum_{j,k} |j \rangle \langle j|\hat{A}|k \rangle \langle k|##?

- #6

Haborix

- #7

ARoyC

Haborix said:

Spectral decomposition, also known as eigendecomposition, is a method where a matrix is broken down into its constituent elements based on its eigenvalues and eigenvectors. For a square matrix ##A##, it can be expressed as ##A = V \Lambda V^{-1}##, where ##V## is a matrix whose columns are the eigenvectors of ##A##, and ##\Lambda## is a diagonal matrix containing the eigenvalues of ##A##.

Spectral decomposition is used in various fields such as quantum mechanics, vibration analysis, and facial recognition. It helps in simplifying complex matrix operations, solving differential equations, and in principal component analysis (PCA), where it is used to reduce the dimensionality of data while preserving its variance.

For a matrix to be spectrally decomposed, it must be a square matrix. Additionally, the matrix should have a complete set of linearly independent eigenvectors. This is always possible for normal matrices (matrices that commute with their conjugate transpose), including symmetric matrices, Hermitian matrices, and unitary matrices.

While both spectral decomposition and singular value decomposition (SVD) are techniques for matrix factorization, they are used in different contexts. Spectral decomposition is applicable only to square matrices and involves eigenvalues and eigenvectors. In contrast, SVD can be applied to any m x n matrix and decomposes it into ##U \Sigma V^*##, where ##U## and ##V## are unitary matrices and ##\Sigma## is a diagonal matrix containing the singular values.

Non-diagonalizable matrices cannot undergo spectral decomposition in the strict sense because they do not have a full set of linearly independent eigenvectors. However, they can be decomposed using the Jordan canonical form, which generalizes the concept of eigendecomposition by including Jordan blocks corresponding to each eigenvalue.

Oh, okay, thanks a lot!
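Both decompositions described above are easy to try in NumPy. This is a minimal sketch with a sample Hermitian (hence normal) matrix of the same form discussed earlier, ##\begin{pmatrix} v_3 & v_1-\mathrm{i}v_2 \\ v_1+\mathrm{i}v_2 & -v_3 \end{pmatrix}##; the numbers are arbitrary. For a Hermitian matrix, `eigh` gives real eigenvalues with orthonormal eigenvectors, so ##V^{-1} = V^\dagger##, and the singular values coincide with the absolute values of the eigenvalues.

```python
import numpy as np

# Sample Hermitian matrix (arbitrary values).
A = np.array([[2.0, 0.5 - 1.0j],
              [0.5 + 1.0j, -2.0]])

# Spectral decomposition A = V Lambda V^{-1}.
# eigh is for Hermitian matrices: real eigenvalues, orthonormal eigenvectors.
eigvals, V = np.linalg.eigh(A)
Lam = np.diag(eigvals)
print(np.allclose(A, V @ Lam @ V.conj().T))   # V^{-1} = V^dagger here

# SVD A = U Sigma W^dagger works for any m x n matrix.
U, sigma, Wh = np.linalg.svd(A)
print(np.allclose(A, U @ np.diag(sigma) @ Wh))

# For a normal matrix, the singular values are |eigenvalues|.
print(np.allclose(np.sort(sigma), np.sort(np.abs(eigvals))))
```

For a non-Hermitian square matrix one would use `np.linalg.eig` instead, and ##V^{-1}## must then be computed explicitly.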
