Eigenvectors and matrix inner product

In summary, the conversation discusses using the inner product formula to prove that the eigenvalues, elements, eigenfunctions, and eigenvectors of a matrix form a Hilbert space. The conversation also covers the general solution built from the matrix's eigenvectors and the conditions it must satisfy with respect to the inner product. The original poster is trying to prove that their matrix satisfies these conditions and is looking for a way to do so.
  • #1
SeM
Hi, I am trying to prove that the eigevalues, elements, eigenfunctions or/and eigenvectors of a matrix A form a Hilbert space. Can one apply the inner product formula :

\begin{equation}
\int_a^b x(t)\,\overline{y(t)}\, dt
\end{equation}

on the x and y coordinates of the eigenvectors [x_1, y_1] and [x_2, y_2]:

\begin{equation}
\int x_1\,\overline{y_1}\, dt
\end{equation}

\begin{equation}
\int x_2\,\overline{y_2}\, dt
\end{equation}

and thereby prove that the space of the matrix's eigenvectors is complete under this inner product and therefore forms a Hilbert space L^2[a,b]?

The reason I am asking about this is that I am looking for a way to prove that the matrix's elements, its eigenvectors, and its solution form a Hilbert space, L^2[a,b].

The solution has the general form:

\begin{equation}
\psi = \alpha_1 v_1 e^{\lambda_1 t} + \alpha_2 v_2 e^{\lambda_2 t}
\end{equation}

where v_1 and v_2 are the eigenvectors of the matrix and \alpha_1, \alpha_2 are constants.
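
(As a minimal sketch of how such a solution is assembled from the eigenpairs of a 2x2 matrix; the matrix A and the constants below are arbitrary illustrative choices, not taken from the thread.)

import numpy as np

# Hypothetical 2x2 system psi'(t) = A psi(t)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Eigenvalues lam[i] and eigenvectors V[:, i]
lam, V = np.linalg.eig(A)

# General solution psi(t) = a1*v1*e^{lam1 t} + a2*v2*e^{lam2 t}
a = np.array([1.0, -1.0])   # arbitrary constants alpha_1, alpha_2

def psi(t):
    return sum(a[i] * V[:, i] * np.exp(lam[i] * t) for i in range(2))

# Check that psi solves the system: psi'(t) = A psi(t)
t0, h = 0.5, 1e-6
dpsi = (psi(t0 + h) - psi(t0 - h)) / (2 * h)   # central-difference derivative
print(np.allclose(dpsi, A @ psi(t0), atol=1e-5))   # True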

In Kreyszig, Functional Analysis, p. 132, he says: "In Example 2.2-7 the functions were assumed to be real-valued. In certain cases, that restriction can be removed, to consider complex-valued functions. These functions form a vector space, which becomes an inner product space if we define:

\begin{equation}
\int_a^b x(t)\,\overline{y(t)}\, dt
\end{equation}

This also gives the corresponding norm:

\begin{equation}
\Big( \int_a^b |x(t)|^2\, dt \Big)^{1/2}
\end{equation}

And then Kreyszig ends with: "The completion of the metric space corresponding to the inner product (which I just gave above) is the complex space L^2[a,b]."
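
(A minimal numerical sketch of this inner product and its induced norm, approximating the integrals by quadrature; the interval [0, 1] and the example functions are arbitrary illustrative choices.)

import numpy as np
from scipy.integrate import quad

a, b = 0.0, 1.0   # hypothetical interval [a, b]

# <x, y> = integral from a to b of x(t) * conj(y(t)) dt,
# computed by integrating the real and imaginary parts separately
def l2_inner(x, y):
    re, _ = quad(lambda t: (x(t) * np.conj(y(t))).real, a, b)
    im, _ = quad(lambda t: (x(t) * np.conj(y(t))).imag, a, b)
    return re + 1j * im

# ||x|| = ( integral of |x(t)|^2 dt )^{1/2}, the norm induced by the inner product
def l2_norm(x):
    val, _ = quad(lambda t: abs(x(t))**2, a, b)
    return np.sqrt(val)

x = lambda t: np.exp(1j * t)   # a complex-valued example function
y = lambda t: t + 0.0j
print(l2_inner(x, y))
print(l2_norm(x))              # 1.0 on [0, 1], since |e^{it}| = 1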

I would like to prove that "my" matrix satisfies this condition too. So, because the general solution given above does not explicitly show that it satisfies the inner product, can I use the eigenvectors in the inner product formula to conclude that the matrix eigenvectors form a Hilbert space?
 
  • #2
I think you are confusing various things here. Given a matrix ##A## with an eigenvalue ##\lambda## and eigenvector ##\vec{v}##, so that ##A\vec{v}=\lambda\vec{v}##, then ##\vec{v}## of course spans a (one-dimensional) subspace. For different eigenvalues, you can take the direct sum of those eigenspaces. These will have an inner product if the original vector space has one. There is nothing to show here.
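
(A small sketch of this point, assuming the standard Hermitian inner product on ##\mathbb{C}^n##; the matrix below is an arbitrary illustrative choice. Eigenvectors are simply vectors in the ambient space, so they come with its inner product automatically.)

import numpy as np

# An arbitrary symmetric matrix as an illustration
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, V = np.linalg.eig(A)
v1, v2 = V[:, 0], V[:, 1]

# The eigenvectors live in the ambient space (here R^2 inside C^2),
# so they inherit its inner product <u, w> = sum_i u_i * conj(w_i).
# np.vdot conjugates its first argument.
print(np.vdot(v1, v1))   # squared norm of v1
print(np.vdot(v1, v2))   # ~0: for a symmetric matrix, eigenvectors
                         # belonging to distinct eigenvalues are orthogonal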
 

1. What is an eigenvector?

An eigenvector of a matrix is a nonzero vector that, when multiplied by the matrix, is only scaled and not rotated. In other words, the result is a scalar multiple of the original vector, so the eigenvector keeps pointing along the same line after the multiplication.
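
(A quick numerical check of this property; the matrix is an arbitrary example.)

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

lam, V = np.linalg.eig(A)
v = V[:, 0]          # an eigenvector of A

# Multiplying by A only scales v by its eigenvalue lam[0]
print(A @ v)
print(lam[0] * v)    # same vector: A @ v == lam[0] * v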

2. How is an eigenvector related to eigenvalues?

Eigenvectors are associated with eigenvalues, which are the scalar values by which the eigenvector is scaled during the matrix multiplication. Together, an eigenvalue and its eigenvector provide important information about the behavior of a matrix.

3. What is the significance of eigenvectors and eigenvalues?

Eigenvectors and eigenvalues are important in many fields, including physics, engineering, and computer science. They are used to solve systems of linear equations, find equilibrium points, and understand the behavior of dynamical systems.

4. How do you calculate eigenvectors?

To calculate eigenvectors, you first need to find the eigenvalues of the given matrix. Then, for each eigenvalue, you solve the equation (A - λI)x = 0, where A is the matrix, λ is the eigenvalue, and x is the eigenvector. The resulting vector(s) will be the eigenvector(s) corresponding to that eigenvalue.
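
(A sketch of this procedure for a hypothetical 2x2 matrix, finding each eigenvector as a null vector of A - λI via the SVD.)

import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Step 1: find the eigenvalues (the roots of det(A - lam*I) = 0)
eigenvalues = np.linalg.eigvals(A)

# Step 2: for each eigenvalue, solve (A - lam*I) x = 0.
# The solution is the right-singular vector belonging to the
# (numerically) zero singular value of A - lam*I.
for lam in eigenvalues:
    _, _, Vt = np.linalg.svd(A - lam * np.eye(2))
    x = Vt[-1]                                   # unit eigenvector for lam
    print(lam, x, np.allclose(A @ x, lam * x))   # ..., True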

5. What is the matrix inner product?

The inner product of two matrices, often called the Frobenius inner product (for vectors it reduces to the ordinary dot product), is a mathematical operation between two matrices of the same shape that results in a scalar value. It is calculated by multiplying the corresponding elements in the matrices and then summing the products. This operation is used in many applications, such as solving systems of equations and finding projections in linear algebra.
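
(A minimal sketch of this multiply-and-sum operation; the matrices are arbitrary examples, and the trace identity at the end is a standard equivalent formulation.)

import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Frobenius inner product: multiply corresponding entries, then sum
print(np.sum(A * B))       # 1*5 + 2*6 + 3*7 + 4*8 = 70

# Equivalent formulation via the trace: <A, B> = trace(A^T B)
print(np.trace(A.T @ B))   # 70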
