Similar to an eigenvalue problem: help

In summary, the conversation discusses the linear homogeneous ordinary differential equation system that describes the natural (free) response of a two-degree-of-freedom structural system. The solution is assumed to have the form of a constant vector multiplied by a trigonometric function of time, which turns the system into a problem very similar to an eigenvalue problem. The book states that the resulting values of \omega_n^2 are real and positive because the mass and stiffness matrices are real, symmetric and positive definite, but the deduction of this fact is not entirely clear and further discussion is needed to prove it.
  • #1
BobbyBear
Consider the following linear, homogeneous system of ordinary differential equations
(NB: this system describes the natural (free) response of a two-degree-of-freedom structural system made up of two lumped masses connected by elastic stiffnesses):

[tex]
\left( \begin{array}{cc}
m_1 & 0 \\
0 & m_2 \\
\end{array} \right)
\left( \begin{array}{c}
\ddot{u}_1 \\
\ddot{u}_2 \\
\end{array} \right)
+
\left( \begin{array}{cc}
(k_1 + k_2) & -k_2 \\
-k_2 & k_2 \\
\end{array} \right)
\left( \begin{array}{c}
u_1 \\
u_2 \\
\end{array} \right)
=
\left( \begin{array}{c}
0 \\
0 \\
\end{array} \right)
[/tex]

which I shall compactly write as:

[tex]
[m] \vec{\ddot{u}} + [k] \vec{u} = \vec{0}
[/tex]
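
To make the setup concrete, here is a minimal numerical sketch of these matrices. The values m_1 = 2, m_2 = 1, k_1 = 3, k_2 = 1 are arbitrary, purely for illustration:

[code]
import numpy as np

# Arbitrary illustrative values (not from any particular problem)
m1, m2 = 2.0, 1.0
k1, k2 = 3.0, 1.0

# Mass and stiffness matrices of the 2-DOF system
M = np.array([[m1, 0.0],
              [0.0, m2]])
K = np.array([[k1 + k2, -k2],
              [-k2,      k2]])

# Both matrices are real and symmetric; a Cholesky factorization
# succeeds only for positive definite matrices, so these calls
# double as a positive definiteness check.
assert np.allclose(M, M.T) and np.allclose(K, K.T)
np.linalg.cholesky(M)
np.linalg.cholesky(K)
[/code]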

Now, to solve, we assume a solution of the form:

[tex]
\vec{u}(t)=q_n(t) \vec{\phi _n}
[/tex]

where

[tex]
q_n(t) = A_n \cos (\omega _n t) + B_n \sin (\omega _n t)
[/tex]

and

[tex]
\vec{\phi _n}
[/tex]

is a constant vector.

Then
[tex]
\vec{\ddot{u}}(t)=-\omega _n^2 q_n(t) \vec{\phi _n}
[/tex]

Substituting into the differential system,

[tex]
\left[-\omega _n^2 [m] \vec{\phi _n} + [k] \vec{\phi _n} \right] q_n(t) = \vec{0}
[/tex]

from which

[tex]
-\omega _n^2 [m] \vec{\phi _n} + [k] \vec{\phi _n} = \vec{0}
[/tex]

[tex]
(-\omega _n^2 [m] + [k]) \vec{\phi _n} = \vec{0}
[/tex]

and for there to be a non-trivial solution, we need:

[tex]
\det(-\omega _n^2 [m] + [k]) = 0
[/tex]

from which we get two values of

[tex]
\omega _n^2
[/tex]
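
For this particular system, writing out the determinant explicitly gives a quadratic equation in [itex]\omega _n^2[/itex] (just the algebra spelled out, using the matrices above):

[tex]
\det\left(-\omega _n^2 [m] + [k]\right) =
\left( k_1 + k_2 - \omega _n^2 m_1 \right)\left( k_2 - \omega _n^2 m_2 \right) - k_2^2
= m_1 m_2 \, \omega _n^4 - \left[ m_2 (k_1 + k_2) + m_1 k_2 \right] \omega _n^2 + k_1 k_2 = 0
[/tex]

whose two roots are the two values of [itex]\omega _n^2[/itex] referred to above.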

Now, my book (Dynamics of Structures by Chopra) says that the [tex] \omega _n^2 [/tex] are real and positive because [k] and [m] are real, symmetric and positive definite.
I don't see how this deduction is made! I know that if a matrix [A] is real, symmetric and positive definite, then all its eigenvalues are real and positive (the proof is available in any standard linear algebra text).
But I just don't see how to prove the book's statement: the [tex] \omega _n^2 [/tex] are not the eigenvalues of any matrix, are they? (Even though this is a problem very similar to an eigenvalue problem.) Can someone help me see how that deduction is made?
 
  • #2
The matrix [itex][m][/itex] is invertible. Factor it out of

[tex]-\omega _n^2 [m] \vec{\phi _n} + [k] \vec{\phi _n} = \vec{0}[/tex]

to get

[tex]-\omega _n^2 \vec{\phi _n} + [m]^{-1}[k] \vec{\phi _n} = \vec{0}[/tex]

or

[tex][m]^{-1}[k] \vec{\phi _n} = \omega _n^2 \vec{\phi _n},[/tex]

which means the [itex]\omega_n^2[/itex] are eigenvalues of the matrix [itex][m]^{-1}[k][/itex].
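
As a numerical sanity check of this reformulation (not a proof of anything), here is a small sketch using arbitrary illustrative values; the eigenvalues of [itex][m]^{-1}[k][/itex] do come out real and positive in this example:

[code]
import numpy as np

# Arbitrary illustrative values: m1 = 2, m2 = 1, k1 = 3, k2 = 1
M = np.diag([2.0, 1.0])                # [m]
K = np.array([[3.0 + 1.0, -1.0],
              [-1.0,       1.0]])      # [k]

A = np.linalg.inv(M) @ K               # [m]^{-1}[k]
eigenvalues, eigenvectors = np.linalg.eig(A)   # standard eigenvalue problem

print(eigenvalues)                     # real and positive for these values
[/code]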
 
  • #3
Thank you Mute, I never thought of doing that!

But I'm still not quite able to reach the desired conclusion...

Okay so the [itex]
\omega_n^2
[/itex] are eigenvalues of the matrix [itex]
[m]^{-1}[k]
[/itex]

And I've read that every positive definite matrix is invertible, and its inverse is also positive definite, so that means that if [m] is positive definite, then so is [itex]
[m]^{-1}
[/itex]

So we have that both [itex]
[m]^{-1}
[/itex] and [itex]
[k]
[/itex] are positive definite, but in general that does not mean that [itex]
[m]^{-1} [k]
[/itex] is positive definite, does it?
I've read that if two matrices [M] and [N] are positive definite, then their product is positive definite if [itex]
[M] [N] = [N] [M]
[/itex], but this is not the case with [itex]
[m]^{-1}
[/itex] and [k]: they do not commute in general. So how can we see that [itex]
[m]^{-1} [k]
[/itex] is positive definite?
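
Indeed, with the arbitrary illustrative values used earlier, one can check numerically that [itex][m]^{-1}[/itex] and [k] do not commute, so that criterion doesn't seem to apply here:

[code]
import numpy as np

Minv = np.diag([1 / 2.0, 1 / 1.0])     # [m]^{-1} with m1 = 2, m2 = 1
K = np.array([[3.0 + 1.0, -1.0],
              [-1.0,       1.0]])      # [k] with k1 = 3, k2 = 1

print(np.allclose(Minv @ K, K @ Minv))  # False: the matrices do not commute
[/code]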

Thanks for your help!
 
  • #4
Oh, but wait! I just realized that even though [m] and [k] are real and symmetric, [itex] [m]^{-1}[k] [/itex] is not even symmetric, so proving that [itex] [m]^{-1}[k] [/itex] is positive definite wouldn't be of any use, would it, because without symmetry we don't even know that its eigenvalues are real...

[tex] [m]^{-1}[k] =

\left( \begin{array}{cc}
1/m_1 & 0 \\
0 & 1/m_2 \\
\end{array} \right)

\left( \begin{array}{cc}
(k_1 + k_2) & -k_2 \\
-k_2 & k_2 \\
\end{array} \right)

=
\left( \begin{array}{cc}
(k_1 + k_2)/m_1 & -k_2/m_1 \\
-k_2/m_2 & k_2/m_2 \\
\end{array} \right)



[/tex]

So how do we see that the eigenvalues of [itex] [m]^{-1}[k] [/itex] are real and positive?
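
For what it's worth, a quick numerical check of this explicit matrix (again with the arbitrary illustrative values m1 = 2, m2 = 1, k1 = 3, k2 = 1) does give real, positive eigenvalues even though the matrix is not symmetric; of course this is only an example and doesn't settle the general question:

[code]
import numpy as np

m1, m2, k1, k2 = 2.0, 1.0, 3.0, 1.0    # arbitrary illustrative values

A = np.array([[(k1 + k2) / m1, -k2 / m1],
              [-k2 / m2,        k2 / m2]])   # [m]^{-1}[k] written out

print(np.linalg.eigvals(A))            # real and positive for these values
[/code]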
 

FAQ: Similar to an eigenvalue problem

What is an eigenvalue problem?

An eigenvalue problem is a mathematical problem that involves finding the values (known as eigenvalues) and corresponding vectors (known as eigenvectors) that satisfy a certain equation, usually involving a matrix or operator.

How is an eigenvalue problem solved?

Solving an eigenvalue problem involves finding the eigenvalues and eigenvectors of a given matrix or operator. This is usually done through a process called diagonalization, which involves transforming the matrix into a diagonal form using various mathematical techniques.
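
As a minimal illustration, assuming a NumPy environment and an arbitrary 2x2 matrix:

[code]
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # an arbitrary symmetric matrix

eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)                     # 3 and 1 (order may vary)
print(eigenvectors)                    # columns are the corresponding eigenvectors
[/code]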

What is the significance of eigenvalues and eigenvectors?

Eigenvalues and eigenvectors are important in many areas of mathematics and science. They are used to study and analyze systems such as differential equations, linear transformations, and quantum mechanics. They also have applications in fields such as engineering, physics, and computer science.

Can an eigenvalue problem have multiple solutions?

Yes. For an n-by-n matrix there are at most n distinct eigenvalues, but each eigenvalue comes with a whole subspace of eigenvectors (any nonzero scalar multiple of an eigenvector is again an eigenvector), so in that sense an eigenvalue problem has infinitely many solutions.

How are eigenvalue problems used in data analysis?

Eigenvalue problems are commonly used in data analysis and machine learning. They can be used to reduce the dimensionality of data, identify patterns and relationships, and extract important features from large datasets. They are also used in techniques such as principal component analysis and singular value decomposition.
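
For example, principal component analysis boils down to an eigenvalue problem on the covariance matrix of the data. A minimal sketch with random data, assuming a NumPy environment:

[code]
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # 100 samples, 3 features
Xc = X - X.mean(axis=0)                # centre the data

cov = np.cov(Xc, rowvar=False)         # 3x3 covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(cov)   # symmetric eigenproblem

# Principal components: eigenvectors sorted by decreasing eigenvalue
order = np.argsort(eigenvalues)[::-1]
projected = Xc @ eigenvectors[:, order[:2]]       # project onto top 2 components
[/code]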
