Matrix Equation -- clarification about solving a system

In summary, the conversation discusses solving the generalized eigenvalue problem ##Ka = \sigma^2 Ma## and whether the vector ##a## can be dropped from the formulation. It is clarified that ##a## is the eigenvector being sought: the eigenvalues ##\sigma^2## are the roots of the polynomial ##\det(K-\sigma^2M)=0##, and the corresponding eigenvectors solve ##(K-\sigma^2M)a=0##. However, numerically locating the roots in a stable way can be delicate.
  • #1
member 428835
Hi PF!

Just want to make sure I'm not crazy: if we're solving a system ##K a = \sigma^2 M a## where ##K## and ##M## are ##n\times n## matrices, ##a## an ##n\times 1## vector and ##\sigma## a scalar, then ##a## is unnecessary, and all we really need to solve is ##K=\sigma^2 M##, right?
 
  • #2
joshmccraney said:
Hi PF!

Just want to make sure I'm not crazy: if we're solving a system ##K a = \sigma^2 M a## where ##K## and ##M## are ##n\times n## matrices, ##a## an ##n\times 1## vector and ##\sigma## a scalar, then ##a## is unnecessary, and all we really need to solve is ##K=\sigma^2 M##, right?
Depends on the intention, i.e. what is meant. Solving ##K=\sigma^2 M## as a condition on ##\sigma## describes the entire kernel ##\mathcal{K}=\operatorname{ker} (K-\sigma^2M)##, while ##Ka=\sigma^2 Ma## singles out a particular vector ##a \in \mathcal{K}##.
 
  • #3
fresh_42 said:
Depends on the intention, i.e. what is meant. Solving ##K=\sigma^2 M## as a condition on ##\sigma## describes the entire kernel ##\mathcal{K}=\operatorname{ker} (K-\sigma^2M)##, while ##Ka=\sigma^2 Ma## singles out a particular vector ##a \in \mathcal{K}##.
What do you mean by "describes entire kernel"? I don't know ##a## and am trying to determine it from the equation ##Ka=\sigma^2Ma## but it seems impossible to determine a non-trivial ##a## from this. Your thoughts?
 
  • #4
Let's write ##\varphi = K - \sigma^2M## for short.

I don't know where the equation came from, but one possibility is this: a vector ##a## occurs at some point in a statement, and the condition ##\varphi(a)=0## is derived later. In that case ##a## was already there before the condition showed up.

If you want to solve ##\varphi(a)=0## for all possible ##a##, this means calculating a basis of the vector space ##\operatorname{ker}\varphi = \{ a \,\vert \, \varphi(a)=0 \}##; any particular ##a## can then be expressed in this basis.

One possibility is that ##\operatorname{ker}\varphi = \{0\}##, in which case ##a=0## follows, i.e. ##\varphi## is injective. So whether there are non-trivial solutions ##a## is equivalent to whether ##\operatorname{dim} \operatorname{ker} \varphi## is zero or positive.

You asked whether there is a difference. The answer: a given ##a## with ##a \in \operatorname{ker}\varphi## (a formula with ##a##) is formally different from ##\operatorname{ker}\varphi## itself (a formula without referencing ##a##), which describes all possible ##a##.
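The dichotomy above (trivial kernel ##\Leftrightarrow## injectivity, non-trivial kernel ##\Leftrightarrow## non-zero solutions) can be illustrated numerically. The two matrices below are hypothetical stand-ins for ##\varphi##, not from the thread:

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical stand-ins for phi = K - sigma^2 M
phi_injective = np.array([[1.0, 0.0],
                          [0.0, 2.0]])   # det != 0
phi_singular  = np.array([[1.0, 2.0],
                          [2.0, 4.0]])   # det == 0 (rows are dependent)

# Trivial kernel: only a = 0 solves phi(a) = 0, so phi is injective
assert null_space(phi_injective).shape[1] == 0

# Non-trivial kernel: null_space returns a basis of ker(phi)
basis = null_space(phi_singular)
assert basis.shape[1] == 1                  # one-dimensional kernel here
assert np.allclose(phi_singular @ basis, 0) # basis vectors really solve phi(a) = 0
```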
 
  • #5
joshmccraney said:
Hi PF!

Just want to make sure I'm not crazy: if we're solving a system ##K a = \sigma^2 M a## where ##K## and ##M## are ##n\times n## matrices, ##a## an ##n\times 1## vector and ##\sigma## a scalar, then ##a## is unnecessary, and all we really need to solve is ##K=\sigma^2 M##, right?
Is ##K a = \sigma^2 M a## true for all vectors a or just for one vector a?
 
  • #6
FactChecker said:
Is ##K a = \sigma^2 M a## true for all vectors a or just for one vector a?
It's an eigenvalue problem, only instead of ##M=I## it's something different. To solve it I'm setting ##\det(K-\sigma^2 M)=0##. Sound right?
 
  • #7
joshmccraney said:
It's an eigenvalue problem, only instead of ##M=I## it's something different. To solve it I'm setting ##\det(K-\sigma^2 M)=0##. Sound right?
That sounds wrong to me, but maybe I'm missing something. In the first post it sounds like you are asking: if ##K## and ##\sigma^2 M## agree on one vector ##a##, does it follow that ##K## and ##\sigma^2 M## are identical? But ##K## and ##\sigma^2 M## either are identical or they are not; there is no solving to do.
 
  • #8
Well, at least it can be used to decide whether a non-trivial ##a## exists or not. But you won't find it this way.
 
  • #9
Sorry, let me be clear: I'm trying to solve the eigenvalue problem ##K_{ij}a_j=\sigma^2M_{ij}a_j##. I know what ##M## and ##K## are, so I need to determine the eigenvalue ##\sigma^2## and the eigenvector ##a##. I thought solving this algebraic system is equivalent to $$K_{ij}a_j=\sigma^2M_{ij}a_j \implies\\
Ka=\sigma^2 M a\implies\\
(K-\sigma^2M)a=0.$$
Then we solve for ##\sigma^2## by solving ##\det(K-\sigma^2M)=0##. Isn't this standard and correct?
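(Editor's aside: this two-step recipe can be checked with a library routine. SciPy's generalized eigensolver accepts the pair ##(K, M)## directly; the matrices below are hypothetical 2×2 examples, not the ones from the thread.)

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical K and M (the actual matrices are never given in the thread)
K = np.array([[2.0, 1.0],
              [1.0, 3.0]])
M = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# Solve K a = lambda M a with lambda = sigma^2; eig(K, M) handles M != I directly
eigvals, eigvecs = eig(K, M)

# Each pair (lambda_i, a_i) satisfies the original equation
for lam, a in zip(eigvals, eigvecs.T):
    assert np.allclose(K @ a, lam * (M @ a))
```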
 
  • #10
##\operatorname{det}(K-\sigma^2M)\neq 0## means that only ##a=0## fulfills the equation ##Ka=\sigma^2 Ma\,.##
##\operatorname{det}(K-\sigma^2M) = 0## means that there is an ##a_\sigma \neq 0## with ##Ka_\sigma=\sigma^2 Ma_\sigma\,.##
It doesn't tell you how many linearly independent ##a_\sigma## there are, or which ones.
 
  • #11
fresh_42 said:
##\operatorname{det}(K-\sigma^2M)\neq 0## means that only ##a=0## fulfills the equation ##Ka=\sigma^2 Ma\,.##
##\operatorname{det}(K-\sigma^2M) = 0## means that there is an ##a_\sigma \neq 0## with ##Ka_\sigma=\sigma^2 Ma_\sigma\,.##
It doesn't tell you how many linearly independent ##a_\sigma## there are, or which ones.
Right, but that is how you find the eigenvalues, right? To find eigenvectors corresponding to the eigenvalues I'd then solve ##(K-\sigma^2 M)a=0##, right?
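(Editor's aside: the eigenvector step can be sketched numerically. Here ##\sigma^2 = 2## is, by construction, a root of ##\det(K-\sigma^2 M)=0## for a hypothetical pencil, and the eigenvector is recovered as the null space of ##K - \sigma^2 M##.)

```python
import numpy as np
from scipy.linalg import null_space

# Hypothetical K and M chosen so that det(K - 2M) = 0
K = np.array([[2.0, 0.0],
              [0.0, 3.0]])
M = np.eye(2)

sigma2 = 2.0                    # a root of det(K - sigma^2 M) = 0
a = null_space(K - sigma2 * M)  # columns form a basis of the eigenspace

assert a.shape[1] == 1          # a single independent eigenvector here
assert np.allclose(K @ a, sigma2 * (M @ a))
```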
 
  • #12
joshmccraney said:
Right, but that is how you find the eigenvalues, right? To find eigenvectors corresponding to the eigenvalues I'd then solve ##(K-\sigma^2 M)a=0##, right?
I'm not sure whether I would use this method to determine possible values for ##\sigma^2##, as it looks as though one could get an unpleasant polynomial in ##\sigma^2## whose roots might not be easy to find. Maybe it would be easier to treat ##\sigma^2## as a parameter in the linear equations ##(K-\sigma^2 M)a=0## and do everything in one step. But basically, yes.

Another point is that I have some difficulty calling them eigenvalues and eigenvectors. Of what? As I understand it, you are only interested in eigenvectors ##a_\sigma## to the eigenvalue ##0## of ##\varphi_\sigma = K-\sigma^2 M##, i.e. in possible pairs ##(\sigma,a_\sigma)## or ##(\sigma,\operatorname{ker}\varphi_\sigma)\,.##
 
  • #13
Yeah, the polynomial could be ugly, but I'm letting Mathematica crank through it. The eigenvalues/eigenvectors correspond to perturbations of a capillary surface. The equation ##Ka=\sigma^2Ma## comes from an integro-differential equation, though I've not defined those matrices here, for brevity.
 
  • #14
The difficulty with those equations is that numerically derived roots aren't stable: if we wobble ##\sigma## a bit, we get stuck with ##a=0##, because the values of ##\sigma## that admit only the trivial solution are dense, while the roots themselves are isolated points.
 
  • #15
Hmmmm so how would you do it? You mentioned parameterize ##\sigma^2##; can you elaborate?

Also, if I were solving ##Ka=\sigma^2Ma## couldn't I instead do ##M^{-1}Ka=\sigma^2 a##, which can be recast as ##(M^{-1}K-\sigma^2I)a=0##, which amounts to finding the eigenvectors/values of the matrix ##M^{-1}K##?
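(Editor's aside: the recast in this question can be checked numerically. When ##M## is invertible, the eigenvalues of ##M^{-1}K## coincide with the ##\sigma^2## of the generalized problem; the matrices below are hypothetical.)

```python
import numpy as np
from scipy.linalg import eig

# Hypothetical K and invertible M
K = np.array([[2.0, 1.0],
              [1.0, 3.0]])
M = np.array([[2.0, 0.0],
              [0.0, 1.0]])

# Standard eigenproblem on M^{-1} K ...
vals_std = np.linalg.eigvals(np.linalg.inv(M) @ K)

# ... and the generalized problem K a = sigma^2 M a give the same sigma^2
vals_gen, _ = eig(K, M)

assert np.allclose(np.sort(vals_std.real), np.sort(vals_gen.real))
```

In practice, passing the pair directly, as in `eig(K, M)`, is usually preferred over forming ##M^{-1}K## explicitly, since it avoids the conditioning problems of an explicit inverse.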
 
  • #16
joshmccraney said:
Hmmmm so how would you do it? You mentioned parameterize ##\sigma^2##; can you elaborate?

Also, if I were solving ##Ka=\sigma^2Ma## couldn't I instead do ##M^{-1}Ka=\sigma^2 a##, which can be recast as ##(M^{-1}K-\sigma^2I)a=0##, which amounts to finding the eigenvectors/values of the matrix ##M^{-1}K##?
I thought of something like Gauss elimination to solve ##(K - \sigma^2 M)a = 0## while keeping ##\sigma^2## as a variable in the process, if that is possible. Then you can see from the result for which values of ##\sigma^2## there are non-trivial solutions and for which there are none. But I'm not really good at numerical analysis, so maybe the same numbers that make the determinant ugly, as you said, will also make the description of the solutions by matrix operations ugly. And maybe it doesn't matter for your problem. I only wanted to say that ##\det (K-\sigma^2 M) = 0## is a knife-edge condition, which means a little disturbance and you're no longer on it; but then ##\det (K-\sigma^2 M) \neq 0## and ##a=0## is the only solution.
 
  • #17
Thanks for your insight! I appreciate it!
 

1. What is a matrix equation?

A matrix equation is a mathematical representation of a system of linear equations using matrices. It consists of a coefficient matrix multiplying a vector of unknowns, set equal to a constant vector, as in ##Ax=b##.

2. How do you solve a matrix equation?

To solve a matrix equation, you can use a variety of methods such as Gaussian elimination, Cramer's rule, or matrix inversion. These methods involve manipulating the matrices to isolate the variables and find their values.
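For instance, a small system can be handed to a library routine that performs Gaussian elimination (an LU factorization) internally; the system below is a made-up example:

```python
import numpy as np

# The system  2x + y = 5,  x + 3y = 10  in matrix form A x = b
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

x = np.linalg.solve(A, b)          # LU-based Gaussian elimination under the hood
assert np.allclose(x, [1.0, 3.0])  # x = 1, y = 3
```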

3. Can a matrix equation have multiple solutions?

Yes, a matrix equation can have multiple solutions, depending on the number of variables and the rank of the coefficient matrix. If the system is consistent and the rank is less than the number of variables, there are infinitely many solutions.

4. What is the difference between a consistent and inconsistent matrix equation?

A consistent matrix equation has at least one solution, while an inconsistent matrix equation has no solution. This can be determined by comparing the rank of the coefficient matrix with the rank of the augmented matrix: the system is consistent exactly when the two ranks are equal.
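One standard consistency test (the rank criterion, sketched here on a hypothetical singular system) compares the rank of the coefficient matrix with that of the augmented matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])              # rank 1 (rows are dependent)
b_consistent   = np.array([3.0, 6.0])   # lies in the column space of A
b_inconsistent = np.array([3.0, 7.0])   # does not

def is_consistent(A, b):
    # Consistent iff rank(A) == rank([A | b])
    return np.linalg.matrix_rank(A) == np.linalg.matrix_rank(np.column_stack([A, b]))

assert is_consistent(A, b_consistent)
assert not is_consistent(A, b_inconsistent)
```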

5. Can a matrix equation be solved for non-linear systems?

No, matrix equations can only be used to represent and solve linear systems. For non-linear systems, different methods such as substitution or graphing may be used to find solutions.
