Linear Algebraic System Solution for Known and Constrained Variables

Summary
To solve a linear algebraic system represented as K*X=F with known or constrained variables, one approach is to add equations to enlarge the matrices K and F, ensuring they accommodate the known values or constraints. For instance, expressing constraints like X5=X12 can be done by adding equations such as X5 - X12 = 0. However, traditional software routines may struggle with efficiently solving these modified systems, particularly if they rely on Gaussian elimination or pseudoinverses, which can introduce roundoff errors. A more effective method involves substituting known values directly into the matrices, allowing for elimination without adding extra equations. Exploring linear programming may also provide contemporary solutions for handling variable constraints.
Ronankeating
Dear All,

If I have a linear algebraic system composed of matrices in the form K*X=F, what column/row operations should I perform to solve it when some of the X variables are known (targeted values), or when the variables are constrained relative to each other?

For example, say that I want to solve the linear algebraic equations (K*X=F) when they are constrained (e.g. X5=X12=X125=X128), or when some variables have targeted values (e.g. X13=25; X23=5.4; X33=13, etc.).

Your help will be appreciated!

Regards,
 
That's a good question! Off the top of my head, the brute force approach would be to add more equations, enlarging the matrix K and the vector F so they have more rows. That would work if you have software that solves systems with more equations than unknowns. For example, x_5 = x_{12} is expressed by the equation x_5 - x_{12} = 0. Of course, the "target" x_{13} = 25 is itself an equation.
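To make the brute-force idea concrete, here is a minimal sketch of my own (not from the thread) using NumPy's least-squares solver on the enlarged system; the matrix K, the vector F and the particular constraint/target rows are invented purely for illustration:

import numpy as np

# Invented 4x4 example system K*X = F (the real K and F come from your own problem)
K = np.array([[4., 1., 0., 0.],
              [1., 4., 1., 0.],
              [0., 1., 4., 1.],
              [0., 0., 1., 4.]])
F = np.array([1., 2., 3., 4.])
n = len(F)

# Constraint x_1 = x_3 appended as the extra row  x_1 - x_3 = 0
row_eq = np.zeros(n); row_eq[1], row_eq[3] = 1.0, -1.0
# Target x_0 = 25 appended as the extra row  x_0 = 25
row_tgt = np.zeros(n); row_tgt[0] = 1.0

K_big = np.vstack([K, row_eq, row_tgt])        # now 6 equations, 4 unknowns
F_big = np.concatenate([F, [0.0, 25.0]])

# Least-squares solution of the enlarged, non-square system.  If the extra
# equations contradict the original ones, every equation (targets included)
# is only satisfied approximately -- the caveat raised in the next paragraph.
x, residuals, rank, sv = np.linalg.lstsq(K_big, F_big, rcond=None)
print(x)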

However, I don't know that the average software routine for solving linear equations can handle the problem efficiently. For example, if it uses Gaussian elimination, I don't know that it would perform the operations in an order that would make the answer for x_{13} exactly equal to 25. If the software computes the answer by finding the pseudoinverse of a non-square matrix then roundoff errors might be a problem.

Obviously the proper way to do things would be to use substitutions and rewrite your original system of equations so it has fewer variables. You seem to be asking for a matrix-oriented method of implementing the substitutions. Offhand, I don't know one. I'll have to think about it.
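For what it's worth, one matrix-oriented way to carry out such substitutions (a sketch of my own, not something given in the thread) is to build a transformation matrix T with X = T*Y, where Y holds only the independent unknowns; for a symmetric, stiffness-type K the reduced system (T^T K T) Y = T^T F can then be solved and the tied variables recovered exactly. The data below is invented for illustration:

import numpy as np

# Invented symmetric 4x4 example system K*X = F
K = np.array([[4., 1., 0., 0.],
              [1., 4., 1., 0.],
              [0., 1., 4., 1.],
              [0., 0., 1., 4.]])
F = np.array([1., 2., 3., 4.])
n = len(F)

# Constraint x_3 = x_1: keep x_0, x_1, x_2 as independent unknowns (columns of T)
# and let the row for x_3 copy the column of its "master" x_1.
T = np.zeros((n, 3))
T[0, 0] = 1.0
T[1, 1] = 1.0
T[2, 2] = 1.0
T[3, 1] = 1.0                      # x_3 is tied to x_1

K_red = T.T @ K @ T                # reduced 3x3 system in the independent unknowns
F_red = T.T @ F
Y = np.linalg.solve(K_red, F_red)

X = T @ Y                          # recover the full vector; x_3 == x_1 exactly
print(X)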
 
Your question about how to apply known values of X when solving the matrix equation K*X = F has been discussed recently in this thread:

https://www.physicsforums.com/showthread.php?t=691663

In short, using the known values of X, the matrices K and F can be modified without adding additional equations such that the original matrix equation can be solved using elimination or whatever solution method you choose. The attached pdf in the thread will illustrate the procedure.
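A minimal sketch of that procedure as I understand it (my own code, not the attached pdf; the numbers are invented for illustration): for each known value, move its column of K, multiplied by the value, over to the right-hand side and solve a reduced system for the remaining unknowns.

import numpy as np

# Invented 4x4 example system K*X = F
K = np.array([[4., 1., 0., 0.],
              [1., 4., 1., 0.],
              [0., 1., 4., 1.],
              [0., 0., 1., 4.]])
F = np.array([1., 2., 3., 4.])

known = {0: 25.0}                  # e.g. x_0 is prescribed to be 25
free = [i for i in range(len(F)) if i not in known]

# Move the columns of the known variables to the right-hand side...
F_red = F[free].copy()
for k, val in known.items():
    F_red -= K[free, k] * val

# ...and solve the reduced system for the remaining unknowns.
K_red = K[np.ix_(free, free)]
x = np.empty(len(F))
x[free] = np.linalg.solve(K_red, F_red)
for k, val in known.items():
    x[k] = val
print(x)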
 
SteamKing said:
Your question about how to apply known values of X when solving the matrix equation K*X = F has been discussed recently in this thread:

https://www.physicsforums.com/showthread.php?t=691663

Yes, SteamKing, you answered that question in the linked thread, and thank you for your help. But, if you remember, I was also looking for a solution to the constrained case, for which I couldn't find an entry point.

What actually inspires me to ask these questions again is a nagging belief that I'm following very old methods for this, instead of contemporary and robust ones.
 
For other types of constraint, one possible source of a solution would be to study a branch of mathematics called 'linear programming'.
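Just to illustrate the linear-programming angle (a sketch of my own, assuming SciPy is available, not a specific recommendation): the system K*X = F, a tie like x_1 = x_3 and a target like x_0 = 25 can all be passed as equality constraints with a zero objective, turning it into a pure feasibility problem. Note that K below is deliberately underdetermined (2 equations, 4 unknowns) so that the constraints supply the missing information; with a square, nonsingular K the extra constraints would generally make the problem infeasible.

import numpy as np
from scipy.optimize import linprog

# Invented, deliberately underdetermined system: 2 equations, 4 unknowns
K = np.array([[1., 2., 0., 1.],
              [0., 1., 1., 2.]])
F = np.array([4., 6.])
n = K.shape[1]

# Extra equality constraints: x_1 - x_3 = 0 (a tie) and x_0 = 25 (a target)
extra = np.zeros((2, n))
extra[0, 1], extra[0, 3] = 1.0, -1.0
extra[1, 0] = 1.0

A_eq = np.vstack([K, extra])
b_eq = np.concatenate([F, [0.0, 25.0]])

# Zero objective -> pure feasibility problem; variables left unbounded
res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * n)
print(res.status, res.x)           # expect x = [25, -7, 27, -7]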
 
