Linear Algebraic System Solution for Known and Constrained Variables

Ronankeating
Dear All,

If I have a linear algebraic system composed of matrices in the form K*X = F, what column/row operations should I perform if I want to solve it when some of the X variables are known (targeted values), or when the variables are constrained relative to each other?

For example, say that I want to solve the linear algebraic equations (K*X = F) when they are constrained (e.g. X5 = X12 = X125 = X128) or when they have targeted values such as X13 = 25, X23 = 5.4, X33 = 13, etc.

Your help will be appreciated!

Regards,
 
That's a good question! Off the top of my head, the brute-force approach would be to add more equations, enlarging the matrix K and the vector F so they have more rows. That would work if you have software that solves systems with more equations than unknowns. For example, ##x_5 = x_{12}## is expressed by the equation ##x_5 - x_{12} = 0##. Of course the "target" ##x_{13} = 25## is itself an equation.

However, I don't know that the average software routine for solving linear equations can handle the problem efficiently. For example, if it uses Gaussian elimination, I don't know that it would perform the operations in an order that would make the answer for ##x_{13}## exactly equal to 25. If the software computes the answer by finding the pseudoinverse of a non-square matrix, then roundoff errors might be a problem.
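A minimal sketch of this brute-force approach in NumPy (K, F, the tied indices, and the target value below are made-up placeholders, not from the thread). Because the augmented system is overdetermined, a least-squares routine returns a compromise solution, so the targets are only hit exactly if the extra equations are consistent with K*X = F:

```python
import numpy as np

# Placeholder 4x4 system K*X = F (made-up numbers)
K = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 3.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])
F = np.array([1.0, 2.0, 3.0, 4.0])
n = K.shape[1]

# Extra row for a relative constraint, e.g. x_1 = x_3  ->  x_1 - x_3 = 0 (0-based indices)
row_rel = np.zeros(n)
row_rel[1], row_rel[3] = 1.0, -1.0

# Extra row for a target value, e.g. x_2 = 25
row_tgt = np.zeros(n)
row_tgt[2] = 1.0

# Stack the extra equations under K: more equations than unknowns
K_aug = np.vstack([K, row_rel, row_tgt])
F_aug = np.concatenate([F, [0.0, 25.0]])

# Least-squares solution of the overdetermined system
x, residual, rank, sv = np.linalg.lstsq(K_aug, F_aug, rcond=None)
print(x)
```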

Obviously the proper way to do things would be to use substitutions and rewrite your original system of equations so it has fewer variables. You seem to be asking for a matrix-oriented method of implementing the substitutions. Offhand, I don't know one. I'll have to think about it.
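For what it's worth, one matrix-oriented way of implementing the substitutions (often called master-slave elimination in the finite-element literature) is to keep only the independent unknowns in a smaller vector y, write X = T*y + x0, where T maps the masters onto all variables and x0 carries the prescribed values, and then solve the projected system T^T*K*T*y = T^T*(F - K*x0). A sketch under that assumption, with placeholder matrix, indices, and values (the projection by T^T is the usual choice for symmetric, stiffness-type K):

```python
import numpy as np

# Placeholder symmetric system K*X = F
K = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 3.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])
F = np.array([1.0, 2.0, 3.0, 4.0])
n = 4

# Placeholder constraints: x_3 = x_1 (slave = master) and x_2 = 25 (prescribed)
# Independent unknowns ("masters") are x_0 and x_1, collected in y.
T = np.zeros((n, 2))
x0 = np.zeros(n)
T[0, 0] = 1.0   # x_0 is its own master
T[1, 1] = 1.0   # x_1 is its own master
T[3, 1] = 1.0   # x_3 = x_1
x0[2] = 25.0    # x_2 = 25 (prescribed value)

# Reduced system: (T^T K T) y = T^T (F - K x0)
K_red = T.T @ K @ T
F_red = T.T @ (F - K @ x0)
y = np.linalg.solve(K_red, F_red)

# Recover the full solution vector
X = T @ y + x0
print(X)
```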
 
Your question about how to apply known values of X when solving the matrix equation K*X = F has been discussed recently in this thread:

https://www.physicsforums.com/showthread.php?t=691663

In short, using the known values of X, the matrices K and F can be modified, without adding additional equations, such that the original matrix equation can be solved using elimination or whatever solution method you choose. The attached PDF in that thread illustrates the procedure.
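Since the PDF itself isn't reproduced here, the following is only a sketch of what is presumably the standard procedure: partition the unknowns into free and prescribed sets, move the prescribed values to the right-hand side, and solve the reduced system K_ff*X_f = F_f - K_fk*X_k. All numbers and indices are placeholders:

```python
import numpy as np

# Placeholder system K*X = F with one prescribed unknown
K = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 3.0, 1.0, 0.0],
              [0.0, 1.0, 3.0, 1.0],
              [0.0, 0.0, 1.0, 2.0]])
F = np.array([1.0, 2.0, 3.0, 4.0])

known = np.array([2])        # indices with prescribed (target) values
x_known = np.array([25.0])   # e.g. x_2 = 25
free = np.setdiff1d(np.arange(K.shape[0]), known)

# Move the known values to the right-hand side and solve the reduced system
K_ff = K[np.ix_(free, free)]
K_fk = K[np.ix_(free, known)]
x_free = np.linalg.solve(K_ff, F[free] - K_fk @ x_known)

# Reassemble the full solution vector
X = np.empty(K.shape[0])
X[free] = x_free
X[known] = x_known
print(X)
```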
 
SteamKing said:
Your question about how to apply known values of X when solving the matrix equation K*X = F has been discussed recently in this thread:

https://www.physicsforums.com/showthread.php?t=691663

Yes, SteamKing, you answered that question in the linked thread, and thank you for your help, but if you remember, I was also looking for the constrained type of solution, for which I couldn't find an entry point.

What actually inspires me to ask these questions again is a lingering belief, deep down, that I'm following very old methods for this instead of contemporary and robust ones.
 
For other types of constraint, one possible source of a solution would be to study a branch of mathematics called 'linear programming'.
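One way to see the connection: target values and relative constraints can be handed to a linear-programming solver as extra equality constraints alongside K*X = F, and with a zero objective the solver simply returns a feasible point (inequalities or an actual objective can then be layered on). Below is only a sketch using SciPy's linprog; the singular placeholder K (a free-free spring chain) and the force vector are made up so that prescribing one value makes the system determinate:

```python
import numpy as np
from scipy.optimize import linprog

# Placeholder singular K (free-free spring chain) and self-equilibrated F
K = np.array([[ 1.0, -1.0,  0.0,  0.0],
              [-1.0,  2.0, -1.0,  0.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [ 0.0,  0.0, -1.0,  1.0]])
F = np.array([1.0, 0.0, 0.0, -1.0])
n = 4

# Extra equality constraint: target value x_2 = 25
target = np.zeros(n)
target[2] = 1.0

A_eq = np.vstack([K, target])
b_eq = np.concatenate([F, [25.0]])

# Zero objective: we only ask for a point satisfying all equality constraints
res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
              bounds=[(None, None)] * n, method="highs")
print(res.status, res.x)   # expect x = [27, 26, 25, 24]
```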
 