Constructing a 2D Matrix for Solving Equations with Multiple Unknowns

Hypatio
I am trying to figure out how you would write out a matrix for the solution to an equation such as the following:

$$\alpha x^{rr}-(2\alpha+2\gamma) x^r+(\alpha+\gamma) x^i+(\beta+\gamma) x^{br}-\beta z^r -(\beta+2\eta) z^b+(\beta+\eta) z^i +\eta z^{bb}=0$$

where ##\alpha##, ##\beta##, ##\gamma##, and ##\eta## are coefficients, ##x## and ##z## are unknowns, and the superscripts simply indicate the relative locations of the nodes.

I do not understand how such an equation can possibly have an invertible matrix because for each equation I need to know the x and z values. If I only needed solutions to x or z values, the matrix might look something like this, depending on the values of alpha, beta, etc.:

http://www.eecs.berkeley.edu/~demmel/cs267/lecture17/DiscretePoisson.gif

but I have no idea how to construct a matrix when you need to know two values for each node location.
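For the single-unknown case, the linked discrete Poisson matrix can be assembled directly. A minimal sketch, assuming a standard 5-point Laplacian stencil on an ##n \times n## grid with row-major node numbering ##k = i\,n + j## (the function name and numbering convention are illustrative, not from the thread):

```python
import numpy as np

def discrete_poisson(n):
    """Assemble the n^2 x n^2 matrix of the 5-point Laplacian
    on an n x n grid, nodes numbered row-major (k = i*n + j)."""
    N = n * n
    A = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            k = i * n + j
            A[k, k] = 4.0                  # center node
            if i > 0:     A[k, k - n] = -1.0  # neighbor above
            if i < n - 1: A[k, k + n] = -1.0  # neighbor below
            if j > 0:     A[k, k - 1] = -1.0  # neighbor left
            if j < n - 1: A[k, k + 1] = -1.0  # neighbor right
    return A

A = discrete_poisson(3)
print(A.astype(int))
```

Each row of the matrix encodes one node's equation, with off-diagonal entries tying it to its neighbors, which is exactly the banded structure in the linked figure.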
 
I don't understand your notation. Why do x and z sometimes have two superscripts and sometimes only one?
 
AlephZero said:
I don't understand your notation. Why do x and z sometimes have two superscripts and sometimes only one?
Sorry, this is a notation which is easier for me to understand for index references: b = (i, j-1), bb = (i, j-2), r = (i+1, j), br = (i+1, j-1), i = (i, j), rr = (i+2, j). Think of them as bottom, bottom-bottom, right, bottom-right, etc.
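This shorthand can be mapped to flat matrix indices explicitly. A sketch, assuming row-major numbering ##k = i\,n + j## on an ##n \times n## grid (the `OFFSETS` table just restates the definitions above; the function name is made up for illustration):

```python
# Offsets for the superscript shorthand: name -> (di, dj)
OFFSETS = {
    "i":  (0, 0),    # i, j      (the node itself)
    "r":  (1, 0),    # i+1, j
    "rr": (2, 0),    # i+2, j
    "b":  (0, -1),   # i, j-1
    "bb": (0, -2),   # i, j-2
    "br": (1, -1),   # i+1, j-1
}

def flat_index(i, j, name, n):
    """Flat row-major index of the neighbor `name` of node (i, j)."""
    di, dj = OFFSETS[name]
    return (i + di) * n + (j + dj)

print(flat_index(2, 2, "br", 5))  # node (3, 1) -> 3*5 + 1 = 16
```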
 
I would write it using 2 matrices A and B, in the form
A x + B z = 0
where x is the vector (x11 ... x1n, x21 ... x2n, ... xn1 ... xnn)
and similarly for z.

Clearly x = z = 0 is always a solution. Whether there are non-trivial solutions will depend on the structure of A and B and maybe on the values of the constants alpha, beta, etc.
 
AlephZero said:
I would write it using 2 matrices A and B, in the form
A x + B z = 0
where x is the vector (x11 ... x1n, x21 ... x2n, ... xn1 ... xnn)
and similarly for z.

Clearly x = z = 0 is always a solution. Whether there are non-trivial solutions will depend on the structure of A and B and maybe on the values of the constants alpha, beta, etc.

If I could solve an equation of the form Ax + Bz = 0 that would be fantastic, but I don't see how to approach such a solution. Is this a special type of linear system that has been studied, or is it not much different from an equation of the form Ax = B? In particular, I do not see how it is consistent to use two separate matrices. How are the matrices associated such that I can, ideally, perform Gaussian elimination over them to arrive at the solution? I apologize if this is actually trivial.

Thank you very much for the help. I will be ecstatic if I can figure out how to solve such equations. I can then move on to solving Stokes flow and problems of linear elasticity :D
 
If z and x are both unknowns, so far you have n^2 equations in 2n^2 variables. You need another n^2 equations from somewhere.
 