Solutions to Ax=b lie in row space?

  1. Mar 27, 2013 #1
    Hi,

    My textbooks say that when a solution, x, to Ax = b is found, it can be written as a particular solution, x_0, satisfying A*x_0 = b, plus a combination of solutions from the null space, n_i, satisfying A*n_i = 0.

    However, when playing about with this I seem to have come across a problem.

    for the system:

    | 1  3  3  2 | |u|   |1|
    | 2  6  9  5 | |v| = |5|
    |-1 -3  3  0 | |w|   |5|
                   |y|

    I get an LU factorisation of A to be:

    | 1  0  0 | | 1  3  3  2 |
    | 2  1  0 | | 0  0  3  1 |
    |-1  2  1 | | 0  0  0  0 |

    when solved for x this gives x_0 as:

    |-2 |
    | 0 |
    | 1 |
    | 0 |

    and n_1 and n_2 as:

    | -3 |   |  -1  |
    |  1 |   |   0  |
    |  0 |   | -1/3 |
    |  0 |   |   1  |
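
    These numbers can be checked quickly in MATLAB (a sketch assuming the system above; the variable names are my own):

    A  = [ 1  3  3  2;
           2  6  9  5;
          -1 -3  3  0];
    b  = [1; 5; 5];
    x0 = [-2; 0; 1; 0];          % particular solution from elimination
    N  = [-3 -1;                 % columns are n_1 and n_2
           1  0;
           0 -1/3;
           0  1];

    A*x0 - b                     % zero vector, so A*x0 = b
    A*N                          % both columns zero, so A*n_i = 0
    A*(x0 + N*[2; -3]) - b       % adding any combination of the n_i still solves Ax = b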

    Using the rows of U as a basis for the row space of A, I cannot write the particular solution, x_0, as a combination of them, so it does not lie in the row space of A as I thought it should.

    Have I done something wrong or is my understanding incorrect (is the "row space component of x" my books talk about not the same as x_0...)?

    Many thanks in advance ;)
     
  3. Mar 27, 2013 #2
    Your vector:

    |-2 |
    | 0 |
    | 1 |
    | 0 |

    is not just composed of a row space part. It is the sum of a row space part:

    [ -.211009174311 ]
    [ -.633027522933 ]
    [ .963302752291 ]
    [ .110091743118 ]

    and a null space part:

    [ -1.78899082569 ]
    [ .633027522933 ]
    [ .036697247709 ]
    [ -.110091743118 ]
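
    One way to compute this split in MATLAB (a sketch assuming the A, b and x0 defined earlier; the exact method used for the numbers above isn't stated) is to project x0 onto the row space:

    R      = orth(A');           % orthonormal basis for the row space of A
    x_row  = R*(R'*x0);          % projection of x0 onto the row space
    x_null = x0 - x_row;         % remainder, which lies in the null space

    % x_row reproduces the decimals above; pinv(A)*b gives the same vector,
    % since the minimum-norm solution of A*x = b lies entirely in the row space.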
     
  4. Mar 28, 2013 #3
    Hi,

    Thanks very much, I thought it was something like that.
    So am I correct in saying that the row space part lies only in the row space, and that the null space part of x_0 is just a linear combination of the null space basis vectors, one that gives zero when multiplied by A (i.e. is redundant) but produces the "nice" numbers in our x_0?

    Also, when solving, why do we get the "nice" numbers and not the row space part only?

    How do I find these two separate parts? Do I take a dot product with something, or solve more simultaneous equations?

    Thanks for your help!
    The books don't seem too clear on this matter!
     
  5. Mar 28, 2013 #4
    I'm going to lunch right now and don't have time for much of a post, but I'll post more when I get back.

    In the meantime, tell me: what software are you using to calculate your LU decomposition?

    I'm an engineer myself (EE) and I've been using linear algebra more in recent years. I'm curious: your handle suggests you may be an engineer. Is that the case? Why the interest in linear algebra?
     
  6. Mar 28, 2013 #5
    Hi, thanks!

    I'm using MATLAB (but did the above example by hand, without partial pivoting), and yes, I'm an engineering student (general, currently).
    I'm studying linear algebra as part of a maths course, applicable to the general solving of simultaneous equations, computer programming, etc. The main application presented so far is fluids.
    What is the electrical/electronic application of linear algebra?
     
  7. Mar 28, 2013 #6
    Yes, that's pretty much the case.

    Solving a matrix equation Ax = b is, in effect, asking what linear combination of the columns of A gives b. One can solve the problem from that point of view with a solver, as found in many modern mathematical software packages. For example, here is the solution given by Mathematica. We're asking: what values of a, b, c and d make the linear combination of the columns of A equal to b?

    [Attached image: Mathematica solution giving c and d in terms of the freely chosen a and b]

    The solution is saying that we may pick a and b arbitrarily and then c and d will be determined as shown in the solution. Notice that we may pick integers a and b such that c and d will also be integers. There are your "nice" solutions. And, of course, we can see why there are an infinite number of solutions.
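
    In MATLAB terms (a sketch of my own, derived from the system in the first post and using the original unknowns u, v, w, y in place of a, b, c, d; it assumes the A defined earlier):

    u = -2;  v = 0;              % pick the first two values freely
    w = (u + 3*v + 5)/3;         % w is then forced by the equations
    y = -u - 3*v - 2;            % and so is y
    A*[u; v; w; y]               % gives [1; 5; 5]; here w = 1, y = 0, i.e. the x_0 above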

    Elimination methods, such as Gaussian elimination or the Gauss-Jordan method, don't naturally give a solution that excludes the null space component. The solution looks nice because you are starting with integer coefficients and doing rational arithmetic to obtain it.
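
    For example (a sketch; rref is MATLAB's reduced row echelon form routine):

    rref([A b])                  % row reduce the augmented matrix [A | b]
    % gives, up to rounding,
    %   | 1  3  0   1  | -2 |
    %   | 0  0  1  1/3 |  1 |
    %   | 0  0  0   0  |  0 |
    % Setting the free variables v and y to zero reads off x_0 = (-2, 0, 1, 0).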

    Answering this question will, I suspect, go beyond the scope of the course you're taking.

    My favorite way to do this is to use the singular value decomposition, which you will have to find out about elsewhere.

    Your A matrix is not square; you have fewer equations than variables, so right away there is the possibility of an infinite number of solutions. Yours is a 3x4 A matrix, so the rank of the matrix (you may have to look up the word "rank") can't be greater than 3, but in fact the rank is only 2. This means that only 2 vectors are needed to span the row space and 2 more will span the null space.

    Here is the singular value decomposition obtained with the help of Mathematica. The two (orthogonal) column vectors in green span the row space (providing a row space basis), and the two (orthogonal) red column vectors span the null space (providing a null space basis). You can see the result of the premultiplication by your A matrix; two non-zero columns (from the row space basis) and two zero columns (from the null space basis).

    Given these basis vectors, you can decompose any solution into its component parts.

    [Attached image: Mathematica SVD output, with the two row space basis columns in green, the two null space basis columns in red, and their products with A]
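
    A rough MATLAB equivalent of that step (a sketch; svd and rank are built-in functions, and the basis-variable names are my own):

    [~, ~, V] = svd(A);
    r = rank(A);                 % rank is 2 here
    rowBasis  = V(:, 1:r);       % orthonormal basis for the row space
    nullBasis = V(:, r+1:end);   % orthonormal basis for the null space

    A*rowBasis                   % two non-zero columns
    A*nullBasis                  % two (numerically) zero columns

    % Any solution x can then be decomposed as
    % x_row = rowBasis*(rowBasis'*x);   x_null = x - x_row;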

    As I said, this is probably beyond the scope of your coursework, but if it piques your interest, good for you!

    Well, that answers a question I had. As far as I know, the LU decomposition is only applicable to square matrices. See if you get an error when you ask MATLAB to perform the LU decomposition on your A matrix; I use Mathematica, and it gives an error if I try to apply the LU decomposition to your A matrix.

    In EE linear algebra is used for network analysis. See this thread:

    https://www.physicsforums.com/showthread.php?t=674884

    for an example.
     


  8. Mar 29, 2013 #7
    Excellent, Thank you very much!

    MATLAB does perform LU on this matrix, but uses partial pivoting to give PA = LU.
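
    For reference, a quick MATLAB check of that (a sketch assuming the A from the first post):

    [L, U, P] = lu(A);           % lu accepts rectangular matrices
    norm(P*A - L*U)              % essentially zero, confirming P*A = L*U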

    I just checked, and singular value decomposition is covered at the end of this term's course. ;)

    Thanks once again for the clear detailed answers.
     
    Last edited: Mar 29, 2013