# "Forgotten" linear algebra

## Main Question or Discussion Point

Hi all,

I learned this stuff years ago and wasn't brilliant at it even then, so I think a refresher is in order.

Suppose I have $n$ distinct homogeneous equations in $n$ unknowns. I want to find the solution, so I write down the matrix of coefficients multiplying my vector of variables as follows:

$A \mathbf{x} =\mathbf{0}$.

Now, we don't want $\det A \neq 0$ to happen, because otherwise the columns of $A$ are linearly independent, so the only solution to $A \mathbf{x} = \mathbf{C}_1 x_1 + \cdots + \mathbf{C}_n x_n = \mathbf{0}$ is $\mathbf{x} = \mathbf{0}$.

Now how do we actually solve this for $\mathbf{x}$? Do we just do Gaussian elimination followed by back-substitution, and is the solution unique in this case?

Now suppose the system is inhomogeneous

$A\mathbf{x} = \mathbf{b}$ where $\mathbf{b}\neq \mathbf{0}$. In this case we actually want $\det A \neq 0$, because then we can instantly write down the unique solution

$\mathbf{x} = A^{-1}\mathbf{b}$.

Have I gotten the solution to square systems about right? If yes, I'll try to figure out the non-square case.

> Now how do we actually solve this for $\mathbf{x}$? Do we just do Gaussian elimination followed by back-substitution, and is the solution unique in this case?
The null space of $A$ is always a subspace and always contains the zero vector. If it is nontrivial (i.e. contains more than just $\mathbf{0}$), then the solution set is an entire subspace of the space you're working with, not just a single vector. The trivial subspace $\{\mathbf{0}\}$ is the only subspace consisting of a single vector; that is the degenerate case in which the homogeneous system has a unique solution.
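To make this concrete, here is a minimal numpy/scipy sketch (the matrix is an assumed example, not from the thread): a singular square matrix has a nontrivial null space, and every vector in that subspace solves $A\mathbf{x} = \mathbf{0}$.

```python
import numpy as np
from scipy.linalg import null_space

# A singular 3x3 matrix: the third row is the sum of the first two,
# so det(A) = 0 and the null space is nontrivial.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])

ns = null_space(A)          # orthonormal basis for {x : A x = 0}
print(ns.shape)             # one basis vector -> the null space is a line

# Every scalar multiple of a basis vector is also a solution,
# which is why the solution set is a subspace, not a single vector.
x = 2.5 * ns[:, 0]
print(np.allclose(A @ x, 0))
```

Gaussian elimination would find the same subspace by hand: row-reduce, identify the free variables, and parametrize the solutions by them.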

> Now suppose the system is inhomogeneous
>
> $A\mathbf{x} = \mathbf{b}$ where $\mathbf{b}\neq \mathbf{0}$. In this case we actually want $\det A \neq 0$, because then we can instantly write down the unique solution
>
> $\mathbf{x} = A^{-1}\mathbf{b}$.
>
> Have I gotten the solution to square systems about right? If yes, I'll try to figure out the non-square case.
Yep, that's right.

I suggest that after you write down your matrix, you just do a row reduction. For the homogeneous case you don't need to write a column of zeros at the end; in general, the last column of the augmented matrix depends on the numbers after the "=" sign.
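The row-reduction step can be sketched with sympy's `Matrix.rref`, which returns the reduced matrix together with the pivot columns (the system below is an assumed example):

```python
from sympy import Matrix

# Augmented matrix for the inhomogeneous system
#    x + 2y = 5
#   3x + 4y = 11
# The last column holds the numbers after the "=" sign.
aug = Matrix([[1, 2, 5],
              [3, 4, 11]])

rref_matrix, pivots = aug.rref()
print(rref_matrix)   # after reduction, the last column is the solution
print(pivots)        # both variables are pivots -> unique solution
```

Here the reduced matrix reads off $x = 1$, $y = 2$ directly; for a homogeneous system the last column would stay all zeros through every row operation, which is why it can be omitted.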

You can also solve systems of equations of this form with the wedge product (wedging the column vectors). I'd put an example of this in the wiki Geometric Algebra page a while back when I started learning the subject:

http://en.wikipedia.org/wiki/Geometric_algebra#Cramer.27s_rule.2C_determinants.2C_and_matrix_inversion_can_be_naturally_expressed_in_terms_of_the_wedge_product.

Looking at the example now, I don't think it's the greatest. It should probably live on a wedge product page instead of the GA one, but that was the context in which I first learned about it (I chose to use the mostly empty wiki page to dump my initial notes on the subject as I started learning it ;)
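For readers who haven't seen this trick: in two dimensions the wedge product of column vectors reduces to the scalar $\mathbf{a}\wedge\mathbf{b} = a_x b_y - a_y b_x$, and wedging both sides of $x_1\mathbf{C}_1 + x_2\mathbf{C}_2 = \mathbf{b}$ with one column kills that column's term, giving Cramer's rule. A hypothetical sketch (not the wiki example itself):

```python
import numpy as np

def wedge2(a, b):
    """2D wedge product a ^ b: the signed area a[0]*b[1] - a[1]*b[0]."""
    return a[0] * b[1] - a[1] * b[0]

# Solve A x = b by wedging the columns: since C2 ^ C2 = 0,
# wedging x1*C1 + x2*C2 = b with C2 isolates x1, and with C1 isolates x2:
#   x1 = (b ^ C2) / (C1 ^ C2),   x2 = (C1 ^ b) / (C1 ^ C2)
C1 = np.array([2.0, 1.0])       # first column of A
C2 = np.array([1.0, 3.0])       # second column of A
b = np.array([3.0, 5.0])

denom = wedge2(C1, C2)          # equals det A; must be nonzero
x1 = wedge2(b, C2) / denom
x2 = wedge2(C1, b) / denom
print(x1, x2)
```

The denominators are the same determinant that appears in Cramer's rule, so this is the wedge-product restatement of it rather than a different algorithm.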