What is the general algorithm for computing the null space of a 3x3 matrix?

junglebeast
There are two issues I want to talk about in this post:
(1) General algorithm for gauss-jordan elimination computation of null space
(2) Closed form solution to 3x3 null space

Following the example here,

https://en.wikipedia.org/wiki/Kernel_(linear_algebra)
I thought a general algorithm to compute the null space would be to

1) augment with 0 vector on the right
2) compute gauss-jordan elimination
3) take the second-to-last column and fill in the extra elements with 1's to get the null space (a quick sympy sketch of steps 1 and 2 follows this list)
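To be concrete, here is a quick sketch of steps 1 and 2 in sympy; rref() is just standing in for my own Gauss-Jordan routine, and the matrix is the 3x3 from my second example below:

```python
from sympy import Matrix, zeros

A = Matrix([[1, 0, 1],
            [2, 1, 3],
            [1, 1, 2]])

# step 1: augment with the zero vector on the right
aug = A.row_join(zeros(3, 1))

# step 2: Gauss-Jordan elimination; rref() also reports the pivot columns
R, pivots = aug.rref()
print(R)       # Matrix([[1, 0, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
print(pivots)  # (0, 1)

# cross-check against sympy's built-in null-space routine
print(A.nullspace()[0].T)   # Matrix([[-1, -1, 1]])
```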

This works for the example provided on that Wikipedia page. However, for the next example, step 3 needs to be changed...

1, 0, 1
2, 1, 3
1, 1, 2

which has a null space spanned by (1, 1, -1).

Using Gauss-Jordan elimination, the closest I can get is

1, 0.5, 1.5, 0
0, 1, 1, 0
0, 0, 0, 0

x1 = -0.5 x2 - 1.5 x3
x2 = -x3
so x1 = -x3 as well, and taking x3 = -1 gives [1, 1, -1]

This gives me the right null space, but step 3 of my method above clearly wasn't right... how can I generalize step 3 into a straightforward algorithm?
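To show what I mean, here is the free-variable construction I did by hand above, written out as a rough sketch in code (the function name is mine). It assumes the matrix has been reduced all the way to RREF, i.e. [1, 0, 1; 0, 1, 1; 0, 0, 0] rather than the half-reduced form above; I'm guessing the general step 3 is something like this:

```python
# Sketch of a general "step 3": given the RREF of A and its pivot columns,
# build one null-space basis vector per free column by setting that free
# variable to 1 and solving for the pivot variables.

def nullspace_from_rref(R, pivot_cols):
    """R: reduced row-echelon form of A, as a list of rows (zero rows optional).
    pivot_cols: column index of the pivot in each nonzero row, in order."""
    n = len(R[0])                                   # number of unknowns
    free_cols = [j for j in range(n) if j not in pivot_cols]
    basis = []
    for f in free_cols:
        v = [0.0] * n
        v[f] = 1.0                                  # this free variable = 1
        for row, p in zip(R, pivot_cols):
            v[p] = -row[f]                          # solve for each pivot variable
        basis.append(v)
    return basis

# RREF of my example matrix [[1, 0, 1], [2, 1, 3], [1, 1, 2]]:
R = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0],
     [0.0, 0.0, 0.0]]
print(nullspace_from_rref(R, pivot_cols=[0, 1]))    # [[-1.0, -1.0, 1.0]]
```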

Now onto my second issue.

I found a method to compute, in closed form, the eigenvector corresponding to an eigenvalue e of a symmetric 3x3 matrix (entries a0...a8, read row by row). It is simply:

v1 = a1*a5 - a2*(a4 - e)
v2 = a1*a2 - a5*(a0 - e)
v3 = (a0 - e)*(a4 - e) - a1*a1

Setting e = 0, this is essentially a shortcut to get the null space. However, it doesn't seem to work for non-symmetric matrices. I feel like there should be a similar method that works for non-symmetric 3x3's, which could be used to avoid the SVD method in this case.
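To pin down the indexing, here is a rough numpy sketch of the formula above, assuming a0...a8 are the entries of A read row by row (the function names are mine, not from wherever the formula originated). Written out this way it is just the cross product of the first two rows of A - e*I, which matches the formula only when A is symmetric (so that a3 = a1 and a7 = a5):

```python
import numpy as np

def eigvec_closed_form(A, e):
    """The closed-form 3x3 eigenvector formula quoted above.
    A: symmetric 3x3 array with entries a0..a8 in row-major order.
    e: an eigenvalue of A (e = 0 targets the null space)."""
    a0, a1, a2 = A[0]
    a4, a5 = A[1, 1], A[1, 2]
    v1 = a1 * a5 - a2 * (a4 - e)
    v2 = a1 * a2 - a5 * (a0 - e)
    v3 = (a0 - e) * (a4 - e) - a1 * a1
    return np.array([v1, v2, v3])

def eigvec_cross(A, e):
    """Equivalent form for symmetric A: cross product of the first two rows
    of A - e*I (both rows are orthogonal to the eigenvector)."""
    B = A - e * np.eye(3)
    return np.cross(B[0], B[1])

# On the (non-symmetric) rank-2 matrix from the first part of this post,
# the cross-product version with e = 0 happens to return a null-space vector:
A = np.array([[1.0, 0.0, 1.0],
              [2.0, 1.0, 3.0],
              [1.0, 1.0, 2.0]])
print(eigvec_cross(A, 0.0))   # [-1. -1.  1.]
```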
 
Algorithms for solving linear systems of equations are well known and are part of basic computer science courses. It is also well known that particular examples may admit faster shortcuts, but we cannot reason from examples: a trick that works for one example does not give the generalization that would be needed to call it an algorithm.

If you are interested in the subject, you might want to read about the improvements on the matrix multiplication exponent: https://en.wikipedia.org/wiki/Strassen_algorithm#Asymptotic_complexity
 