What Are Rank and Bases in Linear Algebra?

dekoi
I'm finding it difficult to grasp the concept of rank (more specifically, of bases).

First of all, what exactly is a basis? The textbook definition doesn't suffice.

What is "column space" (colA) and "row space" (rowA)?

If I am given a matrix A and told to find bases for rowA and colA, how do I go about doing this? At first I thought of doing the following: convert A --> U (row echelon form); the non-zero rows of U form a basis for rowA, and the pivot columns of U correspond to the columns of A that form a basis for colA. Am I right? And if I am, why? I really don't understand the concept.

Thank you.
 
A basis is a linearly independent spanning set. Which of "linearly independent" and "spanning set" doesn't make sense? Note that the rank is always the number of elements in a basis, since the basis vectors are linearly independent.

The column space of an m×n matrix is the subspace of R^m spanned by its columns, viewed as vectors in R^m. Similarly, the row space is the subspace of R^n spanned by its rows.

Row (or column) echelon form works nicely for finding the span: you're not altering the span by combining rows, and row echelon form just happens to be a really nice combination where it is clear what the span is, because of all the zeroes lying around.

I mean, if I asked you what the span of (1,3,4), (2,3,1) and (1,2,3) is, you couldn't say off the top of your head, but put them as rows of a matrix and do row ops and it becomes clearer (I won't do it here because it is a pain to typeset). Suppose the rows came out as (1,0,0), (0,1,0) and (0,0,1); then clearly these vectors span all of R^3, don't they? EMPHASIS: I do not know that this happens, I picked them at random. They might come out as (1,0,0), (0,0,1) and (0,0,0), but then it's easy to see they span a 2-dimensional subspace.
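To avoid typesetting the row ops by hand, here is a quick sketch of the recipe using SymPy (assumed available): row-reduce, read the non-zero rows as a basis for the row space, and use the pivot columns to pick a basis for the column space from the original matrix.

```python
from sympy import Matrix

# The three vectors from the example above, as rows of a matrix.
A = Matrix([[1, 3, 4],
            [2, 3, 1],
            [1, 2, 3]])

# Reduced row echelon form: the nonzero rows form a basis for the row
# space, and the pivot columns tell us which columns of the ORIGINAL
# matrix form a basis for the column space.
R, pivots = A.rref()
row_basis = [R.row(i) for i in range(A.rows) if any(R.row(i))]
col_basis = [A.col(j) for j in pivots]

print(pivots)     # (0, 1, 2): all three columns are pivot columns
print(row_basis)  # the rows of the identity, so the span is all of R^3
```

For these particular vectors the determinant happens to be non-zero, so the rref is the identity and the span is all of R^3; with dependent rows you would instead see one or more zero rows.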
 
A basis is basically a minimal spanning set (which automatically has to be linearly independent). Any set that spans the space has cardinality equal to or greater than that of a basis. Of course, there are finitely generated spaces (spaces that have a basis of finite cardinality, e.g. R^n for finite n) and infinitely generated spaces (the other way round, e.g. R^infinity).

A basis of a space need not be unique.
This is easy to see: take a basis and add its first element to each of the others. Since the original vectors are linearly independent, this new set is also linearly independent, has the same cardinality as the basis (you're not adding new elements), and still spans the space — so it is another basis. And there are lots of other easy-to-grasp results along these lines to be proved.
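As a small sanity check of the construction above (a sketch using SymPy, assumed available): start from the standard basis of R^3, add the first vector to each of the others, and verify the new set still has rank 3, i.e. is still linearly independent and hence still a basis.

```python
from sympy import Matrix

# The standard basis of R^3.
e1, e2, e3 = Matrix([1, 0, 0]), Matrix([0, 1, 0]), Matrix([0, 0, 1])

# Add the first vector to each of the others, as described above.
new_set = [e1, e1 + e2, e1 + e3]

# Stack them as columns; rank 3 means the three vectors are linearly
# independent, hence still a basis of R^3 — a different one.
M = Matrix.hstack(*new_set)
print(M.rank())  # 3
```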

The rank of a matrix is the number of linearly independent rows (equivalently, columns). Also note that the zero matrix has rank 0. The rank of an m×n matrix equals the size of the largest square submatrix with a determinant different from zero.

A full-rank matrix is a matrix with rank equal to min(n(rows), n(cols)); only a full-rank matrix is non-singular. If a matrix is not of full rank, it is easy to see that it can be transformed to a matrix with at least one row/col of zeroes, which obviously makes it singular. (Of course, singularity and related notions are defined only where applicable, i.e. for square matrices.)
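The rank/singularity connection can be illustrated numerically (a sketch using NumPy, assumed available; the matrices here are made up for illustration): a full-rank square matrix has non-zero determinant, while a matrix with a dependent row loses rank and becomes singular.

```python
import numpy as np

# Full-rank square matrix: rank == min(rows, cols) == 3, so it is
# invertible (non-singular).
A = np.array([[1., 3., 4.],
              [2., 3., 1.],
              [1., 2., 3.]])
print(np.linalg.matrix_rank(A))         # 3
print(np.isclose(np.linalg.det(A), 0))  # False: nonsingular

# Rank-deficient matrix: the third row is the sum of the first two, so
# the rank drops to 2 and the determinant is (numerically) zero.
B = np.array([[1., 0., 2.],
              [0., 1., 1.],
              [1., 1., 3.]])
print(np.linalg.matrix_rank(B))         # 2
print(np.isclose(np.linalg.det(B), 0))  # True: singular
```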
 