General Questions on Polynomials, Vectors, and Matrices

mrxtothaz
I'm doing a bit of review and have a few brief questions.

1) Say you have 3 polynomials that generate a 3-dimensional space. Let this basis be ##\{x^2 + 3x - 2,\; 2x^2 + 5x - 3,\; -x^2 - 4x + 4\}##. To prove that these vectors span the space, I want to show that any vector in this space can be expressed in terms of these basis elements. I did this at the beginning of the year, before learning matrices; I have since attempted to do it with matrices and for some reason I can't figure it out.

My approach has basically been to put the coefficients of the degree-2 terms in a row and solve for a, and then to do the same thing for the terms of lower degree (in terms of b and c). What am I doing wrong?

2) To find the equation of a line passing through two points, say ##(3, -2, 4)## and ##(-5, 7, 1)##, why is it that this is done by subtracting them to get a direction vector? The equation ends up being ##v = (3, -2, 4) + t\,d##, where I presume ##d = (-5, 7, 1) - (3, -2, 4) = (-8, 9, -3)##. But I don't understand the geometric reason behind this.

3) Is there such a thing as an invertible ##m \times n## transformation, or does this concept apply only to square matrices? In my reading, I have come across the definition that an isomorphism between vector spaces is an invertible map between them, i.e. a pair of maps ##S## and ##T## such that both ##ST = I## and ##TS = I##. Is this only the case for square matrices?

4) Before being introduced to linear transformations represented by matrices, I used to think of vectors as row vectors. Then I learned it was the other way around. Something I have recently read has brought a new question to mind. Basically, what I read was that the correspondence of a vector ##x## with ##[x]_B## (the coordinate vector of ##x## relative to a basis ##B##) provides a linear transformation from ##V## to ##F^n##. So am I correct to interpret this as saying that you can consider the elements of some vector space as row vectors, but that with respect to some basis (taking coordinates with respect to a basis is a transformation), they end up becoming column vectors?

5) By using row reduction, you can find the rank and nullity of a given transformation. I have read that you can determine the rank by looking either at the columns or at the rows (the maximum number of linearly independent rows or columns is the same). Do both of these techniques work with row reduction?

I would appreciate any help. Thanks in advance.
 
  1. Write the polynomials in your basis as coordinate vectors, so you have ##\{u, v, w\}## (for example ##u = (1, 3, -2)##). Now put them in a matrix ##[u\ v\ w]##. Show this matrix is invertible (reducible to the identity matrix), as in the example below.
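
     For instance, taking coordinates with respect to ##\{x^2, x, 1\}## (the ordering that ##u = (1, 3, -2)## suggests), a determinant check, one of several equivalent ways to verify invertibility, gives

     $$\det \begin{bmatrix} 1 & 2 & -1 \\ 3 & 5 & -4 \\ -2 & -3 & 4 \end{bmatrix} = 1(20 - 12) - 2(12 - 8) + (-1)(-9 + 10) = 8 - 8 - 1 = -1 \ne 0,$$

     so the matrix is invertible and the three polynomials span the space.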
  2. Instead of a 3-dimensional example you might want to look at a 2-dimensional example; drawing the points and the difference vector could help too. Subtracting the two points gives the direction vector of the line, and the parameter ##t## then slides you along that direction, as below.
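
     Concretely, for the two points in your question 2,

     $$v(t) = (3, -2, 4) + t\big[(-5, 7, 1) - (3, -2, 4)\big] = (3, -2, 4) + t(-8, 9, -3),$$

     so ##v(0) = (3, -2, 4)## and ##v(1) = (-5, 7, 1)##; as ##t## runs over all real numbers, ##v(t)## starts at one point and moves along the fixed direction ##(-8, 9, -3)##, tracing out the whole line.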
  3. Yes, vector spaces can only be isomorphic when the transformation matrix is square, i.e. when the two spaces have the same (finite) dimension.
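
     One way to see this: if ##S## is ##m \times n## and ##T## is ##n \times m## with ##ST = I_m## and ##TS = I_n##, then

     $$m = \operatorname{rank}(ST) \le \operatorname{rank} S \le \min(m, n), \qquad n = \operatorname{rank}(TS) \le \operatorname{rank} T \le \min(m, n),$$

     which forces ##m = n##.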
  4. Well, vectors only become a row or a column when you write them down on paper. In fact, vectors are elements of a vector space and do not come with an instruction to write them down in a predefined way. However, when doing linear algebra it is common to write elements ##x## of ##\mathbb{R}^n## or ##\mathbb{C}^n## in the following way:

    $$\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix}$$

    You can write them as ##(x_1, \dots, x_n)## or as rows as well, but the common convention is columns, as I just did.
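
     As for the coordinate map in your question 4: for example, taking ##B = \{1, x, x^2\}## as an ordered basis of the polynomials of degree at most 2,

     $$[\,2x^2 + 3x - 1\,]_B = \begin{bmatrix} -1 \\ 3 \\ 2 \end{bmatrix},$$

     so the map ##v \mapsto [v]_B## sends each polynomial to a column vector in ##F^3##; whether you then display it as a row or a column is still only notation.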
  5. Yes, by row reduction you can easily find the rank of an ##m \times n## matrix ##A##. To find the null space you have to do a little bit more work. There is a useful theorem, the rank–nullity theorem, which states

    $$\operatorname{rank} A + \dim \operatorname{Nul} A = n$$
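
     For example, row reducing

     $$A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \end{bmatrix} \;\longrightarrow\; \begin{bmatrix} 1 & 2 & 3 \\ 0 & 0 & 0 \end{bmatrix}$$

     leaves one pivot, so ##\operatorname{rank} A = 1## and ##\dim \operatorname{Nul} A = 3 - 1 = 2##. Counting pivot rows and counting pivot columns give the same number, which is why the maximal number of independent rows equals the maximal number of independent columns.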
 