MHB Linear Dependency: Find x for a,b,c in R3

Yankel
Let a, b, c be 3 vectors in R^3.

Let A be a 3x3 matrix with a, b, c as its columns. It is known that there exists an x such that:

$$A^{17}\cdot \begin{pmatrix} 1\\ 2\\ x \end{pmatrix}= \begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}$$

Which statement is the correct one?

1) a, b and c are linearly independent
2) a, b and c are linearly dependent
3) (1,2,x)^T is a linear combination of a, b, c
4) the system:

$$A\cdot \begin{pmatrix} 1\\ 2\\ x \end{pmatrix}$$

has a non-trivial solution

The correct answer is (2), but I don't understand why it is correct...

thanks.
 
if a,b,c are linearly independent, then rank(A) = 3.

this means, in particular, by the rank-nullity theorem, that:

3 = dim(ker(A)) + rank(A) = dim(ker(A)) + 3, so:

dim(ker(A)) = 0, that is, the null space of A is {(0,0,0)}.

but if A^{17}(x,y,z) = (0,0,0), then:

A^{16}(x,y,z) = (0,0,0), and thus:

A^{15}(x,y,z) = A^{14}(x,y,z) = ... = A(x,y,z) = (0,0,0),

so that (x,y,z) = (0,0,0).
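
to spell the "peeling off" step out once, writing v = (x,y,z) and using ker(A) = {(0,0,0)} from above:

$$0 = A^{17}v = A\left(A^{16}v\right) \;\Longrightarrow\; A^{16}v \in \ker(A) \;\Longrightarrow\; A^{16}v = (0,0,0),$$

and repeating this 16 more times gives Av = (0,0,0) and finally v = (0,0,0).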

since (1,2,x) ≠ (0,0,0) (no matter what we choose for x),

the columns of A cannot be linearly independent. this means (1) is not true.

let's look at (3). suppose that:

$A = \begin{bmatrix}0&0&0\\0&0&0\\0&0&1 \end{bmatrix}$

then for x = 0, we have:

A(1,2,0) = (0,0,0), so certainly A^{17}(1,2,0) = (0,0,0), but (1,2,0) is not in im(A) (and im(A) is exactly the column space of A).
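
for concreteness, with this A and x = 0, the claim boils down to:

$$A\begin{pmatrix}1\\2\\0\end{pmatrix}=\begin{pmatrix}0\\0\\0\end{pmatrix},\qquad \operatorname{im}(A)=\operatorname{span}\left\{\begin{pmatrix}0\\0\\1\end{pmatrix}\right\},$$

and (1,2,0) is certainly not a multiple of (0,0,1), so it is not in the column space.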

note that it IS possible to have SOME A such that:

A^{17}(1,2,x) = (0,0,0), with (1,2,x) in the column space of A. let:

$A = \begin{bmatrix}-2&1&0\\-4&2&0\\-2x&x&0 \end{bmatrix}$

clearly A(0,1,0) = (1,2,x) so that (1,2,x) = b (and is thus a linear combination of a,b, and c). but an easy calculation shows that:

A^2 = 0, for any choice of x, so that A^{17}(x,y,z) = A^{15}(A^2(x,y,z)) = A^{15}(0,0,0) = (0,0,0).
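
spelling out that "easy calculation" (the columns of A^2 are just A applied to the columns of A):

$$A\begin{pmatrix}-2\\-4\\-2x\end{pmatrix} = -2\begin{pmatrix}-2\\-4\\-2x\end{pmatrix} - 4\begin{pmatrix}1\\2\\x\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix},\qquad A\begin{pmatrix}1\\2\\x\end{pmatrix} = \begin{pmatrix}-2\\-4\\-2x\end{pmatrix} + 2\begin{pmatrix}1\\2\\x\end{pmatrix} = \begin{pmatrix}0\\0\\0\end{pmatrix},$$

and the third column of A is already zero, so all three columns of A^2 vanish.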

so (3) isn't ALWAYS true, but it MIGHT be true.

you have some typo in (4), as you haven't defined a system of equations (no "equals" sign), so until you rectify this, i cannot give a proper argument. however, the argument for (1) shows that indeed, {a,b,c} cannot be linearly independent, so must be linearly dependent.
 
thanks for your help

I never studied transformations (yet), so I am struggling with im() and ker()...

I do understand why the columns of A^17 are dependent; the only part I'm missing is why, if the columns of A^17 are dependent, the columns of A are also dependent...

I need an explanation that doesn't use knowledge of linear transformations... thanks!
 
fix a basis for Rn, and another one for Rm. then there is a unique matrix relative to those bases for any linear transformation T, and every such matrix corresponds to some linear transformation T.

loosely, matrices and linear transformations are "the same things", they're just "written" differently.

you have probably studied null spaces and column spaces belonging to a matrix $A$. these ARE the direct analogues (for a linear transformation $v \to Av$) of ker(T) and im(T) for a general linear transformation T. there's nothing mysterious about this:

kernels are what maps to 0.
images are the entirety of what gets mapped TO.

kernels (or null spaces) measure "how much shrinkage we get". images measure "how much of the target space actually gets hit". there's a natural trade-off here: bigger image means smaller kernel, and smaller image means bigger kernel. the way we keep score is called "dimension".
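
the precise version of this trade-off is the rank-nullity theorem (the same fact used earlier in the thread): for a linear transformation T:V-->W,

$$\dim(\ker(T)) + \dim(\operatorname{im}(T)) = \dim(V).$$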

linear independence is related to kernels
spanning is related to images

what this means for matrices is:

a matrix is 1-1 if the nullspace is {0}, which means ALL its columns are linearly independent. for square matrices, this means the matrix is invertible.

a matrix is onto if it has as many independent columns as the dimension of its co-domain (target space). in particular if it is an mxn matrix with m < n, the columns will be linearly dependent.
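
for instance (a standard illustration, not one used above): the 2x3 matrix

$$\begin{bmatrix}1&0&1\\0&1&1\end{bmatrix}$$

is onto R^2, and its three columns are forced to be dependent: the third column is the sum of the first two.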

linear transformations (think: matrices with a "fancy name". this is not quite accurate, but close enough for visualization) change one vector space to another. they preserve "vector-space-ness": that is they preserve sums:

T(u+v) = T(u) + T(v)

and scalar multiples:

T(cv) = c(T(v)).

since they are functions, they can't "enlarge" a vector space:

dim(T(V)) ≤ dim(V)
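
a quick standard example of "shrinkage" (not one used above): the projection T(x,y) = (x,0) on R^2 has

$$\ker(T) = \{(0,y) : y \in \mathbb{R}\},\qquad \operatorname{im}(T) = \{(x,0) : x \in \mathbb{R}\},$$

so dim(T(R^2)) = 1 < 2 = dim(R^2), and the kernel picks up the "lost" dimension.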

but "good ones" preserve dimension:

dim(T(V)) = dim(V) <--- these are invertible.

********

for any linear transformation T:V-->W, the set ker(T) = {u in V: T(u) = 0} is a SUBSPACE of V. this boils down to the following facts:

1) if u is in ker(T) and v is in ker(T), then:

T(u+v) = T(u) + T(v) (since T is linear)
= 0 + 0 = 0 (since T(u) = 0, and T(v) = 0).

2) if u is in ker(T), so is cu:

T(cu) = c(T(u)) = c(0) = 0

3) 0 is always in ker(T):

T(0) = T(0+0) = T(0) + T(0)
0 = T(0) (subtracting T(0) from both sides).

if T:V-->W is a linear transformation, then the set:

im(T) = T(V) = {w in W: w = T(v) for some v in V} is a subspace of W.

1) suppose w,x are in im(T).

then w = T(u), x = T(v) for some u,v in V.

thus w+x = T(u) + T(v) = T(u+v), so w+x is in im(T).

2) if w is in im(T), so is cw:

since w = T(u), cw = c(T(u)) = T(cu), so cw is in im(T).

3) 0 is in im(T):

0 = T(0), and 0 is always in V, for any vector space.

*********
now, basis vectors are useful: they let us use COORDINATES (numbers) to represent vectors. but bases aren't UNIQUE: we can have several different coordinate systems on the same space. so it's better not to get "too attached" to any particular basis; dimension is one of those things that stays the same no matter which basis we use. so theorems that say something about dimension are more POWERFUL than theorems which rely on numerical calculation.
 