Linear Algebra - Finding linearly independent vectors

Moomax
Hi guys,

I am solving a problem of the form ##A^Tx = 0##, where A is a matrix of known numbers and I am solving for x. After row reducing and multiplying out ##A^Tx##, I am left with the following equations:


-X1 + X4 - X5 = 0

-X2 + X4 = 0

-X3 + X4 - X5 + 28X6 = 0

From these equations, I am trying to find the linearly independent vectors; however, I am not sure how to do this. I tried pulling out a linear algebra book, but I couldn't make straightforward sense of the procedure. Can anyone help me out? Thanks!
 
I may be missing the scampi for the bees, but those aren't vectors, they're parametric equations, which describe surfaces.

Solving these equations is equivalent to finding the intersection of the three surfaces. I count 6 unknowns and 3 equations, in which case the solution will be a 3D relation.
 
I'm not sure where linear independence comes into it, but if X1, ..., X6 are the components of your vector then, as christianjb says, you can only pin down a 3D subspace, all of whose vectors satisfy your equation.

If you just need one specific solution, try fixing X1, X2 and X3. You'll then be able to calculate X4, X5 and X6 from your equations.
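For example (a minimal sketch, not from the thread; the starting values chosen for X1, X2, X3 are arbitrary):

```python
# Fix X1, X2, X3 arbitrarily, then solve the three equations for the rest:
#   -X1 + X4 - X5 = 0           =>  X5 = X4 - X1
#   -X2 + X4 = 0                =>  X4 = X2
#   -X3 + X4 - X5 + 28*X6 = 0   =>  X6 = (X3 - X4 + X5) / 28

x1, x2, x3 = 1.0, 2.0, 3.0    # arbitrary choices, just for illustration
x4 = x2                       # from the second equation
x5 = x4 - x1                  # from the first equation
x6 = (x3 - x4 + x5) / 28      # from the third equation

print(x1, x2, x3, x4, x5, x6)  # one particular solution of the system
```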
 
Moomax said:
Hi guys,

I am solving a problem of the form ##A^Tx = 0##, where A is a matrix of known numbers and I am solving for x. After row reducing and multiplying out ##A^Tx##, I am left with the following equations:


-X1 + X4 - X5 = 0
So X5 = X4 - X1

-X2 + X4 = 0
and X2 = X4

-X3 + X4 - X5 + 28X6 = 0
X3 = X4 - X5 + 28X6
= X4 - (X4 - X1) + 28X6
= X1 + 28X6

So X5, X2, and X3 can be written in terms of X1, X4, and X6.

Now, take X1 = 1, X4 = X6 = 0. Then X5 = -1, X2 = 0, and X3 = 1. One solution is <1, 0, 1, 0, -1, 0>.

Take X4 = 1, X1 = X6 = 0. Then X5 = 1, X2 = 1, and X3 = 0. Another solution is <0, 1, 0, 1, 1, 0>.

Take X6 = 1, X1 = X4 = 0. Then X5 = 0, X2 = 0, and X3 = 28. A third solution is <0, 0, 28, 0, 0, 1>.

The three vectors <1, 0, 1, 0, -1, 0>, <0, 1, 0, 1, 1, 0>, and <0, 0, 28, 0, 0, 1> form a basis for the solution space.
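As a sanity check on this basis, here is a short SymPy sketch (assuming the reduced system above really is all of ##A^Tx = 0##):

```python
from sympy import Matrix

# Coefficient matrix of the reduced system, one row per equation:
#   -X1 + X4 - X5 = 0
#   -X2 + X4 = 0
#   -X3 + X4 - X5 + 28*X6 = 0
M = Matrix([
    [-1,  0,  0, 1, -1,  0],
    [ 0, -1,  0, 1,  0,  0],
    [ 0,  0, -1, 1, -1, 28],
])

# nullspace() returns a basis for the solution space as column vectors.
# SymPy's basis vectors may differ from the hand-derived ones above,
# but they span the same 3D solution space.
for v in M.nullspace():
    print(v.T)
```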

From these equations, I am trying to find the linearly independent vectors; however, I am not sure how to do this. I tried pulling out a linear algebra book, but I couldn't make straightforward sense of the procedure. Can anyone help me out? Thanks!
 
HallsofIvy is on the right track with what I mean, but I am still a little bit confused about the actual procedure.

What made you simplify the equations the way you did? When I was playing around with them trying to simplify, it never occurred to me to set them equal the way you did.

Also, after that was done, what made you pick certain variables to set equal to 0 and others equal to 1?
 
I was just "solving" the equations. It would be nice if we could solve the equations for specific numbers (in which case the solution space would be the trivial subspace, containing only the zero vector <0, 0, 0, 0, 0, 0>), but here we can't. So we do the best we can. Since there are three (independent) equations, we can solve for three of the unknowns in terms of the other three. In this problem almost any choice of three will work; I just chose the easiest.

As for setting one variable equal to 1 and the others 0, that guarantees that the vectors you get will be independent.
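One quick way to see that (a sketch, not from the thread) is to stack the three hand-derived vectors as rows and check the rank:

```python
from sympy import Matrix

# HallsofIvy's three solution vectors, one per row
B = Matrix([
    [1, 0,  1, 0, -1, 0],
    [0, 1,  0, 1,  1, 0],
    [0, 0, 28, 0,  0, 1],
])

# In the free-variable positions (X1, X4, X6) the rows form an identity
# pattern, so no row is a combination of the others: the rank is 3.
print(B.rank())  # prints 3, confirming linear independence
```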
 
