Why Are 3x3 Systems with Arithmetically Increasing Constants Always Infinite?

  • Thread starter: AlphaNoodle
  • Tags: 3x3 System
AlphaNoodle
I encountered this system:
3x + 5y + 7z = 9
7x + 3y - z = -5
12x + 13y + 14z = 15
and it turned out to have infinitely many solutions.

However, looking at each equation, the numbers across each row (the coefficients together with the constant term) increase or decrease by a fixed amount:
3x + 5y + 7z = 9 (+2)
7x + 3y - z = -5 (-4)
12x + 13y + 14z = 15 (+1)

I made other systems in the same format, where each equation's numbers form an arithmetic progression (increase or decrease by a constant), and the solution was always infinitely many.

I am curious why this happens, and whether there is a proof behind it.
I got as far as:

Ax+(A+m)y+(A+2m)z=(A+3m)
Bx+(B-n)y+(B-2n)z=(B-3n)
Cx+(C+k)y+(C+2k)z=(C+3k)

But I am clueless as to why these systems always have infinite solutions. Using elimination, I always reach a point where a number equals the same number, and from there the solutions are always infinite. I don't even know how to proceed with a proof, and I was hoping for some help. I am sure I am misinterpreting or missing something, but anything would be appreciated. Thanks in advance.
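A quick numerical check of the pattern (my own sketch, not part of the original post; assumes NumPy is available): for rows that are arithmetic progressions, the 3x3 coefficient matrix and the full augmented matrix both have rank 2, which is exactly the signature of a consistent system with infinitely many solutions.

```python
# Sketch (not from the thread): build a system whose rows are arithmetic
# progressions and compare the ranks of the coefficient and augmented
# matrices with NumPy.
import numpy as np

def arithmetic_system(A, m, B, n, C, k):
    """Augmented matrix [coefficients | constants] in the general form
    from the post: row 1 is A, A+m, A+2m, A+3m; row 2 is B, B-n, ...;
    row 3 is C, C+k, ..."""
    return np.array([
        [A, A + m, A + 2*m, A + 3*m],
        [B, B - n, B - 2*n, B - 3*n],
        [C, C + k, C + 2*k, C + 3*k],
    ], dtype=float)

M = arithmetic_system(3, 2, 7, 4, 12, 1)   # the system quoted above
rank_coeff = np.linalg.matrix_rank(M[:, :3])   # coefficient matrix only
rank_aug = np.linalg.matrix_rank(M)            # full augmented matrix
print(rank_coeff, rank_aug)  # 2 2
```

Equal ranks mean the system is consistent, and rank 2 with 3 unknowns leaves one free variable, hence infinitely many solutions.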
 
One way to analyze a system of equations is to "column"-reduce the augmented matrix. (Column operations change the solution vector, unlike row operations, but they preserve rank, which is what matters here.)

Here, using the letters from your general form, the augmented matrix is
\begin{bmatrix}A & A+m & A+2m & A+3m \\ B & B-n & B-2n & B-3n \\ C & C+k & C+2k & C+3k\end{bmatrix}

If we add -1 times the first column to each of the other columns, we get
\begin{bmatrix}A & m & 2m & 3m \\ B & -n & -2n & -3n \\ C & k & 2k & 3k\end{bmatrix}

Now add -2 times the second column to the third column and -3 times the second column to the fourth column to get
\begin{bmatrix}A & m & 0 & 0 \\ B & -n & 0 & 0 \\ C & k & 0 & 0\end{bmatrix}

Since the last two columns are now zero, both the third column of the coefficient matrix and the constants column are linear combinations of the first two columns (explicitly, col 3 = 2·col 2 − col 1 and col 4 = 3·col 2 − 2·col 1). So the coefficient matrix has rank at most 2, while the constants column lies in its column space: the system is consistent but has fewer independent equations than unknowns, so there are always infinitely many solutions.
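The reduction can be sanity-checked row by row in plain Python (my own sketch, not from the thread): each row is an arithmetic progression a, a+d, a+2d, a+3d, so its third and fourth entries are fixed combinations of the first two, which is exactly why the last two columns vanish for every choice of parameters.

```python
# Sketch (not from the thread): check the per-row identities behind the
# vanishing columns. For any arithmetic row (a, a+d, a+2d, a+3d):
#   entry 3 = 2*entry 2 - entry 1   and   entry 4 = 3*entry 2 - 2*entry 1
import random

for _ in range(1000):
    a = random.randint(-50, 50)
    d = random.randint(-50, 50)
    row = (a, a + d, a + 2*d, a + 3*d)
    assert row[2] == 2*row[1] - row[0]
    assert row[3] == 3*row[1] - 2*row[0]
print("identities hold for all sampled rows")
```

Of course, the identities are pure algebra ((a + 2d) = 2(a + d) − a, and similarly for the fourth entry), so the random sampling is just reassurance, not a proof.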
 
I did not know there were column operations — still in high school, haha. But a question I have is: how do you write column operations? In other words, row operations are written like L1 → L1 + 3L2 or something like that; how are column operations notated?
 