MHB If a row is added to any matrix is it still the same matrix?

find_the_fun
Is it correct to say it doesn't matter if a row of zeros is added on to a matrix?

For example, does
$$
\begin{bmatrix}1&2\\3&4 \end{bmatrix} = \begin{bmatrix}1&2\\3&4 \\ 0&0 \end{bmatrix}?$$

Does it depend on context? For example, if the matrix represents a linear system of equations, then this would imply an extra equation $0x + 0y = 0$, which may not necessarily be part of the system... I'm guessing.
 
Two matrices are equal if and only if they have the same number of rows, the same number of columns, and corresponding entries are equal. So, no, if you add a row, of "0"s or anything else, to a matrix, the result is NOT the same matrix.

You could define a "correspondence" mapping matrix A to the matrix with a row of "0"s added to the bottom and show that this is an "equivalence relation" but being "equivalent" does not mean they are the "same".
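
For the curious, here is a quick Python/NumPy sketch of that definition. NumPy and the helper name `matrices_equal` are my own additions for illustration, not anything from the thread:

```python
import numpy as np

# Equality per the definition above: same shape AND all corresponding entries equal.
def matrices_equal(A, B):
    return A.shape == B.shape and np.array_equal(A, B)

A = np.array([[1, 2], [3, 4]])
B = np.vstack([A, [0, 0]])  # A with a row of zeros appended

print(matrices_equal(A, B))  # False: shapes (2, 2) and (3, 2) differ
```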
 
Matter...how?

Technically, no: one lives in a vector space of dimension 4 (it has 4 entries), while the other lives in a vector space of dimension 6.

However, the space of 3x2 matrices whose last row is zero (a subspace of all 3x2 matrices) is isomorphic to the space of 2x2 matrices, and the two matrices share many common properties: the same rank, for example.
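
A minimal check of that rank claim, assuming NumPy (my choice of tool, not the poster's):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
A_padded = np.vstack([A, np.zeros((1, 2))])  # append a zero row

# Appending a zero row does not change the rank.
print(np.linalg.matrix_rank(A))         # 2
print(np.linalg.matrix_rank(A_padded))  # 2
```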

Your first matrix is equivalent to a linear system of equations:

x + 2y = a
3x + 4y = b

The second matrix is equivalent to the system:

x + 2y = a
3x + 4y = b
0x + 0y = c

If c is non-zero, the second system has no solution, even if the first one does.
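
To see this concretely, here is a SymPy sketch with hypothetical right-hand sides $a = 5$, $b = 11$, $c = 1$; the specific numbers are mine, chosen only for illustration:

```python
import sympy as sp

x, y = sp.symbols('x y')
a, b, c = 5, 11, 1  # hypothetical right-hand sides; note c != 0

# First system: x + 2y = a, 3x + 4y = b -- invertible, unique solution.
M1 = sp.Matrix([[1, 2], [3, 4]])
print(sp.linsolve((M1, sp.Matrix([a, b])), [x, y]))     # {(1, 2)}

# Second system appends the row 0x + 0y = c with c != 0 -- inconsistent.
M2 = sp.Matrix([[1, 2], [3, 4], [0, 0]])
print(sp.linsolve((M2, sp.Matrix([a, b, c])), [x, y]))  # EmptySet
```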

On a more abstract level, it depends on what you mean by "the same". An extreme view would be that the symbol $A$ (whether it be a matrix, an equation, or a sentence in some language) is equal only to $A$ and nothing else. This is, for most purposes, far too restrictive; for example, when we write:

2 + 2 = 4

Obviously we do not mean "2 + 2" and "4" are literally the same string of symbols, but rather the two functions:

$f(x,y) = x+y$
$g(z) = z$

have the same image for $(x,y) = (2,2)$ and $z = 4$.
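
In code, a throwaway Python illustration of the point, nothing more:

```python
# "2 + 2" and "4" are different expressions that happen to name the same value.
f = lambda x, y: x + y
g = lambda z: z

print(f(2, 2) == g(4))  # True: equal images, distinct expressions
```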

A better example: the Euclidean plane $\Bbb R^2$ and the polynomial space:

$P_1(\Bbb R) = \{f: \Bbb R \to \Bbb R \mid f(x) = a + bx\}$

are isomorphic as vector spaces, but clearly they are not "the same" (one consists of pairs of real numbers, the other of functions on the real numbers). They do, however, share a common dimension, which means that in a suitable coordinate system they share a common arithmetic:

one can calculate the polynomial $(f + g)(x) = f(x) + g(x)$ where:

$f(x) = a + bx$
$g(x) = c + dx$

by doing a similar sum in $\Bbb R^2$: adding $(a,b) + (c,d)$.
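
A minimal sketch of that correspondence, assuming NumPy and the coefficient order $(a, b)$ for $a + bx$ (both are my conventions, not the thread's):

```python
import numpy as np

f = np.array([1, 2])  # represents f(x) = 1 + 2x
g = np.array([3, 5])  # represents g(x) = 3 + 5x

h = f + g             # vector addition in R^2 ...
print(h)              # [4 7], i.e. (f + g)(x) = 4 + 7x

# Spot check at x = 2: f(2) + g(2) should equal h(2).
x = 2
print((f[0] + f[1]*x) + (g[0] + g[1]*x) == h[0] + h[1]*x)  # True
```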

Much of mathematics is a process of increasing abstraction: which is essentially a process of replacing equality with equivalence. The reason this works so well is that equality itself is an equivalence relation on any set (in fact, the MINIMAL one), so much of what we feel "ought to be true" turns out to be true even when we replace "equal" by "equivalent".

You can think of this process as the process of choosing "the right size filter". Equality is the finest filter possible, whereas "everything is indistinguishable" is the coarsest filter possible. Usually we want something in-between.

The main problem with saying your two matrices are "the same" is that in fact there are MANY possible ways to embed the space of 2x2 matrices within the space of 3x2 matrices (we could map our original 4 entries to any of the 6 positions in a 3x2 matrix... for example, we could add a 0 row at the top, or in the middle, or take the transpose and then add a 0 row). Each of these ways produces a subspace of the 3x2 matrices that is somehow "the same", but still not equal to the other subspaces (in other words, if we are modelling a physical problem, "position may matter").
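
Here is a quick NumPy sketch of three such embeddings (the variable names are mine, chosen for illustration):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
zero_row = np.zeros((1, 2), dtype=int)

bottom = np.vstack([A, zero_row])             # zero row at the bottom
top    = np.vstack([zero_row, A])             # zero row at the top
middle = np.vstack([A[:1], zero_row, A[1:]])  # zero row in the middle

# Each embedding is linear and injective, but their images are
# different subspaces of the 3x2 matrices: position matters.
print(np.array_equal(bottom, top))  # False
```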

Consider the linear mapping $\Bbb R^2 \to \Bbb R^2$ that has the matrix in the standard basis:

$[T] = \begin{bmatrix}1&0\\0&0 \end{bmatrix}$

This is the projection function onto the $x$-axis:

$T(x,y) = (x,0)$.

Clearly, this isn't the same thing as the identity function $f(x) = x$ on the real numbers: $f$ is bijective, and $T$ is not. And yet, if we restrict $T$ TO the $x$-axis, it acts pretty much the same as the identity function. The difference is that $T$ restricted to the $x$-axis and the identity function on $\Bbb R$ have different domains and codomains, and so represent distinct, if eerily similar, functions.
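
A short NumPy check of the projection (again, just an illustrative sketch):

```python
import numpy as np

T = np.array([[1, 0], [0, 0]])  # projection onto the x-axis

v = np.array([3, 7])
print(T @ v)  # [3 0], i.e. T(3, 7) = (3, 0)

# Restricted to the x-axis, T acts like the identity:
u = np.array([5, 0])
print(np.array_equal(T @ u, u))  # True
```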
 
Ok thanks. A somewhat similar question: if you are given two matrices with no context, and the entries and dimensions are the same, is it right to say they are equal?

e.g. If
$$A=\begin{bmatrix}1&2\\3&4 \end{bmatrix}$$ and $$B=\begin{bmatrix}1&2\\3&4 \end{bmatrix},$$ then does $A=B$?
 
find_the_fun said:
Ok thanks. A somewhat similar question: if you are given two matrices with no context, and the entries and dimensions are the same, is it right to say they are equal?

e.g. If
$$A=\begin{bmatrix}1&2\\3&4 \end{bmatrix}$$ and $$B=\begin{bmatrix}1&2\\3&4 \end{bmatrix},$$ then does $A=B$?
Yes! Have a nice day!

Regards,
$$|\pi\rangle$$
 