Prove the identity matrix is unique

AI Thread Summary
The discussion centers on proving the uniqueness of the identity matrix, with the user struggling to understand how the equation AC = BC implies A = B. They explore the implications of using different matrices A, including the zero matrix, and express confusion about the role of invertibility in their proof. Participants suggest that if A is invertible, one can multiply both sides of the equation by A's inverse to conclude that I_1 = I_2. The conversation also touches on the nature of vectors as matrices and their invertibility, clarifying that only square matrices can be invertible. Ultimately, the consensus is that the identity element must be unique under the right conditions.
askmathquestions
Homework Statement
Prove the identity matrix is unique.
Relevant Equations
##I_1 A = A##, ##I_2 A = A##
I would appreciate help walking through this. I put solid effort into it, but there are these roadblocks and questions that I can't seem to get past. This is homework I've assigned myself, because these are nagging questions that are bothering me and that I can't figure out. I'm studying purely on my own, no professors, using the freely accessible MIT OpenCourseWare material on linear algebra.

I'm trying to prove, and figure out how and why, the identity matrix is unique, but I can't quite see how ##AC = BC## implies ##A = B##; I don't know how you can suddenly remove the ##C## from the equation. Here's where I'm at (and I don't know if there's a LaTeX editor I can use):

Let ##I_1## and ##I_2## be two ##n \times n## matrices acting on an ##n \times p## matrix ##A##, such that ##I_1 A = A## and ##I_2 A = A##. Suppose ##A## is not identically the ##0## matrix.

How do we show ##I_1 = I_2##?

We have by equality that
##I_2 I_1 A = I_2 A = A,##
and so ##I_1 A = I_2 A.##

But how do I make the leap to saying ##I_1 = I_2##? Every other attempt I have is just some combinatoric mess of matrices; there's something fundamental I'm not getting and I don't know what.

If we made some additional assumptions in the framework, we could require that ##A## is invertible, but then we'd lose the identity's uniqueness on non-invertible matrices.

This reminds me of another question that's bothering me: are column vectors, like x = [[x_1],[x_2],[x_3]], "invertible" matrices? Conceivably, we could define a row vector y = (1/3) [[1/x_1, 1/x_2, 1/x_3]] so that when we multiply ##yx## we obtain ##1##. But I'm confused, because historically we don't refer to vectors as "matrices", we refer to them as "vectors", so it's confusing to treat a vector as a matrix; furthermore, this ##1## that comes out of multiplying ##y## and ##x## is just a scalar quantity, not a matrix, so I don't know whether to say ##y## is the "left-inverse" of ##x##. I'm confused by how all the dimensions of each component keep changing.
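To make the dimensions concrete, here is that product written out (this only makes sense if every ##x_i \neq 0##, which I'm assuming):
$$
y\,x = \frac{1}{3}\begin{bmatrix} 1/x_1 & 1/x_2 & 1/x_3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \frac{1}{3}\left(\frac{x_1}{x_1}+\frac{x_2}{x_2}+\frac{x_3}{x_3}\right) = 1,
$$
a ##1 \times 1## result, whereas the product in the other order, ##x\,y##, is a ##3 \times 3## matrix. That mismatch is part of what I mean by the dimensions changing.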
 
Last edited:
What happens if there are two matrices ##I_1## and ##I_2##, both having the properties of the identity matrix?

Can you show that ##I_1 = I_2##?

Is that sufficient to show that the identity matrix is unique?
 
That's my general outline. If I suppose ##I_1 A = I_2 A##, then I feel like I should somehow be able to derive ##I_1 = I_2##, but I don't know how to get rid of this ##A## without making all these other assumptions about invertibility and about ##AA^{-1}## equalling the identity. It's circular logic then, because how do I know "which" identity ##AA^{-1}## is equal to? I don't know how to show that such a statement is true independent of your choice of ##A##.
 
Perhaps you need a clever choice of ##A##?
 
PeroK said:
Perhaps you need a clever choice of ##A##?
But ##A## is supposed to be any matrix of the correct dimensions. Picking and limiting yourself to only a specific ##A## seems like it would defeat the purpose of the proof.
 
askmathquestions said:
Homework Statement:: Prove the identity matrix is unique.
Relevant Equations:: ##I_1 A = A##, ##I_2 A = A##

I would appreciate help walking through this. I put solid effort into it, but there are these roadblocks and questions that I can't seem to get past. This is homework I've assigned myself, because these are nagging questions that are bothering me and that I can't figure out. I'm studying purely on my own, no professors, using the freely accessible MIT OpenCourseWare material on linear algebra.

I'm trying to prove, and figure out how and why, the identity matrix is unique, but I can't quite see how ##AC = BC## implies ##A = B##; I don't know how you can suddenly remove the ##C## from the equation. Here's where I'm at (and I don't know if there's a LaTeX editor I can use):

Let ##I_1## and ##I_2## be two ##n \times n## matrices acting on an ##n \times p## matrix ##A##, such that ##I_1 A = A## and ##I_2 A = A##.

How do we show ##I_1 = I_2##?

We have by equality that
##I_2 I_1 A = I_2 A = A,##
and so ##I_1 A = I_2 A.##

But how do I make the leap to saying ##I_1 = I_2##? Every other attempt I have is just some combinatoric mess of matrices; there's something fundamental I'm not getting and I don't know what.

If we made some additional assumptions in the framework, we could require that ##A## is invertible, but then we'd lose the identity's uniqueness on non-invertible matrices.

This reminds me of another question that's bothering me: are column vectors, like x = [[x_1],[x_2],[x_3]], "invertible" matrices? Conceivably, we could define a row vector y = (1/3) [[1/x_1, 1/x_2, 1/x_3]] so that when we multiply ##yx## we obtain ##1##. But I'm confused, because historically we don't refer to vectors as "matrices", we refer to them as "vectors", so it's confusing to treat a vector as a matrix; furthermore, this ##1## that comes out of multiplying ##y## and ##x## is just a scalar quantity, not a matrix, so I don't know whether to say ##y## is the "left-inverse" of ##x##. I'm confused by how all the dimensions of each component keep changing.
It is crucial to know where ##A## is from. E.g. if it is invertible, we could simply multiply with ##A^{-1}## from the right. If ##A=0## then there is more than one identity matrix. Sure, these are extreme examples, but they show that ##A\in ?## is crucial. And there are domains where left identity and right identity are different.
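For instance, in the ##2\times 2## case with ##A = 0##,
$$
\begin{bmatrix}1&0\\0&1\end{bmatrix}\begin{bmatrix}0&0\\0&0\end{bmatrix}
=\begin{bmatrix}0&0\\0&0\end{bmatrix}
=\begin{bmatrix}2&0\\0&2\end{bmatrix}\begin{bmatrix}0&0\\0&0\end{bmatrix},
$$
so every ##2\times 2## matrix satisfies ##XA = A## and nothing forces ##X## to be unique.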
 
I can see how if ##A = 0## then we don't have uniqueness. I just don't know about assuming invertibility: you'd have to assume an inverse exists, but this seems like circular logic, because the definition of an inverse is a matrix multiplication which returns the identity, the thing I'm trying to prove. So without uniqueness, how do you know "which" identity ##AA^{-1}## returns?
 
askmathquestions said:
But A is supposed to be any matrix, of the correct dimensions. Picking and limiting yourself to only a specific A seems like it would defeat the purpose of the proof.
There's no answer to that except to say that your thinking is illogical.
 
fresh_42 said:
It is crucial to know where ##A## is from. E.g. if it is invertible, we could simply multiply with ##A^{-1}## from the right. If ##A=0## then there is more than one identity matrix. Sure, these are extreme examples, but they show that ##A\in ?## is crucial. And there are domains where left identity and right identity are different.
I don't follow what you are trying to say here.
 
  • #10
PeroK said:
There's no answer to that except to say that your thinking is illogical.
Just because ##AX = B## holds, it doesn't suddenly follow that if we pick another, different matrix in place of ##X##, say ##Y##, then ##AY## is still equal to ##B## in the general case.

PeroK said:
I don't follow what you are trying to say here.
They are pointing out a point of contention in the assumptions of the problem. I didn't grab this problem out of a book, which is why it's ill-posed; I need to figure out what minimal additional assumptions are required to prove uniqueness, which I obviously didn't know when writing the problem. I amended my problem to require that ##A## not be identically equal to the ##0## matrix.
 
  • #11
askmathquestions said:
I can see how if ##A = 0## then we don't have uniqueness. I just don't know about assuming invertibility: you'd have to assume an inverse exists, but this seems like circular logic, because the definition of an inverse is a matrix multiplication which returns the identity, the thing I'm trying to prove. So without uniqueness, how do you know "which" identity ##AA^{-1}## returns?
No. If we have a multiplicative group, i.e. an associative [##A(BC)=(AB)C##] structure in which all elements have inverses and there is an identity element, then we are allowed to multiply from one side with an inverse element, and
\begin{align*}
I_1\cdot A =I_2\cdot A &\Longrightarrow (I_1\cdot A)\cdot A^{-1} =(I_2\cdot A)\cdot A^{-1}\\
&\Longrightarrow I_1\cdot (A\cdot A^{-1}) =I_2\cdot (A\cdot A^{-1})\\
&\Longrightarrow I_1\cdot I_1= I_1=I_2=I_2\cdot I_2
\end{align*}
This would be necessary if we only have a minimal number of axioms for a group. Then we would have demanded only the existence of an identity element, not its uniqueness or - even more important - that left and right identity are the same.

If we have no group, then it is still important where ##A## is from. I assume we have ##A\in \mathbb{M}_{(n,n)}(\mathbb{R}),## that is all possible real square matrices of a given finite dimension. In this case, you can look for appropriate matrices ##A## and test what you get. Hint: use matrices with a lot of zero entries. However, this would have been a guess on my part. There are domains where it is not automatically the case that there is only one identity. That's why I asked.
 
  • #12
If ##I## works as an identity, i.e. ##AI = A##, then it should do so for all matrices. If it's not unique for some matrices, it's just not unique overall, and uniqueness is the end goal.
So, what if ##A## is ##n \times n## and invertible? Then ##AI_1 = AI_2##.
Can you prove uniqueness now?
 
  • #13
askmathquestions said:
Just because ##AX = B## holds, it doesn't suddenly follow that if we pick another, different matrix in place of ##X##, say ##Y##, then ##AY## is still equal to ##B## in the general case.

They are pointing out a point of contention in the assumptions of the problem. I didn't grab this problem out of a book, which is why it's ill-posed; I need to figure out what minimal additional assumptions are required to prove uniqueness, which I obviously didn't know when writing the problem. I amended my problem to require that ##A## not be identically equal to the ##0## matrix.
No additional assumptions are required. The identity element must be unique.

Considering the case where we have only the zero matrix and hence no identity is unnecessarily muddying the waters.
 
  • #14
WWGD said:
If ##I## works as an identity, i.e. ##AI = A##, then it should do so for all matrices. If it's not unique for some matrices, it's just not unique overall, and uniqueness is the end goal.
So, what if ##A## is ##n \times n## and invertible? Then ##AI_1 = AI_2##.
Can you prove uniqueness now?
Well, in that case is ##AA^{-1} = I_1## or ##I_2##?
 
  • #15
PeroK said:
Well, in that case is ##AA^{-1} = I_1## or ##I_2##?
My idea is to see what happens if ##A(I_1 - I_2)=0## when we know ##A## is invertible. Here ##0## is the zero matrix. Then show ##I_1 - I_2## must be identically zero. We can do it without fancy results on the rank of products.
 
  • #16
fresh_42 said:
No. If we have a multiplicative group, i.e. an associative [##A(BC)=(AB)C##] structure in which all elements have inverses and there is an identity element, then we are allowed to multiply from one side with an inverse element, and
\begin{align*}
I_1\cdot A =I_2\cdot A &\Longrightarrow (I_1\cdot A)\cdot A^{-1} =(I_2\cdot A)\cdot A^{-1}\\
&\Longrightarrow I_1\cdot (A\cdot A^{-1}) =I_2\cdot (A\cdot A^{-1})\\
&\Longrightarrow I_1\cdot I_1= I_1=I_2=I_2\cdot I_2
\end{align*}
This would be necessary if we only have a minimal number of axioms for a group. Then we would have demanded only the existence of an identity element, not its uniqueness or - even more important - that left and right identity are the same.

If we have no group, then it is still important where ##A## is from. I assume we have ##A\in \mathbb{M}_{(n,n)}(\mathbb{R}),## that is all possible real square matrices of a given finite dimension. In this case, you can look for appropriate matrices ##A## and test what you get. Hint: use matrices with a lot of zero entries. However, this would have been a guess on my part. There are domains where it is not automatically the case that there is only one identity. That's why I asked.
Thanks for your reply. I'm interested in ##A## being a vector OR a matrix, so I'd like the proof to be broad enough to cover both cases. Are vectors invertible matrices if they are not the ##0## vector?
 
  • #17
askmathquestions said:
Thanks for your reply. I'm interested in ##A## being a vector OR a matrix, so I'd like the proof to be broad enough to cover both cases. Are vectors invertible matrices if they are not the ##0## vector?
A vector is a ##1 \times p## or ##p \times 1## matrix. As such, it's not invertible on both sides, as only square, i.e. ##n \times n##, matrices can be.
 
  • #18
askmathquestions said:
Are vectors invertible matrices if they are not the 0 vector?
No.
 
  • #19
askmathquestions said:
Thanks for your reply. I'm interested in ##A## being a vector OR a matrix, so I'd like the proof to be broad enough to cover both cases. Are vectors invertible matrices if they are not the ##0## vector?
No. If you multiply two vectors (by the usual method), then you get either a number (row times column) or a matrix (column times row) that isn't invertible. If you multiply differently, say column times column componentwise, then you can't get an inverse as soon as one component is zero, even if the vector isn't completely zero.
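As a concrete instance of the column-times-row case:
$$
\begin{bmatrix}x_1\\x_2\\x_3\end{bmatrix}\begin{bmatrix}y_1&y_2&y_3\end{bmatrix}
=\begin{bmatrix}x_1y_1&x_1y_2&x_1y_3\\x_2y_1&x_2y_2&x_2y_3\\x_3y_1&x_3y_2&x_3y_3\end{bmatrix}.
$$
Every row is a scalar multiple of ##\begin{bmatrix}y_1&y_2&y_3\end{bmatrix}##, so the product has rank at most ##1## and determinant ##0##; for ##n \ge 2## it can never be invertible.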
 
  • #20
WWGD said:
My idea is to see what happens if ##A(I_1 - I_2)=0## when we know ##A## is invertible. Here ##0## is the zero matrix. Then show ##I_1 - I_2## must be identically zero. We can do it without fancy results on the rank of products.
Since ##A(I_1-I_2)=0##, it follows that each of the column vectors of ##I_1-I_2## is in the nullspace of ##A##, and, since ##A## is invertible, it has trivial kernel, so each column vector of ##I_1-I_2## must be the ##0## vector.
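Just to make the column-by-column step explicit: writing ##I_1 - I_2 = \begin{bmatrix} c_1 & c_2 & \cdots & c_n \end{bmatrix}## with columns ##c_j##,
$$
A(I_1-I_2)=0 \;\Longrightarrow\; Ac_j = 0 \text{ for each } j \;\Longrightarrow\; c_j = 0 \text{ (trivial kernel)} \;\Longrightarrow\; I_1 = I_2.
$$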
 
  • #21
WWGD said:
A vector is a ##1 \times p## or ##p \times 1## matrix. As such, it's not invertible on both sides, as only square, i.e. ##n \times n##, matrices can be.
Well, that's a problem, because I'm trying to prove uniqueness for the case where ##A## is a vector.

What about my example? ##x## = [[x_1],[x_2],[x_3]], a column vector, and a row vector ##y## = (1/3) [[1/x_1, 1/x_2, 1/x_3]].

If you multiply ##yx## you get ##1##, the "scalar" identity as it were.

Maybe we need some additional specificity, like: ##A## is an ##n \times p## matrix (which includes vectors, ##n \times 1##) with entries drawn from the complex numbers, but such that all entries are nonzero. A vector would then be an element of the space ##\mathbb{C}^{n \times 1}##.

P.S. Does this website process LaTeX?
 
  • #22
PeroK said:
I don't follow what you are trying to say here.
I pointed to the lack of clarity in the question. The answer depends on the ring, group, or set. You simply assumed things and claimed to know better. I asked instead.

Have a look at:
askmathquestions said:
Well that's a problem, because I'm trying to prove uniqueness for the case A is a vector.
 
  • #23
askmathquestions said:
Well that's a problem, because I'm trying to prove uniqueness for the case A is a vector.
In that case: how is the multiplication defined? Are ##I_1,I_2## matrices?
 
  • #24
fresh_42 said:
In that case: how is the multiplication defined? Are ##I_1,I_2## matrices?
Maybe we're talking past each other. ##I_1## and ##I_2## are ##n \times n## matrices, acting on an ##n \times p## matrix ##A##. We have the usual definition of matrix multiplication.
 
  • #25
Then the path you should follow is to solve ##(x)_{ij} \cdot A = A## for several ...
PeroK said:
Perhaps you need a clever choice of ##A##?
... and consider ...
fresh_42 said:
Hint: use matrices with a lot of zero entries.

Edit: From ##I_1A=I_2A=A ## we get ##(I_1-I_2)\cdot A=0.## Thus ##X=I_1-I_2## might be easier to solve. A matter of taste, finally.
 
  • #26
fresh_42 said:
Then the path you should follow is to solve ##(x)_{ij} \cdot A = A## for several ...

... and consider ...

Edit: From ##I_1A=I_2A=A## we get ##(I_1-I_2)\cdot A=0.## Thus ##X=I_1-I_2## might be easier to solve. A matter of taste, finally.
This depends: if we only assume ##A## is non-zero, how do we know we can conclude that ##I_1 - I_2 = 0##?
 
  • #27
In the Relevant Equations, can you also say that ##AI_1=A## and ##AI_2=A## for all matrices ##A##?
If so, what can you say about ##I_1I_2##?
 
  • #28
askmathquestions said:
This depends: if we only assume ##A## is non-zero, how do we know we can conclude that ##I_1 - I_2 = 0##?
You need to solve ##X\cdot A=0.## These are ##n^2## variables and ##n\cdot p## linear equations. It is immediately clear that
  1. there is possibly more than one solution, i.e. ##X## isn't unique if ##n<p.##
  2. there is possibly no solution, i.e. ##I_1## and ##I_2## do not exist if ##n>p.##
  3. there is only a unique solution guaranteed if ##n=p.##
The ##n\cdot p## linear equations have variables ##x_{ij}## and parameters ##a_{ij}.##
I still do not know which set ##A## is allowed to come from, but "identity" means that the condition has to hold for all entries from whatever that set is. I assume we can plug in any real number. In that case, plug in ##a_{11}=1## and ##a_{ij}=0## elsewhere, then proceed with ##a_{12}=1## and ##a_{ij}=0## elsewhere, etc. This gives you tons of equations that all have to be true.
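As a small worked instance (the square ##2\times 2## case, so ##n=p=2##): with ##a_{11}=1## and all other entries ##0##, the condition ##X\cdot A = 0## reads
$$
\begin{bmatrix}x_{11}&x_{12}\\x_{21}&x_{22}\end{bmatrix}\begin{bmatrix}1&0\\0&0\end{bmatrix}
=\begin{bmatrix}x_{11}&0\\x_{21}&0\end{bmatrix}
=\begin{bmatrix}0&0\\0&0\end{bmatrix},
$$
which forces ##x_{11}=x_{21}=0##; plugging in ##a_{21}=1## with zeros elsewhere forces ##x_{12}=x_{22}=0## in the same way, so ##X=0##.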
 
  • #29
FactChecker said:
In the Relevant Equations, can you also say that ##AI_1=A## and ##AI_2=A## for all matrices ##A##?
If so, what can you say about ##I_1I_2##?
Possibly that the matrices commute. I thought about that, but I'm not sure how it helps.
 
  • #30
askmathquestions said:
Possibly that the matrices commute, I thought about that but I'm not sure how it helps.
It would help if you would multiply matrices!
\begin{align*}
I_1\cdot A= \begin{bmatrix}
x_{11}&x_{12}&\ldots&x_{1n}\\ \vdots &\vdots &\ldots&\vdots \\x_{n1}&x_{n2}&\ldots&x_{nn}
\end{bmatrix}\cdot
\begin{bmatrix}
a_{11}&a_{12}&\ldots&a_{1p}\\ \vdots &\vdots &\ldots&\vdots \\a_{n1}&a_{n2}&\ldots&a_{np}
\end{bmatrix} =
\begin{bmatrix}
a_{11}&a_{12}&\ldots&a_{1p}\\ \vdots &\vdots &\ldots&\vdots \\a_{n1}&a_{n2}&\ldots&a_{np}
\end{bmatrix}=A
\end{align*}

I like to consider the matrices
$$E_{ij}:= \begin{bmatrix}
0&0&\ldots&0\\
\vdots &\vdots &\ldots&\vdots \\
0&0&\ldots 1_{ij}\ldots &0\\
\vdots &\vdots &\ldots&\vdots \\
0&0&\ldots&0
\end{bmatrix}
$$
with a ##1## at position ##(i,j)## and ##0## elsewhere. Then
$$
E_{pq}\cdot E_{rs} =\begin{cases} 0&\text{ if }q\neq r\\ E_{ps} &\text{ if }q= r\end{cases}
$$
Now set ##I_1-I_2=X##, an arbitrary matrix ##X=(x_{ij})=\sum_{i=1}^n\sum_{j=1}^nx_{ij}E_{ij}##, and for ##A## take all the ##E_{pq}## within your given sizes.
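Here is a sketch of where that leads (keeping ##X## of size ##n\times n## and taking for ##A## the ##n\times p## matrices ##E_{rs}## with a ##1## at position ##(r,s)##):
\begin{align*}
X\cdot E_{rs} = \Big(\sum_{i,j} x_{ij}E_{ij}\Big)E_{rs} = \sum_{i,j} x_{ij}\,E_{ij}E_{rs} = \sum_{i} x_{ir}\,E_{is}.
\end{align*}
So ##X\cdot E_{rs}=0## forces ##x_{ir}=0## for every ##i##, and letting ##r## run from ##1## to ##n## gives ##X=0##, i.e. ##I_1=I_2##.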
 
  • #31
askmathquestions said:
Possibly that the matrices commute, I thought about that but I'm not sure how it helps.
Which matrix/matrices does ##I_1I_2## equal?
 
  • #32
Hi @askmathquestions. It may be worth noting the following (if not already clear).

Suppose ##A## is any ##m \times n## matrix where ##m \ne n##.

There are two (each unique) different identity matrices, ##I_{m \times m}## and ##I_{n \times n}## such that
##I_{m \times m} A = A## and
##A I_{n \times n} = A##

Of course if ##m=n##, then only a single identity matrix exists.

Proof of the uniqueness should not use inverses, for various reasons. Not all matrices are invertible, but all matrices can be (left or right) multiplied by the appropriate identity matrix. More fundamentally, inverses are defined in terms of a unique identity matrix, so a proof of identity uniqueness using inverses is a circular argument.
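A small example with ##m=2## and ##n=3##:
$$
\begin{bmatrix}1&0\\0&1\end{bmatrix}\begin{bmatrix}a&b&c\\d&e&f\end{bmatrix}
=\begin{bmatrix}a&b&c\\d&e&f\end{bmatrix}
=\begin{bmatrix}a&b&c\\d&e&f\end{bmatrix}\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix},
$$
so the ##2\times 2## identity works on the left and the ##3\times 3## identity works on the right; neither can take the other's place because the shapes don't match.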
 
  • #33
Steve4Physics said:
Hi @askmathquestions. It may be worth noting the following (if not already clear).

Suppose ##A## is any ##m \times n## matrix where ##m \ne n##.

There are two (each unique) different identity matrices, ##I_{m \times m}## and ##I_{n \times n}## such that
##I_{m \times m} A = A## and
##A I_{n \times n} = A##

Of course if ##m=n##, then only a single identity matrix exists.

Proof of the uniqueness should not use inverses, for various reasons. Not all matrices are invertible, but all matrices can be (left or right) multiplied by the appropriate identity matrix. More fundamentally, inverses are defined in terms of a unique identity matrix, so a proof of identity uniqueness using inverses is a circular argument.
Thanks for pointing that out: so there are two different, each unique, identities, one for left multiplication and one for right multiplication.

I guess I'm interested in left multiplication: for a given vector (or square matrix) ##x##, we have ##Ax## for some appropriately sized square matrix ##A##.
I'm not sure, though, why you required ##m \neq n##; isn't the identity unique when acting on square matrices too?
 
  • #34
askmathquestions said:
Thanks for pointing that out, so there are two different, each unique identities, one for left multiplication and one for right multiplication.
May happen, usually doesn't. I guess if you ask for an example in a thread, it will take a while before someone has an answer, if at all, except for the case of unequal dimensions.
 
  • #35
fresh_42 said:
You need to solve ##X\cdot A=0.## These are ##n^2## variables and ##n\cdot p## linear equations. It is immediately clear that
  1. there is possibly more than one solution, i.e. ##X## isn't unique if ##n<p.##
  2. there is possibly no solution, i.e. ##I_1## and ##I_2## do not exist if ##n>p.##
  3. there is only a unique solution guaranteed if ##n=p.##
The ##n\cdot p## linear equations have variables ##x_{ij}## and parameters ##a_{ij}.##
I still do not know which set ##A## is allowed to come from, but "identity" means that the condition has to hold for all entries from whatever that set is. I assume we can plug in any real number. In that case, plug in ##a_{11}=1## and ##a_{ij}=0## elsewhere, then proceed with ##a_{12}=1## and ##a_{ij}=0## elsewhere, etc. This gives you tons of equations that all have to be true.
Well, I'm interested in an argument that works for square matrices acting on vectors as well as for square matrices acting on other square matrices. I'm interested in both because both vectors and square matrices have many conventional properties and applications, which I generalized by saying ##A## is an ##n \times p## matrix in the original problem. So if you require ##n = p##, does that mean your domain of identities can no longer include vectors, i.e. there does not exist a unique identity for vectors?
 
  • #36
PeroK said:
No additional assumptions are required. The identity element must be unique.

Considering the case where we have only the zero matrix and hence no identity is unnecessarily muddying the waters.
If vectors were invertible matrices, then taking the inverse would be sufficient for most purposes, and I would be more satisfied with that. I realize that specifically because it wouldn't matter "which" identity you ended up with after obtaining ##AA^{-1}##; it would simplify to ##I_1 = I_2## either way.

Still, no one has addressed the ##x##, ##y## example I brought up, which seems to suggest vectors are invertible matrices.

Though in practice, it is a better proof if the proof can be extended to non-vectors and non-square matrices too.

There also might be some confusion, in all the back and forth, about what ##A##, ##I## and ##x## are.

In my original problem, I took ##A## to be any ##n \times p## matrix that is not identically the zero matrix, so that this would cover both vectors and square matrices. I'm mainly interested in left-multiplication by an identity, ##IA = A.##
 
  • #37
askmathquestions said:
If vectors were invertible matrices, ...
They are not, simply because vector times vector isn't a vector anymore.

askmathquestions said:
Though in practice, it is a better proof if the proof can be extended to non-vectors and non-square matrices too.
That's why I asked you about the domains right from the beginning. This seems to make no sense. Vectors are elements of a vector space. A vector space has certain properties. A matrix represents a function between vector spaces. Such a space of linear functions has certain properties. You compare apples with oranges.
 
  • #38
fresh_42 said:
They are not, simply because vector times vector isn't a vector anymore.

That's why I asked you about the domains right from the beginning. This seems to make no sense. Vectors are elements of a vector space. A vector space has certain properties. A matrix represents a function between vector spaces. Such a space of linear functions has certain properties. You compare apples with oranges.
Okay, it sounds like you're saying the uniqueness of the identity matrix acting on square matrices and the uniqueness of the identity matrix acting on vectors are two different problems.

I don't quite know if that's true, because let's say you have a matrix ##\begin{bmatrix} a & b \\ c & d \end{bmatrix}## and a vector ##\begin{bmatrix} x \\ y \end{bmatrix}##. Well, the left-hand multiplicative identity for both of these is the matrix ##\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}##.
Recall I assumed that ##I_1## and ##I_2## were two ##n \times n## matrices; does this help?
 
  • #39
askmathquestions said:
Okay, it sounds like you're saying the uniqueness of the identity matrix for square matrices and the uniqueness of the identity matrix for vectors are two different problems.
No. I am saying that you do not properly distinguish between vectors and matrices. You say that ##A## is an ##n \times p## matrix, then you call it a vector, i.e. ##p=1.## But these are fundamentally different multiplications:
##I_1## and ##I_2## are functions from an ##n##-dimensional vector space into an ##n##-dimensional vector space.
##p>1##:
##A## is a function from a ##p##-dimensional vector space into an ##n##-dimensional vector space.
Hence, ##I_1\cdot A## is a multiplication of functions.
##p=1##:
##I_1\cdot A## is the function ##I_1## evaluated on the vector ##A.##

If you only want to solve
$$
\begin{bmatrix}a&b\\c&d\end{bmatrix}\cdot \begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}x\\y\end{bmatrix}
$$
for all possible ##x,y##, then solve
$$
\begin{bmatrix}a&b\\c&d\end{bmatrix}\cdot \begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}1\\0\end{bmatrix} \text{ and }
\begin{bmatrix}a&b\\c&d\end{bmatrix}\cdot \begin{bmatrix}0\\1\end{bmatrix}=\begin{bmatrix}0\\1\end{bmatrix}
$$
and calculate ##a,b,c,d.##
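Spelling those two products out (just the ##2\times 2## case, but the general case goes the same way):
\begin{align*}
\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}a\\c\end{bmatrix}=\begin{bmatrix}1\\0\end{bmatrix}
&\Longrightarrow a=1,\ c=0,\\
\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix}=\begin{bmatrix}b\\d\end{bmatrix}=\begin{bmatrix}0\\1\end{bmatrix}
&\Longrightarrow b=0,\ d=1,
\end{align*}
so the only matrix that fixes every vector in ##\mathbb{R}^2## is ##\begin{bmatrix}1&0\\0&1\end{bmatrix}.##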

If you want to prove ##I_1A=I_2A=A \Longrightarrow I_1=I_2## by general properties then specify the linear function spaces. Such a specification determines which conclusions are allowed.

askmathquestions said:
I don't quite know if that's true, because let's say you have a matrix ##[[a,b],[c,d]]## and a vector ##[[x],[y]]##. Well, the multiplicative identity for both of these is the matrix ##\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}##.
Recall I assumed that ##I_1## and ##I_2## were two ##n## x ##n## matrices, does this help?
Yes, see above. And the general case works accordingly.
 
  • #40
fresh_42 said:
No. I am saying that you do not properly distinguish between vectors and matrices. You say that ##A## is an ##n \times p## matrix, then you call it a vector, i.e. ##p=1.## But these are fundamentally different multiplications:
##I_1## and ##I_2## are functions from an ##n##-dimensional vector space into an ##n##-dimensional vector space.
##p>1##:
##A## is a function from a ##p##-dimensional vector space into an ##n##-dimensional vector space.
Hence, ##I_1\cdot A## is a multiplication of functions.
##p=1##:
##I_1\cdot A## is the function ##I_1## evaluated on the vector ##A.##

If you only want to solve
$$
\begin{bmatrix}a&b\\c&d\end{bmatrix}\cdot \begin{bmatrix}x\\y\end{bmatrix}=\begin{bmatrix}x\\y\end{bmatrix}
$$
for all possible ##x,y##, then solve
$$
\begin{bmatrix}a&b\\c&d\end{bmatrix}\cdot \begin{bmatrix}1\\0\end{bmatrix}=\begin{bmatrix}1\\0\end{bmatrix} \text{ and }
\begin{bmatrix}a&b\\c&d\end{bmatrix}\cdot \begin{bmatrix}0\\1\end{bmatrix}=\begin{bmatrix}0\\1\end{bmatrix}
$$
and calculate ##a,b,c,d.##

If you want to prove ##I_1A=I_2A=A \Longrightarrow I_1=I_2## by general properties then specify the linear function spaces. Such a specification determines which conclusions are allowed.

Yes, see above. And the general case works accordingly.
Couldn't I just say ##A \in \mathbb{C}^{n \times p}##? You're saying vectors and matrices are different. For practical purposes I agree, but for this specific proof both vectors and square matrices are a class of ##n \times p## matrices, so if you can prove uniqueness for the ##n \times p## case, then you've proven uniqueness for both vectors and other matrices.
 
  • #41
askmathquestions said:
Couldn't I just say ##A \in \mathbb{C}^{n \times p}##? You're saying vectors and matrices are different. For practical purposes I agree, but for this specific proof both vectors and matrices are a class of ##n \times p## matrices, so if you can prove uniqueness for the ##n \times p## case, then you've proven uniqueness for both vectors and other matrices.
Yes, but the word vector is confusing then. Obviously.

You need to multiply matrices! Solve ##(x)_{rs} \cdot E_{uv}=E_{uv}## for all ##u,v## and calculate ##(x)_{rs}.##
 
  • #42
fresh_42 said:
You need to multiply matrices!
Right, you can multiply a vector by a matrix, or you can multiply another matrix by a matrix; in either case you're multiplying by a matrix, so I don't quite understand what the issue is. Both circumstances can be generalized as multiplying an ##n \times p## matrix ##A## by a square ##n \times n## matrix ##I##.
 
Last edited:
  • #43
Also, I didn't notice there was a linear and abstract algebra section of this website. Would it be possible for a moderator to move this thread there?
 
  • #44
It is not clear to me what the full problem statement, assumptions, and prior proven facts are.
If you are given a right and a left multiplicative identity, ##I_R, I_L##, respectively, then you know that ##I_L = I_L I_R = I_R##.
Does that help you?
 
  • #45
FactChecker said:
It is not clear to me what the full problem statement, assumptions, and prior proven facts are.
If you are given a right and a left multiplicative identity, ##I_R, I_L##, respectively, then you know that ##I_L = I_L I_R = I_R##.
Does that help you?
You're sure that this is true for a general ##n \times p## matrix? Another user said those identities are different. Perhaps you're assuming ##A## is a square matrix.
 
  • #46
askmathquestions said:
You're sure that this is true for a general ##n \times p## matrix? Another user said those identities are different. Perhaps you're assuming ##A## is a square matrix.
It is true by the definition of the multiplicative identity.
##I_L = I_L I_R## by the definition of the right multiplicative identity matrix.
##I_L I_R = I_R## by the definition of the left multiplicative identity matrix.
I thought that you specified earlier that the matrices were ##n \times n##.
 
  • #47
FactChecker said:
It is true by the definition of the multiplicative identity.
##I_L = I_L I_R## by the definition of the right multiplicative identity matrix.
##I_L I_R = I_R## by the definition of the left multiplicative identity matrix.
I thought that you specified earlier that the matrices were ##n \times n##.
The left-multiplicative identity is ##n \times n##; I don't know that the right-multiplicative identity necessarily is. I wasn't assuming the same restrictions on that, since I'm concerned partly about left-hand transformations on vectors.
 
  • #48
askmathquestions said:
The left-multiplicative identity is ##n \times n##; I don't know that the right-multiplicative identity necessarily is. I wasn't assuming the same restrictions on that, since I'm concerned partly about left-hand transformations on vectors.
IMHO, with all of the "I wasn't assuming" and "I'm concerned", it is not clear what the exact problem statement in the book was versus how much you modified the problem.
 
  • #49
FactChecker said:
IMHO, with all of the "I wasn't assuming" and "I'm concerned", it is not clear what the exact problem statement in the book was versus how much you modified the problem.
There is no problem statement in a book; I said that already. The fact that this question isn't well-posed yet should also tell you that it's not from any official text. This is my own inquiry, which is also why I asked for this thread to be moved to the linear algebra section. I explicitly said I need to figure out what assumptions need to be made, and I then amended my statement accordingly.
 
  • #50
askmathquestions said:
There is no problem statement in a book, I said that already,
Sorry, I missed that.
askmathquestions said:
and the fact that this question isn't well-posed yet should also let you know that it's not from any official text. This is my own inquiry, and further why I asked for this thread to be moved to the linear algebra section. I explicitly said I need to figure out what assumptions need to be made and then amended my statement accordingly.
Ok. I see that is part of the goal of the question.

Suppose that you show that the ##n \times n## matrix ##I = \operatorname{diag}(1,1,\ldots,1)## is a right multiplicative identity and that you have ##I_1## and ##I_2## as left multiplicative identities.
Then you can use the appropriate property at each step to say that ##I_1 = I_1 I = I = I_2 I = I_2##.
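To spell out which property is used at each step of that chain:
\begin{align*}
I_1 &= I_1 I &&\text{because } I \text{ is a right identity,}\\
I_1 I &= I &&\text{because } I_1 \text{ is a left identity,}\\
I_2 I &= I &&\text{because } I_2 \text{ is a left identity,}\\
I_2 I &= I_2 &&\text{because } I \text{ is a right identity,}
\end{align*}
and chaining these gives ##I_1 = I = I_2##, so the left identities and the right identity coincide and the identity is unique.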
 