# Small lemma about sums and products

Hey,
I'm trying to prove a larger theorem; in order to complete my proof I need to use the following lemma (or, if it turns out to be false, try a completely different method of proof):

Consider any two sets of n nonzero integers, A and B. If their respective sums and products are equal, then A = B.
i.e. if A = {a1, a2, ..., an} and B = {b1, b2, ..., bn},
and a1*a2*...*an = b1*b2*...*bn
and a1 + a2 + ... + an = b1 + b2 + ... + bn,

then A and B are the same set.

For some reason I'm sure I've seen this theorem before, but I can't for the life of me remember where, or whether it was exactly this or something slightly different. I'd search for it on Google, but I'd need to know what terms to search for - and if I knew that, I would probably already know the answer.

So, any ideas/pointers/links to wikipedia articles are very much appreciated.

## Answers and Replies

mathwonk
Science Advisor
Homework Helper
2020 Award
have you tried induction?

Deveno
Science Advisor
the case |A| = |B| = 2 might be instructive:

let A = {x,y}, B = {a,b}

then x + y = a + b, and xy = ab.

so y = ab/x, and

x + ab/x = a + b, that is:

x^2 + ab = (a+b)x, so

x^2 - (a+b)x + ab = 0

(x - a)(x - b) = 0.

thus x = a, or x = b, in either case, y is the remaining element of B.
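Deveno's quadratic argument is easy to sanity-check numerically. Here is a quick Python sketch (the sum 7 and product 10 are just an example I picked): given the sum s and product p of a two-element set, the set is forced to be the roots of x^2 - sx + p.

```python
import math

def roots_from_sum_product(s, p):
    """Return the (sorted) roots of x^2 - s*x + p = 0.

    Any two numbers with sum s and product p must be exactly these
    roots, which is why the lemma holds when n = 2.
    """
    r = math.sqrt(s * s - 4 * p)
    return sorted([(s - r) / 2, (s + r) / 2])

# A = {2, 5} has sum 7 and product 10; any B with the same sum and
# product must consist of the roots of x^2 - 7x + 10, i.e. {2, 5} again.
print(roots_from_sum_product(7, 10))  # [2.0, 5.0]
```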

AlephZero
Science Advisor
Homework Helper
Deveno's approach suggests this is false if n > 2.

Consider the polynomial
$$(x-a_1)(x-a_2)\cdots(x-a_n) = x^n + A_{n-1}x^{n-1} + \cdots + A_0$$

The sum and product of the a's determine $A_{n-1}$ and $A_0$ but not the other A's.

I conjecture you could find a counterexample that way. But your condition that all the a's are nonzero integers imposes some more constraints on the A's ...

AlephZero
Science Advisor
Homework Helper
Unless you meant to restrict it to positive integers, here's a counterexample:

A = {-1, 2, 3, -4}
B = {1, -2, -3, 4}
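AlephZero's counterexample can be verified in a couple of lines of Python (using `math.prod`, available since Python 3.8):

```python
from math import prod

A = [-1, 2, 3, -4]
B = [1, -2, -3, 4]

print(sum(A), sum(B))          # 0 0   -- equal sums
print(prod(A), prod(B))        # 24 24 -- equal products
print(sorted(A) == sorted(B))  # False -- yet the sets differ
```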

AlephZero said:
A = -1 2 3 -4
B = 1 -2 -3 4
In other words, the sets are not necessarily the same.

In Deveno's case, the elements are the same, but the possibility exists that a1=b2 and b1=a2 (the set order is different).

In AlephZero's case, the elements have the same magnitude, but have differing signs (and they could have alternate set ordering as well).
|-1| = |1|

This lemma is not even true for the positive integers. Consider the multisets {1, 5, 8} and {2, 2, 10}. Alternatively, if we want to require that all three numbers be distinct, we can consider the sets {1, 9, 10} and {2, 3, 15}.
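Both of these positive-integer counterexamples check out; a short Python verification:

```python
from math import prod

pairs = [
    ([1, 5, 8], [2, 2, 10]),   # multisets: sum 14, product 40
    ([1, 9, 10], [2, 3, 15]),  # distinct elements: sum 20, product 90
]
for A, B in pairs:
    # equal sums, equal products, yet different (multi)sets
    print(sum(A) == sum(B), prod(A) == prod(B), sorted(A) == sorted(B))
    # True True False
```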

uart
Science Advisor
Good counterexample, Citan. I wonder if the lemma could be salvaged with the condition that all the factors be positive and non-trivial (that is, we also exclude "1"). Can anyone think of a counterexample that doesn't include "1" as one of the factors?

Matt said:
In other words, the sets are not necessarily the same.

In Deveno's case, the elements are the same, but the possibility exists that a1=b2 and b1=a2 (the set order is different).
Matt, sets don't have ordering so that isn't important here.

uart
Science Advisor
I wonder if the Lemma could be salvaged with the condition that all the factors be positive and non-trivial (that is, we also exclude "1")?

Just to answer my own question: no, it can't be salvaged. If we multiply each element in the above counterexample by "2", the sums double and the products increase by a factor of eight, so on both sides they remain equal, and now no element is "1". So the proposed lemma is definitely sunk.
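uart's scaling trick can be checked directly, taking Citan's first counterexample and doubling every element:

```python
from math import prod

A = [2 * x for x in [1, 5, 8]]   # {2, 10, 16}
B = [2 * x for x in [2, 2, 10]]  # {4, 4, 20}

print(sum(A), sum(B))    # 28 28   -- sums doubled, still equal
print(prod(A), prod(B))  # 320 320 -- products x8, still equal
print(min(A + B))        # 2 -- no element is 1 any more
```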

Oh wow, good catch, Citan! (Though in fact negative integers are allowed in my setting, so even AlephZero's counterexample works.)

Thanks guys. I guess it's back to the drawing board with the proof. :(

Just out of curiosity, what's the larger theorem you're trying to prove?

Given two n×n matrices A, B,
the eigenvalues of AB = the eigenvalues of BA.

Essentially I had proved that the trace and determinant of AB and BA were the same, and if this lemma were true that's all I would need.
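That partial result (equal trace and determinant of AB and BA) is easy to illustrate; here's a small Python sketch with hand-rolled 2×2 helpers and example matrices of my own choosing:

```python
def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(M):
    return M[0][0] + M[1][1]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
AB, BA = matmul(A, B), matmul(B, A)

print(trace(AB) == trace(BA))  # True
print(det2(AB) == det2(BA))    # True
```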

Deveno
Science Advisor
actually i thought of a better way:

suppose 0 is an eigenvalue of AB.

then det(AB) = det(A)det(B) = 0, so either A or B must be singular.

hence BA is singular (det(BA) = det(B)det(A) = 0), so 0 is an eigenvalue of BA.

suppose λ is a non-zero eigenvalue for AB, with eigenvector v.

then BA(Bv) = B(ABv) = B(λv) = λ(Bv),

so λ is an eigenvalue for BA, with eigenvector Bv

(Bv cannot be 0, since ABv = λv is non-zero).

that's half, the other half is similar.

You can actually go further, and prove that AB and BA have the same eigenvalues with the same multiplicities. For this, it is necessary and sufficient to prove that AB and BA have the same characteristic polynomial. Let Char(AB) = det(λI - AB) be the characteristic polynomial of AB, and similarly for BA.

If B is invertible, we have det(B) Char(AB) = det(B(λI - AB)) = det(λB - BAB) = det((λI - BA)B) = Char(BA) det(B), and dividing both sides by det(B) yields Char(AB) = Char(BA).

Now, note that the coefficients of Char(A) are polynomials in the entries of A, so the map A ↦ Char(A) is continuous. The invertible matrices are dense in the set of all matrices, so if B is not invertible, there is a sequence of invertible matrices {Bn} such that Bn → B. Then from the previous case we have Char(AB) = lim Char(ABn) = lim Char(BnA) = Char(BA), which completes the proof.
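For the 2×2 case the characteristic polynomial is just λ² - tr(M)λ + det(M), so the claim can be spot-checked in pure Python, deliberately using a singular B (the case where the density argument is needed). The matrices below are my own example:

```python
def matmul(X, Y):
    """Product of two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def charpoly(M):
    """Coefficients (1, -trace, det) of det(lambda*I - M) for a 2x2 matrix."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return (1, -tr, det)

A = [[1, 2], [3, 4]]
B = [[1, 2], [2, 4]]  # singular: det(B) = 0

print(charpoly(matmul(A, B)) == charpoly(matmul(B, A)))  # True
```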

Deveno
Science Advisor
The invertible matrices are dense in the set of all matrices

this is a topological fact unlikely to be covered in a linear algebra course (but I do like the way you think).