Is -(x^-1) = (-x)^-1 true for all nonzero x in any ring?

  • Thread starter 1MileCrash
  • Start date
  • Tags
    Ring
In summary: yes. In any ring with unity, ##(-1)x = x(-1)## for every ##x##, so ##-(x^{-1}) = (-x)^{-1}## whenever ##x## is invertible. A more advanced way of seeing it is that there is a homomorphism of rings ##\Phi:\mathbb{Z}\rightarrow R## such that 1) ##\Phi(1) = 1## and ##\Phi(0) = 0##, 2) ##\Phi(m+1) = \Phi(m) + 1##, 3) ##\Phi(-m) = -\Phi(m)##. Of course you can now ask yourself whether this is always injective/surjective. All of this becomes much clearer with ring theory, particularly the concept of a ##\mathbb{Z}##-module.
  • #1
1MileCrash
Is -(x^-1) = (-x)^-1 true for all nonzero x in any ring, where x^-1 denotes the multiplicative inverse of x?
 
  • #2
Not all nonzero ##x## are invertible in a ring, so you must require that ##x## be invertible.
In that case, why don't you try to prove it?
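As a quick illustration of that first point, here is a minimal Python sketch (the choice of ##\mathbb{Z}_6## as the example ring is an assumption made purely for illustration) that searches a small ring for multiplicative inverses by brute force:

[code=python]
# Brute-force search for multiplicative inverses in the ring Z_6
# (integers mod 6).  The modulus 6 is an arbitrary illustrative choice.
n = 6
for x in range(1, n):  # the nonzero elements of Z_6
    inverses = [y for y in range(n) if (x * y) % n == 1]
    print(x, "invertible" if inverses else "not invertible", inverses)
# 1 and 5 turn out to be units, while 2, 3 and 4 are nonzero
# elements with no multiplicative inverse.
[/code]

So "nonzero" alone is not enough; the identity in the opening post only makes sense for units.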
 
  • #3
I was only able to prove it for a commutative ring.
 
  • #4
Small hint: What's ##x*(-1) + x##? How about ##(-1)*x + x##?
 
  • #5
1MileCrash said:
I was only able to prove it for a commutative ring.

Please post your proof. Where do you use commutativity?
 
  • #6
micromass said:
Please post your proof. Where do you use commutativity?

##(-x) \cdot (-(x^{-1})) = (-1) \cdot x \cdot (-1) \cdot x^{-1} = (-1) \cdot (-1) \cdot x \cdot x^{-1}## [with commutativity] ##= 1 \cdot 1 = 1##

That is the proof I did.
 
  • #7
1MileCrash said:
##(-x) \cdot (-(x^{-1})) = (-1) \cdot x \cdot (-1) \cdot x^{-1} = (-1) \cdot (-1) \cdot x \cdot x^{-1}## [with commutativity] ##= 1 \cdot 1 = 1##

That is the proof I did.

OK, so the step where you used commutativity is

[tex](-1) * x = x * (-1)[/tex]

But in fact, this is true in any ring, even noncommutative. So your goal should now be to prove this without using commutativity.
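The identity itself can also be sanity-checked numerically in a noncommutative ring, say ##2\times 2## real matrices; the Python sketch below is only a spot check (the particular matrix is an assumed example), not a proof:

[code=python]
import numpy as np

# Spot check of -(x^{-1}) == (-x)^{-1} in a noncommutative ring:
# invertible 2x2 real matrices.  The matrix below is an arbitrary
# invertible example (its determinant is -1).
x = np.array([[1.0, 2.0],
              [3.0, 5.0]])

lhs = -np.linalg.inv(x)                    # -(x^{-1})
rhs = np.linalg.inv(-x)                    # (-x)^{-1}

print(np.allclose(lhs, rhs))               # True
print(np.allclose((-x) @ lhs, np.eye(2)))  # (-x) * (-(x^{-1})) = 1
[/code]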
 
  • #8
micromass said:
OK, so the step where you used commutativity is

[tex](-1) * x = x * (-1)[/tex]

But in fact, this is true in any ring, even noncommutative. So your goal should now be to prove this without using commutativity.

Yeah, I haven't responded to this because I am just so embarrassed. I am so careful not to assume what I'm not given in proofs regarding algebraic structures that I often make mistakes like this. Thanks again.
 
  • #9
1MileCrash said:
Yeah, I haven't responded to this because I am just so embarrassed. I am so careful not to assume what I'm not given in proofs regarding algebraic structures that I often make mistakes like this. Thanks again.

So you found it?
 
  • #10
AFAIK, the unity satisfies ##1r = r1## for all ##r## in the ring, by definition.
 
  • #11
WWGD said:
AFAIK, the unity satisfies ##1r = r1## for all ##r## in the ring, by definition.

That's right, I just somehow failed to see that this implies the same for ##-1##.

Mm, I just adjoined this as a lemma:
##(-1) \cdot x = x \cdot (-1)##
Pf:
##x + ((-1) \cdot x) = (1 \cdot x) + ((-1) \cdot x) = (1 + (-1))x = 0x = 0##

##x + (x \cdot (-1)) = (x \cdot 1) + (x \cdot (-1)) = x(1 + (-1)) = x0 = 0##

Additive inverses are unique, so ##(-1) \cdot x = x \cdot (-1)##.
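Concretely, in the ring of ##2\times 2## real matrices the element ##-1## is ##-I##, and both equalities in the lemma can be checked directly; the Python sketch below uses an arbitrary matrix ##x## as an assumed example:

[code=python]
import numpy as np

# In the ring of 2x2 real matrices, "-1" means -I (minus the identity
# matrix).  Check the lemma (-1)*x = x*(-1) and the step x + (-1)*x = 0
# for an arbitrary example matrix x.
I = np.eye(2)
x = np.array([[0.0, 1.0],
              [2.0, 3.0]])

left = (-I) @ x   # (-1) * x
right = x @ (-I)  # x * (-1)

print(np.allclose(left, right))                 # True
print(np.allclose(x + left, np.zeros((2, 2))))  # x + (-1)*x = 0
[/code]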
 
  • #12
1MileCrash said:
That's right, I just somehow failed to see that this implies the same for ##-1##.

Mm, I just adjoined this as a lemma:
##(-1) \cdot x = x \cdot (-1)##
Pf:
##x + ((-1) \cdot x) = (1 \cdot x) + ((-1) \cdot x) = (1 + (-1))x = 0x = 0##

##x + (x \cdot (-1)) = (x \cdot 1) + (x \cdot (-1)) = x(1 + (-1)) = x0 = 0##

Additive inverses are unique, so ##(-1) \cdot x = x \cdot (-1)##.

Seems good. In fact, if ##n\in \mathbb{Z}## (interpreted suitably), you can prove that ##nx = xn## for all ##x\in R##.
 
  • #13
micromass said:
Seems good. In fact, if ##n\in \mathbb{Z}## (interpreted suitably), you can prove that ##nx = xn## for all ##x\in R##.

The proof is quick (I didn't write one out, but it is obvious that ##nx = (1 + 1 + \cdots + 1)x = x + x + \cdots + x = x \cdot 1 + x \cdot 1 + \cdots + x \cdot 1 = x(1 + 1 + \cdots + 1) = xn##) - this seems even easier than the specific ##-1## case, but let me make sure I intuitively understand this...

If we have ##nx## - even if these integers aren't in the ring ##R## at all, this still makes sense, because it can be read as "##x## to the (additive) power of ##n##": ##x## added to itself ##n## times, and that idea is commutative. It can't be considered a binary operation on members of ##R## (because ##n## is not in ##R## here), but it is still well defined for any ring. For example, if I talk about the ring of ##m \times m## matrices in the usual way, and ##A## is an ##m \times m## matrix, ##nA## is just ##A## added to itself ##n## times; it doesn't matter that ##n## isn't in the ring, ##nA## makes sense.

If R DOES contain these integers n, then the above still applies, but we can additionally consider it to be exactly the same as a multiplicative operation on two members of the ring, n and x, which will be commutative since it is the same idea as the "x to the nth power" concept above.

Is that roughly correct, or am I spewing nonsense?
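That "repeated addition" picture can be checked directly for matrices; the Python sketch below (the matrix ##A## and the integer ##n## are assumed examples) compares ##A## added to itself ##n## times with multiplication by ##nI## on either side:

[code=python]
import numpy as np

# nA as "A added to itself n times", compared with multiplying by
# n*I inside the matrix ring.  A and n are arbitrary examples.
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])
n = 4

repeated_sum = sum(A for _ in range(n))  # A + A + ... + A  (n terms)
left = (n * np.eye(2)) @ A               # (1 + 1 + ... + 1) * A
right = A @ (n * np.eye(2))              # A * (1 + 1 + ... + 1)

print(np.allclose(repeated_sum, left))   # True
print(np.allclose(left, right))          # True: nA = An
[/code]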
 
  • #14
1MileCrash said:
The proof is quick (I didn't write one out, but it is obvious that ##nx = (1 + 1 + \cdots + 1)x = x + x + \cdots + x = x \cdot 1 + x \cdot 1 + \cdots + x \cdot 1 = x(1 + 1 + \cdots + 1) = xn##) - this seems even easier than the specific ##-1## case, but let me make sure I intuitively understand this...

If we have ##nx## - even if these integers aren't in the ring ##R## at all, this still makes sense, because it can be read as "##x## to the (additive) power of ##n##": ##x## added to itself ##n## times, and that idea is commutative. It can't be considered a binary operation on members of ##R## (because ##n## is not in ##R## here), but it is still well defined for any ring. For example, if I talk about the ring of ##m \times m## matrices in the usual way, and ##A## is an ##m \times m## matrix, ##nA## is just ##A## added to itself ##n## times; it doesn't matter that ##n## isn't in the ring, ##nA## makes sense.

If R DOES contain these integers n, then the above still applies, but we can additionally consider it to be exactly the same as a multiplicative operation on two members of the ring, n and x, which will be commutative since it is the same idea as the "x to the nth power" concept above.

Is that roughly correct, or am I spewing nonsense?

Yes, that is right. A more advanced way of seeing it is that there is a homomorphism of rings ##\Phi:\mathbb{Z}\rightarrow R## such that

1) ##\Phi(1) = 1## and ##\Phi(0) = 0##
2) ##\Phi(m+1) = \Phi(m) + 1##
3) ##\Phi(-m) = -\Phi(m)##.

Of course you can now ask yourself if this is always injective/surjective.

All of this will be much clearer once you see the concept of a ##\mathbb{Z}##-module or ##\mathbb{Z}##-algebra. Basically, a vector space over a field is an abelian group that allows you to multiply by scalars. A ##\mathbb{Z}##-module allows you to multiply by integers. It is interesting to note that the ##\mathbb{Z}##-modules correspond exactly to the abelian groups.
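One way to make ##\Phi## completely concrete is to implement it as literal repeated addition of the ring's unit; the Python sketch below is a minimal illustration (the function name phi and the choice of ##2\times 2## real matrices as the target ring are assumptions, not anything canonical in a library):

[code=python]
import numpy as np

def phi(m, one, zero):
    """The canonical map Z -> R: phi(m) = 1 + 1 + ... + 1 (|m| times),
    with phi(-m) = -phi(m).  'one' and 'zero' are the ring's 1 and 0."""
    result = zero
    for _ in range(abs(m)):
        result = result + one
    return result if m >= 0 else -result

# Example target ring: 2x2 real matrices (an assumed choice).
one, zero = np.eye(2), np.zeros((2, 2))
a, b = 3, -5

# phi respects addition and multiplication on these examples.
print(np.allclose(phi(a + b, one, zero), phi(a, one, zero) + phi(b, one, zero)))
print(np.allclose(phi(a * b, one, zero), phi(a, one, zero) @ phi(b, one, zero)))
[/code]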
 
  • #15
Furthermore, rings are the monoid objects in the monoidal category of abelian groups together with the tensor product over ##\mathbb{Z}##. That is, a ring ##R## is an abelian group together with a (multiplication) group homomorphism ##R \otimes R \to R## and a (unit) group homomorphism ##\mathbb{Z} \to R##. This fact means that you can switch the order of any factor coming from ##\mathbb{Z}##, including ##-1##.
 
  • #16
Of course, this is all beyond my current studies, but I do plan to learn more about it eventually. Very cool stuff. So it appears from what you are saying that "a vector space is an abelian group with scalar multiplication" means a vector space is actually not so different from an ordinary abelian group: scalar multiplication is not a completely new operation, it just lets the elements of an arbitrary field act the way the integers already do on any abelian group. So the only thing keeping all abelian groups from being vector spaces is the fact that ##\mathbb{Z}## is not a field. That is quite an eye-opener to me.
 
  • #17
1MileCrash said:
Of course, this is all beyond my current studies, but I do plan to learn more about it eventually. Very cool stuff. So it appears from what you are saying that "a vector space is an abelian group with scalar multiplication" means a vector space is actually not so different from an ordinary abelian group: scalar multiplication is not a completely new operation, it just lets the elements of an arbitrary field act the way the integers already do on any abelian group. So the only thing keeping all abelian groups from being vector spaces is the fact that ##\mathbb{Z}## is not a field. That is quite an eye-opener to me.

Yes, in that sense abelian groups and vector spaces are not different: they both have a scalar multiplication. An abelian group always has ##\mathbb{Z}## as its ring of scalars, while a vector space has an entire field. This is the difference between a ##\mathbb{Z}##-module and a vector space.

So in one sense they are very alike. But in another sense, the different scalars make a world of difference. For example, any vector space has a basis, but the same is not true for a ##\mathbb{Z}##-module. However, we can adapt some of the results. For example, you are undoubtedly aware that finite-dimensional vector spaces are isomorphic to ##\mathbb{K}^n##. It is interesting to notice that a similar theorem also holds for ##\mathbb{Z}##-modules. You probably know the result too (but maybe didn't see it as similar to the vector space version): it is simply the fundamental theorem of finitely generated abelian groups. http://en.wikipedia.org/wiki/Finitely-generated_abelian_group#Classification

Once you see modules, it is interesting to search for analogs like this.
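For a small computational taste of that classification, the Python sketch below checks by brute force that ##x \mapsto (x \bmod 2, x \bmod 3)## is an isomorphism of abelian groups ##\mathbb{Z}_6 \cong \mathbb{Z}_2 \times \mathbb{Z}_3## (the choice of moduli is an assumed example):

[code=python]
from itertools import product

# Brute-force check that Z_6 is isomorphic to Z_2 x Z_3 as an abelian
# group, via the map x -> (x mod 2, x mod 3).  The moduli 6 = 2 * 3
# are an arbitrary example.
f = {x: (x % 2, x % 3) for x in range(6)}

is_bijective = len(set(f.values())) == 6
respects_addition = all(
    f[(a + b) % 6] == ((f[a][0] + f[b][0]) % 2, (f[a][1] + f[b][1]) % 3)
    for a, b in product(range(6), repeat=2)
)
print(is_bijective, respects_addition)  # True True
[/code]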
 
  • #18
Now hold on, what about vector spaces whose underlying field ##F## does not contain the integers? Since they are abelian groups with additional features, scalar multiplication by integers must always be possible in a vector space, and so the set of scalars for the vector space would actually be ##F \cup \mathbb{Z}##, which could never be a field.

So what's the deal with that? I can't think of an example offhand, but I have no immediate objection to a field that does not contain the integers.
 
  • #19
Every ring, and therefore every field, has a distinguished homomorphism ##\mathbb{Z} \to R## picking out its unit, but this homomorphism is not required to be injective. So it does have multiplication by ##\mathbb{Z}##, and non-injectivity amounts to positive characteristic.
 
  • #20
1MileCrash said:
Now hold on, what about vector spaces whose underlying field ##F## does not contain the integers? Since they are abelian groups with additional features, scalar multiplication by integers must always be possible in a vector space, and so the set of scalars for the vector space would actually be ##F \cup \mathbb{Z}##, which could never be a field.

So what's the deal with that? I can't think of an example offhand, but I have no immediate objection to a field that does not contain the integers.

Certainly, some fields do not contain the integers. For example, ##\mathbb{Z}_2## (the integers mod 2) is a field that does not contain the integers. However, we see that there is still always a standard map ##\Phi:\mathbb{Z}\rightarrow F## as described in one of my previous posts. The issue is that this map isn't always injective. (By the way, the kernel of this map is always of the form ##\alpha\mathbb{Z}##, and ##\alpha## is called the characteristic of the field; for example, ##\mathbb{Z}_2## has characteristic 2, while ##\mathbb{R}## has characteristic 0. Exercise: prove that the characteristic of a field is either 0 or a prime number.)

Now, take a vector space ##V## over ##\mathbb{Z}_2##, for example. We can always multiply a vector ##\mathbf{v}## by an integer ##m\in \mathbb{Z}##. But it turns out (as you can easily check) that ##m\mathbf{v} = \Phi(m) \mathbf{v}##. So there is no real difference between multiplying by an element of ##\mathbb{Z}## and multiplying by a suitable element of ##\mathbb{Z}_2##.

You can generalize this. If ##M## is an ##R##-module with ##R## a ring (again, an ##R##-module is just like a vector space except that the scalars lie in a ring ##R## rather than a field) and if ##f:R^\prime\rightarrow R## is a morphism of rings, then ##M## is an ##R^\prime##-module in a very canonical way: ##\alpha\mathbf{m} := f(\alpha)\mathbf{m}##.
You of course know that every ##\mathbb{C}##-vector space is an ##\mathbb{R}##-vector space, since if you can multiply by a complex number, then you can multiply by a real. This is just a special case of the previous generalization with the morphism ##f:\mathbb{R}\rightarrow \mathbb{C}: x \mapsto x##.

I guess what I want to say is that every module (and thus every vector space) determines in a canonical way many other modules and vector spaces. Indeed, any ##R##-module determines an ##R^\prime##-module for each morphism ##\Phi:R^\prime\rightarrow R##. It is in this light that you should see every vector space as being a ##\mathbb{Z}##-module; it is not necessary for ##\mathbb{Z}## to actually be contained in the field. Also note that for every ring ##R## (and thus every field), there is always a unique morphism ##f:\mathbb{Z}\rightarrow R##. So the result is even better: every ##R##-module determines in this way a unique ##\mathbb{Z}##-module.

Does this make a bit of sense? I'm sorry to use so much module-specific language that I think you haven't seen yet, but I found no other way of discussing this.
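Both points lend themselves to a quick numerical check; the Python sketch below (the moduli and the vector are assumed examples) computes the characteristic of a few rings ##\mathbb{Z}_n## and verifies ##m\mathbf{v} = \Phi(m)\mathbf{v}## over ##\mathbb{Z}_2##:

[code=python]
# Characteristic of Z_n: the smallest k > 0 with k*1 = 0 (mod n).
def characteristic(n):
    k, acc = 1, 1 % n
    while acc != 0:
        k += 1
        acc = (acc + 1) % n
    return k

# Z_n is a field only when n is prime; note the prime characteristics.
print([(n, characteristic(n)) for n in (2, 3, 4, 6, 7)])

# Over Z_2, multiplying a vector by an integer m is the same as
# multiplying by Phi(m) = m mod 2.  The vector and m are examples.
v = [1, 0, 1]
m = 7
mv = [(m * x) % 2 for x in v]             # m * v, reduced mod 2
phi_m_v = [((m % 2) * x) % 2 for x in v]  # Phi(m) * v
print(mv == phi_m_v)                      # True
[/code]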
 
  • #21
I'm sure it will all be clear eventually. I really just started my first course on groups a week ago, I never really took any abstract algebra courses. I'm just glad that no level of abstraction is lost for vector spaces as I was starting to think. Almost cried.
 

1. What is a basic ring?

A basic ring is a mathematical structure that consists of a set of elements and two operations, addition and multiplication, that follow certain rules.

2. What are the properties of a basic ring?

A basic ring must be closed and associative under both addition and multiplication; addition must be commutative, with an identity element (0) and additive inverses; and multiplication must distribute over addition. Commutativity of multiplication is not required.

3. How is a basic ring different from other types of rings?

In its most general form, a basic ring need not have a multiplicative identity element, and its multiplication need not be commutative, unlike more specialized structures such as rings with unity, commutative rings, or integral domains.

4. Can a basic ring have more than one identity element?

No. The additive identity of a ring is unique, and if a multiplicative identity exists, it is unique as well.

5. How are basic rings used in science?

Basic rings are used throughout mathematics and the sciences; for example, matrix rings and polynomial rings appear in physics, and modular arithmetic (the rings ##\mathbb{Z}_n##) underlies cryptography and coding theory.
