Undergrad: Proving that an inverse in a groupoid is unique

Summary:
The discussion centers on the uniqueness of the inverse element in a groupoid, specifically questioning whether this can be proven without relying on the associative property of a semigroup. It is noted that removing properties from an algebraic structure, such as associativity, leads to a broader set of examples, which may include cases where the original theorem does not hold. The uniqueness of the identity element is highlighted as a critical factor in defining inverses within these structures. The conversation also introduces the term "oneoid" to describe a non-associative algebraic structure. Ultimately, the complexity of proving the uniqueness of inverses without certain properties is acknowledged.
Matejxx1
Hello
I have a question about the uniqueness of the inverse element in a groupoid. In class our professor wrote: ##\text{Let } (M,*) \text{ be a monoid; then the inverse of an element (if it exists) is unique}##. He then went on to prove this, and I understood the proof. However, I got curious and started wondering whether it is possible to prove that the inverse is unique without using the associative property of the semigroup. I tried to prove it myself but didn't get far, and I couldn't find much about it online either. Could anybody tell me how this would be done?
thanks
 
Matejxx1 said:
…is it possible to prove that the inverse is unique without using the associative property…? Could anybody tell me how this would be done?
If you have only a binary operation and a unit, you can define whatever you want. E.g.
$$ \begin{bmatrix}*&e&a&b&c\\e&e&a&b&c\\a&a&b&e&e\\b&b&e&c&b\\c&c&e&b&a\\\end{bmatrix} $$
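The table above can be checked mechanically. A minimal sketch in Python (the dictionary below is just a transcription of the Cayley table, with rows and columns in the order ##e, a, b, c##) confirms that ##e## is a two-sided identity, that ##a## has *two* distinct two-sided inverses, and that the operation is indeed not associative:

```python
# Transcription of the Cayley table above: table[x][y] = x * y,
# rows/columns assumed to be in the order e, a, b, c.
table = {
    'e': {'e': 'e', 'a': 'a', 'b': 'b', 'c': 'c'},
    'a': {'e': 'a', 'a': 'b', 'b': 'e', 'c': 'e'},
    'b': {'e': 'b', 'a': 'e', 'b': 'c', 'c': 'b'},
    'c': {'e': 'c', 'a': 'e', 'b': 'b', 'c': 'a'},
}
elems = list(table)

def mul(x, y):
    return table[x][y]

# e is a two-sided identity:
assert all(mul('e', x) == x and mul(x, 'e') == x for x in elems)

# Two-sided inverses of a: both b and c qualify.
inverses_of_a = [x for x in elems if mul('a', x) == 'e' and mul(x, 'a') == 'e']
print(inverses_of_a)  # ['b', 'c'] -- the inverse is NOT unique

# And the operation is not associative: (a*a)*b != a*(a*b).
print(mul(mul('a', 'a'), 'b'), mul('a', mul('a', 'b')))  # c a
```

So without associativity the familiar uniqueness argument has nothing to grip: the statement is simply false in general.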
 
Matejxx1 said:
Could anybody tell me how this would be done ?
If it is a false statement then it can't be done.

The general form of your question is "How do I prove a theorem about an algebraic structure that has certain properties without using some of those properties? ". The usual interpretation of that type of question is that we consider a different algebraic structure that is formed by removing some properties of the original algebraic structure. Then we try to prove the theorem for this new structure. (Of course "algebraic structure" refers to the collection of possible examples that satisfy the definition of that structure. So if we remove properties from the definition of a structure we enlarge the number of examples that we must consider. If we enlarge the number of examples then we incur the risk of allowing an example where the statement of our theorem is false.)

If we take the definition of monoid and remove the requirement that it be associative then we create a definition of a new algebraic structure. Even if we keep the requirement that an identity element exists in this new structure it is not clear that the identity element is unique. If we don't have a unique identity element, then how do we define an inverse of an element x in this new algebraic structure? We would have to look at the definition for "the inverse of x" in a monoid and see if that definition relies on the uniqueness of the identity in a monoid.
 
If you mean a two-sided identity, how could it fail to be unique? Compute ##1_a * 1_b##.
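Spelling out that hint (a two-line derivation, which needs no associativity at all since only a single product is involved): suppose ##1_a## and ##1_b## are both two-sided identities. Then

$$1_a = 1_a * 1_b = 1_b,$$

where the first equality holds because ##1_b## is a right identity and the second because ##1_a## is a left identity. So even in a non-associative structure, a two-sided identity is automatically unique, and "the inverse of ##x##" is at least well-defined as a notion, even though (as the table above shows) it need not be unique.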
 
Stephen Tashi said:
If we take the definition of monoid and remove the requirement that it be associative then we create a definition of a new algebraic structure.
I propose the name oneoid, after J. Milton Hayes.
 
