Have Unusual Theorems on Vector Spaces and Semigroups Been Explored?

jcsd
Do you ever think up theorems and think: "that's interesting, I wonder if anyone's ever thought of that before?"

In this vein, the other day I thought up these two. They are both fairly trivial, and possibly it's only me who finds them worth bothering with, but what I want to know is whether either of them has ever been applied to any area of maths?


Theorem 1: The set V of all functions from a set A to a field of scalars K forms a vector space over K, where for any such functions f, g and h: f + g = h means f(x) + g(x) = h(x), and for a scalar a: a.f = g means a.f(x) = g(x).

The main reason this seems interesting to me is that the axioms governing the behaviour of +: VxV --> V and .: KxV --> V follow automatically from this definition, dim(V) is simply |A| (at least when A is finite), and every finite-dimensional vector space is isomorphic to such an object (take |A| = n for dimension n).

Theorem 2: Any group (G,*) forms a subsemigroup of a semigroup (G ∪ {0}, *) that is not a group, where for any g in G: 0*g = g*0 = 0*0 = 0.


The reason I find this interesting is that the multiplicative semigroup of a division algebra is such a semigroup (i.e. a group plus a '0' element). Also, when a group has some sort of topological structure you can add such an element and define a new topology; e.g. in the group (R,+) you can add such an element in a natural way to go from an open set to one that is neither open nor closed.
 
They are both fairly trivial and possibly it's only me that finds them worth even bothering with
Here's a little secret: the space \mathbb{R}^n is of exactly the type you describe. :smile: In general, the set A^B is the set of all functions from B to A. Set theoretically, we often identify each natural number with the set of all smaller natural numbers. So, the set \mathbb{R}^n is really the set of all functions from {0, 1, 2, 3, ..., n-1} to the real numbers. (Though, for small n, we often identify such functions with n-tuples)

By the way, since we can define a function by defining its values at each point, we usually use a pointwise definition of arithmetic:

(f + g)(x) := f(x) + g(x)
(a f)(x) := a f(x)
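A minimal Python sketch of these pointwise operations, using a small finite A so the functions are just \mathbb{R}^3 in disguise (the names `add` and `scale` are illustrative, not standard):

```python
# Sketch: pointwise vector-space operations on functions from a set A to the reals.
# Here A = {0, 1, 2}, so these functions are just another way of writing R^3.

A = [0, 1, 2]

def add(f, g):
    """Pointwise sum: (f + g)(x) = f(x) + g(x)."""
    return lambda x: f(x) + g(x)

def scale(a, f):
    """Pointwise scalar multiple: (a f)(x) = a * f(x)."""
    return lambda x: a * f(x)

f = lambda x: x       # corresponds to the tuple (0, 1, 2)
g = lambda x: x * x   # corresponds to the tuple (0, 1, 4)

h = add(f, scale(2, g))   # h(x) = x + 2*x^2
print([h(x) for x in A])  # [0, 3, 10]
```

Note that nothing about `add` or `scale` depends on what A is; the vector-space axioms are inherited pointwise from the field.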


Also, instead of considering the set of all functions, sometimes it is interesting to only consider special functions.

For example, we might consider the vector space of all functions f for which \int_{-\infty}^{+\infty} f^2 \, dx exists. This is extremely important for quantum mechanics.

Or, we might consider the space of all continuous functions. (Or maybe differentiable, or maybe analytic...)

In algebraic geometry (at least the "easy" stuff), some of the central objects of study are the rings of polynomial and of rational functions on an algebraic set. (an equationally defined subset of K^n)


Another interesting case is the set of all functions with only finitely many nonzero values. This one has the nifty property that A (identified with the functions that are 1 at a single point of A and 0 elsewhere) is a basis for the set of all such functions A-->K.
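A small sketch of this, storing a finitely supported function A --> K as a dict of its nonzero values; the helper names (`delta`, `add`, `scale`) are made up for illustration:

```python
# Sketch: finitely supported functions A -> K, stored as {point: nonzero value}.
# The "delta" functions, one per element of A, form a basis: every finitely
# supported function is a finite linear combination of deltas.

def delta(a):
    """Basis function that is 1 at a and 0 elsewhere."""
    return {a: 1}

def add(f, g):
    """Pointwise sum, dropping entries that cancel to zero."""
    h = dict(f)
    for x, v in g.items():
        h[x] = h.get(x, 0) + v
    return {x: v for x, v in h.items() if v != 0}

def scale(c, f):
    """Pointwise scalar multiple."""
    return {x: c * v for x, v in f.items()} if c != 0 else {}

# 3*delta('a') + (-2)*delta('b'), over any set A containing 'a' and 'b'
f = add(scale(3, delta('a')), scale(-2, delta('b')))
print(f)  # {'a': 3, 'b': -2}
```

The dict representation works even when A is infinite, which is exactly why this subspace (unlike the space of all functions) has A as a basis.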



In general, there are a lot of mathematical structures for which you can do this -- if you have an object X with structure, you can consider the set of functions from some set S into X, and that set will often have the same sort of structure. (Though, sometimes you need to consider a special class of functions, or weaken the structure slightly) Try extending your theorem to things like groups, rings, fields, partially ordered sets, and anything else you can imagine. Try both the cases of all functions, and of continuous functions from a "nice" space (like Euclidean space, or a manifold). (Hint: not all of the results will be as nice as your theorem 1)


In other words, you've stumbled across a very important basic concept, and it would be a good idea not to forget it. :smile:
 
As for the second one, it's an occasionally useful thing. If you know what a monoid is, you should notice that the same argument says you can add a zero to any monoid to get a new monoid. (A group is a monoid in which every element is invertible)
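A quick Python sketch of adjoining such a zero, using the integers under addition as the group and `None` as the new absorbing element (both choices are just for illustration):

```python
# Sketch: adjoin an absorbing "0" element to a group (here: Z under addition).
# None plays the role of the adjoined zero; every other element combines
# exactly as it did in the original group.

def star(x, y):
    """Extended operation: absorbing on None, the group operation otherwise."""
    if x is None or y is None:
        return None
    return x + y  # the original group operation

# The original group elements still combine as before...
print(star(2, 3))       # 5
# ...but the adjoined element absorbs everything, so it has no inverse,
# and the result is a monoid with zero rather than a group.
print(star(None, 7))    # None
print(star(7, None))    # None
```

The same construction goes through verbatim for any monoid, since the argument never uses inverses.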

If you've seen the definition of a ring before, you should notice that a ring is an (abelian) group under addition, and a monoid under multiplication!
 
Unfortunately I can't claim to have stumbled across functional analysis, as what you have said is known to me already :), but functional analysis is what led me to this line of thought. It seems to me that the basic idea of functional analysis is T1, though functional analysis generally only bothers with a limited number of subspaces of such vector spaces.

I was mainly wondering if anyone had ever used the more general idea.
 
I would guess that there are simply "too many" functions in the vector space of all functions from X to R. By allowing all functions, you're essentially discarding all structure on X... and sets without structure usually aren't very interesting! (Essentially, the only interesting property of a set is its size!)
 
I.e. the most important functors (read: "natural constructions") have the form Hom(X,.) or Hom(.,X).
 