
  • Thread starter: Math Amateur
  • Tags: Representations
Math Amateur
Gold Member
MHB
I am reading "Introduction to Ring Theory" by P. M. Cohn (Springer Undergraduate Mathematics Series)

In Chapter 2: Linear Algebras and Artinian Rings, Cohn introduces representations of k-algebras as follows:
View attachment 3152

So, essentially, Cohn considers a right multiplication:

$$\rho_a \ : \ x \mapsto xa$$ where $$x \in A$$

and then declares the representation to be the matrix $$( \rho_a )_{ij}$$

BUT ... what is the point here ... and why take a right multiplication anyway?

Can anyone help me to see the motivation for introducing the notion of representations of $k$-algebras?

Peter
 
A ($k$-)algebra is, essentially, an algebraic structure. As with many algebraic structures, information about the "internal workings" of a particular example of this structure is often revealed by the behavior of the (algebra) homomorphisms to and from our particular example. This is a highly "conceptual" point of view, and often properties of a given algebra are deduced from the properties of various mappings, without ever looking at "a single element".

On the other hand, matrices are fairly "concrete" things, with operations we can manipulate mechanically, through arithmetic. There is an analogy with groups here: an "abstract group" can be realized (faithfully) as a "concrete" group of permutations of a set. That is, we can transfer "abstract" characterizations, such as normality, to specific shuffling operations on a set.

For example, the dihedral group of order 6 can be realized as "the game of 3-card monte". Conjugation (in the abstract group) corresponds to "replacement" of a shuffling sequence "with different cards".

Moreover, the theory of linear algebra is quite extensively developed, with many useful results on inverting matrices, useful decompositions, and various "canonical" or "normal" forms. These results can be "pulled back" to abstract statements involving $k$-algebras (since this representation is FAITHFUL).
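To make the construction concrete (this example is mine, not Cohn's): view $\mathbb{C}$ as a 2-dimensional $\mathbb{R}$-algebra with basis $\{1, i\}$, and take $a = \alpha + \beta i$. Writing out $\rho_a$ on the basis gives:

$$1 \cdot a = \alpha \cdot 1 + \beta \cdot i, \qquad i \cdot a = -\beta \cdot 1 + \alpha \cdot i, \qquad \text{so} \qquad (\rho_a)_{ij} = \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}.$$

This recovers the familiar realization of complex numbers as $2 \times 2$ real matrices, and the faithfulness is visible: distinct $a$ give distinct matrices.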

There are two parallel benefits, here: the first is that the abstract theory allows us to "save computation" with specific examples, by applying high-level theorems to "skip steps". The second benefit is that by studying the PARTICULAR $k$-algebra $\text{End}(k^n)$, we can learn many things about how $k$-algebras work IN GENERAL, allowing us to develop a sense of what feels "intuitive" (we gain INSIGHT).

This kind of trade-off sits at the border between "pure" and "applied" math: chemists, for example, will work with the representation (the images) themselves in analyzing molecular symmetry, whereas a group theorist is more likely to look at the associated $F[G]$-module. Going in a more abstract direction is "why", and in a more concrete direction is "how".
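As for "why right multiplication": with the convention that row $i$ of $(\rho_a)_{ij}$ holds the coordinates of $e_i a$ (coordinate vectors acting as row vectors), the map $a \mapsto \rho_a$ is multiplicative, $\rho_{ab} = \rho_a \rho_b$, i.e. a genuine algebra homomorphism rather than an anti-homomorphism. A quick numerical sketch of this, using my earlier example of $\mathbb{C}$ as a 2-dimensional $\mathbb{R}$-algebra (the function name `rho` and the use of numpy are my choices, not Cohn's):

```python
import numpy as np

def rho(a):
    """Matrix of right multiplication by a = alpha + beta*i on C,
    viewed as a 2-dimensional R-algebra with basis {1, i}.
    Row i holds the coordinates of (basis element i) * a."""
    alpha, beta = a.real, a.imag
    return np.array([[alpha, beta],
                     [-beta, alpha]])

a, b = 2 + 3j, 1 - 1j

# rho is linear ...
assert np.allclose(rho(a) + rho(b), rho(a + b))
# ... and multiplicative: rho(a*b) == rho(a) @ rho(b),
# which is exactly what right multiplication (with this
# row-vector convention) buys us.
assert np.allclose(rho(a * b), rho(a) @ rho(b))
print("rho is an algebra homomorphism on these inputs")
```

Had we used left multiplication with the same convention, we would instead get $\rho_{ab} = \rho_b \rho_a$ (the order reverses), which is only an anti-homomorphism.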
 
Deveno said:
A ($k$-)algebra is, essentially, an algebraic structure. ... Going in a more abstract direction is "why", and in a more concrete direction is "how".
Thanks for a very insightful and informative post ...

Peter
 