Multiplication Maps on Algebras ... Bresar, Lemma 1.25 ...

  • #1
Math Amateur
I am reading Matej Bresar's book, "Introduction to Noncommutative Algebra" and am currently focussed on Chapter 1: Finite Dimensional Division Algebras ... ...

I need help with the proof of Lemma 1.25 ...

Lemma 1.25 reads as follows:


[Images of Bresar, Lemma 1.25 and its proof, omitted]





My questions on the proof of Lemma 1.25 are as follows:


Question 1

In the above text from Bresar we read the following:

" ... ... Therefore ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]## ... ... "


Can someone please explain exactly why Bresar is concluding that ##[ M(A) \ : \ F ] \ge d^2## ... ... ?





Question 2

In the above text from Bresar we read the following:

" ... ... Therefore ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]##

and so ##M(A) = \text{End}_F (A)##. ... ... "


Can someone please explain exactly why ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]## ... ...

... implies that ... ##M(A) = \text{End}_F (A)## ...



Hope someone can help ...

Peter



===========================================================


*** NOTE ***

So that readers of the above post will be able to understand the context and notation of the post ... I am providing Bresar's first two pages on Multiplication Algebras ... ... as follows:



[Images of Bresar's first two pages on multiplication algebras, omitted]
 


Answers and Replies

  • #2
fresh_42
Question 1
In the above text from Bresar we read the following:
" ... ... Therefore ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]## ... ... "
Can someone please explain exactly why Bresar is concluding that ##[ M(A) \ : \ F ] \ge d^2## ... ... ?
We know that ##M(A) = \langle L_a , R_b \,\vert \, a,b \in A \rangle## is generated by the left and right multiplications, by definition.
Lemma 1.24 guarantees that ##\{L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}\,\vert \, 1 \leq i,j \leq d \}## are linearly independent ##^{*})##. There are ##d^2## of them, so the dimension of ##M(A)## must be at least ##d^2##, because we have already found ##d^2## linearly independent vectors, which can be extended to a basis.

##^{*}) \; 0 = \sum_{i,j} \lambda_{ij}L_{u_i}R_{u_j} = \sum_i L_{u_i} R_{b_i}## with ##b_i := \sum_j \lambda_{ij}u_j \Longrightarrow## (Lemma 1.24) ##b_i = 0 \Longrightarrow \lambda_{ij}=0## because ##\{u_k\}## is a basis; hence the ##L_{u_i}R_{u_j} = L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}## are linearly independent.
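
As a quick numerical sanity check of this independence (my own sketch, not from Bresar): take ##A = M_2(\mathbb{R})##, so ##d = 4##, with the matrix units as basis ##u_1, \ldots, u_4##. On row-major vectorized ##2 \times 2## matrices the map ##x \mapsto u_i x u_j## acts as the Kronecker product ##u_i \otimes u_j^T##, so independence of the ##d^2 = 16## maps becomes a rank computation:

```python
import numpy as np

# basis of A = M_2(R): the four matrix units e_{rc}
units = []
for r in range(2):
    for c in range(2):
        u = np.zeros((2, 2))
        u[r, c] = 1.0
        units.append(u)

# L_{u_i} R_{u_j} : x -> u_i x u_j acts on row-major vec(x) as kron(u_i, u_j.T);
# stack all 16 operators (flattened) and compute the rank of the stack
rows = [np.kron(ui, uj.T).flatten() for ui in units for uj in units]
print(np.linalg.matrix_rank(np.array(rows)))  # 16 = d^2, so [M(A) : F] >= d^2
```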
Question 2
In the above text from Bresar we read the following:
" ... ... Therefore ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]##
and so ##M(A) = \text{End}_F (A)##. ... ... "
Can someone please explain exactly why ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]## ... ...
... implies that ... ##M(A) = \text{End}_F (A)## ...
Well, ##L_a## as well as ##R_b## are endomorphisms of ##A##, i.e. ##\mathbb{F}-##linear mappings ##A \rightarrow A##.
Therefore ##M(A) \subseteq End_\mathbb{F}(A)##, and so ##M(A)## is a subspace of dimension at least ##d^2##. On the other hand, ##End_\mathbb{F}(A)## has dimension exactly ##d^2##, so there is no room left between ##M(A)## and ##End_\mathbb{F}(A)##: the two must be equal.
That ##\dim End_\mathbb{F}(A)=d^2## is seen most quickly by thinking of matrices: since ##\{u_1,\ldots, u_d\}## is a basis of ##A##, every element of ##End_\mathbb{F}(A)## can be written as a ##(d \times d)##-matrix with respect to this basis, and the ##(d \times d)##-matrices form a ##d^2##-dimensional space.
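
Continuing the little sketch from above, "no room left" can be made concrete: the ##16## independent operators ##L_{u_i}R_{u_j}## must already span the ##16##-dimensional space ##End_\mathbb{F}(A)##, so an arbitrary endomorphism has (unique) coordinates in this basis. A hypothetical check:

```python
import numpy as np

units = []
for r in range(2):
    for c in range(2):
        u = np.zeros((2, 2))
        u[r, c] = 1.0
        units.append(u)

# columns of B are the 16 operators L_{u_i} R_{u_j}, each flattened
B = np.array([np.kron(ui, uj.T).flatten() for ui in units for uj in units]).T
phi = np.random.randn(4, 4)                 # an arbitrary element of End_F(A)
coeffs = np.linalg.solve(B, phi.flatten())  # solvable since rank(B) = 16
print(np.allclose(B @ coeffs, phi.flatten()))  # True: phi is a sum of L_a R_b
```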
 
  • #3
Math Amateur

Thanks fresh_42 ... most helpful in helping me to grasp the meaning of Lemma 1.25 ...

But just a clarification ... You write:

" ... ... Lemma 1.24 guarantees us that ##\{L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}\,\vert \, 1 \leq i,j \leq d \}## are linear independent ##^{*})##. "


What do you mean when you write " ##L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}## " ...

There appear to be two "multiplications" involved, namely ##\cdot## and ##\circ## ... but what are these ...?

and, further what is the meaning and significance of the equality " ##L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}## "

Can you help ...


Still reflecting on your post ...

Peter
 
  • #4
fresh_42
I simply wanted to indicate that the multiplication ##"\cdot"## here is the successive application of mappings, ##"\circ"##, no matter how it is written, even without a multiplication sign. In the end it is ##(L_{u_i}R_{u_j})(x) = L_{u_i}(R_{u_j}(x))=L_{u_i}(x\cdot u_j) = u_i \cdot x \cdot u_j##.
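
For what it's worth, this is also easy to check numerically with random ##2 \times 2## matrices (my own two-line check, not from the book):

```python
import numpy as np

a, b, x = (np.random.randn(2, 2) for _ in range(3))
L_a = lambda y: a @ y  # left multiplication by a
R_b = lambda y: y @ b  # right multiplication by b

print(np.allclose(L_a(R_b(x)), a @ x @ b))    # (L_a R_b)(x) = a x b
print(np.allclose(L_a(R_b(x)), R_b(L_a(x))))  # L_a and R_b commute
```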
 
  • #5
I am learning a bit about algebra(s) that I never knew! What exactly does the property of being "central simple" have to do with the conclusions above?

Also: now I want to understand the "Brauer group".
 
  • #6
fresh_42
A ##\mathbb{K}-##algebra ##\mathcal{A}## is central simple if the center ##\mathcal{C}(\mathcal{A})=\{c\in \mathcal{A}\,\vert \,ca=ac \;\forall \,a\in\mathcal{A}\}## of ##\mathcal{A}## equals ##\mathbb{K}## and ##\mathcal{A}## as a ring is simple, i.e. has no two-sided ideals other than ##\{0\}## and ##\mathcal{A}##.
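
As a small illustration (again just a sketch of mine): one can verify numerically that ##M_2(\mathbb{R})## is central over ##\mathbb{R}##, by solving ##uc - cu = 0## for all basis elements ##u## and finding that only the scalar matrices survive:

```python
import numpy as np

I = np.eye(2)
units = []
for r in range(2):
    for c in range(2):
        u = np.zeros((2, 2))
        u[r, c] = 1.0
        units.append(u)

# on row-major vec(c):  vec(u c) = kron(u, I) vec(c),  vec(c u) = kron(I, u.T) vec(c)
M = np.vstack([np.kron(u, I) - np.kron(I, u.T) for u in units])
print(4 - np.linalg.matrix_rank(M))  # nullity 1: the center is R * identity
```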

It has been a long time since I last saw the Brauer group, so I've read the definition again. Funny that you bring it up. How did that come about?

According to the Artin-Wedderburn theorem, every central simple algebra is isomorphic to a matrix algebra ##\mathbb{M}(n,\mathcal{D})## over a division algebra ##\mathcal{D}##; in the split case considered here, ##\mathcal{D}=\mathbb{K}##. Now all ##\mathbb{M}(n,\mathbb{K})## are considered equivalent, i.e. ##\mathbb{M}(n,\mathbb{K}) \sim \mathbb{M}(m,\mathbb{K})##, and the elements of the Brauer group (of ##\mathbb{K}##) are the equivalence classes. E.g. ##[1] = [\mathbb{M}(1,\mathbb{K})]=[\mathbb{K}]##, and the inverse of a class is given by the opposite algebra ##\mathcal{A}^{op}##, with the multiplication ##(a,b) \mapsto ba##.

However, the really interesting question here is: Do all Scottish mathematicians (Hamilton, Wedderburn, ...) have a special relationship to strange algebras and why is it so? :cool:
 
  • #7
fresh_42
I wasn't quite satisfied with this lapidary description of the equivalence relation here. Unfortunately, the English and German Wikipedia pages are one-to-one translations of each other, but the French one is a little better. Starting with a central simple algebra ##\mathcal{A}## over a field ##\mathbb{K}##, we have ##\mathcal{A} \otimes_{\mathbb{K}} \mathbb{L} \cong \mathbb{M}(n,\mathbb{L})## for some finite field extension ##\mathbb{L} \supseteq \mathbb{K}##.

Now ##\mathcal{A} \sim \mathcal{B}## are considered equivalent if there are natural numbers ##n,m## and an isomorphism ##\mathcal{A} \otimes_{\mathbb{K}} \mathbb{M}(n,\mathbb{K}) \cong \mathcal{B} \otimes_{\mathbb{K}} \mathbb{M}(m,\mathbb{K})##.
The (Abelian) Brauer group then consists of the equivalence classes, with the multiplication induced by ##\otimes##.

(At least as far as my bad French allowed me to translate it.)
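
A standard concrete example (well known, though not something discussed above): over ##\mathbb{K} = \mathbb{R}##, Frobenius' theorem leaves only ##\mathbb{R}## and ##\mathbb{H}## as finite dimensional central division algebras (##\mathbb{C}## is excluded because its center is ##\mathbb{C}## itself). Since ##\mathbb{H} \cong \mathbb{H}^{op}## via conjugation, and ##\mathcal{A} \otimes_\mathbb{K} \mathcal{A}^{op} \cong End_\mathbb{K}(\mathcal{A})## for a central simple ##\mathcal{A}## (closely related to Lemma 1.25 above),

$$[\mathbb{H}] \cdot [\mathbb{H}] = [\mathbb{H} \otimes_{\mathbb{R}} \mathbb{H}^{op}] = [\mathbb{M}(4,\mathbb{R})] = [\mathbb{R}],$$

so ##Br(\mathbb{R}) \cong \mathbb{Z}/2\mathbb{Z}##.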
 
  • #8
jim mcnamara
@fresh_42 - Re: Scots & maths - try the Jack polynomial. :smile:
 
  • #9
fresh_42
@fresh_42 - Re: Scots & maths - try the Jack polynomial. :smile:
His research dealt with the development of analytic methods to evaluate certain integrals over matrix spaces.
Hmmm ... I wonder whether they spoke Gaelic ...
 
  • #10
Math Amateur

Hi fresh_42 ...

Just a further clarification ... ...

You write:

" ... ... We know that ##M(A) = \langle L_a , R_b \,\vert \, a,b \in A \rangle ## is generated by left- and right-multiplications by definition. ... ... "


Now ... if ##M(A)## is generated by ##L_a## and ##R_b## then it should contain elements like ##L_a L_b L_c## and ##L_a^2 R_b^2 R_c## ... and so on ...


BUT ... how do elements like these fit with Bresar's definition of ##M(A)## ... as follows:

##M(A) := \{ L_{a_1} R_{b_1} + \cdots + L_{a_n} R_{b_n} \ | \ a_i, b_i \in A, n \in \mathbb{N} \}##

... ...

... ... unless ... we treat ##L_a L_b L_c = L_{abc} R_1 = L_t R_u##

where ##t = abc## and ##u = 1## ... ...and ##t, u \in A## ... ...

... and ...

we treat ##L_a^2 R_b^2 R_c = L_{aa} R_{cbb} = L_r R_s##

where ##r = aa## and ##s = cbb## ...


Can you help me to clarify this issue ....

Peter
 
  • #11
fresh_42
That's correct. In the lines just ahead of Definition 1.22, Bresar mentions the rules by which ##\{L_{a_1}R_{b_1}+\ldots +L_{a_n}R_{b_n}\}## becomes an algebra. Without them, it would simply be a set of some endomorphisms.
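
Here is a quick random-matrix spot check (my own, in ##M_2(\mathbb{R})##) of exactly the reduction proposed in #10, namely that the word ##L_a^2 R_b^2 R_c## collapses to ##L_{aa} R_{cbb}##:

```python
import numpy as np

a, b, c, x = (np.random.randn(2, 2) for _ in range(4))

# apply L_a, L_a, R_b, R_b, R_c as composed mappings ...
word = a @ (a @ (((x @ c) @ b) @ b))
# ... and compare with the normal form L_{aa} R_{cbb}
normal = (a @ a) @ x @ (c @ b @ b)
print(np.allclose(word, normal))  # True: L_a^2 R_b^2 R_c = L_{aa} R_{cbb}
```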
 
  • #12
But also, #10 is almost immediate from the definitions of ##L_s## and ##R_t##:

##(L_a L_b)\, x = (L_a \circ L_b)\, x = L_a (L_b\, x) = L_a (bx) = a(bx) = (ab)x = L_{ab}\, x.##

And virtually the same reasoning shows that

##(R_c R_d)\, x = R_{dc}\, x.##

(Also note that any ##L_a## and any ##R_b## commute:

##L_a R_b = R_b L_a.##

This can be proved in a similar manner.)
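
The one subtlety worth internalizing is the reversed order for the right multiplications, ##R_c R_d = R_{dc}##; a one-line random-matrix check (mine, not from the thread):

```python
import numpy as np

c, d, x = (np.random.randn(2, 2) for _ in range(3))
# (R_c R_d)(x) = R_c(x d) = (x d) c  equals  R_{dc}(x) = x (d c)
print(np.allclose((x @ d) @ c, x @ (d @ c)))  # True, by associativity
```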
 
