Multiplication Maps on Algebras: Bresar, Lemma 1.25

  • Context: Undergraduate
  • Thread starter: Math Amateur
  • Tags: Multiplication

Discussion Overview

The discussion revolves around the proof of Lemma 1.25 from Matej Bresar's "Introduction to Noncommutative Algebra," specifically focusing on multiplication maps in finite dimensional division algebras. Participants seek clarification on the implications of certain statements made in the lemma regarding dimensions of algebras and endomorphisms.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • Peter questions why Bresar concludes that the dimension of the multiplication algebra, ##[ M(A) : F ]##, is at least ##d^2##, referencing the linear independence of certain mappings as guaranteed by Lemma 1.24.
  • Peter further inquires how the statement that ##[ M(A) : F ] \ge d^2 = [ \text{ End}_F (A) : F ]## leads to the conclusion that ##M(A) = \text{End}_F (A)##.
  • Another participant explains that since ##M(A)## is a subspace of ##\text{End}_F(A)## with dimension at least ##d^2##, and ##\text{End}_F(A)## has dimension exactly ##d^2##, it follows that they must be equal.
  • Clarifications are sought regarding the notation and meaning of the operations involved, specifically the distinction between the multiplication signs ##\cdot## and ##\circ## in the context of mappings.
  • Discussion shifts to the concept of "central simple" algebras and the Brauer group, with participants exploring the definitions and implications of these concepts in relation to the main topic.
  • One participant expresses dissatisfaction with the descriptions of equivalence relations in the context of central simple algebras and seeks a more nuanced understanding.

Areas of Agreement / Disagreement

Participants express varying levels of understanding and seek clarification on specific points, indicating that there is no consensus on all aspects of the discussion. Some participants agree on the implications of dimensions, while others raise questions about the definitions and relationships between concepts.

Contextual Notes

Participants reference specific lemmas and definitions from Bresar's text, which may not be universally understood without access to the book. The discussion includes assumptions about the linear independence of mappings and the structure of algebras that are not fully resolved.

Who May Find This Useful

This discussion may be useful for students and researchers interested in noncommutative algebra, particularly those studying multiplication maps, central simple algebras, and the Brauer group.

Math Amateur
I am reading Matej Bresar's book, "Introduction to Noncommutative Algebra", and am currently focused on Chapter 1: Finite Dimensional Division Algebras.

I need help with the proof of Lemma 1.25 ...

Lemma 1.25 reads as follows:
[Images of Bresar, Lemma 1.25 and its proof; see the attachments below.]


My questions on the proof of Lemma 1.25 are as follows:

Question 1

In the above text from Bresar we read the following:

" ... ... Therefore ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]## ... ... "Can someone please explain exactly why Bresar is concluding that ##[ M(A) \ : \ F ] \ge d^2## ... ... ?Question 2

In the above text from Bresar we read the following:

" ... ... Therefore ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]##

and so ##M(A) = [ \text{ End}_F (A) \ : \ F ]##. ... ... "Can someone please explain exactly why ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]## ... ...

... implies that ... ##M(A) = [ \text{ End}_F (A) \ : \ F ]## ...
Hope someone can help ...

Peter
===========================================================

*** NOTE ***

So that readers of the above post can understand its context and notation, I am providing Bresar's first two pages on Multiplication Algebras, as follows:
[Images of Bresar, Section 1.5: The Multiplication Algebra, first two pages; see the attachments below.]
 

Attachments

  • Bresar - 1 - Lemma 1.25 - PART 1 ... ....png
  • Bresar - 2 - Lemma 1.25 - PART 2 ... ....png
  • Bresar - 1 - Section 1.5 Multiplication Algebra - PART 1 ... ....png
  • Bresar - 2 - Section 1.5 Multiplication Algebra - PART 2 ... ....png
Math Amateur said:
Question 1
In the above text from Bresar we read the following:
" ... ... Therefore ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]## ... ... "
Can someone please explain exactly why Bresar is concluding that ##[ M(A) \ : \ F ] \ge d^2## ... ... ?
We know that ##M(A) = \langle L_a , R_b \,\vert \, a,b \in A \rangle## is by definition generated by left and right multiplications.
Lemma 1.24 guarantees that the maps ##\{L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}\,\vert \, 1 \leq i,j \leq d \}## are linearly independent ##^{*})##. There are ##d^2## of them, so the dimension of ##M(A)## must be at least ##d^2##, because we have already found ##d^2## linearly independent vectors, which can be extended to a basis.

##^{*}) \;## Set ##b_i := \sum_j \lambda_{ij}u_j##. Then ##0 = \sum_{i,j} \lambda_{ij}L_{u_i}R_{u_j} = \sum_i L_{u_i} R_{b_i} \Longrightarrow## (Lemma 1.24) ##b_i = \sum_j \lambda_{ij}u_j = 0 \Longrightarrow \lambda_{ij}=0##, because ##\{u_k\}## is a basis; hence the ##L_{u_i}R_{u_j} = L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}## are linearly independent.
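A minimal numeric sketch of this dimension count (my own illustration, not from Bresar; it assumes NumPy and takes ##A = \mathbb{M}(2,\mathbb{R})##, a central simple ##\mathbb{R}##-algebra with ##d = 4##, using the matrix units as the basis ##u_1, \ldots, u_4##):

[CODE=python]
import itertools
import numpy as np

# Basis u_1..u_4 of A = M_2(R): the matrix units e11, e12, e21, e22.
basis = []
for r, c in itertools.product(range(2), repeat=2):
    u = np.zeros((2, 2))
    u[r, c] = 1.0
    basis.append(u)

def LR(a, b):
    # 4x4 matrix of x -> a x b, with A identified with R^4 row-wise;
    # the columns are the coordinate vectors of a u b, u running over the basis.
    return np.column_stack([(a @ u @ b).reshape(-1) for u in basis])

# Flatten each of the d^2 = 16 operators L_{u_i} R_{u_j} into a 16-vector.
ops = np.array([LR(ui, uj).reshape(-1) for ui in basis for uj in basis])
print(np.linalg.matrix_rank(ops))  # prints 16 = d^2
[/CODE]

The rank comes out as ##16 = d^2##, which is exactly the linear independence used above.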
Question 2
In the above text from Bresar we read the following:
" ... ... Therefore ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]##
and so ##M(A) = [ \text{ End}_F (A) \ : \ F ]##. ... ... "
Can someone please explain exactly why ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]## ... ...
... implies that ... ##M(A) = [ \text{ End}_F (A) \ : \ F ]## ...
Well, ##L_a## as well as ##R_b## are endomorphisms of ##A##, i.e. ##\mathbb{F}##-linear mappings ##A \rightarrow A##.
Therefore ##M(A) \subseteq \text{End}_\mathbb{F}(A)##, and we have a subspace ##M(A)## of dimension at least ##d^2##. On the other hand, ##\text{End}_\mathbb{F}(A)## has dimension exactly ##d^2##, so there is no room left between ##M(A)## and ##\text{End}_\mathbb{F}(A)##.
That ##\dim \text{End}_\mathbb{F}(A)=d^2## is seen most quickly by thinking of matrices: since ##\{u_1,\ldots, u_d\}## is a basis of ##A##, every linear map, i.e. every element of ##\text{End}_\mathbb{F}(A)##, can be written as a ##(d \times d)##-matrix with respect to this basis.
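In one line, the squeeze argument is:

$$d^2 \le [\, M(A) : F \,] \le [\, \text{End}_F(A) : F \,] = d^2 \;\Longrightarrow\; M(A) = \text{End}_F(A).$$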
 
fresh_42 said:
Lemma 1.24 guarantees that the maps ##\{L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}\,\vert \, 1 \leq i,j \leq d \}## are linearly independent ##^{*})##.
Thanks fresh_42 ... most helpful for grasping the meaning of Lemma 1.25 ...

But just a clarification ... you write:

" ... Lemma 1.24 guarantees that the maps ##\{L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}\,\vert \, 1 \leq i,j \leq d \}## are linearly independent ##^{*})##. ... "

What do you mean when you write ##L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}## ...?

There appear to be two "multiplications" involved, namely ##\cdot## and ##\circ## ... but what are these ...?

And, further, what is the meaning and significance of the equality ##L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}## ...?

Can you help ...? Still reflecting on your post ...

Peter
 
I simply wanted to indicate that the multiplication ##"\cdot"## here is the successive application of mappings, ##"\circ"##, no matter how it is written, even without a multiplication sign. In the end it is ##(L_{u_i}R_{u_j})(x) = L_{u_i}(R_{u_j}(x))=L_{u_i}(x\cdot u_j) = u_i \cdot x \cdot u_j##.
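A tiny self-contained check of this identity (my own sketch, not from the thread; it assumes NumPy and identifies ##A = \mathbb{M}(2,\mathbb{R})## with ##\mathbb{R}^4## by flattening row-wise, so that ##L_a## and ##R_b## become Kronecker products):

[CODE=python]
import numpy as np

rng = np.random.default_rng(0)
a, b, x = (rng.standard_normal((2, 2)) for _ in range(3))

L = lambda a: np.kron(a, np.eye(2))    # matrix of x -> a x
R = lambda b: np.kron(np.eye(2), b.T)  # matrix of x -> x b

lhs = L(a) @ R(b) @ x.reshape(-1)   # the operator product applied to x
rhs = (a @ x @ b).reshape(-1)       # a . x . b computed directly
assert np.allclose(lhs, rhs)        # (L_a R_b)(x) = a x b
[/CODE]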
 
I am learning a bit about algebra(s) that I never knew! What exactly does the property of being "central simple" have to do with the conclusions above?

Also: now I want to understand the "Brauer group".
 
A ##\mathbb{K}##-algebra ##\mathcal{A}## is central simple if the center ##\mathcal{C}(\mathcal{A})=\{c\in \mathcal{A}\,\vert \,ca=ac \ \forall \,a\in\mathcal{A}\}## of ##\mathcal{A}## equals ##\mathbb{K}##, and ##\mathcal{A}## as a ring is simple, i.e. has no two-sided ideals other than ##\{0\}## and ##\mathcal{A}##.
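(As a standard illustration, not from the thread: every matrix algebra ##\mathbb{M}(n,\mathbb{K})## is central simple over ##\mathbb{K}##, whereas ##\mathbb{C}## as an ##\mathbb{R}##-algebra is simple but not central, since its center is all of ##\mathbb{C}## rather than ##\mathbb{R}##.)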

It has been a long time since I last saw the Brauer group, so I've read the definition again. Funny that you should bring it up. How did that come about?

According to the Artin-Wedderburn theorem, every central simple algebra is isomorphic to a matrix algebra ##\mathbb{M}(n,\mathcal{D})## over a division ring ##\mathcal{D}## whose center is ##\mathbb{K}##. Matrix algebras over the same division ring are considered equivalent, i.e. ##\mathbb{M}(n,\mathcal{D}) \sim \mathbb{M}(m,\mathcal{D})##, and the elements of the Brauer group (of ##\mathbb{K}##) are the equivalence classes. E.g. ##[1] = [\mathbb{M}(1,\mathbb{K})]=[\mathbb{K}]##, and the inverse of a class is given by the opposite algebra ##\mathcal{A}^{op}##, with the multiplication ##(a,b) \mapsto ba##.

However, the really interesting question here is: do all Scottish mathematicians (Hamilton, Wedderburn, ...) have a special relationship to strange algebras, and why is that so? :cool:
 
I wasn't quite satisfied with this lapidary description of an equivalence relation. Unfortunately the English and German Wikipedia pages are one-to-one translations of each other, but the French one is a little better. Starting with a central simple algebra ##\mathcal{A}## over a field ##\mathbb{K}##, we have ##\mathcal{A} \otimes_{\mathbb{K}} \mathbb{L} \cong \mathbb{M}(n,\mathbb{L})## for a suitable finite field extension ##\mathbb{L} \supseteq \mathbb{K}##.

Now ##\mathcal{A} \sim \mathcal{B}## are considered equivalent if there are natural numbers ##n,m## and an isomorphism ##\mathcal{A} \otimes_{\mathbb{K}} \mathbb{M}(n,\mathbb{K}) \cong \mathcal{B} \otimes_{\mathbb{K}} \mathbb{M}(m,\mathbb{K})##.
The (abelian) Brauer group then consists of the equivalence classes, with the multiplication induced by ##\otimes##.

(At least as far as my bad French allowed me to translate it.)
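(As a concrete illustration, not from the thread: over ##\mathbb{R}## there are exactly two classes,

$$\operatorname{Br}(\mathbb{R}) = \{[\mathbb{R}],\,[\mathbb{H}]\} \cong \mathbb{Z}/2\mathbb{Z},$$

since ##\mathbb{H} \otimes_{\mathbb{R}} \mathbb{H} \cong \mathbb{M}(4,\mathbb{R})##, so the quaternions are their own inverse.)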
 
@fresh_42 - Re: Scots & maths - try the Jack polynomial. :smile:
 
jim mcnamara said:
@fresh_42 - Re: Scots & maths - try the Jack polynomial. :smile:
His research dealt with the development of analytic methods to evaluate certain integrals over matrix spaces.
Hmmm ... I wonder whether they spoke Gaelic ...
 
  • #10
fresh_42 said:
We know that ##M(A) = \langle L_a , R_b \,\vert \, a,b \in A \rangle## is by definition generated by left and right multiplications.
Hi fresh_42 ...

Just a further clarification ... ...

You write:

" ... ... We know that ##M(A) = \langle L_a , R_b \,\vert \, a,b \in A \rangle ## is generated by left- and right-multiplications by definition. ... ... "Now ... if ##M(A)## is generated by ##L_a## and ##R_b## then it should contain elements like ##L_a L_b L_c## and ##L_a^2 R_b^2 R_c## ... and so on ...BUT ... how do elements like these fit with Bresar's definition of ##M(A)## ... as follows:

##M(A) := \{ L_{a_1} R_{b_2} + \ ... \ ... \ + L_{a_1} R_{b_2} \ | \ a_i, b_i \in A, n \in \mathbb{N} \}##

... ...

... ... unless ... we treat ##L_a L_b L_c = L_{abc} R_1 = L_t R_u##

where ##t = abc## and ##u = 1## ... ...and ##t, u \in A## ... ...

... and ...

we treat ##L_a^2 R_b^2 R_c = L_{aa} R_{cbb} = L_r R_s##

where ##r = aa## and ##s = cbb## ...Can you help me to clarify this issue ...

Peter
 
  • #11
That's correct. In the lines preceding Definition 1.22, Bresar gives the rules by which the set ##\{L_{a_1}R_{b_1}+\ldots +L_{a_n}R_{b_n}\}## becomes an algebra; without them it would simply be a set of endomorphisms.
 
  • #12
But also, the computation in #10 is almost immediate from the definitions of ##L_s## and ##R_t##:

$$(L_a L_b)\, x = (L_a \circ L_b)\, x = L_a (L_b\, x) = L_a (bx) = a(bx) = (ab)x = L_{ab}\, x .$$

Virtually the same reasoning shows

$$(R_c R_d)\, x = R_{dc}\, x .$$

(Also note that any ##L_a## and any ##R_b## commute:

$$L_a R_b = R_b L_a .$$

This can be proved in a similar manner.)
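A quick self-contained numeric check of all three identities (my own sketch, not from the thread; it assumes NumPy and uses the same row-wise identification of ##A = \mathbb{M}(2,\mathbb{R})## with ##\mathbb{R}^4## as in the sketches above):

[CODE=python]
import numpy as np

rng = np.random.default_rng(1)
a, b, c, d = (rng.standard_normal((2, 2)) for _ in range(4))

L = lambda a: np.kron(a, np.eye(2))    # matrix of x -> a x
R = lambda b: np.kron(np.eye(2), b.T)  # matrix of x -> x b

assert np.allclose(L(a) @ L(b), L(a @ b))     # L_a L_b = L_{ab}
assert np.allclose(R(c) @ R(d), R(d @ c))     # R_c R_d = R_{dc}
assert np.allclose(L(a) @ R(b), R(b) @ L(a))  # L_a R_b = R_b L_a
[/CODE]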
 
