I Multiplication Maps on Algebras ... Bresar, Lemma 1.25 ...

  1. Dec 5, 2016 #1
    I am reading Matej Bresar's book, "Introduction to Noncommutative Algebra" and am currently focussed on Chapter 1: Finite Dimensional Division Algebras ... ...

    I need help with the proof of Lemma 1.25 ...

    Lemma 1.25 reads as follows:


    [Image: Bresar, Lemma 1.25 and its proof]




    My questions on the proof of Lemma 1.25 are as follows:


    Question 1

    In the above text from Bresar we read the following:

    " ... ... Therefore ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]## ... ... "


    Can someone please explain exactly why Bresar is concluding that ##[ M(A) \ : \ F ] \ge d^2## ... ... ?





    Question 2

    In the above text from Bresar we read the following:

    " ... ... Therefore ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]##

    and so ##M(A) = \text{ End}_F (A)##. ... ... "


    Can someone please explain exactly why ##[ M(A) \ : \ F ] \ge d^2 = [ \text{ End}_F (A) \ : \ F ]## ... ...

    ... implies that ... ##M(A) = \text{ End}_F (A)## ...



    Hope someone can help ...

    Peter



    ===========================================================


    *** NOTE ***

    So that readers of the above post will be able to understand the context and notation of the post ... I am providing Bresar's first two pages on Multiplication Algebras ... ... as follows:



    [Images: Bresar's two pages introducing multiplication algebras]
     
    Last edited: Dec 5, 2016
  3. Dec 5, 2016 #2

    fresh_42

    Staff: Mentor

    We know that ##M(A) = \langle L_a , R_b \,\vert \, a,b \in A \rangle ## is generated by left- and right-multiplications by definition.
    Lemma 1.24 guarantees us that ##\{L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}\,\vert \, 1 \leq i,j \leq d \}## are linearly independent ##^{*})##. These are ##d^2## elements, so the dimension of ##M(A)## must be at least ##d^2##, because we have already found ##d^2## linearly independent vectors, which can be extended to a basis.

    ##^{*})\; 0 = \sum_{i,j} \lambda_{ij}L_{u_i}R_{u_j} = \sum_i L_{u_i} R_{b_i}## with ##b_i = \sum_j \lambda_{ij}u_j##. Lemma 1.24 then gives ##b_i = 0##, and hence ##\lambda_{ij}=0## because ##\{u_k\}## is a basis; so the ##L_{u_i}R_{u_j} = L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}## are linearly independent.
    Well, ##L_a## as well as ##R_b## are endomorphisms of ##A##, i.e. ##\mathbb{F}-##linear mappings ##A \rightarrow A##.
    Therefore ##M(A) \subseteq End_\mathbb{F}(A)##, and we have a subspace ##M(A)## of dimension at least ##d^2##. On the other hand, ##End_\mathbb{F}(A)## has dimension exactly ##d^2##, so there is no room left between ##M(A)## and ##End_\mathbb{F}(A)##.
    That ##\dim End_\mathbb{F}(A)=d^2## is seen most quickly by thinking of matrices: since ##\{u_1,\ldots, u_d\}## is a basis of ##A##, every element of ##End_\mathbb{F}(A)## can be written as a ##(d \times d)-##matrix with respect to this basis.
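    The dimension count above can be sanity-checked numerically. The following is my own NumPy sketch (not from Bresar): take ##A = M_2(\mathbb{R})##, a central simple algebra with ##d = 4##, and use the identity ##\operatorname{vec}(axb) = (b^T \otimes a)\operatorname{vec}(x)## (column-major vec) to write each operator ##L_{u_i}R_{u_j}## as a ##4 \times 4## matrix.

    ```python
    import numpy as np

    # A = M_2(R): a central simple algebra with d = dim A = 4.
    # L_a R_b : x -> a x b acts on the column-major vec(x) as kron(b.T, a).
    d = 4
    basis = []
    for k in range(d):                 # matrix units e_11, e_12, e_21, e_22
        e = np.zeros((2, 2))
        e[k // 2, k % 2] = 1.0
        basis.append(e)

    # Flatten each of the d^2 = 16 operators L_{u_i} R_{u_j} into a row.
    ops = np.array([np.kron(uj.T, ui).ravel() for ui in basis for uj in basis])

    # Full rank 16 means the operators are linearly independent, so
    # dim M(A) >= d^2 = dim End(A), which forces M(A) = End(A).
    print(np.linalg.matrix_rank(ops))  # 16
    ```

    The same computation with ##M_n(\mathbb{R})## in place of ##M_2(\mathbb{R})## gives rank ##n^4##.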
     
  4. Dec 6, 2016 #3

    Thanks fresh_42 ... most helpful in helping me to grasp the meaning of Lemma 1.25 ...

    But just a clarification ... You write:

    " ... ... Lemma 1.24 guarantees us that ##\{L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}\,\vert \, 1 \leq i,j \leq d \}## are linear independent ##^{*})##. "


    What do you mean when you write " ##L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}## " ...

    There appear to be two "multiplications" involved, namely ##\cdot## and ##\circ## ... but what are these ...?

    and, further what is the meaning and significance of the equality " ##L_{u_i} \cdot R_{u_j} = L_{u_i} \circ R_{u_j}## "

    Can you help ...


    Still reflecting on your post ...

    Peter
     
  5. Dec 6, 2016 #4

    fresh_42

    Staff: Mentor

    I simply wanted to indicate that the multiplication ##"\cdot"## here is the successive application of mappings, i.e. composition ##"\circ"##, no matter how it is written, even without a multiplication sign. In the end it is ##(L_{u_i}R_{u_j})(x) = L_{u_i}(R_{u_j}(x))=L_{u_i}(x\cdot u_j) = u_i \cdot x \cdot u_j##.
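    In other words, operator "multiplication" is just composition. A tiny numerical illustration of this (my own sketch, taking ##3\times 3## real matrices as the algebra):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    a, b, x = (rng.standard_normal((3, 3)) for _ in range(3))

    def L(a):          # left multiplication L_a : x -> a x
        return lambda x: a @ x

    def R(b):          # right multiplication R_b : x -> x b
        return lambda x: x @ b

    # (L_a R_b)(x) means L_a(R_b(x)), and that is exactly a x b:
    composed = L(a)(R(b)(x))
    direct = a @ x @ b
    print(np.allclose(composed, direct))  # True
    ```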
     
  6. Dec 6, 2016 #5
    I am learning a bit about algebra(s) that I never knew! What exactly does the property of being "central simple" have to do with the conclusions above?

    Also: now I want to understand the "Brauer group".
     
  7. Dec 6, 2016 #6

    fresh_42

    Staff: Mentor

    A ##\mathbb{K}-##algebra ##\mathcal{A}## is central simple if the center ##\mathcal{C}(\mathcal{A})=\{c\in \mathcal{A}\,\vert \,ca=ac \;\forall \,a\in\mathcal{A}\}## of ##\mathcal{A}## equals ##\mathbb{K}## and ##\mathcal{A}## as a ring is simple, i.e. has no two-sided ideals other than ##0## and ##\mathcal{A}##.
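    As a concrete check of centrality, here is a small NumPy sketch of my own (not from the book): it computes the dimension of the center of ##M_2(\mathbb{R})## by solving ##ce = ec## for every basis matrix ##e##, and finds it to be one-dimensional, i.e. just ##\mathbb{R} \cdot I##.

    ```python
    import numpy as np

    # Verify numerically that the center of M_2(R) is R * I (dimension 1).
    n = 2
    rows = []
    for k in range(n * n):
        e = np.zeros((n, n))
        e[k // n, k % n] = 1.0
        # c e = e c as a linear condition on the column-major vec(c):
        # vec(c e) = kron(e.T, I) vec(c),  vec(e c) = kron(I, e) vec(c)
        rows.append(np.kron(e.T, np.eye(n)) - np.kron(np.eye(n), e))
    M = np.vstack(rows)

    # The nullspace of M is exactly the center; its dimension is:
    center_dim = n * n - np.linalg.matrix_rank(M)
    print(center_dim)  # 1
    ```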

    It has been a long time since I last saw the Brauer group, so I've read the definition again. Funny that you bring it up. How did that come about?

    According to the Artin-Wedderburn theorem, every central simple algebra is isomorphic to a matrix algebra ##\mathbb{M}(n,\mathcal{D})## over a division ring ##\mathcal{D}##, here ##\mathcal{D}=\mathbb{K}##. Now all ##\mathbb{M}(n,\mathbb{K})## are considered equivalent, i.e. ##\mathbb{M}(n,\mathbb{K}) \sim \mathbb{M}(m,\mathbb{K})##, and the elements of the Brauer group (of ##\mathbb{K}##) are the equivalence classes, e.g. ##[1] = [\mathbb{M}(1,\mathbb{K})]=[\mathbb{K}]##. The inverse of a class is given by the opposite algebra ##\mathcal{A}^{op}##, with multiplication ##(a,b) \mapsto ba##.

    However, the really interesting question here is: Do all Scottish mathematicians (Hamilton, Wedderburn, ...) have a special relationship to strange algebras and why is it so? :cool:
     
  8. Dec 6, 2016 #7

    fresh_42

    Staff: Mentor

    I wasn't quite satisfied with this lapidary description of the equivalence relation here. Unfortunately the English and German Wikipedia pages are one-to-one translations of each other, but the French one is a little better. Starting with a central simple algebra ##\mathcal{A}## over a field ##\mathbb{K}##, we have ##\mathcal{A} \otimes_{\mathbb{K}} \mathbb{L} \cong \mathbb{M}(n,\mathbb{L})## for a finite field extension ##\mathbb{L} \supseteq \mathbb{K}##.

    Now ##\mathcal{A} \sim \mathcal{B}##, i.e. ##\mathcal{A}## and ##\mathcal{B}## are considered equivalent, if there are natural numbers ##n,m## and an isomorphism ##\mathcal{A} \otimes_{\mathbb{K}} \mathbb{M}(n,\mathbb{K}) \cong \mathcal{B} \otimes_{\mathbb{K}} \mathbb{M}(m,\mathbb{K})##.
    The (Abelian) Brauer group then consists of these equivalence classes, with multiplication induced by ##\otimes##.

    (At least as far as my bad French allowed me to translate it.)
     
  9. Dec 6, 2016 #8

    jim mcnamara

    User Avatar

    Staff: Mentor

    @fresh_42 - Re: Scots & maths - try the Jack polynomial. :smile:
     
  10. Dec 6, 2016 #9

    fresh_42

    Staff: Mentor

    Hmmm ... I wonder whether they spoke Gaelic ...
     
  11. Dec 6, 2016 #10

    Hi fresh_42 ...

    Just a further clarification ... ...

    You write:

    " ... ... We know that ##M(A) = \langle L_a , R_b \,\vert \, a,b \in A \rangle ## is generated by left- and right-multiplications by definition. ... ... "


    Now ... if ##M(A)## is generated by ##L_a## and ##R_b## then it should contain elements like ##L_a L_b L_c## and ##L_a^2 R_b^2 R_c## ... and so on ...


    BUT ... how do elements like these fit with Bresar's definition of ##M(A)## ... as follows:

    ##M(A) := \{ L_{a_1} R_{b_1} + \ \ldots \ + L_{a_n} R_{b_n} \ | \ a_i, b_i \in A, n \in \mathbb{N} \}##

    ... ...

    ... ... unless ... we treat ##L_a L_b L_c = L_{abc} R_1 = L_t R_u##

    where ##t = abc## and ##u = 1## ... ...and ##t, u \in A## ... ...

    ... and ...

    we treat ##L_a^2 R_b^2 R_c = L_{aa} R_{cbb} = L_r R_s##

    where ##r = aa## and ##s = cbb## ...


    Can you help me to clarify this issue ....

    Peter
     
  12. Dec 7, 2016 #11

    fresh_42

    Staff: Mentor

    That's correct. In the lines just ahead of Definition 1.22, Bresar gives the rules by which ##\{L_{a_1}R_{b_1}+\ldots +L_{a_n}R_{b_n}\}## becomes an algebra. Without them, it would simply be a set of endomorphisms.
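    Peter's two reductions can also be confirmed numerically; here is a quick sketch of my own with random ##3\times 3## real matrices:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    a, b, c, x = (rng.standard_normal((3, 3)) for _ in range(4))

    # L_a L_b L_c x = a(b(c x))  versus  L_{abc} R_1 x = (abc) x 1:
    lhs1 = a @ (b @ (c @ x))
    rhs1 = (a @ b @ c) @ x @ np.eye(3)
    print(np.allclose(lhs1, rhs1))  # True

    # L_a^2 R_b^2 R_c x = a a ((x c) b) b  versus  L_{aa} R_{cbb} x:
    lhs2 = a @ (a @ (((x @ c) @ b) @ b))
    rhs2 = (a @ a) @ x @ (c @ b @ b)
    print(np.allclose(lhs2, rhs2))  # True
    ```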
     
  13. Dec 7, 2016 #12
    But also, #10 is almost immediate from the definitions of ##L_s## and ##R_t##:

    ##(L_a L_b)\,x = (L_a \circ L_b)\,x = L_a (L_b\, x) = L_a (bx) = a(bx) = (ab)x = L_{ab}\,x.##

    And virtually the same reasoning shows

    ##(R_c R_d)\,x = R_{dc}\,x.##

    (Also note that any ##L_a## and any ##R_b## commute:

    ##L_a R_b = R_b L_a.##

    This can be proved in a similar manner.)
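    These three identities are easy to spot-check numerically as well; a short sketch of my own with random matrices:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    a, b, c, d, x = (rng.standard_normal((3, 3)) for _ in range(5))

    # (L_a L_b) x = L_{ab} x:
    ok_left = np.allclose(a @ (b @ x), (a @ b) @ x)
    # (R_c R_d) x = R_c(R_d(x)) = (x d) c = x (d c) = R_{dc} x:
    ok_right = np.allclose((x @ d) @ c, x @ (d @ c))
    # L_a and R_b commute: a (x b) = (a x) b:
    ok_commute = np.allclose(a @ (x @ b), (a @ x) @ b)

    print(ok_left, ok_right, ok_commute)  # True True True
    ```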
     