MHB Linear Algebras or k-algebras - Cohn p. 53-54 - SIMPLE CLARIFICATION

  • Thread starter: Math Amateur
  • Tags: Linear
Summary
The discussion centers on understanding the multiplication defined in a linear algebra, as presented in P. M. Cohn's "Introduction to Ring Theory." Equation (2.1) expresses the product of any two basis elements as a linear combination of the basis elements; its coefficients $\gamma_{ijk}$ are the multiplication constants of the algebra. Participants point out that assuming such products are zero is unjustified until the multiplication in the algebra has actually been specified. Examples from polynomial rings and matrix algebras show how the multiplication can be nontrivial, and the conversation also highlights the distinction between scalar multiplication and ring multiplication in a $k$-algebra.
I am reading "Introduction to Ring Theory" by P. M. Cohn (Springer Undergraduate Mathematics Series)

In Chapter 2: Linear Algebras and Artinian Rings we read the following on pages 53-54:

View attachment 3132
View attachment 3133

In the above text we read:

" … … The multiplication in A is completely determined by the products of the basis elements. Thus we have the equations

$$u_i u_j = \sum_k \gamma_{ijk} u_k$$ … … … (2.1)

where the elements $$\gamma_{ijk}$$ are called the multiplication constants of the algebra … … "

Can someone please explain how equation (2.1) follows?

Peter
 
Hi Peter,

The products $u_i u_j$ are elements of $A$, so they have unique expressions as linear combinations of the basis elements $u_k$. This leads to (2.1).
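
To see this concretely, here is a minimal sketch in Python (a toy example of mine, not from Cohn): take the two-dimensional algebra $k[x]/(x^2)$ with basis $u_1 = 1$, $u_2 = x$. Each product $u_i u_j$ is again an element of the algebra, so its coordinates in the basis are exactly the $\gamma_{ijk}$.

```python
def mult(p, q):
    """Multiply elements of k[x]/(x^2), stored as pairs (a, b) = a*1 + b*x."""
    (a, b), (c, d) = p, q
    return (a * c, a * d + b * c)   # the b*d term dies, since x^2 = 0

basis = [(1, 0), (0, 1)]            # u_1 = 1, u_2 = x

for i, ui in enumerate(basis, start=1):
    for j, uj in enumerate(basis, start=1):
        coords = mult(ui, uj)       # unique expansion in the basis
        for k, g in enumerate(coords, start=1):
            print(f"gamma_{i}{j}{k} = {g}")
```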
 
Euge said:
Hi Peter,

The products $u_i u_j$ are elements of $A$, so they have unique expressions as linear combinations of the basis elements $u_k$. This leads to (2.1).

Yes, I understand that, Euge, but I'm still a bit puzzled … how shall I explain ...

well … I keep thinking … if the algebra has $n$ dimensions … then ...

$$u_i = 0 \cdot u_1 + 0 \cdot u_2 + \dots + 1 \cdot u_i + \dots + 0 \cdot u_n$$

and

$$u_j = 0 \cdot u_1 + 0 \cdot u_2 + \dots + 1 \cdot u_j + \dots + 0 \cdot u_n$$

and so it seems (I think)

$$u_i \cdot u_j = 0$$

Can you clarify?

Peter

EDIT: Maybe my 'multiplication' is wrong?
 
Peter said:
" … if the algebra has $n$ dimensions … then … it seems (I think) $$u_i \cdot u_j = 0$$ … Maybe my 'multiplication' is wrong? … "
Peter, how do you know that $u_i u_j = 0$ for all $i$ and $j$? The identities you have do not imply this, since the multiplication in $A$ has not been specified. In fact, what you're implying is that the multiplication in $A$ is trivial. I'll give two examples with nontrivial multiplication.

$\textbf{Example 1}$. The polynomial ring $\Bbb R[x]$ is a linear algebra over $\Bbb R$ with basis $\{1, x, x^2,\ldots\}$ and multiplication defined by the usual ring multiplication of polynomials. For each $k \ge 0$, let $u_k = x^k$. Then $u_i u_j = u_{i+j}$ for all $i$ and $j$. So the multiplication constants are given by $\gamma_{ijk} = \delta_k^{i+j}$, where $\delta$ denotes the Kronecker delta.
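
As a quick sanity check of Example 1, here is a small Python sketch (the helper `poly_mult` is mine, not from the thread): multiplying $u_i = x^i$ by $u_j = x^j$ puts the single nonzero coefficient at $k = i + j$.

```python
def poly_mult(p, q):
    """Multiply polynomials stored as coefficient lists (index = degree)."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def u(k):
    return [0.0] * k + [1.0]        # the basis element x^k

i, j = 2, 3
coeffs = poly_mult(u(i), u(j))      # coordinates of u_i u_j in the basis
assert all(c == (1.0 if k == i + j else 0.0) for k, c in enumerate(coeffs))
print(coeffs)                       # [0.0, 0.0, 0.0, 0.0, 0.0, 1.0]
```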

$\textbf{Example 2}$. Consider $\Bbb R^3$ with multiplication given by the cross product. It is a non-associative linear algebra over $\Bbb R$ with standard basis $\{e_1, e_2, e_3\}$. Since $e_1e_2 = e_3$, $e_2e_3 = e_1$, $e_3e_1 = e_2$, and $e_ie_j = -e_je_i$ for all $i$ and $j$, the multiplication constants are given by

$\gamma_{111} = 0$, $\gamma_{112} = 0$, $\gamma_{113} = 0$,

$\gamma_{121} = 0$, $\gamma_{122} = 0$, $\gamma_{123} = 1$,

$\gamma_{131} = 0$, $\gamma_{132} = -1$, $\gamma_{133} = 0$,

$\gamma_{211} = 0$, $\gamma_{212} = 0$, $\gamma_{213} = -1$,

$\gamma_{221} = 0$, $\gamma_{222} = 0$, $\gamma_{223} = 0$,

$\gamma_{231} = 1$, $\gamma_{232} = 0$, $\gamma_{233} = 0$,

$\gamma_{311} = 0$, $\gamma_{312} = 1$, $\gamma_{313} = 0$,

$\gamma_{321} = -1$, $\gamma_{322} = 0$, $\gamma_{323} = 0$,

$\gamma_{331} = 0$, $\gamma_{332} = 0$, $\gamma_{333} = 0$.

Observe that $\gamma_{ijk}$ equals $1$ when $(i,j,k)$ is a cyclic permutation of $(1,2,3)$ and $-1$ when $(i,j,k)$ is a cyclic permutation of $(1,3,2)$. Also, $\gamma_{ijk} = 0$ whenever $i$, $j$, and $k$ are not all distinct. So in fact, $\gamma_{ijk} = \epsilon_{ijk}$, the Levi-Civita symbol.
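
Here is a short numerical check of Example 2 (a sketch assuming `numpy`; the `levi_civita` formula below is a standard closed form for three 0-indexed indices):

```python
import numpy as np

e = np.eye(3)                        # e_1, e_2, e_3, 0-indexed as e[0..2]

def levi_civita(i, j, k):
    return (i - j) * (j - k) * (k - i) / 2   # +1, -1 or 0 on {0,1,2}

for i in range(3):
    for j in range(3):
        prod = np.cross(e[i], e[j])          # the product e_i e_j
        for k in range(3):
            assert prod[k] == levi_civita(i, j, k)   # gamma_ijk = eps_ijk
print("gamma_ijk = epsilon_ijk confirmed")
```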
 
It's somewhat of a trivial example, but it may be helpful to consider the $\Bbb Q$-algebra $\text{Mat}_n(\Bbb Q)$, which has a basis consisting of the elementary matrices $E_{ij} = (a_{km})$ where:

$a_{km} = 1$ if $k = i$ and $m = j$,
$a_{km} = 0$ otherwise.

So let's look at what happens when we multiply $E_{ij}E_{i'j'}$:

$E_{ij}E_{i'j'} = E_{ij'}$ if $j = i'$,
$E_{ij}E_{i'j'} = 0$ otherwise.

To get a handle on what this really means, let's specify $n = 2$, so our 2x2 matrices can be seen as being "$\Bbb Q^4$ with a multiplication". So our basis is, explicitly:

$u_1 = \begin{bmatrix}1&0\\0&0\end{bmatrix};\ u_2 = \begin{bmatrix}0&1\\0&0\end{bmatrix};\ u_3 = \begin{bmatrix}0&0\\1&0\end{bmatrix};\ u_4 = \begin{bmatrix}0&0\\0&1\end{bmatrix}$

We have:

$u_1u_1 = u_1$, so $\gamma_{111} = 1$ and $\gamma_{11k} = 0$ for $k = 2,3,4$.
$u_1u_2 = u_2$, so $\gamma_{121} = 0$, $\gamma_{122} = 1$, and $\gamma_{12k} = 0$ for $k = 3,4$.
$u_1u_3 = 0$ (all the $\gamma$'s are 0)

and so on (there are 13 more products to compute, and thus 52 more multiplication constants).
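
For completeness, here is a sketch (in Python, assuming `numpy`) that grinds out all 64 constants from the basis $u_1, \dots, u_4$ above; the coefficient of $u_k$ in a product is just the matching matrix entry, recovered below with an entrywise inner product:

```python
import numpy as np

def matrix_unit(i, j, n=2):
    """The matrix E_ij: a single 1 in row i, column j (0-indexed)."""
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

u = [matrix_unit(0, 0), matrix_unit(0, 1), matrix_unit(1, 0), matrix_unit(1, 1)]

for i in range(4):
    for j in range(4):
        prod = u[i] @ u[j]
        # the coefficient of u_k is the corresponding entry of the product,
        # picked out by an entrywise inner product with u_k
        gammas = [float(np.sum(prod * u[k])) for k in range(4)]
        print(f"u_{i+1} u_{j+1}:", gammas)
```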

Another important $k$-algebra is $k[x]$ (polynomials over $k$) which has as one possible basis:

$u_i = x^i$.

Since $u_iu_j = (x^i)(x^j) = x^{i+j} = u_{i+j}$, we see that:

$\gamma_{ijk} = 1$ when $k = i+j$
$\gamma_{ijk} = 0$, otherwise.

Another often-used example: we have $\Bbb R^2$ as an $\Bbb R$-algebra given the basis:

$\{e_1,e_2\} = \{(1,0),(0,1)\}$ with multiplication constants:

$\gamma_{111} = 1$
$\gamma_{112} = 0$
$\gamma_{121} = 0$
$\gamma_{122} = 1$
$\gamma_{211} = 0$
$\gamma_{212} = 1$
$\gamma_{221} = -1$
$\gamma_{222} = 0$.

Since $\gamma_{ijk} = \gamma_{jik}$, this forms a commutative $\Bbb R$-algebra, more commonly known as $\Bbb C$ (this is a DIVISION ALGEBRA since $U(\Bbb C) = \Bbb C - \{0\}$, that is: all non-zero elements are invertible).

Interestingly enough, we actually have this as a sub-algebra of $\text{Mat}_2(\Bbb R)$, with basis:

$\{v_1,v_2\} = \{E_{11} + E_{22},E_{21}-E_{12}\}$.

This is because the basis elements when multiplied give a linear combination of these two basis elements:

$v_1v_1 = v_1$
$v_2v_1 = v_1v_2 = v_2$
$v_2v_2 = -v_1$, as can be readily verified.
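
These relations are quick to confirm numerically (a sketch assuming `numpy`):

```python
import numpy as np

v1 = np.eye(2)                                 # E_11 + E_22
v2 = np.array([[0.0, -1.0], [1.0, 0.0]])       # E_21 - E_12

assert np.array_equal(v1 @ v1, v1)             # v_1 v_1 = v_1
assert np.array_equal(v1 @ v2, v2)             # v_1 v_2 = v_2
assert np.array_equal(v2 @ v1, v2)             # v_2 v_1 = v_2
assert np.array_equal(v2 @ v2, -v1)            # v_2 v_2 = -v_1
```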

***************

Given a basis for $k^n$, we can use it to define an isomorphism $\text{Mat}_n(k) \cong \text{Hom}_{k}(k^n,k^n)$, so the study of the (particular) linear algebra $\text{Mat}_n(k)$ is typically the focus of a course called "linear algebra".

It is important not to confuse the $k$-action (scalar multiplication) of a $k$-algebra with the ring multiplication; however, if our algebra $A$ is unital, we can often consider it as an extension ring of $k$ via the map:

$\alpha \mapsto \alpha\cdot 1_A$

For example, with $A = \text{Mat}_n(k)$ we have the embedding:

$\alpha \mapsto \alpha I_n$

and with $A = k[x]$ we have the natural embedding of $k$ as the constant polynomials

(note that $k \cong k[x]/(x)$, an isomorphism which essentially amounts to "evaluating $p(x)$ at $0$" for any polynomial $p$).
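
A minimal sketch of the first embedding (assuming `numpy`; $n = 3$ is an arbitrary choice) checks that $\alpha \mapsto \alpha I_n$ respects addition, multiplication, and the identity:

```python
import numpy as np

n = 3

def embed(alpha):
    return alpha * np.eye(n)       # alpha -> alpha * I_n

a, b = 2.0, 5.0
assert np.array_equal(embed(a) + embed(b), embed(a + b))   # additive
assert np.array_equal(embed(a) @ embed(b), embed(a * b))   # multiplicative
assert np.array_equal(embed(1.0), np.eye(n))               # 1 maps to 1_A
```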
 
Deveno said:
" … It's somewhat of a trivial example, but it may be helpful to consider the $\Bbb Q$-algebra $\text{Mat}_n(\Bbb Q)$ … "
Thanks so much for the example, Deveno … the examples you post are extremely helpful and informative ...

Working through the details of your post very soon ...

Peter
 
Deveno said:
" … It's somewhat of a trivial example, but it may be helpful to consider the $\Bbb Q$-algebra $\text{Mat}_n(\Bbb Q)$ … "

Hi Deveno … just working through your post … and need help ... You write:

" … … Another often-used example: we have $\Bbb R^2$ as an $\Bbb R$-algebra given the basis:

$\{e_1,e_2\} = \{(1,0),(0,1)\}$ with multiplication constants:

$\gamma_{111} = 1$
$\gamma_{112} = 0$
$\gamma_{121} = 0$
$\gamma_{122} = 1$
$\gamma_{211} = 0$
$\gamma_{212} = 1$
$\gamma_{221} = -1$
$\gamma_{222} = 0$. … … "

I am assuming (rather tentatively) that multiplication is of the form

$$(x_1, y_1) \cdot (x_2, y_2) = (x_1 x_2, y_1 y_2)$$ (componentwise)

and that

$$e_i \cdot e_j = \sum_{k = 1}^2 \gamma_{ijk} e_k = \gamma_{ij1} e_1 + \gamma_{ij2} e_2$$

Is that right?

BUT … if it is correct, then

$$e_1 \cdot e_1 = (1,0) \cdot (1,0) = (1,0) = (1,0) + (0,0) = 1 \cdot e_1 + 0 \cdot e_2 $$

so

$$\gamma_{111} = 1 \text{ and } \gamma_{112} = 0$$

Similarly

$$e_1 \cdot e_2 = (1,0) \cdot (0,1) = (0,0) = (0,0) + (0,0) = 0 \cdot e_1 + 0 \cdot e_2 $$

so

$$\gamma_{121} = 0 \text{ and } \gamma_{122} = 0$$

BUT … in your analysis we have $$\gamma_{121} = 0$$ and $$\gamma_{122} = 1$$

Can you please explain what is wrong in my analysis above?

Peter
 
Peter said:
" … I am assuming (rather tentatively) that multiplication is of the form $$(x_1, y_1) \cdot (x_2, y_2) = (x_1 x_2, y_1 y_2)$$ (componentwise) … "

Why?

Peter said:
" … $$e_1 \cdot e_2 = (1,0) \cdot (0,1) = (0,0) = (0,0) + (0,0) = 0 \cdot e_1 + 0 \cdot e_2$$ … "
This is incorrect. You are assuming that:

$(a,b)\cdot(c,d) = (ac,bd)$.

We have:

$(a,b)\cdot(c,d) = (ae_1 + be_2)(ce_1 + de_2) = ac(e_1e_1) + ad(e_1e_2) + bc(e_2e_1) + bd(e_2e_2)$

Now:

$e_1e_1 = \gamma_{111}e_1 + \gamma_{112}e_2$

so to evaluate this, we need to know the multiplicative constants BEFOREHAND. Let's find them by looking at the parent algebra these come from:

$e_1e_1 = (E_{11} + E_{22})(E_{11} + E_{22}) = E_{11}E_{11} + E_{11}E_{22} + E_{22}E_{11} + E_{22}E_{22}$

$= \begin{bmatrix}1&0\\0&0\end{bmatrix}\begin{bmatrix}1&0\\0&0\end{bmatrix} + \begin{bmatrix}1&0\\0&0\end{bmatrix}\begin{bmatrix}0&0\\0&1\end{bmatrix} + \begin{bmatrix}0&0\\0&1\end{bmatrix}\begin{bmatrix}1&0\\0&0\end{bmatrix} + \begin{bmatrix}0&0\\0&1\end{bmatrix}\begin{bmatrix}0&0\\0&1\end{bmatrix}$

$= \begin{bmatrix}1&0\\0&0\end{bmatrix} + \begin{bmatrix}0&0\\0&0\end{bmatrix} + \begin{bmatrix}0&0\\0&0\end{bmatrix} + \begin{bmatrix}0&0\\0&1\end{bmatrix}$

$= \begin{bmatrix}1&0\\0&0\end{bmatrix} + \begin{bmatrix}0&0\\0&1\end{bmatrix}$

$= E_{11} + E_{22} = e_1 = 1e_1 + 0e_2$, so $\gamma_{111} = 1,\gamma_{112} = 0$.

(Basically, in the product $E_{ij}E_{km}$ if $j = k$ we get $E_{im}$, otherwise we get the 0-matrix).

We have to do the same thing for $e_1e_2$:

$e_1e_2 = \gamma_{121}e_1 + \gamma_{122}e_2$

and working through the matrix multiplication we have:

$e_1e_2 = (E_{11} + E_{22})(E_{21} - E_{12}) = E_{11}E_{21} - E_{11}E_{12} +E_{22}E_{21} - E_{22}E_{12}$

$= 0 - E_{12} + E_{21} - 0 = E_{21} - E_{12} = e_2 = 0e_1 + 1e_2$

so that $\gamma_{121} = 0$ and $\gamma_{122} = 1$.

The other multiplicative constants can be verified the same way. It's worthwhile to do this for yourself, once.

Peter said:
" … $$\gamma_{121} = 0 \text{ and } \gamma_{122} = 0$$ … BUT … in your analysis we have $$\gamma_{121} = 0$$ and $$\gamma_{122} = 1$$ … Can you please explain what is wrong in my analysis above? … "

This product is NOT "the usual component-wise product" of $\Bbb R \times \Bbb R$.

So what we wind up with is:

$(a,b)(c,d) = (ae_1 + be_2)(ce_1 + de_2) = ac(e_1e_1) + ad(e_1e_2) + bc(e_2e_1) + bd(e_2e_2)$

$= ac(\gamma_{111}e_1 + \gamma_{112}e_2) + ad(\gamma_{121}e_1 + \gamma_{122}e_2) + bc(\gamma_{211}e_1 + \gamma_{212}e_2) + bd(\gamma_{221}e_1 + \gamma_{222}e_2)$

$= (ac\gamma_{111} + ad\gamma_{121} + bc\gamma_{211} + bd\gamma_{221})e_1 + (ac\gamma_{112} + ad\gamma_{122} + bc\gamma_{212} + bd\gamma_{222})e_2$

$= (ac + 0 + 0 - bd)e_1 + (0 + ad + bc + 0)e_2 = (ac - bd,ad + bc)$.
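
This computation is exactly the general recipe "multiply coordinates through the table of constants," which the following sketch (pure Python; the dictionary of nonzero $\gamma$'s is transcribed from the table above, zeros omitted) carries out:

```python
gamma = {(1, 1, 1): 1, (1, 2, 2): 1, (2, 1, 2): 1, (2, 2, 1): -1}

def mult(p, q):
    """Compute (a,b)(c,d) as the sum over i,j,k of p_i q_j gamma_ijk e_k."""
    out = [0.0, 0.0]
    for i in (1, 2):
        for j in (1, 2):
            for k in (1, 2):
                out[k - 1] += p[i - 1] * q[j - 1] * gamma.get((i, j, k), 0)
    return tuple(out)

a, b, c, d = 3.0, 1.0, 2.0, 5.0
assert mult((a, b), (c, d)) == (a*c - b*d, a*d + b*c)   # the complex product
print(mult((0.0, 1.0), (0.0, 1.0)))                     # (-1.0, 0.0): i^2 = -1
```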

Note that if $b = d = 0$, everything goes away except the $ac$ term, that is, it IS true that:

$(a,0)(c,0) = (ac,0)$. This gives us an embedding of $\Bbb R$ as: $a \mapsto (a,0)$, which is a ring-homomorphism (indeed a monomorphism).

Recall that in our "parent algebra" this is:

$ae_1 + 0e_2 = a(E_{11} + E_{22}) = \begin{bmatrix}a&0\\0&0\end{bmatrix} + \begin{bmatrix}0&0\\0&a\end{bmatrix} = aI$ so that this is the "same embedding" we had before (such matrices are often called "scalar matrices").

What is more interesting is that if $a = c = 0$ and $b = d = 1$, we have:

$(0,1)(0,1) = (-1,0)$ <--note how the 0 "jumps coordinates". Evidently in this algebra, $(0,1)$ is a square root of $(-1,0)$, which we are identifying with $-1$.

I urge you to think about what kind of mapping of $\Bbb R^2$ we get by sending:

$(x,y)\mapsto (0,1)(x,y) = (-y,x)$. Draw some pictures. It may help to think of this map as the composition of two reflections:

$(x,y) \mapsto (y,x)$ (reflecting about the diagonal line $x = y$) <---this one first.

$(a,b) \mapsto (-a,b)$ (reflecting about the $y$-axis) <---this one last (they don't commute).

Use this to convince yourself "dilation-rotations" are a good name for complex numbers (considered as a subalgebra of the algebra of real 2x2 matrices).
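
The two reflections can be checked as matrices (a sketch assuming `numpy`; note the order of composition matters):

```python
import numpy as np

swap   = np.array([[0.0, 1.0], [1.0, 0.0]])    # (x,y) -> (y,x), applied first
negate = np.array([[-1.0, 0.0], [0.0, 1.0]])   # (a,b) -> (-a,b), applied last
J      = np.array([[0.0, -1.0], [1.0, 0.0]])   # multiplication by (0,1)

assert np.array_equal(negate @ swap, J)        # a 90-degree rotation
assert not np.array_equal(swap @ negate, J)    # the reflections don't commute
```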
 
Deveno said:
" … This is incorrect. You are assuming that: $(a,b)\cdot(c,d) = (ac,bd)$ … This product is NOT "the usual component-wise product" of $\Bbb R \times \Bbb R$ … "
Thanks, Deveno … at first glance this post looks EXTREMELY helpful for gaining an understanding of k-algebras …

Obviously I have to be very careful about assuming things regarding the multiplication …

Working through your post now ...

Thanks again,

Peter
 
Deveno said:
" … This is incorrect. You are assuming that: $(a,b)\cdot(c,d) = (ac,bd)$ … This product is NOT "the usual component-wise product" of $\Bbb R \times \Bbb R$ … "
Hi Deveno … thanks for your extensive help ...

Sorry to be slow, but I need your further assistance ...

In your example of $$\mathbb{R}^2$$ as an $$\mathbb{R}$$-algebra, you write:

" … … Given the basis $$\{ e_1, e_2 \} = \{ (1,0), (0,1) \} $$ … … "

So we have $$e_1 = (1,0)$$ and $$e_2 = (0,1)$$ … …

Then you write:

" … … $$e_1e_1 = ( E_{11} + E_{22} ) ( E_{11} + E_{22} )$$ … … "

so, that is $$e_1 = ( E_{11} + E_{22} )$$

BUT … …

$$ E_{11} + E_{22} = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} + \begin{bmatrix}0&0\\0&1\end{bmatrix} = \begin{bmatrix}1&0\\0&1\end{bmatrix} $$

while, as indicated above, $$e_1 = (1,0)$$ ?

Can you clarify?

Peter
 
Peter said:
" … you write: '… $$e_1e_1 = ( E_{11} + E_{22} ) ( E_{11} + E_{22} )$$ …' so, that is $$e_1 = E_{11} + E_{22}$$. BUT … $$E_{11} + E_{22} = \begin{bmatrix} 1&0 \\ 0&0 \end{bmatrix} + \begin{bmatrix}0&0\\0&1\end{bmatrix} = \begin{bmatrix}1&0\\0&1\end{bmatrix}$$ while, as indicated above, $$e_1 = (1,0)$$? Can you clarify? … "

Given a vector space ($F$-module) $V$, one can realize this vector space in many different ways. For example, one can embed an isomorph of $\Bbb R^2$ in $\Bbb R^4$ a LOT of different ways: just send any basis $\{v_1,v_2\}$ to two linearly independent vectors of $\Bbb R^4$.

Here, that isomorphism, of $\Bbb C$ as a 2-dimensional $\Bbb R$-algebra into $\text{Mat}_2(\Bbb R)$, is given by:

$e_1 = (1,0) \mapsto E_{11} + E_{22} = \begin{bmatrix}1&0\\0&1\end{bmatrix}$

$e_2 = (0,1) \mapsto E_{21} - E_{12} = \begin{bmatrix}0&-1\\1&0\end{bmatrix}$.

Of course, it might not be obvious that $\{E_{11}+E_{22},E_{21}-E_{12}\}$ is linearly independent, but if:

$c_1(E_{11}+E_{22}) + c_2(E_{21}-E_{12}) = 0$ (the 0-matrix)

then we have a linear combination of the $E_{ij}$ that sums to 0, and since the $E_{ij}$ are linearly independent, it follows that $c_1 = c_2 = 0$.
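
One way to check this independence mechanically (a sketch assuming `numpy`) is to flatten each matrix into a vector of $\Bbb R^4$ and compute the rank:

```python
import numpy as np

v1 = np.eye(2)                                 # E_11 + E_22
v2 = np.array([[0.0, -1.0], [1.0, 0.0]])       # E_21 - E_12
M = np.stack([v1.ravel(), v2.ravel()])         # 2 x 4 coordinate matrix
assert np.linalg.matrix_rank(M) == 2           # only c_1 = c_2 = 0 works
```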

The important thing about this embedding is not only is it a VECTOR-SPACE isomorphism, but it preserves the RING multiplication as well.

So we have "the same algebraic structure" (complex numbers), as on one hand "two-dimensional things" (2-vectors in the plane), and (equivalently) as "four-dimensional things" (2x2 matrices).

If we extend by linearity, we obtain the algebra monomorphism:

$\phi: \Bbb C \to \text{Mat}_2(\Bbb R)$ given by:

$\phi(a+bi) = \begin{bmatrix}a&-b\\b&a\end{bmatrix}$

Again, it is instructive to work out that $\phi$ is both a ring-homomorphism and a vector space homomorphism with trivial kernel.

In particular, since $\Bbb C$ is isomorphic to $\phi(\Bbb C)$, it follows that for these two basis choices, we have the same multiplication constants, so it really doesn't matter if we talk about $e_j$ or $\phi(e_j)$, they "act the same".
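
Finally, a quick numerical confirmation that $\phi$ respects both structures (a sketch assuming `numpy` and Python's built-in `complex` type):

```python
import numpy as np

def phi(z):
    """phi(a + bi) = [[a, -b], [b, a]]."""
    return np.array([[z.real, -z.imag], [z.imag, z.real]])

z, w = 3 + 1j, 2 + 5j
assert np.allclose(phi(z) @ phi(w), phi(z * w))   # ring multiplication
assert np.allclose(phi(z) + phi(w), phi(z + w))   # vector space addition
assert np.allclose(phi(1 + 0j), np.eye(2))        # unital
```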
 
