# Finite Dimensional Division Algebras - Bresar Lemma 1.1

1. Nov 16, 2016

### Math Amateur

I am reading Matej Bresar's book, "Introduction to Noncommutative Algebra" and am currently focussed on Chapter 1: Finite Dimensional Division Algebras ... ...

I need help with an aspect of the proof of Lemma 1.1 ... ...

In the above text, at the start of the proof of Lemma 1.1, Bresar writes the following:

" ... ... Since the dimension of $D$ is $n$, the elements $1, x, \ ... \ ... \ , x^n$ are linearly dependent. This means that there exists a non-zero polynomial $f( \omega ) \in \mathbb{R} [ \omega ]$ of degree at most $n$ such that $f(x) = 0$ ... ... "

My question is as follows:

How exactly (rigorously and formally) does the linear dependence of the elements $1, x, \ ... \ ... \ , x^n$ allow us to conclude that there exists a non-zero polynomial $f( \omega ) \in \mathbb{R} [ \omega ]$ of degree at most $n$ such that $f(x) = 0$ ... ?

Help will be much appreciated ...

Peter

=====================================================

In order for readers of the above post to appreciate the context of the post I am providing pages 1-2 of Bresar ... as follows ...

#### Attached Files:

- Bresar - Page 2.png
2. Nov 16, 2016

### zinq

This follows from the definition of linear independence (or dependence). You may want to remind yourself of that definition and write down exactly what it means in this case.

3. Nov 17, 2016

### Math Amateur

Thanks zinq ... but I need some further help ... ...

The linear dependence of the elements $1, x, \ ... \ ... \ , x^n$ means that we can find elements $c_0, c_1, \ ... \ ... \ , c_n \in \mathbb{R}$ , not all zero, so that:

$c_0 \cdot 1 + c_1 \cdot x + \ \ldots \ + c_n \cdot x^n = 0$

BUT ... how do we proceed to demonstrate that this implies that there exists a non-zero polynomial $f( \omega ) \in \mathbb{R} [ \omega ]$ of degree at most $n$ such that $f(x) = 0$ ... ...

Can you help further ... ?

Peter

4. Nov 17, 2016

### Staff: Mentor

The coefficients $c_i$ come from a non-trivial linear combination $0 = c_0 \cdot 1 + c_1 \cdot x + \ldots + c_n \cdot x^n$, which must exist for dimensional reasons.

Then we define $f(\omega) := c_0 \cdot 1 + c_1 \cdot \omega + \ldots + c_n \cdot \omega^n \in \mathbb{R}[\omega]$. Its coefficient tuple satisfies $(c_0 , c_1 , \ldots , c_n) \neq 0$, so $f$ is a non-zero polynomial of degree at most $n$, and $f(x) = 0$ is precisely the dependence relation above. The non-trivial relation exists because we have $n+1$ "vectors" $1, x, \ldots, x^n$ in a space of dimension $n$.
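A concrete instance (my addition, not from Bresar's text): take $D = \mathbb{C}$ as a $2$-dimensional real division algebra and $x = i$.

```latex
% Worked example: D = C, n = 2, x = i.
% The powers 1, x, x^2 = 1, i, -1 are three vectors in a
% 2-dimensional real space, hence linearly dependent:
%   1 * 1 + 0 * i + 1 * i^2 = 0.
\[
  c_0 = 1, \quad c_1 = 0, \quad c_2 = 1
  \qquad \Longrightarrow \qquad
  f(\omega) = \omega^2 + 1 ,
\]
\[
  f(i) = i^2 + 1 = 0 .
\]
```

So the dependence relation is literally read off as the coefficients of the polynomial.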

Btw.: Funny choice of variable names. Usually $x$ is the indeterminate and $\omega$ a number. Here it's the other way around.
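The same computation can be checked numerically. Here is a minimal sketch (not from the thread), assuming we model $\mathbb{C}$ as $\mathbb{R}^2$ with basis $(1, i)$ and take $x = i$:

```python
import numpy as np

# Model D = C as a 2-dimensional real algebra with basis (1, i),
# and take x = i.  The powers 1, x, x^2 are then three vectors
# in R^2, hence linearly dependent.
x = 1j
powers = np.array([[(x ** k).real, (x ** k).imag] for k in range(3)])

# A non-trivial c with sum_k c[k] * x^k = 0 is a null vector of
# the 2x3 matrix powers^T; its last right-singular vector spans
# that null space (and has norm 1, so c is not the zero tuple).
c = np.linalg.svd(powers.T)[2][-1]

# f(omega) = c[0] + c[1]*omega + c[2]*omega^2 is a non-zero
# polynomial of degree at most 2 with f(x) = 0, as the lemma asserts.
f_at_x = sum(c[k] * x ** k for k in range(3))
print(np.round(c, 4), abs(f_at_x))
```

The printed coefficient vector is proportional to $(1, 0, 1)$, i.e. $f(\omega) = \omega^2 + 1$ up to scaling, matching the hand computation.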

5. Nov 17, 2016

### Math Amateur

Thanks fresh_42 ... clear and helpful ...