Understanding Direct Products of Vector Spaces: Cooperstein's Example 1.17

SUMMARY

This discussion centers on the direct product of vector spaces as presented in Bruce Cooperstein's "Advanced Linear Algebra." Specifically, it clarifies how an n-tuple in ℝⁿ can be represented as a function f: {1, ..., n} → ℝ, aligning with Cooperstein's notation. The conversation highlights the equivalence between n-tuples and indexed sets of values, emphasizing that Cooperstein's definition allows for the direct product to be defined even when the indexing set I is infinite. The discussion also touches on the concept of F-linear maps in the context of polynomial functions.

PREREQUISITES
  • Understanding of vector spaces and their properties
  • Familiarity with functions and mappings in mathematics
  • Knowledge of polynomial functions and their representations
  • Basic concepts of linear algebra, including linear maps
NEXT STEPS
  • Study the concept of direct products in vector spaces using Cooperstein's notation
  • Explore the properties of F-linear maps and their applications in linear algebra
  • Investigate the relationship between polynomial functions and their n-tuple representations
  • Learn about infinite indexing sets and their implications in vector space theory
USEFUL FOR

Students and educators in mathematics, particularly those focusing on linear algebra, vector space theory, and functional analysis. This discussion is also beneficial for anyone looking to deepen their understanding of polynomial functions and linear mappings.

Math Amateur
In Bruce Cooperstein's book Advanced Linear Algebra, he gives the following example on page 12 in his chapter on vector spaces (Chapter 1) ... (see attachment 4886).

I am finding it difficult to fully understand this example ...

Can someone give an example using Cooperstein's construction ... using, for clarity, his notation ... ?

If we take $$I = \{ 1, 2, \dots, n \}$$ ... then I am used to thinking that the direct product of vector spaces $$U_1, U_2, \dots, U_n$$ is the set of all n-tuples $$(u_1, u_2, \dots, u_n)$$ with addition and scalar multiplication defined componentwise ...

BUT ... how do we square this with Cooperstein's definition/construction of the direct product of a set of vector spaces ... ?

Hope someone can help clarify the above issues ... ...

Peter
 
Consider an $n$-tuple in $\Bbb R^n$, say $u = (x_1,\dots,x_n)$.

We can regard this as a function $f:\{1,\dots,n\}\to \Bbb R$, namely, the function given by:

$f(i) = x_i$.

So, in Cooperstein's notation, if we call the set $\{1,\dots,n\}$, say, $I$, we have:

$\Bbb R^n = \prod_{i \in I}\ \Bbb R = \Bbb R \times \cdots \times \Bbb R$ (an $n$-fold direct product).

If we have $v = (y_1,\dots,y_n)$, we can similarly define $v$ as the function $g:I \to \Bbb R$ given by:

$g(i) = y_i$.

Normally, we think of $u+v$ as being defined as "component-wise" addition, that is:

$u + v = (x_1 + y_1,\dots,x_n + y_n)$.

If we define $f+g$ by $(f+g)(i) = f(i) + g(i)$ as Cooperstein does, we get that $f+g$ maps $i$ to $x_i + y_i$. So it accomplishes the same thing.
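
For concreteness, here is a small numerical illustration (with numbers chosen only for this example): take $n = 3$, $u = (1,2,3)$ and $v = (4,5,6)$, so that $f(1) = 1,\ f(2) = 2,\ f(3) = 3$ and $g(1) = 4,\ g(2) = 5,\ g(3) = 6$. Then

$$(f+g)(1) = 5, \quad (f+g)(2) = 7, \quad (f+g)(3) = 9,$$

which is exactly the tuple $u + v = (5,7,9)$.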

The advantage of Cooperstein's definition is that if $I$ is not a FINITE set, we can still define the direct product of the (infinitely many) spaces indexed by $I$ ($I$ is called the *indexing* set).

Convince yourself that if $I = \Bbb N$ and $U_i = F$ for all $i \in I$, then the subspace of the resulting product consisting of the functions with only finitely many nonzero values (the direct sum) is isomorphic as an $F$-vector space to $F[x]$; the full product corresponds instead to formal power series. It is common to refer to the image $f(i)$ as the "$i$-th coordinate" of a vector when all the $U_i$ coincide with the underlying field, and as the "$i$-th component" when the $U_i$ are subspaces of the product.
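
For instance (a small illustration, indexing $\Bbb N$ from $0$ for convenience): the function $h:\Bbb N \to F$ with $h(0) = 1$, $h(2) = 3$, and $h(i) = 0$ for every other $i$ corresponds under this identification to the polynomial $1 + 3x^2$.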
 
Deveno said:
Consider an $n$-tuple in $\Bbb R^n$, say $u = (x_1,\dots,x_n)$. We can regard this as a function $f:\{1,\dots,n\}\to \Bbb R$, namely, the function given by $f(i) = x_i$. ...

Hi Deveno ... thanks for the help ...

Until I received your post I was having real trouble seeing how a function $$f$$ could be equivalent to an $$n$$-tuple or an indexed set of values ... then after reading through your post I realized that the set of function values $$f(i)$$ is, of course, indexed by the set $$I$$ in the same way as an $$n$$-tuple ... and so the function $$f$$ gives essentially the same information as an $$n$$-tuple ...

Just a further question, however ... you wrote:

"Convince yourself that if $I = \Bbb N$, and $U_i = F$ for all $i \in I$, that the resulting space we get is isomorphic as a $F$-vector space to $F[x]$. ... ... "

I am having trouble seeing this ... can you help further ... it looks like a really interesting point ...

Hope you can help ...

Peter
 
To any polynomial $f(x) = a_0 + a_1x + a_2x^2 + \cdots + a_nx^n$ we can assign the sequence

$(a_0,a_1,a_2,\dots,a_n,0,0,\dots)$

(the indexing set is infinite because the degree $n$ may be arbitrarily large; the coordinates beyond $a_n$ are all zero).
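
For example (an illustration with $F = \Bbb R$): the polynomial $2 + 3x - x^3$ gets assigned the sequence $(2,\ 3,\ 0,\ -1,\ 0,\ 0,\ \dots)$.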

Your job is to show that, if we call this sequence $[f]$, the mapping:

$f(x) \mapsto [f]$ is $F$-linear.
 
Deveno said:
To any polynomial $f(x) = a_0 + a_1x + a_2x^2 + \cdots + a_nx^n$ we can assign the sequence $(a_0,a_1,a_2,\dots,a_n,0,0,\dots)$ ... Your job is to show that, if we call this sequence $[f]$, the mapping $f(x) \mapsto [f]$ is $F$-linear.
Thanks Deveno ...

Can you clarify exactly what is meant by $F$-linear ... ?

Peter
 
By definition, an $F$-linear map is an $F$-module homomorphism; in other words, it preserves the module addition and the action of $F$ on the underlying $F$-module $V$. (When $F$ is a field, an $F$-module is just an $F$-vector space, so this is the usual notion of a linear transformation.) This is often written:

$T(\alpha u + \beta v) = \alpha T(u) + \beta T(v)$.

For example, on the vector space of real polynomials, $\Bbb R[x]$, the map $D$ given by $D(f(x)) = f'(x)$

(here $f'(x)$ is the *formal derivative* of $f$: if $f(x) = a_0 + a_1x +\cdots + a_nx^n$, then $f'(x) = a_1 + 2a_2x +\cdots + na_nx^{n-1}$)

is an $\Bbb R$-linear map, since:

$D(f(x) + g(x)) = (f+g)'(x) = f'(x) + g'(x) = D(f(x)) + D(g(x))$

and:

$D(\alpha f(x)) = (\alpha f)'(x) = \alpha(f'(x)) = \alpha D(f(x))$.
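
Putting this together with the earlier exercise, here is a sketch of the verification left to the reader (using the identification of a polynomial with its coefficient sequence from above): write $f(x) = \sum_i a_ix^i$ and $g(x) = \sum_i b_ix^i$, where all but finitely many coefficients are zero. Then

$$[f + g] = (a_0 + b_0,\ a_1 + b_1,\ \dots) = (a_0, a_1, \dots) + (b_0, b_1, \dots) = [f] + [g]$$

and, for $\alpha \in F$,

$$[\alpha f] = (\alpha a_0,\ \alpha a_1,\ \dots) = \alpha (a_0, a_1, \dots) = \alpha [f],$$

so the map $f(x) \mapsto [f]$ is indeed $F$-linear.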
 
