# Group homomorphism

1. Aug 16, 2011

### Sumanta

Hi,

I understand that group theory textbooks define Hom(G, H) as a group under the operation (g + h)(u) = g(u) + h(u). I wanted to know whether there is any notion of a homomorphism $\Sigma_{I} g_{i}$, where each $g_{i}$ is a homomorphism and $I$ is an infinite index set. If so, how is it defined?

Thx

2. Aug 16, 2011

### daveyp225

Just as an example, there are objects called free abelian groups which are groups isomorphic to $\bigoplus_{i \in I} \mathbb{Z}$ for some index set I.

So I think it makes sense to talk about a homomorphism $g=\sum_I g_i$ where $g_i:G_i\to \mathbb{Z}$ for some groups $G_i$ (or some groups can probably map to a "smaller" direct sum of $\mathbb{Z}$'s). It should be a relatively straightforward proof.

edit: I'm not sure if I answered the correct question. Are you asking if arbitrary sums of homomorphisms converge to a homomorphism? If I is finite, then yes, it is true. I wouldn't even know where to begin in the infinite case.
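To make the free abelian group remark concrete, here is a small sketch (my own illustration, not from the thread): elements of $\bigoplus_{i \in I} \mathbb{Z}$ can be modelled as finitely supported maps from indices to integer coefficients, and a homomorphism to $\mathbb{Z}$ is determined by where it sends each generator $e_i$. The dict representation and the particular generator images are assumptions for the demo.

```python
# Elements of the direct sum  ⊕_{i∈I} Z  as dicts {index: coefficient}
# with finite support.  A homomorphism to Z is determined by the images
# of the generators e_i.

def hom(images):
    """Return the homomorphism sending generator e_i to images.get(i, 0)."""
    def g(element):
        return sum(coeff * images.get(i, 0) for i, coeff in element.items())
    return g

def add(a, b):
    """Group operation in the direct sum: componentwise addition."""
    out = dict(a)
    for i, c in b.items():
        out[i] = out.get(i, 0) + c
    return out

g = hom({0: 2, 1: -1})   # e_0 -> 2, e_1 -> -1, all other generators -> 0
x = {0: 3, 1: 1}         # the element 3*e_0 + e_1
y = {1: 4, 5: 2}         # the element 4*e_1 + 2*e_5

# The homomorphism property g(x + y) = g(x) + g(y):
print(g(add(x, y)) == g(x) + g(y))  # True
```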

Last edited: Aug 16, 2011
3. Aug 16, 2011

### HallsofIvy

I can't make grammatical sense of that question. I think you are trying to say that the set of all homomorphisms from G to H can be made into a group by defining the operation g + h by (g + h)(u) = g(u) + h(u), where the sum on the right is the group operation in H.

Certainly, given an operation, you can repeat it a finite number of times, but that is not the point here. The operation in any group is a "binary" operation that applies to two members of the group at a time.
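A minimal sketch of the pointwise operation on Hom(G, H) described above, taking G = H = $(\mathbb{Z}, +)$ (my choice for the demo; homomorphisms $\mathbb{Z} \to \mathbb{Z}$ are exactly the multiplication maps $x \mapsto nx$):

```python
# (g + h)(u) = g(u) + h(u), using the group operation in H.
def plus(g, h):
    return lambda u: g(u) + h(u)

g = lambda u: 3 * u   # multiplication by 3, a homomorphism Z -> Z
h = lambda u: -5 * u  # multiplication by -5

s = plus(g, h)
# s is again a homomorphism: s(a + b) = s(a) + s(b)
print(s(2 + 7) == s(2) + s(7))  # True
print(s(1))                     # -2, i.e. s is multiplication by 3 + (-5)
```

Note that this needs H to be abelian: otherwise $u \mapsto g(u)h(u)$ need not be a homomorphism.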

4. Aug 16, 2011

### Sumanta

Hi,

Actually I think daveyp225 understood my question.

Suppose you have a finite product of groups, say $\Pi G_{i}$, where the index set is finite, and suppose from each $G_i$ we are given a homomorphism $g_{i}$ to $H$.

So now you could define $g: \Pi G_{i} \to H$ by

$g(u) = \Sigma g_{i}(u_{i})$.

Can this be extended to an infinite index set? I.e., is $\Sigma g_{i}(u_{i})$ a valid concept at all?
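For the finite case, here is a quick sketch of this construction with $G_i = H = (\mathbb{Z}, +)$ and $H$ abelian (the particular maps $g_i$ are my own choices for illustration):

```python
# Combine homomorphisms g_i : G_i -> H into g : Prod G_i -> H,
# defined by g(u) = sum_i g_i(u_i).
def product_hom(homs):
    def g(u):  # u is a tuple (u_1, ..., u_n)
        return sum(gi(ui) for gi, ui in zip(homs, u))
    return g

gs = [lambda x: 2 * x, lambda x: -x, lambda x: 7 * x]
g = product_hom(gs)

u = (1, 4, 2)
v = (3, -1, 0)
uv = tuple(a + b for a, b in zip(u, v))  # componentwise operation in the product

print(g(uv) == g(u) + g(v))  # True: g is a homomorphism
```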

Thx

5. Aug 17, 2011

### daveyp225

Here's what I think. In special cases, yes. In general, though, $g_i(k) = h_i$ is an element of $H$, but you'd first have to know that $\sum_i h_i$ even makes sense to write down. Then you can ask whether the sum makes sense on all of the domain. So to work in general, your space needs to have (among other things) an idea of an "accumulation point", as in point-set topology. In addition, you'll need convergence in Hom(G, H) to make sense.

Here's one example: let $G_i = (\mathbb{R},+)$, $H=(\mathbb{R},+)$, $I = \{1, 2, 3, \dots\}$.

Define $g_i(x) = \frac{x}{2^i}$. Then each $g_i$ is a homomorphism from $\mathbb{R}$ to $\mathbb{R}$, and $\sum_{i \ge 1} g_i = \mathrm{id}_{\mathbb{R}}$, since $\sum_{i \ge 1} x/2^i$ converges to $x$ for every $x$.

edit:

Oops, I didn't account for non-constant sequences. As far as I can tell, if you want $\sum_i g_i(x_i)$ to make sense, you would need convergence of $\sum_i g_i$ to a continuous linear function and convergence of the sequence $\{x_i\}$. As you can see, this steps outside of just "group theory" very quickly. Perhaps there is some purely algebraic view of this, but someone with more expertise would have to chime in.
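A numeric sketch of the example above: with $g_i(x) = x/2^i$ for $i \ge 1$, the partial sums $\sum_{i=1}^{n} g_i(x) = x(1 - 2^{-n})$ converge pointwise to $x$. The "infinite sum" of these homomorphisms is the identity only in this analytic, limit-of-partial-sums sense, not via any purely group-theoretic operation:

```python
# Partial sums of the series sum_{i>=1} x / 2**i, which converge to x.
def partial_sum(x, n):
    """sum_{i=1}^{n} g_i(x) with g_i(x) = x / 2**i."""
    return sum(x / 2**i for i in range(1, n + 1))

x = 10.0
for n in (1, 5, 20, 50):
    print(n, partial_sum(x, n))  # approaches x = 10.0 as n grows

print(abs(partial_sum(x, 50) - x) < 1e-9)  # True
```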

Last edited: Aug 17, 2011