# Find F: R->R satisfying F(x+y)=F(x)+F(y) and F(xy)=F(x)F(y)

1. Sep 22, 2009

### iN10SE

1. The problem statement, all variables and given/known data

Let F:R->R be a function such that, for all x,y belonging to R, we have F(x+y)=F(x)+F(y) and F(xy)=F(x)F(y). Prove that F is one of the following two functions:
i> f(x)=0
ii> f(x)=x
(Hint: at some point in your proof, the fact that every positive real number is the square of a real number will be valuable.)
2. Relevant equations

3. The attempt at a solution

Let f(x) = x^n with n > 1. Then f(x+y) = f(x) + f(y) is not satisfied, so f(x) can't be a polynomial like that. Similarly f(x) can't contain e^x, log x, or any trigonometric function.
Now let f(x) = ax, where a ≠ 0, 1.
Then f(x+y) = a(x+y) = f(x) + f(y) is satisfied, but f(xy) = f(x)·f(y) is not: f(x^2) = a·x^2 while f(x)·f(x) = a^2·x^2. Now a = a^2 iff a = 0 or a = 1. Hence the proof.
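The a = a^2 step can be sanity-checked numerically. This is only an illustration, not a proof; the helper names and sample points below are my own:

```python
# Among the linear candidates f(x) = a*x, additivity holds for every a,
# while multiplicativity f(x*y) = f(x)*f(y) forces a == a**2, i.e. a in {0, 1}.
def is_additive(f, samples):
    return all(abs(f(x + y) - (f(x) + f(y))) < 1e-9 for x, y in samples)

def is_multiplicative(f, samples):
    return all(abs(f(x * y) - f(x) * f(y)) < 1e-9 for x, y in samples)

samples = [(0.5, 2.0), (-1.0, 3.0), (1.5, -2.5)]
for a in [0.0, 0.5, 1.0, 2.0]:
    f = lambda x, a=a: a * x
    print(a, is_additive(f, samples), is_multiplicative(f, samples))
# Both checks print True only for a = 0.0 and a = 1.0.
```

Of course, as pointed out below, passing such checks on samples says nothing about functions outside the linear family.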

My confusion is (as I am new to proof-oriented maths): I have actually never used the hint. So, did I skip any argument?


2. Sep 22, 2009

### Elucidus

The proffered proof shows some examples of what f(x) cannot be and gives two examples of what f(x) could be, but it does not prove that f(x) cannot be anything other than $(x \mapsto 0) \text{ or } (x \mapsto x)$.

--Elucidus

3. Sep 22, 2009

### iN10SE

Thanks, Elucidus.
OK... second try:

F(x+y) = F(x) + F(y).
Put y = 0:
F(x) = F(x) + F(0).
So either F(x) = 0 for all x,
or F(0) = 0 => F(x) has the form F(x) = x·g(x), where g(x) ≠ 0 for all x.

now I have to show g(x)=1 for all x.

From the relations between F(x) and g(x), it can be derived that
g(x+y) = (x·g(x) + y·g(y))/(x+y) and g(xy) = g(x)·g(y).
Put x = y = 0 in the second relation:
g(0) = g(0)·g(0).

Since g(x) ≠ 0 for all x, g(0) = 1.

Now I am stuck... but I guess I am pretty near?

4. Sep 23, 2009

### Elucidus

The following piece of putative logic is troubling:

$$\text{If } F(0) = 0 \text{ then either } F(x) = 0 \text{ for all } x \text{ or there exists } g(x) \text{ such that } g(x) \neq 0 \text{ for all } x \text{ and } F(x) = x \cdot g(x).$$

Take F(x) = sin(x). F(0) = 0 but F(x) is not always zero, nor does there exist g(x) so that g(x) is always nonzero and F(x) = xg(x).

It is possible to show that F(x) must be odd and F(0) = 0, but I don't think that implies that an x can be factored out of F(x).

--Elucidus

5. Sep 23, 2009

### iN10SE

Thanks. but any suggestion on how to proceed then?

6. Sep 23, 2009

### Caesar_Rahil

What if we substitute x = y?
Then F(2x) = 2F(x),
and also F(x^2) = (F(x))^2.
I think something to do with squares may come from here.

7. Sep 23, 2009

### Elucidus

Yes but sin(x)/x isn't always nonzero.

--Elucidus

8. Sep 23, 2009

### Caesar_Rahil

Well when is it zero??

9. Sep 23, 2009

### Caesar_Rahil

F(0)=0
F(1)=1 or F(1)=0
also F(2x)=2F(x)
Put x=1;
F(2)=2 or F(2)=0
you can try going on like that
Also, if F(x) is differentiable (it is not given, but still):
F'(x) = lim(h->0) [F(x+h) - F(x)] / h
= lim(h->0) F(h) / h
= constant (independent of x).
Since F'(x) is constant, F(x) is linear: F(x) = kx + c.
Put x = 0, 1;
you get F(x) = x or F(x) = 0.

Proof still not valid though

10. Sep 23, 2009

### Elucidus

At any nonzero multiple of pi.

--Elucidus

11. Sep 23, 2009

### Elucidus

For any integer k, F(kx) = kF(x) and specifically F(0) = 0 (as mentioned) and F(-x) = -F(x) (which makes it odd).

If one lets F(x) = x + G(x) then

xy + G(xy) = F(xy) = F(x)F(y) = [x + G(x)][y + G(y)] = xy + xG(y) + yG(x) + G(x)G(y)

indicates that G(xy) = xG(y) + yG(x) + G(x)G(y) and when y = 1

G(x) = xG(1) + G(x) + G(x)G(1) from which we get

0 = xG(1) + G(x)G(1) = G(1)[x + G(x)] = G(1)F(x).

So either G(1) = 0 or F(x) = 0 for all x.

If F(x) is not identically 0, then G(1) = 0, and since F(0) = 0, G(0) = 0 as well.

If it could be shown that G(x) is either 0 or -F(x) then we'd be in business.
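As a quick numeric spot-check of the substitution algebra above (my own throwaway choice of G; the identity is pure algebra, so any G should do):

```python
# Spot-check of the identity derived above:
#   F(x)*F(y) = x*y + x*G(y) + y*G(x) + G(x)*G(y)   where F(t) = t + G(t).
# G is an arbitrary test function; the identity holds for any G.
def G(t):
    return t**3 - 2.0 * t

def F(t):
    return t + G(t)

for x, y in [(0.5, 2.0), (-1.3, 0.7), (3.0, -4.0)]:
    lhs = F(x) * F(y)
    rhs = x * y + x * G(y) + y * G(x) + G(x) * G(y)
    assert abs(lhs - rhs) < 1e-6
print("identity holds on all sample points")
```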

Additionally, for $$x \geq 0, F(x) = F(\sqrt{x} \cdot \sqrt{x}) = F(\sqrt{x})^2 \geq 0.$$

So F(x) is nonnegative to the right of 0 and, by odd symmetry, nonpositive to the left of 0.
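In fact this sign information gives monotonicity: for x < y,

$$F(y) - F(x) = F(y - x) \geq 0$$

since y - x > 0, so F is nondecreasing.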

I'm not sure how to leverage this into a solution; I feel it's nearby, but it's eluding me. I'm not even sure if any of this is relevant.

--Elucidus

12. Sep 23, 2009

### Elucidus

I explored this approach as well, but F is not known to be differentiable (let alone continuous).

Secondly

$$\lim_{h \rightarrow 0} \frac{F(h)}{h} \text{ is not necessarily a constant.}$$

--Elucidus

13. Sep 23, 2009

### Office_Shredder

Staff Emeritus
Try to start off by showing that for any rational number p/q (p, q integers), F((p/q)·x) = (p/q)·F(x). Then if F(1) = c, we have F(p/q) = c·(p/q) for any rational number. Verify what F(1) can be.
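The rational-scaling step can be written out explicitly: repeated additivity gives $F(nx) = nF(x)$ for any positive integer $n$, so

$$q \, F\!\left(\frac{p}{q}\,x\right) = F\!\left(q \cdot \frac{p}{q}\,x\right) = F(px) = p\,F(x), \qquad \text{hence} \qquad F\!\left(\frac{p}{q}\,x\right) = \frac{p}{q}\,F(x).$$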

At that point you either assume F is continuous, or you make a very savvy argument (which eludes me at this point) to include the irrational numbers.

14. Sep 23, 2009

### Dick

It's eluding me too. I think I actually looked at this before, using my usual recourse when a problem is way too tough for me: I hunted through the sci.math forums. I THINK I found an entry by one of the brainiacs there saying that there ARE such functions which are not 0 or the identity. But they have to be constructed by axiom-of-choice type arguments, and you really can't find a concise example of one accessible to a finite brain. That's just a vague recollection, so I could be wrong. But I think the OP might be omitting a condition on the function F that would make this a lot easier.

15. Sep 23, 2009

### Elucidus

Consider $F(\frac{p}{q} \cdot x)$ when x = q (p, q integers). This shows that either $F(x) \equiv 0 \text{ or } F(\frac{p}{q}) = \frac{p}{q}$ (why?).

So F is either identically 0 or the identity on the rationals, and if F is known to be continuous, then F must be either identically 0 or the identity on R.

I believe the condition of continuity is necessary to prove the claim. As Dick mentioned, I vaguely recollect that there are bizarre discontinuous functions that satisfy the premises that are neither identically 0 nor the identity.

--Elucidus

EDIT: The first sentence should say $F(1) = 0 \text{ not }F(x) \equiv 0$.

Last edited: Sep 23, 2009
16. Sep 23, 2009

### Office_Shredder

Staff Emeritus
By the linearity and multiplicativity conditions, once you've proved F is the identity on the rationals (assuming it's non-zero, since that case is trivial), you can basically extend it to any algebraic number: F(a solution of p(x)) must be a solution of p(x). The transcendentals are a bit more of a mystery.

17. Sep 23, 2009

### Dick

There are a LOT more transcendentals than there are algebraic numbers, aren't there? Wish I could find a reference on this.

18. Sep 23, 2009

### Elucidus

I've managed to prove that if F(1) = 0 then F is periodic with period 1 and is therefore identically 0.

I also have that if F(1) is not 0, then F(x) = x for all rationals. But I haven't made the final jump to all real numbers yet (it may not be possible).

--Elucidus

19. Sep 23, 2009

### Dick

If F(1) = 0 then F(x) = F(1·x) = F(1)·F(x) = 0·F(x) = 0. Why do you need periodicity?

20. Sep 23, 2009

### Office_Shredder

Staff Emeritus
Using the properties given, F(1) non-zero means F(1)=1 and from that you can prove all algebraic numbers are mapped to themselves. I'm almost 100% sure you can't glean anything from the transcendentals this way, because all you can do is form rational polynomials with addition and multiplication, and the algebraic numbers are exactly those that you can find as roots to rational polynomials.

New idea: Suppose F(pi) = c. What other values of x have we determined F(x) for? Obviously any rational multiple of pi, and any linear combination of integral powers of pi. I think that's it. So you can probably choose a basis of R over Q (here's the axiom of choice), make an equivalence relation based on which guys are multiples of each other, and then on the set of equivalence classes define F to be whatever the hell you want by defining F(basis vector)