Number theory proof: Unique determination of a recursively defined function

a7d07c8114

Homework Statement



Use the principle of mathematical induction to show that the value at each positive integer of a function defined recursively is uniquely determined.

I understand the problem and its related concepts. However, I feel that my attempt at a proof doesn't use the principle of mathematical induction properly. I would appreciate any insight into the solution.

This problem is taken from Rosen's Elementary Number Theory and Its Applications, page 16.

Homework Equations



The Principle of Mathematical Induction. A set of positive integers that contains the integer 1 and the integer n+1 whenever it contains n must be
the set of all positive integers.

We say the function f is defined recursively if the value of f at 1 is specified and if a rule is provided for determining f(n+1) from f(n).
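As a concrete illustration of this definition (my own example, not from Rosen), the factorial is defined recursively: the value at 1 is specified, and a rule gives f(n+1) in terms of f(n).

```python
# Illustrative example: factorial defined recursively in the book's sense.
# f(1) is explicitly specified, and f(n+1) is determined from f(n).

def f(n: int) -> int:
    """f(n) = n! for positive integers n."""
    if n == 1:
        return 1            # the value at 1 is specified
    return n * f(n - 1)     # rule: f(n+1) = (n+1) * f(n)

print([f(n) for n in range(1, 6)])  # [1, 2, 6, 24, 120]
```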

The Attempt at a Solution



Let there exist a recursively defined function f, whose domain is all n such that n is a positive integer. For n=1, the value of f(1) is arbitrarily specified, and therefore the value of f has a unique determination at 1.

Assume there is a value of n such that f(n) is uniquely determined. Assume f(n+1) is not uniquely determined. Then the determination of f(n+1) must be the same as the determination of f(m), where m is a positive integer less than n+1. Assume n+1=2. Then m=1, since 1 is the only number in the domain of f less than 2. Then f(2) must have the same determination as f(1). But f(1) is defined to be uniquely determined, leading to a contradiction. Therefore, f(2) must also be uniquely determined.

Assume n+1=3. Then m must be either 1 or 2...

This process can be continued infinitely. Therefore f(n+1) must be uniquely determined. By the Principle of Mathematical Induction, it can therefore be concluded that f(n) is uniquely determined at all positive integer values of n.
 
I think the biggest difficulty is that your usage of f is exceedingly awkward. Taken literally, it's actually nonsense. However, I imagine you have a correct idea in your head and are struggling with how to turn that idea into something you can manipulate logically.

I suggest you try and ponder just what you're trying to say when you use the symbol f, and find a better way to express yourself.
 
Thanks for the reply. I'm not sure I understand very well though. Could you perhaps link me to an explanation or a proof that illustrates your point? I did feel that it was awkward writing the proof the way I did, but I'm so far a complete novice at proof writing in general.
 
The problem is not on page 16. It is on page 28. :) But anyway, here is the way I would do it.

Suppose that f(n) is defined recursively by specifying the value of f(1) and a rule for finding f(n+1) from f(n). We will prove by mathematical induction that such a function is well-defined. First, note that f(1) is well-defined since this value is explicitly stated. Now assume that f(n) is well-defined. Then f(n + 1) also is well-defined since a rule is given for determining this value from f(n).
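To see what this uniqueness claim amounts to in practice (a sketch of my own, not from the book): any two functions built from the same base value and the same rule must agree at every positive integer, regardless of how the recursion is unwound. The names `base`, `rule`, `f_recursive`, and `f_iterative` here are hypothetical, chosen for illustration.

```python
# Sanity check of what the induction argument asserts: two different
# implementations of the same recursive definition agree everywhere.

def base() -> int:
    return 1                      # the specified value f(1)

def rule(n: int, prev: int) -> int:
    # f(n+1) determined from f(n): here f(n+1) = f(n) + 2n + 1,
    # so the unique function satisfying it is f(n) = n^2.
    return prev + 2 * n + 1

def f_recursive(n: int) -> int:
    return base() if n == 1 else rule(n - 1, f_recursive(n - 1))

def f_iterative(n: int) -> int:
    value = base()
    for k in range(1, n):
        value = rule(k, value)
    return value

# Both satisfy the same definition, so they must coincide.
assert all(f_recursive(n) == f_iterative(n) for n in range(1, 50))
print(f_recursive(10))  # 100
```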
 
Am I allowed to do that? It seems like just restating the definition of a recursive function to me...though I'm very new at this, so I don't know if that's what you're meant to do. Or maybe I'm just interpreting "uniquely defined" the wrong way.
 
Page 25 of my copy. :smile:


Mainly, the thing I find lacking in both of your proofs is that you start with "let f be defined by the recursive definition..." and formulate the rest of the proof in terms of f.

On the face of it, there are two serious difficulties with this approach, which amount to completely ignoring what is to be proven:
1. That there exists an f satisfying the recursive definition needs to be proven; and
2. Given such an f, every function satisfying the recursive definition is equal to f.
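Point 2 is where the induction naturally lives. A sketch of that half (my own wording, using ##r## for the given rule): suppose ##f## and ##g## both satisfy the definition, i.e. ##f(1) = g(1) = a## and ##f(n+1) = r(n, f(n))##, ##g(n+1) = r(n, g(n))##. Base case: ##f(1) = a = g(1)##. Inductive step: if ##f(n) = g(n)##, then ##f(n+1) = r(n, f(n)) = r(n, g(n)) = g(n+1)##. By the Principle of Mathematical Induction, ##f(n) = g(n)## for every positive integer ##n##, so the function is unique.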

I think it's likely the actual ideas both of you have in your head really do address these points, and probably correctly so.




That said, upon reflection I realize the point of this is to be an exercise in mathematical induction, and not to be an exercise in foundations, so it's probably fine to be imprecise on these points. I withdraw my objection.
 