Question in Landau's Foundations of Analysis

In summary, the conversation covers the axioms and early theorems about the natural numbers in Landau's Foundations of Analysis. The current focus is on proving Theorem 4, which defines addition recursively; induction is used to establish existence and uniqueness. The importance of the recursion theorem in understanding the natural numbers is also discussed.
  • #1
Ackbach
So far in Landau, we've covered the following:

Axiom 0: The "equals sign" is an equivalence relation (reflexive, symmetric, and transitive).
Axiom 1: $1$ is a natural number.
Axiom 2: For each $x$ there exists exactly one natural number, called the successor of $x$, which will be denoted by $x'$. If $x=y$, then $x'=y'$.
Axiom 3: $x'\not=1$.
Axiom 4: If $x'=y'$, then $x=y$.
Axiom 5: Let there be given a set $M$ of natural numbers, with the following properties:
  • $1$ belongs to $M$.
  • If $x$ belongs to $M$, then so does $x'$.
Then $M$ contains all the natural numbers.

Theorem 1: If $x\not=y$, then $x'\not=y'$.
Theorem 2: $x'\not=x$.
Theorem 3: If $x\not=1$, then there exists a (unique) $u$ such that $x=u'$.
Theorem 4: To every pair of numbers $x,y,$ we may assign in exactly one way a natural number, called $x+y,$ such that
  • $x+1=x'$ for every $x$, and
  • $x+y'=(x+y)'$ for every $x$ and $y$.

I'm in the proof of Theorem 4, and uniqueness is straightforward. I'm on existence. You let $M$ be the set of all natural numbers $x$ for which it is possible to make this definition. It's straightforward to show that $1\in M$. I'm trying to show that $(x\in M) \implies (x'\in M)$. So we get to assume that there exists an $x+y$ for all $y$ such that $x+1=x'$ and $x+y'=(x+y)'$. We need to show that $x'+1=(x')'$ (trivial, by definition) and that $x'+y'=(x'+y)'$. It's this last equation that is giving me conniptions. Landau has the series of equations
$$x'+y'=(x+y')'=((x+y)')'=(x'+y)'.$$
I can see the middle equality: it follows from the assumption and Axiom 2. But I cannot see why the first equality is true, nor can I see why the last equality is true. That is, why are
\begin{align*}
x'+y'&=(x+y')' \qquad \text{and} \\
((x+y)')'&=(x'+y)'?
\end{align*}

Important point: Associativity is Theorem 5 (proof depends on Theorem 4), and Commutativity is Theorem 6 (proof depends on Theorem 4's proof). So we're not allowed to use associativity or commutativity.

Thanks so much for your time!
 
  • #2
I agree, this does look strange. The definition of addition is by recursion on $y$, so I think the proof should be by induction on $y$ as well. The fact that $x'+y=(x+y)'$ is then proved by another induction on $y$.

I'll think about this more tomorrow and will write if I have something to add or if you have more questions.
 
  • #3
The way I typically see this accomplished is by "currying" (so you have to use induction (axiom 5) twice).

That is, instead of defining $+: \Bbb N \times \Bbb N \to \Bbb N$ *directly*, we first define, for a FIXED $x \in \Bbb N$ (we can do this because, by Axiom 1, $\Bbb N$ is not empty), the function $g_x : \Bbb N \to \Bbb N$ such that:

$g_x(1) = x'$
$g_x(y') = (g_x(y))'$.

Now let $S$ be the set of all $y \in \Bbb N$ for which $g_x(y)$ is a well-defined natural number. Clearly, $1 \in S$ by Axiom 2.

Suppose that $y \in S$, so that $g_x(y) \in \Bbb N$ is uniquely defined. Then, again by Axiom 2, $(g_x(y))' \in \Bbb N$ is uniquely defined, and hence $y' \in S$. So by Axiom 5, $S = \Bbb N$.

Now let $T$ be the set of all $x \in \Bbb N$ for which $g_x$ is a well-defined (unique) function. It is clear that $1 \in T$, since $g_1: y \mapsto y'$ (that is, $g_1$ *is* the successor function). Now the next bit is the tricky part: given that $g_x$ is well-defined, how do we show that $g_{x'}$ is?

Well, we show (by another induction on $y$, using the defining equations of $g_{x'}$ and $g_x$) that $g_{x'}(y) = (g_x(y))'$ for all $y \in \Bbb N$. The RHS is well-defined by hypothesis, so $g_{x'}$ is well-defined too. Thus the assignment $x \mapsto g_x$ is well-defined on all of $\Bbb N$ ($T = \Bbb N$), by Axiom 5.

In "your" terms, we have shown for all $x,y \in \Bbb N$:

$x' + y = (x + y)'$ (this is your last equation, after applying Axiom 4 to remove the outer primes).

So, substituting in $y'$ for $y$ (since $y' \in \Bbb N$ if $y$ is), we have:

$x' + y' = (x + y')'$ (this is exactly your first equation).

We had to use induction twice: once on $x$, once on $y$.
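If it helps to see the currying concretely, here is a minimal Python sketch (purely illustrative, not Landau's formalism: naturals are modeled as Python ints $\geq 1$ and the successor as $+1$) of $g_x$ built from exactly the two defining equations:

```python
def succ(n):
    # the successor x', modeled here as n + 1 on ints >= 1
    return n + 1

def g(x):
    # the curried map x |-> g_x, with g_x(1) = x' and g_x(y') = (g_x(y))'
    def g_x(y):
        if y == 1:
            return succ(x)           # g_x(1) = x'
        return succ(g_x(y - 1))      # g_x(y') = (g_x(y))', writing y = (y - 1)'
    return g_x

print(g(3)(4))  # 7, i.e. 3 + 4
```

The point is that $g(x)$ hands back a *function* of $y$; that is the currying.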

The theorems on multiplication also involve "double induction", and, although the inductions may be disguised, the theorems on associativity require "triple induction".

The "real" reason addition is commutative, is if we regard successorship as a function $s$, and consider "adding to $x$" as a function (we called it $g_x$), we have from our definition:

$g_x \circ s = s \circ g_x$ (that is: $x + y' = (x + y)'$), so THESE TWO FUNCTIONS COMMUTE.

In more primitive terms, adding is defined to be "recursive counting", and successorship is 1-1, so it doesn't matter if we count "left-to-right" or "right-to-left" (successorship has an inverse, predecessorship). If we didn't have an "initial number" (one), we would automatically get a(n abelian) group, with subtraction defined from predecessorship ("going back" reverses "going forward").

I note in passing it makes no substantial difference if one includes 0 in the natural numbers (things just shift "one over").

What I wrote before is *one* instance of the RECURSION THEOREM:

If $A$ is any set, and $f: A \to A$ is any function, with $a \in A$, then there is a UNIQUE function $g: \Bbb N \to A$ such that:

$g(1) = a$
$g(s(y)) = f(g(y))$.

This theorem actually *characterizes* the natural numbers, in the sense that any triple $(X,u,s)$ (here $X$ is thought of as a set which is a "stand-in" for $\Bbb N$, $u$ is a stand-in for $1$, and $s$ is a stand-in for successorship) which satisfies a corresponding recursion theorem (with the appropriate substitutions) yields an isomorphic structure, on which we can perform the analogue of induction.

Although this sounds "complicated", it really shouldn't scare you: what we are saying is that the natural numbers are "recursively defined":

$1 \stackrel{s}{\to} 2\stackrel{s}{\to} 3\stackrel{s}{\to} 4 \stackrel{s}{\to} \dots$

so of course functions defined on them admit recursive definitions (coming up with the function $f$ in the recursion theorem sometimes takes a bit of cleverness).

In this particular example (my $g_x$), we are taking $A = \Bbb N$, and $f = s$, and $a = s(x)$. Addition is "recursive counting".
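As a hedged illustration (my own sketch, again modeling $\Bbb N$ as Python ints $\geq 1$), the recursion theorem is just a higher-order function: feed it $f$ and $a$, and it hands back the unique $g$ with $g(1) = a$ and $g(y') = f(g(y))$. Taking $A = \Bbb N$, $f = s$, and $a = s(x)$ recovers $g_x$; taking a completely different $A$ and $f$ shows how general the theorem is:

```python
def recurse(f, a):
    # sketch of the recursion theorem: build the unique g with g(1) = a, g(y') = f(g(y))
    def g(y):
        return a if y == 1 else f(g(y - 1))
    return g

# A = N, f = successor, a = s(3) = 4: this is g_3, "adding to 3"
g3 = recurse(lambda n: n + 1, 4)
print(g3(4))       # 7, i.e. 3 + 4

# A = strings, f appends a tally mark, a = "|": a completely different instance
tally = recurse(lambda w: w + "|", "|")
print(tally(5))    # |||||
```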

It really makes no difference if you take $1 = \{\emptyset\}$ and define $s(x) = x \cup \{x\}$; you get the same diagram:

$1 \to s(1) \to s(s(1)) \to s(s(s(1))) \to \dots$

(We don't even need to start with $\{\emptyset\}$; we could start with $\{a\}$, where $a$ is any set, like "elephant". Mathematically speaking, though, we can prove the *existence* of $a = \emptyset$, whereas the mathematical existence of "elephant" is, currently, unknown, although I believe Terence Tao is working on the problem.)
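Here is a tiny sketch of that set-theoretic encoding (my own illustration, using Python frozensets):

```python
def s(x):
    # successor as s(x) = x ∪ {x}
    return x | frozenset({x})

one = frozenset({frozenset()})   # 1 = {∅}
two = s(one)                     # {∅, {∅}}
three = s(two)                   # {∅, {∅}, {∅, {∅}}}

print(len(one), len(two), len(three))  # 1 2 3
```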
 
  • #4

Next step: distill this down so that 9th-graders can understand it. :D
 
  • #5
Well, the notion of currying is going to be a bit over the top for many ninth-graders. But it's like this:

$s(x,y) = x+y$ is a function that takes TWO arguments ($x$, and $y$).

It's often easier to think of this "one variable at a time".

So, first, we can look at the function that (for a fixed $x$) takes $y \mapsto x+y$.

In other words, instead of "adding $x$ and $y$ (together)", we start with some (known) $x$, and "add $y$ to it".

For every $x$, we're going to get a "different function":

$y \mapsto 1+y$ - "add $y$ to 1"
$y \mapsto 2+y$ - "add $y$ to 2"

and so on.

We can call the function that we get from our known $x$ as $g_x$, or some other similar notation.

The "trick" is then seeing we can get a function of $x$, which returns a function of $y$:

$g:x \mapsto g_x$, where $g_x(y) = x+y$.

That is, $s(x,y) = [g(x)](y)$.

To recap: $s$ does this:

Input two numbers, output one number.

Our "currying" does this:

Input one number, get a function. Input another number into the function, get a number. It's a two-stage process, the "result" has to wait on the second function, which depends on the first input. It's "chain reaction" rather than "mix 'em up".

This isn't the only scenario where this kind of approach pops up. It occurs in linear algebra (scalar multiplication can be regarded as a function $F \times V \to V$, that is, combine a scalar and a vector to get a vector, or it can be regarded as an "action": given a scalar $a$, you get the function $a(-)$, which maps $v$ to $av$); it also occurs in calculus, with integrals:

With $F(x) = \int\limits_{a}^x f(t)\ dt$

for each different $a$, you get a different $F$, and you don't get a number until you plug in $x = b$.
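In programming terms (a side note of mine, not anything in Landau), this "fix one argument, get a function" pattern is just partial application; for instance, with the scalar multiplication example:

```python
from functools import partial

def scale(a, v):
    # scalar multiplication F x V -> V, with vectors modeled as tuples
    return tuple(a * vi for vi in v)

double = partial(scale, 2)   # fix the scalar a = 2: now a map V -> V
print(double((1, 5, -3)))    # (2, 10, -6)
```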

Of course, it's perfectly fine to "sweep this under the rug", and just start an induction proof on $y$, and while "nested" in it, start another induction proof on $x$. Then there's no "abstract machinery" to confuse the students, but the only way you can really "justify" it is that "hey, it works!" (which may be enough).
 
  • #6
Ok, in class today, we tried the double-induction approach, with no fancy notation. The issue is that, deep down in the details, we seem to constantly need commutativity and associativity, which are the next two theorems! Here is our work:

We want to show that $x+y'=(x+y)'$ for all $x$ and $y$. Now, I tried induction on $x$ as the "outer" induction; you had suggested $y$ as the outer induction. Perhaps that clears up these difficulties; I'm not sure. In any case, here's what we have.

Let $M$ be the set of all $x$ such that the theorem holds.
(x) Base Case: WTS: $1+y'=(1+y)'$ for all $y$. Let $N$ be the set of all $y$ such that $1+y'=(1+y)'$.
(y) Base Case: WTS: $1+1'=(1+1)'$. Now the RHS is $(1')'$. But I'm not at all sure how we can get this to be what the LHS is without assuming in our definition that $1+x=x'$ for all $x$. We defined $x+1=x'$. How can we do this without commutativity? We ended up absorbing this into our definition: $x+1=1+x=x'$ for all $x$. So this holds. Now:
(y) Inductive Step: Assume $1+y'=(1+y)'$. We WTS $1+(y')'=(1+y')'$. Now the RHS is $((1+y)')'=((y')')'$, by assumption. If we use our strengthened assumption that addition by one on the left is equivalent to the successor, then this equality holds also. So $1+y'=(1+y)'$ for all $y$.

(x) Inductive Step: Assume $x+y'=(x+y)'$. WTS: $x'+y'=(x'+y)'$. Let $N$ be the set of all $y$ such that this holds.
(y) Base Case: WTS: $x'+1'=(x'+1)'$. Now the RHS is certainly $((x')')'$. It is certainly intuitive that the LHS is the same. But why?
(y) Inductive Step: Assume $x'+y'=(x'+y)'$. WTS: $x'+(y')'=(x'+y')'$. Now the RHS is certainly $((x'+y)')'$, by assumption. Why is this equal to the LHS?

Now $x$ and $y$ are not symmetric in the formula we are trying to prove. Is this just a whole lot easier if $y$ is the outer induction, and $x$ the inner? Should we try that approach instead?
 
  • #7
Before I dive in, let me give an example of how a sum is actually calculated given your definition, so you see the difficulty involved.

We will take as given we already "know" that:

$1' = 2$
$2' = 3$
$3' = 4$
$4' = 5$
$5' = 6$
$6' = 7$

and we wish to evaluate $3+4$.

By our definition $3 + 4 = 3 + 3' = (3 + 3)'$. Thus we need to evaluate $3+3$.

We have:

$3 + 3 = 3 + 2' = (3 + 2)'$, and:

$3 + 2 = 3 + 1' = (3 + 1)' = (3')'$. So, working backwards, we have:

$3 + 4 = (3 + 3)' = ((3+2)')' = (((3+1)')')' = (((3')')')' = ((4')')' = (5')' = 6' = 7$.

Note the asymmetry in how we calculated: we worked exclusively on the $4$, and did nothing with the $3$.
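Here is that same unwinding as a short Python sketch (purely illustrative; naturals as ints $\geq 1$, successor as $+1$), with a trace showing that we only ever peel apart the second argument:

```python
def succ(n):
    return n + 1

def add(x, y, depth=0):
    # x + 1  = x'        (first defining clause)
    # x + y' = (x + y)'  (second defining clause; only y is ever reduced)
    print("  " * depth + f"{x} + {y}")
    if y == 1:
        return succ(x)
    return succ(add(x, y - 1, depth + 1))

print(add(3, 4))  # traces 3 + 4, 3 + 3, 3 + 2, 3 + 1, then prints 7 = (((3')')')'
```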

In short, the defining formula:

$x + y' = (x + y)'$ is not symmetric in $x$ and $y$, so it definitely *matters* how we break it down by induction.

Base case ($y$), we want to show $x + 1' = (x + 1)'$, for all $x \in \Bbb N$.

Base case ($x$), we want to show that $1 + 1' = (1 + 1)'$. This is true by DEFINITION. That is, to the pair $(1,1')$, it is possible to assign only a single value to the function $s(x,y') = x + y'$, and we are saying that value is $(1')'$, which is some element of $\Bbb N$ (for ANY function $f:S \times S \to S$ we can define $f(s_1,s_2)$ by declaring what its value is).

Let me explain a bit further-we have to "start somewhere", and we have to say "how to get somewhere else". We want to show that:

$s(x,y) = x+y$ is a well-defined function from pairs of natural numbers to the natural numbers. Our "seed value" for natural numbers is the initial number, $1$, and to comply with rule #2:

$x + y' = (x + y)'$

this has to hold for $y = 1$. So we define it so that happens (this is a common trick in mathematics). So no matter what else we prove about our (as yet unproven, existence-wise) function $s(x,y)$, we have that $s(1,1) = 2$, AND $s(1,2) = 3$. The first is our "start", and the next is what we are going to leverage in our induction (we need BOTH).

(Case for $x$ continued) Now we need to show that $s(x,1)$ "makes sense" (and is uniquely defined) for ALL $x \in \Bbb N$.

So, we assume that $s(x,1)$ makes sense, and we want to show that we get a unique value for $s(x',1)$.

In your terms, given that $1' + 1 = (1+1)'$ AND that $x + 1 = x'$, we want to show that $x' + 1 = (x')'$.

But this is trivial, that is what we simply DEFINE $x' + 1$ to BE.

This is our function:

$1 \mapsto 1'$
$1' \mapsto (1')'$
...
$x \mapsto x'$

this is a well-defined unique function of $x$, namely, the successor function. In symbolic form:

$s(-,1) \stackrel{\text{def}}{\equiv} (-)'$

This concludes the "base case" on $y$. Let me amplify this a bit: ALL the above shows is that we have (by definition!):

$x + 1 = x'$ for any $x \in \Bbb N$, so it's CERTAINLY TRUE for $x'$, whenever $x \in \Bbb N$ (since thus $x' \in \Bbb N$), and CERTAINLY TRUE for $x = 1'$.

(See how doing it "this way" makes the argument almost "invisible"?).

So now we come to "the real meat":

All we need to do, is show that $s(x,y') \in \Bbb N$ if $s(x,y)$ is. Since you've ALREADY shown uniqueness, we just have to show this takes "SOME VALUE" in $\Bbb N$.

So, assign the value, for $s(x,y')$ of:

$s(x,y') = (s(x,y))'$.

In YOUR symbols:

$x+y' = (x+y)'$.

By induction (since $s$ is then defined on all of $\Bbb N \times \Bbb N$), it follows that:

$x +(y')' = (x+y')' = [(x+y)']'$.

So look:

Assume $x' + y = (x + y)'$ (this is certainly true for $y = 1$).

Then:

$x' + y' = (x' + y)'$ (by our definition of "plus", using $x'$ instead of $x$, which is legal, since we are only restricting $y$)

$= [(x+y)']'$ (by our induction hypothesis)

$= (x + y')'$ (using our definition of "plus", again), and there you have it.
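As a finite sanity check (not a proof, just a sketch with the recursive definition coded up), the whole chain $x'+y' = (x'+y)' = ((x+y)')' = (x+y')'$ can be verified for small numbers:

```python
def succ(n):
    return n + 1

def add(x, y):
    # the two defining clauses: x + 1 = x', x + y' = (x + y)'
    return succ(x) if y == 1 else succ(add(x, y - 1))

for x in range(1, 20):
    for y in range(1, 20):
        assert (add(succ(x), succ(y))
                == succ(add(succ(x), y))
                == succ(succ(add(x, y)))
                == succ(add(x, succ(y))))
print("chain checked for x, y < 20")
```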

Here's the sticky part, and it's perhaps too subtle for ninth-graders to appreciate:

The fact that the data:

$s(x,1) = x'$
$s(x,y') = (s(x,y))'$ determine a FUNCTION $s$ (your "existence") is a non-trivial consequence of the recursion theorem I quoted in another post. Landau takes such particular care in this instance because he wants the existence to be exhibited directly, rather than prove the recursion theorem.
 
  • #8
I'm able to follow some of your post, but not all of it. You have to remember that I'm not a Mozart-level mathematician, but a Haydn-level one. That is, I never was a prodigy, and I've had to work extremely hard just to get where I am now. I am a step-by-step mathematician, and I am usually unable to make gigantic leaps of insight and collapse a whole chain of reasoning into one step (unless it's high-school algebra). I also think that there are some fundamental concepts about this theorem that I'm lacking.

Ok, let's back up a second. Here is the exact word-for-word restatement of Theorem 4:

Theorem 4, and at the same time Definition 1: To every pair of numbers $x,y,$ we may assign in exactly one way a natural number, called $x+y$ ($+$ to be read "plus"), such that
  1. $x+1=x'$ for every $x$
  2. $x+y'=(x+y)'$ for every $x$ and every $y$.
$x+y$ is called the sum of $x$ and $y$, or the number obtained by addition of $y$ to $x$.

One thing that's always bothered me about this statement is that it's not entirely clear what's the definition, and what's the theorem. Is everything the definition and everything the theorem? It seems really odd that $x+1=x'$ should be a theorem. That strikes me as definition. But, since we don't have commutativity yet, this is saying that addition on the right is equivalent to the successor operation.

Or is the statement itself all definition, but we have to prove that it's well-defined? That is, do we have to show existence and uniqueness of the definition
\begin{align*}
x+1&=x' \\
x+y'&=(x+y)' \; ?
\end{align*}
 
  • #9
Ok, I think I've got it now! We are not trying to prove the formulae; we are trying to prove existence and uniqueness, so that the definition proposed is well-defined. So the two formulae are the definition of addition. We don't have to prove that the formulae hold. We need to show that, for every pair $x,y,$ we can show a unique number exists that satisfies both the formulae. Is this right? If so, the equation in Landau that troubled me in the OP is no longer a mystery.

Landau has the following in the existence part of the proof (part B):

Now we will show that for each $x$ it is actually possible to define $x+y$ for all $y$ in such a way that $x+1=x'$ and $x+y'=(x+y)'$ for every $y$.

Let $M$ be the set of all $x$ for which this is possible (in exactly one way, by [the uniqueness proved in, Ackbach] A).

I) For $x=1$, the number $x+y=y'$ is as required, since $x+1=1'=x',$ and $x+y'=(y')'=(x+y)'$. Hence $1$ belongs to $M$. [If you work this through, it all makes total sense, substituting $1$ for $x$, Ackbach.]

II) Let $x$ belong to $M$, so that there exists an $x+y$ for all $y$. Then the number $x'+y=(x+y)'$ is the required number for $x'$, since $x'+1=(x+1)'=(x')'$ [Again, this is very straight-forward, Ackbach] and
\begin{align*}
x'+y'&=(x+y')' \quad \text{[This works because we just defined $x'+y=(x+y)'$ for all $y$, Ackbach]} \\
&=((x+y)')' \quad \text{[By inductive hypothesis, which the author omitted, and by Axiom 2, Ackbach]} \\
&=(x'+y)' \quad \text{[Again, because we defined $x'+y=(x+y)'$ for all $y$, Ackbach]}.
\end{align*}
Hence $x'$ belongs to $M$. Therefore, $M$ contains all [natural numbers, Ackbach] $x$. QED.
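For what it's worth, Landau's induction on $x$ can be mirrored in a small sketch (my own illustration, with naturals as Python ints $\geq 1$): the function "add to $x$" is built as a family, with step I) giving the $x=1$ case and step II) building the case for a successor from the previous one:

```python
def succ(n):
    return n + 1

def add_to(x):
    # Landau's construction of y |-> x + y, by induction on x
    if x == 1:
        return succ                   # step I:  1 + y = y'
    prev = add_to(x - 1)              # here x = w', with w = x - 1 already handled
    return lambda y: succ(prev(y))    # step II: w' + y = (w + y)'

print(add_to(3)(4))  # 7, i.e. 3 + 4
```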
 
  • #10

YES!

The idea is, you want to show that for every $x \in \Bbb N$, the assignment:

$y \mapsto x + y$ *is* a function. The simplest name for this function is probably "$x+$".

Here is a proof of the recursion theorem, of which this is a special case:

Let $g: X \to X$ be *any* function, with $a \in X$. Then there exists a UNIQUE function:

$f: \Bbb N \to X$ such that:

$f(1) = a$.
$f(x') = g(f(x))$.

First of all, let $S = \{y \in \Bbb N: f \text{ restricted to } \{1,\dots,y\} \text{ is a well-defined function into } X\}$.

Clearly, $1 \in S$ (any function from a singleton set is *completely* determined by a single value, in this case, $a$).

Suppose $y \in S$. This means that $f(y)$ is an unambiguous element of $X$. Since $g$ is a function defined on all of $X$, and $f$ takes its values in $X$, the composite $g \circ f$ is defined wherever $f$ is (we are taking it "as given" that functions are composable; proving this fact would take us too far afield, but rest assured, it *is* true). Since $f(1),\dots,f(y) \in X = \text{dom }g$, we have that $g \circ f$ is well-defined on $\{1,\dots,y\}$. In particular, we have $(g \circ f)(y) \in X$, that is: $f(y') = g(f(y)) \in X$, unambiguously defined. Hence $y' \in S$, and therefore $S = \Bbb N$.

This shows *existence*, which is the hard part.

Now suppose we have TWO such functions, $f_1$ and $f_2$. Since they are distinct, there is some minimal element of $\Bbb N$ on which they disagree (this depends on the "well-ordering" of $\Bbb N$, which can be shown by induction as well). Call this element $m$.

So $f_1(m) \neq f_2(m)$, but if $k < m$, then $f_1(k) = f_2(k)$.

Note that $m \neq 1$, for we must have $f_1(1) = f_2(1) = a$.

Thus we may assume that $m = k'$, for some $k \in \Bbb N$.

So $f_1(m) = f_1(k') = g(f_1(k)) = g(f_2(k)) = f_2(k') = f_2(m)$, contradiction. So $f$ is unique.

Your problem here is where $X = \Bbb N$, and $g = $ the successor function, and $f$ is $x+$.

As I mentioned in another post, this is a "common trick" in algebra (and math in general), to regard a binary operation as a unary operation by fixing one (typically the left) operand. For example, the vector sum:

$u + v$

can be seen as $L_u(v)$, where $L_u$ is "translation by $u$" (which is how we DRAW vector addition).
 
  • #11
Thanks for persevering, Deveno. As you can easily see, I have a thick skull, but it is penetrable from time to time.
 
  • #12
Ackbach said:
Thanks for persevering, Deveno. As you can easily see, I have a thick skull, but it is penetrable from time to time.

Well, math is like a ditch: the deeper you go, the harder it is to dig.
 

