# Interesting properties of nested functions

by Matt Benesi
Tags: functions, interesting, nested, properties
 P: 66
The properties arise from infinitely nested functions such as:
$$x=\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}}$$
You can solve it algebraically to verify that x is equal to the nested function (all of the following functions can be solved in a similar manner). Simply raise both sides to the nth power:
$$x^n=x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}$$
Divide through by $x^{n-1}$:
$$x=\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}$$
What we will do is explore the properties of finitely iterated nestings. The variable a denotes the number of nestings. For this example, a=4:
$$f(x,n,a)=f(x,n,4)=\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}}}}}$$
The interesting properties arise when we subtract a finitely nested function from the number it equals at infinite nestings (x). With this particular variety of nesting, redefine
$$f(x,n,a) = x-\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}}}}}$$
(shown here with a=4 nestings). This difference approaches $n^{-a}x\ln{x}$ as a gets larger, and it approaches that value more quickly for higher n and x. In particular, $f(x,n,a)/f(x,n,a+1)$ approaches n as a increases.

There are many multiplicative nested equations we can try, such as:
$$x=\log_B [ B^x\log_B [B^x \log_B [B^x....]]]$$
so
$$f(x,B,a)=x-\log_B [ B^x\log_B [B^x \log_B [B^x....]]]$$
which has the interesting property:
$$\dfrac{f(x,B,a)}{f(x,B,a+1)}\to x \ln B$$
Of course, there are whole other types of nested functions using addition/subtraction in addition to multiplication (if you mix them). You can use cosine and $\cos^{-1}$ together, $x^n$ and $x^{1/n}$, and other combinations. The next formulas approach the derivative of the inner function when taking $\dfrac{f(...,a)}{f(...,a+1)}$ for higher a.
For this one, the inner function is $x^n$:
$$f(x,n,a)=x-\sqrt[n]{x^{n}-x+\sqrt[n]{x^{n}-x+\sqrt[n]{x^{n}-x+\sqrt[n]{x^{n}-x+...}}}}$$
so
$$\dfrac{f(x,n,a)}{f(x,n,a+1)}\to nx^{n-1}$$
For this one, the inner function is $B^x$:
$$f(x,B,a)=x-\log_B [B^x-x+\log_B [B^x-x+\log_B [B^x-x+\log_B [...]]]]$$
so
$$\dfrac{f(x,B,a)}{f(x,B,a+1)}\to B^{x}\ln B$$
as a increases (or for larger B and x). In fact, all of the basic formulas that use $-x+$ (the repeated formula) appear to approach the derivative of the inner formula for f(...,a)/f(...,a+1), except in conditions where the functions and inverse functions used have limited well-defined domains (such as cosine and $\cos^{-1}$). Combining the functions results in approaching the derivative of the combined inner function:
$$f(x,B,a)=x-\sqrt[n]{\log_B [ B^{x^n}-x+ \sqrt[n] {\log_B [B^{x^n}-x+ \sqrt[n]{ \log_B [B^{x^n}-x+...}}}]]]$$
Note that it is set up to take $x^n$ first, then $B^{x^n}$ next (as if it were infinitely iterated, so that it is algebraically sound). The "[" symbol doesn't show up too clearly under the radical. Anyways... as with the other $-x+...$ functions, this one approaches the derivative of the inner function $B^{x^n}$:
$$\dfrac{f(x,B,a)}{f(x,B,a+1)}\to n\,{x}^{n-1}\,{B}^{{x}^{n}}\,\ln B$$
For all of these functions, the exact approached value of f(...,a)/f(...,a+1) (so $nx^{n-1}$ for $x^n$ as the inner function) can be taken to the ath power (the number of iterations) and multiplied by f(...,a) to create a constant. I haven't found a rational one yet, and not many of them are listed at the Inverse Symbolic Calculator, although for the $x^n-x+...$ one, with x and n both equal to 2, you end up with $\pi^2/4$. Note that using a value more or less than the value f(...,a)/f(...,a+1) approaches causes the calculated constant to diverge towards 0 or infinity as a increases [UNLESS you use the exact constant, such as the derivatives, or n for the first example, $x \ln B$ for the second, etc.].
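As a quick numerical check of the ratio claim for the $x^n$ inner function (a Python sketch; the names `nest` and `f` are illustrative, and the finite nesting is seeded with $g_0=0$):

```python
import math

def nest(x, n, a):
    # a-fold nesting: g_0 = 0, g_k = (x**n - x + g_{k-1})**(1/n)
    g = 0.0
    for _ in range(a):
        g = (x**n - x + g) ** (1.0 / n)
    return g

def f(x, n, a):
    # difference between x and the a-fold finite nesting
    return x - nest(x, n, a)

x, n = 2.0, 2
ratio = f(x, n, 15) / f(x, n, 16)
print(ratio)                          # approaches n * x**(n-1) = 4

# the associated constant f(...,a) * (n*x**(n-1))**a, claimed to be pi**2/4 here
const = f(x, n, 15) * 4.0**15
print(const, math.pi**2 / 4)
```

With x = n = 2 the ratio settles near 4 and the constant near $\pi^2/4 \approx 2.4674$, matching the claims in the post.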
P: 3,313
 Quote by Matt Benesi The properties arise from infinitely nested functions such as: $x=\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}}$ You can solve it algebraically to verify that x is equal to the nested function (all of the following functions can be solved in a similar manner). Simply raise both sides to the nth power: $x^n=x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}$ Divide through by $x^{n-1}$: $x=\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}$
That is not a convincing demonstration because it begins by assuming the equation which is to be proved.

One way to motivate your method is:

For $x \ge 0$ and $n$ a positive integer

[eq. 1] $x = \sqrt[n]{x^n} = \sqrt[n]{x^{n-1} x }$

Use eq. 1 to substitute for the term x in the right hand side of eq. 1. We obtain

[eq. 2] $x = \sqrt[n]{x^{n-1} \sqrt[n]{x^{n-1} x} }$

Use eq 1 to substitute for the term x in the right hand side of eq 2. We obtain

[eq. 3] $x = \sqrt[n]{x^{n-1} \sqrt[n]{x^{n-1} \sqrt[n]{x^{n-1} x }}}$

Continuing the above process suggests (but does not prove) that x is equal to the "infinitely nested" function given in your first equation.

Define a sequence of functions $\{f_i\}$ recursively as follows:

$f_1(x) = \sqrt[n]{x^{n-1}}$
For $j > 1$ , $f_j(x) = \sqrt[n]{x^{n-1} f_{j-1}(x) }$

Your claim is that for each $x \ge 0$, $\lim_{j \rightarrow \infty} f_j(x) = x$
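This claim can be spot-checked numerically; here is a short Python sketch (the name `f_seq` is illustrative; the closed form $f_j(x)=x^{(n^j-1)/n^j}$ follows by unrolling the recursion):

```python
def f_seq(x, n, j):
    # f_1(x) = (x**(n-1))**(1/n); f_j(x) = (x**(n-1) * f_{j-1}(x))**(1/n)
    val = (x ** (n - 1)) ** (1.0 / n)
    for _ in range(j - 1):
        val = (x ** (n - 1) * val) ** (1.0 / n)
    return val

# closed form: f_j(x) = x**((n**j - 1) / n**j), which tends to x as j grows
print(f_seq(3.0, 2, 30))   # very close to 3
```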
P: 66
 Quote by Stephen Tashi That is not a convincing demonstration because it begins by assuming the equation which is to be proved.
Hi Stephen, It's actually valid reasoning. Look over the nested radical page at Wolfram MathWorld, especially equations 9, 10, and 11, to pick up a better understanding of the reasoning.

Incidentally, this isn't the point I was bringing up, and as of yet I have not come up with a valid proof for the derivative conjecture, which is the most interesting property of nested functions. The derivative conjecture can be readily tested (although inductive "validity" is more than a little non-rigorous) and stands up to all tests I've thrown at it (within specific domains for various functions); I'm just drawing a blank (for now) on how to prove it.

As to this little gem:
$$x=\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}}$$
Take both sides to the nth power:
$$x^n=x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}}$$
Note that you still have the infinitely nested radical after the $x^{n-1}$ part. Once you divide both sides by $x^{n-1}$, you end up with your original equation of x = the infinitely nested radical.

I actually wrote out a rough proof of this particular radical's interesting tendency towards $$\ln{x} =\lim_{a\to\infty} \dfrac{x- \sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}} }{x} \times n^a$$ with a being the number of iterations of the radical.

As you probably know, you can "force multiply" (I think that's similar to how McGuffin put it) through radicals. For example:

$$\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}} = \sqrt[{n^2}]{x^{n^2-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}$$

An easier example:
$$2\, \sqrt[2]{2} = \sqrt[2]{2^2 \times 2}$$
As you can see, to force multiply a number into a radical, you take it to the power of the radical. When you have multiple radicals together, such as $2\, \sqrt[2]{2\sqrt[2]{2}} = \sqrt[2]{8\,\sqrt[2]{2}}= \sqrt[4]{8^2 \times 2}$, as you push the number through the radical, instead of leaving stacked radicals like $\sqrt[2]{\sqrt[2]{x}}$ you can concentrate them under a single radical (square root of a square root is the 4th root): $\sqrt[4]{x}$.
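The force-multiplication identities above can be verified directly; a small Python sketch using only the standard `math` module:

```python
import math

# push 2 through one radical: 2*sqrt(2) = sqrt(2**2 * 2)
assert math.isclose(2 * math.sqrt(2), math.sqrt(2**2 * 2))

# push through stacked radicals, concentrating them under a single 4th root:
# 2*sqrt(2*sqrt(2)) = sqrt(8*sqrt(2)) = (8**2 * 2)**(1/4)
a = 2 * math.sqrt(2 * math.sqrt(2))
b = math.sqrt(8 * math.sqrt(2))
c = (8**2 * 2) ** 0.25
assert math.isclose(a, b) and math.isclose(b, c)
print(a, b, c)
```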

For the above nested radical with a nestings, we end up with the following:
$$x^{\frac{n^a-1}{n^a}} = \sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}}$$
Of course:
$$x^{\frac{n^a-1}{n^a}} =x^{\frac{n^a}{n^a}} \times x^{\frac{-1}{n^a}}=x \times x^{\frac{-1}{n^a}}$$

Therefore dividing $x- x^{\frac{n^a-1}{n^a}}$ by x will result in:
$1-x^{\frac{-1}{n^a}}$. Multiplying this quantity by $n^a$ results in the following expression: $\left( 1-x^{\frac{-1}{n^a}} \right) \times n^a$

And if we take the limit as $n^a\to\infty$ we end up with a very familiar looking equation (one of the formulas for natural log of a number).

Back to the original problem of the lack of proof that x = the infinitely nested radical: look at $x^{\frac{n^a-1}{n^a}}$. Since $\lim_{n^a\to\infty} \dfrac{n^a-1}{n^a}=1$, we have $\lim_{n^a\to\infty} x^{\frac{n^a-1}{n^a}} = x^1 = x$
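The limit above can be checked numerically; a short Python sketch (the helper name `lnx_approx` is mine):

```python
import math

def lnx_approx(x, n, a):
    # (1 - x**(-1/n**a)) * n**a, which approaches ln(x) as n**a grows
    N = n ** a
    return (1.0 - x ** (-1.0 / N)) * N

print(lnx_approx(5.0, 2, 20), math.log(5.0))   # the two values nearly agree
```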

P: 3,313

 Quote by Matt Benesi Hi Stephen, It's actually valid reasoning. Look over the nested radical page at Wolfram MathWorld
It isn't valid reasoning. It isn't any kind of reasoning. In fact, on the (very interesting) Wolfram page that you gave there are only assertions given without proofs. If you mean that you are asserting something on the basis of an authoritative reference, I'll buy that.

For example, if we assert:

1 = 1 - 1 + 1 - 1 + 1......

We can add (1 - 1) to both sides obtaining

1 + 1 - 1 = (1 -1) + 1 - 1 + 1 - 1 + ....

1 = 1 - 1 + 1 - 1 + ....

Which is the original equation. But that doesn't establish the correctness of the original equation.

 Incidentally, this isn't the point I was bringing up, and as of yet I have not come up with a valid proof for the derivative conjecture, which is the most interesting property of nested functions.
I agree that if it turns out to be a property, it will be an interesting property.

But before we get to that, how general is the possibility of expressing the identity function as an infinite recursion of other functions?

Do you think things could be as general as this:

Hypothesis:

Let $g(x,y)$ be a real valued function of two real variables whose partial derivatives exist and are continuous. Further suppose that $g(x,x) = x$.

Conclusion (?):

Then there exists a constant k such that the sequence of functions $\{ f_i\}$ defined recursively by:
$f_1(x) = g(x,k)$
For $j > 1$, $f_j (x) = g(f_{j-1}(x),k)$

has the property that for all $x,$ $\lim_{j \rightarrow \infty} f_j(x) = x$

Of course, I'm just trying to make a generalization based on the case $g(x,y) = \sqrt[n]{x^{n-1} y}$ and $k = 1$.
P: 66
 Quote by Stephen Tashi It isn't valid reasoning. It isn't any kind of reasoning.
Am I assuming too many premises? If so, I apologize. I think the Wikipedia article on nested radicals has a very succinct explanation:

 Quote by Wikipedia Under certain conditions infinitely nested square roots such as $$x = \sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\cdots}}}}$$ represent rational numbers. This rational number can be found by realizing that $x$ also appears under the radical sign, which gives the equation $$x = \sqrt{2+x}$$
 Quote by Stephen Tashi In fact, on the (very interesting) Wolfram page that you gave there are only assertions given without proofs.
I thought there was enough information given in those 3 statements...
 I agree that if it turns out to be a property, it will be an interesting property.
I've tested it with various inverse-function pairs: exponential functions, trigonometric functions, and standard power functions ($x^n$). As long as we stay within the domain of the functions ($\arccos(x)$ or $\cos^{-1}(x)$ between -1 and 1, etc.), we end up finding the derivative (by a method I mentioned in a separate thread in the calculus subforum, which I just saw your reply in! Got some work to do, so I will save that for later).

 But before we get to that, how general is the possibility of expressing the identity function as an infinite recursion of other functions?
As long as you can work out an algebraic "identity function", such as the various ones mentioned, you can do the infinite recursion. Remember, you must take x through various steps and arrive back at x = the infinitely nested function (that is, you must arrive at the exact same equation after performing those steps).
 $f_1(x) = g(x,k)$ For $j > 1$, $f_j (x) = g(f_{j-1}(x),k)$
I'm drawing a blank about the k constant? Besides that, it's an understandable recursion formula (although I have my habit of defining it as "nestings" rather than recursions).
P: 3,313
 Quote by Matt Benesi I'm drawing a blank about the k constant?
From my point of view, a general method of expressing the identity function as an infinitely recursive function is to visualize $x$ as $f^{-1} ( f(x))$ for some function $f(x)$ that has an inverse. Then write $f(x)$ as an expression so that it involves a term with a single $x$ in it. The only way that I see to express this in a general mathematical way is say that $f(x) = g(x,x)$ where $g(x,y)$ is a function of two variables.

So if I want to begin a recursive definition based on the above thinking, I can't say "Let $f_1(x) = g(x)$" because I'm missing a variable in $g$. Looking at your examples, it appears that I can realize them by saying "Let $f_1(x) = g(x,k)$" if I pick the right value of the constant k.
 P: 66
I'm still drawing a complete blank on why you would need an additional constant? I don't use one in any of the functions, although I do specify a specific nesting depth of a (or call it recursion depth).

Now, all of the functions with the interesting derivative property are of the form:
$$x= f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))$$
so that we apply the function to both sides:
$$f(x)= f\left( f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))\right)$$
which ends up as:
$$f(x)= f(x)-x+ f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))$$
so that we can subtract $f(x)-x$ from both sides and end up with the original equation.

If you are trying to specify a nesting depth (or recursion depth) such that:
$$f_1(x)=f^{-1}(f(x)-x)$$
$$f_2(x)= f^{-1}(f(x)-x + f^{-1}(f(x)-x))$$
I've been calling the depth a, so the first is:
$$f(x,a) = f(x,1) = f^{-1}(f(x)-x)$$
$$f(x,2) = f^{-1}(f(x)-x + f^{-1}(f(x)-x))$$
Another way of specifying depth is (with $f_0(x)=0$):
$$f_n(x)=f^{-1}(f(x)-x+f_{n-1}(x))$$
which should strike you as resembling Fibonacci sequence or Mandelbrot set syntax.

Here is a quick example (note that the original post has both an incorrect and a correct version of the following formula). Note that $f(x,B)=B^x$ and $f^{-1}(x,B)=\log_B(x)$:
$$x=\log_B [B^x-x+\log_B [B^x-x+\log_B [B^x-x+\log_B [...]]]]$$
$$B^x=B^{\log_B [B^x-x+\log_B [B^x-x+\log_B [B^x-x+\log_B [...]]]]}$$
$$B^x=B^x-x+\log_B [B^x-x+\log_B [B^x-x+\log_B [B^x-x+\log_B [...]]]]$$
Subtract $B^x-x$ from both sides and you end up with the original equation:
$$x=\log_B [B^x-x+\log_B [B^x-x+\log_B [B^x-x+\log_B [...]]]]$$

Another side note: the multiplicative infinite nestings of the form
$$x= f^{-1}\left(\frac{f(x)}{x} \times f^{-1}\left(\frac{f(x)}{x} \times f^{-1}(...)\right)\right)$$
don't appear to display the derivative property (at least for the few I've analyzed; there is no rigorous proof against this as of yet).
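As a numerical illustration of the $B^x$ example in the post above (a Python sketch; `nest_log` is an illustrative name, and the nesting is seeded with $g_0=0$ as in the depth recursion):

```python
import math

def nest_log(x, B, a):
    # g_0 = 0, g_k = log_B(B**x - x + g_{k-1})
    g = 0.0
    for _ in range(a):
        g = math.log(B**x - x + g, B)
    return g

def f(x, B, a):
    # difference between x and the a-fold nesting
    return x - nest_log(x, B, a)

x, B = 2.0, 2.0
ratio = f(x, B, 20) / f(x, B, 21)
print(ratio, B**x * math.log(B))   # ratio approaches B**x * ln(B)
```

With B = x = 2 the ratio settles near $2^2 \ln 2 \approx 2.7726$, the derivative of $B^x$ at x = 2.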
P: 3,313
 Quote by Matt Benesi I'm still drawing a complete blank on why you would need an additional constant? I don't use one in any of the functions, although I do specify a specific nesting depth of a (or call it recursivity depth).
If you fit the $\sqrt[n]{x^{n-1}}$ example into my framework, "you" (meaning a person who does use my framework) actually do use a constant.

There are certainly other attempts to state what you are doing in general terms. Another attempt is this:

Write $x = f^{-1}(f(x))$ for some invertible function $f$.
Find a function $g(x,y)$ such that $g(x,0) = f(x)$
Define the sequence of functions $\{f_i\}$ by:
$f_1(x) = f^{-1}(g(x,x))$
For $j > 1$, $f_j(x) = f^{-1}( g(x,x) + f_{j-1}(x))$

I don't know if that approach works. All I'm saying is that it is one way to generalize your example ( by using $g(x,y) = f(x) - y$).

Can you state the most general procedure for expressing the identity function as the limit of a sequence of recursively defined functions? (And can you do it without using "infinite" symbolic expressions with "..."s in them?)

 Now, all of the functions with the interesting derivative property are of the form: $x= f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))$ so that we apply the function to both sides: $f(x)= f\left( f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))\right)$ which ends up as: $f(x)= f(x)-x+ f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))$ so that we can subtract $f(x)-x$ from both sides and end up with the original equation.
You should clarify what this is supposed to demonstrate. As an argument about numerical quantities, it doesn't make sense. To parody it:

If I start with the equation $A = B$ and apply $f$ to both sides, I get
$f(A) = f(B)$ Since $f(A) = f(B)$ then $A - f(A) = B - f(B)$, so I can subtract $A - f(A)$ from the left and $B - f(B)$ from the right and get the original equation.

Instead of that argument, what you are trying to express has something to do with the "algebraic form" of the terms in the equation. That's the type of mathematics one sees when people talk about "formal" power series. You're just supposed to treat them as strings of symbols and not worry about convergence. You are demonstrating something about the behavior of symbols. However, if you are trying to prove something about derivatives, I think you must eventually express the ideas in a way that is acceptable for doing proofs involving convergence, epsilons and deltas, etc.
P: 66
 Quote by Stephen Tashi I don't know if that approach works. All I'm saying is that it is one way to generalize your example ( by using $g(x,y) = f(x) - y$).
Ok. What you really want to do is say $g(x)=f(x) - x$. No need for the additional variable y. If you'd like I can provide a few numerical examples (probably tomorrow, it's getting quite late here).
 Can you state the most general procedure for expessing the identity function as the limit of a sequence of recursively defined functions? (And can you do it without using "infinite" symbolic expressions with "..."'s in them?)
$$g_0(x) = 0$$
$$g_n(x) = f^{-1}\left(f(x)-x+g_{n-1}(x)\right)$$
$$x= \lim_{n\to\infty} g_n(x)$$
It's very similar for the multiplication/division identity functions (proof/demo if requested):
$$g_0(x) = 1$$
$$g_n(x) = f^{-1}\left(\frac{f(x)}{x} \times g_{n-1}(x)\right)$$
$$x= \lim_{n\to\infty} g_n(x)$$
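Both recursions can be sketched numerically with a concrete choice $f(t)=e^t$, $f^{-1}(t)=\ln t$ (my own sample function, not one from the thread):

```python
import math

def g_add(x, n):
    # additive form: g_0 = 0, g_k = f_inv(f(x) - x + g_{k-1}), with f = exp
    g = 0.0
    for _ in range(n):
        g = math.log(math.exp(x) - x + g)
    return g

def g_mul(x, n):
    # multiplicative form: g_0 = 1, g_k = f_inv(f(x)/x * g_{k-1}), with f = exp
    g = 1.0
    for _ in range(n):
        g = math.log(math.exp(x) / x * g)
    return g

x = 1.5
print(g_add(x, 60), g_mul(x, 60))   # both approach x = 1.5
```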
 You should clarify what this is supposed to demonstrate. As an argument about numerical quantities, it doesn't make sense.
You should re-read the math carefully, and perhaps the statement.
Statement: "all of the functions with the interesting derivative property are of the form:"
$$1.\,x= f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))$$
Then I simply showed how the equation works, in case someone jumped in at this point.
Apply the function to both sides of the equation:
$$2.\,f(x)= f\left( f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))\right)$$
$$3.\,f(x)= f(x)-x+ f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))$$
Once we subtract f(x)-x from both sides of equation 3, we end up with equation 1 again.
We could also add f(x)-x to both sides of equation 1 and arrive at equation 3, then apply the inverse function to both sides to arrive at equation 1.

The wikipedia demonstration which leads to $x=\sqrt{2+x}$ is but one of many possible forms of this type of equation.

Specifically $f(x)=x^2$, $f^{-1}(x)=\sqrt{x}$ and lastly, with x=2:
$$x = \sqrt{x^2-x+\sqrt{x^2-x+...}} = \sqrt {2 +\sqrt{2+...}} = \sqrt{2 +x}$$
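A quick numerical check of the x = 2 case, iterating the nesting from 0:

```python
import math

# with x = 2, x**2 - x = 2, so the nesting is sqrt(2 + sqrt(2 + ...))
v = 0.0
for _ in range(50):
    v = math.sqrt(2 + v)
print(v)   # converges to 2
```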

 However, if you are trying to prove something about derivatives, I think you must eventually express the ideas in a way that acceptable for doing proofs involving convergence, epsilons and deltas etc.
I wasn't even attempting to prove the derivative property as of yet, in fact, I was wondering if there was a specific name for the property so that I could read about it (which is why I asked about it in the calculus sub-forum), and perhaps avoid a re-proof if someone else has already written one out. :D
P: 3,313
 Quote by Matt Benesi Ok. What you really want to do is say $g(x)=f(x) - x$. No need for the additional variable y.
I'm not trying to reproduce only the single example of f(x) - x. I want it to be included as a special case, but I'm speculating there might be other g(x,y) that work besides g(x,y) = f(x) - y.

 It's very similar for the multiplication/division identity functions (proof/demo if requested):
You've given examples of two different types of methods, one involving addition or subtraction of x and the other involving multiplication and division of x. What I'm asking is what is the statement of a general method that includes both these methods (and perhaps others) as special cases?

 Then I simply showed how the equation works, in case someone jumped in at this point.
Let's try to define what it means for the equation to work. After all, suppose someone is trying to prove a trigonometric identity. He writes the identity down as an equation. He performs various manipulations that preserve the equality of two sides of an equation and after several steps he arrives back at the original equation. Most graders would say that he gets an "F" on that one.

I'll take a shot at it. We start with an equation of the form $x = f^{-1}$(some stuff).
We apply f to both sides obtaining $f(x) =$ (some stuff)
Then we do some operation to both sides of the equation that converts $f(x)$ to $x$ and the (some stuff) to (some other stuff). We find that, as infinite strings of symbols, (some other stuff) is equal to $f^{-1}$(some stuff).

So what we are demonstrating is not something about showing numerical equality. We aren't proving that the first equation is correct. We are showing something about the transformation properties of an infinite string of symbols.

(And for purposes of mathematics like calculus that worries about things like convergence, demonstrations like this are not formal proofs. As far as I can see the Wikipedia and the Wolfram page you cited don't give formal proofs. The Wolfram page has references for the results it states. What is acceptable as a formal proof could be determined by what is in those references.)

What I find hard to formulate in precise mathematical terms is my statement that we apply "an operation" to both sides of the equation. If we wanted an operation to change f(x) to x, the natural operation would be to take $f^{-1}$ of both sides. But that's not what we do. We do something like divide by $x^{n-1}$ or subtract $f(x) - x$. What is a precise way to express "an operation" in a general way that includes both those examples as special cases?
P: 66
 Quote by Stephen Tashi You've given examples of two different types of methods, one involving addition or subtraction of x and the other involving multiplication and division of x. What I'm asking is what is the statement of a general method that includes both these methods (and perhaps others) as special cases?
There are perhaps infinite variations, besides the strict addition and multiplication examples. Following is one that combines addition/subtraction and multiplication/division; r(x) is the infinitely repeated function set:
$$x = f^{-1}\left ( 2 \times f(x) - \dfrac{f(x)}{x} \times f^{-1} \left ( 2 \times f(x) - \dfrac{f(x)}{x} \times r(x) \right ) \, \right)$$
First apply function f to both sides:
$$f(x) = 2 \times f(x) - \dfrac{f(x)}{x} \times f^{-1} \left ( 2 \times f(x) - \dfrac{f(x)}{x} \times r(x) \right )$$
subtract $2 \times f(x)$ from both sides:
$$-f(x) = - \dfrac{f(x)}{x} \times f^{-1} \left ( 2 \times f(x) - \dfrac{f(x)}{x} \times r(x) \right )$$
divide both sides by $-\dfrac{f(x)}{x}$:
$$\dfrac{-f(x)}{-f(x)/x} = x = f^{-1} \left ( 2 \times f(x) - \dfrac{f(x)}{x} \times r(x) \right )$$
Remembering that r(x) is the infinitely repeated function sequence.
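This combined form can be spot-checked numerically; a Python sketch with the illustrative choice $f(t)=e^t$ and a seed of 1 (both my own assumptions, not from the thread):

```python
import math

def r_seq(x, n, seed=1.0):
    # combined form: r_k = f_inv(2*f(x) - f(x)/x * r_{k-1}), with f = exp
    r = seed
    fx = math.exp(x)
    for _ in range(n):
        r = math.log(2 * fx - (fx / x) * r)
    return r

print(r_seq(2.0, 80))   # approaches x = 2
```

The iteration oscillates around x = 2 and converges, consistent with r(x) standing in for the full infinitely repeated function set.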
 He performs various manipulations that preserve the equality of two sides of an equation and after several steps he arrives back at the original equation. Most graders would say that he gets an "F" on that one.
Take some time to realize that adding or removing a recursion from an infinitely recursed function is not the same as circular reasoning (trivial manipulation), nor is it invalid. An infinite number of steps plus or minus a finite number of steps is still an infinite number of steps.
1) Let's call the infinitely nested function r(x)
2) For x to be equal to r(x), we must be able to apply the same functions to both r(x) and x and arrive back at x= r(x)
a. these functions must denest or nest a portion of r(x) or we run the risk of doing something circular (or trivial) such as your "parody" example (trivial or circular manipulation as in: add 1 to both sides, multiply both sides by 2, subtract 2 from both sides, divide both sides by 2)

b. when $x=r(x)=\sqrt{x^2-x+\sqrt{x^2-x+...}}$ we must be able to either add a recursion to r(x), or remove a recursion:

i. to add a recursion to this example subtract x from both sides $x - x = 0 = -x + \sqrt{x^2-x+\sqrt{x^2-x+...}}$

ii. add x^2 to both sides $x-x+x^2 = x^2 = x^2-x + \sqrt{x^2-x+\sqrt{x^2-x+...}}$

iii. take the square root of both sides $\sqrt{x^2} = x = \sqrt{x^2-x+\sqrt{x^2-x+...}}$

Do the inverse operations in reverse order to remove a recursion (square both sides, subtract x^2 from both sides, add in x to both sides).

 So what we are demonstrating is not something about showing numerical equality. We aren't proving that the first equation is correct. We are showing something about the transformation properties of an infinite string of symbols.
Try setting x to some recursive formula that doesn't allow you to bring both sides back to the same formula once a recursion has been added or removed.
Remember, x must be equal to one recursion applied to x (since x is equal to an infinite recursion). Therefore one should be able to remove one recursion and have the same formula. I'll show you 2 examples of why the following infinite recursion does NOT work for non-trivial solutions:
$$x=\sqrt{x^3-x+\sqrt{x^3-x+...}}$$
which should be equal to the following equation if the infinite recursion $\sqrt{x^3-x+\sqrt{x^3-x+...}}$ is equal to x
$$x=\sqrt{x^3-x+x}$$
square both sides:
$x^2=x^3-x+x$, in other words $x^2=x^3$, which is only valid for the trivial solution of x=0, and not for the invalid solution x=1, which ends up with 1=0 (x=1 is not in the domain for these formulas, even when the formula is otherwise valid):
$$1= \sqrt{1^3-1+\sqrt{1^3-1+...}} = \sqrt{0+\sqrt{0+...}}$$

Also note that there is no way to remove or add a recursion from this formula without altering it, non-trivial manipulation simply leads to a new formula:
$$x=\sqrt{x^3-x+\sqrt{x^3-x+...}}$$
$$x^2=x^3-x+\sqrt{x^3-x+\sqrt{x^3-x+...}}$$
notice we end up with x being equal to something else (although it still works for the trivial solution of 0):
$$x=x^3-x^2+\sqrt{x^3-x+\sqrt{x^3-x+...}}$$
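Numerically the failure is visible too: for x = 2 the iteration settles at 3 (the root of $v^2 = v + 6$) rather than at x (a Python sketch):

```python
import math

# iterate the invalid nesting sqrt(x**3 - x + ...) for x = 2
x = 2.0
v = 0.0
for _ in range(60):
    v = math.sqrt(x**3 - x + v)
print(v)   # settles at 3, NOT at x = 2
```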
 And for purposes of mathematics like calculus that worries about things like convergence, demonstrations like this are not formal proofs. As far as I can see the Wikipedia and the Wolfram page you cited don't give formal proofs.
I don't think they bothered with extensive proofs because of the nature of algebra and infinitely nested functions. The invalid nesting I posted above should highlight the difference between a valid and invalid infinite nesting.
 What is a precise way to express "an operation" in a general way that includes both those examples as special cases?
These examples are both part of the identity functions for infinitely nested functions. The identity functions of infinitely nested functions are those functions that preserve the integrity of the function when a recursion/nesting is added or removed.
1) assume an algebraically valid non trivial infinite nest x= r(x) within valid domain and range of functions in r(x) !!!
2) adding or removing a nesting from r(x) must result in the original equality x=r(x)

For example, the following additive portion, $x^2-x$, is not the whole identity function for that particular nesting. You add it in and then take the square root of both sides, so the identity function of this particular nesting involves more than one step.
$x=\sqrt{x^2-x+\sqrt{x^2-x+...}}$

$x+x^2-x=x^2-x+\sqrt{x^2-x+\sqrt{x^2-x+...}}$

$x^2=x^2-x+\sqrt{x^2-x+\sqrt{x^2-x+...}}$
square root:
$x=\sqrt{x^2-x+\sqrt{x^2-x+\sqrt{x^2-x+...}}}=\sqrt{x^2-x+\sqrt{x^2-x+...}}$

Here's a neat fact about the formula right above this sentence. Setting x equal to the golden ratio gives you the nested-root formula for the golden ratio, since $\phi^2-\phi=1$. Check equation 14 on the MathWorld Golden Ratio page. Setting x=2, you get part of a formula for pi that you can check out at MathWorld's Pi Formulas page (formula 66*). In fact, many different constants can be made from these various equations.
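A quick numerical check of the golden-ratio case (a Python sketch):

```python
import math

phi = (1 + math.sqrt(5)) / 2
# with x = phi, x**2 - x = 1, so the nesting becomes sqrt(1 + sqrt(1 + ...))
v = 0.0
for _ in range(60):
    v = math.sqrt(1 + v)
print(v, phi)   # the nested radical converges to the golden ratio
```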
P: 3,313
 Take some time to realize that adding or removing a recursion from an infinitely recursed function is not the same as circular reasoning (trivial manipulation), nor is it invalid.
It is invalid as a method of showing the numerical equality between the two sides of the original equation. It is valid as a method of showing something about the manipulation of infinite strings of symbols.

For example, you showed that this equation "works":

$$x = f^{-1}( f(x) - x + f^{-1}(f(x) -x + ....))$$

Let $f(x) = x$

Define
$g_0(x) = 0$
$g_{n}(x) = f^{-1}( f(x) -x + g_{n-1}(x) )$

Then we have:

$g_0(x) = 0$
$g_1(x) = f^{-1}(x - x + 0) = f^{-1}(0) = 0$
$g_2(x) = f^{-1}(x - x + g_1(x)) = f^{-1}(0 + 0) = 0$
etc.

The statement: $x = \lim_{n \rightarrow \infty} g_n(x)$ is false except at $x = 0$
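The counterexample is easy to run; with $f(x)=x$ every term of the sequence collapses to 0 (a Python sketch):

```python
def g(x, n):
    # with f(t) = t (so f_inv(t) = t): g_k = (x - x) + g_{k-1} = g_{k-1}
    val = 0.0
    for _ in range(n):
        val = (x - x) + val
    return val

print(g(3.7, 100))   # stays 0.0, not 3.7
```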

 An infinite amount of steps plus or minus a finite amount of steps is still an infinite amount of steps.
I think it would be better to talk about countably infinite strings of symbols rather than "steps". I do realize that if we have two countably infinite strings of symbols and we remove a finite number of symbols from one string, then the two strings can remain equal.

 1) Let's call the infinitely nested function r(x) 2) For x to be equal to r(x), we must be able to apply the same functions to both r(x) and x and arrive back at x= r(x)
You have to be precise about what you mean by "apply the same functions". In the context of an ordinary discussion of calculus, a "function" is a mapping from numbers to numbers, not a mapping from strings of symbols to strings of symbols. What you are talking about is the latter kind of function. For example, as a mapping of symbols to symbols, sin(2x) can be interpreted as a function that produces an infinite series of symbols or as a function that produces a finite string of symbols ( involving trig functions). As mappings of symbols, these are two different functions. As mappings of numbers to numbers, they are the same function.

 Try setting x to some recursive formula that doesn't allow you to bring both sides back to the same formula once a recursion has been added or removed.
That's an interesting challenge. But can we prove that if the symbolic behavior of the expression doesn't "work" then the numerical equality of the equation doesn't hold? After all, there are identities in mathematics that establish the numerical equality of two very different groups of symbols.

 I don't think they bothered with extensive proofs because of the nature of algebra and infinitely nested functions. The invalid nesting I posted above should highlight the difference between a valid and invalid infinite nesting.
I think the web pages were intended as an intuitive introduction, so they don't have formal proofs. If the references that the web pages cite are valid, the references would have to show numerical convergence by the recognized methods of mathematical analysis.
P: 66
 Quote by Stephen Tashi It is invalid as a method of showing the numerical equality between the two sides of the original equation. It is valid as a method of showing something about the manipulation of infinite strings of symbols. For example, you showed that this equation "works": $$x = f^{-1}( f(x) - x + f^{-1}(f(x) -x + ....))$$
It should be patently obvious that when f(x)=x, the above equation cannot work; I don't think I should even have to mention something that obvious. In addition, when x=1, there are many f(x) that do not transform x into something other than 1, which causes the exact same failure (another obvious exception for that specific type of equation).

Do you really need me to explain every specific case in which these types of equations do not work?

Here are some examples:

1) x has to be within the range of the inverse function $f^{-1}$
2) $f(x)-x+r(x)$ must be within the domain of the inverse function $f^{-1}$
3) x must be within the domain of the function $f$
4) trivial quantities such as zero yield only trivial (tautological) identities
5) $f(x)$ cannot be equal to $x$ (exceptions may exist, none come to mind)

When x cannot equal r(x), the failure should be obvious.
 I think it would be better to talk about countably infinite strings of symbols rather than "steps".
I had something particular in mind. Single nesting= 1 step. Anyways...
 You have to be precise about what you mean by "apply the same functions".
What do you think "apply the function g(y,x)= y + f(x)-x to both sides of an equation" means?
$$x= f^{-1}\left(f(x)-x+f^{-1}(f(x)-x+...)\right)$$
In this case, y= either side of the equation and x=x:
$$f(x)=f(x)-x+ f^{-1}\left(f(x)-x+f^{-1}(f(x)-x+...)\right)$$
Now another function, $f^{-1}(z)$ is applied to both sides with the result being the original equation.

If you can think of examples besides the above list in which x will not equal an infinitely nested function group, we should add them to the list. Of course, there are other things I really want to learn about, such as what the name of this derivative-approximation method is. In addition, specific rules need to be written out for the derivative-approximation method, which, as far as I can tell, requires valid x = (infinitely nested function) equations.
P: 3,313
 Quote by Matt Benesi Do you really need me to explain every specific case in which these types of equations do not work?
Can you? The normal procedure in stating a theorem is to state the premises that are required for the conclusion to be true, not to list all possible situations where the conclusion is false. If your aim is to do something that is recognized as mathematics by modern standards, then you need to state all the assumptions that you are making about $f(x)$ and show those assumptions imply the convergence of the sequence of functions $g_i(x)$ to $x$ according to the usual definition of convergence of a sequence of functions. It is not sufficient to begin with an equation that assumes there is convergence to x and then manipulate it to show you can cast off one layer of the recursion.

I'm not trying to function as the mathematical Thought Police. I agree that your methods of manipulating infinite strings of symbols suggest interesting results but it would require more work to prove your assertions and clarify precisely when they apply.

The only way that I see, so far, to determine whether a function f(x) "works" in the equation, is test it as a specific case.
 Sci Advisor P: 3,313 Here's another one that looks like it doesn't work: $f(x) = \frac{x}{2}$ at $x = 2$:
$$g_1(2)= 0$$
$$g_2 (2) = 2\left( \tfrac{2}{2} - 2 \right) = 2 ( 1 -2) = 2(-1) = -2$$
$$g_3(2) = 2\left( \tfrac{2}{2} - 2 + (-2)\right) = 2( -1 + (-2) ) = - 6$$
$$g_4(2) = 2\left( \tfrac{2}{2} - 2 + (-6) \right) = 2( -1 + (-6)) = - 14$$
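The runaway iteration above is easy to reproduce numerically; here is a minimal Python sketch (the function names are illustrative) of $g_i(2) = f^{-1}(f(2) - 2 + g_{i-1}(2))$ with $f(x)=x/2$ and $f^{-1}(y)=2y$:

```python
# Iterate g_i(2) = f_inv(f(2) - 2 + g_{i-1}(2)) for f(x) = x/2,
# whose inverse is f_inv(y) = 2*y. The sequence runs away from x = 2.
def f(x):
    return x / 2

def f_inv(y):
    return 2 * y

x = 2.0
g = 0.0  # g_1(2) = 0
for i in range(2, 6):
    g = f_inv(f(x) - x + g)
    print(f"g_{i}(2) = {g}")
# -> g_2(2) = -2.0, g_3(2) = -6.0, g_4(2) = -14.0, g_5(2) = -30.0
```

Each step multiplies the distance from the fixed point by 2 (the slope of $f^{-1}$), so the sequence diverges instead of converging.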
P: 66
 Quote by Stephen Tashi The normal procedure in stating a theorem is to state the premises that are required for the conclusion to be true, not to list all possible situations where the conclusion is false. If your aim is to do something that is recognized as mathematics by modern standards, then you need to state all the assumptions that you are making about $f(x)$ and show those assumptions imply the convergence of the sequence of functions $g_i(x)$ to $x$ according to the usual definition of convergence of a sequence of functions. It is not sufficient to begin with an equation that assumes there is convergence to x and then manipulate it to show you can cast off one layer of the recursion.
I see that. There are many conditions under which the various r(x) do not work, where r(x) means the full infinite recursion, as I've been using it.

For example, I've been working with x and n greater than 1 for all equations of the form $\sqrt[n]{x^n-x+\sqrt[n]{x^n-x+...}}$. If you set x=-2 and n=2, you end up approaching 3 (since $3^2-3=6$, just like $(-2)^2-(-2)=6$). Now I could specify:
$|x|= \sqrt[n]{|x^n|-|x|+\sqrt[n]{|x^n|-|x|+...}}$ for all x and n greater than 1, but I don't have a valid proof, as you've shown (induction not being proof). I'm wondering if Ramanujan proved this particular form? Anyways, for this form of the equation:

x>1 and n>1
$$x^n > x^n - x$$
$$x > \sqrt[n]{x^n-x}$$
Since $x > \sqrt[n]{x^n-x}$, set $g_1(x) = \sqrt[n]{x^n-x}$, so $g_1(x)< x$; therefore $g_2(x)= \sqrt[n]{x^n -x + g_1(x)} < \sqrt[n]{x^n-x+x} = x$.

$$g_i(x)=\sqrt[n]{x^n-x+g_{i-1}(x)}$$
You can see that each $g_i$ is larger than the one before it, although the sequence never reaches x: for $g_i(x)$ to equal x we would need $g_{i-1}(x)$ to already equal x, and each term falls short. Since the sequence is increasing and bounded above by x, it converges, and the limit L satisfies $L^n = x^n-x+L$, which $L=x$ solves; so $g_i(x)$ approaches the limit x as $i\to\infty$. Can you think of a better way to say this? I'm drawing a blank.

Therefore....
$$x= \sqrt[n]{x^n-x+\sqrt[n]{x^n-x+...}}$$
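The increasing-and-bounded argument can be checked numerically; a small Python sketch with x=3, n=2 (values chosen only for illustration):

```python
# Check that g_i(x) = (x**n - x + g_{i-1}(x))**(1/n) is strictly
# increasing and bounded above by x, so it converges up to x.
x, n = 3.0, 2
g_prev = 0.0
for _ in range(15):
    g = (x**n - x + g_prev) ** (1.0 / n)
    assert g_prev < g < x  # increasing, never exceeding x
    g_prev = g
print(x - g_prev)  # the gap x - g_i shrinks toward 0
```

After 15 iterations the gap is already below $10^{-9}$, consistent with the limit being x itself.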

Interesting:
$$[g_i(x)]^n=x^n-x+g_{i-1}(x)$$
$$x=x^n-[g_i(x)]^n+g_{i-1}(x)$$
$$x^n-x=[g_i(x)]^n-g_{i-1}(x)$$
$$x^n-[g_i(x)]^n=x-g_{i-1}(x)$$
Dividing through by $x-g_i(x)$ and using $x^n-[g_i(x)]^n\approx nx^{n-1}\,(x-g_i(x))$ as $g_i(x)\to x$ (in some cases, more than one operation is required), we end up with something along the lines of:
$$nx^{n-1} = \lim_{i\to\infty} \dfrac{x-g_{i-1}(x)}{x-g_{i}(x)}$$
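As a sanity check, the ratio can be computed numerically; a minimal Python sketch with x=2, n=3, where $nx^{n-1}=12$ (values chosen only for illustration):

```python
# Ratio (x - g_{i-1}) / (x - g_i) for g_i = (x**n - x + g_{i-1})**(1/n)
# should approach n * x**(n-1), the derivative of the inner function x**n.
x, n = 2.0, 3
g_prev = 0.0
g = (x**n - x) ** (1.0 / n)  # g_1
for _ in range(7):
    g_prev, g = g, (x**n - x + g) ** (1.0 / n)
ratio = (x - g_prev) / (x - g)
print(ratio)  # close to n * x**(n-1) = 12
```

The iteration count is kept modest because once $x-g_i$ shrinks to the order of floating-point rounding, the ratio becomes noisy.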
 I'm not trying to function as the mathematical Thought Police. I agree that your methods of manipulating infinite strings of symbols suggest interesting results but it would require more work to prove your assertions and clarify precisely when they apply.
Thanks. For now, I think each variation will have to be proved individually. I still am looking forward to working on the derivative property, but I suppose a couple of proofs for various r(x) are in order.

 The only way that I see, so far, to determine whether a function f(x) "works" in the equation, is test it as a specific case.
A few variations will get nailed. It would be nice to have a general proof, but.... that's out of my reach for now.
 P: 66 Here is another quick proof for a range of values (B>1 and x>1). To be proved:
$$x=\log_B [B^x-x+\log_B [B^x-x+...]]$$
To keep it simple, for this part of the proof assume B>1 and x>1; we can work out how other values behave later.
$$g_0(x)=0$$
$$g_i(x)= \log_B {\left[B^x-x+g_{i-1}(x)\right]}$$
So for this one, $B^x-x$ must be greater than 1. Obviously the log of that will be less than x, and it remains less than x except at the limit $i\to\infty$.

Once again, a little neat thing popped out at me from the $g_i$ equation (at least I like it):
$$B^{g_i(x)}= B^x-x+g_{i-1}(x)$$
$$x-g_{i-1}(x)= B^x-B^{g_i(x)}$$
And once again, applying the inverse function (log base B) to both individual parts of the higher-i side of the equation, and dividing out (following the various rules outlined in the other thread; keep in mind I haven't clearly defined them as of yet!), gives us the approximate derivative of $B^x$, namely $B^x\ln B$, as $i\to\infty$:
$$\dfrac{x-g_{i-1}(x)}{\log_B [B^x]- \log_B [B^{g_i(x)}]}\; = \; \dfrac{x-g_{i-1}(x)}{ x-g_i(x)}$$
Anyways, will do a few trig ones after the weekend; won't be able to check in for a few days, so have a good one.
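The same numerical check works for the logarithmic version; a Python sketch with B=2, x=2, where $B^x\ln B = 4\ln 2 \approx 2.7726$ (values chosen only for illustration):

```python
import math

# Ratio (x - g_{i-1}) / (x - g_i) for g_i = log_B(B**x - x + g_{i-1})
# should approach B**x * ln(B), the derivative of the inner function B**x.
B, x = 2.0, 2.0
g_prev = 0.0
g = math.log(B**x - x, B)  # g_1
for _ in range(18):
    g_prev, g = g, math.log(B**x - x + g, B)
ratio = (x - g_prev) / (x - g)
print(ratio)  # close to B**x * math.log(B), about 2.7726
```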
 Sci Advisor P: 3,313 I'll have to read that carefully before understanding it. Today's hazy thoughts on the subject: it simplifies notation to interchange the roles of $f^{-1}$ and $f$, which I will do. I'll also write the index of the recursion variable in brackets instead of as a subscript. So I'll define
$$g[1](x) = 0$$
$$g[n](x) = f ( f^{-1}(x) - x + g[n-1](x)) , \quad n > 1$$
For linear functions of x, $f(x) =cx + d$, the associated sequence $g[i]$ appears to converge to $x$ when $|c| < 1$, since it is the sequence of partial sums of an infinite geometric series that sums to $x$.

Suppose we are given a non-linear function $f(x)$ and we can sandwich it between the graphs of two linear functions $A(x)$ and $B(x)$ as long as $x$ is in some interval $I$, so for $x \in I$, $A(x) \le f(x) \le B(x)$. Suppose that the associated sequences $g_A[i]$ and $g_B[i]$ both converge to the identity function (for any real number x) and that the sequence $g_f[i]$ associated with $f(x)$ at a given $x = x_0$ never involves evaluating $f$ at any point outside the interval $I$. Intuitively, since the recursion process moves the graphs of both linear functions to the graph of the identity function, it should also move the graph of $f$ to the identity function. This might be formalized enough to use the squeeze theorem for sequences: if it is true that $g_A[i](x_0) \le g_f[i](x_0) \le g_B[i](x_0)$ and both the extreme terms approach $x_0$ in the limit, then the middle term must also. I haven't visualized the recursion process carefully enough to know whether that inequality must hold when the graph of $f$ is sandwiched.
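The linear case described above is easy to try numerically; a minimal Python sketch (the values of c, d, and x are chosen only for illustration) of $g[n] = f(f^{-1}(x) - x + g[n-1])$ for $f(x)=cx+d$:

```python
# For f(x) = c*x + d the recursion g[n] = f(f_inv(x) - x + g[n-1])
# simplifies to g[n] = x*(1 - c) + c*g[n-1], a geometric series
# that converges to x whenever |c| < 1.
c, d = 0.5, 3.0
x = 7.0
g = 0.0  # g[1] = 0
for _ in range(60):
    g = c * ((x - d) / c - x + g) + d  # f(f_inv(x) - x + g)
print(g)  # converges to x = 7.0
```

With |c| < 1 each step shrinks the distance to x by the factor c, matching the geometric-series observation; with |c| > 1 the same code diverges, matching the f(x) = x/2 counterexample earlier in the thread (there the roles of f and its inverse were swapped, giving an effective slope of 2).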
