Interesting properties of nested functions

In summary, the conversation discusses the properties and behaviors of infinitely nested functions, specifically those that can be solved algebraically by raising both sides to the nth power and dividing through by x^(n-1). The conversation explores the properties of finitely iterated nestings and the interesting patterns that emerge when subtracting a nested function from the number it equals at infinite nestings. It also discusses the approach to the derivative of the inner function for higher iterations and provides examples of different types of nested functions and their corresponding approached values. The conversation ends by suggesting a recursive definition for a sequence of functions and the claim that the limit of this sequence approaches a certain value.
  • #1
Matt Benesi
The properties arise from infinitely nested functions such as:
[itex]x=\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}}[/itex]​
You can solve it algebraically to verify that x is equal to the nested function (all of the following functions can be solved in a similar manner).

Simply raise both sides to the nth power: [itex]x^n=x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}[/itex]
Divide through by [itex]x^{n-1}[/itex]: [itex]x=\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}[/itex]

What we will do is explore the properties of finitely iterated nestings. The variable a is used to denote the number of nestings.

For this example, a=4: [itex]f(x,n,a)=f(x,n,4)=\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}}}}} [/itex]

The interesting properties arise when we subtract a nested function from the number it equals at infinite nestings (x).

With this particular variety of nesting, we now use f(x,n,a) to denote the difference (cut off after a nestings): [itex]f(x,n,a) = x-\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\cdots}}}} [/itex]

As a grows, f(x,n,a) approaches [itex]n^{-a}x\ln{x}[/itex]; it approaches that value more quickly for higher n and x. The ratio f(x,n,a)/f(x,n,a+1) approaches n as a increases.
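These claims are easy to probe numerically. Here is a minimal Python sketch (the test values x=3, n=2 and the depths are arbitrary choices of mine, not anything canonical):

[code]
import math

def nest(x, n, a):
    # a nestings: innermost term is x^((n-1)/n); each level wraps
    # the previous one as (x^(n-1) * previous)^(1/n)
    r = 1.0
    for _ in range(a):
        r = (x ** (n - 1) * r) ** (1.0 / n)
    return r

x, n = 3.0, 2
for a in (4, 8, 12):
    f_a = x - nest(x, n, a)
    f_a1 = x - nest(x, n, a + 1)
    print(a, f_a, n ** -a * x * math.log(x), f_a / f_a1)
    # f_a tracks n^(-a) * x * ln(x), and f_a / f_a1 tends to n
[/code]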

There are many multiplicative nested type equations we can try, such as:

[itex]x=\log_B \left[ \tfrac{B^x}{x}\log_B \left[\tfrac{B^x}{x} \log_B \left[\tfrac{B^x}{x}\cdots\right]\right]\right][/itex] so [itex]f(x,B,a)=x-\log_B \left[ \tfrac{B^x}{x}\log_B \left[\tfrac{B^x}{x} \log_B \left[\tfrac{B^x}{x}\cdots\right]\right]\right][/itex]

which has the interesting property:
[tex]\dfrac{f(x,B,a)}{f(x,B,a+1)}\to x \ln B[/tex]
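A quick numerical look at that ratio (a minimal Python sketch; B=2 and x=3 are arbitrary test values, and the seed g=1 matches the multiplicative recursion given later in the thread):

[code]
import math

def log_nest(x, B, a):
    # a nestings of g <- log_B((B^x / x) * g), seeded with g = 1
    g = 1.0
    for _ in range(a):
        g = math.log(B ** x / x * g, B)
    return g

x, B = 3.0, 2.0
for a in (10, 20, 30):
    ratio = (x - log_nest(x, B, a)) / (x - log_nest(x, B, a + 1))
    print(a, ratio, x * math.log(B))   # ratio tends to x ln B
[/code]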
Of course, there are whole other types of nested functions that use addition/subtraction in addition to multiplication (if you mix them). You can pair cosine with cosine⁻¹, [itex]x^n[/itex] with [itex]x^{1/n}[/itex], and other combinations.

The next formulas approach the derivative of their inner functions when taking [itex]\dfrac{f(...,a)}{f(...,a+1)}[/itex] for higher a.

For this one, the inner function is x^n:
[tex]f(x,n,a)=x-\sqrt[n]{x^{n}-x+\sqrt[n]{x^{n}-x+\sqrt[n]{x^{n}-x+\sqrt[n]{x^{n}-x+...}}}}[/tex]
so [tex]\dfrac{f(x,n,a)}{f(x,n,a+1)}\to nx^{n-1}[/tex]

For this one, the inner function is [itex]B^x[/itex]:
[tex]f(x,B,a)=x-\log_B [B^x-x+\log_B [B^x-x+\log_B [B^x-x+\log_B [...]]]][/tex]
so [tex]\dfrac{f(x,B,a)}{f(x,B,a+1)}\to B^{x}\ln B[/tex]
as a increases (or for larger B and x).
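Both limits can be spot-checked numerically; a rough Python sketch (depth 10 and the sample values are arbitrary choices):

[code]
import math

def nest_pow(x, n, a):
    # a nestings of g <- (x^n - x + g)^(1/n), seeded with g = 0
    g = 0.0
    for _ in range(a):
        g = (x ** n - x + g) ** (1.0 / n)
    return g

def nest_log(x, B, a):
    # a nestings of g <- log_B(B^x - x + g), seeded with g = 0
    g = 0.0
    for _ in range(a):
        g = math.log(B ** x - x + g, B)
    return g

x, n = 2.0, 3
print((x - nest_pow(x, n, 10)) / (x - nest_pow(x, n, 11)),
      n * x ** (n - 1))               # vs. d/dx x^n = n x^(n-1)

x, B = 3.0, 2.0
print((x - nest_log(x, B, 10)) / (x - nest_log(x, B, 11)),
      B ** x * math.log(B))           # vs. d/dx B^x = B^x ln B
[/code]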

In fact, all of the basic formulas that use "- x +" followed by the repeated formula appear to approach the derivative of the inner formula for f(...,a)/f(...,a+1), except when the functions and inverse functions used have limited well-defined domains (such as cosine and cosine⁻¹).

Combining the functions results in approaching the derivative of the combined inner function:
[tex]f(x,B,a)=x-\sqrt[n]{\log_B \left[ B^{x^n}-x+ \sqrt[n] {\log_B \left[B^{x^n}-x+ \sqrt[n]{ \log_B \left[B^{x^n}-x+\cdots\right]}\right]}\right]}[/tex]

Note that it is set up to take x^n first, then take B^(x^n) next (as if it were infinitely iterated, so that it is algebraically sound). The "[" symbol doesn't show up too clearly under the radical. Anyways...

As with the other -x + ... functions, this one approaches the derivative of the inner function [itex]B^{x^n}[/itex]
[tex]\dfrac{f(x,B,a)}{f(x,B,a+1)}\to n\,{x}^{n-1}\,{B}^{{x}^{n}}\,\ln B[/tex]

For all of these functions, the exact approached value of f(...,a)/f(...,a+1) (so [itex]nx^{n-1}[/itex] when x^n is the inner function) can be taken to the ath power (a being the number of iterations) and multiplied by f(...,a) to create a constant. I haven't found a rational one yet. Not many of them are listed at the Inverse Symbolic Calculator (http://oldweb.cecm.sfu.ca/projects/ISC/ISCmain.html), although for the x^n-x+... one, with x and n equal to 2, you end up with [itex]\pi^2/4[/itex].

Note that using a value more or less than the value that f(...,a)/f(...,a+1) approaches causes the calculated constant to diverge towards 0 or infinity as a increases, unless you use the exact constant (such as the derivatives, or n for the first example, [itex]x \ln B[/itex] for the second, etc.).
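The [itex]\pi^2/4[/itex] claim for x = n = 2 holds up numerically; a minimal Python sketch:

[code]
import math

x = n = 2
g = 0.0
for a in range(1, 16):
    g = (x ** n - x + g) ** (1.0 / n)        # one more nesting
    c = (x - g) * (n * x ** (n - 1)) ** a    # f(...,a) * (n x^(n-1))^a
print(c, math.pi ** 2 / 4)                   # both ~2.4674
[/code]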
 
  • #2
Matt Benesi said:
The properties arise from infinitely nested functions such as:
[itex]x=\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}}[/itex]​
You can solve it algebraically to verify that x is equal to the nested function (all of the following functions can be solved in a similar manner).

Simply raise both sides to the nth power: [itex]x^n=x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}[/itex]
Divide through by [itex]x^{n-1}[/itex]: [itex]x=\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}[/itex]

That is not a convincing demonstration because it begins by assuming the equation which is to be proved.

One way to motivate your method is:

For [itex] x \ge 0 [/itex] and [itex] n [/itex] a positive integer

[eq. 1] [itex] x = \sqrt[n]{x^n} = \sqrt[n]{x^{n-1} x } [/itex]

Use eq. 1 to substitute for the term x in the right hand side of eq. 1. We obtain

[eq. 2] [itex] x = \sqrt[n]{x^{n-1} \sqrt[n]{x^{n-1} x} } [/itex]

Use eq 1 to substitute for the term x in the right hand side of eq 2. We obtain

[eq. 3] [itex] x = \sqrt[n]{x^{n-1} \sqrt[n]{x^{n-1} \sqrt[n]{x^{n-1} x }}} [/itex]

Continuing the above process suggests (but does not prove) that x is equal to the "infinitely nested" function given in your first equation.

Define a sequence of functions [itex] \{f_i\} [/itex] recursively as follows:

[itex] f_1(x) = \sqrt[n]{x^{n-1}} [/itex]
For [itex] j > 1 [/itex] , [itex] f_j(x) = \sqrt[n]{x^{n-1} f_{j-1}(x) } [/itex]

Your claim is that for each [itex] x \ge 0 [/itex], [itex] \lim_{j \rightarrow \infty} f_j(x) = x [/itex]
 
  • #3
Stephen Tashi said:
That is not a convincing demonstration because it begins by assuming the equation which is to be proved.
Hi Stephen, it's actually valid reasoning. Look over the MathWorld page on nested radicals (http://mathworld.wolfram.com/NestedRadical.html), especially equations 9, 10, and 11, to pick up a better understanding of the reasoning.

Incidentally, this isn't the point I was bringing up, and as of yet I have not come up with a valid proof for the derivative conjecture, which is the most interesting property of nested functions. The derivative conjecture can be readily tested (although inductive "validity" is more than a little non-rigorous), and it stands up to all tests I've thrown at it (within specific domains for various functions); I'm just drawing a blank (for now) on how to prove it.

As to this little gem:
[tex]x=\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}}[/tex]
Take both sides to the nth power:
[tex]x^n=x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}}[/tex]
Note that you still have the infinitely nested radical after the [itex]x^{n-1}[/itex] part. Once you divide both sides of the equation by [itex]x^{n-1}[/itex], you end up with your original equation of x = the infinitely nested radical.

I actually wrote out a rough proof for this particular radical's interesting tendency towards [tex] \ln{x} =\lim_{a\to\infty} \dfrac{x- \sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}} }{x} \times n^a[/tex] with a being the number of iterations of the radical.

As you probably know, you can "force multiply" (I think that's similar to how McGuffin put it) through radicals. For example:

[tex] \sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}} =
\sqrt[{n^2}]{x^{n^2-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}[/tex]

An easier example:
[tex]2\, \sqrt[2]{2} = \sqrt[2]{2^2 \times 2}[/tex]
As you can see, to force multiply a number into a radical, you take it to the power of the radical. When you have multiple radicals together, such as [itex]2\, \sqrt[2]{2\sqrt[2]{2}} = \sqrt[2]{8\,\sqrt[2]{2}}= \sqrt[4]{8^2 \times 2}[/itex], as you push the number through the radical, instead of leaving stacked radicals like [itex]\sqrt[2]{\sqrt[2]{x}}[/itex] you can concentrate them under a single radical (square root of a square root is the 4th root): [itex]\sqrt[4]{x}[/itex].
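These force-multiply identities are quick to confirm numerically; a minimal Python sketch of the examples above:

[code]
lhs = 2 * (2 * 2 ** 0.5) ** 0.5   # 2 * sqrt(2 * sqrt(2))
mid = (8 * 2 ** 0.5) ** 0.5       # sqrt(8 * sqrt(2)), after one push
rhs = (8 ** 2 * 2) ** 0.25        # 4th root of (8^2 * 2), radicals concentrated
print(lhs, mid, rhs)              # all ~3.363586
[/code]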

For the above nested radical with a nestings, we end up with the following:
[tex] x^{\frac{n^a-1}{n^a}} = \sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}\sqrt[n]{x^{n-1}...}}}}[/tex]
Of course:
[tex]x^{\frac{n^a-1}{n^a}} =x^{\frac{n^a}{n^a}} \times x^{\frac{-1}{n^a}}=x \times x^{\frac{-1}{n^a}}[/tex]

Therefore dividing [itex]x- x^{\frac{n^a-1}{n^a}}[/itex] by x will result in:
[itex]1-x^{\frac{-1}{n^a}}[/itex]. Multiplying this quantity by [itex]n^a[/itex] results in the following equation: [itex]\left( 1-x^{\frac{-1}{n^a}} \right) \times n^a[/itex]

And if we take the limit as [itex]n^a\to\infty[/itex] we end up with a very familiar looking equation (https://www.physicsforums.com/showthread.php?t=544177).

Back to the original problem of the lack of proof that x = the infinitely nested radical: you can look at [itex]x^{\frac{n^a-1}{n^a}}[/itex]. Since [itex]\lim_{a\to\infty} \dfrac{n^a-1}{n^a}=1[/itex], we get [itex] \lim_{a\to\infty} x^{\frac{n^a-1}{n^a}} = x^1 = x[/itex].
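That limit is also easy to watch numerically (a quick Python sketch; x=3, n=2 are arbitrary):

[code]
import math

x, n = 3.0, 2
for a in (5, 10, 20):
    print(a, (1 - x ** (-1.0 / n ** a)) * n ** a, math.log(x))
    # the product closes in on ln(x) as n^a grows
[/code]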
 
  • #4
Matt Benesi said:
Hi Stephen, it's actually valid reasoning. Look over the MathWorld page on nested radicals (http://mathworld.wolfram.com/NestedRadical.html)

It isn't valid reasoning. It isn't any kind of reasoning. In fact, on the (very interesting) Wolfram page that you gave there are only assertions given without proofs. If you mean that you are asserting something on the basis of an authoritative reference, I'll buy that.

For example, if we assert:

1 = 1 - 1 + 1 - 1 + 1...

We can add (1 - 1) to both sides obtaining

1 + 1 - 1 = (1 -1) + 1 - 1 + 1 - 1 + ...

1 = 1 - 1 + 1 - 1 + ...

Which is the original equation. But that doesn't establish the correctness of the original equation.

Incidentally, this isn't the point I was bringing up, and as of yet I have not come up with a valid proof for the derivative conjecture, which is the most interesting property of nested functions.

I agree that if it turns out to be a property, it will be an interesting property.



But before we get to that, how general is the possibility of expressing the identity function as an infinite recursion of other functions?

Do you think things could be as general as this:

Hypothesis:

Let [itex] g(x,y) [/itex] be a real valued function of two real variables whose partial derivatives exist and are continuous. Further suppose that [itex] g(x,x) = x [/itex].

Conclusion (?):

Then there exists a constant k such that the sequence of functions [itex]\{ f_i\}[/itex] defined recursively by:
[itex] f_1(x) = g(x,k) [/itex]
For [itex] j > 1[/itex], [itex] f_j (x) = g(f_{j-1}(x),k) [/itex]

has the property that for all [itex] x, [/itex] [itex] \lim_{j \rightarrow \infty} f_j(x) = x [/itex]

Of course, I'm just trying to make a generalization based on the case [itex] g(x,y) = \sqrt[n]{x^{n-1} y} [/itex] and [itex] k = 1 [/itex].
 
  • #5
Stephen Tashi said:
It isn't valid reasoning. It isn't any kind of reasoning.
Am I assuming too many premises? If so, I apologize. I think the Wikipedia section on infinitely nested radicals (http://en.wikipedia.org/wiki/Nested_radical#Infinitely_nested_radicals) has a very succinct explanation:

Wikipedia said:
Under certain conditions infinitely nested square roots such as
[tex] x = \sqrt{2+\sqrt{2+\sqrt{2+\sqrt{2+\cdots}}}} [/tex]

represent rational numbers. This rational number can be found by realizing that x also appears under the radical sign, which gives the equation
[tex] x = \sqrt{2+x} [/tex]
Stephen Tashi said:
In fact, on the (very interesting) Wolfram page that you gave there are only assertions given without proofs.
I thought there was enough information given in those 3 statements...
I agree that if it turns out to be a property, it will be an interesting property.
I've tested it with various inverse functions, from exponential functions, trigonometric functions, to standard power functions (x^n). As long as we stay within the domain of the functions (arccos(x) or cos⁻¹(x) between -1 and 1, etc.), we end up finding the derivative (in a method discussed in another thread, https://www.physicsforums.com/showthread.php?t=544489, that I just saw your reply in! Got some work to do, so will save that for later).

But before we get to that, how general is the possibility of expressing the identity function as an infinite recursion of other functions?
As long as you can work out an algebraic "identity function", such as the various ones mentioned, you can do the infinite recursion. Remember, you must take x through various steps and arrive back at x = infinitely nested function (you must arrive at the exact same equation after doing those steps).
[itex] f_1(x) = g(x,k) [/itex]
For [itex] j > 1[/itex], [itex] f_j (x) = g(f_{j-1}(x),k) [/itex]
I'm drawing a blank about the k constant? Besides that, it's an understandable recursion formula (although I have my habit of defining it as "nestings" rather than recursions).
 
  • #6
Matt Benesi said:
I'm drawing a blank about the k constant?

From my point of view, a general method of expressing the identity function as an infinitely recursive function is to visualize [itex] x [/itex] as [itex] f^{-1} ( f(x)) [/itex] for some function [itex] f(x) [/itex] that has an inverse. Then write [itex] f(x) [/itex] as an expression so that it involves a term with a single [itex] x [/itex] in it. The only way that I see to express this in a general mathematical way is say that [itex] f(x) = g(x,x) [/itex] where [itex] g(x,y) [/itex] is a function of two variables.

So if I want to begin a recursive definition based on the above thinking, I can't say "Let [itex] f_1(x) = g(x) [/itex]" because I'm missing a variable in [itex] g [/itex]. Looking at your examples, it appears that I can realize them by saying "Let [itex] f_1(x) = g(x,k) [/itex]" if I pick the right value of the constant k.
 
  • #7
I'm still drawing a complete blank on why you would need an additional constant?

I don't use one in any of the functions, although I do specify a specific nesting depth of a (or call it recursivity depth).

Now, all of the functions with the interesting derivative property are of the form: [itex]x= f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))[/itex] so that we apply the function to both sides: [itex]f(x)= f\left( f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))\right)[/itex] which ends up as: [itex]f(x)= f(x)-x+ f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))[/itex] so that we can subtract [itex]f(x)-x[/itex] from both sides and end up with the original equation.

If you are trying to specify a nesting depth, or recursivity depth such that:
[tex]f_1(x)=f^{-1}(f(x)-x)[/tex]
[tex]f_2(x)= f^{-1}(f(x)-x + f^{-1}(f(x)-x))[/tex]
I've been calling the depth a, so the first is:
[tex]f(x,a) = f(x,1) = f^{-1}(f(x)-x)[/tex]
[tex]f(x,2) = f^{-1}(f(x)-x + f^{-1}(f(x)-x))[/tex]
Another way of specifying depth is (with [itex]f_0(x)=0[/itex]):
[tex]f_n(x)=f^{-1}(f(x)-x+f_{n-1}(x))[/tex]
which should strike you as resembling the Fibonacci sequence or Mandelbrot set syntax.
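That recursion is easy to implement generically. Below is a minimal Python sketch that accepts any (f, f⁻¹) pair; the tangent/arctangent pair and the point x=1 are my own test choices, picked to stay inside a well-behaved domain:

[code]
import math

def nest(f, finv, x, depth):
    # f_n(x) = finv(f(x) - x + f_{n-1}(x)), with f_0(x) = 0
    g = 0.0
    for _ in range(depth):
        g = finv(f(x) - x + g)
    return g

x = 1.0
g10 = nest(math.tan, math.atan, x, 10)
g11 = nest(math.tan, math.atan, x, 11)
print(g11)                       # tends to x = 1
print((x - g10) / (x - g11))     # tends to d/dx tan(x) = sec^2(x)
print(1 / math.cos(x) ** 2)      # sec^2(1) ~ 3.4255
[/code]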

Here is a quick example (note that the original post has both an incorrect and correct version of the following formula).
Note that [itex]f(x,B)=B^x[/itex] and [itex]f^{-1}(x,B)=\log_B(x)[/itex]

[tex]x=\log_B [B^x-x+\log_B [B^x-x+\log_B [B^x-x+\log_B [...]]]][/tex]
[tex]B^x=B^{\log_B [B^x-x+\log_B [B^x-x+\log_B [B^x-x+\log_B [...]]]]}[/tex]
[tex]B^x=B^x-x+\log_B [B^x-x+\log_B [B^x-x+\log_B [B^x-x+\log_B [...]]]][/tex]
subtract B^x-x from both sides and you end up with the original equation:
[tex]x=\log_B [B^x-x+\log_B [B^x-x+\log_B [B^x-x+\log_B [...]]]][/tex]​

Another side note: the multiplicative infinite nestings of the form [itex]x= f^{-1}\left(\frac{f(x)}{x} \times f^{-1}\left(\frac{f(x)}{x} \times f^{-1}(\cdots)\right)\right)[/itex] don't appear to display the derivative property (at least for the few I've analyzed; there is no rigorous proof against this as of yet).
 
  • #8
Matt Benesi said:
I'm still drawing a complete blank on why you would need an additional constant?

I don't use one in any of the functions, although I do specify a specific nesting depth of a (or call it recursivity depth).

If you fit the [itex] \sqrt[n]{x^{n-1}} [/itex] example into my framework, "you" (meaning a person who does use my framework) actually do use a constant.

There are certainly other attempts to state what you are doing in general terms. Another attempt is this:

Write [itex] x = f^{-1}(f(x)) [/itex] for some invertible function [itex] f[/itex].
Find a function [itex] g(x,y) [/itex] such that [itex] g(x,0) = f(x) [/itex]
Define the sequence of functions [itex] \{f_i\} [/itex] by:
[itex] f_1(x) = f^{-1}(g(x,x)) [/itex]
For [itex] j > 1[/itex], [itex] f_j(x) = f^{-1}( g(x,x) + f_{j-1}(x)) [/itex]

I don't know if that approach works. All I'm saying is that it is one way to generalize your example ( by using [itex] g(x,y) = f(x) - y [/itex]).

Can you state the most general procedure for expressing the identity function as the limit of a sequence of recursively defined functions? (And can you do it without using "infinite" symbolic expressions with "..."'s in them?)

Now, all of the functions with the interesting derivative property are of the form: [itex]x= f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))[/itex] so that we apply the function to both sides: [itex]f(x)= f\left( f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))\right)[/itex] which ends up as: [itex]f(x)= f(x)-x+ f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))[/itex] so that we can subtract [itex]f(x)-x[/itex] from both sides and end up with the original equation.

You should clarify what this is supposed to demonstrate. As an argument about numerical quantities, it doesn't make sense. To parody it:

If I start with the equation [itex] A = B [/itex] and apply [itex] f [/itex] to both sides, I get
[itex] f(A) = f(B) [/itex] Since [itex] f(A) = f(B) [/itex] then [itex] A - f(A) = B - f(B) [/itex], so I can subtract [itex] A - f(A) [/itex] from the left and [itex] B - f(B) [/itex] from the right and get the original equation.

Instead of that argument, what you are trying to express has something to do with the "algebraic form" of the terms in the equation. That's the type of mathematics one sees when people talk about "formal" power series. You're just supposed to treat them as strings of symbols and not worry about convergence. You are demonstrating something about the behavior of symbols. However, if you are trying to prove something about derivatives, I think you must eventually express the ideas in a way that is acceptable for doing proofs involving convergence, epsilons and deltas, etc.
 
  • #9
Stephen Tashi said:
I don't know if that approach works. All I'm saying is that it is one way to generalize your example ( by using [itex] g(x,y) = f(x) - y [/itex]).
Ok. What you really want to do is say [itex]g(x)=f(x) - x[/itex]. No need for the additional variable y. If you'd like I can provide a few numerical examples (probably tomorrow, it's getting quite late here).
Can you state the most general procedure for expessing the identity function as the limit of a sequence of recursively defined functions? (And can you do it without using "infinite" symbolic expressions with "..."'s in them?)
[tex]g_0(x) = 0[/tex]
[tex]g_n(x) = f^{-1}\left(f(x)-x+g_{n-1}(x)\right)[/tex]
[tex]x= \lim_{n\to\infty} g_n(x)[/tex]
It's very similar for the multiplication/division identity functions (proof/demo if requested):
[tex]g_0(x) = 1[/tex]
[tex]g_n(x) = f^{-1}\left(\frac{f(x)}{x} \times g_{n-1}(x)\right)[/tex]
[tex]x= \lim_{n\to\infty} g_n(x)[/tex]
You should clarify what this is supposed to demonstrate. As an argument about numerical quantities, it doesn't make sense.
You should re-read the math carefully, and perhaps the statement.
Statement: "all of the functions with the interesting derivative property are of the form:"
[tex]1.\,x= f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))[/tex]
Then I simply showed how the equation works, in case someone jumped in at this point.
Apply the function to both sides of the equation:
[tex]2.\,f(x)= f\left( f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))\right)[/tex]
[tex]3.\,f(x)= f(x)-x+ f^{-1}(f(x)-x + f^{-1}(f(x)-x+...))[/tex]
Once we subtract f(x)-x from both sides of equation 3, we end up with equation 1 again.
We could also add f(x)-x to both sides of equation 1 and arrive at equation 3, then apply the inverse function to both sides to arrive at equation 1.

The Wikipedia example (http://en.wikipedia.org/wiki/Nested_radical#Infinitely_nested_radicals), which leads to [itex]x=\sqrt{2+x}[/itex], is but one of many possible forms of this type of equation.

Specifically [itex]f(x)=x^2[/itex], [itex]f^{-1}(x)=\sqrt{x}[/itex] and lastly, with x=2:
[tex]x = \sqrt{x^2-x+\sqrt{x^2-x+...}} = \sqrt {2 +\sqrt{2+...}} = \sqrt{2 +x}[/tex]

However, if you are trying to prove something about derivatives, I think you must eventually express the ideas in a way that acceptable for doing proofs involving convergence, epsilons and deltas etc.
I wasn't even attempting to prove the derivative property as of yet, in fact, I was wondering if there was a specific name for the property so that I could read about it (which is why I asked about it in the calculus sub-forum), and perhaps avoid a re-proof if someone else has already written one out. :D
 
  • #10
Matt Benesi said:
Ok. What you really want to do is say [itex]g(x)=f(x) - x[/itex]. No need for the additional variable y.

I'm not trying to reproduce only the single example of f(x) - x. I want it to be included as a special case, but I'm speculating there might be other g(x,y) that work besides g(x,y) = f(x) - y.
It's very similar for the multiplication/division identity functions (proof/demo if requested):

You've given examples of two different types of methods, one involving addition or subtraction of x and the other involving multiplication and division of x. What I'm asking is what is the statement of a general method that includes both these methods (and perhaps others) as special cases?

Then I simply showed how the equation works, in case someone jumped in at this point.

Let's try to define what it means for the equation to work. After all, suppose someone is trying to prove a trigonometric identity. He writes the identity down as an equation. He performs various manipulations that preserve the equality of two sides of an equation and after several steps he arrives back at the original equation. Most graders would say that he gets an "F" on that one.

I'll take a shot at it. We start with an equation of the form [itex] x = f^{-1}[/itex](some stuff).
We apply f to both sides obtaining [itex]f(x) =[/itex] (some stuff)
Then we do some operation to both sides of the equation that converts [itex] f(x) [/itex] to [itex] x [/itex] and the (some stuff) to (some other stuff). We find that, as infinite strings of symbols, (some other stuff) is equal to [itex] f^{-1}[/itex](some stuff).

So what we are demonstrating is not something about showing numerical equality. We aren't proving that the first equation is correct. We are showing something about the transformation properties of an infinite string of symbols.

(And for purposes of mathematics like calculus that worries about things like convergence, demonstrations like this are not formal proofs. As far as I can see the Wikipedia and the Wolfram page you cited don't give formal proofs. The Wolfram page has references for the results it states. What is acceptable as a formal proof could be determined by what is in those references.)

What I find hard to formulate in precise mathematical terms is my statement that we apply "an operation" to both sides of the equation. If we wanted an operation to change f(x) to x, the natural operation would be to take [itex] f^{-1} [/itex] of both sides. But that's not what we do. We do something like divide by [itex] x^{n-1} [/itex] or subtract [itex] f(x) - x [/itex]. What is a precise way to express "an operation" in a general way that includes both those examples as special cases?
 
  • #11
Stephen Tashi said:
You've given examples of two different types of methods, one involving addition or subtraction of x and the other involving multiplication and division of x. What I'm asking is what is the statement of a general method that includes both these methods (and perhaps others) as special cases?
There are perhaps infinitely many variations, besides the strict addition and multiplication examples. The following one combines addition/subtraction and multiplication/division; r(x) is the infinitely repeated function set.
[tex] x = f^{-1}\left ( 2 \times f(x) - \dfrac{f(x)}{x} \times f^{-1} \left ( 2 \times f(x) - \dfrac{f(x)}{x} \times r(x) \right ) \, \right)[/tex]
First apply function f to both sides:
[tex] f(x) = 2 \times f(x) - \dfrac{f(x)}{x} \times f^{-1} \left ( 2 \times f(x) - \dfrac{f(x)}{x} \times r(x) \right ) [/tex]
subtract [itex]2 \times f(x)[/itex] from both sides:
[tex] -f(x) = - \dfrac{f(x)}{x} \times f^{-1} \left ( 2 \times f(x) - \dfrac{f(x)}{x} \times r(x) \right ) [/tex]
divide both sides by [itex]-\dfrac{f(x)}{x}[/itex]:
[tex] \dfrac{-f(x) \times x}{-f(x)} = x = f^{-1} \left ( 2 \times f(x) - \dfrac{f(x)}{x} \times r(x) \right ) [/tex]
Remembering that r(x) is the infinitely repeated function sequence.
He performs various manipulations that preserve the equality of two sides of an equation and after several steps he arrives back at the original equation. Most graders would say that he gets an "F" on that one.
Take some time to realize that adding or removing a recursion from an infinitely recursed function is not the same as circular reasoning (trivial manipulation), nor is it invalid. An infinite amount of steps plus or minus a finite amount of steps is still an infinite amount of steps.
1) Let's call the infinitely nested function r(x)
2) For x to be equal to r(x), we must be able to apply the same functions to both r(x) and x and arrive back at x= r(x)
a. these functions must denest or nest a portion of r(x) or we run the risk of doing something circular (or trivial) such as your "parody" example (trivial or circular manipulation as in: add 1 to both sides, multiply both sides by 2, subtract 2 from both sides, divide both sides by 2)

b. when [itex]x=r(x)=\sqrt{x^2-x+\sqrt{x^2-x+...}}[/itex] we must be able to either add a recursion to r(x), or remove a recursion:

i. to add a recursion to this example subtract x from both sides [itex] x - x = 0 = -x + \sqrt{x^2-x+\sqrt{x^2-x+...}}[/itex]

ii. add x^2 to both sides [itex] x-x+x^2 = x^2 = x^2-x + \sqrt{x^2-x+\sqrt{x^2-x+...}}[/itex]

iii. take the square root of both sides [itex]\sqrt{x^2} = x = \sqrt{x^2-x+\sqrt{x^2-x+...}}[/itex]

Do the inverse operations in reverse order to remove a recursion (square both sides, subtract x^2 from both sides, add in x to both sides).

So what we are demonstrating is not something about showing numerical equality. We aren't proving that the first equation is correct. We are showing something about the transformation properties of an infinite string of symbols.
Try setting x to some recursive formula that doesn't allow you to bring both sides back to the same formula once a recursion has been added or removed.
Remember x must be equal to one recursion + x (since x is equal to an infinite recursion). Therefore one should be able to remove one recursion and have the same formula. I'll show you 2 examples of why the following infinite recursion does NOT work for non trivial solutions:
[tex]x=\sqrt{x^3-x+\sqrt{x^3-x+...}}[/tex]
which should be equal to the following equation if the infinite recursion [itex]\sqrt{x^3-x+\sqrt{x^3-x+...}}[/itex] is equal to x
[tex]x=\sqrt{x^3-x+x}[/tex]
square both sides:
[itex]x^2=x^3-x+x[/itex], in other words [itex]x^2=x^3[/itex], which holds only for the trivial solution x=0 and for x=1; but x=1 is an invalid solution, since the nesting then collapses to 0 (so it ends up giving 1=0; x=1 is not in the domain for these formulas, even for certain otherwise valid ones):
[tex] 1= \sqrt{1^3-1+\sqrt{1^3-1+...}} = \sqrt{0+\sqrt{0+...}}[/tex]

Also note that there is no way to remove or add a recursion from this formula without altering it; non-trivial manipulation simply leads to a new formula:
[tex]x=\sqrt{x^3-x+\sqrt{x^3-x+...}}[/tex]
[tex]x^2=x^3-x+\sqrt{x^3-x+\sqrt{x^3-x+...}}[/tex]
notice we end up with x being equal to something else (although it still works for the trivial solution of 0):
[tex]x=x^3-x^2+\sqrt{x^3-x+\sqrt{x^3-x+...}}[/tex]
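You can watch this failure numerically: the iteration still converges, but to the wrong number (a quick Python sketch; x=2 is an arbitrary test value):

[code]
x = 2.0
g = 0.0
for _ in range(20):
    g = (x ** 3 - x + g) ** 0.5   # sqrt(x^3 - x + g)
print(g)   # converges to 3 (the root of g^2 = g + 6), not to x = 2
[/code]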
And for purposes of mathematics like calculus that worries about things like convergence, demonstrations like this are not formal proofs. As far as I can see the Wikipedia and the Wolfram page you cited don't give formal proofs.
I don't think they bothered with extensive proofs because of the nature of algebra and infinitely nested functions. The invalid nesting I posted above should highlight the difference between a valid and invalid infinite nesting.
What is a precise way to express "an operation" in a general way that includes both those examples as special cases?
These examples are both part of the identity functions for infinitely nested functions. The identity functions of infinitely nested functions are those functions that preserve the integrity of the function when a recursion/nesting is added or removed.
1) assume an algebraically valid non trivial infinite nest x= r(x) within valid domain and range of functions in r(x) !
2) adding or removing a nesting from r(x) must result in the original equality x=r(x)

For example, the following additive portion: [itex]x^2-x[/itex] is not the whole identity function for that particular function. You add that in then take the square root of both sides, so the identity function of this particular function involves more than one step.
[itex]x=\sqrt{x^2-x+\sqrt{x^2-x+...}}[/itex]

[itex]x+x^2-x=x^2-x+\sqrt{x^2-x+\sqrt{x^2-x+...}}[/itex]

[itex]x^2=x^2-x+\sqrt{x^2-x+\sqrt{x^2-x+...}}[/itex]
square root:
[itex]x=\sqrt{x^2-x+\sqrt{x^2-x+\sqrt{x^2-x+...}}}=\sqrt{x^2-x+\sqrt{x^2-x+...}}[/itex]

Here's a neat fact about the formula right above this sentence. Setting x equal to the golden ratio gives you the nested root formula for the golden ratio, since [itex]\phi^2-\phi=1[/itex]. Check equation 14 on the MathWorld golden ratio page (http://mathworld.wolfram.com/GoldenRatio.html) (formula 66*). In fact, many different constants can be made from these various equations.
 
  • #12
Matt Benesi said:
Take some time to realize that adding or removing a recursion from an infinitely recursed function is not the same as circular reasoning (trivial manipulation), nor is it invalid.

It is invalid as a method of showing the numerical equality between the two sides of the original equation. It is valid as a method of showing something about the manipulation of infinite strings of symbols.

For example, you showed that this equation "works":

[tex] x = f^{-1}( f(x) - x + f^{-1}(f(x) -x + ...)) [/tex]
Let [itex] f(x) = x [/itex]

Define
[itex] g_0(x) = 0 [/itex]
[itex] g_{n}(x) = f^{-1}( f(x) -x + g_{n-1}(x) ) [/itex]

Then we have:

[itex] g_0(x) = 0 [/itex]
[itex] g_1(x) = f^{-1}(x - x + 0) = f^{-1}(0) = 0 [/itex]
[itex] g_2(x) = f^{-1}(x - x + g_1(x)) = f^{-1}(0 + 0) = 0 [/itex]
etc.

The statement: [itex] x = \lim_{n \rightarrow \infty} g_n(x) [/itex] is false except at [itex] x = 0 [/itex]
An infinite amount of steps plus or minus a finite amount of steps is still an infinite amount of steps.

I think it would be better to talk about countably infinite strings of symbols rather than "steps". I do realize that if we have two countably infinite strings of symbols and we remove a finite number of symbols from one string then the two strings can remain equal.

1) Let's call the infinitely nested function r(x)
2) For x to be equal to r(x), we must be able to apply the same functions to both r(x) and x and arrive back at x= r(x)

You have to be precise about what you mean by "apply the same functions". In the context of an ordinary discussion of calculus, a "function" is a mapping from numbers to numbers, not a mapping from strings of symbols to strings of symbols. What you are talking about is the latter kind of function. For example, as a mapping of symbols to symbols, sin(2x) can be interpreted as a function that produces an infinite series of symbols or as a function that produces a finite string of symbols ( involving trig functions). As mappings of symbols, these are two different functions. As mappings of numbers to numbers, they are the same function.
Try setting x to some recursive formula that doesn't allow you to bring both sides back to the same formula once a recursion has been added or removed.

That's an interesting challenge. But can we prove that if the symbolic behavior of the expression doesn't "work" then the numerical equality of the equation doesn't hold? After all, there are identities in mathematics that establish the numerical equality of two very different groups of symbols.
I don't think they bothered with extensive proofs because of the nature of algebra and infinitely nested functions. The invalid nesting I posted above should highlight the difference between a valid and invalid infinite nesting.

I think the web pages were intended as an intuitive introduction, so they don't have formal proofs. If the references that the web pages cite are valid, the references would have to show numerical convergence by the recognized methods of mathematical analysis.
 
  • #13
Stephen Tashi said:
It is invalid as a method of showing the numerical equality between the two sides of the original equation. It is valid as a method of showing something about the manipulation of infinite strings of symbols.

For example, you showed that this equation "works":

[tex] x = f^{-1}( f(x) - x + f^{-1}(f(x) -x + ...)) [/tex]
It should be patently obvious that when f(x)=x, the above equation cannot work. I don't think I should even have to mention something that obvious. In addition, when x=1, there are many f(x) that leave 1 fixed, which causes the exact same problem (another obvious exception to that specific type of equation).

Do you really need me to explain every specific case in which these types of equations do not work?

Here are some examples:

1) x has to be within the range of the inverse function [itex]f^{-1}[/itex]
2) f(x)-x + r(x) must be within the domain of the inverse function [itex]f^{-1}[/itex]
3) x must be within the domain of the function f
4) trivial quantities such as zero are trivial (tautologically speaking)
5) f(x) cannot be equal to x (exceptions may exist, none come to mind)

When x cannot equal r(x), it should be obvious.
I think it would be better to talk about countably infinite strings of symbols rather than "steps".
I had something particular in mind. Single nesting= 1 step. Anyways...
You have to be precise about what you mean by "apply the same functions".
What do you think "apply the function g(y,x)= y + f(x)-x to both sides of an equation" means?
[tex]x= f^{-1}\left(f(x)-x+f^{-1}(f(x)-x+...)\right)[/tex]
In this case, y= either side of the equation and x=x:
[tex]f(x)=f(x)-x+ f^{-1}\left(f(x)-x+f^{-1}(f(x)-x+...)\right)[/tex]
Now another function, [itex] f^{-1}(z)[/itex] is applied to both sides with the result being the original equation.


If you can think of examples besides the above list in which x will not equal an infinitely nested function group, we should add them to the list. Of course, there are other things I really want to learn about, such as what is the name of the derivative approximation method using these functions? In addition, specific rules need to be written out for the derivative approximation method, which as far as I can tell requires valid x= infinitely nested functions.
 
  • #14
Matt Benesi said:
Do you really need me to explain every specific case in which these types of equations do not work?

Can you? The normal procedure in stating a theorem is to state the premises that are required for the conclusion to be true, not to list all possible situations where the conclusion is false. If your aim is to do something that is recognized as mathematics by modern standards, then you need to state all the assumptions that you are making about [itex] f(x) [/itex] and show those assumptions imply the convergence of the sequence of functions [itex] g_i(x) [/itex] to [itex] x [/itex] according to the usual definition of convergence of a sequence of functions. It is not sufficient to begin with an equation that assumes there is convergence to x and then manipulate it to show you can cast off one layer of the recursion.

I'm not trying to function as the mathematical Thought Police. I agree that your methods of manipulating infinite strings of symbols suggest interesting results but it would require more work to prove your assertions and clarify precisely when they apply.

The only way that I see, so far, to determine whether a function f(x) "works" in the equation, is test it as a specific case.
 
  • #15
Here's another one that looks like it doesn't work: [itex] f(x) = \frac{x}{2} [/itex] at [itex] x = 2 [/itex]


[itex] g_1(2)= 0 [/itex]
[itex] g_2 (2) = 2( \frac{2}{2} - 2) = 2 ( 1 -2) = 2(-1) = -2 [/itex]
[itex] g_3(2) = 2( \frac{2}{2} - 2 + (-2)) = 2( -1 + (-2) ) = - 6 [/itex]
[itex] g_4(2) = 2( \frac{2}{2} - 2 + (-6) ) = 2( -1 + (-6)) = - 14 [/itex]
 
  • #16
Stephen Tashi said:
The normal procedure in stating a theorem is to state the premises that are required for the conclusion to be true, not to list all possible situations where the conclusion is false. If your aim to do something that is recognized as mathematics by modern standards, then you need to state all the assumptions that you are making about [itex] f(x) [/itex] and show those assumptions imply the convergence of the sequence of functions [itex] g_i(x) [/itex] to [itex] x [/itex] according the usual definition of convergence of a sequence of functions. It is not sufficient to begin with an equation that assumes there is convergence to x and then manipulate it to show you can cast off one layer of the recursion.
I see that. There are many conditions under which the various r(x) do not work (r(x) meaning the full infinite recursion, as I've been using it).

For example, I've been working with x and n greater than 1 for all equations of the format [itex]\sqrt[n]{x^n-x+\sqrt[n]{x^n-x+...}}[/itex]. If you set x=-2 and n=2, you end up approaching 3 (as [itex]3^2-3=6[/itex], just like [itex](-2)^2-(-2)=6[/itex]). Now I could specify:
[itex] |x|= \sqrt[n]{|x^n|-|x|+\sqrt[n]{|x^n|-|x|+...}}[/itex] for all x and n greater than 1, but I don't have a valid proof, as you've shown (induction not being proof). I'm wondering if Ramanujan proved this particular form? Anyways, for this form of the equation:

x>1 and n>1
[tex] x^n > x^n - x[/tex]
[tex] x > \sqrt[n]{x^n-x}[/tex]
Because [itex] x > \sqrt[n]{x^n-x}[/itex] and [itex]g_1(x) = \sqrt[n]{x^n-x} [/itex], we have [itex]g_1(x)< x[/itex]; therefore [itex]g_2(x)= \sqrt[n]{x^n -x + g_1(x)} < x[/itex], and so on.

[tex]g_i(x)=\sqrt[n]{x^n-x+g_{i-1}(x)}[/tex]
You can see that each [itex]g_i[/itex] is larger than the one before it, although the sequence never reaches x, since [itex]g_{i-1}[/itex] would have to equal x for [itex]g_i[/itex] to equal x (and it always falls short). You can also see that [itex]g_i(x)[/itex] approaches the limit x as [itex]i\to\infty[/itex]. Can you think of a better way to say this? I'm drawing a blank.

Therefore...
[tex] x= \sqrt[n]{x^n-x+\sqrt[n]{x^n-x+...}}[/tex]

Interesting:
[tex][g_i(x)]^n=x^n-x+g_{i-1}(x)[/tex]
[tex]x=x^n-[g_i(x)]^n+g_{i-1}(x)[/tex]
[tex]x^n-x=[g_i(x)]^n-g_{i-1}(x)[/tex]
[tex]x^n-[g_i(x)]^n=x-g_{i-1}(x)[/tex]
remembering that we end up with something along the lines of (in some cases, more than one operation is required):
[tex]nx^{n-1} = \lim_{i\to\infty} \dfrac{x-g_{i-1}(x)}{x-g_{i}(x)}[/tex]
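Both the rearranged identity and the limit can be spot-checked numerically (a minimal Python sketch with the arbitrary test values x=2, n=3):

[code]
x, n = 2.0, 3
g_prev, g = 0.0, (x ** n - x) ** (1.0 / n)
for _ in range(8):
    g_prev, g = g, (x ** n - x + g) ** (1.0 / n)
print(x ** n - g ** n, x - g_prev)               # equal, per the identity
print((x - g_prev) / (x - g), n * x ** (n - 1))  # ratio vs. n x^(n-1) = 12
[/code]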
I'm not trying to function as the mathematical Thought Police. I agree that your methods of manipulating infinite strings of symbols suggest interesting results but it would require more work to prove your assertions and clarify precisely when they apply.
Thanks. For now, I think each variation will have to be proved individually. I still am looking forward to working on the derivative property, but I suppose a couple of proofs for various r(x) are in order.

The only way that I see, so far, to determine whether a function f(x) "works" in the equation, is test it as a specific case.
A few variations will get nailed. It would be nice to have a general proof, but... that's out of my reach for now.
 
  • #17
Here is another quick proof for a range of values (B>1 and x>1).
To be proved:
[tex]x=\log_B [B^x-x+\log_B [B^x-x+...]][/tex]
To keep it simple, for this part of the proof assume B>1 and x>1, we can work out how other values work later.

[tex]g_0(x)=0[/tex]
[tex]g_i(x)= \log_B {\left[B^x-x+g_{i-1}(x)\right]}[/tex]

So for this one, [itex]B^x-x[/itex] must be greater than 1. Obviously the log of that will be less than x, and remain so except at the limit [itex]g_{i\to\infty}[/itex].

Once again, a little neat thing popped out at me from the [itex]g_i[/itex] equation (at least I like it):
[tex]B^{g_i(x)}= B^x-x+g_{i-1}(x)[/tex]
[tex]x-g_{i-1}(x)= B^x-B^{g_i(x)}[/tex]
And once again, applying the inverse function (log base B) to both individual parts of the higher-i side of the equation, and dividing out (following the various rules outlined in the other thread; keep in mind I haven't clearly defined them as of yet!) gives us the approximate derivative of [itex]B^x[/itex], [itex]B^x\ln B[/itex], as [itex]g_{i\to\infty}[/itex].

[itex]\dfrac{x-g_{i-1}(x)}{\log_B [B^x]- \log_B [B^{g_i(x)}]}\; = \; \dfrac{x-g_{i-1}(x)}{ x-g_i(x)}[/itex]​

Anyways, will do a few trig ones after the weekend, won't be able to check in for a few days, so have a good one.
 
  • #18
A squeeze theorem?

I'll have to read that carefully before understanding it.

Today's hazy thoughts on the subject.

It simplifies notation to interchange the roles of [itex] f^{-1} [/itex] and [itex] f [/itex], which I will do. I'll also write the index of the recursion variable in brackets instead of as a subscript.

So I'll define
[itex] g[1](x) = 0 [/itex]
[itex] g[n](x) = f ( f^{-1}(x) - x + g[n-1](x)) , n > 1 [/itex]

For linear functions of x , [itex] f(x) =cx + d [/itex], the associated sequence [itex] g [/itex] appears to converge to [itex] x [/itex] when [itex] |c| < 1[/itex] since it is the sequence of partial sums of an infinite geometric series that sums to [itex] x [/itex].
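To spell that out: with [itex] f(u) = cu + d [/itex] we have [itex] f^{-1}(x) = (x-d)/c [/itex], so the recursion collapses to an affine map,

[tex] g[n](x) = c\left(\frac{x-d}{c} - x + g[n-1](x)\right) + d = (1-c)\,x + c\,g[n-1](x) [/tex]

and unwinding it from [itex] g[1](x) = 0 [/itex] gives the partial sums of a geometric series,

[tex] g[n](x) = (1-c)\,x\sum_{k=0}^{n-2} c^k \;\to\; (1-c)\,x \cdot \frac{1}{1-c} = x \quad \text{when } |c| < 1 [/tex]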

Suppose we are given a non-linear function [itex] f(x) [/itex] and we can sandwich it between the graphs of two linear functions [itex] A(x) [/itex] and [itex] B(x) [/itex] as long as [itex] x [/itex] is in some interval [itex] I [/itex]. So for [itex] x \in I [/itex], [itex] A(x) \le f(x) \le B(x) [/itex].

Suppose that the associated sequences [itex] g_A [/itex] and [itex] g_B [/itex] both converge to the identity function (for any real number x) and that the sequence [itex] g_f [/itex] associated with [itex] f(x) [/itex] at a given [itex] x = x_0 [/itex] never involves evaluating [itex] f [/itex] at any point outside the interval I.

Intuitively, since the recursion process moves the graphs of both linear functions to be the graph of the identity function, it also should move the graph of f to be the identity function.

This might be formalized enough to use the squeeze theorem for sequences.

If it is true that [itex] g_A(x_0) \le g_f(x_0) \le g_B(x_0) [/itex] and both the extreme terms approach [itex] x_0 [/itex] in the limit, then the middle term should also.

I haven't visualized the recursion process carefully enough to know if that inequality must hold when the graph of f is sandwiched.
 
  • #19
Matt Benesi said:
Here is another quick proof for a range of values (B>1 and x>1).
To be proved:
[tex]x=\log_B [B^x-x+\log_B [B^x-x+...]][/tex]
...
Obviously the log of that will be less than x, and remain so except at the limit [itex]g_{i\to\infty}[/itex].

You showed [itex] x [/itex] is an upper bound for the sequence [itex] g_i [/itex] and you showed the sequence [itex] g_i [/itex] is increasing. But that doesn't prove the limit of the sequence must be [itex] x [/itex]. You would have to show that for any [itex] x - \delta < x [/itex], the sequence eventually becomes greater than [itex] x - \delta [/itex].
 
  • #20
What Taylor's approximation suggests

Let [itex] x [/itex] be a constant and [itex] f [/itex] be an invertible function, and assume both [itex] f [/itex] and its inverse are differentiable.

Define:
[itex] g_1(x) = f( f^{-1}(x) - x ) [/itex]
[itex] g_n(x) = f( f^{-1}(x) - x + g_{n-1}(x)) [/itex] for [itex] n > 1 [/itex].

Let's see what Taylor's approximation says about the above recursion.

Approximate [itex] f( p - q) [/itex] by [itex] f(p) - q f'(p) [/itex]

Let [itex] x_0 = f^{-1}(x) [/itex]

[itex] g_1(x) = f( f^{-1}(x) - x) \approx f(f^{-1}(x)) - x\, f'(f^{-1}(x)) = x - x\, f'(x_0) [/itex]

[itex] g_2(x) = f( f^{-1}(x) - x + g_1(x)) \approx f(f^{-1}(x) - x + x - x f'(x_0)) [/itex]

[itex] = f(f^{-1}(x) - x f'(x_0)) \approx f(f^{-1}(x)) - x f'(x_0)\, f'(x_0) [/itex]

[itex] = x - x (f'(x_0))^2 [/itex]

[itex] g_3(x) = f( f^{-1}(x) - x + g_2(x)) \approx f( f^{-1}(x) - x + x - x(f'(x_0))^2) [/itex]

[itex] = f( f^{-1}(x) - x (f'(x_0))^2) \approx f(f^{-1}(x)) - x (f'(x_0))^2\, f'(x_0) [/itex]

[itex] = x - x (f'(x_0))^3 [/itex]

This suggests that the critical number in convergence is [itex] f'(x_0) = f'( f^{-1}(x)) [/itex]. It also suggests [itex] \frac{g_{n+1}(x) - x}{g_n(x) - x} \approx f'(x_0) [/itex]
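A numerical check of these suggestions (a minimal Python sketch; my test case is f(u) = u^(1/3), so f⁻¹(x) = x³ and f'(x₀) = 1/(3x²), with x = 2 arbitrary):

[code]
x = 2.0
fp = 1.0 / (3 * x ** 2)        # f'(x_0) = f'(f^{-1}(x)) for f(u) = u^(1/3)

gs, g = [], 0.0
for _ in range(10):
    g = (x ** 3 - x + g) ** (1.0 / 3)   # g_n = f(f^{-1}(x) - x + g_{n-1})
    gs.append(g)

print((gs[-1] - x) / (gs[-2] - x), fp)  # ratio vs. f'(x_0) = 1/12
print(gs[3], x - x * fp ** 4)           # g_4 vs. one-term Taylor estimate
[/code]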
 
  • #21
Stephen, I've got to make sure I don't get caught up in this tonight. I'd be up way too late. Hopefully I will get to it tomorrow evening.

Thank you for the input.
 
  • #22
Stephen Tashi said:
You showed [itex] x [/itex] is an upper bound for the sequence [itex] g_i [/itex] and you showed the sequence [itex] g_i [/itex] is increasing. But that doesn't prove the limit of the sequence must be [itex] x [/itex]. You would have to show that for any [itex] x - \delta < x [/itex], the sequence eventually becomes greater than [itex] x - \delta [/itex].

Remember that any [itex]g_i (x) [/itex] is some form of [itex]x-\delta[/itex] (it's always slightly short of x), with [itex]x\ge \delta> 0[/itex]. Remember that the upper bound for [itex]\delta[/itex] is x at [itex]g_0=0[/itex]. We'll specify [itex]\delta_i[/itex] in the following way: [itex]x-\delta_i=g_i(x)[/itex] with [itex]\delta_0=x[/itex] or whatever.

[tex]x-\delta_i = g_i(x)= \log_B {\left[B^x-x+g_{i-1}(x)\right]}[/tex]
[tex]x-\delta_i = \log_B {\left[B^x-x+x-\delta_{i-1}\right]}[/tex]
We already know that [itex]g_i>g_{i-1}[/itex] so this might be sufficient:
[tex]x-\delta_i = \log_B {\left[B^x-\delta_{i-1}\right]}[/tex]
Anyways, I like the way this is working out, so am typing up a few (hopefully correct) variations for me to glance at while we are working on this:

[itex]B^{x-\delta_i} = B^x-\delta_{i-1} \qquad \qquad \qquad \delta_{i-1}= B^x- B^{x-\delta_i} \qquad \qquad \qquad B^{x-\delta_i}+\delta_{i-1} = B^x[/itex]

[itex]\delta_i =x - \log_B {\left[B^x-\delta_{i-1}\right]} \qquad \qquad \qquad \qquad x = \delta_i + \log_B {\left[B^x-\delta_{i-1}\right]}[/itex]

[tex]B^{x} =B^{\delta_i} \times \left( B^x-\delta_{i-1} \right) \qquad \qquad \qquad \qquad \delta_i=\log_B \left[\frac{B^{x}}{B^x-\delta_{i-1}}\right][/tex]

All right. Time to do some stuff, then get back to this. :D
 
  • #23


Stephen Tashi said:
This suggests that the critical number in convergence is [itex] f'(x_0) = f'( f^{-1}(x)) [/itex]. It also suggests [itex] \frac{g_{n+1}(x) - x}{g_n(x) - x} \approx f'(x_0) [/itex]

It also apparently suggests:
[tex] g_n(x) = x - x\,(f'(x_0))^n [/tex]

Now, this is only the first term of the Taylor series for this (type of) equation.

With [itex]f(x)=x^{1/n}[/itex] and [itex]f^{-1}(x)=x^n[/itex] and [itex]f'(x)=\frac{x^{\frac{1-n}{n}}}{n}[/itex], we get [itex]f'(f^{-1}(x))=\frac{x^{1-n}}{n}=\frac{1}{nx^{n-1}}[/itex], which is [itex]\frac{1}{(f^{-1})'(x)}[/itex].

We can see that the one-term Taylor approximation for this particular sequence approaches x. We can also see that the reciprocal of your ratio approaches the derivative of the inner function, which is more or less the "interesting property" I was noticing.
 

