How to prove this approximation?

In summary, the conversation is about a mathematical approximation involving repeated logarithms and how to prove it. The domain of the variable x is discussed, with a suggestion to plot the expression on the complex plane. A possible approach to proving the approximation is outlined. There is some disagreement about the range of validity of the approximation, and the conversation ends with a comparison of two equations that are shown to be equivalent in the limit.
  • #1
Kumar8434
I've arrived at it without using mainstream mathematics. I'm looking for a proof that uses some widely-known mathematics. I'm sorry if I'm using my own notation, but it's the only way to make the expression compact.
The notation is:
$$\log^n_xy$$: the log with base x applied n times to y. For example, $$\log^3y=\log(\log(\log(y))),$$ all with the same base.
The approximation is: $$\frac{\log_ax_2-\log_ax_1}{\log^{n+1}_ax_2-\log^{n+1}_ax_1}\cdot\frac{1}{(\ln a)^n}\approx \prod_{i=1}^n\log_a^ix_1,$$ when $$\frac{\log_{a}^{n}x_2}{\log_{a}^{n}x_1}\approx 1.$$
1. Is it correct? 2. How can it be proved?
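For a quick numerical sanity check of the claim (not part of the original question; the values ##a=e##, ##x_1=10^6##, ##x_2=10^7##, ##n=3## are only illustrative), one can run something like the following MATLAB sketch, which evaluates both sides and the ratio of the n-th iterated logs:
Matlab:
% Numerical sanity check of the claimed approximation (illustrative values).
a = exp(1); x1 = 1e6; x2 = 1e7; n = 3;

L1 = zeros(1, n+1); t = x1;
for k = 1:n+1
    t = log(t) / log(a);   % t = log base a applied k times to x1
    L1(k) = t;
end

L2 = zeros(1, n+1); t = x2;
for k = 1:n+1
    t = log(t) / log(a);   % t = log base a applied k times to x2
    L2(k) = t;
end

lhs = (L2(1) - L1(1)) / (L2(n+1) - L1(n+1)) / log(a)^n;
rhs = prod(L1(1:n));
fprintf('lhs = %.4f   rhs = %.4f   ratio of n-th iterated logs = %.4f\n', ...
        lhs, rhs, L2(n)/L1(n));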
 
  • #2
What is the domain of x?

It can't be all of the reals, right? At some point the argument of the log will become negative, and the log of a negative number is undefined over the reals.

An interesting thing to do would be to write a MATLAB program that plots log(log(...(x))) on the complex plane to see how it behaves.
Matlab:
n = 100;              % number of repeated log applications
z = 3;                % starting value
zz = zeros(1, n);     % preallocate storage for the iterates
for i = 1:n
    z = log(z);       % for negative real z, MATLAB's log returns a complex value
    zz(i) = z;
end
plot(zz, '*')         % plotting a complex vector shows real vs. imaginary parts
 
  • #3
Should work, as long as the n-times repeated logarithm is well-defined. How I would try to prove it: first reduce everything to the natural logarithm, then replace ##\ln(x_i)## by ##y_i## - you don't use the value without a logarithm anyway. Then invert both sides of the equation (1/...) and interpret the result as a derivative. Induction can be useful.
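For concreteness, a short worked version of that sketch (base ##e##, with ##y_i=\ln x_i## and all iterated logs assumed positive): inverting the claimed relation turns it into the statement that a difference quotient of the n-fold iterated log approximates its derivative,
$$\frac{\ln^n y_2-\ln^n y_1}{y_2-y_1}\approx\left.\frac{d}{dy}\ln^n y\right|_{y=y_1}=\frac{1}{y_1\,\ln y_1\,\ln^2 y_1\cdots\ln^{n-1}y_1},$$
where the last equality is just the chain rule. Substituting back ##y_i=\ln x_i## and inverting both sides recovers the formula in post #1 (for a general base ##a##, the ##1/(\ln a)^n## factor comes from converting each ##\log_a^i x_1## to natural logarithms).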
 
  • #4
My plot shows that the iteration converges to a non-zero complex number, about ##0.3181 + 1.3372i##, after 100 iterations.
 
  • #5
I was assuming we don't make n too large, so everything stays real.
 
  • #6
So here's how I would go about it. Let's just assume that the base is [itex]e[/itex]. Let's assume that [itex]x_1 = x[/itex] and [itex]x_2 = x + \delta x[/itex]. Let's assume that [itex]x[/itex] is so big that [itex]x > 0, log(x) > 0, log(log(x)) > 0, ...log^n(x) > 0[/itex]. (Up to a maximum value of [itex]n[/itex]; when [itex]n[/itex] gets really large, [itex]log^n(x)[/itex] becomes negative.)

Then to first order, [itex]log(x+\delta x) = log(x(1+ \frac{\delta x}{x})) = log(x) + log(1+ \frac{\delta x}{x}) \approx log(x) + \frac{\delta x}{x}[/itex]
Then [itex]log^2(x+\delta x) \approx log(log(x) + \frac{\delta x}{x}) \approx log(log(x)) + \frac{\delta x}{x log(x)}[/itex]
Etc.

So [itex]log^j(x+\delta x) \approx log^j(x) + \frac{\delta x}{x \cdot log(x) \cdot log(log(x)) \cdot ... log^{j-1}(x)}[/itex]

I think your fact follows from that.
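A small MATLAB check of that first-order expansion (not from the original post; ##x=10^6##, ##\delta x=10^3## and ##j=3## are only illustrative values):
Matlab:
% Compare log^j(x + dx) with the first-order expansion
%   log^j(x) + dx / (x * log(x) * ... * log^(j-1)(x)).
x = 1e6; dx = 1e3; j = 3;

it = zeros(1, j); t = x;
for k = 1:j
    t = log(t); it(k) = t;   % it(k) = log applied k times to x
end

s = x + dx;
for k = 1:j
    s = log(s);              % s = log^j(x + dx), computed directly
end

firstOrder = it(j) + dx / (x * prod(it(1:j-1)));   % prod([]) = 1 covers j = 1
fprintf('direct = %.10f   first order = %.10f\n', s, firstOrder);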
 
  • #7
The last equation is nearly identical to the equation in the first post, just rearranged a bit and with different labels.
 
  • #8
mfb said:
Should work, as long as the n-times repeated logarithm is well-defined. How I would try to prove it: First reduce everything to the natural logarithm, then replace ln(xi) by yi - you don't use the value without logarithm anyway. Then invert both sides of the equation (1/...), and interpret it as derivative. Induction can be useful.

I'm sorry, I'm not very good at maths. How can expressions like ##log^nx## be reduced to the natural logarithm? If we apply the change-of-base formula to the deepest logarithm, then keep applying it until we reach the outermost logarithm, wouldn't that make the problem very clumsy?
 
  • #9
All the inner changes should be negligible within the given approximation.
 
  • #10
@stevendaryl: That equation is not at all identical to mine. In your equation, the requirement of the approximation is that x1 and x2 should be close. The requirement of my approximation is that the log applied n times to x2 should be close to the log applied n times to x1. Even for numbers that differ by a lot, if we keep applying the log to both of them, there is a good chance that the resulting values will be close for some value of n, and then my approximation can be applied.

My approximation gives somewhat closer values.
 
  • #11
That's what I found with the numerical experiment: any starting x approaches the same complex value.

Hence the ratio of the two will approach 1.
 
  • #12
mfb said:
All the inner changes should be negligible within the given approximation.
I didn't understand that.
 
  • #13
stevendaryl said:
So here's how I would go about it. Let's just assume that the base is [itex]e[/itex]. Let's assume that [itex]x_1 = x[/itex] and [itex]x_2 = x + \delta x[/itex]. Let's assume that [itex]x[/itex] is so big that [itex]x > 0, log(x) > 0, log(log(x)) > 0, ...log^n(x) > 0[/itex]. (Up to a maximum value of [itex]n[/itex]; when [itex]n[/itex] gets really large, [itex]log^n(x)[/itex] becomes negative.)

Then to first order, ##log(x+\delta x) = log(x(1+ \frac{\delta x}{x})) = log(x) + log(1+ \frac{\delta x}{x}) \approx log(x) + \frac{\delta x}{x}##
Then ##log^2(x+\delta x) \approx log(log(x) + \frac{\delta x}{x}) \approx log(log(x)) + \frac{\delta x}{x log(x)}##
Etc.

So [itex]log^j(x+\delta x) \approx log^j(x) + \frac{\delta x}{x \cdot log(x) \cdot log(log(x)) \cdot ... log^{j-1}(x)}[/itex]

I think your fact follows from that.
Try computing ##\prod_{j=1}^n\log^jx_1##. My approximation will give somewhat closer values. I don't think my approximation follows from your last equation.
 
  • #14
Kumar8434 said:
@stevendaryl: That equation is not at all identical to mine.

It's the same: What I wrote was:

[itex]log^j(x + \delta x) \approx log^j(x) + \frac{\delta x}{x \cdot log(x) \cdot ... \cdot log^{j-1}(x)}[/itex]

Rearranging gives:
[itex]\frac{\frac{\delta x}{x}}{log^j(x + \delta x) - log^j(x)} \approx log(x) \cdot ... \cdot log^{j-1}(x)[/itex]

Since [itex]\frac{\delta x}{x} \approx log(x+\delta x) - log(x)[/itex], we have:

[itex]\frac{log(x+\delta x) - log(x)}{log^j(x + \delta x) - log^j(x)} \approx log(x) \cdot ... \cdot log^{j-1}(x) = \Pi_{k=1}^{k=(j-1)} log^k(x)[/itex]

Now if you let [itex]x_1 = x[/itex] and [itex]x_2 = x+ \delta x[/itex], and let [itex]j = n+1[/itex], it becomes:
[itex]\frac{log(x_2) - log(x_1)}{log^{n+1}(x_2) - log^{n+1}(x_1)} \approx \Pi_{k=1}^{k=n} log^k(x_1)[/itex], which is what you wrote.

So the two are the same in the limit as [itex]x_2 \rightarrow x_1[/itex].
 
  • #15
Kumar8434 said:
Try computing ##\prod_{j=1}^n\log^jx_1##. My approximation will give somewhat closer values. I don't think my approximation follows from your last equation.

They are the same, except that I'm using the additional approximation [itex]log(x_2) - log(x_1) \approx \frac{x_2 - x_1}{x_1}[/itex]
 
  • #16
So the approximation works if ##\frac{x_1}{x_2} \approx 1##, because ##\log(1+ \frac{\delta x}{x}) \approx \frac{\delta x}{x}##.
Isn't this a little limited?
 
  • #17
MAGNIBORO said:
So the approximation works if ##\frac{x_1}{x_2} \approx 1##, because ##\log(1+ \frac{\delta x}{x}) \approx \frac{\delta x}{x}##.
Isn't this a little limited?

Yes, it's limited to the case where [itex]x_2[/itex] is not very far from [itex]x_1[/itex]. The formula obviously breaks down if [itex]x_2 = 1,000,000,000[/itex] and [itex]x_1 = 0.0000001[/itex].
 
  • #18
Isn't this just a trivial application of the mean value theorem?
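For what it's worth, a sketch of that view (base ##e##, assuming all the iterated logs involved are positive): apply the mean value theorem to ##f(y)=\ln^n y## on the interval between ##\ln x_1## and ##\ln x_2##,
$$\frac{\ln^{n+1}x_2-\ln^{n+1}x_1}{\ln x_2-\ln x_1}=f'(\xi)=\frac{1}{\xi\,\ln\xi\cdots\ln^{n-1}\xi}\quad\text{for some }\xi\text{ between }\ln x_1\text{ and }\ln x_2.$$
When ##f'## varies little over that interval, the right-hand side is approximately ##f'(\ln x_1)=1/\prod_{i=1}^{n}\ln^i x_1##, and inverting gives the formula in post #1; this only requires the logarithms (and their iterates) to be close, which is weaker than requiring ##x_2\approx x_1## itself.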
 
  • #19
Kumar8434 said:
I didn't understand that.
$$\log_a (\log_a x) = \frac 1 {\ln a} \ln \left(\frac 1 {\ln a} \ln(x)\right) = \frac 1 {\ln a} \left( \ln(\ln(x)) - \ln (\ln a)\right) \approx \frac 1 {\ln a} \ln(\ln(x))$$
for ##x \gg a^a##. Same for longer log chains.
 
  • #20
stevendaryl said:
They are the same, except that I'm using the additional approximation [itex]log(x_2) - log(x_1) \approx \frac{x_2 - x_1}{x_1}[/itex]
The reason you can't use this additional approximation in your proof is that it requires x2 to be close to x1. Try using x2 = 10^7 and x1 = 10^6 in ##\frac{x_2-x_1}{x_1}\approx \ln x_2-\ln x_1## and you won't get even a single digit right. But you can use my approximation for x2 = 10^7 and x1 = 10^6, because the IMPORTANT thing there is that ##\ln^nx_2## should be close to ##\ln^nx_1##, which is true for n = 3.
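For reference, the numbers in this example work out roughly as follows (computed with natural logs; not part of the original post):
$$\frac{x_2-x_1}{x_1}=9,\qquad \ln x_2-\ln x_1=\ln 10\approx 2.30,\qquad \frac{\ln^3 x_2}{\ln^3 x_1}\approx\frac{1.02}{0.97}\approx 1.06.$$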
 
  • #21
MAGNIBORO said:
So the approximation works if ##\frac{x_1}{x_2} \approx 1##, because ##\log(1+ \frac{\delta x}{x}) \approx \frac{\delta x}{x}##.
Isn't this a little limited?

No, that's the requirement of stevendaryl's approximation. My approximation can work even if x2 is 5 times x1. I've written the requirement in my first post.
 
  • #22
Kumar8434 said:
The reason you can't use this additional approximation in your proof is that it requires x2 to be close to x1.

It depends on what it is you're trying to prove. What I sketched a proof of is that your approximation is good to order [itex]\delta x = x_2 - x_1[/itex], and to that order, your approximation is the same as mine. So what, exactly, are you wanting a proof of? You want to know the details of how many decimal places of accuracy you get?
 
  • #23
stevendaryl said:
It depends on what it is you're trying to prove. What I sketched a proof of is that your approximation is good to order [itex]\delta x = x_2 - x_1[/itex], and to that order, your approximation is the same as mine. So what, exactly, are you wanting a proof of? You want to know the details of how many decimal places of accuracy you get?
You've proved my approximation in the case where ##x_2/x_1## is close to one. But my approximation works even when that ratio is 5, and sometimes it gives close values when that ratio is 10. You certainly can't use ##\frac{x_2-x_1}{x_1}\approx \ln x_2-\ln x_1## when that ratio is 10: you won't even get a single digit right in that approximation, let alone talk about how many decimal places you would get right.
 
  • #24
Kumar8434 said:
You've proved my approximation in the case where ##x_2/x_1## is close to one. But my approximation works even when that ratio is 5

I'm asking you what it is that you want to prove about the approximation. You can't really just say "it works when [itex]x_2/x_1[/itex] is 5". What do you mean by saying that it "works"? Does it give the exact answer? If not, how far off is it? I don't mean for a particular [itex]x_2[/itex] and [itex]x_1[/itex], but in general?

You asked for a proof that your approximation works. I gave a proof, for one particular definition of "works". You weren't happy, but you won't say what exactly you are looking for.
 
  • #25
stevendaryl said:
So what, exactly, are you wanting a proof of? You want to know the details of how many decimal places of accuracy you get?
Are there some mathematical tools for checking accuracy? If you're not joking, then would you please tell me what the accuracy of the formula in my first post is, and under what conditions it works?
 
  • #26
stevendaryl said:
I'm asking you what it is that you want to prove about the approximation. You can't really just say "it works when [itex]x_2/x_1[/itex] is 5". What do you mean by saying that it "works"? Does it give the exact answer? If not, how far off is it? I don't mean for a particular [itex]x_2[/itex] and [itex]x_1[/itex], but in general?

You asked for a proof that your approximation works. I gave a proof, for one particular definition of "works". You weren't happy, but you won't say what exactly you are looking for.
It gives at least one digit correct even when x2 is 5 times x1, which isn't true for ##\frac{x_2-x_1}{x_1}\approx \ln x_2-\ln x_1##.
 
  • #27
Kumar8434 said:
It gives at least one digit correct even when x2 is 5 times x1, which isn't true for ##\frac{x_2-x_1}{x_1}\approx \ln x_2-\ln x_1##.

That's interesting, but so what? You asked for a proof. A proof of what? What is it that you want to prove? You have to give a mathematically precise statement in order for a proof to be possible.
 
  • #28
Kumar8434 said:
Are there some mathematical tools for checking accuracy? If you're not joking, then would you please tell me what the accuracy of the formula in my first post is, and under what conditions it works?

What's easy to prove is that it gives the correct answer in the limit as [itex]x_2 \rightarrow x_1[/itex] to first order in [itex]x_2 - x_1[/itex]. To get a more precise statement of how many decimal places of accuracy you get in what circumstances would be an awful lot of work. That's a challenge for you. You can't really ask other people to do that work for you.
 
  • #29
stevendaryl said:
That's interesting, but so what? You asked for a proof. A proof of what? What is it that you want to prove? You have to give a mathematically precise statement in order for a proof to be possible.
Start with ##log^nx_2/log^nx_1\approx 1## and prove my approximation. DON'T start with ##x_2=x_1+\delta##, that won't do it.
 
  • #30
stevendaryl said:
What's easy to prove is that it gives the correct answer in the limit as [itex]x_2 \rightarrow x_1[/itex] to first order in [itex]x_2 - x_1[/itex]. To get a more precise statement of how many decimal places of accuracy you get in what circumstances would be an awful lot of work. That's a challenge for you. You can't really ask other people to do that work for you.
I've already proved under what circumstances it works; that's why I've written the condition in my first post. But, as I've written, to get to the condition and to the formula I've used some mathematics that may not be mainstream (maybe crackpot), and writing it here would be against this site's rules. As for how many decimal places we would get, I'm not aware of any such thing in mathematics. That's why I asked you whether any such tool exists.
 
  • #31
Kumar8434 said:
I've already proved under what circumstances it works; that's why I've written the condition in my first post.

Your first post asks

1. Is it correct? 2. How can it be proved?

which to me seems to be asking for a proof, not giving a proof.
 
  • #32
Okay, I am officially done with this thread.
 
  • #33
stevendaryl said:
Your first post asks
which to me seems to be asking for a proof, not giving a proof.
I can't give that proof here because it is not allowed here.
 
  • #34
Kumar8434 said:
I can't give that proof here because it is not allowed here.
What?

Did you try the approach I suggested in post 3? It is not too different from the one stevendaryl posted, but it allows larger differences between the numbers.
 
  • #35
Kumar8434 said:
I can't give that proof here because it is not allowed here.

As @mfb pointed out, your formula and mine are actually IDENTICAL.

I have a formula saying: [itex]log^{n+1}(x_2) \approx log^{n+1}(x_1) + \frac{(x_2 - x_1)}{x_1} \frac{1}{\Pi_{j=1}^n log^j(x_1)}[/itex]

To get your formula, let [itex]x_1 = log(y_1)[/itex] and let [itex]x_2 = log(y_2)[/itex]. This gives:

[itex]log^{n+1}(log(y_2)) \approx log^{n+1}(log(y_1)) + \frac{(log(y_2) - log(y_1))}{log(y_1)} \frac{1}{\Pi_{j=1}^n log^j(log(y_1))}[/itex]

Which can be written as:

[itex]log^{n+2}(y_2) \approx log^{n+2}(y_1) + (log(y_2) - log(y_1))\frac{1}{log(y_1) \Pi_{j=1}^n log^{j+1}(y_1)}[/itex]
[itex]= log^{n+2}(y_1) + (log(y_2) - log(y_1))\frac{1}{\Pi_{j=1}^{n+1} log^{j}(y_1)}[/itex]

This is your approximation, rearranged, with the change of variables
[itex]x_2 \Rightarrow y_2[/itex]
[itex]x_1 \Rightarrow y_1[/itex]
[itex]n \Rightarrow n+1[/itex]
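For anyone who wants to see the two forms side by side numerically, here is a small MATLAB sketch (not from the original posts; ##x_1=10^6##, ##x_2=10^7##, ##n=3## are illustrative). It compares the directly computed ##log^{n+1}(x_2)## with the rearranged form from post #1, which uses ##log(x_2)-log(x_1)##, and with the version that additionally replaces that difference by ##(x_2-x_1)/x_1##:
Matlab:
% Directly computed iterated log of x2 versus the two approximate forms above.
x1 = 1e6; x2 = 1e7; n = 3;

it = zeros(1, n+1); t = x1;
for k = 1:n+1
    t = log(t); it(k) = t;   % it(k) = log applied k times to x1
end

s = x2;
for k = 1:n+1
    s = log(s);              % s = log^(n+1)(x2), computed directly
end

viaLogDiff = it(n+1) + (log(x2) - log(x1)) / prod(it(1:n));  % post #1, rearranged
viaXDiff   = it(n+1) + ((x2 - x1) / x1)    / prod(it(1:n));  % with the extra (x2-x1)/x1 step
fprintf('direct %.6f   via log difference %.6f   via (x2-x1)/x1 %.6f\n', ...
        s, viaLogDiff, viaXDiff);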
 

1. How do I determine the accuracy of an approximation?

The accuracy of an approximation can be determined by calculating the absolute or relative error. Absolute error is the difference between the actual value and the approximated value, while relative error is the absolute error divided by the actual value. The smaller the error, the more accurate the approximation.

2. What is the difference between a numerical and analytical approximation?

A numerical approximation involves using numerical methods, such as algorithms or simulations, to estimate a value. An analytical approximation, on the other hand, uses mathematical equations to find an approximate solution. Numerical approximations are often more accurate but require more computational resources, while analytical approximations are faster but may be less accurate.

3. How do I know if an approximation is valid?

An approximation is valid if it satisfies the conditions and assumptions set by the method used to obtain it. For example, if using a Taylor series approximation, the function must be differentiable and the series must converge. It is important to check the validity of an approximation to ensure its accuracy and reliability.

4. Can I use multiple approximations to improve accuracy?

Yes, using multiple approximations can improve accuracy. This is known as the method of successive approximations, where the result of one approximation is used as the input for the next approximation. However, it is important to ensure that each individual approximation is valid to avoid compounding errors.

5. How do I choose the best approximation method?

The best approximation method depends on the specific problem and the desired level of accuracy. Some methods may be more suitable for certain types of functions or equations. It is important to consider the assumptions and limitations of each method and choose the one that is most appropriate for the given situation.
