Find ##f(x)## such that ##f(f(x))=\log_a x##

In summary, the conversation was about extending the definition of superlogarithms by finding a function that satisfies the equation ##f(f(x)) = \log_a x##. The original poster proposed using functions such as ##f(x) = \frac{x}{a^{\frac{1}{2}}}##, ##g(x) = \frac{x}{a^{\frac{1}{3}}}##, etc., and counting the number of times these functions can be applied without getting a number smaller than 1. This can then be used to define ##slog_a x## as ##k_1 + \frac{k_2}{2} + \frac{k_3}{3} + \dots##, where ##k_i## is the number of times the ##i##-th such function can be applied without the result dropping below 1.
Kumar8434
I was thinking about extending the definition of superlogarithms. I think maybe that problem can be solved if we find a function ##f## such that ##f(f(x))=\log_a x##. Is there some way to find such a function? Maybe the Taylor series could be of some help. Or is there some method to find a function such that ##f(f(x))## is at least approximately equal to ##\log x##?

Here's why I think it may help in extending the definition of super-logarithms:
##slog_a x## is defined as the number of times the logarithm with base ##a## must be applied to ##x## to get to 1. But this definition fails when ##x## is not of the form ##^n a##. Here we're applying ##f(x)=\log_a x## repeatedly to ##x## until we get 1.

If we define ##\log_a x## in a similar manner, as the number of times ##x## is divided by ##a## to get to 1, then this definition fails when ##x## is not of the form ##a^n##. In this case, we're applying ##f(x)=\frac{x}{a}## repeatedly to ##x## until we get 1. This definition of ##\log_a x## can be extended by looking for functions ##f, g, h, \dots## such that ##f(f(x))=\frac{x}{a}##, ##g(g(g(x)))=\frac{x}{a}##, etc.

These functions are easy to find: ##f(x)=\frac{x}{a^{\frac{1}{2}}}##, ##g(x)=\frac{x}{a^{\frac{1}{3}}}##, etc.

Here's how these functions help in extending this definition of ##\log_ax##.
Suppose we want to find ##\log_2 20##. First keep dividing 20 by 2, i.e. keep applying ##f(x)=\frac{x}{2}## to 20. After four divisions, we get 1.25. We can't divide further by 2, because then the number would get smaller than 1.

Now, keep applying ##f(x)=\frac{x}{\sqrt{2}}## to 1.25 until we again reach a situation in which further division takes us below 1. Let the number of times we were able to divide by ##\sqrt{2}## be ##k_1##. After that, start dividing the resulting number by ##2^{\frac{1}{3}}##. Let the number of times we could divide by ##2^{\frac{1}{3}}## be ##k_2##. Repeat this process to as much accuracy as you like. Then,

$$\log_2 20=4+\frac{k_1}{2}+\frac{k_2}{3}+\dots$$
So, suppose we can get functions ##f, g, h, \dots## such that ##f(f(x))=\log_a x##, ##g(g(g(x)))=\log_a x##, ##h(h(h(h(x))))=\log_a x##, etc., and count the number of times these functions can be applied without getting a number smaller than 1. Then I think ##slog_a x## can be calculated by using $$slog_a x=k_1+\frac{k_2}{2}+\frac{k_3}{3}+\dots$$ where ##k_1## is the number of times the logarithm with base ##a## can be applied to ##x## without the result becoming smaller than 1, ##k_2## is the number of times the function ##f## (with ##f(f(x))=\log_a x##) can then be applied to the number obtained after those ##k_1## logarithms, again without getting a number less than 1, and so on.
Please tell me if I'm wrong.
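The successive-division procedure above is easy to try numerically. Here is a minimal Python sketch (the name `approx_log` and the `depth` cutoff are my own choices, not part of the post). Since stage ##n## stops as soon as the remainder drops below ##a^{1/n}##, the accumulated sum undershoots ##\log_a x## by less than ##1/\text{depth}##:

```python
import math

def approx_log(a, x, depth=6):
    """Approximate log_a(x) by repeatedly dividing x by the successive
    roots a^(1/1), a^(1/2), a^(1/3), ... and accumulating k_n / n,
    where k_n is how many divisions by a^(1/n) keep the quotient >= 1."""
    total = 0.0
    for n in range(1, depth + 1):
        root = a ** (1.0 / n)
        k = 0
        # divide by a^(1/n) as long as the quotient stays at least 1
        while x / root >= 1.0:
            x /= root
            k += 1
        total += k / n
    return total
```

For example, `approx_log(2, 8)` gives exactly 3, and `approx_log(2, 20, depth=200)` agrees with ##\log_2 20 \approx 4.32## to about two decimal places.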

Stephen Tashi
Kumar8434 said:
$$\log_2 20=4+\frac{k_1}{2}+\frac{k_2}{3}+\dots$$

What would you get for ##\log_{4} 6## ? ##\ 1 + 1/4##?

then I think ##slog_ax## can be calculated by using ## slog_ax=k_1+\frac{k_2}{2}+ \frac{k_3}{3}+...##.

You must mean that ##slog_ax## can be defined by that process. (It can't be "calculated" until it is defined.)

Do you want the definition to have the consequence that ##slog_a(xy) = slog_a(x) + slog_a(y)## ?

So, if we can get functions f, g, h,... such that ##f(f(x))=\log_a x##, ...

To me, the question of when, for a given ##g(x)##, we can find a function ##f## such that ##f(f(x)) = g(x)## (exactly or approximately) is much more interesting than the topic of superlogarithms. I suspect people have investigated the question, but I don't know how much progress has been made.

Stephen Tashi said:
What would you get for ##\log_{4} 6## ? ##\ 1 + 1/4##?
You must mean that ##slog_ax## can be defined by that process. (It can't be "calculated" until it is defined.)

Do you want the definition to have the consequence that ##slog_a(xy) = slog_a(x) + slog_a(y)## ?
To me, the question of when, for a given ##g(x)##, we can find a function ##f## such that ##f(f(x)) = g(x)## (exactly or approximately) is much more interesting than the topic of superlogarithms. I suspect people have investigated the question, but I don't know how much progress has been made.
"What would you get for ##\log_{4} 6## ? ##\ 1 + 1/4##?": So what's your question here? ##\log_{4}6\approx 1.29##, and ##1 + 1/4## is pretty close to that. And you'll get closer if you calculate more ##k_i##'s. I did some rough calculations on my calculator. ##\frac{1.5}{4^{0.25}}##, i.e. about 1.06, can be divided by ##4^{\frac{1}{600}}## approximately 25 times (I just divided it once by ##4^{25/600}## and the resulting number was very close to 1). So ##\log_4 6\approx 1+1/4+25/600\approx 1.29##, which is very close to the actual value.
And, I already figured out how to calculate approximate functions ##f(x)## such that ##f(f(x))=g(x)##, after no one replied on this thread for days. Getting approximate functions is pretty simple. Read this post of mine to see how: http://math.stackexchange.com/quest...-to-extend-the-definition-of-super-logarithms
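The arithmetic in this post can be checked directly; a quick Python sanity check (the counts 1, 1/4, and 25/600 are taken from the post above):

```python
import math

# one division by 4, one by 4^(1/4), and about 25 by 4^(1/600)
approx = 1 + 1/4 + 25/600
print(approx)           # partial sum from the post
print(math.log(6, 4))   # the exact value of log_4(6)
```

The two values agree to about three decimal places.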

Kumar8434 said:
"What would you get for ##\log_{4} 6## ? ##\ 1 + 1/4##?": So what's your question here?
My question is as stated: what would you get for ##\log_{4}6##? I ask this because your example has the ambiguous phrase "the resulting number":

Kumar8434 said:
Let the number of times we were able to divide by ##\sqrt{2}## be ##k_1##. After that, start dividing the resulting number by ##2^{\frac{1}{3}}##.

You mentioned several numbers; which number is "the resulting number"?

Stephen Tashi said:
My question is as-stated: What would you get for ##\log_{4}6##? I ask this because your example has the ambiguous phrase "the resulting number":
You mentioned several numbers, which number is "the resulting number"?
The resulting number is the number we obtain after the divisions. For example, after dividing 20 by 2 four times, we get 1.25 (further division would take us below 1), so 1.25 is the resulting number for the next step. After this step, we start dividing 1.25 by ##\sqrt{2}## repeatedly to get the next resulting number. Then we take this resulting number and start dividing it by ##2^{1/3}##.

jostpuur
If you know some programming language, you could write an iteration that seeks to minimize the functional

$$\mathcal{F}(f) = \int\limits_{\epsilon}^R \big(f\big(f(x)\big) - \ln(x)\big)^2dx$$

numerically, approximating the function ##f## as some array. The result might tell something.
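A crude, stdlib-only version of this idea as a sketch (the grid size, the interval, the perturbation size, and the starting guess below are all arbitrary choices of mine): store ##f## on a grid, evaluate ##f(f(x))## by linear interpolation, and accept only random perturbations that decrease the squared error.

```python
import math, random

def fit_half_log(eps=0.5, R=4.0, n=40, iters=3000, seed=0):
    """Search for f with f(f(x)) ~ ln(x) on [eps, R] by minimizing the
    grid sum of (f(f(x)) - ln x)^2 via random local perturbations."""
    rng = random.Random(seed)
    xs = [eps + i * (R - eps) / (n - 1) for i in range(n)]

    def interp(vals, x):
        # piecewise-linear interpolation of grid values (extrapolates at the ends)
        t = (x - eps) / (R - eps) * (n - 1)
        i = min(max(int(t), 0), n - 2)
        w = t - i
        return vals[i] * (1 - w) + vals[i + 1] * w

    def objective(vals):
        return sum((interp(vals, interp(vals, x)) - math.log(x)) ** 2
                   for x in xs)

    f = [(x + math.log(x)) / 2 for x in xs]  # arbitrary starting guess
    best = objective(f)
    start = best
    for _ in range(iters):
        i = rng.randrange(n)
        old = f[i]
        f[i] += rng.uniform(-0.05, 0.05)
        new = objective(f)
        if new < best:
            best = new          # keep only improving perturbations
        else:
            f[i] = old          # revert worsening ones
    return start, best
```

By construction the objective can only decrease; whether the minimizer approaches a genuine functional square root of ##\ln## is exactly the open question of the thread.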

Kumar8434 said:
The resulting number is the number we obtain after the divisions.

When we are computing ##\log_{2} 4##, we divide 4 by 2 until the division produces a number less than 2, so we divide until the division produces 1. Is 1 the "resulting number"?

Stephen Tashi said:
When we are computing ##\log_{2} 4##, we divide 4 by 2 until the division produces a number less than 2, so we divide until the division produces 1. Is 1 the "resulting number"?
No. We divide 4 by 2 repeatedly until the result becomes less than or equal to 1. If it becomes equal to 1, that's great and we stop there. The objective is to get to 1, and if that's not possible by repeated divisions, then we try to get as close to 1 as possible. If dividing by 2 repeatedly takes us below 1, then we go one step back to the last quotient that was greater than 1, and that's the 'resulting number', i.e. the number which we now have to divide by ##\sqrt{2}## to get as close to 1 as possible, and then repeat the steps. Come on, man. This is simple logic. Why are you arguing over this? To make you understand, I'm using this fact:
If a number ##x## is divided by the ##n##th root of ##a## a total of ##k## times such that the resulting quotient is very close to 1, then ##\log_a x\approx k/n##. The method I've given here to calculate the logarithm is just a slightly different version of this. I hope you understand now. Are you trolling?

jostpuur said:
If you know some programming language, you could write an iteration that seeks to minimize the function

$$\mathcal{F}(f) = \int\limits_{\epsilon}^R \big(f\big(f(x)\big) - \ln(x)\big)^2dx$$

numerically, approximating the function ##f## as some array. The result might tell something.
No, I don't know any programming language. But I figured out how to get approximate functional ##n##th roots, i.e. functions which, when applied ##n## times to ##x##, give the required function ##f(x)##. And it seemed to work: I could convincingly extend the definition of super-logarithms by that. Here's my post on stackexchange: http://math.stackexchange.com/quest...-to-extend-the-definition-of-super-logarithms

Kumar8434 said:
This is simple logic. Why are you arguing over this?
You are relying on intuition, not logic. Nobody can argue anything about your algorithm until you state a specific algorithm. You said you don't use a programming language, so that would explain why you aren't used to describing algorithms in detail. I give you credit for trying to explain the algorithm by examples, but keep in mind that by relying on intuition and examples, you are appealing to a very limited audience of mathematicians. You only appeal to people who are sympathetic enough to go through your examples and do the labor of defining your algorithm for themselves - and then try to formulate their own proofs for your claims.

So far, you haven't gotten an enthusiastic response. On that site, you are also giving intuitive arguments and numerical examples. You ask if you have made a new discovery. In modern mathematics, for something to be a mathematical discovery, it would need to be precisely stated and backed up by proofs - not just illustrated by some numerical examples. I agree that it is possible to make "discoveries" in the sense of intuitive concepts and convincing examples. However, you shouldn't be surprised if that sort of discovery doesn't arouse much interest among mathematicians. They are used to hearing claims of such discoveries that don't pan out.

If you think that people who want precise descriptions and proofs are "arguing" with you, you are getting yourself cross-wise with the community of people who can do mathematics. You need to set yourself apart from the crackpots.

I think your intuitive ideas have some merit, but it will be very difficult for you to develop them into mathematical discoveries in the short term. In order to convert your intuitions into a mathematical discovery, you'll have to gain experience in stating definitions precisely and proving theorems.

For example, in your stackexchange post, you propose (by example) a method for finding a function ##f(x)## such that ##f(f(x)) \approx \log(x)## near ##x = a## by using a Taylor series approximation. You assume ##f(x)## is a polynomial function ##p(x)## of some (finite) degree. Then you compare coefficients of ##p(x)## with the corresponding coefficients in the Taylor series of ##\log(x)## about ##x = a##. That's a reasonable approach, but it falls short of specifying an algorithm that defines ##slog(a)##. For example, we don't know what degree polynomial we are required to use in computing the ##f## used in the algorithm for ##slog(a)##. It may be that defining ##slog(a)## will require defining it in terms of a limit of results produced by an infinite sequence of algorithms instead of the result of one specific algorithm that uses polynomials of specific degrees.

Stephen Tashi said:
You are relying on intuition, not logic. Nobody can argue anything about your algorithm until you state a specific algorithm. [...]
I agree with you. But superlogarithms don't have many nice properties like logarithms do, so I have to rely on intuition there. Still, the results that I've got are convincing. For example, if you evaluate an approximate functional square root of ##\log_a x## by my method and then apply that function once to ##a^a##, the answer you get is very close to 1. Similarly, when the functional cube root of ##\log_a x## is applied once to ##a^{a^a}##, the answer is again very close to 1.

Stephen Tashi said:
You are relying on intuition, not logic. Nobody can argue anything about your algorithm until you state a specific algorithm. [...]
Sorry, I made a slight mistake in my #12 post. I can't edit it now. If you're still interested, then the actual fact is:
If you evaluate an approximate functional square root of ##\log_{a^a}x## by my method and then apply that function once to ##a##, the answer you get is very close to 1. Similarly, when the functional cube root of ##\log_{a^{a^a}}x## is applied once to ##a##, the answer is again very close to 1.

Stephen Tashi said:
You are relying on intuition, not logic. Nobody can argue anything about your algorithm until you state a specific algorithm. [...]
I don't know if it can be done by programming. But could you check one thing for me, if it's possible with programming and it's not much work?
First write a program which accepts polynomials (##f(x)##) of any degree with variable coefficients.
Say we input a polynomial of degree ##n## in it.
Then, that program computes ##f(f(x))##. I don't know if this can be proved, but I think that if none of the coefficients of ##f(x)## are zero, then ##f(f(x))## must have distinct terms with all the powers of ##x## ranging from ##x^{n^2}## down to ##x^0##.
Now, input any number ##a## such that ##a## is of the form ##k^k##, where ##k## is a natural number. The program then computes the Taylor series of ##\log_a x## around ##x=k##.
Now the program compares the coefficients of the terms containing ##x^0, x^1, x^2, \dots, x^n## in ##f(f(x))## with the terms containing the same powers of ##x## in the Taylor series, solves for the coefficients, and hence gets ##f(x)##.
Now, the program substitutes ##x=k## in ##f(x)## and gets the answer. I just need to know whether this answer converges towards 1 as we input polynomials of higher and higher degree. I could do this by hand, but it's the equation-solving part that gets messy when I assume ##f(x)## of higher degree.
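The Taylor-series step of this proposed program is straightforward on its own. A hedged Python sketch of just that part (the function names are mine): using ##\frac{d}{dx}\log_a x = \frac{1}{x\ln a}##, the coefficients about ##x=k## are ##c_0=\log_a k## and ##c_m = \frac{(-1)^{m+1}}{m\,k^m \ln a}## for ##m\ge 1##.

```python
import math

def log_taylor_coeffs(a, k, n):
    """Taylor coefficients c_0..c_n of log_a(x) about x = k."""
    coeffs = [math.log(k, a)]
    for m in range(1, n + 1):
        # m-th derivative of log_a(x) is (-1)^(m-1) (m-1)! / (x^m ln a)
        coeffs.append((-1) ** (m + 1) / (m * k ** m * math.log(a)))
    return coeffs

def taylor_eval(coeffs, k, x):
    """Evaluate the truncated series  sum_m c_m (x - k)^m."""
    return sum(c * (x - k) ** m for m, c in enumerate(coeffs))
```

For example, with ##a = 4## and ##k = 2## (so that ##a = k^k##), a 20-term series reproduces ##\log_4 x## near ##x = 2## to high accuracy.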

Kumar8434 said:
I don't know if it can be done by programming. But could you check one thing for me, if it's possible with programming and it's not much work?
There are probably people who can do that task "without much work" because they are familiar with Wolfram or some other computer algebra system (CAS) that has libraries to do the component tasks. It would take a lot of work to do it "from scratch" in a language such as C++, but it would be educational. There are libraries for C++ that do those tasks - but then we have the problem of learning how to use the libraries. I'll think about the project, but I'm not promising to do it.


Kumar8434 said:
Is there some way to find such a function?
Such a task is also a "functional relations" problem. [I used to be an expert in those, but that was a long time ago ...]

Why don't you start with a simpler but very similar problem, in terms of solving technique: "Find all real continuous functions such that ##f(f(x)) = x##". There is a good method, but you have to be good at calculus. Since I think you have a lot of potential, I won't give you a hint for the method yet, only the expected answer, which is:
##f(x) = x## or ##f(x) = c/x##.

Any ideas?

If you solve this, you can apply the method to your problem as well.

Stavros Kiri said:
Such a task is also a "functional relations" problem. [I used to be an expert in those, but long time ago ...]

Why don't you start with a simpler but very similar problem, in terms of solving technique: "Find all real continuous functions such that ##f(f(x)) = x##". There is a good method, but you have to be good at calculus. Since I think you have a lot of potential, I won't give you a hint for the method yet, only the expected answer, which is:
f(x) = x or f(x) = c/x .

Any ideas?

If you solve this, you can apply the method to your problem as well.
I could guess the solutions but couldn't get anywhere near deriving them. ##f(x)=c-x## also satisfies that.
BTW, a dumb approach:
Let ##f(x)=ax+b##
##f(f(x))=a^2x+ab+b##
Also, ##f(f(x))=x##
So, after comparing the coefficients of ##x## and the constant terms, I could get:
##a=1, b=0##
So, ##f(x)=x##.
##f(x)=-x## also satisfies ##f(f(x))=x##.
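All of the candidate involutions mentioned so far can be checked numerically; a quick Python sketch (##c = 5## and the test points are arbitrary choices):

```python
c = 5.0
candidates = [
    lambda x: x,       # trivial solution
    lambda x: -x,      # special case of c - x with c = 0
    lambda x: c - x,
    lambda x: c / x,
]
# each candidate should satisfy f(f(x)) = x at every test point
for f in candidates:
    for x in (0.5, 1.7, 3.0):
        assert abs(f(f(x)) - x) < 1e-12
```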

Kumar8434 said:
Now the program compares the coefficients of the terms containing ##x^0, x^1, x^2,...x^n## in ##f(f(x))## with the terms containing the same powers of ##x## in the Taylor series and gets the coefficients and hence gets ##f(x)##.

It might not be that simple. For example if ##f(x) = Ax^2 + Bx + C##, then

##f(f(x)) = A^3 x^4 + 2A^2B x^3 + (AB^2 + 2A^2C + AB)x^2 + (2ABC + B^2)x + (AC^2 + BC + C)##.

If we compare this with the Taylor series of a given function ##g(x) = \sum_{i=0}^\infty c_i x^i##, the implied simultaneous equations are:

1) ##AC^2 + BC + C = c_0 ##
2) ##2ABC + B^2 = c_1##
3) ## AB^2 + 2A^2C + AB = c_2##
4) ## 2A^2B = c_3##
5) ## A^3 = c_4##.

That's 5 equations in the 3 unknowns ##A, B, C##. The system of equations may not be consistent.
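The expansion above can be verified mechanically. A small stdlib-only Python sketch (coefficient lists are lowest-degree first, and the sample values ##A, B, C = 0.5, -1, 2## are arbitrary); note that the ##x^2## coefficient comes out as ##AB^2 + 2A^2C + AB##:

```python
def poly_mul(p, q):
    """Product of two polynomials given as coefficient lists, lowest degree first."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_compose(p, q):
    """Coefficients of p(q(x)), i.e. the sum of p[m] * q(x)^m."""
    out = [0.0]
    power = [1.0]                      # q(x)^m, starting at m = 0
    for coeff in p:
        term = [coeff * v for v in power]
        out += [0.0] * (len(term) - len(out))
        for i, v in enumerate(term):
            out[i] += v
        power = poly_mul(power, q)
    return out

A, B, C = 0.5, -1.0, 2.0
f = [C, B, A]                          # f(x) = A x^2 + B x + C
ff = poly_compose(f, f)

expected = [
    A * C**2 + B * C + C,              # x^0
    2 * A * B * C + B**2,              # x^1
    A * B**2 + 2 * A**2 * C + A * B,   # x^2
    2 * A**2 * B,                      # x^3
    A**3,                              # x^4
]
assert all(abs(g - w) < 1e-12 for g, w in zip(ff, expected))
```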

Kumar8434 said:
So, after comparing the coefficients of ##x## and the constant terms, I could get:
##a=1, b=0##
So, ##f(x)=x##.
Also, ##f(x)=-x## also satisfies ##f(f(x))=x##
Not a dumb approach, but not general enough: it only finds linear (first-order) solutions.
Solving the system ##a^2=1## and ##ab+b = 0## carefully will yield all such linear solutions (this indeed includes ##c-x##, and ##-x## is the special case ##c=0##).

How far have you gone with calculus? (Because I think you are very advanced for high school ... [I saw your last response on e-mail first, before editing, + have seen your profile before, from the dx thread ...])

Stephen Tashi said:
It might not be that simple. For example if ##f(x) = Ax^2 + Bx + C##, then

##f(f(x)) = A^3 x^4 + 2A^2B x^3 + (AB^2 + 2A^2C + AB)x^2 + (2ABC + B^2)x + (AC^2 + BC + C)##.

If we compare this with the Taylor series of a given function ##g(x) = \sum_{i=0}^\infty c_i x^i##, the implied simultaneous equations are:

1) ##AC^2 + BC + C = c_0 ##
2) ##2ABC + B^2 = c_1##
3) ## AB^2 + 2A^2C + AB = c_2##
4) ## 2A^2B = c_3##
5) ## A^3 = c_4##.

That's 5 equations in the 3 unknowns ##A, B, C##. The system of equations may not be consistent.
We don't have to compare the coefficients of ##x^3## and ##x^4##. We have three unknowns, so we only compare the coefficients of the constant term, of ##x##, and of ##x^2## with the Taylor series to get three equations. Even if the higher-power coefficients of ##f(f(x))## differ from the higher-power coefficients of the Taylor series, that won't be much of a problem, because the higher-power terms can be ignored. So, if you assume a polynomial of degree ##n##, then only compare the coefficients of ##x^0, x^1, x^2, \dots, x^n## of ##f(f(x))## with the Taylor series to get ##n+1## equations in ##n+1## unknowns. The higher-power terms don't matter.
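Solving the three lowest coefficient-matching equations is a standard nonlinear root-finding task. A hedged, stdlib-only sketch using Newton's method with a finite-difference Jacobian (all names here are mine; to keep the check honest, the targets ##c_0, c_1, c_2## are manufactured from a known ##A, B, C##, so a solution certainly exists):

```python
def residuals(p, c):
    """The three lowest coefficient equations for f(x) = A x^2 + B x + C."""
    A, B, C = p
    return [
        A * C**2 + B * C + C - c[0],                 # x^0
        2 * A * B * C + B**2 - c[1],                 # x^1
        A * B**2 + 2 * A**2 * C + A * B - c[2],      # x^2
    ]

def solve3(M, b):
    """3x3 linear solve by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(M, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            fac = M[r][col] / M[col][col]
            for k in range(col, 4):
                M[r][k] -= fac * M[col][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def newton(c, start, iters=30, h=1e-7):
    """Newton iteration with a forward-difference Jacobian."""
    p = list(start)
    for _ in range(iters):
        r = residuals(p, c)
        J = [[0.0] * 3 for _ in range(3)]
        for j in range(3):
            q = list(p)
            q[j] += h
            rq = residuals(q, c)
            for i in range(3):
                J[i][j] = (rq[i] - r[i]) / h
        step = solve3(J, [-v for v in r])
        p = [p[k] + step[k] for k in range(3)]
    return p

# manufacture targets from a known solution, then recover it from a nearby start
true = (0.7, -0.3, 1.2)
c = residuals(true, [0.0, 0.0, 0.0])
sol = newton(c, start=[0.68, -0.28, 1.18])
```

The recovered ##(A, B, C)## satisfies all three equations to high precision; for the actual Taylor coefficients of ##\log_a x##, though, there is no guarantee that a real solution exists, which is the point being made above.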

Stavros Kiri said:
Not a dumb approach, but not general enough: it only finds linear (first-order) solutions.
Solving the system ##a^2=1## and ##ab+b = 0## carefully will yield all such linear solutions (this indeed includes ##c-x##, and ##-x## is the special case ##c=0##).

How far have you gone with calculus? (Because I think you are very advanced for high school ... [I saw your last response on e-mail first, before editing, + have seen your profile before, from the dx thread ...])
I'm in my final year of high school. I've just learned some of the advanced concepts from the internet, that's all; I'm not at all advanced for high school. As for calculus, I currently know the two methods of integration taught in high school, substitution and integration by parts, and for differentiation I know the chain rule, the product rule, derivatives from first principles, etc., and whatever else is taught in high school. Is this knowledge enough to solve your problem?

Kumar8434 said:
I'm in high-school final year. I've just learned some of the advanced concepts from the internet. That's all. I'm not at all advanced for high school. As for calculus, I currently know the two methods of integration taught in high-school: Substitution and integration by parts and for differentiation, I know whatever is taught in high-school. Is this knowledge enough to solve your problem?
Diff. equ's ? [i.e. have you done Differential Equations?]
That's also the first big hint for people watching who have done them ...
[That would point the way towards perhaps finding non-numerical precise solutions to your problem too - but I didn't have time to actually work it out yet. For some similar problems, like the one I gave you, you find exact precise closed form solutions ...]
Next hint will be how you end up with a diff. equ ...

Kumar8434 said:
The higher power terms don't matter.

You can't be sure that they don't matter. Even solving eqs 1) through 3) wouldn't just be a matter of "comparing coefficients". It would involve solving nonlinear equations.

Stephen Tashi said:
You can't be sure that they don't matter. Even solving eqs 1) through 3) wouldn't just be a matter of "comparing coefficients". It would involve solving nonlinear equations.
Are those equations difficult even for a computer program? And, I already have a computer program to solve linear equations. I'd just have used it if the equations were linear. The equations are non-linear, that's what makes the problem messy.
And, I'm pretty sure the higher power terms don't matter. First, they surely don't matter at least in the Taylor series, if we're using the Taylor series around a point very close to ##k##, say around ##k-0.5##. And, I'm also convinced that the higher power terms are small even in case of ##f(f(x))##. For example, in the quadratic case, the coefficients of the higher power terms come out to be ##A^3## and ##A^2B##. I'm pretty sure these coefficients will be small enough to ignore the higher power terms. And, we could always work with small ##a## to ensure that the higher power terms don't matter.

Stavros Kiri said:
Diff. equ's ? [i.e. have you done Differential Equations?]
That's also the first big hint for people watching and have ...
[That would point the way towards perhaps finding non-numerical precise solutions to your problem too - but I didn't have time to actually work it out yet. For some similar problems, like the one I gave you, you find exact precise closed form solutions ...]
Next hint will be how you end up with a diff. equ ...
Yeah, I can do homogeneous, linear, and variable-separable ones so far.
I still don't know how to solve your problem. I differentiated both sides, then differentiated again. But the problem is that the argument of the function is itself a function; I don't know how to deal with that.

Stephen Tashi said:
You can't be sure that they don't matter. Even solving eqs 1) through 3) wouldn't just be a matter of "comparing coefficients". It would involve solving nonlinear equations.
Is there some tool in a C++ library which can solve any system of equations we give it, be it linear or non-linear, and, if it's not solvable, simply return "These equations are not solvable"?


Kumar8434 said:
Yeah, so far I can do homogeneous, linear and separable equations.
I still don't know how to solve your problem. I differentiated both sides, then differentiated again. But the problem is that the argument of the function is itself a function, and I don't know how to deal with that.

Did you use the differentiation rules correctly? There are two ways: 1. Differentiate both sides of f(f(x)) = x. 2. Go through the equivalent expression f(x) = f-1(x) and differentiate both sides of that.
Play adequately with both, or even try to combine them.
Kumar8434 said:
But the problem is that the argument of a function is a function. Don't know how to deal with that.
The trick is to correctly do an appropriate change of variable (e.g. y = f(x) ... [?] ...), before or while differentiating, and to apply all those differentiation rules carefully too ... (including the change-of-variable one) ...

Kumar8434 said:
Are those equations difficult even for a computer program?

I'd say it is not difficult to solve such nonlinear equations numerically.

It is also possible to solve such equations symbolically, although the theory behind that is a fairly "deep" subject (e.g. Gröbner bases or Wu's method of elimination).

Is there some tool in a C++ library that can solve any system of equations we give it, linear or nonlinear, and simply return "These equations are not solvable" if they can't be solved?

I don't know of any tool that versatile in C++ or any other programming system.

The term "the C++ library" can have various meanings. There are special mathematical libraries written for C++ such as "Symbolic C++" ( https://en.wikipedia.org/wiki/SymbolicC++ ). These libraries are not "standard" components of C++, but many are free software and simple to incorporate in C++ programs.

C++ has some "built-in" functions. There is also a library for C++ called "The Standard Template Library" which is, as the name suggests, a "standard" part of C++.
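On the numerical side, a plain Newton iteration in ordinary C++ is usually enough for small systems like these. Here is a minimal sketch with a hand-coded Jacobian and Cramer's rule; the 2x2 example system ##x^2+y^2=1,\ x=y## is a hypothetical stand-in, not the thread's actual coefficient equations:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Newton iteration for the illustrative 2x2 system
//   f1(x, y) = x^2 + y^2 - 1 = 0
//   f2(x, y) = x - y         = 0
// whose positive root is x = y = 1/sqrt(2).
std::array<double, 2> newton2(double x, double y, int iters = 50) {
    for (int i = 0; i < iters; ++i) {
        double f1 = x * x + y * y - 1.0;   // residuals
        double f2 = x - y;
        // Jacobian [[2x, 2y], [1, -1]] and its determinant
        double det = -2.0 * x - 2.0 * y;
        if (std::fabs(det) < 1e-14) break; // singular Jacobian: give up
        // Solve J * delta = F by Cramer's rule, then step x <- x - delta
        double dx = (-f1 - 2.0 * y * f2) / det;
        double dy = (2.0 * x * f2 - f1) / det;
        x -= dx;
        y -= dy;
    }
    return {x, y};
}
```

Starting from ##(1, 0)## this converges to ##(1/\sqrt2,\ 1/\sqrt2)## in a handful of iterations. Production code would add a convergence tolerance and damping, or use a root-finding library such as GSL's multiroot routines.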

Stavros Kiri said:
Did you use the differentiation rules correctly? There are two ways: 1. Differentiate both sides of f(f(x)) = x. 2. Go through the equivalent expression f(x) = f-1(x) and differentiate both sides of that.
Play adequately with both, or even try to combine them.

The trick is to correctly do an appropriate change of variable (e.g. y = f(x) ... [?] ...), before or while differentiating, and to apply all those differentiation rules carefully too ... (including the change-of-variable one) ...
You're not going to like this:
$$f(x)=f^{-1}(x)$$
So, differentiating both sides (using the fact that ##(f^{-1})'(x)=\frac{1}{f'(f^{-1}(x))}##) gives:
$$f'(x)=\frac{1}{f'(f(x))}$$
Again differentiating this gives:
$$f''(x)=-\frac{1}{(f'(f(x)))^2}*f''(f(x))*f'(x)$$
Since ##f'(x)=(f^{-1})'(x)=\frac{1}{f'(f(x))}##,
$$f''(x)=-\frac{1}{(f'(f(x)))^3}*f''(f(x))$$ ....(1)
Substituting ##x \to f(x)## in (1) and using ##f(f(x)) = x## (since ##f^{-1} = f## here), we get:
$$f''(f(x))=-\frac{1}{(f'(x))^3}*f''(x)$$
Substituting this value of ##f''(f(x))## in (1), we get,
$$f''(x)=\left(-\frac{1}{(f'(f(x)))^3}\right)\cdot\left(-\frac{1}{(f'(x))^3}\right)\cdot f''(x)$$
So, wherever ##f''(x)\neq 0##,
$$\frac{1}{(f'(f(x)))^3}\cdot\frac{1}{(f'(x))^3}=1$$
After doing all this, all I get is:
$$f'(x)=\frac{1}{f'(f(x))}$$
which is what I started with: taking the real cube root of the last relation just gives back ##f'(f(x))\,f'(x)=1##, i.e. the chain rule applied to ##f(f(x))=x##. Whatever I do, I end up at the same point, so I suspect I've used a circular argument somewhere, or done something wrong. Never was good with proofs.
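As a sanity check: the relation ##f'(x)\,f'(f(x)) = 1## is exactly what the chain rule gives when differentiating ##f(f(x)) = x##, so any involution should satisfy it numerically. A minimal C++ sketch, using the hypothetical involution ##f(x) = 2/x## as a test case:

```cpp
#include <cassert>
#include <cmath>

// Central-difference approximation to f'(x)
double deriv(double (*f)(double), double x, double h = 1e-6) {
    return (f(x + h) - f(x - h)) / (2.0 * h);
}

// An involution on x != 0: inv(inv(x)) == x
double inv(double x) { return 2.0 / x; }
```

At, say, ##x = 0.7## the product `deriv(inv, 0.7) * deriv(inv, inv(0.7))` comes out to ##1## within the finite-difference error, matching the identity above, even though ##f## here is nothing like a logarithm.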

Stephen Tashi said:
I'd say it is not difficult to solve such nonlinear equations numerically.

It is also possible to solve such equations symbolically, although the theory behind that is a fairly "deep" subject (e.g. Gröbner bases or Wu's method of elimination).
I don't know of any tool that versatile in C++ or any other programming system.

The term "the C++ library" can have various meanings. There are special mathematical libraries written for C++ such as "Symbolic C++" ( https://en.wikipedia.org/wiki/SymbolicC++ ). These libraries are not "standard" components of C++, but many are free software and simple to incorporate in C++ programs.

C++ has some "built-in" functions. There is also a library for C++ called "The Standard Template Library" which is, as the name suggests, a "standard" part of C++.
That reminds me of when I took on the challenge of symbolic differentiation just after I had started programming. I couldn't have imagined that working with symbols was that hard. But I think that's fine: we don't have to solve these equations symbolically. We only have to compare the coefficients with the numerical values of the coefficients of the Taylor series, so there's no need to obtain a symbolic formula.

Kumar8434 said:
You're not going to like this:
$$f(x)=f^{-1}(x)$$
So, differentiating both sides gives ( by using the fact that ##(f^{-1})'(x)=\frac{1}{f'(f^{-1}(x))}##):
$$f'(x)=\frac{1}{f'(f(x))}$$
You missed the -1 the second time ...
Your algebra yielded just that wrong assumption as a condition for it to hold.
You made a similar mistake in
https://www.physicsforums.com/threa...used-to-get-the-roots-of-a-polynomial.904863/
before editing ...

Stavros Kiri said:
You missed the -1 the second time ...
Your algebra yielded just that wrong assumption as a condition for it to hold.
You made a similar mistake in
https://www.physicsforums.com/threa...used-to-get-the-roots-of-a-polynomial.904863/
before editing ...
No, this time I did it on purpose. Since ##f(x)=f^{-1}(x)## in this case, I replaced ##f^{-1}(x)## with ##f(x)##. I should've mentioned that.

Kumar8434 said:
No, this time I did it on purpose. Since ##f(x)=f^{-1}(x)## in this case, I replaced ##f^{-1}(x)## with ##f(x)##. I should've mentioned that.
You are right. Then you don't need to go further. That's probably your diff. equ right there. Just need to change the variable. But let me make sure.
