Find ##f(x)## such that ##f(f(x))=\log_a x##

  1. Feb 8, 2017 #1
    I was thinking about extending the definition of superlogarithms. I think that problem can be solved if we find a function ##f## such that ##f(f(x))=\log_a x##. Is there some way to find such a function? Maybe the Taylor series could be of some help. Or is there some method to find a function such that ##f(f(x))## is at least approximately equal to ##\log x##?
     
  3. Feb 8, 2017 #2
    Here's why I think it may help in extending the definition of super-logarithms:
    ##slog_a x## is defined as the number of times the logarithm with base ##a## must be applied to ##x## to get to 1. But this definition fails when ##x## is not of the form ##^na##. So, in this case we're applying ##f(x)=\log_a x## repeatedly to ##x## until we get 1.

    If we define ##\log_a x## in a similar manner, as the number of times ##x## is divided by ##a## to get to 1, then this definition fails when ##x## is not of the form ##a^n##. In this case, we're applying ##f(x)=\frac{x}{a}## repeatedly to ##x## until we get 1. This definition of ##\log_a x## can be extended by looking for functions ##f, g, h, \dots## such that ##f(f(x))=\frac{x}{a}##, ##g(g(g(x)))=\frac{x}{a}##, etc.

    These functions are easy to find: ##f(x)=\frac{x}{a^{1/2}}##, ##g(x)=\frac{x}{a^{1/3}}##, etc.
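    A quick numerical check of these functional roots (a minimal Python sketch; the particular values of ##a## and ##x## are arbitrary choices of mine):

```python
# Verify that f(x) = x / a^(1/2) is a functional square root of x/a,
# i.e. f(f(x)) = x/a, and that g(x) = x / a^(1/3) is a functional
# cube root, i.e. g(g(g(x))) = x/a.
a = 5.0
f = lambda x: x / a ** (1 / 2)
g = lambda x: x / a ** (1 / 3)

x = 7.0
print(f(f(x)), g(g(g(x))), x / a)   # all three values agree
```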

    Here's how these functions help in extending this definition of ##\log_ax##.
    Suppose we want to find ##\log_2 20##. First keep dividing 20 by 2, i.e. keep applying ##f(x)=\frac{x}{2}## to 20. After four divisions, we get 1.25. We can't divide by 2 again, because then the number would get smaller than 1.

    Now, keep applying ##f(x)=\frac{x}{\sqrt{2}}## to 1.25 until we again reach a situation in which further division would take us below 1. Let the number of times we were able to divide by ##\sqrt{2}## be ##k_1##. After that, start dividing the resulting number by ##2^{1/3}##. Let the number of times we could divide by ##2^{1/3}## be ##k_2##. Repeat this process to as much accuracy as you like. Then,

    $$\log_2 20 = 4 + \frac{k_1}{2} + \frac{k_2}{3} + \dots$$
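    The division scheme above can be sketched in code (my rough Python version; the number of stages to run is a choice of mine, not part of the post):

```python
# Approximate log_a(x) by the repeated-division scheme: divide by a as
# many times as possible without dropping below 1, then by a^(1/2),
# then a^(1/3), ...  Stage i contributes k_i / i to the logarithm.
def div_log(x, a, stages=20):
    total = 0.0
    for i in range(1, stages + 1):
        d = a ** (1.0 / i)          # the i-th root of a
        k = 0
        while x / d >= 1.0:         # stop before the quotient drops below 1
            x /= d
            k += 1
        total += k / i
    return total

print(div_log(20, 2))   # approaches log2(20) = 4.3219... as stages grows
```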
    So, if we can find functions ##f, g, h, \dots## such that ##f(f(x))=\log_a x##, ##g(g(g(x)))=\log_a x##, ##h(h(h(h(x))))=\log_a x##, etc., and count the number of times each of these functions can be applied without the result dropping below 1, then I think ##slog_a x## can be calculated using $$slog_a x = k_1 + \frac{k_2}{2} + \frac{k_3}{3} + \dots$$ Here ##k_1## is the number of times the log with base ##a## can be applied to ##x## without the result dropping below 1; ##k_2## is the number of times the function ##f## with ##f(f(x))=\log_a x## can be applied to the number obtained after those ##k_1## logarithms, again without dropping below 1; and so on.
    Please tell me if I'm wrong.
     
    Last edited: Feb 8, 2017
  4. Feb 14, 2017 #3

    Stephen Tashi

    Science Advisor

    What would you get for ##\log_{4} 6## ? ##\ 1 + 1/4##?

    You must mean that ##slog_ax## can be defined by that process. (It can't be "calculated" until it is defined.)

    Do you want the definition to have the consequence that ##slog_a(xy) = slog_a(x) + slog_a(y)## ?

    To me, the question of when, for a given ##g(x)##, we can find a function ##f## such that ##f(f(x)) = g(x)## (exactly or approximately) is much more interesting than the topic of superlogarithms. I suspect people have investigated the question, but I don't know how much progress has been made.
     
  5. Feb 20, 2017 #4
    "What would you get for ##\log_{4} 6## ? ##\ 1 + 1/4##?": So what's your question here? ##\log_{4}6\approx 1.29##, and ##1 + 1/4## is pretty close to that. You'll get closer if you calculate more of the ##k_i##. I did some blind calculations on my calculator. ##\frac{1.5}{4^{0.25}}##, i.e. 1.06, can be divided by ##4^{1/600}## approximately 25 times (I just divided it once by ##4^{25/600}## and the resulting number was very close to 1). So ##\log_4 6 \approx 1+1/4+25/600=1.29##, which is very close to the actual value.
    And I already figured out how to calculate approximate functions ##f(x)## such that ##f(f(x))=g(x)##, after no one replied on this thread for days. Getting approximate functions is pretty simple. Read this post of mine to see how: http://math.stackexchange.com/quest...-to-extend-the-definition-of-super-logarithms
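    The arithmetic quoted here is easy to check (a two-line Python check of the numbers above):

```python
import math

# Compare 1 + 1/4 + 25/600 against the true value of log_4(6).
approx = 1 + 1/4 + 25/600
exact = math.log(6, 4)
print(approx, exact)   # roughly 1.2917 vs 1.2925
```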
     
    Last edited: Feb 20, 2017
  6. Feb 20, 2017 #5

    Stephen Tashi


    My question is as stated: What would you get for ##\log_{4}6##? I ask this because your example has the ambiguous phrase "the resulting number":

    You mentioned several numbers, which number is "the resulting number"?
     
  7. Feb 20, 2017 #6
    The resulting number is the number we obtain after the divisions. For example, after dividing 20 by 2 four times, we get 1.25 (further division would take us below 1), so this is the resulting number for the next step. After this step, we'll start dividing 1.25 by ##\sqrt{2}## repeatedly to get the next resulting number. Then we take that resulting number and start dividing it by ##2^{1/3}##.
     
  8. Feb 20, 2017 #7
    If you know some programming language, you could write an iteration that seeks to minimize the function

    [tex]
    \mathcal{F}(f) = \int\limits_{\epsilon}^R \big(f\big(f(x)\big) - \ln(x)\big)^2dx
    [/tex]

    numerically, approximating the function [itex]f[/itex] as some array. The result might tell something.
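    For what it's worth, here is one crude way to set that up in plain Python (my sketch, not code from the post: ##f## is stored as values on a grid over ##[\epsilon, R]##, ##f(f(x))## is evaluated by linear interpolation, and the minimization is a naive coordinate descent):

```python
import math

# Minimize F(f) = sum over a grid of (f(f(x)) - ln x)^2, with f stored
# as an array of values on [EPS, R] and evaluated by linear interpolation.
EPS, R, N = 0.5, 4.0, 40
xs = [EPS + i * (R - EPS) / (N - 1) for i in range(N)]

def interp(fvals, t):
    """Piecewise-linear evaluation of f at t, clamped at the grid ends."""
    if t <= xs[0]:
        return fvals[0]
    if t >= xs[-1]:
        return fvals[-1]
    h = xs[1] - xs[0]
    j = int((t - xs[0]) / h)
    w = (t - xs[j]) / h
    return (1 - w) * fvals[j] + w * fvals[j + 1]

def objective(fvals):
    return sum((interp(fvals, interp(fvals, x)) - math.log(x)) ** 2
               for x in xs)

f = xs[:]                      # start from the identity function
step = 0.1
for _ in range(100):
    improved = False
    for j in range(N):
        base = objective(f)
        for delta in (step, -step):
            f[j] += delta      # try nudging one grid value up, then down
            if objective(f) < base:
                improved = True
                break
            f[j] -= delta      # revert if the nudge didn't help
    if not improved:
        step *= 0.5            # refine once no single move helps
        if step < 1e-3:
            break

print(objective(f))            # smaller than the starting value of F
```

    This is only meant to show the shape of the computation; a real attempt would use a proper optimizer and worry about the behavior of ##f## outside the grid.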
     
  9. Feb 20, 2017 #8

    Stephen Tashi


    When we are computing ##\log_{2} 4##, we divide 4 by 2 until the division produces a number less than 2, so we divide until the division produces 1. Is 1 the "resulting number"?
     
  10. Feb 20, 2017 #9
    No. We divide 4 by 2 repeatedly until the result becomes less than or equal to 1. If it becomes equal to 1, that's great and we stop there. The objective is to get to 1, and if that's not possible by repeated division, then we try to get as close to 1 as possible. If dividing by 2 repeatedly takes us below 1, then we go back one step to the last quotient that was greater than 1, and that's the 'resulting number', i.e. the number which we now have to divide by ##\sqrt{2}## to get as close to 1 as possible; then we repeat the steps. Come on, man. This is simple logic. Why are you arguing over this? To make you understand, I'm using this fact:
    If a number ##x## is divided by the ##n##th root of ##a## a total of ##k## times such that the resulting quotient is very close to 1, then ##\log_a x \approx k/n##. The method I've given here to calculate the logarithm is just a slightly different version of this. I hope you understand now. Are you trolling?
     
  11. Feb 20, 2017 #10
    No. I don't know any programming language. But I figured out how to get approximate functional ##n##th roots, i.e. functions which, when applied ##n## times to ##x##, give the required function ##f(x)##. And it seemed to work. I could convincingly extend the definition of super-logarithms with it. Here's my post on stackexchange: http://math.stackexchange.com/quest...-to-extend-the-definition-of-super-logarithms
     
  12. Feb 21, 2017 #11

    Stephen Tashi


    You are relying on intuition, not logic. Nobody can argue anything about your algorithm until you state a specific algorithm. You said you don't use a programming language, so that would explain why you aren't used to describing algorithms in detail. I give you credit for trying to explain the algorithm by examples, but keep in mind that by relying on intuition and examples, you are appealing to a very limited audience of mathematicians. You only appeal to people who are sympathetic enough to go through your examples and do the labor of defining your algorithm for themselves - and then try to formulate their own proofs for your claims.

    So far, you haven't gotten an enthusiastic response. On that site, you are also giving intuitive arguments and numerical examples. You ask if you have made a new discovery. In modern mathematics, for something to be a mathematical discovery, it needs to be precisely stated and backed up by proofs - not just illustrated by some numerical examples. I agree that it is possible to make "discoveries" in the sense of intuitive concepts and convincing examples. However, you shouldn't be surprised if that sort of discovery doesn't arouse much interest among mathematicians. They are used to hearing claims of such discoveries that don't pan out.

    If you think that people who want precise descriptions and proofs are "arguing" with you, you are getting yourself cross-wise with the community of people who can do mathematics. You need to set yourself apart from the crackpots.

    I think your intuitive ideas have some merit, but it will be very difficult for you to develop them into mathematical discoveries in the short term. In order to convert your intuitions into a mathematical discovery, you'll have to gain experience in stating definitions precisely and proving theorems.

    For example, in your stackexchange post, you propose (by example) a method for finding a function ##f(x)## such that ##f(f(x)) \approx \log(x)## near ##x = a## by using a Taylor series approximation. You assume ##f(x)## is a polynomial function ##p(x)## of some (finite) degree. Then you compare coefficients of ##p(p(x))## with the corresponding coefficients in the Taylor series for ##\log(x)## about ##x = a##. That's a reasonable approach, but it falls short of specifying an algorithm that defines ##slog(a)##. For example, we don't know what degree polynomial we are required to use in computing the ##f## used in the algorithm for ##slog(a)##. It may be that defining ##slog(a)## will require a limit of results produced by an infinite sequence of algorithms instead of the result of one specific algorithm that uses polynomials of specific degrees.
     
  13. Feb 21, 2017 #12
    I agree with you. But superlogarithms don't have many nice properties like logarithms do, so I have to rely on intuition there. But the results I've got are convincing. For example, if you evaluate an approximate functional square root of ##\log_a x## by my method and then apply that function once to ##a^a##, the answer you get is very close to 1. Similarly, when the functional cube root of ##\log_a x## is applied once to ##a^{a^a}##, the answer is again very close to 1.
     
  14. Feb 21, 2017 #13
    Sorry, I made a slight mistake in my post #12. I can't edit it now. If you're still interested, the actual fact is:
    If you evaluate an approximate functional square root of ##\log_{a^a} x## by my method and then apply that function once to ##a##, the answer you get is very close to 1. Similarly, when the functional cube root of ##\log_{a^{a^a}} x## is applied once to ##a##, the answer is again very close to 1.
     
  15. Feb 25, 2017 #14
    I don't know if it can be done by programming. But could you check one thing for me, if it is possible with programming and it's not much work?
    First write a program which accepts polynomials ##f(x)## of any degree with variable coefficients.
    Say we input a polynomial of degree ##n##.
    Then the program computes ##f(f(x))##. I don't know if this can be proved, but I think that if none of the coefficients of ##f(x)## are zero, then ##f(f(x))## must have distinct terms with all the powers of ##x## ranging from ##x^{n^2}## down to ##x^0##.
    Now input any number ##a## such that ##a## is of the form ##k^k##, where ##k## is a natural number. The program then computes the Taylor series of ##\log_a x## around ##x=k##.
    Now the program compares the coefficients of the terms containing ##x^0, x^1, x^2, \dots, x^n## in ##f(f(x))## with the terms containing the same powers of ##x## in the Taylor series, solves for the coefficients, and hence gets ##f(x)##.
    Finally, the program substitutes ##x=k## in ##f(x)## and gets the answer. I just need to know if this answer converges towards 1 as we input polynomials of higher and higher degrees. I could do this by hand, but it's the equation-solving part which gets the messiest when I assume ##f(x)## of higher degrees.
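    To illustrate, here is the degree-1 case of that program worked out in Python (my sketch; for a linear ##f(x)=Ax+B## the equation solving can be done in closed form, so no computer algebra is needed):

```python
import math

# Degree-1 instance of the scheme above: assume f(x) = A*x + B, so
# f(f(x)) = A^2*x + (A*B + B), and match the first two Taylor
# coefficients of log_a(x) about x = k, where a = k^k.
def functional_sqrt_of_log(k):
    lnk = math.log(k)
    # With a = k^k:  log_a(x) ~ 1/k + (x - k) / (k^2 * ln k)  near x = k
    c1 = 1.0 / (k * k * lnk)     # coefficient of x after expanding
    c0 = 1.0 / k - k * c1        # constant term after expanding
    A = math.sqrt(c1)            # from A^2 = c1 (take the positive root)
    B = c0 / (A + 1.0)           # from A*B + B = c0
    return lambda x: A * x + B

f = functional_sqrt_of_log(3)    # f(f(x)) ~ log_27(x) near x = 3
print(f(3))      # close to 1, as claimed earlier in the thread
print(f(f(3)))   # log_27(3) = 1/3, up to rounding
```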
     
  16. Feb 25, 2017 #15

    Stephen Tashi


    There are probably people who can do that task "without much work" because they are familiar with Wolfram or some other computer algebra system (CAS) that has libraries to do the component tasks. It would take a lot of work to do it "from scratch" in a language such as C++, but it would be educational. There are libraries for C++ that do those tasks - but then we have the problem of learning how to use the libraries. I'll think about the project, but I'm not promising to do it.
     
  17. Feb 25, 2017 #16
    Thanks. That would be great. Please reply if you get something.
     
  18. Feb 25, 2017 #17
    Such a task is also a "functional relations" problem. [I used to be an expert in those, but that was a long time ago...]

    Why don't you start with a simpler but very similar problem, in terms of solving technique: "Find all real continuous functions such that ##f(f(x)) = x##." There is a good method, but you have to be good at calculus. Since I think you have a lot of potential, I won't give you a hint for the method yet, only the expected answer, which is:
    ##f(x) = x## or ##f(x) = c/x##.

    Any ideas?

    If you solve this, you can apply the method to your problem as well.
     
  19. Feb 25, 2017 #18
    I could guess the solution but couldn't get anywhere near finding the solutions. ##f(x)=c-x## also satisfies that.
    BTW, a dumb approach:
    Let ##f(x)=ax+b##
    ##f(f(x))=a^2x+ab+b##
    Also, ##f(f(x))=x##
    So, after comparing the coefficients of ##x## and the constant terms, I could get:
    ##a=1, b=0##
    So, ##f(x)=x##.
    Also, ##f(x)=-x## satisfies ##f(f(x))=x##.
     
  20. Feb 25, 2017 #19

    Stephen Tashi


    It might not be that simple. For example, if ##f(x) = Ax^2 + Bx + C##, then

    ##f(f(x)) = A^3 x^4 + 2A^2B x^3 + (AB^2 + 2A^2C + AB)x^2 + (2ABC + B^2)x + (AC^2 + BC + C)##.

    If we compare this with the Taylor series for a given function ##g(x) = \sum_{i=0}^\infty c_i x^i##, the implied simultaneous equations are:

    1) ##AC^2 + BC + C = c_0 ##
    2) ##2ABC + B^2 = c_1##
    3) ## AB^2 + 2A^2C + AB = c_2##
    4) ## 2A^2B = c_3##
    5) ## A^3 = c_4##.

    That's 5 equations in the 3 unknowns ##A, B, C##. The system of equations may not be consistent.
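    A quick numerical cross-check of the composition (my sketch, with the ##x^2## coefficient ##AB^2 + 2A^2C + AB##; the test values of ##A, B, C## are arbitrary):

```python
# Compose f(x) = A x^2 + B x + C with itself and compare a direct
# evaluation of f(f(x)) against the expanded coefficients.
A, B, C = 1.5, -0.5, 2.0          # arbitrary test values

def f(x):
    return A * x**2 + B * x + C

coeffs = [A**3,                              # x^4
          2 * A**2 * B,                      # x^3
          A * B**2 + 2 * A**2 * C + A * B,   # x^2
          2 * A * B * C + B**2,              # x^1
          A * C**2 + B * C + C]              # x^0

for x in (-2.0, 0.3, 1.7):
    direct = f(f(x))
    expanded = sum(c * x**(4 - i) for i, c in enumerate(coeffs))
    print(x, direct, expanded)     # the two values agree at each x
```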
     
  21. Feb 25, 2017 #20
    Not a dumb approach, but not general enough - it only finds linear solutions.
    Solving the system ##a^2=1## and ##ab+b=0## carefully will yield all such solutions (it indeed includes ##c-x##, and ##-x## is the special case ##c=0##).

    How far have you gone with calculus? (Because I think you are very advanced for high school... [I saw your last response by e-mail first, before editing, and have seen your profile before, from the dx thread...])
     