Kumar8434 said: Substituting, f(x) = x and hence
You mean f(x) = y?
Stavros Kiri said: You mean f(x) = y?
Does it make a difference?
Kumar8434 said: Does it make a difference?
Well, f(x) = x is not always true. Are you taking an example?
Stavros Kiri said: Well, f(x) = x is not always true. Are you taking an example? All we know is that f(f(x)) = x.
Ok, so I substitute ##f(x)=y## and hence ##x=f^{-1}(y)=f(y)##, which gives me:
$$f''(f(y))=-\frac{1}{(f'(y))^3}f''(y)$$
But there are still functional arguments. I can't figure out any way to get rid of expressions like ##f''(f(y))##. If I get rid of them by some substitution, then maybe I would get a solvable differential equation.
Stephen Tashi said: I'd say it is not difficult to solve such nonlinear equations numerically. It is also possible to solve such equations symbolically, although the theory behind that is a fairly "deep" subject (e.g. Gröbner bases or Wu's method of elimination). I don't know of any tool that versatile in C++ or any other programming system. The term "the C++ library" can have various meanings. There are special mathematical libraries written for C++, such as "Symbolic C++" ( https://en.wikipedia.org/wiki/SymbolicC++ ). These libraries are not "standard" components of C++, but many are free software and simple to incorporate in C++ programs. C++ has some "built-in" functions. There is also a library for C++ called "The Standard Template Library" which is, as the name suggests, a "standard" part of C++.
So, symbolic or numerical, is there a way to write such a program or not? If equation-solving tools don't already exist in the library, then I don't think anyone can design a computer program to solve complicated non-linear equations like these.
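To make "solve numerically" concrete, here is a minimal self-contained sketch in plain C++ (no libraries) applying Newton's method to one toy nonlinear equation; the equation ##xe^x=2## and the starting guess are arbitrary choices for illustration, not anything from this thread.
[code]
#include <cmath>
#include <cstdio>

// Newton's method for a single nonlinear equation g(x) = 0,
// with the toy example g(x) = x*exp(x) - 2 (arbitrary choice).
int main() {
    double x = 1.0;                               // initial guess
    for (int i = 0; i < 20; ++i) {
        double g  = x * std::exp(x) - 2.0;        // g(x)
        double dg = (x + 1.0) * std::exp(x);      // g'(x)
        double step = g / dg;
        x -= step;
        if (std::fabs(step) < 1e-12) break;       // converged
    }
    std::printf("root of x*e^x = 2:  x ~ %.12f\n", x);
    return 0;
}
[/code]
Real systems of equations need the multivariate version (a Jacobian solve per step), but the idea is the same.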
Stavros Kiri said: Instead, for easier algebra, I used (f°g)' = (f'°g)•g'. I got the same result: f'(f(x))•f'(x) = 1.
Yeah, I told you that I always arrive at that, whatever I do. First I differentiated both sides of ##f(f(x))=x##, which gave me this equation; then I differentiated both sides of ##f(x)=f^{-1}(x)## and again got the same equation. Then I did all that and again arrived at the same thing.
Kumar8434 said: Ok, so I substitute ##f(x)=y## and hence ##x=f^{-1}(y)=f(y)##, which gives me:
$$f''(f(y))=-\frac{1}{(f'(y))^3}f''(y)$$
But there are still functional arguments. I can't figure out any way to get rid of expressions like ##f''(f(y))##. If I get rid of them by some substitution, then maybe I would get a solvable differential equation.
I don't think getting to a 2nd-order differential equation is a good idea here.
Kumar8434 said: Yeah, I told you that I always arrive at that, whatever I do. First I differentiated both sides of ##f(f(x))=x##, which gave me this equation; then I differentiated both sides of ##f(x)=f^{-1}(x)## and again got the same equation. Then I did all that and again arrived at the same thing.
Good. And I quote my edit above:
Stavros Kiri said: Instead, for easier algebra, I used (f°g)' = (f'°g)•g'. I got the same result: f'(f(x))•f'(x) = 1.
That's the one we've got to solve, by an appropriate change of variable. We want to keep it first order.
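For reference, both of those equations follow from ##f(f(x))=x## by the chain rule. Differentiating once:
$$\frac{d}{dx}f(f(x))=f'(f(x))\,f'(x)=1,$$
and differentiating again, then using ##f'(f(x))=1/f'(x)##:
$$f''(f(x))\,(f'(x))^2+f'(f(x))\,f''(x)=0\quad\Rightarrow\quad f''(f(x))=-\frac{1}{(f'(x))^3}\,f''(x).$$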
Stavros Kiri said: I checked that x, c-x, a/x do indeed satisfy this equation. Are these the only ones? That's what we are actually trying to find out or prove, by attempting to actually solve that equation, despite the functional arguments! That's the real challenge of 'functional relations problems' ...
I think there are an infinite number of functions satisfying ##f(f(x))=x##: https://en.wikipedia.org/wiki/Involution_(mathematics)
Kumar8434 said: I think there are an infinite number of functions satisfying ##f(f(x))=x##: https://en.wikipedia.org/wiki/Involution_(mathematics)
The arbitrariness of the constants makes them infinite anyway. That page basically talks about self-inverse functions. This is common for complex functions, in QM and in Hilbert spaces, e.g. with operators (unitary and self-adjoint operators ...). But here we are limited to real functions. However ... wow, that wiki reference you dug up is good! I have to admit I hadn't seen it. My favourite one there is ...
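A quick check that the three examples mentioned above really are involutions:
$$f(x)=x:\; f(f(x))=x;\qquad f(x)=c-x:\; f(f(x))=c-(c-x)=x;\qquad f(x)=\frac{a}{x}:\; f(f(x))=\frac{a}{a/x}=x.$$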
Stavros Kiri said: We have to find a more general solution.
A)
We started as:
Stavros Kiri said: "Find all real continuous functions such that f(f(x)) = x".
In a similar manner as above (#47), we have:
a) Assume f is invertible. Then
##f^{-1}[f(f(x))] = f^{-1}(x)##,
which easily yields: ##f(x) = f^{-1}(x)##
That's the involution* condition, as pointed out in
https://en.wikipedia.org/wiki/Involution_(mathematics)
The large class of real functions satisfying that condition can at least be depicted or represented graphically as "all the invertible functions whose x-y graph is symmetric about the diagonal straight line y = x". So it's a huge class of functions (we can easily construct a whole bunch of those, e.g. pick a continuous shape, make it symmetric about y = x, and so on).
* a function that is its own inverse
b) Assume f is not invertible. E.g. the constant function obviously is not a solution, but I did not try any further.
B) Now about your original log problem:
Kumar8434 said: I was thinking about extending the definition of superlogarithms. I think maybe that problem can be solved if we find a function ##f## such that ##f\circ f(x)=\log_a x##. Is there some way to find such a function? Maybe the Taylor series could be of some help. Or is there some method to find a function such that ##f(f(x))## is at least approximately equal to ##\log x##?
Try applying the same technique and differentiate both sides to get a differential equation (then perhaps look for an appropriate change of variable) ... but I doubt it can be solved explicitly. I think perhaps the best way to go is your numerical methods ...
[+/or cf. https://www.physicsforums.com/threads/find-f-x-such-that-f-f-x-log_ax.903097/#post-5697290 ]
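To illustrate how large the class in (a) is: one standard recipe (brought in here for illustration, not taken from this thread) is ##f(x)=g^{-1}(c-g(x))## for any invertible ##g## and constant ##c##, since then ##f(f(x))=g^{-1}(g(x))=x##, and the graph of any such involution is symmetric about y = x as described above. A minimal numerical check with the arbitrary choices ##g(x)=x^3## and ##c=1##:
[code]
#include <cmath>
#include <cstdio>

// Build an involution from an invertible g: f(x) = g^{-1}(c - g(x)).
// Here g(x) = x^3, g^{-1}(y) = cbrt(y), c = 1 (arbitrary choices).
static double f(double x) {
    const double c = 1.0;
    return std::cbrt(c - x * x * x);
}

int main() {
    // f(f(x)) should reproduce x up to rounding error.
    for (double x = -2.0; x <= 2.0; x += 0.5)
        std::printf("x = %5.2f   f(f(x)) = %.12f\n", x, f(f(x)));
    return 0;
}
[/code]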
Stavros Kiri said: A) We started as: "Find all real continuous functions such that f(f(x)) = x" ... B) Now about your original log problem ... [full post quoted above]
I don't think this problem can be solved explicitly either. If it could be, someone would've done it by now. Approximate functions are our only option.
Kumar8434 said: I don't think this problem can be solved explicitly either. If it could be, someone would've done it by now. Approximate functions are our only option.
I tend to agree.
Stephen Tashi said: I'd say it is not difficult to solve such nonlinear equations numerically. [...]
The project just got simpler. Now forget the polynomials. Could you get approximate functions such that ##f(f(x))=\log_{a^a}x##, by the method suggested by jostpuur, and see if ##f(a)## converges towards 1? Similarly, get a function ##g(x)## such that ##g(g(g(x)))=\log_{a^{a^a}}x## and see if ##g(a)## converges towards 1. I don't think that should be much of a problem.
jostpuur said: If you know some programming language, you could write an iteration that seeks to minimize the function
[tex]
\mathcal{F}(f) = \int\limits_{\epsilon}^R \big(f\big(f(x)\big) - \ln(x)\big)^2dx
[/tex]
numerically, approximating the function [itex]f[/itex] as some array. The result might tell something.
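A minimal sketch of that iteration in plain C++ (standard library only): ##f## is stored as an array of samples on a grid over ##[\epsilon,R]##, ##f(f(x))## is evaluated by linear interpolation, and a discretized version of ##\mathcal{F}(f)## is reduced by accept-if-better random perturbations of single samples. The grid size, bounds, starting guess and iteration count are all arbitrary choices.
[code]
#include <cmath>
#include <cstdio>
#include <vector>
#include <random>

// f is stored as N samples on the grid x_i in [EPS, R]; f(f(x)) is
// evaluated with piecewise-linear interpolation, and the discretized
// functional F(f) = sum_i (f(f(x_i)) - ln x_i)^2 is minimized by
// accept-if-better random perturbations of single samples.
const int    N   = 100;
const double EPS = 0.1, R = 10.0, H = (R - EPS) / (N - 1);

double xi(int i) { return EPS + i * H; }

double interp(const std::vector<double>& f, double x) {
    double t = (x - EPS) / H;                 // grid coordinate
    if (t <= 0.0)     return f[0];            // clamp below the grid
    if (t >= N - 1.0) return f[N - 1];        // clamp above the grid
    int i = (int)t;
    double w = t - i;
    return (1.0 - w) * f[i] + w * f[i + 1];
}

double cost(const std::vector<double>& f) {
    double s = 0.0;
    for (int i = 0; i < N; ++i) {
        double e = interp(f, interp(f, xi(i))) - std::log(xi(i));
        s += e * e;
    }
    return s;
}

int main() {
    std::vector<double> f(N);
    for (int i = 0; i < N; ++i) f[i] = std::log(xi(i) + 1.0);  // crude start
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> pick(0, N - 1);
    std::normal_distribution<double> bump(0.0, 0.05);
    double c = cost(f);
    for (long it = 0; it < 500000; ++it) {
        int i = pick(rng);
        double saved = f[i];
        f[i] += bump(rng);
        double c2 = cost(f);
        if (c2 < c) c = c2; else f[i] = saved;  // keep only improvements
    }
    std::printf("final cost %.6f, f(2) ~ %.4f\n", c, interp(f, 2.0));
    return 0;
}
[/code]
Replacing std::log(x) with std::log(x) / (a * std::log(a)), i.e. ##\log_{a^a}x##, and printing the interpolated value at ##x=a## would give the convergence experiment suggested a few posts above.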
Kumar8434 said: No. I don't know any programming language.
Kumar8434 said: I'm in high-school final year. I've just learned some of the advanced concepts from the internet. That's all.
Kumar8434 said: I don't know if it can be done by programming. But could you check one thing for me, if it's possible with programming and if it's not much work?
Kumar8434 said: Thanks. That would be great. Please reply if you get something.
Kumar8434 said: I don't think that should be much of a problem.
jostpuur said: Everybody (almost) must know how to write programs. If you are only at high school, well then it is understandable that you haven't learned programming yet, but you should start a programming hobby as soon as possible. You have a severe attitude problem if you are hoping that other people would start writing some programs to study your problem. I know full well how to write a program that would minimize the function [itex]\mathcal{F}(f)[/itex] I mentioned earlier, but I'm not going to spend my time on it now, because getting some numerical results and graphs about this [itex]f[/itex] isn't very important to me. Usually the relationship between the young and energetic, and the old and wise, works so that on their own the young and energetic don't know what to do with their enthusiasm, and then the old guys advise them on how to use their energy in a purposeful way. You should appreciate the fact that I even mentioned the existence of the function [itex]\mathcal{F}(f)[/itex], because on your own you would never have realized that that kind of function could be written down to be minimized.
He only asked for people's help here. There is nothing wrong with that. That's why PF is here. Whoever wants to help does. We are all volunteers here.
Kumar8434 said: Could you get approximate functions, such that ##f(f(x))=\log_{a^a}x##, by the method suggested by jostpuur and see if ##f(a)## converges towards 1?
jostpuur suggested a goal, not a specific method. Before charging off to write programs, I prefer to think about the problem a little longer.
jostpuur said: Everybody (almost) must know how to write programs. [...] You have a severe attitude problem if you are hoping that other people would start writing some programs to study your problem. [...]
I fully appreciate your effort. I don't know advanced programming concepts yet; I haven't even started arrays. So, instead of working on my skills for months and then checking my work, I thought I'd ask Stephen Tashi to quickly check it, but only if it wasn't too much work. Since he later mentioned that the task was difficult, I didn't press him further.
Stephen Tashi said: jostpuur suggested a goal, not a specific method. Before charging off to write programs, I prefer to think about the problem a little longer.
I think we can get exact functions ##f(x)## such that ##f(f(x))=\log x## using limits. If we assume ##f(x)## to be of degree ##n##, then ##f(f(x))## will be of degree ##n^2##. Now we ignore the terms containing powers of ##x## greater than ##n##, and then solve ##n+1## equations in ##n+1## variables to get the coefficients. Since we're working with a symbolic ##n##, I think the coefficients will be functions of ##n##, so the exact coefficients can be obtained by applying ##\lim_{n\rightarrow\infty}## to those functions. But, of course, I'm not saying that solving those seriously complicated non-linear equations in terms of ##n## is practically possible.
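A tiny worked instance of this coefficient matching, with ##n=1## and ##\log x## replaced (just for illustration) by its first-order Taylor polynomial ##x-1## at ##x=1##: set ##f(x)=a+bx##. Then
$$f(f(x))=a(1+b)+b^2x\stackrel{!}{=}x-1\quad\Rightarrow\quad b^2=1,\;\;a(1+b)=-1\quad\Rightarrow\quad f(x)=x-\tfrac{1}{2},$$
and indeed ##f(f(x))=x-1##. Already for ##n\ge 2## the matching equations become genuinely non-linear in the coefficients, which is the difficulty described above.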
One possibility is that there is no continuous function ##f(x)## such that ##f(f(x))## well approximates ##\log_a(x)## everywhere in the set of values ##(0,\infty)##. For example, perhaps when we create a function that does the approximation well on the set of values ##(0, 2a)##, the error gets worse and worse for larger values of ##x##.
Here is an amusing way to think about the problem. Suppose I try to find f(x) by creating a table of values. I pick the value x0 arbitrarily and also pick the value of f(x0) arbitrarily. I know that I want f(f(x0)) = log(x0), so the table looks like:

x, f(x), f(f(x))
----------------
1) x0, f(x0), log(x0)

Then for a second row, I use the value x1 = f(x0). I already know that f(f(x0)) = log(x0). I also know that I want f(f(x1)) = log(x1), so all three entries of the second row are determined:

x, f(x), f(f(x))
----------------
1) x0, f(x0), log(x0)
2) f(x0), log(x0), log(x1) = log(f(x0))

Using x2 = f(f(x0)) = log(x0) to form a third row, I get:

x, f(x), f(f(x))
----------------
1) x0, f(x0), log(x0)
2) x1 = f(x0), log(x0), log(x1)
3) x2 = log(x0), log(x1), log(x2)

If I continue to follow this process, the only arbitrary choices I made are the values for x0 and f(x0). The rest of the table is determined by those two values. As an approximation method, this doesn't give us much control over which x values appear in the table.
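A short sketch of this bootstrapping in C++: pick x0 and f(x0), set ##y_0=x_0##, ##y_1=f(x_0)##, and then every further row ##(x,f(x))=(y_k,y_{k+1})## follows from ##y_{k+2}=\log(y_k)##. The starting values 5 and 2 are arbitrary; with these choices the orbit soon needs the log of a negative number, so the table simply stops, which illustrates how little control the construction gives.
[code]
#include <cmath>
#include <cstdio>

// Table construction: rows are (y_k, y_{k+1}) with y_{k+2} = log(y_k),
// because f(f(y_k)) = log(y_k). Only y0 = x0 and y1 = f(x0) are chosen.
int main() {
    double ya = 5.0, yb = 2.0;              // x0 and f(x0), arbitrary
    std::printf("%12s %12s\n", "x", "f(x)");
    for (int k = 0; k < 10; ++k) {
        std::printf("%12.6f %12.6f\n", ya, yb);
        if (ya <= 0.0) {                    // next row would need log(<=0)
            std::printf("orbit left the domain of log\n");
            break;
        }
        double yc = std::log(ya);
        ya = yb;
        yb = yc;
    }
    return 0;
}
[/code]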
Stavros Kiri said: I tend to agree.
Could there be a family of functions, one for each point, such that ##f(f(x))=\log_a x##?
SlowThinker said: Kumar, you should try to read the document
http://tetration.org/IF.pdf
again. If you get past the unfortunate notation, it seems to answer your questions. It looks as if the author has some good knowledge of the subject; in particular, it seems obvious that the Taylor expansion should be performed at the function's fixed point(s) (0.318132 + 1.337236i for log(x)).
I tried to read it again, but it's too hard to read something when you don't understand more than half of the terms. He said that it involves college-level maths, so I've saved it and I'll read it again when I get to that level. I know that, in the paper, he proves that functional roots exist and are unique, but how it's proved is far beyond my knowledge.
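That quoted fixed point is easy to check numerically: it solves ##\log z = z## in the complex plane. A minimal Newton iteration, with an arbitrary starting guess near the quoted value:
[code]
#include <complex>
#include <cstdio>

// Newton's method for g(z) = log(z) - z, whose root is the fixed
// point of log; g'(z) = 1/z - 1. Start near the quoted value.
int main() {
    std::complex<double> z(0.3, 1.3);
    for (int i = 0; i < 50; ++i)
        z -= (std::log(z) - z) / (1.0 / z - 1.0);
    std::printf("fixed point ~ %.6f + %.6f i\n", z.real(), z.imag());
    return 0;
}
[/code]
This prints approximately 0.318132 + 1.337236 i, matching the value above.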