
Oscillation in Dynamic Models

  1. Sep 4, 2012 #1
    For a general family of dynamic models, x_{k+1} = f(x_k), oscillation can occur; for example, the map x_{k+1} = x_k^2 - 1 oscillates under the starting value 0, producing the iterates 0, -1, 0, -1, ….

    This type of oscillation should be detectable before actually plugging in the numbers. How can you figure out in advance that a dynamic model will oscillate in such a way?
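A quick numerical check makes the behaviour in the question visible. This is just a sketch, assuming the intended map is x_{k+1} = x_k^2 - 1 with starting value 0:

```python
def f(x):
    # the quadratic map from the question: f(x) = x^2 - 1
    return x * x - 1

# iterate from the starting value 0 and record the trajectory
x = 0
trajectory = [x]
for _ in range(6):
    x = f(x)
    trajectory.append(x)

print(trajectory)  # alternates: [0, -1, 0, -1, 0, -1, 0]
```

The orbit visibly repeats with period 2 (f(0) = -1 and f(-1) = 0), which is the oscillation the question asks how to predict without computing.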
  3. Sep 4, 2012 #2


    Science Advisor

    Hey camjohn and welcome to the forums.

    One condition up-front is the idea of convergence. If the sequence doesn't converge to some point, you definitely can rule out oscillation.

    The general definition of oscillation is that for some function mapping, we have the property that f(x + p) = f(x) for all x and a fixed p.

    If you can show this holds for a fixed p, then you're done.

    You can also do things like project your mapping to a specific kind of periodic basis and check the results to see whether they are consistent with something of a periodic nature.

    I'd recommend looking at the first two, and look at what f(x + p) = f(x) implies in all different kinds of analyses (like Taylor series expansions and so on). You can do the same sort of thinking for discrete functions as well (of which your case is an example).
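One concrete version of the "project onto a periodic basis" idea above is to take a discrete Fourier transform of the iterates: for a period-2 orbit, the magnitude spectrum (after removing the mean) peaks at the Nyquist bin. A sketch, assuming NumPy is available and using the map from the question:

```python
import numpy as np

def f(x):
    return x * x - 1  # the map from the question

# generate N iterates starting from 0
N = 16
traj = np.empty(N)
traj[0] = 0.0
for k in range(1, N):
    traj[k] = f(traj[k - 1])

# remove the mean, then look at the magnitude spectrum
spectrum = np.abs(np.fft.rfft(traj - traj.mean()))
peak_bin = int(np.argmax(spectrum))
print(N // peak_bin)  # dominant period of the signal: 2 for this map
```

This only checks consistency with periodicity over a finite window; it does not by itself prove the orbit is exactly periodic.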
  4. Sep 5, 2012 #3
    OK, but is that really the definition of oscillation? f(x+p) = f(x)? I'm probably wrong, but I read that expression as some function equaling itself plus a constant. Wouldn't an oscillation have to be expressed, at least to some degree, in a recursive manner? How can you simplify the function, given the starting value, in a way that the oscillation becomes clear? Is there a way? Does f(x+p) = f(x) mean something other than a function equaling itself plus a constant, p?
  5. Sep 5, 2012 #4


    Science Advisor

    By oscillation, I interpreted your question to mean "periodic" and the definition f(x+p) = f(x) for some non-zero p is the definition of periodicity.

    Now this is a very specific kind of oscillation.

    If you want a general kind of oscillation that can potentially happen infinitely many times, then for a smooth signal this means the derivative will be equal to 0 at infinitely many points.

    For a discrete signal, instead of the derivative being zero, this means there are infinitely many local maxima and minima.

    If you approximate, interpolate or fit a discrete signal by a continuous one, then the continuous version must have infinitely many zeroes of the derivative.

    If you don't have this property, then either you can only have a finite number of oscillations within some interval (after which there are no more), or you have none at all and the function will tend to specific limits as the domain goes to both extremes (like a hyperbolic function, exponential, that kind of thing).
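For the discrete case, the "infinitely many local maxima and minima" criterion can at least be checked over a finite window by counting turning points of the iterate sequence. A sketch, again assuming the map x_{k+1} = x_k^2 - 1:

```python
def f(x):
    return x * x - 1

def count_turning_points(seq):
    """Count interior points that are strict local maxima or minima."""
    turns = 0
    for i in range(1, len(seq) - 1):
        left, mid, right = seq[i - 1], seq[i], seq[i + 1]
        if (mid > left and mid > right) or (mid < left and mid < right):
            turns += 1
    return turns

# 20 iterates of the oscillating trajectory starting at 0
traj = [0]
for _ in range(19):
    traj.append(f(traj[-1]))

print(count_turning_points(traj))  # every interior point is a turning point: 18
```

A trajectory that converges monotonically would instead give a turning-point count that stops growing as the window lengthens.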
  6. Sep 5, 2012 #5
    Ohhh so is it maybe that the f(x+p)=f(x) is supposed to be in subscripts? I wish that that great math keyboard was still up for the quick replies. If the equation was saying that the function takes the same value at different indices (f_2 = f_4), then that totally makes sense to me, and it perfectly matches my idea of oscillation: repetitive variation between two or more values (like cosine or sine).

    So then, for general types of oscillation, how could one prove, simply with the starting value x_1 = 0 and the dynamic model x_{k+1} = x_k^2 - 1, that f(x+p) = f(x)? I'm trying to find some convenient conceptual approach that would allow me to just simplify and be like, oh, it's gonna oscillate. Would the most practical approach just be to detect continuously occurring local maxima and minima?
  7. Sep 6, 2012 #6


    Science Advisor

    Yeah for this problem you should put it in subscripts.

    The thing is though, you want to prove it analytically because if you don't you'll need infinitely many statements to do so.

    Since you have a nested (iterated) function, just use that definition and show that for some p, f_{x+p}(a) = f_x(a) for some initial condition a.

    Again, analytically: if it converges it doesn't oscillate, and if it doesn't converge then it has a chance of oscillating, depending on the degree of the equation in simplified form.
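The iterated-function condition can be checked directly for small periods: a period-2 point is a fixed point of the composition f(f(x)) that is not a fixed point of f itself. A sketch for the map in question:

```python
def f(x):
    return x * x - 1

def f2(x):
    # the second iterate: f(f(x)) = (x^2 - 1)^2 - 1
    return f(f(x))

for a in (0, -1):
    print(a, f2(a) == a, f(a) == a)
# both 0 and -1 are fixed by f∘f but not by f: a genuine 2-cycle
```

Analytically this amounts to solving (x^2 - 1)^2 - 1 = x and discarding the roots that already satisfy x^2 - 1 = x; the roots x = 0 and x = -1 that remain form the 2-cycle.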
  8. Sep 6, 2012 #7
    Oscillations in this case are more often called orbits.
  9. Sep 6, 2012 #8


    Science Advisor
    Homework Helper
    Gold Member
    2016 Award

    Is a (non-vanishing) oscillation necessarily periodic? How about x_{k+1} = sin(1 + arcsin(x_k))?
    Btw, I think this line in your first reply is the opposite of what you intended: "If the sequence doesn't converge to some point, you definitely can rule out oscillation."
  10. Sep 6, 2012 #9


    Science Advisor

    The convergence term is not the normal convergence per se, but more or less a form of directional convergence (so looking at the derivative as opposed to the function).

    It is analogous to looking at whether the derivative is 0 (i.e. it is either zero everywhere or zero nowhere, taking into account your points of inflection).

    If something oscillates even finitely, you will have turning points, so using classical analysis methods you find where the roots of the derivative are and then consider that region for oscillation.

    If something oscillates infinitely, it will have infinitely many solutions to the first derivative being 0: it would be a contradiction if it didn't.

    With regard to the discrete case, if you can fit a continuous model (doesn't have to be exact, but enough to capture the nature of the first derivative behaviour of the function) that describes the first derivatives accurately in the context of these discrete values, then you can see whether this oscillating pattern holds analytically.

    Another easy test is finding the limit at the extreme points of the mapping (like infinity): if you do get an actual fixed value for the limit then you know it can't be oscillating (since an oscillating situation would mean that it stopped oscillating somewhere before).
  11. Sep 6, 2012 #10


    Staff Emeritus
    Science Advisor

    Am I misunderstanding something? (-1)^n = 1, -1, 1, -1, ... does NOT converge but does "oscillate". What do you mean by "convergence" here? Perhaps you are using it in a different sense than I am used to.

  12. Sep 6, 2012 #11
    There are many systems with oscillatory convergence, where the limit is actually a limit cycle. The hailstone sequences come to mind. @chiro
  13. Sep 6, 2012 #12
    I'm starting to think that simply computing the dynamic model will take less time than guessing beforehand as to whether or not it will oscillate. After reading the replies and doing some further research on the internet, I've come to the realization that there is a way to do it, but it's more difficult and time-consuming than computing the dynamic model to begin with. Unless someone has some further knowledge on the subject that would disagree with my point, I'm gonna say that there is no TIME-REDUCING way to prove oscillation before simply computing that x_1 = x_3, x_2 = x_4, etc. Everyone agree with this?
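For what it's worth, the direct computation can at least be organized so that you never have to guess the period in advance: Floyd's tortoise-and-hare cycle detection finds a cycle in the orbit using constant memory. A sketch, again assuming the map x_{k+1} = x_k^2 - 1 (exact integer arithmetic; floating-point orbits would need an equality tolerance):

```python
def f(x):
    return x * x - 1

def find_cycle_length(f, x0, max_steps=10_000):
    """Floyd's tortoise-and-hare: return the cycle length of the orbit
    of x0 under f, or None if no cycle is found within max_steps."""
    slow, fast = f(x0), f(f(x0))
    for _ in range(max_steps):
        if slow == fast:
            break
        slow, fast = f(slow), f(f(fast))
    else:
        return None
    # slow == fast: walk once around the cycle to measure its length
    length, probe = 1, f(slow)
    while probe != slow:
        probe = f(probe)
        length += 1
    return length

print(find_cycle_length(f, 0))  # 2: the orbit 0 -> -1 -> 0 -> ...
```

This matches the "compute x_1 = x_3, x_2 = x_4" approach but also handles cycles that only appear after a transient, since the two pointers move at different speeds.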