
How to write a function by analysing/looking at a waveform?

  1. Dec 22, 2011 #1
    How to write a function by analysing/looking at a signal?

    Hello everyone.

    I have attached a waveform/signal. I want to know how I can represent it mathematically.

    I have been studying functions that are already defined, say, a function like
    f(x) = 4 sin x + 2 cos x, where x is in degrees or radians. I can draw its graph and know what it will look like.
    But I want to do the reverse. Say I have a waveform (given/attached) and I want to represent it mathematically, so that I can apply my calculus concepts and get the solution I want from this observed signal, or simply manipulate it according to my requirements.

    I have seen interpolation, which uses curve tracing or regression analysis. But by interpolating I think we are only acquiring (i.e. approximating) the signal. Let's say I have approximated this signal, and it looks like the waveform I attached to this post.
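A minimal sketch of that approximation step, using a least-squares polynomial fit in NumPy. The attached waveform isn't available here, so the sample points below are a made-up stand-in; in practice xs/ys would be the measured signal:

```python
import numpy as np

# Stand-in for the sampled waveform (the real attachment is not
# available); replace xs/ys with your measured points.
xs = np.linspace(0.0, 2.0, 50)
ys = xs**3 - 2.0 * xs + 1.0

# Fit a cubic polynomial by least-squares regression.
coeffs = np.polyfit(xs, ys, deg=3)

# Evaluate the fitted polynomial; for noiseless cubic data the
# fit reproduces the samples almost exactly.
fit = np.polyval(coeffs, xs)
max_err = np.max(np.abs(fit - ys))
```

The result is an explicit formula (here a cubic in x) that approximates the curve, which is exactly the kind of object calculus can then be applied to.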

    Now, the next step is that I want to write the function, i.e., represent it mathematically. How do I do that? Are there any concepts in mathematics I can refer to?

    Attached Files:

    Last edited: Dec 23, 2011
  3. Dec 22, 2011 #2


    Gold Member

    You can parameterize the curve, or you can just guess based on how it looks. You could do Fourier analysis and represent it as a sum of sines/cosines. There are probably several different combinations of functions that will approximate that shape.

    I don't think there's a straightforward way of going about this.
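A rough sketch of the "guess based on how it looks" approach: assume a functional form and fit its parameters by linear least squares. Here the assumed form is the A·sin(x) + B·cos(x) example from earlier in the thread, and the signal is synthetic since the attachment isn't available:

```python
import numpy as np

# Guess that the signal has the form A*sin(x) + B*cos(x) and
# recover A and B from samples by linear least squares.
x = np.linspace(0.0, 2.0 * np.pi, 100)
signal = 4.0 * np.sin(x) + 2.0 * np.cos(x)  # stand-in for the attached waveform

# Design matrix: one column per assumed basis function.
M = np.column_stack([np.sin(x), np.cos(x)])
(A, B), *_ = np.linalg.lstsq(M, signal, rcond=None)
```

If the guessed form is right, the fitted coefficients recover the generating ones; if it is wrong, a large residual is the hint to try a different combination of functions.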
  4. Dec 22, 2011 #3


    Science Advisor

    You might want to look at integral transforms in general.

    As Pythagorean has mentioned, you could use Fourier analysis to decompose your signal into sines and cosines, which are orthogonal to each other.

    But there are many different kinds of these transforms that you can do. Wavelets provide one family of transforms that might be useful.

    The key thing to figure out is which properties of the function you want to retain. Fourier analysis retains frequency information for a specific band of frequencies, depending on which coefficients you decide to retain.

    Similarly, the Haar wavelet retains resolution information to a certain scale, depending again on which coefficients you decide to retain.
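A minimal sketch of the "retain some coefficients" idea using NumPy's FFT: decompose a synthetic two-tone signal, zero out all but the largest-magnitude coefficients, and reconstruct. The signal and the number of coefficients kept are illustrative choices, not taken from the thread's attachment:

```python
import numpy as np

# A synthetic signal with two frequency components.
n = 256
t = np.arange(n) / n
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)

spectrum = np.fft.fft(signal)

# Keep only the 4 largest-magnitude coefficients (the +/- frequency
# pair for each of the two sines); zero out the rest.
keep = np.argsort(np.abs(spectrum))[-4:]
truncated = np.zeros_like(spectrum)
truncated[keep] = spectrum[keep]

# Reconstruct from the retained coefficients only.
reconstructed = np.fft.ifft(truncated).real
err = np.max(np.abs(reconstructed - signal))
```

For this pure two-tone signal, four coefficients reconstruct it essentially exactly; for a real measured waveform, the number kept trades compactness of the formula against fidelity.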
  5. Dec 23, 2011 #4
    My view::

    Suppose I am Johann Carl Friedrich Gauss and I want to invent the Gaussian function, or maybe a Gaussian-looking function, which looks like this:
    f(x) = a e^(-(x - b)^2 / (2c^2))
    I have studied Gaussian curves in 'Probability and Statistics theory', but nobody explained to me how the curve was built so that f(x) looks the way it does. I am yet to grasp all the concepts of 'Probability and Statistics theory', and as far as I know, 'Probability and Stochastic theory' won't tell me the method behind coming up with such a curve in the first place. So we are all using these pre-defined curves to calculate the parameters in 'Probability and Stochastic theory', like the mean, variance and expected value. Right?

    So, as you advise, I will try using Fourier series and integral transforms to make estimates until I get my curve.
    Thanks for helping. I wanted to share my view on 'Probability and Stochastic Process theory' because I thought curves like the Gaussian, or any other complex curve, could be represented mathematically using a direct method instead of going through indirect methods.
    Last edited: Dec 23, 2011
  6. Dec 23, 2011 #5


    Science Advisor

    It depends on what distribution you are talking about.

    For example, distributions like the Poisson or binomial are derived from probability assumptions. The binomial model assumes each trial is independent, and using the property P(A and B) = P(A)P(B), we can derive the actual mathematical distribution itself. The Poisson is just a special limiting case of the binomial.
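That limiting case can be checked numerically: hold lambda = n*p fixed while n grows, and the binomial pmf approaches the Poisson pmf. The particular values of lambda and k below are arbitrary:

```python
import math

def binomial_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# With n large and p small (n*p held fixed), the binomial pmf
# converges to the Poisson pmf with lam = n*p.
lam, k = 2.0, 3
diffs = []
for n in (10, 100, 10000):
    p = lam / n
    diffs.append(abs(binomial_pmf(k, n, p) - poisson_pmf(k, lam)))
```

The gap shrinks as n grows, which is exactly the sense in which the Poisson is a limiting case of the binomial.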

    For sampling distributions like the chi-square and F-distributions, we can use statistical theorems to derive their PDFs/CDFs. These are built specifically for the purpose of performing statistical tests under specific assumptions, and not necessarily to model particular kinds of processes the way a Poisson distribution does.

    In terms of the normal distribution, you can see it from a few different perspectives. One perspective is a fundamental result known as the central limit theorem, which shows that the asymptotic distribution of a standardized sum of independent variables is normal. This is the basis for a lot of classical statistical methods, especially in hypothesis testing.
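A quick illustration of the central limit theorem with only the standard library: sums of many independent uniform variables behave approximately like a normal. The sample sizes here are arbitrary:

```python
import random
import statistics

random.seed(0)

# Sum many independent Uniform(0, 1) variables; by the central
# limit theorem the distribution of the sums is approximately normal.
n_terms, n_samples = 50, 20000
sums = [sum(random.random() for _ in range(n_terms)) for _ in range(n_samples)]

mean = statistics.fmean(sums)   # near n_terms * 1/2 = 25
stdev = statistics.stdev(sums)  # near sqrt(n_terms / 12) ~ 2.04

# For a normal, about 68% of samples fall within one standard
# deviation of the mean.
within = sum(abs(s - mean) <= stdev for s in sums) / n_samples
```

Nothing about a single uniform variable looks Gaussian, yet the sums reproduce the familiar bell-curve statistics; that is the asymptotic result referred to above.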

    Another perspective is that the distribution has, by some measure of luck, managed to represent many physical phenomena. I recently read a book covering the life of Gauss, which said that when he walked to university he actually counted the number of steps he took. In doing so he found that the number of steps was never constant, and this sparked an interest in probability. The book also stated that when he was analyzing some processes, the properties of those processes had the form of a normal distribution. I would deduce that these experiences led him to formulate the distribution, but how he actually came up with the PDF itself is something I can only speculate on: maybe you could read his publications to get a better idea.

    So, to conclude, you need to understand the use and context of these distributions. They are used for different purposes, as I have outlined above.
  7. Dec 23, 2011 #6
    That helped me look at 'Probability and Stochastic Processes' from a different perspective. I think I need to search for the core reason why we actually go for probability and stochastic processes rather than differentiation. I know the trial-and-error concept and the coin-tossing concept, but I think something is missing. Anyway: so here, I do an experiment, I get a distribution curve based on my observations, and I manage to draw a smooth curve.

    But now the next step, the basic question of this thread, is how to represent this curve mathematically. As the other helpers posted, is going through indirect means (Fourier series and integral transforms) the only way to write the function mathematically? Is there no straightforward way? And if we managed to do it in a straightforward way, would that mean we no longer have to use 'Probability and Stochastic Process theory'? True?
  8. Dec 23, 2011 #7


    Science Advisor

    The main thing to consider is that your distribution does not really give you good insight into your stochastic process: what your data gives you is the probabilistic properties of your process, and in most cases nothing detailed about the process itself.

    This is why you have to put these things into some context regarding the process itself. It is why, for example, we model stochastic systems that involve rates with a Poisson process: in the context of the process, the model is a really good fit, since it is intrinsically built from the ground up to model events.

    In terms of doing analysis on time-series, that is a different kettle of fish (although it is still based on foundational probability and statistics).

    One piece of advice, if you want to do any kind of serious work on your problem: first get a level of understanding of your process. You might need a bit more probability and statistics training for it to become intuitive, but chances are you will need to understand the process in a mathematical context before you can build a probabilistic model of it. Having a decent underlying model that you can specify concisely, both quantitatively and qualitatively, will help you put the pieces together far more than simply fitting a distribution to a curve. Fitting a distribution to a curve tells you nothing about the underlying mechanics of the process itself and will not do you any justice in seeing what is fundamentally going on.

    This is essentially what applied mathematics is all about. Don't worry if your assumptions are way off the mark the first few times out: that is more common than you think. Also be aware that creating models that are simple and compact, yet represent a large number of phenomena, is very hard to do, and people who do this work (like in the financial industry) are usually compensated well, especially if it has to do with something that generates a lot of economic activity.

    Since other posters and I have already talked about the mathematical representation, I will not repeat that information. It is important, however, to acknowledge the mechanics of any process, and hopefully this post has given you some further insight or food for thought in this regard.