
How can I stretch audio waves without distortion?

  1. Aug 12, 2015 #1
    Hello to anyone who reads this, and good day.

    I am an intermediate C++ programmer and I am looking to understand how to stretch or lengthen audio waves as long as I want without distortion.

    What I mean by "stretch" is to prolong the duration of every instance of sound.

    As of now I am assuming that would mean taking each oscillation, shifting all the samples that follow it over, and pasting a copy of that oscillation's samples next to the original. Then I would repeat this for every oscillation in the audio file.
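    A minimal C++ sketch of that copy-and-paste idea, assuming a clean single tone (the function and variable names are just illustrative, not from any library):

```cpp
#include <cstddef>
#include <vector>

// Sketch of the copy-and-paste idea described above, for a clean single
// tone. Each span between successive upward zero crossings is treated as
// one oscillation and written out twice, roughly doubling the duration
// without changing the pitch.
std::vector<double> stretchByCycleDoubling(const std::vector<double>& in) {
    std::vector<double> out;
    std::size_t cycleStart = 0;
    for (std::size_t i = 1; i < in.size(); ++i) {
        // an upward zero crossing marks the end of one cycle
        if (in[i - 1] < 0.0 && in[i] >= 0.0) {
            for (int rep = 0; rep < 2; ++rep)  // paste the cycle twice
                out.insert(out.end(), in.begin() + cycleStart, in.begin() + i);
            cycleStart = i;
        }
    }
    // whatever follows the last crossing is appended once, unstretched
    out.insert(out.end(), in.begin() + cycleStart, in.end());
    return out;
}
```

    For a pure tone this roughly doubles the length. On a real mix, the cycles of different frequencies don't share zero crossings, which is one reason this simple approach turns choppy.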

    There are programs that can do what I am describing, but the results sound a bit choppy if the audio is stretched too far (choppy as in comparable to shaking your voice box while talking). One such program that I've tried is Adobe Audition.

    I would think that it is possible to lengthen audio files to any length without any significant distortion or disturbance.

    So does anyone know of an example of how to do this?

    If not, then can someone explain why audio waves cannot be lengthened without choppiness or other distortion?
  3. Aug 12, 2015 #2


    Staff: Mentor

  4. Aug 13, 2015 #3

    I actually tried this program a couple of years ago, but the audio seemed a little choppy and it also had reverberation. I just didn't and still don't understand how to manipulate the settings to deal with these problems.

    If you could figure it out that would be great. I think, however, that there is no way to fix these issues in this program that you've linked to.
  5. Aug 13, 2015 #4
    I use WaveLab with Waves plugins for something like this. Do you want to stretch the wave without changing the pitch?
  6. Aug 14, 2015 #5
    Yes, I want to stretch the wave without changing the pitch, but that is not an issue for me. I've been using the time-stretching feature in my sister's Adobe Audition. It seems to work really well, but there is a bit of a classic robot-voice effect that seems to occur. Maybe I should investigate more thoroughly, because it just now occurred to me that the robot-voice effect may mostly be from the singer's voice and not from the time stretching. I'll look into it later today when my sister gets home and can let me use her computer.

    But for now, I am still curious how to lengthen an audio wave without distortion or any added side effects. Can this even be done?

  7. Aug 14, 2015 #6



    Staff: Mentor

    Even if it is, I guess our brain would still interpret it as distorted because stretched sound is different from "normal" sound sources.

    Mathematically, you can apply a Fourier transformation to the signal (or, more realistically, to parts of it separately), then modify the transformed spectrum, and transform back.
  8. Aug 14, 2015 #7

    I've read about this before, but I wasn't sure whether it could be used to lengthen waves without distortion. Thanks though, I think I'll look into it more.
  9. Aug 14, 2015 #8
    One thing that could be part of the problem is that .wav files are 'samples'.
    When played at the intended speed, the sampling rate is higher than a human listener can detect, so the samples merge into what seems like a continuum of sound.
    When played more slowly, you are in effect reducing the sample rate.
    It's possible that, if sufficiently reduced, the sampling might start to become detectable; that would be the 'choppiness'.
    Think of it like video frames: normally you don't see individual frames, but if playback is slowed down enough, you can see them.
  10. Aug 14, 2015 #9

    I was under the impression that individual samples don't make any sound at all, but that it is a series of samples, arranged to form a wave cycle, that is interpreted as sound. Do you know what you've stated to be true?
  11. Aug 14, 2015 #10
    Yes, each sample is not a 'sound'; it's a number (or a set of numbers), in other words, data.
    When the data is fed to an appropriate audio device, the digital data gets converted to an analog signal (sound).
    When played at the intended rate we don't hear the transition between each discrete step, but if slowed enough it could become detectable.
  12. Aug 14, 2015 #11


    Gold Member

    Two interpretations; the descriptions given are not the best. (Is one working in the time domain, and the other in the spatial, or frequency, domain?)
    A. Lengthen the time duration of each frequency.
    In this case, for a single tone, you would check for a zero crossing, copy up to the next zero crossing, and paste. And repeat. A 1 Hz signal still sounds like a 1 Hz signal, only longer. The pitch stays the same.
    B. Extend the frequency with respect to time. In this case, a 1 Hz signal becomes a different frequency. One would have to interpolate a value between two samples and insert it. Repeat with the next sample, and so on.
    The pitch changes.

    For a single tone, A involves simpler computation than B.
    For multiple frequencies mixed together, the computation for B remains the same as for a single tone, but A becomes much more difficult, with the multiple frequencies each having a different zero crossing in time.

    It seems like you want A.

    As mfb said, Fourier could do it, but one can expect discontinuities depending upon the complexity of analysis desired.
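    Interpretation B, interpolating new values between existing samples, can be sketched in C++ like this (names are illustrative only):

```cpp
#include <cstddef>
#include <vector>

// Sketch of interpretation B above: lengthen the signal by a given
// factor by inserting linearly interpolated values between existing
// samples. Played back at the original rate, the result is longer and
// lower in pitch.
std::vector<double> stretchByInterpolation(const std::vector<double>& in,
                                           double factor) {
    if (in.size() < 2) return in;
    const std::size_t outLen = std::size_t((in.size() - 1) * factor) + 1;
    std::vector<double> out(outLen);
    for (std::size_t i = 0; i < outLen; ++i) {
        const double pos = i / factor;        // fractional input position
        const std::size_t j = std::size_t(pos);
        const double frac = pos - double(j);
        const double next = (j + 1 < in.size()) ? in[j + 1] : in[j];
        out[i] = in[j] * (1.0 - frac) + next * frac;  // linear blend
    }
    return out;
}
```

    Note that, as described, the computation is the same whether the input is a single tone or a mix; that is why B is the easier of the two for real recordings.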
  13. Aug 14, 2015 #12

    Oh yeah, after thinking about what you posted I realized you are right. I'll keep that in mind, and investigate when I have access to Adobe Audition again.

  14. Aug 14, 2015 #13

    Yeah, I think it's "A" that I want. So it seems to me that it is possible to lengthen the audio output of each tone without any real distortion. So I plan to copy each wave cycle, interpolate, paste the copy between the original cycle and the following cycle, and repeat.

  15. Aug 14, 2015 #14
    If you want to just slow down audio, which will lower the pitch of all of it, that's pretty easy. One simple way is to take an FFT of the original recording of, say, 1 second (48000 samples) and then put all the Fourier information into a frame that's, say, 72000 samples long (leaving zeros in the bins for high frequencies you don't have data for). This will result in smooth, stretched audio for that 1 second (now 1.5 seconds). But this is something to do for an entire recording at once, or else you can get clicks at frame edges.

    Stretching sounds without lowering the pitch is a somewhat ill-defined problem. To see it, suppose we have two guitar strings very nearly in perfect tune, but not quite, so they beat (cancel each other's wave forms) once a second. You record a frame of half a second where they are reinforcing each other, and want to stretch it out. But by continuing those two waves, you must get a quieter, different sound in the next frame. Is that what you wanted, or did you want something more like the original sound? When you continue all the frequency wave forms in a signal, they will do strange things in the added frames that they weren't doing in the recorded frame, depending on their different phases. You just get unpredictable results.

    The clicks around frame edges (which sound like buzz at high frame rates) come from manipulating FFTs when things aren't continuous at the edges of frames. If you stretch a sine wave to 1.5 times its length, it can end the frame at its peak. If the next frame starts with that same frequency at zero, you've got a drastic drop. You must devise some system of adjusting phases to smooth it out, and those systems give the "wobble" sound you get with commercial sound stretchers.
    Last edited: Aug 14, 2015
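    The zero-padding recipe above can be sketched with a naive O(n²) DFT so the example is self-contained; a real implementation would use an FFT library, and all names here are illustrative:

```cpp
#include <complex>
#include <cstddef>
#include <vector>

// Sketch of the zero-padding recipe above. Bins of an n-sample frame are
// copied into an m-sample spectrum (m = n * factor), leaving zeros in the
// middle for the high frequencies we have no data for; the inverse
// transform then yields a longer, pitch-lowered frame.
using cd = std::complex<double>;
static const double kTwoPi = 6.283185307179586;

// Naive discrete Fourier transform, forward or inverse.
std::vector<cd> dft(const std::vector<cd>& x, bool inverse) {
    const std::size_t n = x.size();
    const double sign = inverse ? 1.0 : -1.0;
    std::vector<cd> out(n);
    for (std::size_t k = 0; k < n; ++k) {
        cd sum(0.0, 0.0);
        for (std::size_t t = 0; t < n; ++t)
            sum += x[t] * std::polar(1.0, sign * kTwoPi * double(k) * double(t) / double(n));
        out[k] = inverse ? sum / double(n) : sum;
    }
    return out;
}

// Stretch one frame by `factor` (e.g. 1.5): longer in time, lower in pitch.
std::vector<double> stretchFrame(const std::vector<double>& frame, double factor) {
    const std::size_t n = frame.size();
    const std::size_t m = std::size_t(double(n) * factor);
    std::vector<cd> spec = dft(std::vector<cd>(frame.begin(), frame.end()), false);
    std::vector<cd> longSpec(m, cd(0.0, 0.0));
    // positive frequencies go to the front, negative ones to the back;
    // multiplying by `factor` rescales amplitude for the longer inverse
    for (std::size_t k = 0; k < n / 2; ++k) {
        longSpec[k] = spec[k] * factor;
        longSpec[m - 1 - k] = spec[n - 1 - k] * factor;
    }
    std::vector<cd> longTime = dft(longSpec, true);
    std::vector<double> out(m);
    for (std::size_t i = 0; i < m; ++i)
        out[i] = longTime[i].real();
    return out;
}
```

    Stretching a 100-sample frame containing 10 sine cycles by 1.5 yields a 150-sample frame containing the same 10 cycles, i.e. the same waveform at two-thirds the frequency. Doing this frame by frame instead of over the whole recording produces the edge clicks described above.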
  16. Aug 15, 2015 #15



    Staff: Mentor

    A very good argument.
    I guess what we would expect from a stretched signal is two signals closer together in frequency, so the beat period goes up. That is possible in this special case, but not in the general case. Every stretching algorithm has to be some compromise between different goals.
  18. Aug 16, 2015 #17
    Yeah, definitely. It's one of those weird ones: our minds have an instant intuitive idea of what we want when we talk about stretching sounds, but there is no unified algorithm. For instance, with a string quartet, we would like them to play the same piece at a slower tempo, stretching out the notes to match the new time. For a speaker or drummer, we want more space between the words/beats, without necessarily slowing down the words or individual beats (which would sound weird). It's one of those things we just "know", but when it comes to expressing it, we find there are numerous possible interpretations in any case.
  19. Aug 17, 2015 #18

    To answer your question, I wanted each frame to be as close to the original copied as possible.

    Ok, the frame edges are where the problem usually occurs in commercial audio apps. I will take that into account.

    Thanks for the advice.
  20. Aug 17, 2015 #19
    What I was meaning to say is that the frame edges are where the problem is in anything you or I come up with. Commercial apps find some solution, but all the solutions have a problem, because everything we come up with changes the song in undesirable ways. Any method of stretching out a drum beat, for instance, results in a sound different from a drum; what we want is larger spaces between the drumbeats. But we don't want larger spaces between violin notes; we want them stretched out. With piano, we want the attack (beginning of the note) to stay the same when stretched, but the decay (end of the note) to be longer, as if the pianist were holding down the sustain pedal. The point is, it's doomed: there is no simple way to stretch sounds so they sound good.