## Visualising Sound

If you visualise in terms of what it actually is (colliding molecules), what would the wavelength, frequency and amplitude be?

I think for amplitude the molecules are hitting each other harder, but wouldn't that mean they're moving faster as well?
 If you were to vary the amplitude of a sound wave in a gas and measure the molecules' speeds as a function of amplitude, you probably wouldn't notice an appreciable correlation. Assuming we're at room temperature and that air is basically an ideal gas, the temperature completely determines the velocity distribution of the molecules, so a sound wave wouldn't cause any observable change in the molecules' velocities when they collide with one another. The sound wave is a pressure fluctuation, so there are moments of high pressure (crowded molecules, lots of collisions) followed by moments of low pressure (sparse molecules, few collisions). The time it takes to oscillate between these two is proportional to the wavelength and inversely proportional to the frequency (or pitch). What you'd really observe for a higher amplitude (or volume) is periods with a lot more collisions followed by periods with a lot fewer collisions, whereas a small amplitude would have smaller swings between high and low pressure--not harder/softer collisions between molecules.
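The point above can be made concrete with a rough back-of-the-envelope comparison of thermal molecular speeds against the "particle velocity" of a sound wave. All the numbers below (room temperature, a loud ~1 Pa pressure amplitude, roughly a 94 dB sound) are illustrative assumptions, not figures from the thread:

```python
import math

# Thermal (rms) speed of an "average" air molecule at room temperature,
# from the ideal-gas result v_rms = sqrt(3 k T / m).
k = 1.380649e-23          # Boltzmann constant, J/K
T = 300.0                 # room temperature, K
m = 28.97e-3 / 6.022e23   # mean mass of an air molecule, kg (illustrative)
v_rms = math.sqrt(3 * k * T / m)

# Acoustic particle velocity for a loud sound wave, u = p / (rho * c),
# using a 1 Pa pressure amplitude (roughly 94 dB SPL).
p = 1.0      # pressure amplitude, Pa (assumed)
rho = 1.2    # air density, kg/m^3
c = 343.0    # speed of sound, m/s
u = p / (rho * c)

print(f"thermal rms speed: {v_rms:.0f} m/s")       # hundreds of m/s
print(f"acoustic particle velocity: {u:.4f} m/s")  # only a few mm/s
```

Even a loud sound nudges the gas at millimetres per second on top of thermal motion of hundreds of metres per second, which is why you wouldn't see the molecular speed distribution change with amplitude.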
 See “Acoustics and Vibration Animations” by Dan Russell, Ph.D., Professor of Acoustics & Director of Distance Education, Graduate Program in Acoustics, The Pennsylvania State University: http://www.acs.psu.edu/drussell/demos.html specifically, see the longitudinal (sound) wave visualization here: http://www.acs.psu.edu/drussell/Demo...avemotion.html


@Jolb: So if I'm getting you, then the following are correct?
1) Louder sound produces a denser wave, which is essentially more particles hitting each other in the compressed region
2) Higher frequency sound means more compressed regions (or waves) pass a point per unit time, but the distance between successive compressed regions is shorter
3) Low frequency means fewer compressed regions pass a point per unit time, but the distance between successive waves is longer

So ultimately a low frequency sound is essentially a lesser number of waves hitting our ear in some unit time? Can we have a single pulse of sound? (so just one compressed region?) How would you then tell the difference between a low and high frequency sound? Also, are regions of compression the same size as regions of rarefaction?

---

@Bobbywhy: That is a great site!

---

And just another couple of questions on sound if no one minds...
1) How does sticking a piece of blu-tack on a tuning fork change its frequency?
2) Why is the sound of a guitar amplified when the headstock is touching the wall of a room (and certain other objects as well)?

 So ultimately a low frequency sound is essentially a lesser number of waves hitting our ear in some unit time? Can we have a single pulse of sound? (so just one compressed region?) How would you then tell the difference between a low and high frequency sound? Also, are regions of compression the same size as regions of rarefaction?
Yes.

A single cycle of a sound, yes, but it's going to be brief, especially if it's of significant frequency.
Say at 1000 Hz (1 kHz), that's 1000 cycles/second, so one cycle is going to last a 1000th of a second. You are probably not going to hear it, and could only measure it with a good oscilloscope with triggered storage so you can review the pulse.

You couldn't, for the above reason, again unless it's on an oscilloscope.

For a single frequency, yes, the compression and rarefaction regions should be the same size;
look at a sinewave of a single tone on an oscilloscope.

 And just another couple of questions on sound if no one minds... 1) How does sticking a piece of blu-tack on a tuning fork change its frequency 2) Why is the sound of a guitar amplified when the headstock is touching the wall of a room (certain other objects as well)
1) Because you are changing its physical size (its properties) and therefore changing its resonant frequency.

2) Because the sound is transferred into the wall or other object, which is now also resonating along with the strings etc. of the guitar, so there's a larger area radiating the sound.

cheers
Dave

Actually, davenn, it is quite possible to hear 1000Hz. As a general convention, musicians typically tune instruments so that the A *above middle C (a major sixth up from middle C) has a frequency of 440Hz. That means the *A one octave above that is approximately 880Hz and the A one octave higher is ~1760Hz. Normal pianos have one or two more octaves, and the ear is capable of hearing higher frequencies than the highest note on a piano. So 1kHz is snugly inside the audible range.

Just to answer autodidude's questions one by one:

 Quote by autodidude @Jolb: So if I'm getting you, then the following are correct? 1) Louder sound produces a denser wave which is essentially more more particles hitting each other in the compressed region
Correct.
 2) Higher frequency sound means more compressed regions (or waves) pass a point per unit time but the distance between successive compressed regions are shorter 3) Low frequency means less compressed regions pass a point per unit time but the distance between successive waves are longer
Both of those are correct, but I wouldn't use a "but": if the sound wave moves at a constant velocity (the speed of sound), then the distance between compressed and rarefied regions is inversely proportional to the time it takes one point to oscillate between rarefied and compressed.

The frequency of the sound wave f is defined as the number of compressions/rarefactions that pass a point in one second. The time between the point seeing one compression/rarefaction and the next compression/rarefaction is called the period τ, and you can convince yourself that τ=1/f. The wavelength of the wave λ is defined as the distance between compressed regions. If sound waves of different frequencies all travel through space at the same velocity c, then c=λf, and we can see the inverse proportionality.

As a concrete example to illustrate this whole thing, let's say we have a 1000Hz sound wave passing over a microphone. That means the microphone sees 1000 compressions per second. So the time between compressions is one one-thousandth of a second; τ = 1/(1000 per second) = 10⁻³ seconds.

Now, if the sound wave is travelling at 1000 feet per second, then in order for the next compression to hit the microphone exactly 10⁻³ seconds later than the first, it must be following exactly (10⁻³ seconds)(1000 feet per second) = 1 foot behind the first compression.

All the numbers I gave are actually realistic values for audible sound waves in air near sea level.
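The worked example above (τ = 1/f, λ = cτ, i.e. c = λf) can be checked in a couple of lines; the numbers are the ones from the example:

```python
# Relating frequency, period, and wavelength for the example above.
f = 1000.0            # frequency, Hz: compressions per second at the microphone
c = 1000.0            # speed of sound used in the example, feet per second
tau = 1.0 / f         # period: time between successive compressions, seconds
wavelength = c * tau  # distance between successive compressions (c = lambda * f)

print(tau)            # 0.001 s between compressions
print(wavelength)     # 1.0 foot between compressions
```

Doubling the frequency here halves both the period and the wavelength, which is the inverse proportionality described above.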
 So ultimately a low frequency sound is essentially a lesser number of waves hitting our ear in some unit time?
Correct.
 Can we have a single pulse of sound? (so just one compressed region?)
It is possible to have a single pulse of sound, of course with some nonzero duration. An example is a drum beat. Creating a pulse of sound requires combining a bunch of different frequencies--a drum beat is not a single pure tone; it is composed of many frequencies. You can see this mathematically using Fourier analysis.
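A quick numerical sketch of that Fourier-analysis point: a sampled pure tone concentrates its energy in essentially one frequency bin, while a single brief pulse spreads energy across many. The sample rate, tone frequency, and pulse width below are arbitrary illustrative choices:

```python
import numpy as np

# Sample a pure tone and a short Gaussian pressure pulse, then compare
# how many frequency bins carry significant energy in each.
fs = 8000                       # sample rate, Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)   # 0.1 s of signal

tone = np.sin(2 * np.pi * 440 * t)            # a single pure frequency
pulse = np.exp(-((t - 0.05) / 0.002) ** 2)    # one brief "compression"

def significant_bins(x):
    """Count frequency bins above 5% of the spectrum's peak magnitude."""
    spec = np.abs(np.fft.rfft(x))
    return int(np.sum(spec > 0.05 * spec.max()))

print(significant_bins(tone))   # few bins: energy sits at one frequency
print(significant_bins(pulse))  # many bins: a pulse is built from many frequencies
```

The narrower you make the pulse in time, the wider its spread in frequency, which is the time-frequency trade-off at the heart of Fourier analysis.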
 How would you then tell the difference between a low and high frequency sound?
Well, you could measure its frequency directly (listen to its pitch), measure its wavelength, etc. To do this with a microphone and an oscilloscope is quite straightforward. The ear detects pitch by having nerves attached to little hairs of different lengths. Each hair has a resonant frequency that depends on its length, and when the hair resonates with an incoming sound, the nerve feels it vibrating.

On the other hand, a microphone is just a substance that develops a voltage across it when pressure is applied to it. So the oscilloscope is basically looking at "how much is hitting the microphone," whereas the ear breaks the sound down into its constituent frequencies (Fourier decomposition) before sending the signal to the brain.
 Also, are regions of compression the same size as regions of rarefaction?
Roughly, yes. An ideal pure tone would have the same size regions of compression and rarefaction. However, a drum beat would probably look different, with the regions all differing a lot in their relative sizes.
 And just another couple of questions on sound if no one minds... 1) How does sticking a piece of blu-tack on a tuning fork change its frequency
Well, the tuning fork makes noise by the vibration of its arms. If we increase the mass of the arms, they have more inertia and would vibrate at a little lower frequency. This is analogous to the fact that the vibrational frequency of a mass held by a spring decreases when the mass gets heavier.
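The mass-on-a-spring analogy can be put in numbers: for a fixed stiffness k, the natural frequency is f = (1/2π)·√(k/m), so adding mass lowers the pitch. The stiffness and masses below are made-up illustrative values, not measurements of a real tuning fork:

```python
import math

# Mass-on-a-spring analogy for the blu-tacked tuning fork: with the
# stiffness k fixed, adding mass lowers f = (1/2pi) * sqrt(k/m).
def natural_frequency(k, m):
    return math.sqrt(k / m) / (2 * math.pi)

k = 50000.0         # effective stiffness, N/m (illustrative)
m_fork = 0.005      # effective vibrating mass of a fork arm, kg (illustrative)
m_blutack = 0.001   # added blob of blu-tack, kg (illustrative)

f_before = natural_frequency(k, m_fork)
f_after = natural_frequency(k, m_fork + m_blutack)
print(f"{f_before:.0f} Hz -> {f_after:.0f} Hz")  # pitch drops with added mass
```

Since f scales as 1/√m, even a small blob of blu-tack gives an audible drop in pitch.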
 2) Why is the sound of a guitar amplified when the headstock is touching the wall of a room (certain other objects as well)
This is due to resonance of the wall. If the wall happens to have a vibrational mode of the same frequency that the guitar is playing, then the wall also begins to vibrate at that frequency. This is analogous to why a guitar actually has a body rather than just being a fretboard and strings, and why acoustic guitars need a larger body than guitars that rely on electrical amplification.

*Edit: I was one octave off on the pitch convention: A440 is the A above middle C.

Recognitions:
Gold Member
 Quote by Jolb Actually, davenn, it is quite possible to hear 1000Hz. As a general convention, musicians typically tune instruments so that the A below middle C (a minor third down from middle C) has a frequency of 440Hz. That means the A above middle C is approximately 880Hz and the A one octave higher is 1760Hz. Normal pianos have two or three more octaves, and the ear is capable of hearing higher frequencies than the highest note on a piano. So 1kHz is snugly inside the audible range.

Of course it is!!!! It's right in the middle of the voice audio range.

YOU didn't read the context of what I said .... re-read it!

You are not likely to hear a single cycle of a 1000Hz signal; it's ONLY a 1000th of a second long, and if by chance you did hear a blip you are never going to recognise it as a single cycle of 1000Hz!!

Dave

 Quote by davenn Of course it is!!!! It's right in the middle of the voice audio range. YOU didn't read the context of what I said .... re-read it! You are not likely to hear a single cycle of a 1000Hz signal; it's ONLY a 1000th of a second long, and if by chance you did hear a blip you are never going to recognise it as a single cycle of 1000Hz!! And everything else, you just repeated what I had already said ;) Dave
Easy there Pard......

 Quote by davenn Of course it is!!!! It's right in the middle of the voice audio range. YOU didn't read the context of what I said .... re-read it! You are not likely to hear a single cycle of a 1000Hz signal; it's ONLY a 1000th of a second long, and if by chance you did hear a blip you are never going to recognise it as a single cycle of 1000Hz!!
Hmmm... I guess I see now what you were trying to say: that you won't hear a "single cycle" of some sound. The reason that threw me off is that it's a very odd notion: you never hear a single cycle of a sound. You might feel a low-frequency pressure wave through your tactile nerves (like feeling a subwoofer through your seat), but no audible frequency is slow enough to make out the individual cycles. First of all, the "little hairs in the inner ear" model I discussed in my previous post clearly precludes this: your nerves only detect the presence of a certain frequency, not individual peaks and troughs.

Second of all, there are plenty of pressure waves (and other stimuli) all around us that occur at low frequencies, say around 40 or 50 Hz, that are well-known to be inaudible (or otherwise imperceptible). For example, observations of the human voice show that the sound waves are modulated at 40Hz, but the 40Hz modulation tone is not perceptible to the human ear (otherwise every singer would have a 40Hz drone constantly accompanying any other pitch they sing). In addition, psychological experiments show that stimuli (not just sounds--this includes visible things and probably all the senses) that occur for less than about 1/40 of a second are not perceptible. For example, fluorescent lights actually blink at 120Hz, but humans use them all the time without noticing the blinking. So even if your ears could detect a peak and a trough coming from a 40Hz sound wave, the peaks and troughs would not last long enough to make a perceptible sensation.