Sound wave intensity (dB) and sound processing


Discussion Overview

The discussion revolves around the relationship between sound wave intensity, measured in decibels (dB), and the processing of sound signals by microphones and software. Participants explore the calibration of dB readings, the conversion of sound pressure to voltage signals, and the implications of different reference points in measuring sound intensity.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Experimental/applied

Main Points Raised

  • One participant states that sound intensity is proportional to the square of the wave pressure amplitude and discusses the conversion of sound intensity to dB, referencing the faintest audible sound as a baseline.
  • Another participant suggests that the software likely squares the analog voltage to obtain intensity and mentions the need to adjust dB readings based on the environment, providing an example of adding 110 dB to a reading of -110 dB in a quiet setting.
  • A different participant points out inconsistencies in reference levels across different microphones and software, recommending the use of alternative units if available.
  • One participant emphasizes the importance of maintaining a linear relationship between the pressure signal and the voltage output from the microphone for accurate representation, while noting the non-linear perception of loudness by the human ear.

Areas of Agreement / Disagreement

Participants express differing views on the calibration of dB readings and the consistency of reference levels across devices. There is no consensus on the best approach to ensure accurate dB measurements, indicating ongoing debate and uncertainty.

Contextual Notes

Participants highlight limitations related to the variability of microphone sensitivity and software processing, as well as the potential for non-linearities in sound perception, which may affect the calibration and interpretation of dB values.

fog37
Hello Forum,

The intensity of a sound wave (pure frequency) is proportional to the square of the pressure amplitude, i.e. ##I \propto p_0^2##, where ##p_0## is the amplitude of the pressure wave ##p(x,t) = p_0 \sin(\omega t \pm kx)##. This means that the (gauge) pressure swings above (positive) and below (negative) the local atmospheric pressure ##p_{atm}##.
The sound intensity can also be expressed in dB. In that case, the reference pressure is the pressure amplitude ##p_{min}## associated with the faintest, barely audible sound: $$I(\text{dB}) = 10 \log_{10} \frac{p_0^2}{p_{min}^2}$$
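As a quick sanity check of the formula above, here is a minimal sketch in Python. It assumes the standard reference pressure ##p_{min} = 20\,\mu##Pa (threshold of hearing) and an illustrative conversation-level amplitude of 0.02 Pa; both numeric values are assumptions, not from the thread.

```python
import math

P_REF = 20e-6  # Pa, standard reference pressure (threshold of hearing, assumed)

def spl_db(p0):
    """Sound pressure level in dB for pressure amplitude p0 (Pa).

    10*log10(p0^2 / p_ref^2) is equivalent to 20*log10(p0 / p_ref).
    """
    return 10 * math.log10(p0**2 / P_REF**2)

# Illustrative amplitude, roughly conversation level (assumed value):
print(round(spl_db(0.02)))  # 60
```

Note that a pressure ratio of 1000 gives 60 dB, which matches the familiar "20 dB per factor of 10 in pressure" rule.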
A microphone connected to sound-processing software converts the impinging sound pressure into an analog voltage signal, which the software displays in terms of its time-varying intensity ##I##. The analog voltage signal is sampled (at some sampling rate) and converted to a digital signal using a certain bit depth (8, 16, 24 bits, etc.; the higher the bit depth, the better, I guess).
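The effect of bit depth can be sketched as follows: quantizing a signed signal in [-1, 1] to fewer bits produces a larger worst-case rounding error. The 440 Hz tone and 44.1 kHz rate are illustrative assumptions.

```python
import math

def quantize(x, bits):
    """Round x in [-1, 1] to the nearest level of a signed `bits`-bit integer scale."""
    levels = 2 ** (bits - 1)
    return round(x * (levels - 1)) / (levels - 1)

# Illustrative samples of a 440 Hz tone at 44.1 kHz (assumed values):
samples = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(100)]
for bits in (8, 16):
    err = max(abs(s - quantize(s, bits)) for s in samples)
    print(bits, err)
```

The 16-bit worst-case error comes out roughly 256 times smaller than the 8-bit one, which is why higher bit depth gives a lower quantization noise floor.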
When the software reports the sound intensity ##I## as dB values, the range goes from a maximum of 0 dB down to negative dB values, because the intensity is not calculated relative to the pressure amplitude ##p_{min}## of a barely audible sound; it is relative to the loudest sound pressure in the time interval.
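That relative convention can be sketched like this: each sample's level is computed against the peak of the record, so the loudest sample sits at 0 dB and everything else is negative. The function name and sample values here are hypothetical.

```python
import math

def db_relative_to_peak(samples):
    """dB of each sample relative to the loudest sample in the record.

    The peak maps to 0 dB; quieter samples come out negative; silence maps to -inf.
    """
    peak = max(abs(s) for s in samples)
    return [20 * math.log10(abs(s) / peak) if s else float('-inf')
            for s in samples]

print(db_relative_to_peak([1.0, 0.5, 0.25]))  # ~[0.0, -6.02, -12.04]
```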

How can I make sure that the dB values the software reports are the same dBs commonly calculated to represent loudness (the equation above), so I can get a sense of how truly loud the signal is?
What kind of calibration is needed? I know the microphone and software have gains, etc...

thanks!
 
I presume the software effectively squares the analogue voltage to obtain intensity.
You have to add a fixed number of decibels to your reading to make it run from zero up. For instance, if you take a measurement in a quiet country house at night, with no traffic or aircraft noise etc., suppose your reading is -110 dB. Then, roughly speaking, you want the instrument to indicate zero dB, so you need to add 110 dB to the reading.
To obtain a calibration for the instrument, you can only compare it with a professional one, I think.
Notice also that some instruments use the dBA scale, which includes a weighting filter to simulate the frequency response of the ear.
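The fixed-offset idea above can be sketched in a few lines. The numbers are hypothetical: suppose a professional meter reads 40 dB SPL at the same moment the software reads -70 dB; the difference (110 dB) is then applied to every subsequent reading.

```python
def calibrated_spl(reading_db, offset_db):
    """Convert a relative software reading to an absolute SPL estimate
    by adding a fixed offset determined against a reference meter."""
    return reading_db + offset_db

# Hypothetical calibration point: meter shows 40 dB SPL, software shows -70 dB.
offset = 40 - (-70)  # 110 dB
print(calibrated_spl(-110, offset))  # 0 -> the quiet-room reading maps to ~0 dB
```

This only works if the whole chain (mic gain, software gain) stays fixed between calibration and measurement; changing any gain invalidates the offset.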
 
In general, the job of a microphone is to convert the captured pressure signal ##p(t)## into a voltage signal ##V(t)##.

For fidelity (assuming no distortion and no noise), the shape of the voltage signal ##V(t)## output from the microphone should be exactly the same as the shape of the signal ##p(t)##, except for a scaling factor. That means there must be a linear relationship between the pressure and the voltage.
Sound systems always have an amplifier between the mic and the software to increase the amplitude of the voltage signal ##V(t)## (which may be too small), but linearity must be preserved or the voltage signal will not faithfully represent the pressure signal ##p(t)##. Is that correct? The human ear works differently and nonlinearly: pressure changes don't produce linear changes in perceived loudness... but that is a different matter.
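The linearity requirement described above can be sketched as follows: an ideal mic-plus-amplifier chain multiplies the pressure signal by a constant, so the ratio ##V(t)/p(t)## is the same everywhere and the waveform shape is unchanged. The gain of 50 and the tone parameters are arbitrary illustrative values.

```python
import math

def mic_output(p, gain=50.0):
    """Ideal (linear) microphone/amplifier chain: voltage is a scaled copy
    of the pressure signal. `gain` is an arbitrary illustrative constant."""
    return [gain * x for x in p]

# Illustrative pressure samples of a 440 Hz tone (assumed values):
p = [0.02 * math.sin(2 * math.pi * 440 * n / 44100) for n in range(1, 50)]
v = mic_output(p)

# Shape preserved: the ratio V/p is the same constant at every sample.
ratios = {round(vi / pi, 9) for vi, pi in zip(v, p)}
print(ratios)  # {50.0}
```

Any curvature in that pressure-to-voltage map (clipping, compression) would make the ratio vary with amplitude and distort the recorded waveform.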
 
