Hello Forum,
A sound wave's intensity (for a pure frequency) is proportional to the square of the pressure amplitude, i.e. ##I \propto p_0^2##, where ##p_0## is the amplitude of the pressure wave ##p(x,t) = p_0 \sin(\omega t \pm kx)##. This means that the (gauge) pressure oscillates above (positive) and below (negative) the local atmospheric pressure ##p_{atm}##.
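For a plane wave the proportionality constant works out to ##I = p_0^2 / (2 \rho c)##, with ##\rho## the air density and ##c## the speed of sound. A quick numerical check in Python (the air values are assumed room-temperature ones):

```python
# Intensity of a plane sound wave from its pressure amplitude.
# Assumed conditions: air at ~20 °C (rho ≈ 1.21 kg/m^3, c ≈ 343 m/s).
rho = 1.21   # air density, kg/m^3
c = 343.0    # speed of sound, m/s

def intensity(p0):
    """I = p0^2 / (2*rho*c) for a plane wave; p0 in Pa, result in W/m^2."""
    return p0**2 / (2 * rho * c)

# A 1 Pa amplitude (~91 dB SPL) carries roughly 1.2 mW/m^2.
print(intensity(1.0))   # ~1.2e-3 W/m^2
```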
The sound intensity can also be expressed in dB. In that case, the reference pressure is the pressure amplitude ##p_{min}## associated with the faintest, barely audible sound: $$I\,(\text{dB}) = 10 \log_{10} \frac{p_0^2}{p_{min}^2} = 20 \log_{10} \frac{p_0}{p_{min}}$$
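Using the standard reference ##p_{min} = 20\ \mu\text{Pa}##, a quick numerical check (the example amplitudes are made up; strictly speaking SPL is defined with the RMS pressure, which for a sinusoid sits 3 dB below this amplitude-based value, but I'll follow the equation above):

```python
import math

P_REF = 20e-6  # standard reference pressure: 20 micropascals

def spl_db(p0):
    """Level in dB for pressure amplitude p0 (Pa), per the equation above."""
    return 10 * math.log10(p0**2 / P_REF**2)  # same as 20*log10(p0/P_REF)

print(spl_db(20e-6))  # 0 dB by construction: the barely audible reference
print(spl_db(0.02))   # 60 dB: a thousand times the reference pressure
print(spl_db(2.0))    # 100 dB: a hundred thousand times the reference
```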
A microphone connected to sound-processing software converts the impinging sound pressure into an analog voltage signal, which the software displays as a time-varying intensity ##I##. The analog voltage signal is sampled (at some sampling rate) and converted to a digital signal with a certain bit depth (8, 16, 24 bits, etc.; the higher the bit depth, the better, I guess).
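As a rough illustration of why more bits are better: each extra bit buys about 6 dB of dynamic range. A minimal sketch (the tone frequency, level, and sampling rate are arbitrary assumptions):

```python
import numpy as np

fs = 44100           # sampling rate, Hz (assumed; a common audio value)
t = np.arange(0, 0.1, 1/fs)
x = 0.5 * np.sin(2 * np.pi * 1000 * t)   # 1 kHz tone at half of full scale

def quantize(signal, bits):
    """Round the signal onto the grid of a signed integer with `bits` bits."""
    levels = 2 ** (bits - 1)
    return np.round(signal * levels) / levels

for bits in (8, 16, 24):
    err = x - quantize(x, bits)
    snr = 10 * np.log10(np.mean(x**2) / np.mean(err**2))
    print(f"{bits}-bit: quantization SNR ≈ {snr:.1f} dB")  # ~6 dB per bit
```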
When the software reports the sound intensity ##I## in dB, the range goes from a maximum of 0 dB down to negative values, because the level is not referenced to the pressure amplitude ##p_{min}## of a barely audible sound but to digital full scale, i.e., the loudest sound pressure the recording can represent.
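If I understand correctly, these are dBFS (decibels relative to full scale). A sketch of how such a reading might be computed from normalized samples (conventions differ between programs; this version references the RMS of a full-scale sine):

```python
import numpy as np

def dbfs(samples):
    """Level in dB relative to full scale, from the RMS of the samples.
    Samples are assumed normalized to the range [-1, 1]."""
    rms = np.sqrt(np.mean(np.asarray(samples, dtype=float)**2))
    full_scale_rms = 1 / np.sqrt(2)     # RMS of a full-scale sine wave
    return 20 * np.log10(rms / full_scale_rms)

t = np.arange(0, 0.1, 1/44100)
print(dbfs(np.sin(2*np.pi*440*t)))        # ~0 dBFS: full-scale tone
print(dbfs(0.1 * np.sin(2*np.pi*440*t)))  # ~-20 dBFS: 10x smaller amplitude
```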
How can I make sure that the dB values the software reports match the dB values commonly calculated to represent loudness (equation above), so that I get a sense of how loud the signal truly is?
What kind of calibration is needed? I know the microphone and the software each have their own gains, etc...
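My current understanding is that a one-point calibration would do it: record a tone of known level (e.g., a 94 dB SPL calibrator at 1 kHz), note the dBFS reading the software gives for it, and use the difference as a fixed offset while all gains stay untouched. A sketch under those assumptions (the -26 dBFS reading is made up):

```python
# One-point calibration: map the software's dBFS readings to dB SPL.
# Assumptions: a 94 dB SPL, 1 kHz calibrator tone was recorded, and the
# software reported it as -26.0 dBFS (a made-up reading for illustration).
CALIBRATOR_SPL = 94.0      # known acoustic level of the reference tone, dB SPL
measured_dbfs = -26.0      # hypothetical software reading for that tone

offset = CALIBRATOR_SPL - measured_dbfs   # 120 dB in this made-up case

def dbfs_to_spl(reading_dbfs):
    """Convert a dBFS reading to dB SPL; valid only while every gain
    (mic preamp, interface, software) stays exactly as during calibration."""
    return reading_dbfs + offset

print(dbfs_to_spl(-40.0))  # a -40 dBFS signal corresponds to ~80 dB SPL here
```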
Thanks!