Does it matter whether sin or cos harmonic components are used in sound synthesis?

In summary, it is not important whether cos or sin harmonic components are used in sound synthesis; however, the phase of the harmonic components does matter. If the phases of the harmonic components are static and the harmonic waveform does not pass through a non-linear processing element, then the phases or phase differences are nearly always undetectable. If the waveform is passed through a non-linearity, then a variant of the waveform with more prominent peaks will sound different from one that is less peaky. Additionally, in musical synthesis the r_n and \phi_n are (slowly changing) functions of time.
  • #1
tiffney
psychoacoustically, is it important whether cos or sin harmonic components are used in sound synthesis?
 
  • #2
I can't imagine any way that it would make a difference. The two only differ by phase.
 
  • #3
tiffney said:
psychoacoustically, is it important whether cos or sin harmonic components are used in sound synthesis?

if the phases of the harmonic components are static and if the harmonic waveform does not pass through some non-linear processing element (which might be difficult to avoid, considering that practically any loudspeaker system is a little non-linear), then the phase or phase differences are nearly always not detectable. that is, in general:

[tex] x(t) = \sum{r_{n} \cos \left(n \omega_0 t + \phi_{n} \right)} [/tex]

will sound about the same, no matter what [itex] \phi_n [/itex] is.

BUT the waveform will not look the same for all possible values of [itex] \phi_n [/itex], and if [itex] x(t) [/itex] is passed through some non-linearity (that might flatten out the peaks a little), a variant of [itex] x(t) [/itex] that has more prominent peaks will sound different (after the non-linearity) than a variant of [itex] x(t) [/itex] that is less peaky.
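here's a quick python/numpy sketch of that first point: same harmonic magnitudes [itex] r_n [/itex], two different sets of phases, identical magnitude spectra but visibly different waveforms. (the magnitudes, fundamental, and sample rate below are arbitrary choices, just for illustration.)

[code]
import numpy as np

fs = 44100                       # sample rate (arbitrary)
t = np.arange(fs) / fs           # one second of samples
f0 = 220.0                       # fundamental frequency (arbitrary)
N = 10                           # number of harmonics
r = 1.0 / np.arange(1, N + 1)    # harmonic magnitudes r_n = 1/n (arbitrary)

def additive(phases):
    # x(t) = sum_n r_n * cos(n*w0*t + phi_n)
    x = np.zeros_like(t)
    for n in range(1, N + 1):
        x += r[n - 1] * np.cos(2 * np.pi * n * f0 * t + phases[n - 1])
    return x

x_cos = additive(np.zeros(N))                           # all phi_n = 0 (pure cosines)
x_rnd = additive(np.random.uniform(-np.pi, np.pi, N))   # random phases

# the waveforms differ, but the magnitude spectra are the same:
print(np.max(np.abs(x_cos - x_rnd)))                    # large
print(np.max(np.abs(np.abs(np.fft.rfft(x_cos))
                    - np.abs(np.fft.rfft(x_rnd)))))     # essentially zero (roundoff)
[/code]

played back through a clean, linear chain, x_cos and x_rnd should sound essentially the same.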

also, if you are doing musical synthesis, then [itex] r_n [/itex] and [itex] \phi_n [/itex] are (slowly changing) functions of time. that is

[tex] x(t) = \sum_{n=1}^{N}{r_{n}(t) \cos \left(n \omega_0 t + \phi_{n}(t) \right)} [/tex]

then your phase of each harmonic changes in time, and the derivative w.r.t. time of [itex] \phi_{n}(t) [/itex] is a frequency offset or "detuning" of the [itex] n^{th} [/itex] harmonic. that is perceptually salient. how each harmonic is slightly detuned from the exact harmonic frequency (that is an integer multiplying the fundamental frequency) is something that affects the sound of a tone. if they all are perfectly harmonic, the tone sounds a little "dead" compared to a tone with detuned "harmonics". in a real piano, the upper harmonics get sharper and sharper as you get to the 20th harmonic and higher.
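a sketch of that time-varying case, with a made-up stretching of the upper partials (the 0.1·n² Hz detuning below is just an illustrative number, not a piano measurement):

[code]
import numpy as np

fs = 44100
t = np.arange(2 * fs) / fs        # two seconds
f0 = 110.0                        # fundamental (arbitrary)
N = 12

x_exact = np.zeros_like(t)        # perfectly harmonic partials
x_stretched = np.zeros_like(t)    # partials sharpened more and more with n

for n in range(1, N + 1):
    r_n = 1.0 / n                 # arbitrary magnitudes
    detune = 0.1 * n * n          # Hz; phi_n(t) = 2*pi*detune*t, so d(phi_n)/dt is a frequency offset
    x_exact     += r_n * np.cos(2 * np.pi * n * f0 * t)
    x_stretched += r_n * np.cos(2 * np.pi * n * f0 * t + 2 * np.pi * detune * t)
[/code]

x_exact tends to sound static or "dead"; the slow beating among the stretched partials in x_stretched is the kind of liveliness described above.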

so absolute phase might not matter, but the change of phase does matter. i would not throw phase information away.

try Googling a paper i wrote: "Wavetable Synthesis 101". it lives at the harmony-central.com website somewhere, to get my spin on this issue.

r b-j
 
  • #4
Except in rather odd cases, the ear is not sensitive to phase.

- Warren
 
  • #5
chroot said:
Except in rather odd cases, the ear is not sensitive to phase.

it's more an issue of what the brain decodes. the ear is really "only" a sophisticated transducer. of course a phase change that is equivalent to a pure delay is inaudible (unless you notice the delay, but that is not at issue here).

also, some waveshapes are audibly indistinguishable from others where the only difference is in the phases of the different harmonics. such as

[tex] x_1 (t) = \sum_{n=0}^N { \frac{(-1)^n}{2n+1} \cos \left( (2n+1) \omega_0 t \right)} [/tex]

is usually audibly indistinguishable from

[tex] x_2 (t) = \sum_{n=0}^N { \frac{1}{2n+1} \cos \left( (2n+1) \omega_0 t \right)} [/tex]

yet the waveforms clearly look different. x1(t) approaches a square wave as N -> infinity but x2(t) is much more spiky.

however, what would happen if x1(t) and x2(t) are passed through an identical gentle non-linearity, such as

[tex] y_n (t) = \frac{1}{\alpha} \arctan \left( \alpha x_n (t) \right) [/tex] ?

as alpha gets larger, this nonlinearity will start to kick in and you will hear a clear difference between what happens to x1(t) and x2(t).
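here's a python/numpy sketch of the whole experiment (the fundamental, the number of terms, and the waveshaper gain alpha are arbitrary choices):

[code]
import numpy as np

fs = 44100
t = np.arange(fs) / fs
f0 = 110.0
N = 30                            # number of odd-harmonic terms kept

x1 = np.zeros_like(t)             # alternating-sign series -> approaches a square wave
x2 = np.zeros_like(t)             # same magnitudes, all phases aligned -> much spikier
for n in range(N + 1):
    k = 2 * n + 1
    x1 += ((-1) ** n / k) * np.cos(2 * np.pi * k * f0 * t)
    x2 += (1.0 / k)       * np.cos(2 * np.pi * k * f0 * t)

print(x1.max(), x2.max())         # x2 has much higher peaks than x1

def soft_clip(x, alpha):
    # y = (1/alpha) * arctan(alpha * x)
    return np.arctan(alpha * x) / alpha

y1 = soft_clip(x1, 5.0)           # alpha = 5 is an arbitrary "gentle" setting
y2 = soft_clip(x2, 5.0)           # x2's tall peaks get squashed much harder than x1's
[/code]

before the waveshaper, x1 and x2 have identical harmonic magnitudes; after it, y2 has had its peaks flattened far more than y1, so the two no longer share the same spectrum and no longer sound alike.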


well, here's another odd case for you, Warren (and this one is purely linear):

All-pass Filter (APF):

[tex] H(z) = \frac{z^{-N} - p}{1 - p z^{-N}} [/tex]

the frequency response of a digital filter (more precisely called a "discrete-time filter") is evaluated as

[tex] H(z) \mid_{z=e^{i \omega}} = H \left( e^{i \omega} \right) [/tex]

where [tex] \omega = \frac{2 \pi f}{F_s} [/tex] , [itex] f [/itex] is the frequency, and [itex] F_s [/itex] is the sampling frequency. [itex] F_s / 2[/itex] is the so-called "Nyquist" frequency and all frequencies must be less than Nyquist in magnitude (or you get aliasing).

[itex] z^{-N} [/itex] represents a delay element of N samples and [itex] 0 \leq p < 1 [/itex] is a number that represents a "pole" if N were 1. (there are N poles, in reality.)

it turns out that

[tex] | H \left( e^{i \omega} \right) | = 1 [/tex]

for all [itex] |f| \leq F_s / 2 [/itex] so this all-pass filter changes nothing (other than possibly phase).

for [itex] F_s [/itex] equal to, say, 44100 Hz, and if p = 0.95, then if N = 1, i would agree that, for the most part, the inclusion or removal of this APF would normally be inaudible. however, if N = 22050 (a half-second delay element) and p = 0.95, i must steadfastly disagree with any notion that the inclusion or removal of this APF would be inaudible.
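a little python sketch of that APF, using scipy's lfilter for the recursion (the parameter values are the ones above):

[code]
import numpy as np
from scipy.signal import lfilter

fs = 44100
p = 0.95
N = 22050                                    # a half-second delay at fs = 44100

# H(z) = (z^-N - p) / (1 - p z^-N)
b = np.zeros(N + 1); b[0] = -p; b[-1] = 1.0  # numerator:   -p + z^-N
a = np.zeros(N + 1); a[0] = 1.0; a[-1] = -p  # denominator:  1 - p z^-N

# |H(e^{i w})| = 1 at every frequency, so only phase is altered:
w = np.linspace(0.01, np.pi, 5)
H = (np.exp(-1j * w * N) - p) / (1 - p * np.exp(-1j * w * N))
print(np.abs(H))                             # all ones, to roundoff

# yet a single click through this APF comes out as an audible train of echoes:
x = np.zeros(3 * fs); x[0] = 1.0
y = lfilter(b, a, x)                         # impulse, then decaying echoes every N samples
[/code]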

so here is an example where changing nothing other than phase creates a clearly audible difference.

r b-j
 

1. What are harmonic components in sound synthesis?

Harmonic components are the basic building blocks of sound in sound synthesis. They are the pure tones that make up a sound and are created by oscillators in a synthesizer.

2. How are harmonic components used in sound synthesis?

Harmonic components are used in sound synthesis to create complex sounds by combining multiple pure tones with different frequencies, amplitudes, and phases.

3. What is the purpose of using sine harmonic components in sound synthesis?

Sine harmonic components are used in sound synthesis because they produce a pure, smooth tone without any additional harmonics. This allows for more control and precision in creating desired sounds.

4. Can harmonic components be manipulated in sound synthesis?

Yes, harmonic components can be manipulated in sound synthesis by adjusting their individual frequencies, amplitudes, and phases. This allows for the creation of a wide range of complex and unique sounds.

5. Are harmonic components the only elements used in sound synthesis?

No, harmonic components are not the only elements used in sound synthesis. Other elements such as noise, filters, envelopes, and effects are also commonly used to create and shape sounds in synthesis.
