## Stereo signal phase shift

Hello everybody. I'm trying to build a sound direction localising subsystem for a robot.
I have two microphones placed a distance apart. I'm not worried about resolving front/back ambiguity or estimating distance, just the direction.

I see two choices: phase-shift detection, or a neural-style simulation of interaural time difference (ITD) detection.

Because sound travels at a fairly slow speed, the difference in arrival time between the two mics can be measured. The phase of the sound at the farther mic will lag behind the nearer one. *By measuring the phase lag, the sound direction can be calculated.* The italic bit is the fun bit.
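To make the geometry concrete, here's a minimal far-field sketch (the function name and example numbers are my own, not from any library): with mic spacing d and arrival-time difference Δt, the extra path length is c·Δt = d·sin(θ), so θ = arcsin(c·Δt/d).

```python
import math

def azimuth_from_itd(delta_t, mic_spacing, c=343.0):
    """Estimate source azimuth (radians) from an inter-mic time delay.

    delta_t:     arrival-time difference in seconds
    mic_spacing: distance between the microphones in metres
    c:           speed of sound in m/s (~343 at 20 C)

    Far-field assumption: the wavefront is planar, so the extra path
    length to the far mic is mic_spacing * sin(theta).
    """
    s = c * delta_t / mic_spacing
    s = max(-1.0, min(1.0, s))  # clamp against measurement noise
    return math.asin(s)

# Example: mics 0.2 m apart, sound arrives 291 microseconds apart
print(math.degrees(azimuth_from_itd(291e-6, 0.2)))  # ~ 29.9 degrees
```

Note the arcsine ambiguity: a source at θ and one mirrored behind the mic axis give the same delay, which matches the "not worried about sounds behind" simplification above.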

Because we're not measuring a spike that can be easily detected, but a complex mish-mash of frequencies, there's no landmark sound to trigger things. I guess I have to decompose the sound into its component frequencies, then look for the matching set to appear consistently on the other channel for a given time, and vice versa. Eek. Lots of Fourier transforms and maths. Is anyone aware of an IC that's been produced to achieve this? Or a better way?

The second choice is to mimic how we do it. I'm reading a few papers on this at the moment, but none are really helping me grasp the basic physiology of animal aural processing. I think the cochlear hair cells act as a bank of bandpass filters, roughly achieving the frequency decomposition above, but I'm not sure of the time-comparison process. And less sure of how I can do this electronically.
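The filterbank idea can at least be sketched in software. Here's a minimal stand-in, using FFT masking as a crude substitute for the cochlea's mechanical bandpass filters (function name and band edges are my own invention, purely illustrative):

```python
import numpy as np

def band_split(x, fs, edges):
    """Split signal x (sample rate fs) into frequency bands by FFT
    masking -- a crude stand-in for the cochlea's bank of bandpass
    "hair cell" filters. edges is a list of (lo_hz, hi_hz) pairs."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return [np.fft.irfft(X * ((freqs >= lo) & (freqs < hi)), n=len(x))
            for lo, hi in edges]

# Example: a 300 Hz + 2 kHz mixture separates cleanly into two bands
fs = 8000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 2000 * t)
low, high = band_split(sig, fs, [(100, 1000), (1000, 3500)])
```

The neural analogue would then compare left/right arrival times within each narrow band, where the signal is close to a single sinusoid and phase comparison becomes meaningful.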

Soooo, this is one of those parts of a project that is proving a mountain to solve. If I can do it without loads of microcontroller code, great. I'm really hoping there's a blindingly obvious solution involving two 555s that'll do it, but I'm not holding my breath :-)

to relate time difference (ITD) to azimuth direction, you need to review the Blumlein stereo patents. (or just do a little trigonometry. assume no head shadowing, but that you know the inter-aural spacing.) to get the ITD you want to compute the "cross-correlation" between the signals of the two microphones: $$R_{lr}(\tau) = \int (x_l(t) x_r(t-\tau)) w(t) dt$$ where $w(t)$ is a window function. if you're doing this with a DSP (or some other real-time processor), then the above integral is a summation and the signals are discrete-time. and the offset lag $\tau$ is also an integer number of samples. if you like USENET, comp.dsp is a good newsgroup for this question.
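The discrete-time version of that cross-correlation is a short loop: slide one channel past the other and take the lag with the biggest dot product. A minimal sketch (my own function name, with the lag sign chosen so a positive result means the right channel lags the left):

```python
import numpy as np

def estimate_itd_samples(left, right, max_lag):
    """Estimate the inter-channel delay (in samples) by brute-force
    cross-correlation over lags in [-max_lag, max_lag].
    Positive result: the right channel lags the left."""
    def corr(tau):
        # compare left[t] against right[t + tau] over the valid overlap
        if tau >= 0:
            return float(np.dot(left[:len(left) - tau], right[tau:]))
        return float(np.dot(left[-tau:], right[:tau]))
    return max(range(-max_lag, max_lag + 1), key=corr)

# Example: a noise burst delayed by 5 samples on the right channel
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
left = x
right = np.concatenate([np.zeros(5), x[:-5]])
print(estimate_itd_samples(left, right, 20))  # -> 5
```

At 44.1 kHz and 0.2 m mic spacing the maximum physical delay is about 26 samples, so the lag search range stays small; the resulting sample delay converts to an angle via the arcsine geometry.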
Many thanks for your response. I came across this [ oh can't post URLs ] analog.com/en/prod/0,,770_847_AD8302,00.html beasty in my search last night. Looks very interesting. When I get my hands on one to evaluate, I'll report back.

Quote by rbj to relate time difference (ITD) to azimuth direction, you need to review the Blumlein stereo patents. (or just do a little trigonometry. assume no head shadowing, but that you know the inter-aural spacing.) to get the ITD you want to compute the "cross-correlation" between the signals of the two microphones: $$R_{lr}(\tau) = \int (x_l(t) x_r(t-\tau)) w(t) dt$$ where $w(t)$ is a window function. if you're doing this with a DSP (or some other real-time processor), then the above integral is a summation and the signals are discrete-time. and the offset lag $\tau$ is also an integer number of samples. if you like USENET, comp.dsp is a good newsgroup for this question.
i tried to fix this last night, but the PF server was acting very badly. above is what i meant.