
#1
May 6, 2008, 05:23 PM

P: 4

Hello everybody. I'm trying to build a sound-direction localising subsystem for a robot.

I have two microphones placed a distance apart. I'm not worried about sounds behind, or about distance, just the direction. I see two choices: phase-shift detection, or a neural simulation of interaural time difference (ITD) detection.

Because sound travels at a fairly slow speed, the difference in arrival time between the two mics can be measured. The phase of the sound at the farther mic will lag behind the nearer one, and by measuring the phase lag, the sound direction can be calculated. The measuring is the fun bit: because we're not dealing with a spike that can easily be timed, but a complex mish-mash of frequencies, there's no landmark sound to trigger things. I guess I have to process the sound into its component frequencies, then look for the matching set to appear on the other channel consistently for a given time, and vice versa. Eek. Lots of Fourier transforms and maths. Is anyone aware of an IC that's been produced to achieve this? Or a better way?

The second choice is to mimic how we do it. I'm reading a few papers on this at the moment, but none are really helping me grasp the basic physiology of animal aural processing. I think we use hair cells as bandpass filters to roughly achieve the FTs as above, but I'm not sure of the time-comparison process, and less sure of how I can do this electronically.

Soooo, this is one of those parts of a project that is proving a mountain to solve. If I can do it without loads of microcontroller code, great. I'm really hoping there's a blindingly obvious solution involving two 555s that'll do it, but I'm not holding my breath :) Any advice/references gratefully received.



#2
May 7, 2008, 01:23 AM

P: 2,265

To relate the time difference (ITD) to azimuth direction, you could review the Blumlein stereo patents (or just do a little trigonometry: assume no head shadowing, but that you know the inter-microphone spacing).

To get the ITD, you want to compute the "cross-correlation" between the signals of the two microphones:

[tex] R_{lr}(\tau) = \int x_l(t) \, x_r(t - \tau) \, w(t) \, dt [/tex]

where [itex]w(t)[/itex] is a window function. If you're doing this with a DSP (or some other real-time processor), then the above integral is a summation, the signals are discrete-time, and the offset lag [itex]\tau[/itex] is also an integer number of samples. If you like USENET, comp.dsp is a good newsgroup for this question.
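A minimal discrete-time sketch of that cross-correlation search, in Python/NumPy rather than on a DSP (the function name and the synthetic test signal are my own illustration):

```python
import numpy as np

def estimate_itd(left, right, sample_rate):
    """Estimate the interaural time difference (seconds) as the lag that
    maximises the cross-correlation of the two channels.  Positive means
    the left channel arrives later, i.e. the source is on the right."""
    corr = np.correlate(left, right, mode="full")
    # Index (len(right) - 1) of the 'full' output corresponds to zero lag
    lag_samples = int(np.argmax(corr)) - (len(right) - 1)
    return lag_samples / sample_rate

# Example: a white-noise burst, left channel delayed by 5 samples
rng = np.random.default_rng(0)
fs = 8000
x = rng.standard_normal(1000)   # right channel
x_l = np.roll(x, 5)             # left channel lags by 5 samples
print(estimate_itd(x_l, x, fs))
```

Here the implicit window is rectangular (the whole buffer); in a streaming system you'd run this over short overlapping frames, and the resolution is one sample period unless you interpolate around the peak.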



#3
May 7, 2008, 03:30 PM

P: 4

Many thanks for your response. I came across this beasty in my search last night (I can't post URLs, so): analog.com/en/prod/0,,770_847_AD8302,00.html. Looks very interesting. When I get my hands on one to evaluate, I'll report back.





