Hello everybody. I'm trying to build a sound direction localising subsystem for a robot.
I have two microphones placed a known distance apart. I'm not worried about telling front from back, or about distance, just the direction.
I see two choices: phase-shift detection, or a neural simulation of interaural time difference (ITD) detection.
Because sound travels fairly slowly, the difference in arrival time between the two mics can be measured. The phase of the sound at the farther mic will lag behind the near one. By *measuring the phase lag*, the sound direction can be calculated. The italic bit is the fun bit.
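To make the last step concrete, here's a rough sketch of the delay-to-direction geometry as I understand it: for a far-field source the bearing is arcsin(c·Δt / d). The speed of sound and mic spacing below are placeholder values, not anything final.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s at roughly 20 °C (assumed)
MIC_SPACING = 0.15       # metres between the two mics (assumed)

def delay_to_bearing(delay_s: float) -> float:
    """Convert an inter-mic arrival delay (seconds) into a bearing in degrees
    off the forward axis, assuming a far-field (plane-wave) source."""
    # Extra path length the sound travels to reach the far mic
    path_diff = SPEED_OF_SOUND * delay_s
    # Clamp to the physically possible range before taking arcsin
    ratio = max(-1.0, min(1.0, path_diff / MIC_SPACING))
    return math.degrees(math.asin(ratio))

# Example: a 200 µs lag works out to roughly 27° off-axis with 15 cm spacing
print(delay_to_bearing(200e-6))
```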
Because we're not measuring a nice clean spike but a complex mishmash of frequencies, there's no landmark in the sound to trigger things. I guess I have to process the sound into its component frequencies, then look for the matching set to appear on the other channel consistently for a given time. And vice versa. Eek. Lots of Fourier transforms and maths. Is anyone aware of an IC that's been produced to achieve this? Or a better way?
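If it does end up in software, my understanding is that cross-correlating the two channels gets the delay directly without needing a landmark: slide one channel against the other and take the lag where they line up best. A rough sketch with NumPy follows; the sample rate is an assumed value and the PHAT weighting is just one common variant I've seen mentioned, not the only option.

```python
import numpy as np

SAMPLE_RATE = 48_000  # Hz (assumed value)

def estimate_delay_gcc_phat(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate how far the right channel lags the left (in seconds) using
    GCC-PHAT: cross-correlate via the frequency domain, keeping only the
    phase so no single loud frequency dominates the peak."""
    n = 2 * max(len(left), len(right))            # zero-pad to avoid circular wrap-around
    spec = np.fft.rfft(right, n) * np.conj(np.fft.rfft(left, n))
    spec /= np.abs(spec) + 1e-12                  # PHAT weighting: discard magnitude
    corr = np.fft.irfft(spec, n)
    corr = np.concatenate((corr[-n // 2:], corr[:n // 2]))  # put zero lag in the middle
    lag = int(np.argmax(corr)) - n // 2           # lag in samples, positive = right lags
    return lag / SAMPLE_RATE
```

A short frame (say 20 to 50 ms) from each mic would go in, and the returned delay could be fed straight into the delay_to_bearing sketch above.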
The second choice is to mimic how we do it. I'm reading a few papers on this at the moment, but none are really helping me get the basic physiology of animal aural processing. I think the hair cells in the cochlea act as bandpass filters, roughly achieving the FTs as above, but I'm not sure of the time-comparison process. And I'm even less sure how I could do it electronically.
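From what I've read so far, the classic description of that time-comparison step is the Jeffress model: each frequency band feeds delay lines running in opposite directions, and coincidence-detector neurons fire where the internal delay cancels out the interaural one. Below is a crude digital caricature of that idea for a single band; it assumes SciPy for the filter, uses only one arbitrary band where a real cochlea has many, and brute-forces the delay line as a loop.

```python
import numpy as np
from scipy.signal import butter, lfilter

SAMPLE_RATE = 48_000  # Hz (assumed value)

def cochlea_band(signal, low_hz, high_hz, order=4):
    """Very rough stand-in for one cochlear channel: a Butterworth bandpass."""
    b, a = butter(order, [low_hz, high_hz], btype="band", fs=SAMPLE_RATE)
    return lfilter(b, a, signal)

def jeffress_itd(left, right, max_delay_samples=40):
    """Caricature of the Jeffress model: filter both ears into one band, then
    try a range of internal delays and keep the one giving the strongest
    coincidence (largest product sum). Positive result = right channel lags."""
    band_l = cochlea_band(left, 500.0, 1500.0)    # band edges are arbitrary here
    band_r = cochlea_band(right, 500.0, 1500.0)
    best_d, best_score = 0, -np.inf
    for d in range(-max_delay_samples, max_delay_samples + 1):
        # Shift one channel by d samples and measure how well the two coincide
        if d >= 0:
            score = np.dot(band_r[d:], band_l[:len(band_l) - d])
        else:
            score = np.dot(band_r[:d], band_l[-d:])
        if score > best_score:
            best_d, best_score = d, score
    return best_d / SAMPLE_RATE
```

Mathematically it ends up doing much the same job as the cross-correlation above, just framed the way the neurons are described as doing it, which is partly why the biological route appeals to me.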
Soooo, this is one of those parts of a project that is proving a mountain to solve. If I can do it without loads of microcontroller code, great. I'm really hoping there's a blindingly obvious solution involving two 555s that'll do it, but I'm not holding my breath :-)
Any advice/references gratefully received.