Trying to use LMS adaptive filtering to remove noise with a reference signal

AI Thread Summary
The discussion focuses on the challenges of using LMS (Least Mean Square) adaptive filtering to remove noise from a noisy fNIRS signal, which includes a desired signal and a noise reference. The user is struggling with implementing an LMS filter found online, as it requires known desired signals for coefficient calculations, which they do not have. They express confusion about treating the noisy signal as the desired one and question the lack of literature addressing phase shifts in the filtering process. Additionally, the output signal becomes erratic towards the end of the file, but insufficient details are provided to diagnose the issue. Overall, the conversation highlights the complexities of adaptive filtering in the context of noisy signal processing.
softwareguy
I have a device (fNIRS, though knowledge of fNIRS probably isn't necessary to help) which produces a very noisy signal and a noise reference. The noisy signal is a combination of a desired signal and a noise signal, where the noise is a scaled and phase-shifted version of the noise reference. However, I'm fairly sure the scaling and the phase shift change slightly over time.
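In other words, roughly (the symbols here are just my own shorthand, not anything from the device documentation):

$$ d(n) = s(n) + a(n)\,x\bigl(n - \delta(n)\bigr) $$

where ##d(n)## is the noisy measurement, ##s(n)## is the desired signal, ##x(n)## is the noise reference, and ##a(n)## and ##\delta(n)## are the slowly varying scale and delay.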

Similar research all suggests adaptive filtering. Colleagues in another lab have been using an LMS (Least Mean Square) adaptive filtering subroutine, but it's in a MATLAB toolbox which we can't afford. I found an LMS filter online. After problems getting the filter to work properly, I looked at the literature, and I believe I understand how these filters work. However, I have some questions.

All of the explanations of LMS filters involve solving over time for the filter coefficients that will produce a desired signal from a noisy one, assuming you already know the desired signal. That appears to be useful for finding unknown filter coefficients, but not for my purposes. The filter I found appears to somewhat work if I treat the noisy signal as the desired signal and the noise reference as the input, and then take the output signal (which is theoretically driven to be as close to the desired/noisy signal as possible). This doesn't make any sense to me, and I feel like it must be wrong.
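To make this concrete, here is a stripped-down Python/NumPy sketch of what I think the filter I found is doing (the function name, tap count, and step size are placeholders I made up, not the actual code I downloaded):

```python
import numpy as np

def lms_noise_canceller(d, x, num_taps=32, mu=0.01):
    """Textbook-style LMS adaptive noise cancellation.

    d        : noisy measurement (desired signal + correlated noise),
               fed in as the LMS "desired" input
    x        : noise reference signal
    num_taps : length of the adaptive FIR filter
    mu       : step size (needs to be small relative to the reference
               signal power, otherwise the weights can diverge)

    Returns (e, y): e is the measurement minus the estimated noise,
    y is the estimated noise component.
    """
    d = np.asarray(d, dtype=float)
    x = np.asarray(x, dtype=float)
    n = len(d)
    w = np.zeros(num_taps)      # adaptive FIR weights
    y = np.zeros(n)             # filter output = estimate of the noise in d
    e = np.zeros(n)             # error = d - y
    for i in range(num_taps - 1, n):
        x_vec = x[i - num_taps + 1:i + 1][::-1]   # newest reference sample first
        y[i] = w @ x_vec                          # estimate the noise component
        e[i] = d[i] - y[i]                        # subtract it from the measurement
        w += 2.0 * mu * e[i] * x_vec              # LMS weight update
    return e, y
```

In this sketch ##y## is the filter's estimate of the noise component in the measurement and ##e## is what remains after subtracting it, so should I be taking ##e##, rather than ##y##, as the cleaned signal?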

In addition, none of the literature seems to account for handling a phase shift. Am I missing something?

Also, the output signal goes haywire towards the end of the file, and I can't understand why this would be happening.
 
softwareguy said:
will produce a desired signal from a noisy one, if you already know the desired signal.
Of course! What else? If you can't define the difference between noise and signal, then you can't design a filter. I'm sure you know that, so your question is ill-formed.

As for going haywire at the end, you don't give enough info for us to guess.
 