I have a device (fNIRS, though knowledge of fNIRS probably isn't necessary here) that produces very noisy signals along with a noise reference. Each noisy signal is a combination of a desired signal and a noise signal, where the noise is a scaled, phase-shifted copy of the noise reference. However, I'm fairly sure the scale factor and phase shift drift slightly over time.

Similar research all points to adaptive filtering. Colleagues in another lab have been using an LMS (Least Mean Squares adaptive filtering) subroutine, but it's in a MATLAB toolbox we can't afford, so I found an LMS filter implementation online. After some trouble getting the filter to work properly, I went to the literature, and I believe I understand how these filters work. However, I have some questions.

All of the explanations of LMS filters involve solving over time for the filter coefficients that will produce a desired signal from a noisy one, assuming you already know the desired signal. That seems useful for identifying unknown filter coefficients, but not for my purposes. The filter I found appears to somewhat work if I treat the noisy signal as the "desired" signal and the noise reference as the input, and then take the output signal (which is, in theory, adapted to be as close to the desired/noisy signal as possible). This doesn't make any sense to me, and I feel like it must be wrong. In addition, none of the literature seems to account for handling a phase shift. Am I missing something?

Also, the output signal goes haywire towards the end of the file, and I can't understand why that would be occurring.
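In case it helps, here is a minimal pure-Python sketch of the LMS configuration I'm describing: the noisy recording is fed in as the "desired" signal and the noise reference as the filter input, and the filter returns both its output y(n) and the residual e(n) = d(n) - y(n). The tap count, step size `mu`, and the simulated data (a sine plus a scaled, 2-sample-delayed copy of the reference) are arbitrary placeholders, not my actual parameters:

```python
import math
import random

def lms(desired, reference, n_taps=8, mu=0.05):
    """LMS adaptive filter.

    desired   : the noisy recording d(n) (signal + noise)
    reference : the noise reference x(n)
    Returns (out, err): out[n] is the filter output y(n), the adapted
    estimate of d(n) built from the reference; err[n] = d(n) - y(n)
    is the residual left after subtracting that estimate.
    """
    w = [0.0] * n_taps            # filter weights, adapted every sample
    buf = [0.0] * n_taps          # delay line of recent reference samples
    out, err = [], []
    for d, x in zip(desired, reference):
        buf = [x] + buf[:-1]      # shift the newest reference sample in
        y = sum(wi * xi for wi, xi in zip(w, buf))
        e = d - y
        out.append(y)
        err.append(e)
        # LMS update: nudge weights along the instantaneous gradient
        w = [wi + mu * e * xi for wi, xi in zip(w, buf)]
    return out, err

# Toy data: slow sine (the "desired" physiology) plus a scaled,
# delayed copy of the reference noise.
random.seed(0)
n = 2000
ref = [random.uniform(-1.0, 1.0) for _ in range(n)]
signal = [0.5 * math.sin(0.05 * i) for i in range(n)]
noisy = [signal[i] + 0.8 * (ref[i - 2] if i >= 2 else 0.0)
         for i in range(n)]

y, e = lms(noisy, ref)
```

In this toy setup the scale (0.8) and delay (2 samples) are fixed; in my real data both appear to drift slowly.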