
2 known signals, 1 PSD: extract components with same phase.

  1. Jul 21, 2009 #1
    As stated in title: I have full knowledge of the 2 signals. They have the same Power Spectral Density. However, only certain components of the 2 signals have the same phase. Is there any way of extracting these components, or at least their PSD functions? Is this doable at all?

    The phase I think I can retrieve by generating a separate reference signal...

    Any ideas how to solve this would be greatly appreciated!

    Thanks in advance!
  3. Jul 21, 2009 #2



    Do the components with the same phase have the same, constant frequency?

    Do you just need to know it is there in both signals at the same time?
  4. Jul 21, 2009 #3
    Thanks for your reply! Unfortunately, they do not have the same constant frequency, and I know beforehand that they are there at the same time.

    But if it isn't possible to do this at all, that is valuable information as well. It just means I need to add more sensors or at least change the system configuration. I have been thinking of comparing the cross-power spectral density with the power spectral density of one of the signals. I'm not entirely sure how the in-phase components would show up, though, or if they would at all.

    Edit: I can also calculate the range of the phase lag of the signals of no interest. Could that help?
    Last edited: Jul 21, 2009
  5. Jul 21, 2009 #4
    Not clear what you mean by 'full knowledge'. If you really had full knowledge, then you could solve the problem on paper. So, I think the solution (or lack of one) will depend heavily on what you actually do and do not know about the signals (and potentially what you know about the noise field as well).
  6. Jul 21, 2009 #5
    Thanks for the comment!
    I do have full knowledge of the 2 signals. I mean, full knowledge in the sense that they are the actual recorded signals. What I don't know is what components of the 2 signals have the same phase.

    So, at least on paper, how do I solve the problem? I would like to solve the problem myself, so could you start by just giving a hint about what tools I need to use?
    Last edited: Jul 21, 2009
  7. Jul 21, 2009 #6



    There seem to be a lot of people doing top secret stuff.

    Please try to explain exactly what you have, where it came from and what frequencies you would expect to find in it.
    And why would you be looking for similar phase?

    In electronics, the term "in phase" really means more than finding coincidental pulses in a noise spectrum. It means something more like a similar repetition rate over a number of cycles.
  8. Jul 22, 2009 #7
    What I mean is that if you know that signal 1 is 'sin x' and signal 2 is 'sin(ax + b)',
    you can compute the phase angle difference as a function of time (see http://en.wikipedia.org/wiki/Phasor_(sine_waves) for a basic discussion).

    But from your reply I see that what you actually have is either a) time series samples of each signal separately, or b) time series samples of the mixed signal (s1 + s2). If case a), then s1 - s2 gives the amplitude difference as a function of time, and you can analyze this signal to determine the average or instantaneous phase difference, etc.
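    For case a), one concrete way to get the instantaneous phase difference is an analytic-signal (Hilbert transform) estimate rather than working with the raw difference signal. A minimal Python/SciPy sketch, where the 8 kHz sample rate, the 440 Hz tones, and the 0.4 rad offset are all invented test values:

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000                       # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)

# two synthetic tones with a known 0.4 rad phase offset
s1 = np.sin(2 * np.pi * 440 * t)
s2 = np.sin(2 * np.pi * 440 * t + 0.4)

# the analytic signal gives an instantaneous phase at every sample
phi1 = np.unwrap(np.angle(hilbert(s1)))
phi2 = np.unwrap(np.angle(hilbert(s2)))

# instantaneous phase difference; trim the edges where the
# FFT-based Hilbert transform can show end effects
dphi = (phi2 - phi1)[fs // 10 : -fs // 10]
print(round(float(np.mean(dphi)), 2))   # recovers roughly 0.4
```

    With real recordings the phase difference varies over time, so you would look at dphi sample-by-sample rather than its mean.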

    If case b), then you will have to add in some additional a priori knowledge about the underlying nature of the signals (e.g. one is a pure sine wave, etc.). If you don't have any additional simplifying information, then you have a non-linear search and optimization problem (what set of 2 and only 2 waveforms add to produce the observed resultant time series). There is, of course, a potentially large set of possible results, but the search can be aided to some extent by an Occam's-razor-type assumption: what are the 2 *simplest* additive waves that produce the result?
    Last edited by a moderator: May 4, 2017
  9. Jul 22, 2009 #8
    rolerbe: Again, thanks so much for your replies! Yeah, I ruled out using anything but "statistically processed" material a little too early: I do have the resolution to use the signals directly and accurately. Right after I had written my last reply to you I realized what you meant by "on paper" and that it actually would be practically possible in my case.

    Fortunately I have case a). Because I want to use this in a real-time application, I doubt that the optimization algorithms would be speedy enough. And it seems I would need on the order of 10^4 calculations per second, which should be sufficiently low.

    Just for fun, I will apply optimization algorithms to only one signal, with different a priori assumptions, to see which one fits best. That would be interesting.
    Last edited: Jul 22, 2009
  10. Jul 22, 2009 #9
    I am still having trouble understanding what you have here, and what problem you want to solve.

    Are you saying you have two channels, and that the noise is uncorrelated but the signal is correlated but of different phase between the two channels? Or that you know they are the same phase and that is all you want to extract?

    It seems that if you know the phase is the same, then simply add the two signals to get a 3dB SNR gain, then do your best at detection. Anything more clever than that requires an understanding of the part of the modulation that is not in phase between the two channels.

    If the signals are different phase, is the phase of the correlated signal (between the two channels) unknown?

    And if so, is it a constant delay for all frequencies or does the phase or delay vary with frequency?

    And finally, does the phase of the signal between the two channels vary over time, and if so how fast?
    Last edited: Jul 22, 2009
  11. Jul 22, 2009 #10
    Quite possibly a double balanced mixer will detect the simultaneous presence of the two signals. Usually these devices are used only in the microwave range, but I think I have seen them in the RF range. How about letting us know what frequency range you are talking about, how much the signal power is (e.g., +9 dBm), what the signal frequency and amplitude variation is, what the total signal power in each signal is (all components), etc. The double balanced mixer is essentially a signal multiplier: you only get a dc output when two signals of the same frequency are applied to both inputs. The output amplitude depends on the relative phase of the two input signals, so if the phase varies, the output dc signal will vary.
  12. Jul 22, 2009 #11
    Ok, to simplify the problem: u1, u2 recorded:
    (I) u1=f(t)+g(t)
    (II) u2=f(t)+g1(t-p1)+g2(t-p2)+g3(t-p3)...
    (III) g(t)=g1(t)+g2(t)+...
    and then solve for f(t)

    It seems though as if I basically have 2 equations and 3 unknowns, unless one can use (III) in some way. The frequencies are fairly low; I don't think I would have any use for anything over 4 kHz, I would say in the range of 0.1-4 kHz. The phase differences (or rather delays) are constant and in the range of 0-0.3 ms. I honestly don't know what the signal power is yet; I haven't received the hardware.
    Last edited: Jul 22, 2009
  13. Jul 23, 2009 #12
    OK, great, this helps tremendously.

    If you don't know anything more useful about the 'g' signals or the phases, then you can only treat all the 'g' energy as noise, and the best you can do is simply add the two channels (u1 + u2) in order to get a 3dB SNR boost, where f(t) is signal and everything else is noise (assuming the same noise energy in each channel, the same signal energy in each channel, uncorrelated noise, and correlated signal).

    If the noise or signal level is different in one channel than the other, then your goal is to maximize SNR, and of course remember that...

    Correlated in-phase level (like correlated signal, or even correlated noise if you have any) (voltage or current) adds linearly:

    [tex]S = S_1 + S_2 + ...[/tex]

    Uncorrelated noise level (voltage or current) adds orthogonally:

    [tex]N = \sqrt{N_1^2 + N_2^2 + ... }[/tex]

    And of course, convert from power to level by taking the square root, and convert from level to power by squaring.

    So to maximize the SNR where the two channels don't have the same noise or signal level, you'd need to write the SNR equations with a variable gain in one channel and then find the gain that maximizes the SNR. Specifically, maximize:

    [tex]\frac{S}{N} = \frac{(S_1 + Gain * S_2 )}{\sqrt{N_1^2 + (Gain * N_2)^2}}[/tex]

    (Where the 'S' and 'N' variables are levels, not power)

    (And remember this assumes uncorrelated "noise". Different phases of those 'g' signals might show some correlated energy--for example if they are more like carriers without much modulation. If they are fully modulated signals, then you can assume different phases behave like uncorrelated noise. But "fully modulated" takes some good source and channel coding, like a good compression and efficient modulation with no predictable energy--like the predictability in a tracking signal, for example.)

    Then you'd apply that gain to the one channel and then add them.
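    As a quick numerical check of that gain optimization, here is a Python/NumPy sketch; the signal and noise levels are arbitrary example numbers, not from this thread:

```python
import numpy as np

# assumed example levels (volts), not real measurements
S1, S2 = 1.0, 0.5    # correlated signal level in each channel
N1, N2 = 0.2, 0.4    # uncorrelated noise level in each channel

def snr(gain):
    # SNR of the weighted sum, with the gain applied to channel 2
    return (S1 + gain * S2) / np.sqrt(N1**2 + (gain * N2)**2)

# brute-force sweep over candidate gains
gains = np.linspace(0.0, 5.0, 50001)
best = gains[np.argmax(snr(gains))]

# closed form from setting d(SNR)/dGain = 0:
# Gain = (S2/S1) * (N1/N2)^2
analytic = (S2 / S1) * (N1 / N2) ** 2
print(best, analytic)   # both roughly 0.125 for these numbers
```

    The sweep and the closed form agree, so in practice you can skip the sweep and just compute the gain directly from the estimated levels.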

    If you CAN somehow predict one or more of the phases or there's some predictability in some of the things I'm calling "noise" here, then you might do a little arithmetic to help remove the unwanted (in this case correlated) noise. But I'd have to know more to help in that way.

    Finally note that for simple channels with non-descript modulation, correlated signal, and uncorrelated AWGN noise, multiplying them will only hurt the SNR, not improve it. There are, however, special cases where you would multiply, like when you want to cross correlate or deconvolute (you'd multiply in the frequency domain to use less CPU) or the modulation contains more info in higher amplitude spots, for example. To talk about that, though, I'd have to know more about the modulation and channel response, what noise or signal is band-limited, what you know or can find out about phase, and whether some of the unwanted symbols you mentioned are correlated or not.
    Last edited: Jul 23, 2009
  14. Jul 23, 2009 #13
    Thank you so much fleem! That was of tremendous help! It would help a little if I measured at which frequencies most of the noise is present (which is variable) and adapted the time lag so that it coincided with c/(2*f_max).

    But how do I estimate the noise and signal levels so that I can maximize S/N?
  15. Jul 23, 2009 #14
    I'm not sure what you mean here. It sounds like you are saying the signal phase difference between the channels varies with frequency? If so then maybe you meant adjust the lag so that the signal is in phase at the frequency of minimum noise, rather than maximum noise (but I'm probably misinterpreting what you are saying, here).

    Is the signal correlated across the band or not correlated (i.e. is it like ultra-wideband where the same bit is sent simultaneously across the band, or is it something like OFDM where different bits are sent at different frequencies)? If it is correlated across the band, then you can also simply selectively attenuate the low-SNR parts of the band similar to the way I mentioned that you selectively adjust the gain of different channels, so that the final SNR is maximized. If the signal is uncorrelated across the band then you can't do that, of course--and hope the channel coding handles non-flat SNR--like some OFDM schemes do.

    That would require exact knowledge of the channels and of all the signals. I'm not sure if any of that noise has at least a little correlated energy (from your description it sounds like it might). What you might do is add to your receiver the ability to adjust the gain of one channel before the channels are added together, so that when you get the hardware you can find that optimal gain empirically.
  16. Jul 23, 2009 #15
    The optimization search is a b*tch. I was doing that some time ago for a real-time mass spectrometry app (matching Gaussians in that case). Very difficult, and it requires lots of heuristic code to reject nonsense results. Requires serious horsepower to do in real time. OTOH, it's amazing how much computation you can do in real time these days. Not only 'impossible' but *ridiculous* to even consider just 10 years ago.

    Good luck!
  17. Jul 23, 2009 #16
    On a side note, pretty much any modern FPGA or desktop computer has way more horsepower than you'd need to convert this 4 kHz of data into the frequency domain, where you can do all kinds of fun things like spectral filtering and fast cross- and auto-correlation (to find phases, etc.). The process is fairly simple. For example, cross-correlation in the frequency domain is simply the product of the spectra of the two signals you are cross-correlating. To deconvolute a signal, simply divide it (in the frequency domain) by the spectrum of the impulse response of the convoluting channel. There's lots of free DFFT code out there and code that multiplies & divides complex spectra. Most non-trivial signal processing these days occurs in the frequency domain. It's also easier to do math for signal processing in the frequency (Fourier) domain.
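    A toy illustration of that frequency-domain deconvolution, as a Python/NumPy sketch. The two-tap "channel" impulse response is an assumption for the demo, and the convolution is circular to keep it short:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256

x = rng.standard_normal(n)     # original signal
h = np.zeros(n)
h[0], h[3] = 1.0, 0.5          # assumed channel: direct path plus a weak echo

# pass x through the channel (circular convolution via the FFT)
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

# deconvolute: divide the received spectrum by the channel spectrum
# (this particular H never gets near zero; a real channel would
# need some regularization to avoid dividing by tiny bins)
x_rec = np.fft.ifft(np.fft.fft(y) / np.fft.fft(h)).real

print(np.max(np.abs(x_rec - x)))   # essentially zero
```

    The recovered signal matches the original to machine precision here only because the channel is known exactly and noise-free; with noise, the division amplifies whatever sits in the weak bins of H.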
    Last edited: Jul 23, 2009
  18. Jul 23, 2009 #17
    rolerbe: thank you so much and thanks for all the help!

    fleem: I have played a little with that but it seems to come down to the same thing again, 3 unknowns and 2 equations. But what you say is interesting. Could I recover the phase as well, perhaps by generating some kind of reference signal?
  19. Jul 23, 2009 #18
    If you have two channels and there is some energy which is common to those channels but out of phase, you will see it in the cross-correlation of the two signals. You can do that cross-correlation in the time domain if you like, but doing it in the frequency domain is faster (once converted to the frequency domain) and has some advantages. Just be sure the buffer size is at least several times the wavelength (the more waves that can fit in the buffer, the better, but of course there'll be a practical limit according to what processing lag you'll allow and according to memory available). So here's the short of it:

    Basically you run the time domain buffer from each channel through a FFT (DFFT to be precise, of course), which produces a real and an imaginary spectrum--a "complex spectrum". Each frequency has a real and an imaginary part. Don't let that scare you--it's just the X and Y values of a vector indicating the magnitude and phase of that frequency. Then you multiply each complex sample in one channel's spectrum by the complex conjugate of the sample at the same frequency in the other channel's spectrum, which produces a third complex spectrum (the conjugate is what makes the product a cross-correlation rather than a convolution). Note that these are complex multiplies. Find the peaks in that third spectrum and you've found the frequency and phase of the common energy in the two channels. Wikipedia is a good place to see how to do complex multiplies and divides (http://en.wikipedia.org/wiki/Complex_number), and you can google for free DFFT code in your favorite language. There's a lot of C/C++ stuff out there, some Java, and there might even be some free VHDL cores as well, if you're into FPGAs, for example. Then you can turn it back into the time domain if needed, via an IFFT.
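    A sketch of that recipe in Python/NumPy; the 500 Hz common tone, the noise levels, and the 0.8 rad offset are all invented test values:

```python
import numpy as np

fs, n = 8000, 4096
t = np.arange(n) / fs
rng = np.random.default_rng(1)

# a 500 Hz tone common to both channels, buried in independent
# noise, with ch2 lagging ch1 by 0.8 rad (assumed test values)
f0 = 500.0
ch1 = np.sin(2 * np.pi * f0 * t) + rng.standard_normal(n)
ch2 = np.sin(2 * np.pi * f0 * t - 0.8) + rng.standard_normal(n)

# FFT each buffer, then multiply bin-by-bin, one side conjugated
cross = np.fft.rfft(ch1) * np.conj(np.fft.rfft(ch2))

k = int(np.argmax(np.abs(cross)))   # strongest common bin
freq = k * fs / n                   # frequency of the common energy
phase = float(np.angle(cross[k]))   # phase of ch1 relative to ch2
print(freq, phase)                  # roughly 500.0 and 0.8
```

    Even at 0 dB SNR per channel the common tone dominates its bin of the cross-spectrum, because the independent noise contributions don't reinforce each other.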

    EDIT: I didn't answer your initial question! Yes, you could use a reference signal to correlate, but only if it's a sine wave or at least doesn't have much in the harmonics. And also that won't tell you if the delay is more than 360 degrees. The problem is that if it's a complex signal and the phase varies considerably, then the correlation won't be there.
    Last edited: Jul 23, 2009
  20. Jul 23, 2009 #19
    Fleem has you on the right track. The Gaussian matching I mentioned before is just another deconvolution operation as he describes, the point being that these techniques are powerful and not limited to just sine wave decompositions.

    The only other note I would make is that you will also have to make an appropriate selection of window function (there are several to choose from) for the FFT ops to avoid introducing new artifacts or effects, but that's a little down the road from where you are right now. First decide if this path in general is the one that will lead you to the result you want. The question didn't become clear until you posted your actual signal and measurement conditions.
  21. Jul 23, 2009 #20
    Yeah, I agree with you, but how do I implement that in a real-time program? I mean, what's a peak and what's not? Also, given a good criterion, do I implement the selection with an if-statement and go through the array of FFT values?

    Again, thanks so much so far guys! You have been great!
  22. Jul 23, 2009 #21
    Yep, that's what I'm picturing. But one thing I should have mentioned in the previous post is that the covariance of the two copies of the signal you are looking for (without regard to phase) must, of course, be the strongest covariance in the cross-correlation, at least at the frequency you're looking for. This is not to say the signal itself must be stronger than the noise and other unwanted "signal" (correlated noise), but that the part of the wanted signal that correlates (the covariance) must be notably higher than the covariance of any other signals in the channels. Really what I'm saying here is obvious: the peaks in the cross-correlation produced by the signal you want must be stronger than any other, because your peak-finder algorithm is simple, as you describe above (a 'for' loop looking for the max magnitude). Of course, knowing the frequency of the peak will let you ignore all other frequencies. Along with that is the depressing fact that unwanted signals at the same frequency will create other peaks in the time-domain cross-correlation that might confuse the peak-finder algorithm.

    Note that the peak finder algorithm operates on the time-domain form of the cross correlation.
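    To make that concrete, here is a Python/NumPy sketch: a delayed copy of a common signal in two channels, the cross-correlation brought back to the time domain with an inverse FFT, and the simple loop-style peak finder. The 12-sample delay is an invented test value, and the delay is circular to keep the demo short:

```python
import numpy as np

n = 2048
lag = 12                                  # true delay in samples (assumed)
rng = np.random.default_rng(2)

sig = rng.standard_normal(n)
ch1 = sig
ch2 = np.roll(sig, lag)                   # ch2 is ch1 delayed by 'lag' samples

# cross-correlation computed in the frequency domain, then
# brought back to the time domain with an inverse FFT
xc = np.fft.ifft(np.conj(np.fft.fft(ch1)) * np.fft.fft(ch2)).real

# simple peak finder: walk the buffer keeping the max magnitude
best_i, best_v = 0, 0.0
for i, v in enumerate(np.abs(xc)):
    if v > best_v:
        best_i, best_v = i, v

# indices past n/2 wrap around to negative lags
est = best_i if best_i <= n // 2 else best_i - n
print(est)                                # recovers the 12-sample delay
```

    The index of the peak gives the delay in samples, and (as noted above) its magnitude is the square root of the sum of the squares of the real and imaginary parts of the corresponding bins.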

    Also I forgot to mention (and you probably know this, but I thought I'd be clear) that the magnitude of a complex number is the square root of the sum of the squares of the real and imaginary parts (the length of that vector).

    As far as what can be implemented in a real-time program, considering that your bandwidth is only 4 kHz I can pretty much guarantee you'll have WAY more processing power than you need--unless you're going to do it with a 4 MHz 8051! Most any modern FPGA, and any modern desktop computer, can do this with its eyes closed and one hand in its back pocket. Consider that those programs that modify people's voices do all this sort of thing (FFTs, IFFTs, buffer multiplies & divides, etc.) on over twice the bandwidth and hardly nudge the CPU usage meter. Of course, there will be a delay of about four times the time-domain buffer size, assuming you aren't given an input buffer and don't provide an output buffer, until the buffers are full.

    EDIT: About an hour after posting this I changed some text above to correct some of my own mistakes.
    Last edited: Jul 23, 2009