
Travelling wave phase difference problem

  1. Jun 27, 2012 #1
    Hi,

    I can't get my head round this:

    Suppose I have two transmitters A and B, at different distances (d_A, d_B) from a receiver, C.

    Each transmitter transmits a sinusoidal wave towards the receiver, which I will model as a one-dimensional travelling wave, given by the equation:

    y(x,t) = Asin(wt - kx + θ).

    The idea is that so long as the transmission wavelength used is at least twice the larger of d_A and d_B, a phase comparator can detect the difference in phase between the two signals due to changes in their relative distances from the receiver C (since one will travel a fraction of a wavelength further than the other).

    This is where I'm stuck:

    Now suppose B is further away than A, so that d_B > d_A, and that both transmitted waves have exactly the same frequency and starting phase (meaning that if t_A and t_B are the times each transmitter starts its transmission, then for a given value of x, y_A = y_B, i.e. they are spatially coherent, I think that is the right term?). I'm also assuming that the amplitude of both waves at the receiver is 1, to simplify the scenario.

    What I want to know is: if there is a difference between the transmission start times, t_A and t_B, of the two transmitters, will it cause a difference in the measured phase at the receiver (hence affecting the reliability of the distance measurement based on the phase difference)?
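    For what it's worth, here is a quick numerical sketch of the phase comparison (all numbers are hypothetical: a 100 MHz carrier and metre-scale distances), just to check that, when both waves start together, the measured phase difference depends only on the path difference d_B − d_A:

```python
import math

def phase_at_receiver(d, wavelength, t, freq, theta0=0.0):
    """Phase of y(x,t) = A sin(w t - k x + theta) evaluated at x = d."""
    w = 2 * math.pi * freq
    k = 2 * math.pi / wavelength
    return w * t - k * d + theta0

# Hypothetical numbers: f = 100 MHz, so wavelength = c / f = 3 m
c = 3e8
f = 1e8
lam = c / f

d_A, d_B = 1.0, 1.2   # both under lam/2, so the comparison is unambiguous
t = 1e-6              # any common observation time

dphi = phase_at_receiver(d_A, lam, t, f) - phase_at_receiver(d_B, lam, t, f)
# dphi = k * (d_B - d_A): the common w*t term cancels, and the path
# difference is recovered as dphi / k.
```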
     
  3. Jun 27, 2012 #2
    What do you mean by transmission time? The propagation time between transmitter and receiver or something else?
     
  4. Jun 28, 2012 #3
    I mean the initial times at which the transmissions start at A and B, so that A and B start their transmissions at slightly different times rather than simultaneously.

    Sorry, I didn't word it very well there.
     
  5. Jun 28, 2012 #4
    Then yes, the delay between the signals at the sources should be taken into account.
    If there is such a delay it should be contained in the initial phase θ. This is the phase at x=0 and t=0 for each one of the signals. Both time and position should be measured from the same reference point. Then θ will "contain" the information about both the position shift and time shift of the two sources.
    If you take θ=0 for one source (A for example), the value of θ for source B will depend on the position and time offsets relative to the source A.

    By the way, for the 1 D case the phase difference between the two waves does not depend on the position along the line, does it?
    On the other hand, if you can measure the phase difference at two different points for one wave, this will depend on position. But then you don't need two waves.
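    To make the time-offset part concrete (my numbers are hypothetical): if B starts its transmission dt seconds after A, then B's wave is A sin(w(t − dt) − kx) = A sin(wt − kx − w·dt), so the start delay appears as an extra initial phase θ_B = −w·dt, exactly the θ described above:

```python
import math

f = 1e8                  # hypothetical 100 MHz carrier
w = 2 * math.pi * f      # angular frequency

# B starts dt seconds after A:
#   y_B = sin(w * (t - dt) - k * x) = sin(w * t - k * x - w * dt)
# so the start delay is equivalent to an initial phase theta_B = -w * dt.
dt = 1e-9                # 1 ns start-time offset
theta_B = -w * dt        # roughly -0.63 rad for these numbers
```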
     
  6. Jul 2, 2012 #5

    Thanks. Yes, in the 1D case where you have just one wave that is reflected and has its phase measured with respect to its reflected copy, the initial phase and time of transmission are irrelevant, and any difference in phase is due solely to a change in distance between transmitter and reflector. This is known as continuous-wave amplitude-modulated range-finding. It usually requires multi-tone modulation to obtain a given precision over a given unambiguous distance range.

    What I was thinking is this: if you could have two transmitters sending out waves with the same initial phase, θ, and different transmit times t_A and t_B, then provided t_A and t_B are sufficiently close together that they do not cause a significant change in the measured phase difference, the measured phase difference should correspond to the distance measurement only, to some degree of precision.

    So, suppose you could trigger two RF transmitters at t_A and t_B with the same initial phase. If the difference between transmit times is less than some error value E (i.e. |t_A − t_B| < E), such that the phase shift Δθ_E caused by the difference in transmit times is much less than the minimum phase difference Δθ required to measure distance to the desired precision, then the timing difference introduces negligible error. Provided the two criteria hold (the initial phases of both transmitters are the same, and the time difference is always less than E), you could repeat separate measurements using the same phase reference (or very nearly the same, within an acceptable amount of error), meaning that separate consecutive measurements share a phase reference and can therefore track any RELATIVE change in distance between the two.
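    A rough error budget along those lines (my own hypothetical numbers, taking "much less than" as a factor of ten): a start-time offset E produces a phase error w·E, while resolving a distance step δd needs a phase step k·δd, so:

```python
import math

c = 3e8                          # speed of light, m/s
f = 1e8                          # hypothetical 100 MHz carrier
w = 2 * math.pi * f
lam = c / f                      # wavelength = 3 m

desired_precision = 0.01         # desired distance precision, metres
dtheta_min = (2 * math.pi / lam) * desired_precision   # phase step to resolve

# Require w * E << dtheta_min; take "much less" as a factor of 10.
E = dtheta_min / (10 * w)
# E simplifies to desired_precision / (10 * c): the carrier frequency
# cancels, and cm-level precision demands picosecond-level trigger sync.
```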

    Supposing there were now a set of fixed transmitters instead of just two, then given an initial set of reference phases taken while the receiver is stationary, the receiver would be able to calculate its position relative to the set of transmitters, via trilateration, as it moves around.

    That is my idea, but there are some potential problems I can see with it. Mainly: whether high-frequency RF transmitters can be constructed to have the same initial phase when they transmit (both the same transmitter transmitting on different occasions, and different identically constructed transmitters), and also whether trilateration is possible using a changing set of RELATIVE distances, as opposed to the periodic set of ABSOLUTE distances used in GPS.
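    On the trilateration question, here is a minimal 2D sketch of the absolute-distance case (standard linearised trilateration; the anchor coordinates and distances are hypothetical). With only relative distances, each range picks up an unknown common offset that would have to be solved for as an extra unknown, somewhat analogous to the receiver clock bias in GPS:

```python
import math

def trilaterate_2d(p1, p2, p3, r1, r2, r3):
    """Solve for (x, y) given three anchor points and distances to each.
    Subtracting the first circle equation from the other two gives a
    2x2 linear system (anchors must not be collinear)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    A = x2 - x1; B = y2 - y1
    C = x3 - x1; D = y3 - y1
    E = 0.5 * (r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2)
    F = 0.5 * (r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2)
    det = A * D - B * C
    return (E * D - B * F) / det, (A * F - E * C) / det

# Example: anchors at (0,0), (4,0), (0,4); true receiver at (1, 2)
x, y = trilaterate_2d((0, 0), (4, 0), (0, 4),
                      math.sqrt(5), math.sqrt(13), math.sqrt(5))
```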
     