Greetings, I am studying Analog & Digital Communications in one of my lectures and I'm stuck on a topic. I would be grateful if someone is willing to help me with the problem below.

The probability of error for a digital signal sent using non-return-to-zero (NRZ) encoding is easy to find: for a single symbol transmitted in the interval 0 < t < T, the receiver output is Y = -A + N if a 0 is sent and Y = A + N if a 1 is sent. Since the interval consists of a single rectangle of height A (when a 1 is sent) or height -A (when a 0 is sent), we can take the mean and variance of Y (the output, Y = -A + N or Y = A + N depending on the data transmitted), normalize it, and use the Q() function to find the error rate.

However, Manchester encoding is different: to send a 1, the signal is A for 0 < t < T/2 and -A for T/2 < t < T, and vice versa for a 0. How am I supposed to do the math for this? When I was working with NRZ, I built my calculations on the fact that x(t) = -A + w(t) when a 0 is transmitted (w(t) being white Gaussian noise). How can I build my new x(t) function? The rest I'm sure I can handle.
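To make my question concrete, here is a minimal NumPy sketch of the setup I described. It simulates the NRZ decision statistic Y = ±A + N and checks the error rate against Q(A/σ), and it also builds the Manchester waveform exactly as stated above (A on the first half-bit, -A on the second half for a 1, reversed for a 0). The values of A, sigma, T, and the sample counts are arbitrary illustrative choices, not from any particular problem statement:

```python
import numpy as np
from math import erfc, sqrt

def Q(x):
    """Gaussian tail probability Q(x) = P(Z > x), via the complementary error function."""
    return 0.5 * erfc(x / sqrt(2))

# --- NRZ model: Y = -A + N (bit 0) or Y = A + N (bit 1), N ~ Gaussian(0, sigma^2) ---
rng = np.random.default_rng(0)
A, sigma, n_bits = 1.0, 0.5, 200_000       # illustrative values
bits = rng.integers(0, 2, n_bits)
y = np.where(bits == 1, A, -A) + rng.normal(0.0, sigma, n_bits)
decisions = (y > 0).astype(int)            # threshold at 0
ber_sim = np.mean(decisions != bits)
ber_theory = Q(A / sigma)                  # the normalized Q() result I mentioned

# --- Manchester waveform for one bit interval 0 < t < T, as described above ---
def manchester_waveform(bit, A=1.0, T=1.0, n_samples=100):
    """Bit 1: +A on (0, T/2), -A on (T/2, T); bit 0 is the negative of that."""
    t = np.linspace(0.0, T, n_samples, endpoint=False)
    pulse = np.where(t < T / 2, A, -A)
    return pulse if bit == 1 else -pulse

x1 = manchester_waveform(1)   # +A then -A
x0 = manchester_waveform(0)   # -A then +A
```

My question is essentially how to carry the Y = ±A + N analysis over to the two-level pulse `manchester_waveform` produces, since x(t) is no longer a constant plus noise over the whole interval.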