
## Formula for transverse decrease in signal strength?

Alice is transmitting an electromagnetic beam to Bob.
Assume it is a well collimated beam, like a laser beam, or a maser beam.
As Bob's distance from Alice increases, his signal strength decreases according to the well-known inverse-square law.

Eve, the eavesdropper, is beside the beam, not in the beam.
As Eve's transverse distance from the beam increases, the strength of the signal decreases.
What is the formula for Eve's signal strength as a function of her transverse distance?

Does it decrease exponentially, like the tail of a Gaussian, or as an inverse power law, like the electromagnetic potential?
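For an ideal TEM00 Gaussian (laser-like) beam, the standard answer is the Gaussian one: the transverse intensity profile is I(r, z) = I0 (w0/w(z))² exp(−2r²/w(z)²), with w(z) = w0 √(1 + (z/zR)²) and Rayleigh range zR = πw0²/λ. A minimal sketch (the waist radius and wavelength below are illustrative assumptions, not anything from the thread):

```python
import math

def gaussian_beam_intensity(r, z, w0=1e-3, wavelength=633e-9, I0=1.0):
    """Relative intensity of an ideal TEM00 Gaussian beam.

    r: transverse distance from the beam axis (m)
    z: distance along the beam from the waist (m)
    w0: waist radius (m) -- assumed value, for illustration only
    wavelength: assumed HeNe-like wavelength (m)
    """
    zR = math.pi * w0**2 / wavelength       # Rayleigh range
    w = w0 * math.sqrt(1 + (z / zR)**2)     # beam radius at distance z
    return I0 * (w0 / w)**2 * math.exp(-2 * r**2 / w**2)

# Transversely the falloff is Gaussian in r; on axis, far beyond the
# Rayleigh range, doubling z quarters the intensity (inverse-square law).
```

So for a perfect Gaussian beam Eve's signal falls off like the tail of a Gaussian, while Bob on axis sees the inverse-square law. Real beams, as the later replies note, have diffraction sidelobes that decay only as a power law.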

Does the signal strength depend on whether the signal is coherent or not?
What if the signal is charged, e.g. an electron beam in a vacuum?

Might the signal strength also depend on Eve's detector type?
For instance, would a detector based on the photoelectric effect fall off exponentially, while one based on induction falls off as a power law?

Mentor
It depends on the beam shape at Alice's transmitter and on the distance to Alice.

> For instance, a detector based on the photoelectric effect will fall off exponentially, but one based on induction would fall off as a power law?
If your detector is properly calibrated, its output should be proportional to the measured quantity regardless of type. Induction is a non-local measurement, but then you have to take the whole beam into account, not just the field at the position where you perform the measurement.
I understand it's a conical beam limited by diffraction, and Eve's transverse distance is an angular one; this is implicit in "inverse square law". Nearer to the transmitter, where the beam is still cylindrical, the lateral distribution would be different.

The lateral drop of the field is determined by the shape of the aperture that defines the beam, and especially by the illumination of the aperture, i.e. the distribution of power density across it. With a uniformly illuminated rectangular aperture, the field decreases as sin(a)/a at angle a, hence as 1/a if you neglect the sin(a) oscillations. A circular aperture gives a Bessel function instead, with only minor differences from sin(a)/a. The power density varies as the field squared.

Proper illumination of the aperture can reduce these undesired "sidelobes": by a limited but very useful amount at small angles, and by an arbitrary amount at larger angles if the technology allows it. A Gaussian illumination is a good goal, and can even be exceeded by far over some angle ranges. A complete theory exists here: look at window functions like Hamming, Hanning, raised cosine and the like; this is how signal-processing people see this question, and they're the most advanced in this topic.

Astronomers are very interested in the topic as well. It's a fundamental limit when they want to image a planet near its star, and it also matters in IR and radio astronomy, where off-axis thermal noise must be rejected. Have a look at "diffraction" and "secondary lobes" for telescopes and refractors. Antennas for radar and sonar give much attention to this too, sometimes to avoid being jammed by off-axis sources, though never to avoid detection (that is hopeless here). It is one remaining property for which a physical aperture is better than a "synthetic aperture" (keyword).

Far sidelobes could be reduced arbitrarily, if only the illumination function could be made as precise as needed. This is possible in a Fourier transform, where the window is a set of multipliers fed into a computer, but it is much more difficult at a telescope, say. Radio antennas like horns, which use longer wavelengths than light, can corrugate their rim to obtain this effect.
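The aperture-window analogy above can be checked numerically: the far-field pattern of a 1-D aperture is (in the Fraunhofer limit) the Fourier transform of its illumination function, so a zero-padded FFT of a window gives the angular sidelobe structure. A small sketch, assuming NumPy; the helper name `peak_sidelobe_db` is my own:

```python
import numpy as np

def peak_sidelobe_db(window):
    """Peak sidelobe level, in dB below the mainlobe, of a 1-D aperture
    illumination, computed from a zero-padded FFT (the far-field pattern)."""
    n = len(window)
    pattern = np.abs(np.fft.fft(window, 64 * n))  # fine angular sampling
    pattern /= pattern.max()
    # Walk down the mainlobe to its first null (first local minimum)
    i = 1
    while pattern[i] <= pattern[i - 1]:
        i += 1
    # Strongest sidelobe over the first half of the spectrum
    return 20 * np.log10(pattern[i:32 * n].max())

n = 512
print(peak_sidelobe_db(np.ones(n)))      # uniform aperture: ~ -13 dB (sinc)
print(peak_sidelobe_db(np.hamming(n)))   # Hamming taper: ~ -43 dB
```

The uniform aperture shows the sin(a)/a sidelobes mentioned above, only about 13 dB down, while a Hamming-tapered illumination pushes them to roughly −43 dB, at the cost of a wider mainlobe.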