Image Reconstruction: Phase vs. Magnitude

  • Thread starter: ramdas
  • Tags: Image, Magnitude
AI Thread Summary
The discussion centers on the differences in image reconstruction using magnitude and phase spectra. The magnitude spectrum emphasizes low-frequency components, leading to a smoother image, while the phase spectrum highlights high-frequency components, such as edges and lines, resulting in a sharper image. This contrast arises because the magnitude and phase contain distinct information; together, they reconstruct the original image. The visibility of edges in the phase-only image is attributed to how high-frequency components interact at discontinuities, while low-frequency variations contribute to an overall average brightness. Understanding these concepts is crucial for grasping the complexities of image reconstruction in the frequency domain.
ramdas
Figure 1(c) shows the Test image reconstructed from the MAGNITUDE spectrum only. We can say that the intensity values of LOW frequency pixels are comparatively higher than those of HIGH frequency pixels.

Figure 1(d) shows the Test image reconstructed from the PHASE spectrum only. We can say that the intensity values of HIGH frequency pixels (edges, lines) are comparatively higher than those of LOW frequency pixels.

Why is this seemingly magical contradiction of intensity change (or exchange) present between the Test image reconstructed from the MAGNITUDE spectrum only and the Test image reconstructed from the PHASE spectrum only, which when combined together form the original Test image?
 

Attachments: xx.PNG (71 KB)
The magnitude and the phase contain different information from each other, and together they contain the same information as the original complex image. So if the original image has information A, B, C, and D, and the magnitude spectrum has A and D, then B and C must be in the phase spectrum.
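To see this numerically, here is a minimal numpy sketch (the random array is my own stand-in for the Test image, since the attachment isn't reproduced here):

```python
import numpy as np

# Stand-in for the Test image (any 2-D real array will do).
rng = np.random.default_rng(0)
img = rng.random((64, 64))

F = np.fft.fft2(img)
mag, phase = np.abs(F), np.angle(F)

# Figure 1(c): magnitude-only reconstruction (all phases set to zero).
img_mag = np.real(np.fft.ifft2(mag))

# Figure 1(d): phase-only reconstruction (all magnitudes set to one).
img_phase = np.real(np.fft.ifft2(np.exp(1j * phase)))

# Magnitude and phase together carry all of the information:
# recombining them recovers the original image to rounding error.
img_both = np.real(np.fft.ifft2(mag * np.exp(1j * phase)))
assert np.allclose(img_both, img)
```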
 
@Mentor sir, but can you tell me in some detail what is actually happening in figures 1(c) and 1(d)?
 
To understand what is happening in 1c, and to account for the lack of any apparent image: it is the result of the sum of a whole set of spatial harmonics that are only 'in phase' at the point (0,0), where they produce a massive maximum (their phases have all been set to zero).
In 1d, you are starting with a whole lot of spatial harmonics of equal amplitude, producing a more or less uniform brightness over the image. But there are certain places in the original scene (edges) where the relative phases combine to produce a sum with big discontinuities. Even when the amplitudes of the harmonics are all kept the same, this still gives identifiable and abrupt changes in the resultant amplitude in the same places as the edges in the original.
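A one-dimensional sketch of both effects, with a toy step signal of my own choosing (a little noise is added so that no spectral bin is exactly zero):

```python
import numpy as np

# Toy 1-D "image row": a step edge between n = 100 and n = 180, plus noise.
rng = np.random.default_rng(0)
sig = np.zeros(256)
sig[100:180] = 1.0
sig += 0.01 * rng.standard_normal(256)

F = np.fft.fft(sig)

# Magnitude only (all phases zeroed): every harmonic peaks together at
# n = 0, so the reconstruction has its maximum there, as in Figure 1(c).
mag_only = np.real(np.fft.ifft(np.abs(F)))
print(mag_only[0], np.abs(mag_only[1:]).max())  # n = 0 dominates

# Phase only (all magnitudes set to one): roughly uniform brightness,
# except near the discontinuities, where the relative phases conspire
# to produce abrupt changes, as in Figure 1(d).
phase_only = np.real(np.fft.ifft(np.exp(1j * np.angle(F))))
dev = np.abs(phase_only - phase_only.mean())
print(dev[97:104].max(), dev[130:150].max())  # near the edge vs. flat interior
```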

It's interesting to note that the signal analysis done in the eye is very sensitive to phase distortion of a signal (the phase is what we are looking at, because it tells you where the edges are), whereas the ear is much more sensitive to amplitude-frequency distortion (you can hear speech, for instance, when the audio signal has been subjected to all sorts of phase distortion in audio compression systems).
 
Image d can be understood in terms of a filter. Since the phase is preserved and the magnitude is set to 1, this image is the same as the original image passed through a filter that is inversely proportional to the k-space magnitude. Since the k-space magnitude is high in the center and low at the edges of k-space, this amounts to a high-pass filter. Visually, you can also see the high-pass filtering in the preservation of edges and the loss of contrast.

I don't know a simple way to understand image c. EDIT: I just noticed sophiecentaur's approach for understanding image c, which seems good to me.
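Here is a small numpy sketch of that filter reading (the smooth toy image, a Gaussian blob, is my own choice; it makes |F| peak at DC and decay with frequency, roughly as natural images do):

```python
import numpy as np

# Toy image: a smooth Gaussian blob, so the spectrum magnitude |F|
# is largest at DC and falls off toward high spatial frequencies.
y, x = np.mgrid[-32:32, -32:32]
img = np.exp(-(x**2 + y**2) / 8.0)

F = np.fft.fft2(img)
H = 1.0 / np.maximum(np.abs(F), 1e-12)  # filter ~ 1/|F|, guarded near zero bins

# The phase-only reconstruction is, by construction, the original image
# passed through the filter H (F*H = F/|F| = exp(1j*angle(F)) wherever
# |F| is not vanishingly small).
phase_only = np.real(np.fft.ifft2(F * H))

# H is tiny at DC and orders of magnitude larger at high frequencies,
# i.e. a high-pass filter.
print(H[0, 0], H[16, 16])
```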
 
Whoops. Where did that post go? I was all ready to have a go at answering it.

I am not sure what was meant by low and high frequency pixels. A pixel is the same width over all the picture. I think you could use the term low and high frequency spatial variation.
 
Question edited. Added equations to the previous post...

Figure 1(c) shows the Test image reconstructed from the MAGNITUDE spectrum only. We can say that the intensity values of LOW frequency pixels are comparatively higher than those of HIGH frequency pixels. Here f(x,y) is the image function and F(u,v) is its 2D Fourier transform; the magnitude-only reconstruction is


$$f_{\text{mag}}(x,y)=\sum_{u=0}^{M-1}\sum_{v=0}^{N-1}|F(u,v)|\,e^{\,j2\pi ux/M}\,e^{\,j2\pi vy/N}\qquad(1)$$

Figure 1(d) shows the Test image reconstructed from the PHASE spectrum only. We can say that the intensity values of HIGH frequency pixels (edges, lines) are comparatively higher than those of LOW frequency pixels. The phase-only reconstruction is

$$f_{\text{phase}}(x,y)=\sum_{u=0}^{M-1}\sum_{v=0}^{N-1}e^{\,j\angle F(u,v)}\,e^{\,j2\pi ux/M}\,e^{\,j2\pi vy/N}\qquad(2)$$

I want to ask: in the phase-only reconstruction, why do I get only edges and lines, and not the low-frequency components? From equation (2) I get no indication that only edge-like features should be emphasized...
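One way to see it directly from equation (2) (this is just Dale's filter reading from earlier in the thread, rewritten in the notation of the question): each phase-only term is the original Fourier coefficient divided by its own magnitude,

$$e^{\,j\angle F(u,v)}=\frac{F(u,v)}{|F(u,v)|}=F(u,v)\,H(u,v),\qquad H(u,v)=\frac{1}{|F(u,v)|},$$

so the phase-only reconstruction is the full inverse transform with every coefficient F(u,v) reweighted by H(u,v), i.e. the original image passed through the filter H. For a typical image, |F(u,v)| is largest near (0,0) and falls off at high spatial frequencies, so H suppresses the smooth low-frequency content and boosts the high-frequency content, and edges and lines are exactly the places where those boosted high-frequency components add coherently.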
 

sophiecentaur said:
Whoops. Where did that post go? I was all ready to have a go at answering it.

I am not sure what was meant by low and high frequency pixels. A pixel is the same width over all the picture. I think you could use the term low and high frequency spatial variation.

Sir, I have added the post again. It was deleted yesterday by mistake...
 
To account for the visibility of edges, you need to realize that, in the frequency domain, the high and low frequency components are not present only at the edges; they are everywhere (that's what Fourier is all about). The reason that you 'see' a step or impulse is that the components happen to add in those places to produce a visible (but very slight) change in brightness. Over most of the picture, the frequency components add up to produce an 'average' brightness with no visible change, hence the mid-grey appearance.
You need to take into account our subjective appreciation of a scene as well as the Maths involved.
Imo, the reason that it works for our eyes is that our vision system is constantly searching for edges and outlines. Out in the wild, it's the best way to recognise food and threat. It's the outline of an elephant against a grey wall that allows us to spot it, not the slight change in greyness.
Likewise, we are good at spotting small amounts of rapid movement but we can ignore the variation in light levels as a cloud goes over the sun etc.

Having written all this, I still have to agree with you that the results you showed are not what you'd expect intuitively. When I was shown the effect, years ago, I was just as confused as you have been!
 