
How can we represent an image using basis images?

ramdas
#1
Jun16-14, 02:28 PM
P: 24
I have read that using the Fourier transform we can decompose any arbitrary image into orthogonal basis images and reconstruct it back.

But I don't understand terms like "orthogonal" and "basis image".

So can anybody share their ideas on the above terms, with an example?
davenn
#2
Jun16-14, 04:19 PM
Sci Advisor
PF Gold
P: 2,508
hi ramdas

It's not an area I know much about, but here's one website that may help you ...

http://www.cc.gatech.edu/~phlosoft/transforms/

cheers
Dave
Andy Resnick
#3
Jun17-14, 07:24 AM
Sci Advisor
P: 5,513
Quote by ramdas:
I have read that using the Fourier transform we can decompose any arbitrary image into orthogonal basis images and reconstruct it back.

But I don't understand terms like "orthogonal" and "basis image".

So can anybody share their ideas on the above terms, with an example?
If I understand you correctly, the 'orthogonal basis images' are 2-D periodic functions: colloquially, a 2-D function can be decomposed into a discrete or continuous Fourier series, each term of the form A_mn·sin(nx)·sin(my), where n and m are the spatial frequencies and A_mn is the amplitude of that term. Most tutorials present the 1-D version for clarity.
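To make that concrete, here is a minimal numpy sketch (my own example, not from the thread; numpy and the 8x8 random stand-in "image" are assumptions). The 2-D FFT coefficients are the amplitudes of a set of complex-exponential basis images, and summing amplitude × basis image over all frequencies rebuilds the original image exactly.

Code:
import numpy as np

# Decompose a small grayscale "image" into its 2-D discrete Fourier basis
# and reconstruct it.  The basis images are exp(2j*pi*(u*x/M + v*y/N));
# their amplitudes are the FFT coefficients.
img = np.random.rand(8, 8)          # stand-in for any 8x8 grayscale image

coeffs = np.fft.fft2(img)           # amplitude of each basis image

# Reconstruct by summing amplitude * basis image over all frequencies (u, v).
M, N = img.shape
x = np.arange(M)[:, None]
y = np.arange(N)[None, :]
recon = np.zeros((M, N), dtype=complex)
for u in range(M):
    for v in range(N):
        basis = np.exp(2j * np.pi * (u * x / M + v * y / N))
        recon += coeffs[u, v] * basis
recon /= (M * N)                    # inverse-transform normalisation

print(np.allclose(recon.real, img))  # True: the basis images rebuild the image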

A.T.
#4
Jun17-14, 07:54 AM
P: 3,916
How can we represent an image using basis images?

See also this video at 3:30

Born2bwire
#5
Jun17-14, 09:41 AM
Sci Advisor
PF Gold
P: 1,765
In a very simplified sense, we define a set of functions that together can be used to reproduce any image. Realistically, this requires an infinite set, so we truncate the number of functions to get a good approximation. These functions are the bases, and they act like interpolating functions. Being orthogonal means that each one describes an aspect of the image that is independent of the others; there is no "overlap" between the information carried by one basis function and the rest.

So we have a set of interpolating functions that efficiently describe any image we may have to a good approximation. Then we only need to know the amplitudes of these functions to reconstruct an image.

There's more to it than this, but that's a very basic explanation. You can also note that the Cartesian unit vectors form a basis for Cartesian space. We have three basis vectors: x, y, and z. They are orthogonal because the dot products between them are all zero. If we want to describe the location of any point, we simply state the coefficients of the point with respect to the three basis vectors.
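As a small illustration of that "no overlap" idea (my own sketch, not from the post; the frequencies and grid size are arbitrary choices): the dot products between the Cartesian unit vectors are zero, and the same test, a sum of element-wise products, gives zero between two sine basis images of different frequency.

Code:
import numpy as np

# 1) Cartesian basis vectors: pairwise dot products are all zero.
x_hat = np.array([1.0, 0.0, 0.0])
y_hat = np.array([0.0, 1.0, 0.0])
z_hat = np.array([0.0, 0.0, 1.0])
print(np.dot(x_hat, y_hat), np.dot(y_hat, z_hat), np.dot(x_hat, z_hat))  # 0.0 0.0 0.0

# 2) Two sine basis images with different frequencies on an 8x8 grid:
#    the "dot product" of images is the sum of element-wise products.
n = np.arange(8)
b1 = np.outer(np.sin(2 * np.pi * 1 * n / 8), np.sin(2 * np.pi * 1 * n / 8))
b2 = np.outer(np.sin(2 * np.pi * 2 * n / 8), np.sin(2 * np.pi * 2 * n / 8))
print(np.sum(b1 * b2))   # ~0: the two basis images carry non-overlapping information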
sophiecentaur
#6
Jun17-14, 05:00 PM
Sci Advisor
PF Gold
P: 11,942
It is easier (just more familiar, I think) to think back to the way that the temporal variations of an audio signal (what you see on an oscilloscope, which shows things in two dimensions: volts and time) can be transformed into a set of sinusoidal tones (the frequency domain). That's in one dimension. An image can have its brightness (over two dimensions) represented as a three-dimensional surface.

To get a representation of this image in terms of sinusoidal variations of brightness with distance (the Fourier transform gives the spatial frequency domain), you have to assume that the pattern of brightness over the image repeats itself forever in every direction (like wallpaper). This is the same assumption that is made when the FFT is used for audio (etc.) work. The result is a Discrete Fourier Transform (DFT), which contains discrete frequency components.

With a static TV picture, a DFT can be used over the whole picture or over square sub-sections of the picture (blocks). For a moving picture, subsequent frames are not the same, and doing a simple DFT will produce artefacts such as jerkiness and smearing.

Digital signal processing using sinusoidal basis functions is inconveniently long-winded, and it is normal to use different functions, which end up requiring lower transmission bit rates. The Discrete Cosine Transform (DCT) can be used, and there are some very fast algorithms for computing it; this is the basis of JPEG processing. (Other functions are available, as they say.) The secret is to choose a set of functions that produce the least perceptible distortions, and to reduce the errors (visible block boundaries) when moving from one block to the next while using as few components as possible.
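As a rough illustration of the block-transform idea (my own sketch, not from the post; it assumes scipy's DCT routines and uses a smooth 8x8 ramp as a stand-in for a pixel block): transform the block, keep only the low-frequency coefficients, and transform back to get a good approximation from far fewer numbers.

Code:
import numpy as np
from scipy.fft import dctn, idctn   # 2-D Discrete Cosine Transform

# JPEG-style sketch: transform an 8x8 block into its cosine basis images,
# keep only the lowest-frequency coefficients, and transform back.
block = np.outer(np.linspace(0, 1, 8), np.linspace(0, 1, 8))  # smooth 8x8 block

coeffs = dctn(block, norm='ortho')          # amplitudes of the cosine basis images

kept = np.zeros_like(coeffs)
kept[:4, :4] = coeffs[:4, :4]               # keep 16 of the 64 coefficients

approx = idctn(kept, norm='ortho')          # approximation from far fewer numbers
# Error stays small because most of the block's energy sits in the
# low-frequency terms; the discarded high-frequency terms contribute little.
print(np.max(np.abs(approx - block)))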


