PabloBot
I'm looking at this link: HaarWaveletTransform. Given an array of N sample points, the transform subdivides it into two arrays of size N/2:
- Array1: the averages of adjacent sample points.
- Array2: the finite differences between adjacent sample points.
You can then apply this recursively, k times. In the end you get a low-resolution averaged image plus multiple levels of Array2, which let you invert the operation and recover the original data.
After the transform you still have as many data points as you started with. So my questions are:
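To make the averaging/differencing step concrete, here is a minimal sketch of one level of the transform as described above. This is the unnormalized form (halved sums and differences); many references instead scale by 1/√2 so the transform is orthonormal, but the invertibility is the same either way:

```python
def haar_step(x):
    """One level of the Haar transform: pairwise averages and differences.

    Assumes len(x) is even. Returns (Array1, Array2) from the description:
    Array1 holds adjacent averages, Array2 holds adjacent half-differences.
    """
    averages = [(x[2*i] + x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    differences = [(x[2*i] - x[2*i + 1]) / 2 for i in range(len(x) // 2)]
    return averages, differences

def haar_inverse_step(averages, differences):
    """Invert one level: a + d and a - d recover each original pair."""
    x = []
    for a, d in zip(averages, differences):
        x.extend([a + d, a - d])
    return x

data = [4, 6, 10, 12, 8, 6, 5, 5]
avg, diff = haar_step(data)
# avg  == [5.0, 11.0, 7.0, 5.0]   (Array1: adjacent averages)
# diff == [-1.0, -1.0, 1.0, 0.0]  (Array2: adjacent half-differences)
assert haar_inverse_step(avg, diff) == data
```

Applying `haar_step` again to `avg` (and keeping each level's `diff`) gives the recursive multi-level decomposition; note the total count of stored numbers never changes, which is exactly what the questions below are about.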
- How does this save memory? I thought this was supposed to help with compression?
- What is the point? Are some operations easier when you have the downsampled image and multiple levels of Array2?
- How do you get these filtering formulas? I thought that for the discrete wavelet transform you would have to solve some matrix equation to compute the coefficients of the wavelet basis functions.