
I've been reading this article: http://aix1.uottawa.ca/~jkhoury/haar.htm, which explains the Haar Wavelet Transform.

At a certain point, the author says:

... Since the transformation matrix $W$ is the product of three other matrices, one can normalize $W$ by normalizing each of the three matrices. The normalized version of $W$ is ...

and provides the final matrix.

But he doesn't give the exact normalization procedure he used. Perhaps someone with a deep understanding of the Haar Wavelet Transform can explain to me how the matrices were normalized to obtain the final result.

I've been searching the Internet for matrix normalization but couldn't find anything suitable. I also contacted the author of the article but haven't received a response so far.

  • does it really matter? just throw gram-schmidt at the matrix... – Marcus Müller Oct 09 '16 at 15:28
  • It may sound surprising, but this is exactly what matters to me; otherwise I wouldn't have posted the question. – ivan.ukr Oct 09 '16 at 15:32
  • Sorry, I'm not a big specialist in this area of math, so "Gram-Schmidt" says nothing to me. If you can, please answer the original question. – ivan.ukr Oct 09 '16 at 15:34
  • so the point is that you want to normalize the orthogonal matrix. Gram-Schmidt is the standard method that is taught for orthonormalizing matrices (see your favourite undergrad math textbook or wikipedia); your matrix is already orthogonal, so my question is: Why does the used algorithm matter to you? If you only change the scaling of let's say each column vector, all algorithms should be identical, so use the one that's most intuitive. Since you're talking about matrix normalization, I just assumed you were familiar with the most basic algorithm – Marcus Müller Oct 09 '16 at 15:38
  • What I have tried so far: – ivan.ukr Oct 09 '16 at 15:47
  • I've been searching the Internet and actually didn't find much about matrix normalization. However, I found info about vector normalization (divide each vector element by the square root of the sum of the squares of the vector elements), and noticed that a matrix could be normalized this way by rows or by columns, or similarly by summing the squares of all the matrix elements and taking the square root of that. I've tried both of these methods, and neither gave me the same result as the author provides. That's why I'm asking the question here. – ivan.ukr Oct 09 '16 at 15:53
  • Why does the algorithm matter to me? Because I need to implement the 2D Haar Wavelet transform for an entire image at once, of size much greater than 8×8. Matrix multiplication will not work well there, so I want to use the averaging/differencing algorithm. I have implemented it, but so far it doesn't perform any normalization, and what I need is to get the same normalized coefficients as if I were multiplying the original image by this kind of normalized matrix. So I am trying to understand how this normalization works (see the sketch just after this comment thread). – ivan.ukr Oct 09 '16 at 15:59
  • In fact, the matrices just repeat the same averaging/differencing (or vice versa; I don't know which came first, the chicken or the egg), so I suppose that if I understand how the matrices were normalized, I can apply the same normalization during the averaging/differencing. – ivan.ukr Oct 09 '16 at 16:03
  • Probably this is something specific to the Haar Wavelet; $\sqrt{2}$ should play a part here somehow. – ivan.ukr Oct 09 '16 at 16:06
  • I also tried multiplying each matrix by $1/\sqrt{2}$ and then multiplying the matrices together, but this doesn't give the same matrix as provided in the article. So I want to understand the correct way of normalizing, and how the final matrix in the article I've mentioned was obtained. This is the key to writing a correct implementation of the HWT for larger images. – ivan.ukr Oct 09 '16 at 16:21
  • You need to read up on the Gram-Schmidt method. Understanding it will probably solve all your questions. It's taught to every engineering undergrad (at least every engineering undergrad I know) for a reason around here! – Marcus Müller Oct 09 '16 at 16:39
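
For reference, here is a minimal sketch of the $\sqrt{2}$-normalized averaging/differencing step discussed in the comments above (Python with NumPy; the function names are hypothetical, not from the article). Dividing the pairwise sums and differences by $\sqrt{2}$ instead of $2$ makes each pass orthonormal, i.e. norm-preserving:

```python
import numpy as np

def haar_step(x):
    """One averaging/differencing pass, normalized by sqrt(2).

    Dividing by sqrt(2) instead of 2 makes the pass orthonormal,
    so it preserves the Euclidean norm of the signal.
    """
    x = np.asarray(x, dtype=float)
    averages = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    differences = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.concatenate([averages, differences])

def haar_transform(x):
    """Full 1-D transform: re-run the step on the averages part."""
    y = np.asarray(x, dtype=float).copy()
    n = len(y)
    while n > 1:          # assumes len(x) is a power of two
        y[:n] = haar_step(y[:n])
        n //= 2
    return y
```

Applied to every row and then every column of an image, this scaling should yield the same coefficients as multiplication by the normalized matrix derived in the answer below.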

1 Answer


After a few quick calculations, it seems to me that the trouble comes from poor notation for the roots in your reference. If you read, in the final normalized matrix, $\sqrt{8/64}$ and $\sqrt{2/4}$ instead of $\sqrt{8}/64$ and $\sqrt{2}/4$ (along with the $\pm$ signs), then the final result is correct. Numerically, $\sqrt{8/64}\approx 0.354$ while $\sqrt{8}/64\approx 0.044$, so the two readings are quite different.

The matrices $V_i$ have mutually orthogonal columns. To normalize them, you only need to multiply each one by a diagonal matrix $D_i$ made from the inverses of the norms of its columns.

For $V_1$, the column norms are all equal to $\sqrt{(1/2)^2+(1/2)^2}=\sqrt{1/2}$. Hence, you can multiply $V_1$ by the diagonal matrix $$D_1= \operatorname{Diag} \{\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,\sqrt{2}\}$$ and get an orthonormal matrix. Similarly, you get $$D_2= \operatorname{Diag} \{\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,1,\,1,\,1,\,1\}$$ and $$D_3= \operatorname{Diag} \{\sqrt{2},\,\sqrt{2},\,1,\,1,\,1,\,1,\,1,\,1\}\,.$$
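
As a quick illustration, here is a minimal Python/NumPy sketch of this column-by-column normalization (the helper `column_normalize` is a made-up name, not anything from the article):

```python
import numpy as np

def column_normalize(V):
    """Return (V @ D, D), where D = diag(1 / column norms).

    If V has mutually orthogonal columns, V @ D is orthonormal.
    """
    norms = np.linalg.norm(V, axis=0)   # Euclidean norm of each column
    D = np.diag(1.0 / norms)
    return V @ D, D

# 2x2 example in the spirit of V_1: each column has norm sqrt(1/2),
# so D comes out as diag(sqrt(2), sqrt(2)).
V = np.array([[0.5,  0.5],
              [0.5, -0.5]])
Vn, D = column_normalize(V)
print(np.round(D, 4))         # diag(1.4142, 1.4142)
print(np.round(Vn.T @ Vn))    # identity: columns are now orthonormal
```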

Finally, since diagonal matrices commute, you can gather the three factors into a single matrix $D=D_1 D_2 D_3$; for instance, its first entry is $\sqrt{2}\cdot\sqrt{2}\cdot\sqrt{2}=2\sqrt{2}$:

$$D= \operatorname{Diag} \{2\sqrt{2},\,2\sqrt{2},\,2,\,2,\,\sqrt{2},\,\sqrt{2},\,\sqrt{2},\,\sqrt{2}\}\,.$$

If you now multiply the matrix $W$ (the one with the $1/8$ entries)

$$W=\begin{pmatrix}
\tfrac{1}{8} & \tfrac{1}{8} & \tfrac{1}{4} & 0 & \tfrac{1}{2} & 0 & 0 & 0\\
\tfrac{1}{8} & \tfrac{1}{8} & \tfrac{1}{4} & 0 & -\tfrac{1}{2} & 0 & 0 & 0\\
\tfrac{1}{8} & \tfrac{1}{8} & -\tfrac{1}{4} & 0 & 0 & \tfrac{1}{2} & 0 & 0\\
\tfrac{1}{8} & \tfrac{1}{8} & -\tfrac{1}{4} & 0 & 0 & -\tfrac{1}{2} & 0 & 0\\
\tfrac{1}{8} & -\tfrac{1}{8} & 0 & \tfrac{1}{4} & 0 & 0 & \tfrac{1}{2} & 0\\
\tfrac{1}{8} & -\tfrac{1}{8} & 0 & \tfrac{1}{4} & 0 & 0 & -\tfrac{1}{2} & 0\\
\tfrac{1}{8} & -\tfrac{1}{8} & 0 & -\tfrac{1}{4} & 0 & 0 & 0 & \tfrac{1}{2}\\
\tfrac{1}{8} & -\tfrac{1}{8} & 0 & -\tfrac{1}{4} & 0 & 0 & 0 & -\tfrac{1}{2}
\end{pmatrix}$$

with that matrix $D$, you get the final normalized matrix:

$$W=\begin{pmatrix}
\sqrt{8/64} & \sqrt{8/64} & \tfrac{1}{2} & 0 & \sqrt{2/4} & 0 & 0 & 0\\
\sqrt{8/64} & \sqrt{8/64} & \tfrac{1}{2} & 0 & -\sqrt{2/4} & 0 & 0 & 0\\
\sqrt{8/64} & \sqrt{8/64} & -\tfrac{1}{2} & 0 & 0 & \sqrt{2/4} & 0 & 0\\
\sqrt{8/64} & \sqrt{8/64} & -\tfrac{1}{2} & 0 & 0 & -\sqrt{2/4} & 0 & 0\\
\sqrt{8/64} & -\sqrt{8/64} & 0 & \tfrac{1}{2} & 0 & 0 & \sqrt{2/4} & 0\\
\sqrt{8/64} & -\sqrt{8/64} & 0 & \tfrac{1}{2} & 0 & 0 & -\sqrt{2/4} & 0\\
\sqrt{8/64} & -\sqrt{8/64} & 0 & -\tfrac{1}{2} & 0 & 0 & 0 & \sqrt{2/4}\\
\sqrt{8/64} & -\sqrt{8/64} & 0 & -\tfrac{1}{2} & 0 & 0 & 0 & -\sqrt{2/4}
\end{pmatrix}$$

provided the square roots are read as above. For instance, you can recover $1/8\times 2\sqrt{2} = \frac{\sqrt{8}}{\sqrt{8^2}} =\sqrt{8/64}$.
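
As a sanity check, a few lines of Python/NumPy (a sketch based on the matrices written out above, not code from the article) confirm that $W D$ has orthonormal columns and reproduces $\sqrt{8/64}$:

```python
import numpy as np

# W as written out above (the matrix with the 1/8 columns).
W = np.array([
    [1/8,  1/8,  1/4,  0.0,  1/2,  0.0,  0.0,  0.0],
    [1/8,  1/8,  1/4,  0.0, -1/2,  0.0,  0.0,  0.0],
    [1/8,  1/8, -1/4,  0.0,  0.0,  1/2,  0.0,  0.0],
    [1/8,  1/8, -1/4,  0.0,  0.0, -1/2,  0.0,  0.0],
    [1/8, -1/8,  0.0,  1/4,  0.0,  0.0,  1/2,  0.0],
    [1/8, -1/8,  0.0,  1/4,  0.0,  0.0, -1/2,  0.0],
    [1/8, -1/8,  0.0, -1/4,  0.0,  0.0,  0.0,  1/2],
    [1/8, -1/8,  0.0, -1/4,  0.0,  0.0,  0.0, -1/2],
])

s2 = np.sqrt(2.0)
D = np.diag([2*s2, 2*s2, 2.0, 2.0, s2, s2, s2, s2])
Wn = W @ D

print(np.allclose(Wn.T @ Wn, np.eye(8)))   # True: columns are orthonormal
print(Wn[0, 0], np.sqrt(8/64))             # both 0.35355... = sqrt(8/64)
```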

  • Thanks Laurent! I have checked this, and the result really matches if I make the changes you've suggested. – ivan.ukr Oct 10 '16 at 13:38
  • @ipx I have verified that too with a few lines of code. You can also have a look at http://dsp.stackexchange.com/q/25479/15892 or http://dsp.stackexchange.com/q/670/15892 – Laurent Duval Oct 10 '16 at 13:48