I am trying to understand the four 1D convolution operations involved in the implementation of the Laplacian of Gaussian (LoG). I have read the answers to What Is the Difference between Difference of Gaussian, Laplace of Gaussian, and Mexican Hat Wavelet?, and I am also reading ME5286 - Lecture 7 (2017) - Edge Detection (see slides 62 and 63). My current understanding is:
1. Pre-compute the LoG and separate it into 1D filters in x and y: gxx(x) and gyy(y).
2. Take the Gaussian (g) and separate it into g(x) and g(y).
3. First apply g(y) and gyy(y) to the image. That makes two 1D convolutions.
4. Then apply g(x) to the result from gyy(y), and gxx(x) to the result from g(y). That adds two more 1D convolutions, making the total four (see the sketch after this list).
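
To make the scheme concrete, here is a minimal sketch of those four 1D convolutions as I understand them, assuming NumPy and SciPy are available; the function name `log_separable`, the kernel construction, and the `sigma`/`radius` parameters are my own illustration and are not taken from the lecture slides.

```python
# Sketch: LoG via four 1D convolutions (my own illustration, not from the slides).
import numpy as np
from scipy.ndimage import convolve1d

def log_separable(image, sigma=2.0, radius=None):
    """Apply a LoG filter to a 2D image using four 1D convolutions."""
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)

    # 1D Gaussian g and its second derivative gxx
    g = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    gxx = (x**2 / sigma**4 - 1 / sigma**2) * g

    # Branch 1: second derivative along y, then smoothing along x
    branch1 = convolve1d(image, gxx, axis=0)
    branch1 = convolve1d(branch1, g, axis=1)

    # Branch 2: smoothing along y, then second derivative along x
    branch2 = convolve1d(image, g, axis=0)
    branch2 = convolve1d(branch2, gxx, axis=1)

    # The sum of the two branches gives the 2D LoG response
    return branch1 + branch2
```

The two branches are summed because the 2D LoG kernel separates as gxx(x)·g(y) + g(x)·gyy(y), which is what (as I understand it) makes the four-convolution scheme equivalent to convolving with the full 2D LoG.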
Questions:
1. Is the above understanding correct?
2. How is this the same as LoG?
3. Also, the diagram in slide #62 contradicts slide #63 in the PDF (Saad J. Bedros - ME5286 Lecture 7 2017 Edge Detection). Which one is correct?
Edit: It's actually slide #61 and slide #62.

Edit 2: Please see the very last comment, posted Dec 19, 2018, on the first answer.