
I have a function that performs a Gaussian blur on an image for a specific $\sigma$ (the standard deviation).

It first computes a kernel of size $\lceil 3\sigma \rceil$ and then convolves the image with that kernel.
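For concreteness, the kernel-building step looks roughly like this (a NumPy sketch; the function name is just illustrative):

```python
import numpy as np

def gaussian_kernel_1d(sigma):
    """Normalized 1D Gaussian kernel, truncated at radius ceil(3*sigma)."""
    radius = int(np.ceil(3 * sigma))
    x = np.arange(-radius, radius + 1)          # sample positions in pixels
    kernel = np.exp(-x**2 / (2 * sigma**2))     # unnormalized Gaussian
    return kernel / kernel.sum()                # normalize so weights sum to 1
```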

However, I would like to specify the blur radius in pixels rather than $\sigma$.

I suppose the blur radius (in pixels) is just $\sigma^{2}$, since this denotes the variance of a random variable.

Is that right? Can the same thought be extended to 2D?

UPDATE:

The problem is that I need to do things like building a Gaussian pyramid (a successively blurred and downsampled image).

When the image is downsampled to 1/2 of its width, I suppose I need a Gaussian blur of radius 2 pixels ($\sigma=\sqrt{2}$?), and for 1/4 subsampling I would need a blur of 4 pixels ($\sigma=2$?)... But I am not sure about that...
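For illustration, this is the kind of loop I mean (a NumPy/SciPy sketch; the per-level `sigma` is exactly the value I am unsure about):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_pyramid(image, levels, sigma=1.0):
    """Blur with a fixed sigma (in pixels), then halve each axis, repeatedly."""
    pyramid = [image]
    for _ in range(levels - 1):
        blurred = gaussian_filter(pyramid[-1], sigma=sigma)  # sigma value is the open question
        pyramid.append(blurred[::2, ::2])                    # downsample by 2 in each dimension
    return pyramid
```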

Libor

2 Answers


Yes, you can always use pixel units for $\sigma$. In 2-D you will have $\sigma_X$ and $\sigma_Y$.
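For instance, with SciPy (whose `gaussian_filter` accepts one $\sigma$ per axis, in pixel units), an anisotropic blur looks like this; the image here is just a random placeholder:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

image = np.random.rand(128, 128)
# one sigma per axis, in pixel units: (sigma_y, sigma_x)
blurred = gaussian_filter(image, sigma=(2.0, 3.5))
```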


The standard deviation $\sigma$ is itself the appropriate linear scale for a Gaussian. For example, in 1D the Gaussian is $f(x;\sigma) \propto e^{-x^2/(2\sigma^2)}$, which depends on $x$ only through the ratio $x/\sigma$, so $\sigma$ has the same units as $x$. As Arrigo notes, these units can be pixel units.

The 2 in the exponent of $\sigma^2$ has nothing to do with the dimensionality: it is the same in 1D, 2D, or $n$D.

The use of $t=\sigma^2$ to index the "scale" in scale-space is more for (1) mathematical convenience and (2) the connection to the time scale of diffusion.
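One consequence of the $t=\sigma^2$ parameterization that is relevant to the pyramid in the question: Gaussian blurs compose by adding variances,
$$G_{\sigma_1} * G_{\sigma_2} = G_{\sqrt{\sigma_1^2 + \sigma_2^2}},$$
so in terms of $t$ successive blurs simply add, $t = t_1 + t_2$.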

GeoMatt22
  • Thanks for the clear answer. I have actually tested it in my software, and indeed using $\sigma$ instead of $\sigma^{2}$ yields the expected results. – Libor May 15 '12 at 12:06