The Essential Connection and Difference between LoG and DoG

1. Laplacian of Gaussian (LoG)

As the Laplace operator may detect noise (isolated, out-of-range pixels) as well as edges, it may be desirable to smooth the image first by convolution with a Gaussian kernel of width $\sigma$

\begin{displaymath}G_{\sigma}(x,y)=\frac{1}{2\pi\sigma^2}\exp\left(-\frac{x^2+y^2}{2\sigma^2}\right)\end{displaymath}

to suppress the noise before using Laplace for edge detection: 

\begin{displaymath}\bigtriangleup[G_{\sigma}(x,y) * f(x,y)]=[\bigtriangleup G_{\sigma}(x,y)] * f(x,y)=LoG*f(x,y)\end{displaymath}

The first equal sign is due to the fact that 

\begin{displaymath}\frac{d}{dt}[h(t)*f(t)]=\frac{d}{dt}\int f(\tau)h(t-\tau)\,d\tau
=\int f(\tau)\frac{d}{dt}h(t-\tau)\,d\tau=f(t)*\frac{d}{dt}h(t)\end{displaymath}

So we can obtain the Laplacian of Gaussian $\bigtriangleup G_{\sigma}(x,y)$ first and then convolve it with the input image. To do so, first consider 

\begin{displaymath}\frac{\partial}{\partial x} G_{\sigma}(x,y)=\frac{\partial}{\partial x}e^{-(x^2+y^2)/2\sigma^2}=-\frac{x}{\sigma^2}e^{-(x^2+y^2)/2\sigma^2}\end{displaymath}

and 

\begin{displaymath}\frac{\partial^2}{\partial x^2} G_{\sigma}(x,y)=\left(\frac{x^2}{\sigma^4}-\frac{1}{\sigma^2}\right)e^{-(x^2+y^2)/2\sigma^2}=\frac{x^2-\sigma^2}{\sigma^4}e^{-(x^2+y^2)/2\sigma^2}\end{displaymath}

Note that for simplicity we omitted the normalizing coefficient $1/(2\pi\sigma^2)$. Similarly we can get

\begin{displaymath}\frac{\partial^2}{\partial y^2} G_{\sigma}(x,y)=\frac{y^2-\sigma^2}{\sigma^4}e^{-(x^2+y^2)/2\sigma^2}\end{displaymath}

Now we have LoG as an operator or convolution kernel defined as 

\begin{displaymath}LoG \stackrel{\triangle}{=}\bigtriangleup G_{\sigma}(x,y)=\frac{\partial^2}{\partial x^2}G_{\sigma}(x,y)+\frac{\partial^2}{\partial y^2}G_{\sigma}(x,y)=\frac{x^2+y^2-2\sigma^2}{\sigma^4}e^{-(x^2+y^2)/2\sigma^2}\end{displaymath}

The Gaussian $G(x,y)$ and its first and second derivatives $G'(x,y)$ and $\bigtriangleup G(x,y)$ are shown here:

[Figure: the Gaussian and its first and second derivatives (LoG.gif, LoG_plot.gif)]

This 2-D LoG can be approximated by a 5 by 5 convolution kernel such as 

\begin{displaymath}\left[ \begin{array}{ccccc}0 & 0 & 1 & 0 & 0 \\0 & 1 & 2 & 1 & 0 \\1 & 2 & -16 & 2 & 1 \\0 & 1 & 2 & 1 & 0 \\0 & 0 & 1 & 0 & 0 \end{array} \right]\end{displaymath}

Kernels of other sizes can be obtained by sampling the continuous expression of LoG given above. However, make sure that the sum (or average) of all elements of the kernel is zero (similar to the Laplace kernel), so that the convolution result over a homogeneous region is always zero.
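As an illustration, here is a minimal Python/NumPy sketch of this sampling procedure (the function name and the default size and $\sigma$ are my own choices, not from the lecture):

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Sample the continuous LoG expression on an odd size-by-size grid
    and shift it so that its elements sum to zero."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x**2 + y**2
    # (x^2 + y^2 - 2*sigma^2) / sigma^4 * exp(-(x^2 + y^2) / (2*sigma^2))
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()   # enforce zero sum for zero response on flat regions

kernel = log_kernel()
print(abs(kernel.sum()) < 1e-12)   # True, up to floating-point error
```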

The edges in the image can be obtained by these steps (a sketch of the zero-crossing step follows the list):

  • Apply LoG to the image
  • Detect zero-crossings in the filtered result
  • Threshold the zero-crossings to keep only the strong ones (large difference between the positive maximum and the negative minimum)
The last step is needed to suppress the weak zero-crossings that are most likely caused by noise.
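A minimal sketch of the zero-crossing and thresholding steps (the function name and the neighbor-pair test are my own simplification; `scipy.ndimage.convolve` is assumed for applying the kernel):

```python
import numpy as np
from scipy.ndimage import convolve

def zero_crossings(response, threshold):
    """Mark pixels where the filter response changes sign between
    horizontal or vertical neighbors and the swing exceeds threshold."""
    edges = np.zeros(response.shape, dtype=bool)
    # horizontal neighbor pairs: sign change and a strong swing
    h = (response[:, :-1] * response[:, 1:] < 0) & \
        (np.abs(response[:, :-1] - response[:, 1:]) > threshold)
    # vertical neighbor pairs
    v = (response[:-1, :] * response[1:, :] < 0) & \
        (np.abs(response[:-1, :] - response[1:, :]) > threshold)
    edges[:, :-1] |= h
    edges[:-1, :] |= v
    return edges

# usage: image is any 2-D grayscale array, kernel from log_kernel() above
# edges = zero_crossings(convolve(image.astype(float), kernel), threshold=4.0)
```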

[Figure: edge detection with the LoG operator (forest_LoG.gif)]

2. Difference of Gaussian (DoG)


Similar to the Laplacian of Gaussian, the image is first smoothed by convolution with a Gaussian kernel of width $\sigma_1$

\begin{displaymath}G_{\sigma_1}(x,y)=\frac{1}{2\pi\sigma_1^2}\exp\left(-\frac{x^2+y^2}{2\sigma_1^2}\right)\end{displaymath}

to get

\begin{displaymath}g_1(x,y)=G_{\sigma_1}(x,y)*f(x,y)\end{displaymath}

With a different width $\sigma_2$, a second smoothed image can be obtained:

\begin{displaymath}g_2(x,y)=G_{\sigma_2}(x,y)*f(x,y)\end{displaymath}

We can show that the difference of these two Gaussian-smoothed images, called the difference of Gaussian (DoG), can be used to detect edges in the image:

\begin{displaymath}g_1(x,y)-g_2(x,y)=\left[G_{\sigma_1}-G_{\sigma_2}\right]*f(x,y)=DoG*f(x,y)\end{displaymath}

The DoG as an operator or convolution kernel is defined as

\begin{displaymath}DoG \stackrel{\triangle}{=}G_{\sigma_1}(x,y)-G_{\sigma_2}(x,y)=\frac{1}{2\pi}\left(\frac{1}{\sigma_1^2}e^{-(x^2+y^2)/2\sigma_1^2}-\frac{1}{\sigma_2^2}e^{-(x^2+y^2)/2\sigma_2^2}\right)\end{displaymath}

Both the 1-D and 2-D forms of $G_{\sigma_1}$, $G_{\sigma_2}$, and their difference are shown below:

As the difference between two differently low-pass filtered images, the DoG is actually a band-pass filter: it removes the high-frequency components representing noise as well as some low-frequency components representing the homogeneous areas in the image. The frequency components in the passband are assumed to be associated with the edges in the image.

The discrete convolution kernel for DoG can be obtained by approximating the continuous expression of DoG given above. Again, the sum (or average) of all elements of the kernel matrix must be zero.
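A sketch of this construction (the $\sigma_2/\sigma_1\approx 1.6$ ratio is a common choice from the literature for approximating LoG, not something stated in these notes):

```python
import numpy as np

def dog_kernel(size=9, sigma1=1.0, sigma2=1.6):
    """Sample G_sigma1 - G_sigma2 on a grid and shift it to zero sum."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x**2 + y**2
    g1 = np.exp(-r2 / (2 * sigma1**2)) / (2 * np.pi * sigma1**2)
    g2 = np.exp(-r2 / (2 * sigma2**2)) / (2 * np.pi * sigma2**2)
    k = g1 - g2
    return k - k.mean()   # zero sum, as required above
```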

Comparing this plot with the previous one, we see that the DoG curve is very similar to the LoG curve. Also, similar to the case of LoG, the edges in the image can be obtained by these steps:

  • Apply DoG to the image
  • Detect zero-crossings in the filtered result
  • Threshold the zero-crossings to keep only the strong ones (large difference between the positive maximum and the negative minimum)
The last step is needed to suppress the weak zero-crossings that are most likely caused by noise.
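Using the hypothetical `dog_kernel` and `zero_crossings` sketches from above, the whole pipeline might look like:

```python
from scipy.ndimage import convolve

# image: any 2-D grayscale array
response = convolve(image.astype(float), dog_kernel())
edges = zero_crossings(response, threshold=4.0)   # threshold tuned per image
```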

Edge detection by the DoG operator:



Lecture notes: http://fourier.eng.hmc.edu/e161/lectures/gradient/gradient.html


Background:

1.1 Gradient

The gradient (also called the Hamilton operator, $\bigtriangledown$) is a vector operator for any N-dimensional scalar function $f(\mathbf{x})$, where $\mathbf{x}=[x_1,\cdots,x_N]^T$ is an N-D vector variable. For example, when $N=3$, $f(x,y,z)$ may represent temperature, concentration, or pressure in 3-D space. The gradient of this N-D function is a vector composed of the $N$ partial derivatives:

\begin{displaymath}\bigtriangledown f(\mathbf{x})=\left[\frac{\partial f}{\partial x_1},\cdots,\frac{\partial f}{\partial x_N}\right]^T\end{displaymath}

  • The direction of the gradient vector $\bigtriangledown f$ is the direction in the N-D space along which the function $f(\mathbf{x})$ increases most rapidly.
  • The magnitude $\vert\bigtriangledown f\vert$ of the gradient is the rate of that increase.

In image processing we only consider the 2-D field:

\begin{displaymath}\bigtriangledown=\left[\frac{\partial}{\partial x},\frac{\partial}{\partial y}\right]^T\end{displaymath}

When applied to a 2-D function $f(x,y)$, this operator produces a vector function:

\begin{displaymath}\bigtriangledown f(x,y)=\left[\begin{array}{c}f_x\\ f_y\end{array}\right]\end{displaymath}

where $f_x=\partial f/\partial x$ and $f_y=\partial f/\partial y$. The direction and magnitude of $\bigtriangledown f$ are, respectively,

\begin{displaymath}\phi=\tan^{-1}\left(\frac{f_y}{f_x}\right),\qquad\vert\bigtriangledown f\vert=\sqrt{f_x^2+f_y^2}\end{displaymath}
Now we show that $f(x,y)$ increases most rapidly along the direction of $\bigtriangledown f$ and that the rate of increase equals the magnitude of $\bigtriangledown f$.

Consider the directional derivative of $f(x,y)$ along an arbitrary direction $r$:

\begin{displaymath}\frac{df}{dr}=\frac{\partial f}{\partial x}\frac{dx}{dr}+\frac{\partial f}{\partial y}\frac{dy}{dr}=f_x\cos\theta+f_y\sin\theta\end{displaymath}

This directional derivative is a function of $\theta$, defined as the angle between $r$ and the positive $x$ direction. To find the direction along which $df/dr$ is maximized, we let

\begin{displaymath}\frac{d}{d\theta}\left(\frac{df}{dr}\right)=-f_x\sin\theta+f_y\cos\theta=0\end{displaymath}

Solving this for $\theta$, we get

\begin{displaymath}\tan\theta=\frac{f_y}{f_x}\end{displaymath}

i.e.,

\begin{displaymath}\theta=\tan^{-1}\left(\frac{f_y}{f_x}\right)\end{displaymath}

which is indeed the direction $\phi$ of $\bigtriangledown f$.

From $\tan\theta=f_y/f_x$, we can also get

\begin{displaymath}\sin\theta=\frac{f_y}{\sqrt{f_x^2+f_y^2}},\qquad\cos\theta=\frac{f_x}{\sqrt{f_x^2+f_y^2}}\end{displaymath}

Substituting these into the expression of $df/dr$, we obtain its maximum magnitude,

\begin{displaymath}\left.\frac{df}{dr}\right\vert_{\max}=\frac{f_x^2+f_y^2}{\sqrt{f_x^2+f_y^2}}=\sqrt{f_x^2+f_y^2}\end{displaymath}

which is the magnitude of $\bigtriangledown f$.

For discrete digital images, the derivative in the gradient operation

\begin{displaymath}\frac{d}{dx}f(x)=\lim_{\Delta x\rightarrow 0}\frac{f(x+\Delta x)-f(x)}{\Delta x}\end{displaymath}

becomes the difference

\begin{displaymath}D_n[f[n]]=f[n]-f[n-1]\end{displaymath}

There are two steps for finding the discrete gradient of a digital image:

  • Find the differences in the two directions:

\begin{displaymath}g_m[m,n]=D_m[f[m,n]]=f[m,n]-f[m-1,n]\end{displaymath}

\begin{displaymath}g_n[m,n]=D_n[f[m,n]]=f[m,n]-f[m,n-1]\end{displaymath}

  • Find the magnitude and direction of the gradient vector:

\begin{displaymath}\vert\bigtriangledown f\vert=\sqrt{g_m^2+g_n^2},\qquad\phi=\tan^{-1}\left(\frac{g_n}{g_m}\right)\end{displaymath}
The differences in the two directions, $g_m$ and $g_n$, can be obtained by convolution with the following kernels:

  • Roberts

\begin{displaymath}\left[\begin{array}{cc}1 & 0\\ 0 & -1\end{array}\right],\qquad\left[\begin{array}{cc}0 & 1\\ -1 & 0\end{array}\right]\end{displaymath}

    or, with the opposite sign convention,

\begin{displaymath}\left[\begin{array}{cc}-1 & 0\\ 0 & 1\end{array}\right],\qquad\left[\begin{array}{cc}0 & -1\\ 1 & 0\end{array}\right]\end{displaymath}
  • Sobel (3x3)

\begin{displaymath}\left[\begin{array}{ccc}1 & 2 & 1\\ 0 & 0 & 0\\ -1 & -2 & -1\end{array}\right],\qquad\left[\begin{array}{ccc}1 & 0 & -1\\ 2 & 0 & -2\\ 1 & 0 & -1\end{array}\right]\end{displaymath}
  • Prewitt (3x3)

\begin{displaymath}\left[\begin{array}{ccc}1 & 1 & 1\\ 0 & 0 & 0\\ -1 & -1 & -1\end{array}\right],\qquad\left[\begin{array}{ccc}1 & 0 & -1\\ 1 & 0 & -1\\ 1 & 0 & -1\end{array}\right]\end{displaymath}
  • Prewitt (4x4)

\begin{displaymath}\left[\begin{array}{cccc}1 & 1 & 1 & 1\\ 1 & 1 & 1 & 1\\ -1 & -1 & -1 & -1\\ -1 & -1 & -1 & -1\end{array}\right],\qquad\left[\begin{array}{cccc}1 & 1 & -1 & -1\\ 1 & 1 & -1 & -1\\ 1 & 1 & -1 & -1\\ 1 & 1 & -1 & -1\end{array}\right]\end{displaymath}
Note that the Sobel and Prewitt operators first average in one direction and then take the difference of these averages in the other direction, as the sketch below illustrates.
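A sketch of the two-step gradient computation using the Sobel kernels above (the names are my own; SciPy's `ndimage.convolve` is assumed):

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel kernels: average along one direction, difference along the other
SOBEL_M = np.array([[ 1,  2,  1],
                    [ 0,  0,  0],
                    [-1, -2, -1]], dtype=float)   # difference along rows (m)
SOBEL_N = SOBEL_M.T                               # difference along columns (n)

def sobel_gradient(img):
    """Return gradient magnitude and direction of a 2-D grayscale image."""
    gm = convolve(img.astype(float), SOBEL_M)
    gn = convolve(img.astype(float), SOBEL_N)
    magnitude = np.sqrt(gm**2 + gn**2)
    direction = np.arctan2(gn, gm)   # phi = tan^{-1}(g_n / g_m), in radians
    return magnitude, direction
```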



1.2 Laplace operator

The Laplace operator is a scalar operator defined as the dot product (inner product) of two gradient vector operators:

\begin{displaymath}\bigtriangleup\stackrel{\triangle}{=}\bigtriangledown^2=\bigtriangledown\cdot\bigtriangledown\end{displaymath}

In $N$-dimensional space, we have:

\begin{displaymath}\bigtriangleup=\sum_{i=1}^{N}\frac{\partial^2}{\partial x_i^2}\end{displaymath}

When applied to a 2-D function $f(x,y)$, this operator produces a scalar function:

\begin{displaymath}\bigtriangleup f(x,y)=\frac{\partial^2 f}{\partial x^2}+\frac{\partial^2 f}{\partial y^2}\end{displaymath}
In the discrete case, second-order differentiation becomes a second-order difference. In the 1-D case, if the first-order difference is defined as

\begin{displaymath}\bigtriangledown f[n]=f'[n]=D_n[f[n]]=f[n]-f[n-1]\end{displaymath}

then the second-order difference is

\begin{eqnarray*}\bigtriangleup f[n] &=& \bigtriangledown^2 f[n]=f''[n]=D^2_n[f[n]]=f'[n]-f'[n-1]\\
&=& (f[n+1]-f[n])-(f[n]-f[n-1])=f[n+1]-2f[n]+f[n-1]\end{eqnarray*}

Note that $f''[n]$ is so defined that it is symmetric with respect to the center element $f[n]$. The Laplace operation can be carried out by 1-D convolution with the kernel $[1, -2, 1]$.
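A quick numeric check of this kernel on an ideal step edge (my own example values):

```python
import numpy as np

f = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)   # an ideal step edge
d2 = np.convolve(f, [1, -2, 1], mode='valid')          # 1-D Laplace
print(d2)   # [ 0.  0.  1. -1.  0.  0.] -- the sign change brackets the edge
```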

In the 2-D case, the Laplace operator is the sum of the second-order differences in both dimensions:

\begin{eqnarray*}\bigtriangleup f[m,n] &=& D^2_m[f[m,n]]+D^2_n[f[m,n]]\\
&=& f[m+1,n]-2f[m,n]+f[m-1,n]+f[m,n+1]-2f[m,n]+f[m,n-1]\\
&=& f[m+1,n]+f[m-1,n]+f[m,n+1]+f[m,n-1]-4f[m,n]\end{eqnarray*}

This operation can be carried out with the 2-D convolution kernel:

\begin{displaymath}\left[ \begin{array}{ccc} 0 & 1 & 0 \\ 1 & -4 & 1 \\ 0 & 1 & 0\end{array} \right] \end{displaymath}

Other Laplace kernels can be used: 

\begin{displaymath}\left[ \begin{array}{ccc} 1 & 1 & 1 \\ 1 & -8 & 1 \\ 1 & 1 & 1\end{array} \right] \end{displaymath}

We see that these Laplace kernels are actually the same as the high-pass filtering kernels discussed before.
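Both kernels sum to zero, so their response on a homogeneous region is exactly zero; a quick check (assuming SciPy):

```python
import numpy as np
from scipy.ndimage import convolve

K4 = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
K8 = np.array([[1, 1, 1], [1, -8, 1], [1, 1, 1]], dtype=float)

flat = np.full((5, 5), 7.0)   # a homogeneous region
print(convolve(flat, K4).max(), convolve(flat, K8).max())   # 0.0 0.0
```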

The gradient operation is an effective detector for sharp edges, where the pixel gray levels change rapidly over space. But when the gray levels change slowly from dark to bright (red in the figure below), the gradient operation produces a very wide edge (green in the figure). In this case it is helpful to use the Laplace operation instead: the second-order derivative of the wide edge (blue in the figure) has a zero crossing in the middle of the edge. The location of the edge can therefore be obtained by detecting the zero-crossings of the second-order difference of the image.

One dimensional example:
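The original figure is not reproduced here, but a small numeric sketch (my own values) shows the same behavior: the first difference of a ramp edge is a wide plateau, while the second difference changes sign across the middle of the edge:

```python
import numpy as np

f = np.array([0, 0, 1, 2, 3, 4, 5, 5, 5], dtype=float)  # slow dark-to-bright ramp
d1 = np.convolve(f, [1, -1], mode='valid')     # first difference
d2 = np.convolve(f, [1, -2, 1], mode='valid')  # second difference
print(d1)   # [0. 1. 1. 1. 1. 1. 0. 0.] -- a 5-pixel-wide "edge"
print(d2)   # [ 1.  0.  0.  0.  0. -1.  0.] -- sign change spans the ramp
```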

In the two-dimensional example, the image is on the left; the two Laplace kernels generate two similar results with zero-crossings, shown on the right:

Edge detection by Laplace operator followed by zero-crossing detection:

