In the human visual system the process of image formation begins with light rays coming from the outside world and impinging on the photoreceptors in the retina. Analogously, a digital photograph is created by light impinging on another photosensitive device, the CCD array. Every greyscale digital image is a 2-D array (matrix) of numbers. A black pixel typically has the value 0 (weakest intensity) whereas a white pixel has the value 255 (strongest intensity). A human photoreceptor hyperpolarizes according to the amount of light impinging on it, much as the CCD assigns the appropriate value to the digital pixel.

Some basics on digital image processing

Many basic image processing techniques, such as computing image derivatives or smoothing noise, are based on linear filtering. Linear filtering consists of convolving the image with a constant matrix, called a mask, kernel, or simply a window.

Convolution in mathematics is an operator, just like addition or multiplication.

For two continuous functions $ f\, $ and $ g\, $, convolution is written $ f * g \, $ and is defined as the integral of the product of the two functions after one is reversed and shifted: $ (f * g )(t) = \int f(\tau) g(t - \tau)\, d\tau $.

For discrete functions, one can use a discrete version of the convolution. It is given by

$ (f * g)(m) = \sum_n {f(n) g(m - n)} \, $.
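As a quick check of the discrete formula, NumPy's `np.convolve` computes exactly this sum (a minimal sketch, assuming NumPy is available; the two signals here are arbitrary toy examples):

```python
import numpy as np

# Two short discrete signals.
f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])

# np.convolve implements (f * g)(m) = sum_n f(n) g(m - n),
# with the output running over all m where the sum is non-trivial.
result = np.convolve(f, g)
print(result)  # [0., 1., 2.5, 4., 1.5]
```

For example, the entry at m = 3 is f(1)g(2) + f(2)g(1) = 2·0.5 + 3·1 = 4.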

Filtering an image by convolution means applying the filter to every pixel of the original image in order to compute the pixel values of the filtered image. Consider a digital image $ I(i,j)\, $ and a filter $ W(i,j)\, $. The filtered image $ I'(i,j)\, $ is given by:

$ I'(i,j) = I(i,j)*W(i,j) = \sum_{h=-\infty}^\infty\sum_{k=-\infty}^\infty{I(i-h,j-k)W(h,k)}\, $.

Notice that since the image (and the filter kernel) are finite in extent, all the other values in the sum are regarded as zero. A less formal version of the above formula would be: for the N x M image $ I\, $ and the m x m kernel $ W\, $, where m is an odd number smaller than both M and N, the filtered version $ I'\, $ of $ I\, $ at each pixel is given by: $ I'(i,j) = I(i,j)*W(i,j) = \sum_{h=-{\lfloor\frac{m}{2}\rfloor}}^{\lfloor\frac{m}{2}\rfloor}\sum_{k=-{\lfloor\frac{m}{2}\rfloor}}^{\lfloor\frac{m}{2}\rfloor}{I(i-h,j-k)W(h,k)}\, $.

Here $ \lfloor \frac{m}{2} \rfloor $ indicates integer division (e.g. 3/2 = 1), so for a 3x3 kernel the indices h and k run from -1 to 1.

For example, consider the 5x5 random image $ I(i,j)\, $ and the 3x3 kernel $ W(i,j)\, $ of an averaging filter. Each pixel of the filtered image $ I'(i,j)\, $ is the average value of the 3x3 neighborhood of the corresponding pixel in $ I(i,j)\, $.


We say that the response of a pixel to a specific filter is the value given to that pixel after the filtering. In the above example, the response of pixel (2,2) to the averaging filter is 5.
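The bounded formula above can be implemented directly with four nested loops. The sketch below (assuming NumPy; `filter2d` is a hypothetical helper name, and the 5x5 image here is a toy ramp rather than the random image of the example) applies a 3x3 averaging kernel:

```python
import numpy as np

def filter2d(I, W):
    """Apply an m x m kernel W (m odd) to image I via the double sum
    over h and k. Border pixels, where the window would fall off the
    image, are left at zero."""
    m = W.shape[0]
    r = m // 2                      # integer division, as in the text
    out = np.zeros_like(I, dtype=float)
    for i in range(r, I.shape[0] - r):
        for j in range(r, I.shape[1] - r):
            for h in range(-r, r + 1):
                for k in range(-r, r + 1):
                    out[i, j] += I[i - h, j - k] * W[h + r, k + r]
    return out

# 3x3 averaging kernel: every entry is 1/9.
W = np.full((3, 3), 1.0 / 9.0)
I = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 image, values 0..24
Ip = filter2d(I, W)
print(Ip[2, 2])  # response of pixel (2,2): the mean of its 3x3 neighborhood
```

Here the response of pixel (2,2) is the average of the values 6, 7, 8, 11, 12, 13, 16, 17, 18, i.e. 12.0. Because the averaging kernel is symmetric, the flip built into convolution has no visible effect.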

Using gradient filters to estimate contrast

The majority of the ganglion cells in the retina are responsible for carrying information about the change in light intensity around points in the scene. Such points are important in perception because they outline the edges, and therefore the contours, of objects. In image processing we call such points edge points: the pixels at or around which the image values undergo a sharp variation. The value (or light intensity) of a pixel depends on the position of the pixel on the image plane, so intensity is a function of the coordinates of the pixel. To find how the intensity changes at a specific pixel we need to compute the spatial derivative of the intensity function at that pixel.

[Figure: Gauss zoom — the gradient of the light intensity at a pixel]

The white arrow (J) represents the spatial derivative (or gradient) of the light intensity at that pixel. The length of the arrow is the magnitude (or strength) and the angle θ is the orientation of the gradient.

The gradient at each pixel of the image has two components, one representing the change along the x-axis (Jx) and one representing the change along the y-axis (Jy). These two components are the image's partial derivatives at that pixel. Assuming that the partial derivatives exist at that point, a common way to find them is by computing the finite difference estimates at that point. The central difference approximation for Jx would be:

$ J_{x}(i,j) = \frac{\partial I}{\partial x} = \frac{I(i+1,j) - I(i-1,j)}{2h} + O(h^{2})\, $.

The step, $ h $, represents the size of a pixel and can be set to $ h = 1 $ or $ h = 1/2 $. With $ h = 1 $ the formula becomes simply:

$ J_{x}(i,j) = \frac{1}{2}I(i+1,j) - \frac{1}{2}I(i-1,j) \, $

which can be computed for every pixel in the image by convolving the image rows with the mask

$ \begin{bmatrix} 1/2&0&-1/2\end{bmatrix} $.

(The entries appear mirrored because, by the definition of convolution above, the mask is flipped; the corresponding cross-correlation mask would be $ \begin{bmatrix} -1/2&0&1/2\end{bmatrix} $.)

Likewise, Jy can be computed by convolving the image columns with the same mask.
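The two components can be computed at once with array slicing (a minimal sketch, assuming NumPy and $ h = 1 $; `gradient_components` is a hypothetical helper name, and the test image is a simple intensity ramp):

```python
import numpy as np

def gradient_components(I):
    """Central-difference estimates of Jx and Jy with step h = 1.
    Border pixels, where a neighbour is missing, are left at zero."""
    Jx = np.zeros_like(I, dtype=float)
    Jy = np.zeros_like(I, dtype=float)
    Jx[:, 1:-1] = 0.5 * (I[:, 2:] - I[:, :-2])   # change along x (columns)
    Jy[1:-1, :] = 0.5 * (I[2:, :] - I[:-2, :])   # change along y (rows)
    return Jx, Jy

# Intensity ramp I(x, y) = 3x: Jx should be 3 and Jy should be 0.
I = np.tile(3.0 * np.arange(6), (4, 1))
Jx, Jy = gradient_components(I)

magnitude = np.hypot(Jx, Jy)    # gradient strength (length of the arrow)
theta = np.arctan2(Jy, Jx)      # gradient orientation (angle of the arrow)
```

At any interior pixel of this ramp the magnitude is 3 and the orientation is 0, i.e. the gradient points along the x-axis, perpendicular to the (vertical) iso-intensity lines.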

One can also use the second derivatives to find the edge points, by locating the zero crossings of the Laplacian. The mask for the Laplacian,

$ \Delta I = \frac{\partial^2I} {\partial x^2} + \frac{\partial^2I} {\partial y^2} $,

would be :

$ \begin{bmatrix} 0&1&0 \\ 1&-4&1 \\ 0&1&0 \end{bmatrix} $.
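Because this mask is symmetric, convolution and correlation coincide, and it can be applied directly with array slicing. A sketch (assuming NumPy; `laplacian` is a hypothetical helper name): on the quadratic ramp $ I(x,y) = x^2 $ the discrete Laplacian is exactly 2 at every interior pixel, matching $ \partial^2 (x^2) / \partial x^2 = 2 $.

```python
import numpy as np

def laplacian(I):
    """Apply the 5-point Laplacian mask [[0,1,0],[1,-4,1],[0,1,0]]
    to the interior pixels of I; border pixels are left at zero."""
    out = np.zeros_like(I, dtype=float)
    out[1:-1, 1:-1] = (I[:-2, 1:-1] + I[2:, 1:-1]      # up + down
                       + I[1:-1, :-2] + I[1:-1, 2:]    # left + right
                       - 4.0 * I[1:-1, 1:-1])          # -4 * centre
    return out

# Each row of I is x^2 for x = 0..5, constant down the columns.
x = np.arange(6, dtype=float)
I = np.tile(x**2, (5, 1))
L = laplacian(I)
print(L[2, 2])  # 2.0 at every interior pixel
```

On a region of uniform intensity the response is 0, so edge points show up where the Laplacian changes sign between neighbouring pixels.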

Ganglion cells in the retina: Receptive field and Lateral inhibition

  • the retina
  • the centre-surround organisation (why ganglion? only these fire AP -some amacrines also)
  • a simple model (pic) -similarities with Laplace filter
  • purpose : signal compression ->less noise (p.519)