Segmentation (image processing)

In computer vision, segmentation refers to the process of partitioning a digital image into multiple regions (sets of pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. [Linda G. Shapiro and George C. Stockman (2001): “Computer Vision”, pp 279-325, New Jersey, Prentice-Hall, ISBN 0-13-030796-3] Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images.

The result of image segmentation is a set of regions that collectively cover the entire image, or a set of contours extracted from the image (see edge detection). Each of the pixels in a region is similar with respect to some characteristic or computed property, such as color, intensity, or texture. Adjacent regions are significantly different with respect to the same characteristic(s).

Some of the practical applications of image segmentation are:

* Medical Imaging [Dzung L. Pham, Chenyang Xu, and Jerry L. Prince (2000): “Current Methods in Medical Image Segmentation”, "Annual Review of Biomedical Engineering", volume 2, pp 315-337]
** Locate tumors and other pathologies
** Measure tissue volumes
** Computer-guided surgery
** Diagnosis
** Treatment planning
** Study of anatomical structure
* Locate objects in satellite images (roads, forests, etc.)
* Face recognition
* Fingerprint recognition
* Automatic traffic control systems
* Machine vision

Several general-purpose algorithms and techniques have been developed for image segmentation. Since there is no general solution to the image segmentation problem, these techniques often have to be combined with domain knowledge in order to solve a segmentation problem effectively in a given problem domain.

Clustering Methods

The K-means algorithm is an iterative technique that is used to partition an image into "K" clusters. The basic algorithm is:

# Pick "K" cluster centers, either randomly or based on some heuristic
# Assign each pixel in the image to the cluster that minimizes the variance between the pixel and the cluster center
# Re-compute the cluster centers by averaging all of the pixels in the cluster
# Repeat steps 2 and 3 until convergence is attained (e.g. no pixels change clusters)

In this case, variance is the squared or absolute difference between a pixel and a cluster center. The difference is typically based on pixel color, intensity, texture, and location, or a weighted combination of these factors. "K" can be selected manually, randomly, or by a heuristic.

This algorithm is guaranteed to converge, but it may not return the optimal solution. The quality of the solution depends on the initial set of clusters and the value of "K".
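
The procedure can be sketched in a few lines of Python; the snippet below assumes NumPy is available, clusters pixels by colour, and uses illustrative function and parameter names rather than any standard API.

```python
import numpy as np

def kmeans_segment(image, k=3, max_iter=100, seed=0):
    """Cluster pixel colours with K-means and return a label map.

    `image` is an (H, W, C) float array; `k` is chosen manually here.
    """
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, image.shape[-1]).astype(float)

    # Step 1: pick K cluster centres at random from the pixels.
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]

    for _ in range(max_iter):
        # Step 2: assign each pixel to the nearest centre (squared distance).
        dists = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)

        # Step 3: recompute each centre as the mean of its assigned pixels.
        new_centers = np.array([
            pixels[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])

        # Step 4: stop when the centres no longer move (convergence).
        if np.allclose(new_centers, centers):
            break
        centers = new_centers

    return labels.reshape(image.shape[:2])
```

For example, `labels = kmeans_segment(img, k=4)` returns one integer cluster label per pixel.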

Histogram-Based Methods

Histogram-based methods are very efficient when compared to other image segmentation methods because they typically require only one pass through the pixels. In this technique, a histogram is computed from all of the pixels in the image, and the peaks and valleys in the histogram are used to locate the clusters in the image. Color or intensity can be used as the measure.

A refinement of this technique is to recursively apply the histogram-seeking method to clusters in the image in order to divide them into smaller clusters. This is repeated with smaller and smaller clusters until no more clusters are formed. [Ron Ohlander, Keith Price, and D. Raj Reddy (1978): “Picture Segmentation Using a Recursive Region Splitting Method”, "Computer Graphics and Image Processing", volume 8, pp 313-333]

One disadvantage of the histogram-seeking method is that it may be difficult to identify significant peaks and valleys in the image. Histogram-based approaches to image classification also commonly rely on a distance metric and on integrated region matching.
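
As a rough illustration, the following Python sketch (assuming NumPy; all names and parameters are illustrative) builds an intensity histogram in one pass, smooths it, and thresholds the image at the valley between the two strongest peaks.

```python
import numpy as np

def histogram_threshold(gray, bins=256, smooth=5):
    """Segment a grayscale image by thresholding at a histogram valley.

    `gray` is a 2-D array of values in [0, 1]; the valley between the two
    strongest peaks of its histogram becomes the threshold.
    """
    hist, edges = np.histogram(gray, bins=bins, range=(0.0, 1.0))

    # Smooth the histogram so small fluctuations do not create false peaks.
    kernel = np.ones(smooth) / smooth
    hist = np.convolve(hist, kernel, mode="same")

    # Find local peaks (bins at least as high as both neighbours).
    peaks = [i for i in range(1, bins - 1)
             if hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]
    if len(peaks) < 2:
        return gray > gray.mean()        # fall back to a global mean threshold

    # Keep the two tallest peaks and take the lowest bin between them.
    p1, p2 = sorted(sorted(peaks, key=lambda i: hist[i])[-2:])
    valley = p1 + int(np.argmin(hist[p1:p2 + 1]))
    threshold = edges[valley]

    return gray > threshold              # boolean mask: two clusters
```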

Edge Detection Methods

Edge detection is a well-developed field on its own within image processing. Region boundaries and edges are closely related, since there is often a sharp adjustment in intensity at the region boundaries. Edge detection techniques have therefore been used as the base of another segmentation technique.

The edges identified by edge detection are often disconnected. To segment an object from an image, however, one needs closed region boundaries. Discontinuities are bridged if the distance between the two edges is within some predetermined threshold.
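
A hedged sketch of this idea, assuming NumPy and SciPy are available, computes the Sobel gradient magnitude, thresholds it into an edge map, bridges small gaps by morphological closing, and labels the enclosed regions; the parameter values are illustrative only.

```python
import numpy as np
from scipy import ndimage as ndi

def edge_based_segmentation(gray, edge_thresh=0.2, gap=2):
    """Sketch of edge-based segmentation on a 2-D grayscale image."""
    # Gradient magnitude via Sobel filters along each axis.
    gx = ndi.sobel(gray, axis=1)
    gy = ndi.sobel(gray, axis=0)
    magnitude = np.hypot(gx, gy)
    magnitude /= magnitude.max() + 1e-12

    # Threshold to get an (often disconnected) edge map.
    edges = magnitude > edge_thresh

    # Bridge discontinuities up to `gap` pixels wide so boundaries close.
    closed = ndi.binary_closing(edges, iterations=gap)

    # Label the connected non-edge regions enclosed by the boundaries.
    regions, num_regions = ndi.label(~closed)
    return regions, num_regions
```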

Region Growing Methods

The first region growing method was the seeded region growing method. This method takes a set of seeds as input along with the image. The seeds mark each of the objects to be segmented. The regions are iteratively grown by comparing all unallocated neighbouring pixels to the regions. The difference between a pixel's intensity value and the region's mean, delta, is used as a measure of similarity. The pixel with the smallest difference measured this way is allocated to the respective region. This process continues until all pixels are allocated to a region.
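
A minimal Python sketch of seeded region growing follows (NumPy assumed; the function and its arguments are illustrative). It keeps a priority queue of candidate pixels ordered by their delta and always allocates the best candidate next, updating each region's mean incrementally. Deltas are computed against the region mean at the time a pixel is queued, a common simplification.

```python
import heapq
import numpy as np

def seeded_region_growing(gray, seeds):
    """Grow regions from seeds given as (row, col, label) tuples.

    Seed labels must be positive integers; 0 marks unallocated pixels.
    """
    labels = np.zeros(gray.shape, dtype=int)
    sums, counts = {}, {}
    frontier = []                       # priority queue ordered by delta

    def push_neighbours(r, c, lab):
        mean = sums[lab] / counts[lab]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < gray.shape[0] and 0 <= nc < gray.shape[1] and labels[nr, nc] == 0:
                delta = abs(float(gray[nr, nc]) - mean)
                heapq.heappush(frontier, (delta, nr, nc, lab))

    for r, c, lab in seeds:
        labels[r, c] = lab
        sums[lab] = float(gray[r, c])
        counts[lab] = 1
    for r, c, lab in seeds:
        push_neighbours(r, c, lab)

    while frontier:
        _, r, c, lab = heapq.heappop(frontier)
        if labels[r, c] != 0:
            continue                    # already allocated to another region
        labels[r, c] = lab
        sums[lab] += float(gray[r, c])
        counts[lab] += 1
        push_neighbours(r, c, lab)

    return labels
```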

Seeded region growing requires seeds as additional input, and the segmentation results depend on the choice of seeds; noise in the image can cause the seeds to be poorly placed. Unseeded region growing is a modified algorithm that does not require explicit seeds. It starts with a single region A_1; the pixel chosen here does not significantly influence the final segmentation. At each iteration it considers the neighbouring pixels in the same way as seeded region growing. It differs from seeded region growing in that if the minimum delta is less than a predefined threshold T, the pixel is added to the respective region A_j. If not, the pixel is considered significantly different from all current regions A_i, and a new region A_{n+1} is created with this pixel.

One variant of this technique, proposed by Haralick and Shapiro (1985), is based on pixel intensities. The mean and scatter of the region and the intensity of the candidate pixel are used to compute a test statistic. If the test statistic is sufficiently small, the pixel is added to the region, and the region’s mean and scatter are recomputed. Otherwise, the pixel is rejected and is used to form a new region.

Level Set Methods

Curve propagation is a popular technique in image analysis for object extraction, object tracking, stereo reconstruction, etc. The central idea behind such an approach is to evolve a curve towards the lowest potential of a cost function, whose definition reflects the task to be addressed and imposes certain smoothness constraints. Lagrangian techniques are based on parameterizing the contour according to some sampling strategy and then evolving each element according to image and internal terms. While such a technique can be very efficient, it suffers from various limitations, like deciding on the sampling strategy, estimating the internal geometric properties of the curve, changing its topology, and addressing problems in higher dimensions.

The level set method was initially proposed by Osher and Sethian in 1988 to track moving interfaces, and it spread across various imaging domains in the late 1990s. It can be used to efficiently address the problem of curve/surface/etc. propagation in an implicit manner. The central idea is to represent the evolving contour using a signed function whose zero level corresponds to the actual contour. Then, according to the motion equation of the contour, one can easily derive a similar flow for the implicit surface that, when applied to the zero level, reflects the propagation of the contour. The level set method affords numerous advantages: it is implicit, parameter-free, provides a direct way to estimate the geometric properties of the evolving structure, can handle changes of topology, and is intrinsic. Furthermore, it can be used to define an optimization framework, as proposed by Zhao, Merriman and Osher in 1996. It is therefore a very convenient framework for addressing numerous applications of computer vision and medical image analysis. [S. Osher & N. Paragios: [http://www.mas.ecp.fr/vision/Personnel/nikos/osher-paragios/ Geometric Level Set Methods in Imaging Vision and Graphics], Springer Verlag, ISBN 0387954880, 2003]
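
The following sketch, assuming NumPy, illustrates the implicit representation: a circle is encoded as a signed distance function and propagated outward at constant speed with a first-order upwind scheme. The names and the time step are illustrative choices, not a prescribed implementation.

```python
import numpy as np

def evolve_constant_speed(phi, speed=1.0, dt=0.4, steps=50):
    """Propagate the zero level set of phi under phi_t + F*|grad phi| = 0.

    `phi` is a signed function (negative inside, positive outside); a
    positive constant speed F moves the contour outward along its normal.
    """
    phi = phi.astype(float).copy()
    for _ in range(steps):
        # One-sided (backward/forward) differences along each axis.
        dxm = phi - np.roll(phi, 1, axis=1)
        dxp = np.roll(phi, -1, axis=1) - phi
        dym = phi - np.roll(phi, 1, axis=0)
        dyp = np.roll(phi, -1, axis=0) - phi

        # Upwind gradient magnitude for a positive (outward) speed.
        grad = np.sqrt(np.maximum(dxm, 0) ** 2 + np.minimum(dxp, 0) ** 2 +
                       np.maximum(dym, 0) ** 2 + np.minimum(dyp, 0) ** 2)

        phi -= dt * speed * grad
    return phi

# Example: a circle of radius 10 expands; the contour is where phi == 0.
y, x = np.mgrid[0:64, 0:64]
phi0 = np.sqrt((x - 32) ** 2 + (y - 32) ** 2) - 10.0   # signed distance
phi = evolve_constant_speed(phi0, speed=1.0)
```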

Graph Partitioning Methods

The “normalized cuts” method was first proposed by Shi and Malik in 1997. [Jianbo Shi and Jitendra Malik (1997): "Normalized Cuts and Image Segmentation", "IEEE Conference on Computer Vision and Pattern Recognition", pp 731-737] In this method, the image being segmented is modelled as a weighted undirected graph. Each pixel is a node in the graph, and an edge is formed between every pair of pixels. The weight of an edge is a measure of the similarity between the pixels. The image is partitioned into disjoint sets (segments) by removing the edges connecting the segments. The optimal partitioning of the graph is the one that minimizes the weights of the edges that were removed (the “cut”). Shi’s algorithm seeks to minimize the “normalized cut”, which penalizes the cut relative to the total edge weight connecting each segment to all of the nodes in the graph.
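
Concretely, for a partition of the graph's node set V into two disjoint sets A and B with edge weights w(u, v), the quantity minimized by Shi and Malik is

\[
\mathrm{Ncut}(A,B) \;=\; \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(A,V)} \;+\; \frac{\mathrm{cut}(A,B)}{\mathrm{assoc}(B,V)},
\qquad
\mathrm{cut}(A,B) = \sum_{u \in A,\, v \in B} w(u,v),
\quad
\mathrm{assoc}(A,V) = \sum_{u \in A,\, t \in V} w(u,t),
\]

so a partition is penalized both for cutting strong edges and for producing a segment that is only weakly connected to the rest of the graph.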

Watershed Transformation

The watershed transformation considers the gradient magnitude of an image as a topographic surface. Pixels having the highest gradient magnitude intensities (GMIs) correspond to watershed lines, which represent the region boundaries. Water placed on any pixel enclosed by a common watershed line flows downhill to a common local intensity minimum. Pixels draining to a common minimum form a catchment basin, which represents a region.
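
A hedged example of the marker-based variant, assuming NumPy, SciPy and scikit-image are available: the Sobel gradient magnitude serves as the topographic surface, markers are planted in confidently dark and confidently bright areas, and the flooding assigns every pixel to a catchment basin. The threshold values are illustrative.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import sobel
from skimage.segmentation import watershed

def watershed_segment(gray, low=0.1, high=0.6):
    """Marker-based watershed on a 2-D grayscale image in [0, 1]."""
    elevation = sobel(gray)              # gradient magnitude "terrain"

    markers = np.zeros(gray.shape, dtype=int)
    markers[gray < low] = 1              # background marker
    markers[gray > high] = 2             # foreground marker

    basins = watershed(elevation, markers)     # flood from the markers
    regions, num_regions = ndi.label(basins == 2)
    return regions, num_regions
```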

Model based Segmentation

The central assumption of such an approach is that structures of interest/organs have a repetitive form of geometry. Therefore, one can seek a probabilistic model that explains the variation of the shape of the organ and then, when segmenting an image, impose constraints using this model as a prior. Such a task involves (i) registration of the training examples to a common pose, (ii) probabilistic representation of the variation of the registered samples, and (iii) statistical inference between the model and the image. State-of-the-art methods in the literature for knowledge-based segmentation involve active shape and appearance models, active contours, deformable templates, and level-set based methods.
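
Step (ii) can be sketched as a point distribution model: given training shapes already registered to a common pose, principal component analysis of the landmark coordinates yields a mean shape and a few modes of variation that act as the shape prior. The NumPy sketch below uses illustrative names and assumes the registration of step (i) has already been performed.

```python
import numpy as np

def build_shape_model(aligned_shapes, num_modes=5):
    """Build a mean-plus-modes shape model from aligned training shapes.

    `aligned_shapes` is an (N, 2K) array: N shapes, each a flattened set
    of K landmark points registered to a common pose.
    """
    mean_shape = aligned_shapes.mean(axis=0)
    centered = aligned_shapes - mean_shape

    # Principal components of the landmark covariance capture how the
    # organ's shape varies across the training set.
    _, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:num_modes]                               # (num_modes, 2K)
    variances = (singular_values[:num_modes] ** 2) / len(aligned_shapes)

    def synthesize(b):
        """Generate a plausible shape from mode weights b (the prior)."""
        return mean_shape + b @ modes

    return mean_shape, modes, variances, synthesize
```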

Multi-scale Segmentation

Image segmentations are computed at multiple scales in scale-space and sometimes propagated from coarse to fine scales; see scale-space segmentation.

Segmentation criteria can be arbitrarily complex and may take into account global as well as local criteria. A common requirement is that each region must be connected in some sense.
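
One possible coarse-to-fine sketch, assuming NumPy and SciPy and using illustrative parameters: the image is smoothed at decreasing scales, a crude two-class split is computed at the coarsest scale, and at each finer scale only pixels near the previous boundary are allowed to change, so the coarse result guides the fine one.

```python
import numpy as np
from scipy import ndimage as ndi

def coarse_to_fine_segmentation(gray, sigmas=(8.0, 4.0, 2.0, 1.0)):
    """Propagate a simple two-class segmentation from coarse to fine scales."""
    labels = None
    for sigma in sigmas:                       # coarse -> fine
        smoothed = ndi.gaussian_filter(gray, sigma)
        current = smoothed > smoothed.mean()   # crude two-class split

        if labels is None:
            labels = current
        else:
            # Only pixels in a band around the coarse boundary may change.
            boundary = ndi.binary_dilation(labels) ^ ndi.binary_erosion(labels)
            band = ndi.binary_dilation(boundary, iterations=int(2 * sigma) + 1)
            labels = np.where(band, current, labels)
    return labels
```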

Semi-automatic Segmentation

In this kind of segmentation, the user outlines the region of interest with mouse clicks, and algorithms are applied so that the path that best fits the edge of the image is shown.

Techniques like Livewire or Intelligent Scissors are used in this kind of segmentation.
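
The core of such techniques is a minimum-cost path search between user clicks over a cost image that is cheap on strong edges. The sketch below, assuming NumPy and SciPy, uses Dijkstra's algorithm with an illustrative cost definition; it is a simplification of the full Intelligent Scissors formulation.

```python
import heapq
import numpy as np
from scipy import ndimage as ndi

def livewire_path(gray, start, end):
    """Find the minimum-cost path between two clicks (row, col) on `gray`."""
    gx, gy = ndi.sobel(gray, axis=1), ndi.sobel(gray, axis=0)
    magnitude = np.hypot(gx, gy)
    cost = 1.0 - magnitude / (magnitude.max() + 1e-12)   # cheap on edges

    h, w = gray.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]

    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue                     # stale queue entry
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1),
                       (-1, -1), (-1, 1), (1, -1), (1, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))

    # Walk back from the end click to recover the boundary path.
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```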

Neural Networks Segmentation

Neural network segmentation relies on processing small areas of an image using a neural network or a set of neural networks. After such processing, the decision-making mechanism marks the areas of the image according to the category recognized by the neural network. A type of network designed especially for this is the Kohonen map (self-organizing map).
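
As a rough illustration, the sketch below (NumPy assumed, all parameters illustrative) trains a small Kohonen map on pixel colours and then marks each pixel with the index of its best-matching unit, which plays the role of the recognized category.

```python
import numpy as np

def som_segment(image, grid=(4, 4), iters=2000, lr=0.5, seed=0):
    """Label each pixel of an (H, W, C) image by its best-matching SOM unit."""
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, image.shape[-1]).astype(float)
    n_units = grid[0] * grid[1]
    units = pixels[rng.choice(len(pixels), size=n_units, replace=False)].copy()
    coords = np.array([(i, j) for i in range(grid[0]) for j in range(grid[1])])

    for t in range(iters):
        x = pixels[rng.integers(len(pixels))]
        winner = np.argmin(((units - x) ** 2).sum(axis=1))

        # Neighbourhood radius and learning rate shrink as training proceeds.
        radius = max(grid) * (1.0 - t / iters) + 0.5
        alpha = lr * (1.0 - t / iters)
        d2 = ((coords - coords[winner]) ** 2).sum(axis=1)
        influence = np.exp(-d2 / (2.0 * radius ** 2))
        units += alpha * influence[:, None] * (x - units)

    # Mark every pixel with the category of its best-matching unit.
    dists = ((pixels[:, None, :] - units[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1).reshape(image.shape[:2])
```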

Open Source Software

Several open source software packages are available for performing image segmentation:

* ITK
* ITK-SNAP is a GUI tool that combines manual and semi-automatic segmentation with level sets.
* GIMP
* VXL
* ImageMagick
* [http://mitk.org/slicebasedsegmentation.html MITK] has a program module for manual segmentation

There are also free academic software packages:
* GemIdent

See also

* Computer Vision
* Data clustering
* Range image segmentation
* K-means algorithm
* Graph Theory
* Histograms
* Region growing
* Pulse-coupled networks
