Presents a thorough overview of the major topics of digital image processing, beginning with the basic mathematical tools needed for the subject. Includes a comprehensive chapter on stochastic models for digital image processing. Covers aspects of image representation including luminance, color, spatial and temporal properties of vision, and digitization. Explores various image processing techniques. Discusses algorithm development (software/firmware) for image transforms, enhancement, reconstruction, and image coding.
In image representation one is concerned with characterization of the quantity that each picture element (also called pixel or pel) represents. An image could represent the luminances of objects in a scene (such as pictures taken by an ordinary camera), the absorption characteristics of body tissue (X-ray imaging), the radar cross section of a target (radar imaging), the temperature profile of a region (infrared imaging), or the gravitational field in an area (geophysical imaging). In general, any two-dimensional function that bears information can be considered an image. Image models give a logical or quantitative description of the properties of this function. Figure 1.4 lists several image representation and modeling problems.
An important consideration in image representation is the fidelity or intelligibility criteria for measuring the quality of an image or the performance of a processing technique. Specification of such measures requires models of perception of contrast, spatial frequencies, color, and so on, as discussed in Chapter 3. Knowledge of a fidelity criterion helps in designing the imaging sensor, because it tells us the variables that should be measured most accurately.
The fundamental requirement of digital processing is that images be sampled and quantized. The sampling rate (the number of pixels per unit area) must be large enough to preserve the useful information in an image; it is determined by the bandwidth of the image. For example, the bandwidth of a common raster-scanned television signal is about 4 MHz. By the sampling theorem, this requires a minimum sampling rate of 8 MHz. At 30 frames/s, each frame should then contain approximately 266,000 pixels. Thus, for a 512-line raster, each image frame contains approximately 512 x 512 pixels. Image quantization is the analog-to-digital conversion of a sampled image to a finite number of gray levels. Image sampling and quantization methods are discussed in Chapter 4.
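The sampling arithmetic above can be checked directly. This sketch only reproduces the text's example figures (4 MHz bandwidth, 30 frames/s); they are illustrative, not measured values.

```python
# Back-of-the-envelope check of the sampling example in the text.
bandwidth_hz = 4e6      # approximate bandwidth of a raster-scanned TV signal
frame_rate = 30         # frames per second

# Sampling theorem: sample at no less than twice the bandwidth.
min_sampling_rate = 2 * bandwidth_hz
pixels_per_frame = min_sampling_rate / frame_rate

print(min_sampling_rate)          # 8,000,000 samples/s
print(round(pixels_per_frame))    # ~266,667 pixels per frame
print(512 * 512)                  # 262,144 -- close, hence the 512 x 512 raster
```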
A classical method of signal representation is by an orthogonal series expansion, such as the Fourier series. For images, analogous representation is possible via two-dimensional orthogonal functions called basis images. For sampled images, the basis images can be determined from unitary matrices called image transforms. Any given image can be expressed as a weighted sum of the basis images (Fig. 1.5). Several characteristics of images, such as their spatial frequency content, bandwidth, power spectrum, and application in filter design, feature extraction, and so on, can be studied via such expansions. The theory and applications of image transforms are discussed in Chapter 5.
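The expansion of Fig. 1.5 can be sketched numerically. The example below uses an orthonormal DCT matrix as the unitary transform (the DCT is one of the transforms covered in Chapter 5); the 4 x 4 "image" is an arbitrary toy array, and the choice of transform and size is illustrative only.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: a simple real unitary transform."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    a = np.cos(np.pi * (2 * i + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    a[0, :] = np.sqrt(1.0 / n)
    return a

n = 4
A = dct_matrix(n)                                # unitary: A @ A.T = I
U = np.arange(n * n, dtype=float).reshape(n, n)  # a toy "image"

V = A @ U @ A.T                                  # transform coefficients v(k, l)

# Reconstruct the image as a weighted sum of basis images a_k a_l^T,
# weighted by the corresponding transform coefficients.
U_hat = np.zeros_like(U)
for k in range(n):
    for l in range(n):
        U_hat += V[k, l] * np.outer(A[k], A[l])

print(np.allclose(U, U_hat))                     # the expansion is exact
```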
Statistical models describe an image as a member of an ensemble, often characterized by its mean and covariance functions. This permits development of algorithms that are useful for an entire class or an ensemble of images rather than for a single image. Often the ensemble is assumed to be stationary so that the mean and covariance functions can easily be estimated. Stationary models are useful in data compression problems such as transform coding, restoration problems such as Wiener filtering, and in other applications where global properties of the ensemble are sufficient. A more effective use of these models in image processing is to consider them to be spatially varying or piecewise spatially invariant.
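Under the stationarity assumption, the mean and covariance reduce to a few numbers that can be estimated by averaging over the ensemble and over space. A minimal sketch, using a synthetic ensemble of noise images (the ensemble, its mean of 128, and its standard deviation of 10 are all made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stationary ensemble: 100 images of i.i.d. noise about a common mean.
images = rng.normal(loc=128.0, scale=10.0, size=(100, 16, 16))

# Stationarity: the mean is a single number, estimated by averaging
# over the ensemble and over all pixel positions.
mean_hat = images.mean()

centered = images - mean_hat
var_hat = (centered ** 2).mean()                          # zero-lag covariance
cov_lag1 = (centered[:, :, :-1] * centered[:, :, 1:]).mean()  # one-pixel horizontal lag

print(round(mean_hat, 1), round(var_hat, 1), round(cov_lag1, 2))
```

For i.i.d. noise the one-lag covariance comes out near zero; a real image ensemble would show strong positive correlation between neighbors, which is exactly what the compression and restoration methods mentioned above exploit.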
To characterize short-term or local properties of the pixels, one alternative is to characterize each pixel by a relationship with its neighborhood pixels. For example, a linear system characterized by a (low-order) difference equation and forced by white noise or some other random field with known power spectrum density is a useful approach for representing the ensemble. Figure 1.6 shows three types of stochastic models where an image pixel is characterized in terms of its neighboring pixels. If the image were scanned top to bottom and then left to right, the model of Fig. 1.6a would be called a causal model. This is because the pixel A is characterized by pixels that lie in the "past." Extending this idea, the model of Fig. 1.6b is a noncausal model because the neighbors of A lie in the past as well as the "future" in both directions. In Fig. 1.6c, we have a semicausal model because the neighbors of A are in the past in the j-direction and are in the past as well as future in the i-direction.
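A causal model of the kind in Fig. 1.6a can be used generatively: scanning the image in raster order, each pixel is a linear combination of already-visited neighbors plus white noise. The sketch below uses a hypothetical first-order model with two neighbors (west and north); the coefficients 0.6 and 0.35 are made-up values chosen only to keep the recursion stable.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical causal model: u(i,j) = a_w*u(i,j-1) + a_n*u(i-1,j) + noise.
a_west, a_north = 0.6, 0.35          # illustrative coefficients (sum < 1 for stability)
n = 64
u = np.zeros((n, n))
noise = rng.normal(scale=1.0, size=(n, n))

for i in range(n):                   # raster-order scan: past pixels are available
    for j in range(n):
        west = u[i, j - 1] if j > 0 else 0.0
        north = u[i - 1, j] if i > 0 else 0.0
        u[i, j] = a_west * west + a_north * north + noise[i, j]
```

Because each pixel depends only on previously generated pixels, the same recursion run in reverse is a recursive (IIR) filter, which is the hardware advantage of causal models noted below.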
Such models are useful in developing algorithms that have different hardware realizations. For example, causal models can realize recursive filters, which require small memory while yielding an infinite impulse response (IIR). On the other hand, noncausal models can be used to design fast transform-based finite impulse response (FIR) filters. Semicausal models can yield two-dimensional algorithms, which are recursive in one dimension and nonrecursive in the other. Some of these stochastic models can be thought of as generalizations of one dimensional random processes represented by autoregressive (AR) and autoregressive moving average (ARMA) models. Details of these aspects are discussed in Chapter 6.
In global modeling, an image is considered as a composition of several objects. Various objects in the scene are detected (for example, by segmentation techniques), and the model gives the rules for defining the relationship among various objects. Such representations fall under the category of image understanding models, which are not a subject of study in this text.
1.3 Image Enhancement
In image enhancement, the goal is to accentuate certain image features for subsequent analysis or for image display. Examples include contrast and edge enhancement, pseudocoloring, noise filtering, sharpening, and magnifying. Image enhancement is useful in feature extraction, image analysis, and visual information display. The enhancement process itself does not increase the inherent information content in the data. It simply emphasizes certain specified image characteristics. Enhancement algorithms are generally interactive and application-dependent.
Image enhancement techniques, such as contrast stretching, map each gray level into another gray level by a predetermined transformation. An example is the histogram equalization method, where the input gray levels are mapped so that the output gray level distribution is uniform. This has been found to be a powerful method of enhancement of low contrast images (see Fig. 7.14). Other enhancement techniques perform local neighborhood operations as in convolution, transform operations as in the discrete Fourier transform, and other operations as in pseudocoloring, where a gray level image is mapped into a color image by assigning different colors to different features. Examples and details of these techniques are considered in Chapter 7.
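Histogram equalization can be sketched in a few lines: the mapping from input to output gray level is the empirical cumulative distribution of the input levels, rescaled to the full gray-level range. The low-contrast test image below is synthetic (levels squeezed into [100, 140]), purely for illustration.

```python
import numpy as np

def equalize(img, levels=256):
    """Histogram equalization: map each gray level through the empirical CDF."""
    hist = np.bincount(img.ravel(), minlength=levels)
    cdf = np.cumsum(hist) / img.size
    # Rescale the CDF to [0, levels-1] to get the gray-level lookup table.
    lut = np.round(cdf * (levels - 1)).astype(np.uint8)
    return lut[img]

# Toy low-contrast image: gray levels confined to the narrow band [100, 140].
rng = np.random.default_rng(2)
img = rng.integers(100, 141, size=(64, 64), dtype=np.uint8)
out = equalize(img)

print(img.min(), img.max())   # narrow input range
print(out.min(), out.max())   # output spread across nearly the full range
```

The output levels span nearly the full [0, 255] range, which is the contrast-stretching effect illustrated in Fig. 7.14.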
1. Two Dimensional Systems and Mathematical Preliminaries.
2. Image Perception.
3. Image Sampling and Quantization.
4. Image Transforms.
5. Image Representation by Stochastic Models.
6. Image Enhancement.
7. Image Filtering and Restoration.
8. Image Analysis and Computer Vision.
9. Image Reconstruction From Projections.
10. Image Data Compression.