
Normalization (image processing)


In image processing, normalization is a process that changes the range of pixel intensity values. Applications include photographs with poor contrast due to glare. Normalization is sometimes called contrast stretching or histogram stretching. In more general fields of data processing, such as digital signal processing, it is referred to as dynamic range expansion.[1]

The purpose of dynamic range expansion in the various applications is usually to bring the image, or other type of signal, into a range that is more familiar or normal to the senses, hence the term normalization. Often, the motivation is to achieve consistency in dynamic range for a set of data, signals, or images to avoid mental distraction or fatigue. For example, a newspaper will strive to make all of the images in an issue share a similar range of grayscale.

Normalization transforms an n-dimensional grayscale image I with intensity values in the range (Min, Max) into a new image I_N with intensity values in the range (newMin, newMax).

The linear normalization of a grayscale digital image is performed according to the formula

    I_N = (I - Min) * (newMax - newMin) / (Max - Min) + newMin

For example, if the intensity range of the image is 50 to 180 and the desired range is 0 to 255, the process entails subtracting 50 from each pixel's intensity, making the range 0 to 130. Then each pixel intensity is multiplied by 255/130, making the range 0 to 255.
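The worked example above can be sketched in NumPy (a minimal illustration; the function name and defaults are choices of this sketch, not a standard API):

```python
import numpy as np

def normalize_linear(image, new_min=0.0, new_max=255.0):
    """Linearly map the image's intensity range onto [new_min, new_max]."""
    image = np.asarray(image, dtype=np.float64)
    old_min, old_max = image.min(), image.max()
    # I_N = (I - Min) * (newMax - newMin) / (Max - Min) + newMin
    return (image - old_min) * (new_max - new_min) / (old_max - old_min) + new_min

# The example from the text: intensities 50..180 stretched to 0..255,
# so 50 -> 0, 115 -> 127.5, and 180 -> 255.
print(normalize_linear(np.array([[50, 115, 180]])))
```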

Normalization might also be non-linear; this happens when there is not a linear relationship between I and I_N. An example of non-linear normalization is when the normalization follows a sigmoid function; in that case, the normalized image is computed according to the formula

    I_N = (newMax - newMin) * 1 / (1 + e^(-(I - β) / α)) + newMin

where α defines the width of the input intensity range, and β defines the intensity around which the range is centered.[2]
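A sigmoid normalization of this form can be sketched as follows (the parameter defaults α = 50 and β = 128 are illustrative values, not prescribed by the formula):

```python
import numpy as np

def normalize_sigmoid(image, new_min=0.0, new_max=255.0, alpha=50.0, beta=128.0):
    """Non-linear normalization through a sigmoid: alpha controls the width
    of the input intensity range, beta the intensity it is centered on."""
    image = np.asarray(image, dtype=np.float64)
    # I_N = (newMax - newMin) * 1 / (1 + e^(-(I - beta) / alpha)) + newMin
    return (new_max - new_min) / (1.0 + np.exp(-(image - beta) / alpha)) + new_min

# An input equal to beta maps exactly to the middle of the output range.
print(normalize_sigmoid(np.array([128.0])))  # 127.5
```

Unlike the linear transform, the sigmoid compresses intensities far from β while expanding those near it, which is useful when outliers would otherwise dominate a linear stretch.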

Auto-normalization in image processing software typically normalizes to the full dynamic range of the number system specified in the image file format.

Contrast Stretching for Image Enhancement


This is one of the most significant and essential techniques of spatial-based image enhancement.[3] The basic intent of the contrast enhancement technique is to adjust the local contrast in the image so as to bring out the regions or objects in the image clearly. Low-contrast images often result from poor or non-uniform lighting conditions, a limited dynamic range of the imaging sensor, or improper settings of the lens aperture.

Contrast Stretching Transformation Functions

Contrast enhancement changes the intensity of the pixels in the input image in order to obtain a more enhanced image. It is based on a number of techniques, namely local, global, dark, and bright levels of contrast. Contrast can be considered as the amount of color or gray differentiation among the different features in an image; contrast enhancement improves the quality of an image by increasing the luminance difference between the foreground and the background.

A Contrast Stretching Transformation can be achieved by:

Figure: contrast stretching transformation graph (reference for the derivation).

1. Stretching the dark range of input values into a wider range of output values: This involves increasing the brightness of the darker areas in the image to enhance details and improve visibility.

2. Shifting the mid-range of input values: This involves adjusting the brightness levels of the mid-tones in the image to improve overall contrast and clarity.

3. Compressing the bright range of input values: This process involves reducing the brightness of the brighter areas in the image to prevent overexposure, resulting in a more balanced and visually appealing image.
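The three adjustments above amount to a single piecewise-linear transformation. A minimal sketch using NumPy's np.interp follows; the breakpoints (70, 120) and (180, 230) are illustrative choices of this sketch, not canonical values:

```python
import numpy as np

def piecewise_stretch(image, r1=70.0, s1=120.0, r2=180.0, s2=230.0):
    """Piecewise-linear contrast stretching on a [0, 255] grayscale image:
    darks [0, r1] are stretched to [0, s1] (slope > 1), mids [r1, r2] are
    shifted to [s1, s2], and brights [r2, 255] are compressed to [s2, 255]."""
    image = np.asarray(image, dtype=np.float64)
    return np.interp(image, [0.0, r1, r2, 255.0], [0.0, s1, s2, 255.0])
```

With these breakpoints a dark pixel of 35 brightens to about 60, while the bright range 180..255 is squeezed into 230..255.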

Local and Global Contrast Stretching


Local Contrast Stretching (LCS) is an image enhancement method that focuses on locally adjusting each pixel's value to improve the visualization of structures within an image, particularly in both the darkest and lightest portions. It operates by utilizing sliding windows, known as kernels, which traverse the image. The central pixel within each kernel is adjusted using the following formula:

    Ip(x, y) = 255 * (I0(x, y) - min) / (max - min)

where:

Ip(x, y) is the color level of the output pixel (x, y) after the contrast stretching process,

I0(x, y) is the color level of the input pixel (x, y),

max is the maximum color level in the input image within the selected kernel, and

min is the minimum color level in the input image within the selected kernel.[4]
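A direct (unoptimized) sketch of this kernel-based adjustment, assuming a square window and a [0, 255] output range; edge pixels use the part of the window that lies inside the image:

```python
import numpy as np

def local_contrast_stretch(image, ksize=3):
    """Stretch each pixel with the min/max of the ksize x ksize window
    centered on it: Ip = 255 * (I0 - min) / (max - min)."""
    image = np.asarray(image, dtype=np.float64)
    out = np.zeros_like(image)
    r = ksize // 2
    height, width = image.shape
    for y in range(height):
        for x in range(width):
            window = image[max(0, y - r):y + r + 1, max(0, x - r):x + r + 1]
            lo, hi = window.min(), window.max()
            # A flat window has no contrast to stretch; map it to 0.
            out[y, x] = 0.0 if hi == lo else 255.0 * (image[y, x] - lo) / (hi - lo)
    return out
```

In practice the min/max filtering would be done with a library routine rather than a double loop, but the per-pixel formula is the same.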

Local contrast stretching considers each color palette in the image (R, G, and B) separately, providing a set of minimum and maximum values for each palette.

Global contrast stretching, on the other hand, considers all palette ranges at once to determine the maximum and minimum values for the entire RGB color image, deriving a single maximum and minimum value for contrast stretching across the entire image.
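The per-channel versus whole-image distinction can be sketched in one function (the function name and the H x W x 3 axis convention are assumptions of this sketch):

```python
import numpy as np

def stretch_rgb(image, per_channel=True):
    """Contrast-stretch an RGB image (H x W x 3) to [0, 255].
    per_channel=True: each channel uses its own min/max (per-palette).
    per_channel=False: one min/max over all three channels (global)."""
    image = np.asarray(image, dtype=np.float64)
    if per_channel:
        lo = image.min(axis=(0, 1), keepdims=True)
        hi = image.max(axis=(0, 1), keepdims=True)
    else:
        lo, hi = image.min(), image.max()
    return 255.0 * (image - lo) / (hi - lo)
```

Note the trade-off: per-channel stretching can shift the color balance because each channel is scaled independently, while the global variant preserves the ratios between channels.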

These contrast stretching techniques play a crucial role in enhancing the clarity and visibility of structures within images, particularly in scenarios with low contrast resulting from factors such as non-uniform lighting conditions or limited dynamic range.


References

  1. ^ Rafael C. González, Richard Eugene Woods (2007). Digital Image Processing. Prentice Hall. p. 85. ISBN 978-0-13-168728-8.
  2. ^ ITK Software Guide
  3. ^ "Contrast Enhancement Techniques: A Brief and Concise Review" (PDF).
  4. ^ "Comparison of Contrast Stretching methods of Image Enhancement Techniques for Acute Leukemia Images" (PDF).