DSP Term Paper
Gurpreet Singh
Department of Electronics and Communication Engineering, Lovely Professional University. Roll No. E2106B27, Registration No. 11110750
Email:[email protected]
ABSTRACT: With the improvement of synthetic aperture radar technology, larger areas are being imaged and the resolution of the images has increased. These larger images have to be transmitted and stored. Due to the limited storage and downlink capacity on board the aircraft or satellite, the volume of data must be reduced, which makes compression of SAR images with minimal loss of information important. This study aims to compare the best-known compression techniques, namely the discrete cosine transform and the discrete wavelet transform. It investigates RADARSAT and SPOT images of different regions with different characteristics. The regions investigated are sea areas, forest areas, and residential and industrial areas, which define different patterns of urban land use. The studies showed that compression ratios changed according to the pixel classification. The second purpose of this study is to compare the two compression algorithms. The DWT-based algorithm gave the minimum mean square error compared to the DCT-based compression algorithm. The results changed according to the quantization process and the transform coding algorithm.
INTRODUCTION: Discrete wavelet transformation (DWT) of an image results in a compact multiscale representation of the image. The wavelet transformed image can be seen as a multi-rooted directed tree. Each node of the tree corresponds to a pixel of the multiscale representation. The tree is defined in such a way that each node v has either no offspring or four offspring which are a refinement of node v. An excellent choice to encode such wavelet
transformed images is applying embedded zero tree (EZT) algorithms, which are iterative procedures. In the i-th iteration, they start by encoding the roots of the tree, i.e., the tree nodes in which most of a wavelet transformed image's energy is normally concentrated. Then, for each pixel of this level, the corresponding sub-tree is considered. If all the nodes of the sub-tree are insignificant with respect to the i-th threshold Ti, then the offspring of the pixel are not encoded and the sub-tree is pruned away. If the sub-tree is not such a zero tree with respect to Ti, the offspring are encoded and the procedure is applied recursively to the offspring. Here, a pixel of the wavelet transformed image is called insignificant with respect to a threshold Ti if its magnitude is smaller than Ti. A sub-tree is called a zero tree with respect to Ti if all its nodes are insignificant with respect to Ti. The test of whether a sub-tree is a zero tree can only be done by considering the nodes of the sub-tree until a significant pixel is found; thus, in the worst case, all pixels of a sub-tree have to be examined. The EZT algorithms therefore assume that they can efficiently access all the pixels of the image to be encoded. This makes them unsuitable for hardware solutions, in particular for FPGA implementations, if the whole image cannot be stored in the internal memory. In earlier work we presented a partitioned approach to perform DWT computations, allowing efficient calculation of the EZT algorithm using programmable hardware, and discussed an FPGA implementation for lossless image compression. In this paper, we present an efficient FPGA implementation of the two-dimensional DWT (2D-DWT) for lossy image compression.
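As a rough illustration of the zero tree test just described, the following Python sketch checks whether the sub-tree rooted at a coefficient is a zero tree with respect to a threshold Ti. The per-level array layout and the function names are illustrative assumptions made only for this sketch; they do not correspond to the partitioned FPGA implementation discussed in this paper.

import numpy as np

def children(level, i, j, num_levels):
    # In a dyadic wavelet decomposition, the coefficient (i, j) at one scale has four
    # offspring at the next finer scale: (2i, 2j), (2i, 2j+1), (2i+1, 2j), (2i+1, 2j+1).
    if level == num_levels - 1:      # finest scale: no offspring
        return []
    return [(level + 1, 2 * i + di, 2 * j + dj) for di in (0, 1) for dj in (0, 1)]

def is_zero_tree(coeffs, level, i, j, T, num_levels):
    # coeffs[level] holds the 2-D coefficient array at that scale (assumed layout).
    # The sub-tree rooted at (level, i, j) is a zero tree w.r.t. T if every node is
    # insignificant, i.e. |coefficient| < T; the scan stops at the first significant pixel.
    if abs(coeffs[level][i, j]) >= T:
        return False
    return all(is_zero_tree(coeffs, l, ci, cj, T, num_levels)
               for (l, ci, cj) in children(level, i, j, num_levels))

# Example: 2 x 2 roots refined into a 4 x 4 finer scale, threshold Ti = 0.5.
coeffs = [np.array([[0.2, 0.1], [0.0, 0.3]]), np.zeros((4, 4))]
print(is_zero_tree(coeffs, 0, 0, 0, T=0.5, num_levels=2))   # True: whole sub-tree is insignificant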
2.1 Need for image compression: The need for image compression becomes apparent when the number of bits per image resulting from typical sampling rates and quantization methods is computed. For example, the amount of storage required for typical images is: (i) a low-resolution, TV-quality, color video image with 512 x 512 pixels/color, 8 bits/pixel, and 3 colors requires approximately 6 x 10^6 bits; (ii) a 24 x 36 mm negative photograph scanned at 12 x 10^-6 m (3000 x 2000 pixels/color, 8 bits/pixel, and 3 colors) contains nearly 144 x 10^6 bits; (iii) a 14 x 17 inch radiograph scanned at 70 x 10^-6 m (5000 x 6000 pixels, 12 bits/pixel) contains nearly 360 x 10^6 bits. Thus, storage of even a few images can be a problem. As another example of the need for image compression, consider the transmission of the low-resolution 512 x 512 x 8 bits/pixel x 3-color video image over telephone lines. Using a 9600 baud (bits/sec) modem, the transmission would take approximately 11 minutes for just a single image, which is unacceptable for most applications.
2.2 Principles behind compression: The number of bits required to represent the information in an image can be minimized by removing the redundancy present in it. There are three types of redundancies: (i) spatial redundancy, which is due to the correlation or dependence between neighboring pixel values; (ii) spectral redundancy, which is due to the correlation between different color planes or spectral bands; (iii) temporal redundancy, which is present because of the correlation between different frames in image sequences. Image compression research aims to reduce the number of bits required to represent an image by removing the spatial and spectral redundancies as much as possible. Data redundancy is the central issue in digital image compression. If n1 and n2 denote the number of information-carrying units in the original and the compressed image respectively, then the compression ratio CR can be defined as CR = n1/n2, and the relative data redundancy RD of the original image can be defined as RD = 1 - 1/CR. Three possibilities arise here: (1) if n1 = n2, then CR = 1 and hence RD = 0, which implies that the original image does not contain any redundancy between its pixels; (2) if n1 >> n2, then CR is large and RD approaches 1, which implies a considerable amount of redundancy in the original image; (3) if n1 << n2, then CR approaches 0 and RD becomes large and negative, which indicates that the compressed image contains more data than the original image.
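As a small worked example of these definitions (the figures simply reuse the 512 x 512 TV image from Section 2.1 with an assumed 0.5 bit/pixel per color after compression; they are not results from this study), CR and RD can be computed directly in Python:

# Compression ratio CR = n1/n2 and relative data redundancy RD = 1 - 1/CR,
# where n1 and n2 count the information-carrying units (here, bits).
def compression_stats(n1, n2):
    cr = n1 / n2
    rd = 1.0 - 1.0 / cr
    return cr, rd

n1 = 512 * 512 * 8 * 3              # original image, about 6.3 x 10^6 bits
n2 = int(512 * 512 * 0.5 * 3)       # assumed compressed size at 0.5 bit/pixel per color
cr, rd = compression_stats(n1, n2)
print(f"CR = {cr:.1f}, RD = {rd:.3f}")   # CR = 16.0, RD = 0.938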
2.3 Types of compression:
Lossless versus lossy compression: In lossless compression schemes, the reconstructed image after compression is numerically identical to the original image. However, lossless compression can only achieve a modest amount of compression. Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art or comics, because lossy compression methods, especially when used at low bit rates, introduce compression artifacts. An image reconstructed following lossy compression contains degradation relative to the original; often this is because the compression scheme completely discards redundant information. However, lossy schemes are capable of achieving much higher compression. Lossy methods are especially suitable for natural images
such as photos in applications where minor (sometimes imperceptible) loss of fidelity is acceptable to achieve a substantial reduction in bit rate. The lossy compression that produces imperceptible differences can be called visually lossless [8]. Predictive versus Transform coding: In predictive coding, information already sent or available is used to predict future values, and the difference is coded. Since this is done in the image or spatial domain, it is relatively simple to implement and is readily adapted to local image characteristics. Differential Pulse Code Modulation (DPCM) is one particular example of predictive coding. Transform coding, on the other hand, first transforms the image from its spatial domain representation to a different type of representation using some well-known transform and then codes the transformed values (coefficients). This method provides greater data compression compared to predictive methods, although at the expense of greater computational requirements.
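To make the predictive-coding idea concrete, here is a minimal previous-sample DPCM sketch in Python. It is a simplified illustration only: the predictor, the example pixel row and the absence of residual quantization are assumptions for this sketch, not the scheme evaluated in this paper.

import numpy as np

def dpcm_encode(row):
    # Predict each pixel by the previous pixel in the row and code the difference.
    row = np.asarray(row, dtype=int)
    residual = np.empty_like(row)
    residual[0] = row[0]                 # first sample is sent as-is
    residual[1:] = row[1:] - row[:-1]    # prediction errors (small for smooth images)
    return residual

def dpcm_decode(residual):
    # The cumulative sum exactly inverts the previous-sample predictor (lossless here;
    # a lossy DPCM would quantize the residuals before transmission).
    return np.cumsum(residual)

row = [100, 102, 103, 103, 101, 98]
res = dpcm_encode(row)
print(res)                  # [100   2   1   0  -2  -3]
print(dpcm_decode(res))     # [100 102 103 103 101  98]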
Image compression consists of a transformer, a quantizer and an encoder.
Transformer: It transforms the input data into a format that reduces the interpixel redundancies in the input image. Transform coding techniques use a reversible, linear mathematical transform to map the pixel values onto a set of coefficients, which are then quantized and encoded. The key factor behind the success of transform-based coding schemes is that many of the resulting coefficients for most natural images have small magnitudes and can be quantized without causing significant distortion in the decoded image. For compression purposes, the higher the capability of compressing information into fewer coefficients, the better the transform; for that reason, the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) have become the most widely used transform coding techniques. Transform coding algorithms usually start by partitioning the original image into sub-images (blocks) of small size, usually 8 x 8. For each block the transform coefficients are calculated, effectively converting the original 8 x 8 array of pixel values into an array of coefficients in which the coefficients closer to the top-left corner usually contain most of the information needed to quantize and encode (and eventually reverse the process at the decoder's side) the image with little perceptual distortion. The resulting coefficients are then quantized, and the output of the quantizer is used by symbol encoding techniques to produce the output bit stream representing the encoded image. In the image decompression model at the decoder's side, the reverse process takes place, with the obvious difference that the dequantization stage can only generate an approximated version of the original coefficient values; whatever loss was introduced by the quantizer in the encoder stage is not reversible.
Quantizer: It reduces the accuracy of the transformer's output in accordance with some pre-established fidelity criterion, reducing the psychovisual redundancies of the input image. This operation is not reversible and must be omitted if lossless compression is desired. The quantization stage is at the core of any lossy image encoding algorithm. Quantization, at the encoder side, means partitioning the range of input values into a smaller set of values. There are two main types of quantizers: scalar quantizers and vector quantizers. A scalar quantizer partitions the domain of input values into a smaller number of intervals. If the output intervals are equally spaced, which is the simplest way to do it, the process is called uniform scalar quantization; otherwise, for reasons usually related to the minimization of total distortion, it is called non-uniform scalar quantization. One of the most popular non-uniform quantizers is the Lloyd-Max quantizer. Vector quantization (VQ) techniques extend the basic principles of scalar quantization to multiple dimensions.
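The following Python sketch ties the transformer and quantizer stages together for a single 8 x 8 block. It is only an illustration under assumptions: an orthonormal DCT-II basis built by hand and a single uniform step size, whereas practical codecs such as JPEG use per-coefficient quantization tables.

import numpy as np

def dct_matrix(N=8):
    # Orthonormal DCT-II basis: C[k, n] = s(k) * cos(pi * (2n + 1) * k / (2N)),
    # with s(0) = sqrt(1/N) and s(k) = sqrt(2/N) for k > 0.
    n = np.arange(N)
    C = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * N)) * np.sqrt(2.0 / N)
    C[0, :] /= np.sqrt(2.0)
    return C

def transform_and_quantize(block, step=16):
    C = dct_matrix(block.shape[0])
    coeffs = C @ block @ C.T                  # 2-D DCT: energy concentrates near the top-left corner
    return np.round(coeffs / step).astype(int)   # uniform scalar quantization (the lossy step)

def dequantize_and_invert(q, step=16):
    C = dct_matrix(q.shape[0])
    coeffs = q * step                         # only an approximation of the original coefficients
    return C.T @ coeffs @ C                   # inverse 2-D DCT

block = np.tile(np.arange(8) * 8.0, (8, 1))   # a smooth 8 x 8 test block (horizontal ramp)
q = transform_and_quantize(block)
recon = dequantize_and_invert(q)
print(np.count_nonzero(q), "non-zero coefficients out of 64")
print(np.max(np.abs(recon - block)))          # reconstruction error introduced by quantization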
Symbol (entropy) encoder: It creates a fixed- or variable-length code to represent the quantizer's output and maps the output in accordance with the code. In most cases a variable-length code is used. The entropy encoder further compresses the quantized values to provide more efficient overall compression. The most important types of entropy encoders used in lossy image compression techniques are the arithmetic encoder, the Huffman encoder and the run-length encoder.
3. Image Compression using Discrete Wavelet Transform: The wavelet transform has become an important method for image compression. Wavelet-based coding provides substantial improvement in picture quality at high compression ratios, mainly due to the better energy compaction property of wavelet transforms. The wavelet transform partitions a signal into a set of functions called wavelets. Wavelets are obtained from a single prototype wavelet, called the mother wavelet, by dilations and shifts. The wavelet transform is computed separately for different segments of the time-domain signal at different frequencies.
3.1 Subband coding: The DWT is calculated by passing a signal through a series of filters. The procedure starts by passing the signal through a half-band digital low-pass filter with impulse response h[n]. Filtering a signal is numerically equal to convolution of the time signal with the impulse response of the filter:
x[n] * h[n] = Σ_k x[k] h[n - k]
A half-band low-pass filter removes all frequencies that are above half of the highest frequency in the signal. The signal is then passed through a half-band high-pass filter g[n]. The two filters are related to each other by
h[L - 1 - n] = (-1)^n g[n]
Filters satisfying this condition are known as quadrature mirror filters. After filtering, half of the samples can be eliminated, since the signal now has a highest frequency of half the original. The signal can therefore be subsampled by 2, simply by discarding every other sample. This constitutes one level of decomposition and can be expressed mathematically as
y1[n] = Σ_k x[k] h[2n - k]
y2[n] = Σ_k x[k] g[2n + 1 - k]
where y1[n] and y2[n] are the outputs of the low-pass and high-pass filters, respectively, after subsampling by 2. This decomposition halves the time resolution, since only half the number of samples now characterizes the whole signal; the frequency resolution is doubled, because each output has half the frequency band of the input. This process is called subband coding. It can be repeated further to increase the frequency resolution, as shown by the filter bank.
Figure: Filter bank.
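As a sketch of one level of the decomposition shown by the filter bank above, the following Python fragment uses the two-tap Haar pair as the half-band low-pass/high-pass filters; the filter choice and the subsampling phase are assumptions made only to keep the example short.

import numpy as np

# Haar analysis filters: h is the half-band low-pass, g the matching high-pass.
h = np.array([1.0, 1.0]) / np.sqrt(2.0)
g = np.array([1.0, -1.0]) / np.sqrt(2.0)

def analysis_one_level(x):
    # Convolve with each filter, then subsample by 2 (discard every other sample),
    # mirroring y1[n] = Σ_k x[k] h[2n - k] and its high-pass counterpart.
    lo = np.convolve(x, h)[1::2]
    hi = np.convolve(x, g)[1::2]
    return lo, hi

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
lo, hi = analysis_one_level(x)
print(lo)   # coarse (low-pass) subband, half as many samples
print(hi)   # detail (high-pass) subband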
A wavelet-based compression scheme then proceeds through the following steps [9].
Digitization: The image is digitized first. The digitized image can be characterized by its intensity levels, or scales of gray, which range from 0 (black) to 255 (white), and by its resolution, i.e., how many pixels per square inch it contains [9].
Thresholding: In certain signals, many of the wavelet coefficients are close or equal to zero. Through thresholding, these coefficients are modified so that the sequence of wavelet coefficients contains long strings of zeros. In hard thresholding, a threshold (tolerance) is selected; any wavelet coefficient whose absolute value falls below the tolerance is set to zero, with the goal of introducing many zeros without losing a great amount of detail.
Quantization: Quantization converts the sequence of floating-point numbers w into a sequence of integers q. The simplest form is to round each number to the nearest integer. Another method is to multiply each number in w by a constant k and then round to the nearest integer. Quantization is called lossy because it introduces error into the process, since the conversion of w to q is not a one-to-one function [9].
Entropy encoding: With this method, the integer sequence q is changed into a shorter sequence e, with the numbers in e being 8-bit integers. The conversion is made by an entropy encoding table. Strings of zeros are coded by the numbers 1 through 100, 105 and 106, while the non-zero integers in q are coded by 101 through 104 and 107 through 254.
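A compact Python sketch of the thresholding and quantization steps just described is given below. The tolerance and the scaling constant k are arbitrary illustrative choices, and the entropy encoding table of [9] is not reproduced; the sketch only counts the zero runs such a table would exploit.

import numpy as np

def hard_threshold(w, tol):
    # Set every wavelet coefficient whose magnitude falls below the tolerance to zero.
    w = np.asarray(w, dtype=float)
    return np.where(np.abs(w) < tol, 0.0, w)

def quantize(w, k=10.0):
    # Multiply by a constant k and round to the nearest integer (lossy, not one-to-one).
    return np.round(np.asarray(w) * k).astype(int)

def zero_runs(q):
    # Lengths of the strings of zeros that an entropy encoding table would code compactly.
    runs, count = [], 0
    for v in q:
        if v == 0:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return runs

w = [0.91, 0.02, -0.01, 0.0, 0.0, 0.44, -0.03, 0.0, 1.2]
q = quantize(hard_threshold(w, tol=0.1))
print(q)             # [ 9  0  0  0  0  4  0  0 12]
print(zero_runs(q))  # [4, 2]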
3.3 DWT Results: Results obtained with the MATLAB code [20] are shown below. Figure 1 shows the original Lena image; Figures 2 and 3 show the compressed images for various threshold values. As the threshold value increases, blurring of the image increases.
4 CONCLUSIONS: DWT is used as the basis for transformation in the JPEG 2000 standard. DWT provides high-quality compression at low bit rates. The use of larger DWT basis functions or wavelet filters produces blurring near edges in images. DWT performs better than DCT in the sense that it avoids the blocking artifacts which degrade reconstructed images. However, DWT provides lower quality than JPEG at low compression ratios.
References:
[1] R. C. Gonzalez and R. E. Woods, Digital Image Processing, Second edition, pp. 411-514, 2004.
[2] N. Ahmed, T. Natarajan, and K. R. Rao, "Discrete cosine transform," IEEE Trans. on Computers, Vol. C-23, pp. 90-93, 1974.
[3] A. S. Lewis and G. Knowles, "Image Compression Using the 2-D Wavelet Transform," IEEE Trans. on Image Processing, Vol. 1, No. 2, pp. 244-250, April 1992.
[4] Amir Averbuch, Danny Lazar, and Moshe Israeli, "Image Compression Using Wavelet Transform and Multiresolution Decomposition," IEEE Trans. on Image Processing, Vol. 5, No. 1, January 1996.
[5] M. Antonini, M. Barlaud, and I. Daubechies, "Image Coding Using Wavelet Transform," IEEE Trans. on Image Processing, Vol. 1, No. 2, pp. 205-220, April 1992.
[6] Robert M. Gray and David L. Neuhoff, "Quantization," IEEE Trans. on Information Theory, Vol. 44, No. 6, pp. 2325-2383, October 1998 (invited paper).
[7] Ronald A. DeVore, Bjorn Jawerth, and Bradley J. Lucier, "Image Compression Through Wavelet Transform Coding," IEEE Trans. on Information Theory, Vol. 38, No. 2, pp. 719-746, March 1992.
[8] http://en.wikipedia.org/Image_compression
[9] Greg Ames, "Image Compression," Dec 07, 2002.