
DATA COMPRESSION - LEC#2

College: College of Science for Women     Department: Computer Science     Stage: 4
Course instructor: علي كاظم محمد هداب الغرابات       28/10/2016 15:27:44
Compression System Model
The compression system model consists of two parts: the compressor (encoder) and the decompressor (decoder). The compressor consists of a preprocessing stage followed by an encoding stage, whereas the decompressor consists of a decoding stage followed by a post-processing stage (Figure 1.1). Before encoding, preprocessing is performed to prepare the image for the encoding process; it consists of a number of operations that are application specific. After the compressed file has been decoded, post-processing can be performed to eliminate some of the potentially undesirable artifacts brought about by the compression process. In practice, many compression algorithms are a combination of a number of different individual compression techniques.
The compressor can be further broken down into the stages illustrated in Figure (1.2). The first stage in preprocessing is data reduction. Here, the image data can be reduced by gray-level and/or spatial quantization, or can undergo any desired image enhancement (for example, noise removal). The second step in preprocessing is the mapping process, which maps the original image data into another mathematical space where the data are easier to compress. Next, as part of the encoding process, comes the quantization stage, which takes the potentially continuous data from the mapping stage and puts it in discrete form. The final stage of encoding is the coding stage, which maps the discrete data from the quantizer onto a code in an optimal manner. A compression algorithm may consist of all of these stages, or of only one or two of them.
The decompressor can be further broken down into the stages shown in Figure (1.3). Here the decoding process is divided into two stages. The first, the decoding stage, takes the compressed file and reverses the original coding by mapping the codes back to the original, quantized values. Next, these values are processed by a stage that performs an inverse mapping to reverse the original mapping process. Finally, the image may be post-processed to enhance the final image. In some cases this is done to reverse the preprocessing, for example, enlarging an image that was shrunk in the data reduction step. In other cases the post-processing may simply enhance the image to ameliorate any artifacts from the compression process itself.
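The staged model described above can be sketched as a pipeline of functions. This is only an illustrative skeleton: the stage functions below (preprocess, quantize, dequantize) are hypothetical placeholders standing in for whatever mapping, quantization, and coding a real algorithm would use, not any particular codec.

```python
# Sketch of the compression system model: compressor = preprocessing + encoding,
# decompressor = decoding + post-processing. Placeholder stages only.

def preprocess(pixels):
    """Data reduction / mapping stage (identity placeholder here)."""
    return pixels

def quantize(values, step=16):
    """Quantization stage: map fine-grained values onto a discrete set."""
    return [v // step for v in values]

def dequantize(codes, step=16):
    """Inverse mapping stage: map codes back toward the original range."""
    return [v * step for v in codes]

def compress(pixels):
    # Preprocessing followed by encoding (coding stage omitted for brevity).
    return quantize(preprocess(pixels))

def decompress(codes):
    # Decoding followed by post-processing (post-processing omitted).
    return dequantize(codes)

original = [0, 17, 34, 200, 255]
restored = decompress(compress(original))  # lossy: values rounded to steps of 16
```

Note that the round trip is lossy here because the quantization stage discards information; this is exactly the stage a lossless scheme would omit or make reversible.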

Figure (1.1) Compression System Model: (a) Compression; (b) Decompression


Figure (1.2) The Compressor


Figure (1.3) The Decompressor

Fidelity Criteria
The key in image compression algorithm development is to determine the minimal data required to retain the necessary information. This is achieved by taking advantage of the redundancy that exists in images. To determine exactly what information is important, and to be able to measure image fidelity, we need to define an image fidelity criterion. Note that the information required is application specific, and that, with lossless schemes, there is no need for a fidelity criterion. Fidelity criteria can be divided into two classes:
1. Objective fidelity criteria: these criteria are borrowed from digital signal processing and information theory and provide us with equations that can be used to measure the amount of error in the reconstructed (decompressed) image.
Commonly used objective measures are the root-mean-square error (RMSE), the root-mean-square signal-to-noise ratio (SNR_RMS), and the peak signal-to-noise ratio (SNR_PEAK). We can define the error between an original, uncompressed pixel value and the reconstructed (decompressed) pixel value as:

error(r,c) = g(r,c) - I(r,c)

where I(r,c) : the original image
g(r,c) : the decompressed image
r,c : row and column coordinates
Next, we can define the total error in an (N × N) decompressed image as:

Total error = Σ(r=0 to N-1) Σ(c=0 to N-1) [g(r,c) - I(r,c)]
The root-mean-square error is found by taking the square root of the total squared error divided by the total number of pixels in the image (i.e., the mean squared error):

RMSE = sqrt( (1/N²) Σ(r=0 to N-1) Σ(c=0 to N-1) [g(r,c) - I(r,c)]² )
The smaller the value of these error metrics, the better the compressed image represents the original image. Alternately, with the signal-to-noise (SNR) metrics, a larger number implies a better image. The SNR metrics consider the decompressed image g(r,c) to be the "signal" and the error to be the "noise". We can define the root-mean-square signal-to-noise ratio as:

SNR_RMS = sqrt( Σ(r=0 to N-1) Σ(c=0 to N-1) [g(r,c)]² / Σ(r=0 to N-1) Σ(c=0 to N-1) [g(r,c) - I(r,c)]² )
Another related metric, the peak signal-to-noise ratio, is defined as:

SNR_PEAK = 10 log10( (L-1)² / [ (1/N²) Σ(r=0 to N-1) Σ(c=0 to N-1) (g(r,c) - I(r,c))² ] )

where L : the number of gray levels (e.g., for 8 bits, L = 256).
These objective measures are often used in research because they are easy to generate and seemingly unbiased; however, they are not necessarily correlated with our perception of an image.
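The three objective measures above can be computed directly from their definitions. The following is a minimal sketch in plain Python (function names are my own); a real implementation would typically use NumPy arrays instead of nested lists.

```python
import math

def _sums(I, g):
    """Sum of squared signal values and sum of squared errors over all pixels."""
    sig2 = err2 = 0
    for r in range(len(I)):
        for c in range(len(I[0])):
            sig2 += g[r][c] ** 2
            err2 += (g[r][c] - I[r][c]) ** 2
    return sig2, err2

def rmse(I, g):
    """Root-mean-square error between original I and decompressed g."""
    n = len(I) * len(I[0])
    _, err2 = _sums(I, g)
    return math.sqrt(err2 / n)

def snr_rms(I, g):
    """Root-mean-square signal-to-noise ratio: signal is g, noise is the error."""
    sig2, err2 = _sums(I, g)
    return math.sqrt(sig2 / err2)

def snr_peak(I, g, L=256):
    """Peak SNR in decibels, for L gray levels (L = 256 for 8-bit images)."""
    n = len(I) * len(I[0])
    _, err2 = _sums(I, g)
    return 10 * math.log10((L - 1) ** 2 / (err2 / n))

I = [[100, 100], [100, 100]]   # original image
g = [[101,  99], [101,  99]]   # decompressed image, off by 1 at every pixel
```

With every pixel off by exactly 1, the RMSE is 1.0 and the peak SNR is 10·log10(255²) ≈ 48.13 dB, which matches the formulas above.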
2. Subjective fidelity criteria: these criteria require the definition of a qualitative scale to assess image quality. This scale can then be used by human test subjects to determine image fidelity. In order to provide unbiased results, evaluation with subjective measures requires careful selection of the test subjects and carefully designed evaluation experiments. Subjective measures are the better method for comparing compression algorithms when the goal is to achieve high-quality images as defined by visual perception.

Compression performance
The performance of a compression algorithm can be measured by various criteria; which one matters most depends on the application's priorities. We could measure the relative complexity of the algorithm, the memory required to implement it, how fast it performs on a given machine, and how closely the reconstruction resembles the original.

A very logical way of measuring how well a compression algorithm compresses a given set of data is to compare the size of the data before compression with its size after compression.

There are several ways of measuring the compression effect:
• Compression ratio: this is simply the ratio of the size after compression to the size before compression. Values greater than 1 imply an output stream bigger than the input stream (negative compression). The compression ratio can also be expressed in bpb (bits per bit).

Compression ratio = size after compression / size before compression

• Compression factor: this is the inverse of the compression ratio. In this case, values greater than 1 indicate compression and values less than 1 imply expansion. This measure seems natural to many people, since the bigger the factor, the better the compression.

Compression factor = size before compression / size after compression


• Saving percentage: this shows the shrinkage as a percentage:

Saving percentage = (size before compression - size after compression) / size before compression × 100%



Example: a source image file (256 × 256) of 65,536 bytes is compressed into a file of 16,384 bytes. The compression ratio is 16,384/65,536 = 1/4, the compression factor is 4, and the saving percentage is 75%.
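The three measures can be checked against the example above with a few lines of Python (sizes are assumed to be in bytes; the function names are my own):

```python
def compression_ratio(size_before, size_after):
    # < 1 means the file shrank; > 1 means negative compression (expansion)
    return size_after / size_before

def compression_factor(size_before, size_after):
    # Inverse of the ratio: > 1 means the file shrank
    return size_before / size_after

def saving_percentage(size_before, size_after):
    # Shrinkage as a percentage of the original size
    return (size_before - size_after) / size_before * 100

before, after = 65_536, 16_384  # the 256 × 256 image example above
print(compression_ratio(before, after))   # 0.25
print(compression_factor(before, after))  # 4.0
print(saving_percentage(before, after))   # 75.0
```

This reproduces the figures in the example: ratio 1/4, factor 4, saving 75%.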

