A Review of Image Compression Techniques

Muthana S. Mahdi

Mustansiriyah University,

Baghdad, Iraq

muthanasalih007@gmail.com

 

Marwan Saad Kadhim

Mustansiriyah University,

Baghdad, Iraq

Marwan_saad_k@yahoo.com

 

ABSTRACT

Research in the field of image compression is driven by the ever-increasing bandwidth requirements for transmitting images in computer, mobile and internet environments. In this context, this survey summarizes the major image compression methods, spanning both lossy and lossless techniques. The paper concludes that research opportunities still exist in this field for more efficient image compression.

General Terms

Image compression, Huffman coding, low bit-rate transmission, wavelets.

1-      INTRODUCTION

Image compression, as a specialized discipline of electronic engineering, has been gaining considerable attention on account of its applicability to various fields. Compressed image transmission economizes bandwidth and therefore ensures cost-effectiveness. Application areas today range from mobile, TV and broadcasting of high-definition TV up to very high-quality applications such as professional digital video recording and digital cinema/large-screen digital imagery. This has led to enhanced interest in developing tools and algorithms for very low bit-rate image coding. An image is a two-dimensional (2-D) signal processed by the human visual system. The signals representing images are usually in analogue form; they are converted to digital form for processing, storage and transmission. Digitally, an image is a two-dimensional array of pixels. Images form a significant part of data in fields such as remote sensing, biomedical imaging and large-screen digital imagery. Given the ever-increasing bandwidth requirements, image compression continues to be a critical focus of research [1].

2-      IMAGE COMPRESSION

Image compression is applied to reduce the amount of data required to represent a digital image. The amount of data associated with visual information is so large that its storage would require enormous capacity. Compression yields a compact representation of an image for storage or transmission. It is achieved by removing three basic data redundancies: coding, interpixel and psychovisual redundancies. Image compression is broadly classified into lossless and lossy compression [2].
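To make the notion of coding redundancy concrete, the following minimal Python sketch (an added illustration, assuming an 8-bit grayscale image stored in a NumPy array) estimates the first-order entropy of the gray-level histogram; the gap between 8 bits per pixel and this entropy is what entropy coding of individual pixels can hope to remove.

    import numpy as np

    def first_order_entropy(image):
        """Estimate entropy (bits/pixel) from the gray-level histogram."""
        hist = np.bincount(image.ravel(), minlength=256).astype(float)
        p = hist / hist.sum()          # gray-level probabilities
        p = p[p > 0]                   # ignore unused levels
        return float(-(p * np.log2(p)).sum())

    # A synthetic image with a highly skewed histogram is highly redundant.
    rng = np.random.default_rng(0)
    probs = np.full(256, 0.1 / 255)
    probs[0] = 0.9                     # level 0 dominates the image
    img = rng.choice(256, size=(256, 256), p=probs).astype(np.uint8)
    print(f"{first_order_entropy(img):.2f} bits/pixel vs. 8 bits/pixel raw")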

2-1     Lossless Compression Techniques

The defining feature of lossless compression is that the original image can be perfectly recovered from the compressed image. It is also known as entropy coding, since it uses decomposition techniques to eliminate or minimize redundancy. Lossless compression is mainly used for applications such as medical imaging, where image quality is critical. The following methods fall under lossless compression: run-length encoding, Huffman encoding, LZW coding and area coding [3].

2-1-1             Run-length encoding

Run-length encoding is an image compression method that works by counting the number of adjacent pixels with the same grey-level value. This count, called the run length, is then coded and stored. The number of bits used for the coding depends on the number of pixels in a row: if a row has 2^n pixels, the run length requires n bits. A 256 x 256 image therefore requires 8 bits per run length, since 2^8 = 256 [4].
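A minimal sketch of the idea follows (assuming a single row of gray-level values as a Python list; a real coder would additionally pack each count into the fixed bit width discussed above):

    def rle_encode(row):
        """Encode a row of pixel values as (value, run_length) pairs."""
        runs = []
        for value in row:
            if runs and runs[-1][0] == value:
                runs[-1][1] += 1          # extend the current run
            else:
                runs.append([value, 1])   # start a new run
        return [(v, n) for v, n in runs]

    def rle_decode(runs):
        """Expand (value, run_length) pairs back into the original row."""
        return [v for v, n in runs for _ in range(n)]

    row = [5, 5, 5, 9, 9, 2, 2, 2, 2]
    encoded = rle_encode(row)
    assert rle_decode(encoded) == row     # lossless round trip
    print(encoded)                        # [(5, 3), (9, 2), (2, 4)]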

2-1-2   Huffman encoding

Huffman coding generates a code that is as close as possible to the minimum bound, the entropy, and results in variable-length codes. For complex images, Huffman coding alone will reduce the file size by 10 to 50%. By removing irrelevant information first, further reduction is possible [5].
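A compact sketch of Huffman tree construction using Python's heapq module (a minimal illustration, not a production coder; ties in the heap are broken by an insertion counter):

    import heapq
    from collections import Counter

    def huffman_codes(data):
        """Build a Huffman code table {symbol: bitstring} from frequencies."""
        freq = Counter(data)
        # Heap entries: (frequency, tiebreaker, node); a node is a symbol
        # (leaf) or a (left, right) pair (internal node).
        heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        count = len(heap)
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)   # two least-frequent nodes
            f2, _, right = heapq.heappop(heap)
            heapq.heappush(heap, (f1 + f2, count, (left, right)))
            count += 1
        codes = {}
        def walk(node, prefix):
            if isinstance(node, tuple):
                walk(node[0], prefix + "0")
                walk(node[1], prefix + "1")
            else:
                codes[node] = prefix or "0"     # lone-symbol edge case
        walk(heap[0][2], "")
        return codes

    print(huffman_codes("AAAABBBCCD"))  # frequent symbols get shorter codes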

2-1-3   LZW coding

LZW (Lempel-Ziv-Welch) coding is a dictionary-based method that can be static or dynamic. In static dictionary coding, the dictionary is fixed during the encoding and decoding processes; in dynamic dictionary coding, the dictionary is updated on the fly. LZW is widely used in the computer industry and is implemented, for example, as the compress command on UNIX [6].
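A minimal dynamic-dictionary LZW encoder (a sketch assuming byte-string input; the dictionary starts with all single bytes and grows on the fly, as described above):

    def lzw_encode(data):
        """LZW: emit the code of the longest known prefix, then extend."""
        dictionary = {bytes([i]): i for i in range(256)}  # single-byte entries
        w = b""
        out = []
        for byte in data:
            wc = w + bytes([byte])
            if wc in dictionary:
                w = wc                            # keep extending the phrase
            else:
                out.append(dictionary[w])         # emit longest known prefix
                dictionary[wc] = len(dictionary)  # learn the new phrase
                w = bytes([byte])
        if w:
            out.append(dictionary[w])
        return out

    print(lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT"))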

2-1-4 Area coding

Area coding is an enhanced form of run-length coding that reflects the two-dimensional character of images. It is a significant advancement over the other lossless methods. Interpreting an image as a sequential stream makes little sense, since it is in fact an array of sequences building up a two-dimensional object. The idea is to find rectangular regions with the same characteristics and code each region in a descriptive form as an element with two corner points and a certain structure. Area coding is highly effective and can give high compression ratios, but it is non-linear, which prevents implementation in hardware [7].
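The paper does not fix a concrete algorithm for finding the uniform regions, but a quadtree-style recursive split into uniform square blocks captures the spirit: subdivide until a block holds a single gray level, then describe each uniform block by its corner, size and value. The following is an assumed illustration, not the specific method of [7]:

    import numpy as np

    def uniform_blocks(img, r=0, c=0, size=None):
        """Recursively split img into uniform blocks: (row, col, size, value)."""
        if size is None:
            size = img.shape[0]           # assumes a square, power-of-two image
        block = img[r:r + size, c:c + size]
        if block.min() == block.max():    # whole block has one gray level
            return [(r, c, size, int(block[0, 0]))]
        half = size // 2
        blocks = []
        for dr in (0, half):
            for dc in (0, half):
                blocks += uniform_blocks(img, r + dr, c + dc, half)
        return blocks

    img = np.zeros((8, 8), dtype=np.uint8)
    img[:4, :4] = 200                     # one large uniform region
    print(uniform_blocks(img))            # four blocks describe the whole image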

2-2     Lossy Compression Techniques

Lossy compression provides a higher compression ratio than lossless compression. In this approach the decompressed image is not exactly identical to the original image, but close to it. Different lossy techniques are widely used, characterized by the quality of the reconstructed images and their adequacy for particular applications. The quantization step applied in lossy compression is what loses information. After quantization, entropy coding is performed, as in lossless compression. Decoding is the reverse process: entropy decoding is applied to the compressed data to recover the quantized data, dequantization is applied to that, and finally the inverse transform is performed to obtain the reconstructed image. The methods that fall under lossy compression are discussed below [8].
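The information loss introduced by quantization can be seen directly with a uniform scalar quantizer (a minimal sketch; the step size is an assumed parameter):

    import numpy as np

    def quantize(x, step):
        """Uniform scalar quantization: map values to integer bin indices."""
        return np.round(x / step).astype(int)

    def dequantize(q, step):
        """Reconstruct approximate values from bin indices."""
        return q * step

    x = np.array([12.3, 40.7, 99.1, 130.4])
    q = quantize(x, step=16)          # this rounding loses information
    x_hat = dequantize(q, step=16)
    print(x_hat)                      # close to x, but not identical
    print(np.abs(x - x_hat).max())    # error bounded by step/2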

2-2-1       Vector Quantization

In vector quantization, a dictionary (codebook) of fixed-size vectors is developed, and each image vector is encoded as the index of its closest match in the dictionary; normally, entropy coding is then applied to the indices. Vector quantization exploits the linear and non-linear dependence that exists among the components of a vector, and it remains superior to scalar quantization even when the components of the random vector are statistically independent of each other [9].
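A minimal vector quantizer over 2x2 image blocks (a sketch: the codebook is trained with a few Lloyd/k-means iterations in NumPy; the codebook size and block shape are assumed for illustration):

    import numpy as np

    def train_codebook(vectors, k=16, iters=10, seed=0):
        """Lloyd iterations: assign to nearest codeword, recompute means."""
        rng = np.random.default_rng(seed)
        codebook = vectors[rng.choice(len(vectors), k, replace=False)]
        for _ in range(iters):
            d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
            nearest = d.argmin(axis=1)
            for j in range(k):
                if (nearest == j).any():
                    codebook[j] = vectors[nearest == j].mean(axis=0)
        return codebook

    def vq_encode(vectors, codebook):
        """Replace each vector by the index of its nearest codeword."""
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1)

    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, (64, 64)).astype(float)
    blocks = img.reshape(32, 2, 32, 2).transpose(0, 2, 1, 3).reshape(-1, 4)
    codebook = train_codebook(blocks)
    indices = vq_encode(blocks, codebook)   # the compressed representation
    reconstructed = codebook[indices]       # decode: look up codewords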

2-2-2             Fractal Coding

First, the image is decomposed into segments using standard image processing techniques such as edge detection, color separation, and spectrum and texture analysis. Each segment is then looked up in a library of fractals. The fractal library contains codes called iterated function system (IFS) codes, which are compact sets of numbers. A set of codes for a given image is determined using a systematic procedure, such that when the IFS codes are applied to a suitable set of image blocks, they yield an image that is a very close approximation of the original. This scheme is highly effective for compressing images that have good regularity and self-similarity [10].
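A complete fractal image coder is beyond the scope of a short sketch, but the decoding half, iterating a set of IFS codes toward their fixed point, can be shown compactly. The sketch below is a standard illustration (not the coder of [10]): it runs the "chaos game" on three contractive affine maps whose attractor is the Sierpinski triangle, showing how an image emerges from the codes alone.

    import random

    # IFS codes: three contractive affine maps (scale 0.5 plus a translation).
    MAPS = [
        lambda x, y: (0.5 * x,        0.5 * y),
        lambda x, y: (0.5 * x + 0.5,  0.5 * y),
        lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
    ]

    def ifs_attractor(n_points=20000, seed=0):
        """Chaos game: apply a random map repeatedly; points settle on the attractor."""
        random.seed(seed)
        x, y = 0.0, 0.0
        points = []
        for i in range(n_points):
            x, y = random.choice(MAPS)(x, y)
            if i > 20:                 # skip transient points before convergence
                points.append((x, y))
        return points

    points = ifs_attractor()
    print(len(points), points[:3])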

2-2-3             Block truncation coding

The principle applied here is that the image is divided into non-overlapping blocks of pixels. For each block, the mean of the pixel values is taken as a threshold, and a bitmap of the block is created by setting pixels whose values are greater than or equal to the threshold to one and the remaining pixels to zero. Then, for each group (the ones and the zeros) in the bitmap, a reconstruction value is determined as the average of the values of the corresponding pixels in the original block [11].
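A minimal block truncation coder for a single block (a sketch following the description above: the block mean is the threshold, and the mean of each group is its reconstruction value):

    import numpy as np

    def btc_block(block):
        """Encode one block as (bitmap, low, high)."""
        threshold = block.mean()
        bitmap = block >= threshold           # 1 where pixel >= mean, else 0
        high = block[bitmap].mean()           # reconstruction value for the 1s
        low = block[~bitmap].mean() if (~bitmap).any() else high
        return bitmap, low, high

    def btc_decode(bitmap, low, high):
        """Rebuild the block from the bitmap and the two reconstruction values."""
        return np.where(bitmap, high, low)

    block = np.array([[10, 12, 200, 210],
                      [11, 13, 205, 198],
                      [ 9, 14, 202, 207],
                      [12, 10, 199, 204]], dtype=float)
    bitmap, low, high = btc_block(block)
    print(btc_decode(bitmap, low, high))      # two-level approximation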

2-2-4             Sub-band coding

In sub-band coding, the image is analyzed to find the components containing frequencies in different bands, the sub-bands. Quantization and coding are then performed for each sub-band. The main advantage of this approach is that the quantization and coding for each sub-band can be designed separately [12].
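A one-level 1-D Haar analysis illustrates the idea: the signal is split into a low-frequency band (averages) and a high-frequency band (differences), each of which could then be quantized and coded with its own settings. This is a minimal sketch; real sub-band coders use longer filters and operate in 2-D.

    import numpy as np

    def haar_analysis(signal):
        """Split a signal into a low band (averages) and a high band (differences)."""
        even, odd = signal[0::2], signal[1::2]
        low = (even + odd) / np.sqrt(2)    # smooth sub-band: coarse approximation
        high = (even - odd) / np.sqrt(2)   # detail sub-band: mostly small values
        return low, high

    def haar_synthesis(low, high):
        """Invert the split to recover the original samples."""
        even = (low + high) / np.sqrt(2)
        odd = (low - high) / np.sqrt(2)
        out = np.empty(2 * len(low))
        out[0::2], out[1::2] = even, odd
        return out

    x = np.array([4.0, 6.0, 10.0, 12.0, 14.0, 14.0, 8.0, 6.0])
    low, high = haar_analysis(x)
    assert np.allclose(haar_synthesis(low, high), x)  # perfect reconstruction
    print(low, high)   # each band can be quantized and coded separately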

2-2-5             Transformation Coding

Here a block of data is unitarily transformed so that a large fraction of its total energy is packed into relatively few transform coefficients, which are quantized independently. Transforms such as the DFT (Discrete Fourier Transform) and the DCT (Discrete Cosine Transform) are used to convert the pixels of the original image into transform coefficients. Thanks to the energy-compaction property, most of the energy of the original data is concentrated in only a few significant transform coefficients; those few coefficients are selected and the rest are discarded. The selected coefficients are then quantized and entropy encoded. DCT coding has been the most common approach to transform coding and is also adopted in JPEG [13].
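The energy-compaction step can be demonstrated with an 8x8 2-D DCT: transform a block, keep only the largest-magnitude coefficients, and invert. This minimal sketch uses scipy.fft; the number of kept coefficients is an assumed parameter.

    import numpy as np
    from scipy.fft import dctn, idctn

    def compress_block(block, keep=8):
        """2-D DCT, keep the `keep` largest-magnitude coefficients, invert."""
        coeffs = dctn(block, norm="ortho")      # energy packs into few coefficients
        cutoff = np.sort(np.abs(coeffs).ravel())[-keep]
        coeffs[np.abs(coeffs) < cutoff] = 0.0   # discard insignificant coefficients
        return idctn(coeffs, norm="ortho")

    smooth = np.outer(np.linspace(0, 255, 8), np.ones(8))  # a smooth 8x8 block
    approx = compress_block(smooth, keep=4)
    print(np.abs(smooth - approx).max())  # modest error from 4 of 64 coefficients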

3        CONCLUSION

Though extensive research has been taking place in this area, and keeping in view the ever-increasing need for low bit-rate compression methods, scope exists both for new methods and for evolving more efficient algorithms within the existing ones. This review makes clear that the field will continue to interest researchers in the days to come.

4    REFERENCES

[1] Woods, R. C. 2008. Digital Image Processing. Pearson Prentice Hall, New Delhi, Third Edition, Low Price Edition, pp. 1-904.
[2] Grgic, S. et al. 2001. Comparison of JPEG Image Coders. Proceedings of the 3rd International Symposium on Video Processing and Multimedia Communications, Zadar, Croatia, June, pp. 79-85.
[3] Austin, D. 2011. Image Compression: Seeing What's Not There. Feature Column, American Mathematical Society, May.
[4] Yang, E.-h. and Wang, L. 2009. Joint Optimization of Run-Length Coding, Huffman Coding, and Quantization Table with Complete Baseline JPEG Decoder Compatibility. IEEE Transactions on Image Processing, Vol. 18, No. 1.
[5] Singh, R. et al. (undated). JPEG 2000: Wavelet Based Image Compression. EE678 Wavelet Application Assignment 1, IIT Bombay, Mumbai.
[6] Skodras, A. et al. 2001. The JPEG 2000 Still Image Compression Standard. IEEE Signal Processing Magazine, September.
[7] Florian, B. 2001. Wavelets in Real-Time Digital Audio Processing: Analysis and Sample Implementations. Master's thesis, Department of Computer Science, University of Mannheim.
[8] Cruz, Y. L. 2006. A Fast and Efficient Hybrid Fractal-Wavelet Image Coder. IEEE Transactions on Image Processing, Vol. 15, No. 1.
[9] Unser, M. and Blu, T. 2003. Mathematical Properties of the JPEG 2000 Wavelet Filters. IEEE Transactions on Image Processing, Vol. 12, No. 9.
[10] De Poli, G., Piccialli, A. and Roads, C. 1991. Representations of Musical Signals. MIT Press.
[11] Politou, E. et al. 2004. JPEG 2000 and Dissemination of Cultural Heritage over the Internet. IEEE Transactions on Image Processing, Vol. 13, No. 3, March.
[12] Sonal, D. K. 2007. A Study of Various Image Compression Techniques. COIT, RIMT-IET.
[13] Acharya, T. and Tsai, P.-S. 2005. JPEG2000 Standard for Image Compression: Concepts, Algorithms and VLSI Architectures. Wiley.
