Revision as of 15:54, 22 September 2009 by Rscheidt

Lossy versus Lossless Images: What is the difference?

As "analog" 35 mm cameras (and the film they use!) become more and more obsolete, digital cameras, along with the storage and transmission of digital images, are rapidly becoming the de facto standard for today's photography needs.

The resolution of a camera - e.g. 6 MP (Megapixel) or 10 MP - determines the number of pixels the camera uses to represent the "continuous" signal (e.g. a mountain, or your smiling significant other) that your digital camera is sampling.

Thus the digital camera samples the continuous signal with period $ T $ (shutter speed), remaining "on" for a length $ \tau $ (related to aperture, i.e. how much light is let in):

$ x_s(t) = s_{\tau}(t)x(t) $ (Note: an image is a two-dimensional signal)

A digital camera also quantizes the sampled values, because an infinite amount of storage space (i.e. bits) is not available to represent every pixel. A typical digital camera will allocate 24 bits per pixel at the highest quality setting, thus allowing $ 2^{24} = 16,777,216 $ possible color representations.
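A quick sketch of the quantization arithmetic above: the number of distinct colors a pixel can represent is $2^b$ for a bit depth of $b$. The bit depths below are illustrative values; 24 bits corresponds to the "highest quality setting" mentioned in the text (8 bits for each of the red, green, and blue channels).

```python
# Number of representable colors per pixel at a few common bit depths.
# 24 bits/pixel = 3 channels (R, G, B) x 8 bits each.
for bits in (8, 16, 24):
    colors = 2 ** bits
    print(f"{bits:2d} bits/pixel -> {colors:,} colors")

# 24 bits/pixel gives 2^24 = 16,777,216 colors, as in the text.
```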

We all have heard of the various image formats in use - for example, JPEG, GIF, TIFF, RAW, PNG, BMP. Of these, TIFF (usually), RAW, PNG, and BMP are referred to as lossless. This means that with these formats' compression schemes, the original signal can be reconstructed exactly, bit by bit. Lossless image compression is important for applications such as medical imaging, where the resolution of the original image must be maintained through compression and decompression. As an example, lossless compression is important to maintain high contrast and finer details in an MRI scan of brain tissue, so that an accurate diagnosis can be made!

The size of a raw uncompressed digital image that is 10 MP in resolution and allocated 24 bits per pixel will be: $ 10 \times 10^6 \mbox{ pixels} \times 24 \mbox{ bits/pixel} \times \frac{1 \mbox{ MB}}{8 \times 10^6 \mbox{ bits}} = 30 \mbox{ MB} $. Typical lossless compression algorithms are able to achieve compression ratios of only about 2:1 (see [1]). These algorithms seek to reduce redundancies in the data by representing repeated data with less data (yet the representation is exact and no round-off is done). As an example, suppose a row of a black-and-white image is as follows, where W represents a white pixel and B represents a black pixel:
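The file-size calculation above can be checked with a few lines of arithmetic. The sensor resolution and bit depth below are the example values from the text, not properties of any particular camera.

```python
# Back-of-the-envelope size of a raw, uncompressed image.
megapixels = 10        # 10 MP sensor (example from the text)
bits_per_pixel = 24    # highest-quality setting: 24 bits/pixel

pixels = megapixels * 10**6
size_mb = pixels * bits_per_pixel / (8 * 10**6)  # 8 * 10^6 bits per MB
print(f"{size_mb} MB uncompressed")  # 30.0 MB

# A typical 2:1 lossless compression would bring this to ~15 MB.
print(f"{size_mb / 2} MB after 2:1 lossless compression")
```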

BBBBWWBBBBBWBBBWWWWWWWWW

A run-length encoding scheme (see [2]) might represent this as:

4B2W5B1W3B9W

This encoded representation would be interpreted as "4 B's, 2 W's, 5 B's, 1 W, 3 B's and 9 W's" -- this represents the original signal exactly (i.e. it is lossless) and in 12 characters instead of the original 24 (it should be noted that this will likely be implemented in binary, but the concept does not change).
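The run-length scheme described above can be sketched in a few lines. This is an illustrative character-based implementation, not any particular standard's encoding; as the text notes, a real codec would work in binary.

```python
from itertools import groupby

def rle_encode(row: str) -> str:
    # Replace each run of identical pixels with its length and symbol.
    return "".join(f"{len(list(run))}{ch}" for ch, run in groupby(row))

def rle_decode(encoded: str) -> str:
    # Invert the encoding: accumulate digits as a run length, then
    # emit that many copies of the next symbol.
    out, count = [], ""
    for ch in encoded:
        if ch.isdigit():
            count += ch
        else:
            out.append(ch * int(count))
            count = ""
    return "".join(out)

row = "BBBBWWBBBBBWBBBWWWWWWWWW"
encoded = rle_encode(row)
print(encoded)                      # 4B2W5B1W3B9W
print(rle_decode(encoded) == row)   # True -- the round trip is lossless
```

Decoding recovers the original row exactly, which is what makes the scheme lossless.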

Lossy compression algorithms, like JPEG, are used when a reduction in image quality can be tolerated. In exchange, the benefits are (typically) reduced file storage (more pictures can be stored on your digital camera's memory card) and quicker data transmission (quicker web page loading). Lossy compression algorithms can achieve compression ratios of 10:1 with little perceptible difference in quality, and up to 50:1 with noticeable degradation in picture quality (see [3]). These algorithms typically "exploit" the limitations of the human visual system and its inability to pick up minor changes in hue (we are more sensitive to changes in brightness than hue). Thus the spectrum of colors can be reduced.

JPEGs are notoriously bad at compressing image files that have large sections of one color surrounded by another color - e.g. black and white text, cartoons, solid squares, etc. This is because JPEG removes the high-frequency components present in sharp discontinuities in color (e.g. from black to white) and replaces the rapid change with a more gradual one. In other words, in JPEGs, sharp discontinuities in color are made more gradual, so much so that the outline of a solid black square overlaid on a white background will now look blurry, with "gray" pixels at the periphery of the square. This type of compression artifact can be seen here: [4], in both the text and picture.

An example to illustrate the difference between a lossy and lossless compression algorithm is the following:

Suppose we have the value 28.57777

The representation of this "raw" value requires 8 characters, if you include the decimal point.

A lossless algorithm might represent this number as 28.5[4]7, where the [4] represents four 7's in a row. Treating [4] as a single symbol, this representation requires 6 characters.

On the other hand, a lossy algorithm might represent this value as 28.6, requiring only 4 characters. However, this new representation is no longer equal to the original signal! Round-off has occurred, and thus this compression has incurred a loss.
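The contrast between the two approaches can be shown directly. The lossless string below uses the [4]-run notation from the example, while the lossy version simply rounds to one decimal place:

```python
value = "28.57777"

# Lossless: compress the run of 7's -- fully reversible.
lossless = "28.5[4]7"            # [4] means "four 7's in a row"
restored = lossless.replace("[4]7", "7" * 4)
print(restored == value)          # True -- original recovered exactly

# Lossy: round off -- smaller still, but irreversible.
lossy = f"{float(value):.1f}"
print(lossy)                      # 28.6
print(lossy == value)             # False -- information was discarded
```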

- rscheidt
