=Paper=
{{Paper
|id=Vol-3668/paper11
|storemode=property
|title=Redistribution of the Compressed Data Between Modified DEFLATE-Blocks in the Image Compression Process Without Lossless
|pdfUrl=https://ceur-ws.org/Vol-3668/paper11.pdf
|volume=Vol-3668
|authors=Andrii Bomba,Alexander Shportko,Veronika Postolatii
|dblpUrl=https://dblp.org/rec/conf/colins/BombaSP24
}}
==Redistribution of the Compressed Data Between Modified DEFLATE-Blocks in the Image Compression Process Without Lossless==
Andrii Bomba1, Alexander Shportko2 and Veronika Postolatii3
1 National University of Water and Environmental Engineering, 11, Soborna Str, 33028, Rivne, Ukraine
2 Academician Stepan Demianchuk International University of Economics and Humanities, 4, Acad. S. Demianchuk Str,
33000, Rivne, Ukraine
3 National University “Lviv Polytechnic”, 12, St. Bandera Str, 79000, Lviv, Ukraine
Abstract
The expediency of each stage of lossless image compression is substantiated. The advantages
and disadvantages of the DEFLATE data compression format are described. The feasibility of
using arithmetic coding instead of Huffman compression in modifications of this format is
argued. The principles of redistributing compressed data between DEFLATE blocks in order
to increase their homogeneity and thereby reduce data compression ratios are presented and
substantiated. A software implementation of this redistribution of compressed data between
DEFLATE blocks in the C++ language is given. On the commonly known ACT test set, applying
the proposed redistribution of coded data between modified DEFLATE blocks in the process of
lossless progressive hierarchical compression reduces the total size of the compressed images
by 3 KB and speeds up decoding by an average of 2.14%, although it slows down encoding by 6.48%.
Keywords
Lossless image compression, DEFLATE compressed data format, LZ77 dictionary compression
algorithm
1. Introduction
Today, files transmitted by communication channels most often contain data of one of four main
types: texts, images, video or sound. While 4 KB is enough to store one page of unformatted text,
a photo-realistic raster image of 10×15 cm without compression can take several MB, a song
recording tens of MB, and a movie recording several GB. Fortunately, most of these files
have a high level of redundancy. It is the reduction of redundancy that allows you to compress
files and thereby increase the speed of information exchange over the network and reduce the
amount of disk space used.
Most of the well-known algorithms that allow you to compress images dozens of times (JPEG
[1][2], fractal and wavelet transforms [3]) lead to slight loss of image quality. Such losses are
invisible for photorealistic high-resolution images, but affect the quality of low-resolution
images with fragments of business graphics or discrete-tone transitions. In addition, there are
classes of images for which distortions are unacceptable (for example, X-rays). Lossless
image compression algorithms [4], although they compress images much less, do not degrade
their quality. Currently, lossy image compression algorithms are developing
most dynamically, although lossless image compression algorithms are also gradually
increasing their compression rates. Therefore, in the era of information civilization, when most
information is stored electronically, the problem of lossless image compression by reducing
redundancy remains relevant now and will be relevant in the future.
COLINS-2024: 8th International Conference on Computational Linguistics and Intelligent Systems, April 12–13, 2024,
Lviv, Ukraine
abomba@ukr.net (A. Bomba); ITShportko@gmail.com (A. Shportko); VeronikaShportko@gmail.com (V. Postolatii)
0000-0001-5528-4192 (A. Bomba); 0000-0002-4013-3057 (A. Shportko); 0000-0002-9460-0781 (V. Postolatii)
© 2024 Copyright for this paper by its authors.
Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
2. Related works. Stages of lossless image compression
As it is known, any data compression is possible precisely due to the reduction or elimination of
redundancies [5]. Three main types of redundancies are distinguished in images [6]: visual
(consisting in the presence of information that is not perceived by the human visual system),
inter-element or spatial (manifested in the correlation of brightness of adjacent pixels or
components of the color model) and code (revealed when using codes of the same length for
items with different probabilities). The more redundancies of each type are processed
in a graphic format, the more effective the compression is. In the process of lossless
compression, no information is lost, so the first type of redundancy is not reduced.
To reduce redundancies of the second and third types, archivers and graphic formats most
often compress images losslessly in at most four stages:
1. At the first stage, context-dependent coding reduces redundancies between the same
fragments or fragments with the same structure (reduces inter-element redundancy, can be
also used before the fourth stage). For example, the popular context-dependent algorithm
LZ77 [7] replaces, in the output stream, a sequence of consecutive buffer literals with a
reference to a similar previously encoded sequence of dictionary literals, written as a pair
of numbers <length, offset>. If the dictionary contains no similar sequence longer than two
literals, the first literal of the buffer is transferred to the output stream unchanged.
After that, the encoded literals are moved from the beginning of the buffer to the end of
the dictionary, and encoding continues in the same way until the literals of the input
stream are exhausted. An example of the step-by-step operation of the LZ77 algorithm on the
stream 3, 4, 6, 3, 4, 6, 3, 2, 6, 3, 4, 4 from [8] is given in Table 1. In encoded form this
stream is written as 3, 4, 6, <4, 3>, 2, <3, 6>, 4;
Table 1
Step-by-step compression of the stream 3, 4, 6, 3, 4, 6, 3, 2, 6, 3, 4, 4 by the LZ77 algorithm, from [8]

№ step | Dictionary (encoded data) | Buffer (input stream) | Matching sequence | Output stream
1 | - | 3, 4, 6, 3, 4, 6, 3, 2, 6, 3, 4, 4 | - | 3
2 | 3 | 4, 6, 3, 4, 6, 3, 2, 6, 3, 4, 4 | - | 4
3 | 3, 4 | 6, 3, 4, 6, 3, 2, 6, 3, 4, 4 | - | 6
4 | 3, 4, 6 | 3, 4, 6, 3, 2, 6, 3, 4, 4 | 3, 4, 6, 3 | <4, 3>
5 | 3, 4, 6, 3, 4, 6, 3 | 2, 6, 3, 4, 4 | - | 2
6 | 3, 4, 6, 3, 4, 6, 3, 2 | 6, 3, 4, 4 | 6, 3, 4 | <3, 6>
7 | 3, 4, 6, 3, 4, 6, 3, 2, 6, 3, 4 | 4 | - | 4
8 | 3, 4, 6, 3, 4, 6, 3, 2, 6, 3, 4, 4 | - | - | -
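The steps in Table 1 can be reproduced with a minimal sketch of this LZ77 variant (the function and type names here are illustrative, not from the paper's implementation; a match is emitted as a <length, offset> pair only when it is longer than two literals, and a match may overlap into the buffer, as in step 4):

```cpp
#include <cassert>
#include <cstddef>
#include <sstream>
#include <string>
#include <vector>

// One LZ77 output token: either a plain literal or a <length, offset> pair.
struct Token { bool isPair; int literal; int length; int offset; };

// Scan the dictionary (all already-encoded literals) for the longest match
// with the buffer; emit a pair only if the match is longer than two literals,
// otherwise emit the first buffer literal unchanged.
std::vector<Token> lz77Encode(const std::vector<int>& data) {
    std::vector<Token> out;
    size_t pos = 0;
    while (pos < data.size()) {
        size_t bestLen = 0, bestStart = 0;
        for (size_t start = 0; start < pos; ++start) {
            size_t len = 0;          // overlapping matches are allowed
            while (pos + len < data.size() && data[start + len] == data[pos + len])
                ++len;
            if (len > bestLen) { bestLen = len; bestStart = start; }
        }
        if (bestLen > 2) {           // replace the sequence with <length, offset>
            out.push_back({true, 0, (int)bestLen, (int)(pos - bestStart)});
            pos += bestLen;
        } else {                     // no long enough match: output a literal
            out.push_back({false, data[pos], 0, 0});
            ++pos;
        }
    }
    return out;
}

std::string tokensToString(const std::vector<Token>& tokens) {
    std::ostringstream os;
    for (size_t i = 0; i < tokens.size(); ++i) {
        if (i) os << ", ";
        if (tokens[i].isPair)
            os << '<' << tokens[i].length << ", " << tokens[i].offset << '>';
        else
            os << tokens[i].literal;
    }
    return os.str();
}
```

For the stream of Table 1 this produces 3, 4, 6, <4, 3>, 2, <3, 6>, 4, matching the encoded form given in the text.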
2. At the second stage, the transition to an alternative color model is performed. For
example, in a popular archiver WinRAR [9] before lossless image compression, the transition
from the RGB color model to the R-G, G, B-G color model is performed;
3. On the third, the brightness of the pixel components is transformed using predictors
[10], which during image traversal predict the brightness value of each component of the
next pixel (for example, for the most common 24-bit images, these are the brightness of the
red, green, and blue components, written as integers in separate bytes), using the brightness
values of the same components of previously processed adjacent pixels. With this approach,
the deviations δij of the brightness value brightnessij of the next pixel component from
the value predictij predicted by the selected predictor are calculated and then coded, i.e.

δij = brightnessij − predictij (1)

(i and j run respectively through all the rows and columns of the pixel components of the
image). Adjacent image pixels most commonly have similar colors, so the brightness values
of their corresponding components are also close; the predicted value therefore often
coincides with the actual brightness, is most often close to it, and only rarely differs
from it significantly (Figure 1 from [10]). That is, most of the values δij are close to
zero. The use of predictors is similar to delta coding [2];
Figure 1: Distribution of the frequencies of the values of the green components of the image
Lena.bmp: a) before applying a predictor (H = 7.59 bpb); b) after applying the Left-predictor
(H = 5.34 bpb), from [10]
4. At the fourth stage, context-independent coding forms element codes with lengths that
depend on their probabilities (it processes code redundancy). Identical fragments or
fragments of the same structure targeted by context-dependent coding rarely occur in
photorealistic images, so the only universal stage of lossless image compression is
context-independent coding itself. It can even be applied directly to the brightness codes
of individual pixels instead of context-dependent coding, if this further reduces the
compression ratio (the ratio of the sizes of the compressed and uncompressed image files,
expressed in bpb, hereafter CR). According to our calculations, applying a
context-independent algorithm reduces the CR of images by 33% on average. Let us therefore
consider the ideas of this coding in more detail.
The main principle of context-independent coding is formulated as follows: the length of the
code of an arbitrary element with a higher probability should not exceed the length of the
code of any element with a lower probability. This principle is based on the fundamental
position of information theory, according to which, to minimize the length of a sequence's
code, an element si with probability of occurrence p(si) should be coded with −log2 p(si)
bits [6]. Then the average code length of a block element after applying any
context-independent algorithm, according to Shannon's formula [11], cannot be less than
the entropy of the source

H = −Σi p(si) log2 p(si). (1)
In practice today, two alternative context-independent methods are most often used:
Huffman coding and arithmetic coding.
Huffman coding (HUFF) assigns each element, depending on its probability (frequency of
occurrence), a fixed prefix code [12][13][14][15]. This coding is most often performed in
the following sequence: the probabilities (frequencies) of the individual elements are
calculated; the elements are arranged in descending order of probability; the two elements
with the lowest probabilities (most often the last ones in the created list) are iteratively
combined into one element, appending code 0 to the first and 1 to the second; the
probabilities of the selected elements are summed to obtain the probability of the formed
element, which is inserted into the sorted list of probabilities; finally, the HUFF codes
are formed by writing the accumulated bits in reverse order, from the root down to each element.
The average length of the HUFF code coincides with the entropy of the source only when the
lengths of the optimal codes −log2 p(si) of all elements si are integers. In addition, the
length of the HUFF code of even the most probable element cannot be less than one bit.
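The iterative combining described above can be sketched by computing only the HUFF code lengths (the function name is illustrative; ties are broken arbitrarily, which may change individual lengths but not the total coded size):

```cpp
#include <cassert>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

// HUFF code lengths obtained by repeatedly combining the two least probable
// elements, as described in the text. Each node carries its total weight and
// the set of symbols under it; every merge adds one bit to those symbols.
std::vector<int> huffmanCodeLengths(const std::vector<int>& freq) {
    typedef std::pair<long long, std::vector<int>> Node; // (weight, symbols)
    std::priority_queue<Node, std::vector<Node>, std::greater<Node>> pq;
    std::vector<int> len(freq.size(), 0);
    for (size_t i = 0; i < freq.size(); ++i)
        if (freq[i] > 0) pq.push(Node(freq[i], std::vector<int>(1, (int)i)));
    if (pq.size() == 1) { len[pq.top().second[0]] = 1; return len; }
    while (pq.size() > 1) {          // combine the two smallest weights
        Node a = pq.top(); pq.pop();
        Node b = pq.top(); pq.pop();
        for (int s : a.second) ++len[s];  // one more bit for each symbol below
        for (int s : b.second) ++len[s];
        a.second.insert(a.second.end(), b.second.begin(), b.second.end());
        pq.push(Node(a.first + b.first, a.second));
    }
    return len;
}
```

With frequencies {45, 13, 12, 16, 9, 5} this yields code lengths 1, 3, 3, 3, 4, 4; note that all lengths are integers, which is exactly why HUFF can fall short of the entropy bound when the optimal −log2 p(si) are not.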
Arithmetic compression (ARIC), in contrast to HUFF coding, matches each successive
element with an interval with a length proportional to its probability (frequency) [16]
[17][18][19]. At the same time, a subinterval corresponding to the first element of the flow is
selected from the initial interval (most often [0; 1)) and further considered. This subinterval is
again divided in proportion to the frequencies of individual elements, and the subinterval
corresponding to the second element of the flow is selected from it (see, for example, Figure 2
from [20], on which the subintervals are ordered by the increasing values of the elements). The
selection of subintervals continues in the same way until the end of the flow elements. Thus, the
final length of the selected subinterval is equal to the product of the probabilities of all
elements, and its beginning depends on the sequence of these elements in the stream. The result
of ARIC is any number from the last subinterval received. Taking into account the finiteness of
the numbers, to record the resulting number, the first significant digits of the intervals are
monitored each time, and if they match, these digits are recorded in the output stream and
excluded from further consideration. Decoding of the next ARIC element is performed by
determining the corresponding next subinterval to which the read number belongs.
Figure 2: Arithmetic coding of the first three elements of the sequence 36, 38, 35, 35, 40, 36, 40, 38, 45, 38,
from [20]
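The interval narrowing of Figure 2 can be reproduced with a small sketch (floating-point intervals for illustration only; a practical coder such as the range coder discussed below works with integer ranges and renormalization; names are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <map>
#include <utility>
#include <vector>

// Narrow the initial interval [0; 1) for a prefix of the stream, with
// subintervals ordered by increasing element values as in Figure 2.
// Assumes every prefix element occurs in the stream.
std::pair<double, double> aricInterval(const std::vector<int>& stream,
                                       const std::vector<int>& prefix) {
    std::map<int, double> freq;               // element -> probability
    for (int v : stream) freq[v] += 1.0 / stream.size();
    double lo = 0.0, hi = 1.0;
    for (int v : prefix) {
        double width = hi - lo, cum = 0.0;    // probability mass before v
        for (std::map<int, double>::iterator it = freq.begin();
             it->first != v; ++it) cum += it->second;
        lo += width * cum;                    // select the subinterval of v
        hi = lo + width * freq[v];
    }
    return std::make_pair(lo, hi);
}
```

For the first three elements 36, 38, 35 of the sequence in Figure 2 the interval narrows to [0.28; 0.292), whose length 0.012 is the product of the probabilities 0.2 · 0.3 · 0.2.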
From the description of the ARIC principles, it follows that in order to perform such
arithmetic compression, the coder and, even more so, the decoder, must know the probabilities
(frequencies) of individual elements. Today, two strategies for forming these frequencies are
widely used: static and adaptive. In the case of a static strategy, the frequencies of individual
elements are transmitted to the decoder in compressed data explicitly. When using an adaptive
strategy, the intervals of the elements are formed synchronously by the encoder and decoder in
the process of data processing. The adaptive strategy provides, as a rule, better CR (since it does
not need to store the frequencies of individual elements in the compressed data and takes into
account the unevenness of the distribution of the current sequence of elements, and not the
flow in general), but it significantly slows down the work of the decompressor, because it
requires recalculation of the probabilities of the elements in the decoding process. Graphics file
formats should, first of all, provide fast decoding, so they often use a static strategy for forming
element intervals.
To speed up the development of graphic formats, data compression formats are created,
improved and used, which combine proven context-dependent and context-independent
algorithms. One of the successful examples of such a combination is the DEFLATE dictionary
compression format [21], which processes the incoming stream with the context-dependent
LZ77 algorithm [7] and compresses the results of its work with dynamic Huffman codes [12].
This format is used in many popular archivers (for example, GZIP [22]) and graphic formats
(for example, PNG [23]) and does not require acquiring licenses for use in application
software. We use this compressed data format to develop a lossless progressive hierarchical
image compression format [8][10][20].
It is clear that implementations of the decoding algorithm for LZ77 codes must distinguish
between individual literals and <length, offset> groups. In the DEFLATE format, for this
purpose the replacement lengths and the individual literals of the LZ77 algorithm are
encoded together as numbers in the range [0; 285]. In each compressed block, numbers from
the range [0; 255] correspond to the codes of individual literals, 256 marks the end of the
block, and numbers from the range [257; 285] indicate the base values of lengths. Each base
length value is followed by a format-defined number of extra bits which, together with the
base value, uniquely determines the replacement length [21]. The offset is stored after the
corresponding replacement length in the same way, as a base value plus extra bits; the base
offset value lies within [0; 29]. In the DEFLATE format, the maximum encoded sequence
length is 258 and the maximum offset is 32768, and different dynamic HUFF codes are used to
encode the literals/lengths and the offsets.
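The base-value/extra-bits scheme for replacement lengths can be sketched directly from the length table of RFC 1951 (the function name is illustrative; the paper's own implementation uses a helper called LengthToCode for the same mapping):

```cpp
#include <cassert>

// DEFLATE length codes from RFC 1951: base lengths of codes 257..285 and the
// number of extra bits that follow each base value.
void lengthToCode(int length, int& code, int& extraBits, int& extraValue) {
    static const int base[29]  = {3, 4, 5, 6, 7, 8, 9, 10, 11, 13, 15, 17,
                                  19, 23, 27, 31, 35, 43, 51, 59, 67, 83, 99,
                                  115, 131, 163, 195, 227, 258};
    static const int extra[29] = {0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1,
                                  2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4,
                                  4, 5, 5, 5, 5, 0};
    int i = 28;
    while (base[i] > length) --i;   // last base value not exceeding the length
    code = 257 + i;
    extraBits = extra[i];
    extraValue = length - base[i];  // stored in the extra bits after the code
}
```

For example, length 100 is encoded as code 279 with 4 extra bits holding the value 1 (100 − 99), and the extreme lengths 3 and 258 map to codes 257 and 285 with no extra bits.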
The purpose of this article is to describe and justify the algorithm for redistribution of
compressed data between modified DEFLATE blocks to improve the compression performance
of this format.
3. Options for reducing the size of compressed data in the DEFLATE
format
Let us first explain why the context-independent algorithms of the fourth stage compress
better after the algorithms of the second and third stages have been applied. Consider the
dependence of the entropy on the probabilities for a source of two elements (Figure 3 and
[24]). Let the probability of the appearance of the first element be p1. Then the
probability of the second element is p2 = 1 − p1, and the entropy equals
H = −p1 log2 p1 − (1 − p1) log2(1 − p1). We see that the entropy is maximal when
p1 = p2 = 0.5. The more the probabilities of the elements differ from each other, that is,
the greater the unevenness of the distribution, the lower the entropy [24]. For M different
elements the entropy is likewise maximal when the probabilities of the elements are equal,
and is then given by Hartley's formula [24]:

H = log2 M. (2)
Figure 3: Dependence of the entropy of a source of two elements on the probability of the first element
appearing
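The dependence shown in Figure 3 is easy to verify numerically (a small sketch; the function name is illustrative):

```cpp
#include <cassert>
#include <cmath>

// Entropy of a two-element source as a function of the probability p1 of the
// first element: H(p1) = -p1*log2(p1) - (1-p1)*log2(1-p1), with 0*log2(0) = 0.
double entropy2(double p1) {
    double p2 = 1.0 - p1;
    double h = 0.0;
    if (p1 > 0) h -= p1 * std::log2(p1);
    if (p2 > 0) h -= p2 * std::log2(p2);
    return h;
}
```

At p1 = 0.5 the entropy reaches Hartley's bound log2 2 = 1 bit, and it falls toward zero as the distribution becomes more uneven, which is why the deltas of Figure 1b compress better.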
Difference color models of the second stage and predictors of the third stage of lossless
image compression do not compress the image themselves, but increase the unevenness of the
brightness distribution around zero (Figure 1b) and therefore reduce the entropy (1), i.e.
they increase the code redundancy by reducing the inter-element one.
Taking into account that arithmetic coding encodes more accurately elements whose
probabilities of occurrence are not equal to powers of two [5], we modified the DEFLATE
compressed data format by applying E. Shelvin's range coder, whose code can be found, for
example, in [25]. This encoder reads/writes bytes of data instead of bits and thus
significantly speeds up encoding/decoding. For each element it uses an interval with an
integer length proportional to the element's probability. To store the interval length of
each element, in the header of the modified DEFLATE block of compressed data we write,
instead of the length of the HUFF code, the number of bits needed to store the
corresponding interval length within the overall interval [0; 32768), and after the header
a binary record of this length without its first bit (which is always equal to one) [20].
The DEFLATE format uses at most 285 different elements to encode literals/replacement
lengths. If the next modified DEFLATE block contains M different elements then, taking (2)
into account, the number of additional bits for recording their interval lengths in the
block header will be M(⌊log2 M⌋ + 1) if these elements are equally likely. In general, the
header size of the modified DEFLATE block will not exceed 2635 bits.
The size of the compressed data itself in each block is estimated by the length of its
entropy code [20]:

LH = N·H = N log2 N − Σi Ni log2 Ni, (3)

where Ni is the frequency of the element si in the next block, and N = Σi Ni.
At first glance, it seems that combining all the compressed data into one modified DEFLATE
block should reduce CR, because then only one header of the compressed data block needs to be
stored instead of many. But it was shown in [20] that combining blocks of compressed data
does not reduce (and most often increases) the length of their entropy code. The greater
the difference between the relative frequencies of each element in the two input blocks,
the greater the additional increase in the length of the entropy code (Figure 4 from [20]).
This increase is zero only when the relative frequencies of each element in the two input
blocks coincide.
Figure 4: The dependence of the increase in the length of the entropy code of the combination of two input
sequences on the frequencies of the element si in these input sequences [20]
Taking (3) into account, the increase in the length of the entropy code due to the
combination of two adjacent blocks is calculated by the formula

dLH = (N′ + N″) log2(N′ + N″) − Σi (Ni′ + Ni″) log2(Ni′ + Ni″)
− (N′ log2 N′ − Σi Ni′ log2 Ni′) − (N″ log2 N″ − Σi Ni″ log2 Ni″), (4)

where ′ denotes the corresponding characteristic of the first block and ″ that of the second.
Therefore, to reduce the size of compressed data in the DEFLATE format, it is advisable to
implement the following options for redistributing compressed data between its blocks:
1. Combine two adjacent blocks of compressed data, if the increase in the length of the
entropy code from their combination (4) does not exceed the savings from storing the
header of the combined block instead of the two headers of these adjacent blocks;
2. Split blocks of compressed data into homogeneous blocks, if the decrease in the length
of the entropy code from such a split exceeds the increase in the size of their headers.
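The decision rule of option 1 can be sketched as follows (hypothetical names; frequencies are per-element counts as in (3) and (4), and both blocks are assumed to use the same alphabet size):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Entropy-code length (3) of a block given its element frequencies:
// LH = N*log2(N) - sum_i Ni*log2(Ni), with N the total frequency.
double entropyCodeLength(const std::vector<long long>& freq) {
    long long n = 0;
    double s = 0.0;
    for (long long f : freq) { n += f; if (f > 1) s += f * std::log2((double)f); }
    return n > 1 ? n * std::log2((double)n) - s : 0.0;
}

// Increase (4) of the entropy-code length when two adjacent blocks are merged.
// Merging saves one block header, so it pays off only while this increase
// stays below the header size.
double mergeGrowth(const std::vector<long long>& a,
                   const std::vector<long long>& b) {
    std::vector<long long> merged(a.size());
    for (size_t i = 0; i < a.size(); ++i) merged[i] = a[i] + b[i];
    return entropyCodeLength(merged) - entropyCodeLength(a) - entropyCodeLength(b);
}
```

Blocks with identical relative frequencies merge at no cost to the entropy code, while blocks over disjoint elements pay the full penalty, which is exactly the behavior plotted in Figure 4.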
4. Software implementation of redistribution of compressed data
between modified DEFLATE-blocks
Let us first provide the text of the function in the C++ language for calculating the predicted
length of the DEFLATE entropy code of the compressed data block (3) based on the frequencies
of its elements, taking into account the predicted length of its header:
UBYTE4 countBitCodeFreqLL(UBYTE4 *masFreq, UBYTE4 freqElementMas)
{ double s = 0; UBYTE2 countElement = 0;
  for (UBYTE2 i = 0; i < MaxLengthCodes; i++) // calculation of the subtrahend from (3)
    if (masFreq[i] > 0)
    { countElement++;                         // number of elements with non-zero frequency
      if (masFreq[i] > 1) s += masFreq[i] * log2(masFreq[i]); }
  return 70 + countElement * log2(countElement) +     // predicted header length
         freqElementMas * log2(freqElementMas) - s; } // length of the block entropy code
The function for calculating the predicted length of the combination of two blocks functions
similarly:
UBYTE4 countBitCodeSumisnFreqLL(UBYTE4 *masFreq1, UBYTE4 *masFreq2,
                                UBYTE4 freqElementMas)
{ double s = 0; UBYTE2 countElement = 0;
  for (UBYTE2 i = 0; i < MaxLengthCodes; i++)
    if (masFreq1[i] + masFreq2[i] > 0)
    { countElement++;
      if (masFreq1[i] + masFreq2[i] > 1)
        s += (masFreq1[i] + masFreq2[i]) * log2(masFreq1[i] + masFreq2[i]); }
  return 70 + countElement * log2(countElement) +
         freqElementMas * log2(freqElementMas) - s; }
Using these functions, the possible combination of two adjacent DEFLATE blocks (the first
redistribution option) is realized by postponing the output of the previous block of
compressed data. If the sum of the sizes of the previous and current blocks does not exceed
the maximum allowed and combining the blocks reduces the size of the compressed data, the
current block is joined to the previous one. Otherwise, the previous block is output and
the current one is postponed for further analysis:
if (countPrevElement && countPrevElement + countCurrentElement <= OutputBufferSize)
// combination of previous and current DEFLATE blocks is possible
{ // calculate the frequencies of literals / replacement lengths of the previous block
  memset(freqPrevBD, 0, MaxLengthCodes * sizeof(UBYTE4));
  for (j = 0; j < countPrevElement; j++)
    if (codePrevLL[j] < 256) freqPrevBD[codePrevLL[j]]++; // literal frequency
    else
    { LengthToCode(codePrevLL[j] - 256, code, extra, value);
      freqPrevBD[code]++; } // frequency of the base value of the replacement length
  // calculation of the size reduction from combining the previous and current blocks
  countBitEconomUnion = countBitCodeFreqLL(freqCurrentBD, countCurrentElement) +
    countBitCodeFreqLL(freqPrevBD, countPrevElement) -
    countBitCodeSumisnFreqLL(freqCurrentBD, freqPrevBD,
                             countCurrentElement + countPrevElement); }
// we combine the previous and current block, if it is appropriate and possible
if (countBitEconomUnion >=0 &&
countPrevElement+countCurrentElement <=OutputBufferSize)
{UBYTE2 *codeUnionLL=new UBYTE2[countPrevElement+countCurrentElement];
UBYTE2 *codeUnionD=new UBYTE2[countPrevElement+countCurrentElement];
memcpy(codeUnionLL, codePrevLL, countPrevElement * sizeof (UBYTE2));
memcpy(codeUnionLL+countPrevElement,codeCurrentLL,countCurrentElement*sizeof(UBYTE2));
memcpy(codeUnionD, codePrevD, countPrevElement * sizeof (UBYTE2));
memcpy(codeUnionD+countPrevElement,codeCurrentD,countCurrentElement*sizeof(UBYTE2));
UBYTE4 countUnionElement=countPrevElement+countCurrentElement;
// after combining, we destroy the previous and current blocks
if (countPrevElement)
{delete [] codePrevLL; codePrevLL=NULL;
delete [] codePrevD; codePrevD=NULL; }
delete [] codeCurrentLL; codeCurrentLL=NULL;
delete [] codeCurrentD; codeCurrentD=NULL;
countCurrentElement=0;
// postpone the combination to the previous block
codePrevLL=codeUnionLL; codePrevD=codeUnionD;
countPrevElement=countUnionElement;
if (!lastblock) return; } // if not the last block, then we expect the next block
// otherwise, we output the previous block and postpone the current block
To divide the input blocks of compressed data into homogeneous blocks (the second
redistribution option), we implemented the fragmentation algorithm described in [26] for
dividing an input block into homogeneous blocks with different statistical characteristics.
Having these homogeneous blocks, we iteratively choose and combine the two adjacent ones
that maximally reduce the length of the compressed data, as long as such a reduction is
expected. Each block remaining after the iterative combination is output as a separate
DEFLATE block. Here is a fragment of the program that iteratively combines homogeneous
blocks into the resulting DEFLATE blocks:
for (i = 0; i < countRBlock - 1; i++) // predicted sizes of pairwise combinations
  countBitCodeSRBlock[i] = countBitCodeSumisnFreqLL(freqLLRBlock[i], freqLLRBlock[i+1],
    startRBlock[i+2] - startRBlock[i] + 1);
while (countRBlock > 1)
{ economBit = countBitCodeRBlock[0] + countBitCodeRBlock[1] - countBitCodeSRBlock[0];
  indexMaxEconom = 0;
  for (i = 1; i < countRBlock - 1; i++)
    if (countBitCodeRBlock[i] + countBitCodeRBlock[i+1] - countBitCodeSRBlock[i] > economBit)
    { economBit = countBitCodeRBlock[i] + countBitCodeRBlock[i+1] - countBitCodeSRBlock[i];
      indexMaxEconom = i; } // index of the first of the contiguous blocks improving compression
  if (economBit < 0) break; // combining blocks no longer reduces the code length
  for (j = 0; j < MaxLengthCodes; j++) // combine the frequencies of the chosen adjacent blocks
    freqLLRBlock[indexMaxEconom][j] += freqLLRBlock[indexMaxEconom+1][j];
  // we shift the frequencies of the following blocks to the left
  memcpy(freqLLRBlock + indexMaxEconom + 1, freqLLRBlock + indexMaxEconom + 2,
         (countRBlock - indexMaxEconom - 2) * sizeof(freqLL));
  if (indexMaxEconom > 0) // recount the combination with the previous block
    countBitCodeSRBlock[indexMaxEconom-1] = countBitCodeSumisnFreqLL(
      freqLLRBlock[indexMaxEconom-1], freqLLRBlock[indexMaxEconom],
      startRBlock[indexMaxEconom+1] - startRBlock[indexMaxEconom-1] + 1);
  if (indexMaxEconom + 2 < countRBlock) // recount the combination with the next block
    countBitCodeSRBlock[indexMaxEconom] = countBitCodeSumisnFreqLL(
      freqLLRBlock[indexMaxEconom], freqLLRBlock[indexMaxEconom+1],
      startRBlock[indexMaxEconom+2] - startRBlock[indexMaxEconom] + 1);
  // shift the lengths of the following combinations to the left
  if (indexMaxEconom + 3