An Efficient Two-level Dictionary-based Technique for the Segmentation and Compression of Compound Images

Image data compression algorithms are essential for reducing storage space and, perhaps more importantly, for increasing transfer rates, in terms of space-time complexity. Considering that no single encoder gives good results across all image types and contents, this paper proposes an evolvable lossless statistical block-based technique for segmenting and compressing compound (mixed) documents that contain different content types, such as pictures, graphics, and/or texts. Driven by the number of detected colors, and in order to achieve better compression ratios, a new well-organized representation of the image is created that nonetheless retains the same image components. To reduce noise or other variations inside the scanned image, some preliminary operations are applied first. Thereafter, the proposed algorithm breaks the compound document image down into equal-size square blocks. Next, based on the number of colors detected in each block, these blocks are categorized into a set of six image objects, called classes, where each class contains a set of closely interrelated pixels that share common relevant attributes such as color gamut and count, color occurrence, grey level, and others. After that, a new arrangement of these coherent classes is formed using the Lookup Dictionary Table (LUD), which is the real essence of the proposed algorithm. To form distinguishable labeled regions sharing the same attributes, adjacent blocks with similar color features are consolidated into single coherent entities, called segments or regions. After each region is encoded with the most applicable off-the-shelf compression technique, the regions are eventually fused into a single data file, which is then subjected to another compression stage to ensure better compression ratios.
After the proposed algorithm was applied and tested on a database of 3151 24-bit RGB bitmap document images, the empirical results show that the overall algorithm is efficient and achieves superior storage space reduction when compared with other existing algorithms: it attains a 71.039% relative reduction in data storage space.


Introduction
RGB images, also referred to as component images, are the most common image model. Each image may be regarded as a "stack" of three equal-size arrays. Working at the level of the pixels that make up an image, every image is an MxNx3 array of color pixels. This means the image contains "M" pixels along the horizontal direction, called the image width, and "N" pixels along the vertical direction, called the image length. Hence, the total pixel count is "M" multiplied by "N", namely "MxN". Moreover, each pixel is associated with three integers that correspond to the three color components: Red, Green, and Blue. The number of bits required to represent each of these three integers defines the bit depth, also referred to as "pixel depth", "the number of bits per pixel", or "grey-scale resolution". (Kumar et al. 2019) (Gonzalez, Woods, and Eddins 2009) (Gonzalez and Woods 2017)
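As a concrete illustration, the MxNx3 layout described above can be sketched in NumPy; the dimensions and names here are illustrative, not taken from the paper.

```python
import numpy as np

# A small illustrative M x N x 3 RGB image: M pixels in one direction,
# N in the other, and three 8-bit color components per pixel.
M, N = 4, 3
image = np.zeros((M, N, 3), dtype=np.uint8)   # bit depth: 8 bits per component

image[0, 0] = (254, 19, 28)    # one pixel's Red, Green, Blue integers

total_pixels = M * N           # "M" multiplied by "N", namely "MxN"
raw_bytes = total_pixels * 3   # three bytes per pixel at 8-bit depth
```

At 8 bits per component, an uncompressed block of this form always occupies three bytes per pixel, which is the baseline the later compression ratios are measured against.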

Besides, existing techniques may never satisfy the ever-growing information demands of customers, or even some of their evident needs (El-Omari 2019) (El-Omari et al. 2012) (Hamada 2019). This is especially true for storing and transferring documents containing a huge number of images. To address this focal point, and to diminish the data storage requirements, in terms of storage space complexity, or to increase their transfer rates, in terms of time complexity, there is an essential need for compressing these documents with sophisticated algorithms (Taha et al. 2012) (El-Omari and Awajan 2009) (El-Omari 2008).
Most data compression techniques generally benefit from the patterns inside the image data to obtain another, equivalent, smaller representation. Consequently, random data are very difficult, if not impossible, to compress. However, conventional compression mechanisms are commonly tied to certain image types, measured in terms of space-time complexity, and using them with mixed documents imposes many distinctive challenges that have to be adequately addressed. Thus, so-called segmentation evolved to offer a conceptual way to break mixed documents down into distinct image objects, called segments or regions, where each of these parts contains a set of closely related pixels that share "common attributes" such as color gamut and count, color occurrence, grey level, and others (Sharma 2019). Rather than using one standard compression technique for the whole document, and in order to achieve a better compression ratio, each segment is extracted alone and then encoded independently (Taha et al. 2012). This research therefore tackles the problem of segmenting mixed digital documents into six parts. At the sender side, each part is compressed individually, apart from the others, using the most applicable compression technique, thereby ensuring better compression ratios and hence quicker data transfer from one machine to another. The recipient can then integrate these various image components to regenerate the original document. However, this arrangement places an emphasis on direct dialogue between the pair of actors, the sender and the recipient (El-Omari and Awajan 2009) (El-Omari 2008) (El-Omari et al. 2012). It is worth stating that this paper is a continuation of previous works in the area of document image segmentation and compression (El-Omari and Awajan 2009) (El-Omari 2008) (El-Omari et al. 2012) (Taha et al. 2012).
In order to explore the arguments set out above, this paper is divided into seven sections. After this introduction to the main theme of the paper, Section 2 surveys the literature for related work and reviews some fundamental concepts and terminology that form the theoretical background. Section 3 is where the real work begins; it presents the approach developed in this research and then walks through all the stages required to implement the proposed algorithm. The segment formulation and the mathematical model of the proposed solution are detailed in Section 4. While the conducted experiments and their detailed analysis are discussed in Section 5, Section 6 concludes the work of this research. Finally, the last section, Section 7, highlights an ample research scope and addresses a fairly broad range of possible research opportunities to be further investigated.

Related Work
Mixed document segmentation, as a well-known research area, aims to divide a document image into its components: pictures, graphics (drawings), texts, and backgrounds. Data compression, on the other hand, is the process of rearranging the original information with the sole intention of obtaining relatively fewer bits, which in turn leads to storage space reduction.
While the algorithms that carry out the data compression process are referred to as encoders, the ones that perform the inverse process to reconstruct the original images are referred to as decoders (Kumar et al. 2019); the compression process itself is referred to as encoding. Figure 2 is a schematic diagram that depicts the data compression and decompression processes for an image of "M" pixels in length and "N" pixels in width. Imagine an input data file, "R(M,N)", that is encoded to become "E(M,N)" and transferred through a network from a source computer to a destination one, where the file can be decoded back, i.e. decompressed or retrieved, to become "D(M,N)".
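The Figure 2 round trip can be sketched with a stand-in lossless codec; zlib is used here purely for illustration and is not the codec the paper employs.

```python
import zlib
import numpy as np

# R(M,N): a hypothetical 8x8 RGB image, as raw bytes.
R = np.random.default_rng(0).integers(0, 256, (8, 8, 3), dtype=np.uint8)

E = zlib.compress(R.tobytes())                                  # encoder: R -> E
D = np.frombuffer(zlib.decompress(E), dtype=np.uint8).reshape(8, 8, 3)  # decoder: E -> D

lossless = np.array_equal(R, D)   # True: the decoded image is bit-by-bit identical
```

Random pixel data, as noted above, compresses poorly; on structured document blocks the encoded file "E" would be far smaller than "R".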
It is a reality that millions of digital images are being generated every single hour. Moreover, most of these images have rich mixed contents (El-Omari 2019) (El-Omari and Alzaghal 2017). To face this reality, many models of segmentation and/or compression are currently available, each with its own specifications and essential requirements; hence, selecting a particular model is no longer an easy task (El-Omari 2019). Besides, the traditional algorithms may no longer be sufficient to keep up with new needs, so there is a vital need for new efficient techniques. To this aim, the goal of this paper is to obtain the maximum transmission rate at which the data can be transferred properly from the source point to the destination. However, this rate is closely related to the size of the file, which in turn depends on its content types. As such, there is certainly a broad group of techniques in today's growing field of compression, which makes choosing the most appropriate model difficult, especially since most of them provide an adequate style to implement. Depending on many relevant aspects, each model has its own specification; these techniques can therefore be classified into six overlapping categories:
• Redundancy-related categorization: This category is associated with the way the compression process is performed. It can be further grouped into three subgroups: encoding redundancy, inter-pixel redundancy, and psycho-visual redundancy. For more information on this category, refer to (Taha et al. 2012).
• Ready-made categorization: By considering the strategy of data compression/decompression, a further taxonomy is possible (Taha et al. 2012) (Gonzalez and Woods 2017):
-Using off-the-shelf packages that are available on the market for data compression/decompression.
-Online disk compression: building the data compression/decompression as a transparent (i.e. real-time) utility inside the operating system. With this strategy, every data file is directly compressed when it is saved and automatically decompressed when it is retrieved back (i.e. loaded).
-Hardware compression: to speed up operations, the data compression/decompression can be built internally as a special-purpose built-in chip with its corresponding software driver. As in the previous subcategory, every file is automatically compressed during the saving process, and vice versa.
• Adaptation-related categorization: Based on the adaptation strategy, image compression algorithms are generally categorized into two fields: static and dynamic. In the static case, the compression process is fixed regardless of the content types, so no attempt is made to capture further details about the problem of interest during encoding, hence the term "data-independent". In contrast, dynamic encoding changes dynamically depending on the extracted data content, hence the term "adaptive compression". (Taha et al. 2012) (Gonzalez and Woods 2017)
• Information-preserving categorization: From a classification point of view, "lossless" versus "lossy" is used in accordance with the quality requirements. "Lossless" guarantees that the decoded document and the original one are entirely identical, precisely bit-by-bit alike; therefore, the compression process is reversible. "Lossy", by contrast, does not guarantee that they are alike, since there may be some image degradation due to discarding some data forever; the process is irreversible, and the decoded data are an approximated copy of the original.
From another point of view, "lossless" achieves a lower compression ratio than "lossy". Roughly speaking, quality and compression rate run in opposite directions: lower data quality is associated with a higher compression ratio, and vice versa. The degree of quality loss is directly proportional to the compression level applied to the image. Measured in terms of space-time complexity, the reduction level achievable with lossless techniques is lower than that of "lossy" techniques; hence, "lossy" methods typically save more memory and run-time computation, often without any noticeable degradation of image quality. As defined for the "lossy" case, and at the expense of data storage reduction, some loss of information is reasonable and acceptable within an adequate margin of safety, such as a small variation of colors or the dropping of insignificant details and inessential characters whose loss will not be observed or make a big difference. "Lossless" compression, by contrast, is the only acceptable means of data reduction where exact recovery of an encoded image is vitally essential; medical images, confidential data, and legal and historical documents are the most dominant examples of this norm of compression (El-Omari 2008). In view of Figure 2, if the reconstructed image, "D(M,N)", and the original one, "R(M,N)", are exactly the same, then the data compression technique is "lossless"; otherwise, it is "lossy".
• Content-related categorization: Toward solving the problem of segmenting and compressing compound documents, this group is divided into three subcategories:
-Black-white algorithms: These algorithms, such as "Fax Group 3" and "Fax Group 4", formally emerged to convert images into black-white and then encode them with lossless compression algorithms. Even though these algorithms achieve more storage space reduction, the contrast and the color information are unfortunately lost; therefore, they are inappropriate for document types such as medical images, historical documents, or colored magazines, though they are well suited to some technical and business documents.
-Single-type content: These algorithms are designed only to encode documents that have one type of content. Taha et al. (2012), among others, proposed special-purpose techniques for compressing documents that contain only text.
-Compound images: Rather than uniformly encoding the entire image, as in conventional image compression algorithms, this style of algorithm encodes compound images that may contain more than one component, such as pictures and graphics besides texts (Kumar et al. 2019). It is based on building prior knowledge about the images, using this knowledge to divide them into their different content types, and finally encoding every type separately from the others (Kumar et al. 2019). Two subcategories fork from these algorithms: layered encoding and block-based encoding. Mixed Raster Content (MRC) is one of the most dominant examples of layered encoding. As illustrated in Figure 4, MRC divides the image into three content types: a foreground (FG) of 24-bit color, a binary mask of only one bit per pixel, and a background (BG) of 24-bit color. Each bit of the binary mask determines whether the pixel belongs to the foreground layer, i.e. has a value of 0, or to the background layer, i.e. has a value of one (Queiroz, Buckley, and Xu 1999) (El-Omari et al. 2017).
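A toy MRC-style split can be sketched as follows; the luminance threshold and the zero fill for unused layer pixels are assumptions made for illustration, not part of the MRC standard.

```python
import numpy as np

# A 2x2 toy image: dark pixels go to the foreground layer, bright pixels
# to the background layer, guided by a one-bit-per-pixel mask.
img = np.array([[[0, 0, 0], [255, 255, 255]],
                [[10, 10, 10], [250, 250, 250]]], dtype=np.uint8)

luma = img.mean(axis=2)
mask = (luma >= 128).astype(np.uint8)         # 0 -> foreground, 1 -> background

fg = np.where(mask[..., None] == 0, img, 0)   # 24-bit foreground layer
bg = np.where(mask[..., None] == 1, img, 0)   # 24-bit background layer

# Recombining the layers through the one-bit mask restores the image.
recombined = np.where(mask[..., None] == 0, fg, bg)
```

Each layer can then be compressed with a codec suited to its content, which is the essential idea of layered encoding.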
El-Omari and Awajan (2009) provide another noteworthy example. Besides the aforesaid categories, there is still much room for improving these existing algorithms or coming up with new effective algorithms and techniques, like the one described in this research.

The Proposed Framework
The philosophy behind the proposed technique is to store descriptors, or pointers, that refer to specific entries within a special-purpose dictionary, rather than storing the actual repeated figures (i.e. the color information) for every pixel. While the repetitive color data are stored only once, this internal dictionary is fabricated specifically for every block/region of the image and is referred to as the Lookup Dictionary Table (LUD). The LUD is organized as key-value pairs: the actual data items being looked up and the reference pointers that point to where the data are located. The LUD reference list therefore consists of all the reference pointers, and the dictionary is consulted whenever a reference pointer is encountered. Through the indexing operation, the value of every index pointer points to one and only one LUD color item. On the other side, while any reference pointer points to exactly one LUD color, any LUD color may be referred to by many reference pointers. Because the relevant information is declared and stored in the form of codes, the mapping between the values of the index pointers and the corresponding LUD colors is guaranteed: each cited reference has to exist in advance inside the LUD list and, in turn, there is no reason to include pointers that do not originally exist in the LUD entries. (Azad et al. 2010) (Wikipedia 2016)
As reflected in Figure 5 and Figure 6, the proposed technique works in a sequence of seven phases, which form the roadmap framework of the proposed technique.
Algorithm I: Color Counts Block-Based Segmentation
Description
This algorithm contains seven phases and six classes in its essence. Through this algorithm, the bitmap table of the original image is divided into six classes based on the number of colors detected inside each block. Then, through seven phases, the algorithm builds a new two-level compressed file.

Input
"R(M,N)" which represents any BMP image of size MxNx3, where "M", "N", and "3" correspond to the image length, width, and the three-RGB-component colors, respectively. Output "E(M,N)" which represents the compressed image file. Method

1. Do preliminary operations: perform the required preliminary processing in order to reduce noise and/or variations inside the scanned image.
2. Divide the image into equal-size square blocks.
3. For each block of the image:
4. Using Algorithm II, detailed in Figure 8, scan the block to build the color statistic table (CST).
5. Check the color frequencies of the previous table, "CST"; colors with low frequency may be considered noise and eliminated.
6. If the block class is of type "T1", insert a new entry in the table "T1" that contains the following:
(iii) The values of the three RGB components of the unique detected color of the block.
7. Else if the block class is of type "T2", i.e. text-based, insert a new entry in the table "T2" that contains the following data items:
(ii) A special-purpose dictionary for the two detected colors.
(iii) A one-bit-reference-pointer index to designate one of the two colors of the dictionary, using zero for the pixels having the first color and one for the second color. One byte can hold the information of eight pixels.
8. Else if the block class is of type "T3", insert a new entry in the table "T3" with the following data items:
(ii) A special-purpose 16-color dictionary (each color requires three entries), where the detected colors are arranged first and the remaining unoccupied entries are filled with null values up to 16 colors (i.e. 3*16=48 cells).
(iii) A four-bit-reference-pointer index to designate a specific color from the sixteen colors of the stored dictionary. Every two pixels require one byte to store their indexes. Any reference pointer refers to one of the already detected colors; no pointer refers to one of the unoccupied entries that were filled with null values to complete the number of colors to 16.
9. Else if the block class is of type "T4", insert a new entry in the table "T4" with the following data:
(ii) A special-purpose 128-color dictionary (each color needs three entries), where the detected colors are arranged first and the remaining unoccupied entries are filled with null values up to 128 colors.
(iii) A seven-bit-reference-pointer index to designate a specific color from the 128 colors of the stored dictionary. Every individual pixel requires seven bits to be stored. Any reference pointer refers to one of the previously detected colors; no pointer refers to one of the unoccupied entries that were filled with null values to complete the number of colors to 128.
10. Else if the block class is of type "T5", insert a new entry in the table "T5" that contains the following data items:
(ii) For each pixel of the block, store its red color component. Each pixel requires a single byte.
11. Else if the block class is of type "T6", insert a new entry in the table "T6" with the following data items:
(ii) The pixels' data that are detected in that block. Every pixel requires three bytes.
End If
End For-loop // no more blocks
12. Invoke Consolidation: in order to form higher-level regions, blocks of similar color features are consolidated together into a single coherent whole.
13. Invoke the first-level compression phase: each region is encoded by the most applicable off-the-shelf compression technique, along with its relevant dictionary.
14. Invoke Integration: integrate all six tables into one file containing one table.
15. Invoke the second-level compression phase: again, to achieve a better compression ratio, the file generated in the preceding step goes through another stage of compression.

Algorithm II: Image Color Statistic Table (CST)
Description
This principal algorithm is designed to build a statistic of the detected colors and their frequencies inside a given block. This statistic represents the color map, or dictionary of colors, of the block. Likewise, the algorithm can be carried out to build a statistic of the detected colors and their frequencies for the whole image.

Input
Either the whole MxNx3-size BMP image, i.e. "R(M,N)", or one of its blocks.

Output
A color statistic table "CST" of four columns: three correspond to the three basic RGB color components of each color, and the last corresponds to the frequency of that color. Every detected color occupies one entry.

Method

1. Initialization: construct an empty table "CST" of four columns.
2. Read the input block pixels from left to right and top to bottom.
3. For every pixel of the input block:
4. If the three basic RGB-color components already exist in "CST",
5. add 1 to the frequency that corresponds to that color.
6. Else
7. insert this new color in the table "CST" with a frequency equal to one.
8. End If
9. End For
10. Return the color statistic table "CST".
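The steps of Algorithm II can be sketched directly in code; the function name and the list-of-rows representation of the CST are illustrative choices.

```python
# Build the Color Statistic Table (CST) for one block: one row per
# detected color, plus that color's frequency in the last column.

def build_cst(pixels):
    """pixels: iterable of (R, G, B) tuples, read left-to-right,
    top-to-bottom; returns a list of [R, G, B, frequency] rows."""
    cst = []
    row_of = {}                          # color -> its row position in the CST
    for color in pixels:
        if color in row_of:
            cst[row_of[color]][3] += 1   # color already exists: add 1
        else:
            row_of[color] = len(cst)
            cst.append([*color, 1])      # new color, frequency equal to one
    # The CST is kept in descending order of the "Frequency" column.
    cst.sort(key=lambda row: row[3], reverse=True)
    return cst

block = [(0, 0, 0)] * 3 + [(255, 255, 255)]
cst = build_cst(block)
```

Note that the frequencies always sum to the number of pixels in the block, i.e. BL^2 for an equal-size square block.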

Phase I: Preliminary Processing Phase
As data efficiency is crucially important to improve before use, the original data representation may be subjected to a set of pre-processing steps performed to filter noise or variations inside the scanned images. Notably, the success of the proposed technique depends heavily upon the thoroughness of this phase (Kumar et al. 2019). Going forward, the quality-enhanced image is then divided into equal-size square blocks. A color map of each block, representing the detected colors and their frequencies, is generated using Algorithm II, already detailed in Figure 8. This map is referred to as the Color Statistic Table (CST) for these identified colors. Within this context, if an input block (I, J) is "BL x BL" in size and its pixels are distributed among "n" three-RGB-component colors, then Table 1 represents the output of this algorithm. Again, as outlined above, colors with low-frequency rates may be considered noise and excluded from this table.
Since every pixel of the block is counted exactly once, the total of the frequency column equals BL^2. It is essential to mention that this table is arranged in descending order according to the last column, "Frequency". At the beginning of the algorithm, an empty four-column table is created. As the image file is read, this table is altered whenever a new color is encountered: if the encountered color already exists in the table, its corresponding frequency is increased by one; otherwise, a new entry corresponding to this new color is inserted with a frequency equal to one. Table 2 states an example of this CST, where the data are given as decimal values. The block of this example is "32x32" pixels in size; its 1024 pixels are distributed among thirteen three-RGB-component colors.

Phase II: Segmentation and Classification Phase
After the phase of preliminary operations, each block is assigned a class type based on its CST. All blocks that have the same number of colors are given the same label or class. As reported in Table 3 and shown in the illustration of Figure 9, block types can be categorized into six classes.
For the scanned image, suppose that the numbers of blocks of classes "T1", "T2", "T3", "T4", "T5", and "T6" are N_T1, N_T2, N_T3, N_T4, N_T5, and N_T6, respectively. Then the total number of blocks, N, is defined as shown by Equation 1:

N = N_T1 + N_T2 + N_T3 + N_T4 + N_T5 + N_T6 (1)

On the other hand, the scanned image can be entirely decoded back in a reversible way, as shown by Equation 2. The six classes of Table 3 are defined as follows:

"T1" (1 color): The number of detected colors is one and only one. Generally, this class of blocks represents the background of a document image, which is a large expanse of a single color.
"T2" (2 colors): The number of detected colors is exactly two. This class usually represents text-based data.
"T3" (3-16 colors): The number of detected colors is less than 17 and more than two. This class generally represents the drawing parts of the documents: graphs, charts, and/or curves.
"T4" (17-128 colors): The number of detected colors is less than 129 and more than 16.
"T5" (129-256 colors): The number of detected colors is less than 257 and more than 128. These blocks are mainly the grey parts of the image.
"T6" (>256 colors): The blocks of this class generally represent the millions-of-colors pictures found in the images.
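The Table 3 thresholds amount to a small classifier over the number of distinct colors in a block's CST; the function name below is illustrative.

```python
# Assign a class label to a block from the number of distinct colors
# detected in it, following the Table 3 ranges.

def classify_block(num_colors):
    if num_colors == 1:
        return "T1"      # single-color background block
    if num_colors == 2:
        return "T2"      # usually text-based data
    if num_colors <= 16:
        return "T3"      # drawings: graphs, charts, curves
    if num_colors <= 128:
        return "T4"
    if num_colors <= 256:
        return "T5"      # mainly the grey parts of the image
    return "T6"          # true-color picture blocks

labels = [classify_block(n) for n in (1, 2, 16, 17, 128, 129, 256, 1000)]
```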

Phase III: Rearrangement Phase
This phase forms the newly generated data of each block. The output of this phase is a table in which the contents of each entry vary according to the assigned block class. This phase is detailed further in the next section (specifically, subsections 4.1 through 4.6).

Phase IV: Consolidation Phase
In order to form higher-level regions (i.e. sub-images), this phase combines adjacent equal-class blocks that have the same dictionary of colors into a larger arrangement of contiguous blocks. It is important to realize that blocks of the same class do not necessarily have the same colors (i.e. the same dictionary), even though they have similar numbers of colors (El-Omari et al. 2017) (Kumar et al. 2019). As shown in the illustration of Figure 10, the adjacent neighbors of a given block can be defined by either four-connectivity, in which two blocks share a common side, or eight-connectivity, in which two blocks share either a common side or a common corner.
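The two connectivity notions can be sketched as neighbor-offset tables over block addresses; the names and the grid bounds here are illustrative.

```python
# Four- and eight-connectivity neighbor offsets for a block at (i, j),
# as used when consolidating equal-class, equal-dictionary blocks.

FOUR_CONN = [(-1, 0), (1, 0), (0, -1), (0, 1)]                 # shared side
EIGHT_CONN = FOUR_CONN + [(-1, -1), (-1, 1), (1, -1), (1, 1)]  # side or corner

def neighbors(i, j, rows, cols, conn=FOUR_CONN):
    """Yield the in-bounds neighboring block addresses of block (i, j)."""
    for di, dj in conn:
        ni, nj = i + di, j + dj
        if 0 <= ni < rows and 0 <= nj < cols:
            yield (ni, nj)

four = sorted(neighbors(1, 1, 3, 3))
eight = list(neighbors(1, 1, 3, 3, EIGHT_CONN))
corner = list(neighbors(0, 0, 3, 3))
```

A region-growing or union-find pass over these neighbor sets then merges compatible blocks into the larger contiguous regions this phase produces.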

Phase V: First-level Compression Phase
The data compression process is carried out on two levels. While this phase includes the first level, the second one comes in the last phase, Phase VII. Here, every region (i.e. a sub-image of blocks with similar features) is compressed separately with the most appropriate off-the-shelf compression technique. It is worth mentioning that every region is compressed along with its corresponding dictionary.

Phase VI: Integration Phase
Although they are developed separately, all six tables are eventually fused together into one entity using the block addresses, I and J. This incorporated entity is formed as a single data file containing one table that interlinks the six closely related classes.

Phase VII: Second-level Compression Phase
This is the final phase; like the fifth phase outlined above, it is carried out with the intention of achieving a better data compression ratio. Thus, the data file generated in the preceding step goes through another level of compression.

Solution Formulation & Mathematical Model
Given its critical importance, this section details the six data classes abstracted in the preceding section, specifically subsection (3.3). Before proceeding, it is worth mentioning that the first four classes, "T1" through "T4", are built upon the idea of using special dictionaries and pointers for encoding data; each dictionary, called a Lookup Dictionary Table (LUD), is designed for the corresponding class type. When the computer at the receiver (i.e. decoder) side reads the encoded compressed file during the inverse decompression (i.e. decoding) process and encounters a pointer, it interprets that pointer by retrieving the corresponding color from its place in the dictionary index; hence, the original image part is reconstructed and retrieved up to the last bit.
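The dictionary-and-pointer idea behind classes "T1" through "T4" can be sketched as a pair of functions; the names are illustrative, not the paper's.

```python
# A minimal LUD sketch: each block stores its distinct colors once,
# plus one small reference pointer per pixel.

def encode_block(pixels):
    """pixels: list of (R, G, B) tuples; returns (lud, pointers)."""
    lud = []           # each distinct color is stored exactly once
    where = {}         # color -> its reference pointer (LUD index)
    pointers = []
    for color in pixels:
        if color not in where:
            where[color] = len(lud)
            lud.append(color)
        pointers.append(where[color])   # pointer into the LUD
    return lud, pointers

def decode_block(lud, pointers):
    # Every pointer maps to exactly one LUD color, so decoding is an
    # exact, reversible lookup that restores the block bit-for-bit.
    return [lud[p] for p in pointers]

block = [(255, 0, 0), (255, 0, 0), (0, 0, 255), (255, 0, 0)]
lud, ptrs = encode_block(block)
restored = decode_block(lud, ptrs)
```

The saving comes from the pointer width: 1, 4, or 7 bits per pixel for "T2", "T3", and "T4" respectively, instead of 24 bits of raw color.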
In order to evaluate the overall performance of the proposed technique, a mathematical measure, the "Saving Ratio Percentage (SRP)", is defined as shown by Equation 3:

SRP = (1 - E(M,N) / R(M,N)) * 100% (3)

where "R(M,N)" and "E(M,N)" are as already stated in Figure 2.
Being more specific, some references refer to the term "E(M,N)/R(M,N)" as the compression ratio (CR) or the relative data redundancy (Gonzalez et al. 2009) (Kumar et al. 2019). It is important to note that when no data compression is achieved, SRP equals zero. There is no doubt that this measure depends on the image content, which determines the distribution of the original table among the six classes. Moreover, the chosen block length has some impact on the SRP measure, as will be shown in the evaluation of the experimental results, namely Section 5.
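As a self-contained sketch, assuming the saving ratio is defined as (1 - E/R) * 100% with R and E the original and compressed sizes in bytes:

```python
# Saving Ratio Percentage: 0 when no compression is achieved, approaching
# 100 as the compressed file shrinks relative to the original.

def srp(original_size, compressed_size):
    return (1 - compressed_size / original_size) * 100.0

no_gain = srp(1000, 1000)   # no data compression achieved
t1_block = srp(1200, 6)     # a 1200-byte block stored in six bytes
```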

Class "T1" (One Color)
This class means that the entire block is a background containing one color. Since class "T1" blocks have only one color, the dictionary contains only three cells, one for every basic color component of the single RGB color.
Rather than saving the same information for every individual pixel that makes up the background, this approach stores the color data for the background color only once to refer to all pixels of that block. Figure 11 demonstrates how the data can be fabricated for this class by only using a dictionary of one three-RGB-color-component entry.  Figure 11. Encoding of a block using a class of type "T1" For this class, each block is represented by its address (I, J), and the three RGB components of its sole color.
Since the image has equal-size square blocks, one additional one-byte cell is needed to represent the block length, "BL". However, this byte is stored only once for this class of blocks to represent all blocks of the image. Moreover, since the blocks of this class have a single color, classified as background, there is no need to store more data about the pixels contained in the block. Although the special dictionary (i.e. LUD) is essential in this class, the reference pointers are not. Simply, only six bytes are required to store the whole block, regardless of its size. This solves one of the drawbacks of layered encoding mentioned in Section 2, which is related to storage space reduction. The SRP per block of this class is modeled mathematically by Equation 4:

SRP("T1") = (1 - (1 + 2 + 3) / (3 * BL^2)) * 100% = (1 - 6 / (3 * BL^2)) * 100% (4)

The following points analyze the elements of this equation:
• Number "1" in the numerator indicates that only one byte is required to store the "BL".
• Number "2" of the numerator means that two bytes are required to store the address of each block, one byte for the I-th address and one byte for the J-th address.
• Number "3" in the numerator means that there is a necessity for three bytes to store the three basic RGB components of the unique color.
• "BL" stands for the block length and is given in pixels.
• Since the image is divided into equal-size-square blocks, the size of each block is "BL²" pixels. Number "3" in the denominator indicates that there are three basic RGB-color components and, therefore, each pixel of "R(M,N)" occupies three bytes. Thus, the denominator stands for the size of the original block before the compression process.
• Based on the aforementioned points, the numerator indicates the size of the compressed block of this type and size.
The following example clarifies how this class is stored. Suppose there is a square block (I, J) = (31, 16) of size (20 × 20 = 400) pixels that has the following CST (decimal data): (R001, G001, B001, F001) = (254, 019, 028, 400). Since there is only one color in this block, it is identified as a class of type "T1". In view of that, this block requires no reference pointers, and its dictionary reduces to the single color entry. Table 4 illustrates the data schematic construction of this example, where only six bytes are required. Based on the above-mentioned discussion, the researcher can conclude that: rather than using 1200 (i.e. 20 × 20 × 3) bytes to store this block of (20×20) pixels in size, six bytes are enough.
To rephrase this outcome: 6 / (20 × 20 × 3) × 100% = 0.5%. This means that the proposed technique needs only 0.5% of the block size. If Equation 4 is recalculated using BL = 20, the same result will be achieved, which means that the proposed algorithm is an efficient alternative for this class of blocks.
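The arithmetic of this class can be verified with a short Python sketch; `t1_encoded_size` and `srp_t1` are hypothetical names introduced here for illustration only:

```python
def t1_encoded_size() -> int:
    # 1 byte for the block length "BL" (stored once per image)
    # + 2 bytes for the (I, J) block address
    # + 3 bytes for the single RGB color = 6 bytes in total.
    return 1 + 2 + 3

def srp_t1(bl: int) -> float:
    # Equation 4: SRP("T1") = (1 - 6 / (3 * BL^2)) * 100%
    return (1 - t1_encoded_size() / (3 * bl * bl)) * 100.0
```

For the worked example above (BL = 20), the six bytes amount to 6/1200 = 0.5% of the raw block, i.e. an SRP of 99.5%.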

Class "T2" (A Single Pair of Colors)
It is worth remarking that this class depends on storing the two detected colors of each block inside a dedicated two-entry dictionary constructed specifically for that block. Then, rather than storing the corresponding color for every pixel inside the block, reference pointer indexes are used instead. For this well-defined reason, a one-bit reference pointer is used as an indication to determine the corresponding LUD color.
Since class "T2" has two colors, the dictionary contains six cells, one for every basic RGB color component of the two colors. These blocks are represented by the address (I, J) of each block, the two-color dictionary of background and foreground colors, and only one bit for every pixel to indicate whether it is assigned to the background color (zero) or to the foreground color (one). Figure 12 shows the data structure representation of this class of blocks: the data part uses one-bit reference pointers, so every eight pixels need only one byte to store their references, and each reference pointer points to one of the two colors inside the dictionary.

Figure 12. Encoding of a block using a class of type "T2"

For the blocks of this class, "T2", the data compression is done by storing the reference pointers that point to the special dictionary. The SRP per block is mathematically expressed by Equation 5:

SRP("T2") = [1 − (2 + 2 × 3 + BL²/8) / (3 × BL²)] × 100%   (5)

This equation differs from Equation 4 in the following points:
• Address part: The first number "2" of the numerator indicates that two bytes are required to store the address of each block, one byte for the I-th address and one byte for the J-th address.
• Dictionary part (LUD): Since there are two identified colors and each of them has three basic RGB color components, the expression "2*3=6" of the numerator stands for the number of bytes required to store the LUD.
• Data part: Since there are only two colors in this class type, each bit can hold either zero or one to point to the background (BG) or to the foreground (FG), respectively. Thus, the number of pixels that can be indicated by a single byte is eight.
For the sake of simplicity, Equation 5 can be redrafted as expressed in Equation 6:

SRP("T2") = [1 − (8 + BL²/8) / (3 × BL²)] × 100%   (6)

As an example of this class type, assume that the square block (4, 8) of (30 × 30 = 900) pixels in size has the two-color CST that is viewed in Table 5, which has been attained as an output of Algorithm II. Based on the number of detected colors (i.e. a single pair), the type of the block involved in this example is "T2" and, in turn, the corresponding reference pointers for the first sixteen pixels are presented in Table 6. On the other side, Table 7 illustrates the data schematic construction of this example, where a total of 121 bytes is required for each block of this type and size. For the remaining 884 pixels, other than these sixteen pixels, the same pattern is used. With regard to the aforementioned discussion, the researcher concludes that: instead of using 2700 (i.e. 30 × 30 × 3) bytes to store this block of (30×30) pixels in size, only 121 bytes are enough. Otherwise speaking: 121 / (30 × 30 × 3) × 100% = 4.481%.
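Under the stated layout (two address bytes, a six-byte dictionary, and one bit per pixel), the byte count of a "T2" block can be sketched as follows; the function names are illustrative, and `math.ceil` covers block sizes that are not a multiple of eight pixels:

```python
import math

def t2_encoded_size(bl: int) -> int:
    # 2 address bytes + 2*3 dictionary bytes (two RGB colors)
    # + one bit per pixel, packed eight pixels per byte.
    return 2 + 2 * 3 + math.ceil(bl * bl / 8)

def srp_t2(bl: int) -> float:
    # Equation 6: SRP("T2") = (1 - (8 + BL^2/8) / (3 * BL^2)) * 100%
    return (1 - (8 + bl * bl / 8) / (3 * bl * bl)) * 100.0
```

For the worked example (BL = 30), this gives 2 + 6 + 113 = 121 bytes, i.e. about 4.481% of the original 2700 bytes.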

Class "T3" (3-16 Colors)
This class depends on storing the detected colors of each block inside a particular 16-color dictionary dedicated to that block. Then, rather than storing the corresponding color out of the sixteen for every pixel inside the block, reference pointer indexes are used instead. While these reference pointers are typically implemented through the LUD, each four-bit reference pointer is used as an indication to determine the corresponding LUD color.
Related to this special dictionary and as aforementioned in Algorithm II of Figure 8, this special 16-color dictionary is built so that the detected colors are arranged first and the remaining unoccupied entries are padded with null values up to 16 colors, where each color requires three null values. Clearly, each four-bit reference pointer should point to one of the previously detected colors, and no reference pointer should point to one of the null entries that are originally unoccupied.
In line with Figure 13, the data representation of this class, "T3", is similar to that of class "T2". However, the LUD of this class has 16 × 3 = 48 cells. Each block is represented by the pair (I, J), the 16-color dictionary, and a four-bit reference pointer for every pixel to designate a specific color from the identified sixteen colors of the dictionary. Hence, the value (0000)₂ points to the first color in the dictionary, the value (0001)₂ points to the second color, the value (0010)₂ points to the third color, and so on up to the value (1111)₂, corresponding to (15)₁₀, which points to the last color.
For the blocks of class "T3", the data compression process is implemented by storing the four-bit reference pointers that point to the special-purpose dictionary. Therefore, Equation 7 is proposed in this regard:

SRP("T3") = [1 − (2 + 3 × 16 + BL²/2) / (3 × BL²)] × 100% = [1 − (50 + BL²/2) / (3 × BL²)] × 100%   (7)

The following points clarify this equation:
• Dictionary part (LUD): Since there are sixteen colors and each of them has three basic RGB color components, "3 × 16 = 48" stands for the number of bytes requested to store the LUD.
• Data part: As there is a maximum of sixteen RGB colors and each of them requires four bits to be coded, the number of pixels that can be stored in a single byte is (8 / 4 = 2). Hence, the expression (BL² / 2) is used to determine the number of bytes required to store the four-bit reference pointers of each block. In the data part, every two pixels need only one byte to store their references, and each reference pointer refers to one and only one of the sixteen related colors (i.e. entries) of the dictionary.

Figure 13. Encoding of a block using a class of type "T3"

To explain the compression process of this class type in a simple way, the following example clarifies how this class is constructed. Suppose that the square block (I, J) = (1, 12) of (30 × 30 = 900) pixels in size has the nine-color CST that is viewed in Table 8. This CST has been achieved as an output of Algorithm II of Figure 8.
mas.ccsenet.org Modern Applied Science Vol. 14, No. 4; 2020

Based on the aforementioned discussion, the researcher concludes that: rather than consuming 2700 (i.e. 30 × 30 × 3) bytes to store this block of (30×30) pixels in size, only 500 bytes are enough. To rephrase this outcome: 500 / (30 × 30 × 3) × 100% = 18.519%. Again, this means that the proposed technique is capable of encoding this class of blocks by using only 18.519% of the block size. If Equation 7 is recalculated using BL = 30, the same result will be achieved, which proves that the outcome is consistent with the research findings. And so, this proposed algorithm is an efficient alternative for this class of blocks.
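The "T3" byte count can be verified with a sketch analogous to the previous classes; `t3_encoded_size` and `srp_t3` are illustrative names, and the integer division assumes an even number of pixels per block:

```python
def t3_encoded_size(bl: int) -> int:
    # 2 address bytes + 16*3 dictionary bytes (16 RGB colors)
    # + one 4-bit reference pointer per pixel (two pixels per byte;
    # assumes BL^2 is even).
    return 2 + 16 * 3 + (bl * bl) // 2

def srp_t3(bl: int) -> float:
    # Equation 7: SRP("T3") = (1 - (50 + BL^2/2) / (3 * BL^2)) * 100%
    return (1 - (50 + bl * bl / 2) / (3 * bl * bl)) * 100.0
```

For the worked example (BL = 30), this gives 2 + 48 + 450 = 500 bytes, i.e. 18.519% of the original 2700 bytes.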

Class "T4" (17-128 Colors)
Over again, this class is based on storing the detected colors of each block inside a dedicated 128-color dictionary constructed for each block of that type. Then, instead of storing the corresponding color out of the 128 for every pixel inside the block, reference pointer indexes are used. While these reference pointers are typically implemented through the LUD, each seven-bit reference pointer is used as an indication to determine the corresponding LUD color out of the 128.
Related to this special dictionary and as aforementioned in Algorithm II of Figure 8, this special-purpose 128-color dictionary is built so that the detected colors are arranged first and the remaining unoccupied entries are padded with null values up to 128 colors, where each color needs three null values. Every reference pointer points to one of the actually detected colors, and no pointer points to one of the null entries that are originally unoccupied. Figure 14 illustrates the data structure representation of this class of blocks. Accordingly, the dictionary concept of this class, "T4", is similar to that of "T2" and "T3". Conversely, the dictionary of this class has (128 × 3 = 384) cells (i.e. 384 bytes). Each block is represented by the pair (I, J), the 128-color dictionary, and a seven-bit reference pointer for each pixel to designate a specific color among the 128 colors of the dictionary. For instance, the value (000 0000)₂ points to the first color, the value (000 0001)₂ points to the second color, the value (000 0010)₂ points to the third color, and so on up to the last value (111 1111)₂, which is equivalent to (127)₁₀ and points to the last color.

Figure 14. Encoding of a block using a class of type "T4"

Over again, the data compression of this block class is constructed by utilizing reference pointers that point to a special LUD; every individual pixel requires only seven bits to store its reference to one of the 128 dictionary entries.
In this regard, the SRP measure is modeled mathematically by Equation 8:

SRP("T4") = [1 − (2 + 3 × 2⁷ + 7 × BL²/8) / (3 × BL²)] × 100% = [1 − (386 + 7 × BL²/8) / (3 × BL²)] × 100%   (8)

This equation is similar to Equation 4 except for the following differences:
• Dictionary part (LUD): Since there are (2⁷ = 128) colors, the expression "3 × 2⁷" of the numerator stands for the number of bytes required to store the RGB dictionary (i.e. LUD).
• Data part: Since there are (128) colors and each of them requires seven bits to be coded, the number of pixels is divided by (8/7), or equivalently multiplied by (7/8). So the expression "BL² / (8/7)" is used to determine the number of bytes essential to store the seven-bit reference pointers of each block of this class.
For further clarification, a complete example of this type is introduced in Appendix A at the end of this paper. Besides explaining the compression process in a simple way, this example gives experimental proof to support the validity of Equation 8.
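The "T4" arithmetic can likewise be checked in a few lines; `t4_encoded_size` and `srp_t4` are illustrative names only:

```python
def t4_encoded_size(bl: int) -> int:
    # 2 address bytes + 128*3 dictionary bytes
    # + one 7-bit reference pointer per pixel, i.e. 7*BL^2/8 bytes
    # (assumes BL^2 is divisible by 8).
    return 2 + 128 * 3 + (7 * bl * bl) // 8

def srp_t4(bl: int) -> float:
    # Equation 8: SRP("T4") = (1 - (386 + 7*BL^2/8) / (3 * BL^2)) * 100%
    return (1 - (386 + 7 * bl * bl / 8) / (3 * bl * bl)) * 100.0
```

For BL = 64 this yields 2 + 384 + 3584 = 3970 bytes, matching the 4096 × 7/8 + 386 = 3970 arithmetic of Appendix A.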

Class "T5" (129-256 Colors)
A block is identified as grey if the values of the three basic RGB components of every pixel of the block are almost equal. Rather than repeating the same information for the three repeated RGB color components, one component is enough to represent the other two. Thus, the red component is selected to represent the other two color components.
Compared with the previous four classes, neither the special dictionary (i.e. LUD) nor the reference pointers are required for the blocks of this class. Rather, the actual red component of the original block is selected and directly stored as it is without any reshaping or rearrangement. Figure 15 illustrates how the data can be constructed for this class of blocks. Each block is just represented by its address (I, J) and the actual red components of its pixels where each pixel needs a single byte.

Figure 15. Encoding of a block using a class of type "T5"

Since class "T5" is considered grey, the dictionary is needless and the SRP per block is defined by Equation 9:

SRP("T5") = [1 − (2 + BL²) / (3 × BL²)] × 100%   (9)

The basic difference between the last two equations, 8 and 9, is that the dictionary is needless in the latter. Given that there are (2⁸ = 256) grey levels, each pixel takes up just one byte, hence (BL²/1 = BL²). For a complete example of this class type, see Appendix B at the end of this paper. This example, on the other hand, gives an empirical proof of its validity.
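The "T5" byte count follows the same pattern; `t5_encoded_size` and `srp_t5` are illustrative names only:

```python
def t5_encoded_size(bl: int) -> int:
    # 2 address bytes + one byte per pixel (the red component only);
    # no dictionary and no reference pointers are needed.
    return 2 + bl * bl

def srp_t5(bl: int) -> float:
    # Equation 9: SRP("T5") = (1 - (2 + BL^2) / (3 * BL^2)) * 100%
    return (1 - (2 + bl * bl) / (3 * bl * bl)) * 100.0
```

For a 64×64 grey block this gives 2 + 4096 = 4098 bytes, about 33.35% of the original 12288 bytes; the SRP approaches 2/3 for large BL.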

Class "T6" (more than 256 Colors)
Different from class "T5", which stores only the red component, all three basic RGB components of the original block are stored in class "T6" and, therefore, each pixel occupies three bytes. These blocks are represented by storing the address (I, J) of the block and the actual pixels' data, where each pixel requires three bytes. Figure 16 shows the data structure representation of this class of blocks.
The SRP per block of this class is modeled by Equation 10:

SRP("T6") = [1 − (2 + 3 × BL²) / (3 × BL²)] × 100%   (10)

• In any case, this equation gives negative results for this class. But the result approaches zero and, therefore, the slight loss of storage space can be easily afforded and disregarded without making a big difference.

• The rest of this equation is similar to Equation 4.
For further clarification and understanding, Appendix C at the end of this paper gives a complete example of this type and real empirical proof of the validity of Equation 10.
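The slightly negative SRP of this class follows directly from the two extra address bytes; a short sketch (with illustrative names) makes the point:

```python
def t6_encoded_size(bl: int) -> int:
    # 2 address bytes + three RGB bytes per pixel: the pixels are
    # stored as-is, so the block grows by exactly two bytes.
    return 2 + 3 * bl * bl

def srp_t6(bl: int) -> float:
    # SRP("T6") = (1 - (2 + 3*BL^2) / (3 * BL^2)) * 100%
    #           = -2 / (3 * BL^2) * 100%
    return (1 - (2 + 3 * bl * bl) / (3 * bl * bl)) * 100.0
```

The loss shrinks quadratically with the block length, which is why it can be disregarded in practice.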

Figure 16. Encoding of a block using a class of type "T6". The address part holds the block address (I, J) in two bytes; in the data part, the pixels are stored as they are, each occupying three bytes.

Experimental Results & Evaluation
To assist in comparing and contrasting, a short outline of the six different classes is given in Table 11. It is worth remembering that the block length, "BL", is stored only once, at the first byte of class "T1". Since it is reserved as a single byte, the maximum block length is "255"; otherwise, there is a necessity to change the size of the block length field. After this proposed algorithm was conducted upon this database, a proportionate reduction in the required storage space was achieved, and this empirically-based evidence shows rapprochement between the theoretical and experimental results. To put it another way, all ten equations stated in this research are proved both theoretically and empirically to be correct. The result is therefore worthy, and the saving percentage (SRP) for the whole dataset in terms of storage space reduction is significant, at (71.039%). By way of comparison, this result compares favorably with the previous result of (87%), which, however, was achieved only for documents that contain texts and graphics. For further investigation and evaluation, Table 13 illustrates these remarks and results for different block classes and block lengths. The strikethrough bolded cells in the last column of this table show the cases where the compression ratio is poor, due to the fact that if the data block is of class "T6", the current data are stored as they are along with the two-byte block address (I, J). Figure 17 is a graphical evaluation that clearly presents the relation between the block class and the average SRP measure. As clearly shown in this figure, the overall saving ratio of the proposed algorithm is based on recognizing the class of the block.
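The class selection summarized in Table 11 can be sketched as a simple dispatcher; `classify_block` is a hypothetical name, and routing non-grey blocks of 129-256 colors to "T6" is an assumption of this sketch, since the text ties class "T5" to grey blocks:

```python
def classify_block(num_colors: int, is_grey: bool = False) -> str:
    # Map the number of detected colors in a block to one of the
    # six class types.
    if num_colors == 1:
        return "T1"      # single background color
    if num_colors == 2:
        return "T2"      # background/foreground pair, 1-bit pointers
    if num_colors <= 16:
        return "T3"      # 4-bit reference pointers
    if num_colors <= 128:
        return "T4"      # 7-bit reference pointers
    if num_colors <= 256 and is_grey:
        return "T5"      # red components only
    return "T6"          # pixels stored as-is
```

A driver would apply this to every equal-size-square block and hand each class to its dedicated encoder.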
By analyzing Table 13 and its associated Figure 17, it can be concluded that the best results of storage space reduction are achieved with classes of type "T1", where the average compression ratio is (99.962%), meaning that the entire block is a background containing one color. The next best results are obtained with the class of type "T2", at (95.783%), followed by the class of type "T3", at (83.020%). Next come classes of types "T4" and "T5", with the percentages (68.411%) and (66.654%), respectively.
Due to the fact that an additional two bytes of storage are required, the worst case is whenever the blocks are of class "T6", which means that the block is a picture. In this case, the encoding of this approach is not appropriate, and the proposed system is dynamic enough to cancel the encoding process and use another proper encoder. However, this worst case (i.e. "T6") has an average losing percentage that is around zero (precisely 0.013%), which can be neglected in view of the other worthy percentages.

Figure 17. Per class-type compression ratios (using a one-byte block length)
On the basis of the above-stated analyses, the block class type has a great impact on the SRP measure. Stated in other words, this measure relies highly on the image content, which drives the distribution of the original table over the six classes. Weighing the whole advantages and benefits of this proposed algorithm, it is proved to be a very efficient alternative, able to produce comparably competitive results.
By further evaluation of Table 13, the block length, "BL", has an impact on the received results. When the block length is increased, the SRP measure increases as well, which shows that SRP is directly proportional to the block length. Hence, doubling this length may be imperative, particularly for large-size images. To examine this, Table 14 and its related demonstration in Figure 18 use a two-byte block length. From another point of view, this table assures that the proposed algorithm is a significant one for the segmentation and compression of compound images.
Similar to the investigation of Table 13 and Figure 17, it can be concluded from Table 14 and Figure 18 that this proposed technique gives the best results for the first five classes. The best results are achieved with the class of type "T1", where the SRP is (99.999%). The next best results are achieved with the classes "T2", "T3", "T4", and "T5", with the percentages (95.832%), (83.327%), (70.786%), and (66.666%), respectively. Again, the worst case is the class of type "T6", which is around zero (precisely 0.0002446%). Since this loss is too small to be observed, it can be neglected without making a big difference.
Compared with the other approaches, the most important advantages of this proposed algorithm are its simplicity (less than five operations per pixel), its clarity and directness, its dependency on just a few parameters and, above all, its reliability. Furthermore, this proposed approach combines different compression concepts in order to achieve better compression ratios for scanned documents; its basic scope is based upon hybridizing the following methods that are already demonstrated in Figure 3:
• Dynamic Algorithms: Since it relies on capturing more details about the problem of interest, it is a dynamic and content-based algorithm.
• Statistical-based: It is a local statistical thresholding approach, where the blocks classification is achieved by exploiting some prior knowledge relevant to the number of colors that originally exist within the image or one of its blocks.
• Dictionary-based: Because the dictionary of colors is included inside the internal data representation, it is a dictionary-based compression scheme.
• Block-based encoding: It is a block-based approach where the input image is divided into equal-size-square blocks.
• Multi-layered encoding: As each input image is divided into six regions in view of the number of detected colors, it is a region-based approach as well.
• Lossless encoding: When the final image of Phase VII is retrieved back and compared with the original one, the two images are entirely the same, precisely bit-by-bit alike. This guarantees that every bit can be retrieved back precisely to its original value without any level of distortion, and hence the process is reversible. This also implies that the proposed algorithm can be recognized as a lossless one or, at least, a near-lossless one.
Above and beyond that, not only does this approach combine the aforesaid models, but it is also a two-level compression technique (i.e. Phase V and Phase VII). Finally, to conclude the discussion of this section, if the logical operation "XOR" is performed on both the encoded input images and the decoded output images, the result is zero (i.e. off or false), which means that the two images are alike. Therefore, the output quality of this phase is (100%), which also reaffirms the above-stated conclusion that this technique is a lossless one.

Figure 18. Per class-type compression ratios (using a two-byte block length)
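The XOR check described above can be sketched as follows, treating both images as byte streams; `images_identical` is an illustrative helper, not part of the proposed implementation:

```python
def images_identical(original: bytes, decoded: bytes) -> bool:
    # XOR every byte pair; the result is all zeros if and only if
    # the two streams are bit-for-bit identical.
    if len(original) != len(decoded):
        return False
    return all((a ^ b) == 0 for a, b in zip(original, decoded))
```

A lossless round trip must satisfy `images_identical(original, decode(encode(original)))` for every image in the dataset, where `encode` and `decode` stand for the proposed compression and decompression pipelines.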

Conclusion
The increasing demand of many millions of computer users for storing many millions of images has paved the way for viewing segmentation and compression techniques as more intertwined than ever. And so, the present work proposes a lossless statistical block-based segmentation technique that works in conjunction with other encoding techniques to segment compound or mixed documents that have different content types, such as pictures, graphics, texts, and/or backgrounds. Furthermore, this research has disclosed very stimulating and deep-insight findings that can significantly improve the mechanisms by which the segmentation and compression of compound images are currently evaluated.
With regard to the number of colors detected in each part of the image, this paper involves a seven-phase approach in which an incoming compound document is segmented into a set of multiple image objects, each compressed by the most applicable off-the-shelf compression technique. This approach hybridizes different compression concepts to achieve better compression in terms of space-time complexity. It is a block-based approach, where the input image is divided into equal-size-square blocks. It is a region-based approach, where the input image is divided into homogeneous regions according to the number of colors. Besides being a dynamic and content-based algorithm, it is a threshold approach, where the blocks classification is carried out by exploiting the number of colors that exist within the block or the image. Since the lookup dictionary of colors is included in the internal representation of the third phase (i.e. Rearrangement phase) and in the external representation of the sixth phase (i.e. Integrating phase), it is a dictionary-based compression scheme.
Motivated by the purpose of testing the performance of the proposed algorithm, a special database was created. It contains a dataset of 3151 24-bit-RGB-bitmap document images with different image types and rich mixed contents. In view of the empirical findings, the outcomes of the conducted experiments are admirable, and the overall average saving rate achieved is (71.039%) for the whole dataset. The important thing is that the matching analysis between the theoretical results (i.e. Equations 1 through 10) and the empirically-based results shows rapprochement without any discrepancies. Thus, the algorithm is efficient, robust, and capable of handling compound documents that have different content types. However, the performance of this solution, like that of most other image compression algorithms, depends on the content of the file to be compressed. Finally, as the input encoded image and the output decoded image are recognized as the same and recorded as identical up to the last bit, this technique is a lossless one.

Future Work and Outlook
In order to realize the potential advantages of this proposed technique in this significant area, further experimental and simulation research should be carried out, and, in turn, several significant issues can be extended as future work to support the achievements of this work. These issues may lead to further improvement related to more storage space reduction and, furthermore, bring to light a great number of new research opportunities that need to be further investigated. On the whole, the research scope can be extended with the following perspectives:
• With the aim of maximizing the existing compression ratios, the grey-scale resolution (i.e. bit-depth) can be increased from 24-bit to other values. Then the impact of this modification upon the six SRP classes should be investigated.
• Because some regions may have more relative importance than others, this algorithm may take a further direction related to the preservation of information. For instance, the regions of vehicle plates might be more significant to verify precisely than the other parts of the vehicle (Alghyaline et al. 2019). Therefore, a "lossless" algorithm is applied to the vehicle plates while "lossy" compression is applied to the rest of the image; hence, "lossless" and "lossy" encodings are used according to this importance.
• According to analysts and specialists, it is extremely rare these days to see anyone living without Internet access and, above and beyond that, it is foreseen that in the next few years there will not be any running business without innovative Cloud Computing (CC) services (El-Omari 2019). As most Digital Image Processing (DIP) applications are high-productivity and could be deployed remotely within the new vision of the smart world, there is an utmost need to integrate the DIP paradigm into the CC environment (Yuzhong and Lei 2014) (El-Omari 2019). This is especially true for hosting and delivering this proposed solution; the following motivations reinforce this relevant point and ensure that CC is the most suitable place for hosting DIP systems:
- The majority of these systems are typically sophisticated and entail high-end communicational capabilities, high-level computational power, well-developed applications, and large mass data storage capacities (Kumar et al. 2019) (Gokilavani, Mannickathan, and Dorairangaswamy 2018) (El-Omari 2019).
- Most DIP systems require application-specific platforms and real-time or near-real-time processing (El-Omari 2019) (Yuzhong and Lei 2014).
- Given that CC is moving in the direction of providing the highest Quality of Service (QoS) at a lower expense, the underlying hardware of these systems is usually too expensive to be single-owned by the enterprise itself (Mirarab, Fard, and Shamsi 2014) (El-Omari 2019) (Qin et al. 2018).
-Moreover, the three-field integration (CC, Big Data, and DIP) has recently become the most desirable platform for hosting and delivering DIP functions (El-Omari and Alzaghal 2017) (Mirarab et al. 2014) (Kang and Lee 2016).
By this, the segmentation and compression of compound images utilizing CC might become a widely-popular simple practice among ordinary users.
• Since image compression has a positive leading contribution in the security area (Kumar et al. 2019), the data inside the Lookup Dictionary Table (LUD) and the reference pointers can be encrypted at the third phase (i.e. Phase III: Rearrangement phase) of this proposed algorithm, which, as a result, leads to an encrypted integrated file as an output of Phase VI.
• In the foreseeable future, several arrangements can be made to rapidly accelerate the compression/decompression progress of this proposed technique; some of them have already been highlighted in Figure 3. The following are two of these arrangements:
- Building the data compression/decompression process as real-time utility software that may be considered part of the operating system. By using this strategy, every data file is directly encoded when it is stored and, in contrast, automatically decoded when it is retrieved back (i.e. loaded).
- Building the data compression/decompression mechanism internally as a special-purpose built-in chip. Again, as stated in the previous point, every file is automatically compressed during the saving process, and vice versa.
Equally important, both arrangements should be designed to operate automatically without the users' interference. In addition, these arrangements should work without the end-users' awareness of their existence.
of this type and size. For the remaining 4076 pixels, other than these twenty pixels, the same pattern is used. Related to Table 17, it is vitally important to mention that, to simplify the viewing of the seven-bit reference pointers, eight-bit reference pointers are viewed instead. Accordingly, the total number of occupied bytes of the data part is multiplied by (7/8), i.e. 4096 × 7/8 + 386 = 3970.

Finally, to conclude the discussion of this example, an important conclusion should be stated here: rather than spending 12288 (i.e. 64 × 64 × 3) bytes to store this block of (64×64) pixels in size, 3970 bytes are enough. Namely: 3970 / (64 × 64 × 3) × 100% = 32.308%.

Based on the number of detected colors viewed in Table 18, the type of the block involved in this example is "T5". Table 19 illustrates the data schematic construction of this example, where a total of 4098 bytes is required for each block of this type and size. For the remaining 4071 pixels, other than these twenty-five pixels, the same pattern is utilized. With regard to the aforesaid discussion, the researcher concludes that: rather than using 12288 (i.e. 64 × 64 × 3) bytes to store this three-RGB-component-color block of (64×64) pixels in size, 4098 bytes are enough. To be specific: 4098 / (64 × 64 × 3) × 100% = 33.349%. Again, this means that this proposed technique is capable of encoding this class of blocks by using only 33.349% of the block size. If Equation 9 is recalculated using BL = 64, the same result will be achieved, which means that the proposed algorithm is an efficient alternative for this class of blocks and, above all, this outcome shows a rapprochement between the theoretical (i.e. Equation 9) and the empirically-based results.