VISTA_ENCODER_DISTORTION applies DCT-based image coding algorithms to the input 256*256 gray-scale image to obtain a file with the desired distortion, measured using a particular distortion measure.

The program stores the encoded file in the folder/name provided by the user. Particular suffixes are appended to the given file name depending on the selected encoding algorithm and distortion measure (see names below). The encoded file can be decoded with VISTA_DECODER.M

Besides the encoded file, the routine also returns the decoded image, the entropy of the code (in bits/pix) and a number of distortion measures.

Encoding algorithms 1-4 below use quantizers based on human vision models of increasing accuracy. Algorithms 5-7 are based on similar vision models and also use SVM-based selection of transform coefficients.

* MODEL 1: JPEG91. Wallace, Comm. of the ACM, Vol.34(4):30-44 (1991).
  Block DCT transform coding with quantization based on a JPEG-like linear CSF
  vision model. It does not include masking relations at all.
  The name of the encoded file will be: name_jpeg91_distort_measur_1.zip

  Note, though, that this is not exactly the JPEG standard since (1) 16*16 DCT
  blocks are used, (2) run-length encoding, DPCM DC encoding and block
  information arrangement are done in a non-standard way, and (3) the final
  entropy coding is done by the Matlab zip routine. Nevertheless, this
  implementation is appropriate for a fair comparison with the rest of the
  algorithms since the above (non-fundamental) details are implemented in the
  same way in all of them.

* MODEL 2: Malo99. Malo et al., Electr. Lett., Vol.35(13):1067-1068 (1999).
  Block DCT transform coding with quantization based on a simple (point-wise)
  non-linear masking model. It includes auto-masking but does not include
  masking relations among coefficients.
  The name of the encoded file will be: name_malo99_distort_measur_1.zip

* MODEL 3: Epifanio03. Epifanio et al., Patt. Recogn., Vol.36(8):1799-1811 (2003).
  Block DCT transform coding with quantization based on simultaneous
  diagonalization of the covariance matrix and a (fixed) perceptual metric
  matrix (approximately including masking relations among DCT coefficients).
  The name of the encoded file will be: name_epifanio03_distort_measur_1.zip

* MODEL 4: Malo06. Malo et al., IEEE Trans. Im. Proc., Vol.15(1):68-80 (2006).
  Block DCT transform plus non-linear divisive normalization transform and
  uniform quantization. This is the proper way to take frequency selectivity
  and all the masking relations into account in the quantization process.
  The name of the encoded file will be: name_malo06_distort_measur_1.zip

* MODEL 5: Robinson03. Robinson & Kecman, IEEE Trans. Neur. Nets., Vol.14(4):950-958 (2003).
  Block DCT transform plus CSF-inspired constant-insensitivity SVM coefficient
  selection (RKi-1). SVM based on a rough linear vision model.
  The name of the encoded file will be: name_robinson03_distort_measur_1.zip

* MODEL 6: Gomez05. Gomez et al., IEEE Trans. Neur. Nets., Vol.16(6):1574-1581 (2005).
  Block DCT transform coding plus CSF-adaptive insensitivity SVM coefficient
  selection. SVM based on an accurate linear vision model.
  The name of the encoded file will be: name_gomez05_distort_measur_1.zip

* MODEL 7: Camps08. Camps et al., J. Mach. Learn. Res., Vol.9(1):49-66 (2008).
  Block DCT transform coding plus divisive normalization and
  constant-insensitivity SVM coefficient selection. The SVM is trained in a
  vision-model domain that takes into account frequency selectivity and masking
  relations among coefficients.
  The name of the encoded file will be: name_camps08_distort_measur_1.zip

The distortion of the above algorithms is controlled by different parameters:

* Algorithms 1-4 depend on the Control Parameter, 'CP'. Smaller CP values imply
  coarser quantization, thus giving smaller files and more distorted images.

* Algorithms 5-7 depend on two parameters: (1) the insensitivity parameter of
  the SVM, 'Epsilon', and (2) the number of bits used to encode the SVM
  weights, 'Bits'. For a fixed number of bits, smaller Epsilon values imply
  keeping more support vectors (or coefficients) and hence larger files and
  better-quality images.

The user has to provide a target distortion value. The program then sets the values of the control parameters (CP, or Epsilon and Bits) and iteratively modifies them to achieve the target entropy for the particular image.

SYNTAX:
[Results] = vista_encoder_distortion(Im,MODEL,'output_folder','name',target_entropies,Num_iterat)

Input variables:
----------------
* Im              : 256*256 image matrix of double-precision numbers in the range [0 255]
* MODEL           : 1-7 (Model 3 is not available at the moment)
* 'output_folder' : String with the folder where the output file(s) will be written
* 'name'          : String with the name of the output file.
                    Note that a suffix will be appended to this name depending
                    on the coding algorithm.
* target_entropies: Vector containing the set of target entropies (an image can
                    be compressed at different entropies with a single call to
                    this function). There will be as many output files as
                    target entropy values.
* Num_iterat      : Number of iterations used to look for the target entropy

Output:
-------
* Results : Struct array with the following fields

  - Results(i).Image          = Decoded image corresponding to the i-th value
                                of the target entropy vector.
  - Results(i).Entropy        = Entropy (in bits/pix): file_size/256^2
  - Results(i).RMSE           = RMSE distortion of the i-th decoded image
  - Results(i).SSIM           = Structural SIMilarity index of the i-th decoded
                                image. (See Wang et al., IEEE Tr. Im. Proc.,
                                2004, for a description of this distortion
                                measure.)
  - Results(i).MPE_linear     = Maximum Perceptual Error of the i-th decoded
                                image based on a linear CSF vision model.
                                (See Gomez et al., IEEE Tr. Neur. Nets., 2005,
                                for a description of this distortion measure.)
  - Results(i).MPE_non_linear = Maximum Perceptual Error of the i-th decoded
                                image based on a non-linear vision model.
                                (See Camps et al., JMLR, 2008, for a
                                description of this distortion measure.)
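To fix ideas, the basic pipeline shared by models 1-4 (block DCT, quantization, Matlab zip as final entropy coder) can be sketched with standard Matlab/Image Processing Toolbox calls. This is an illustration only, not the toolbox implementation: the perceptual quantizers described above (CSF, masking, divisive normalization) are replaced here by a plain uniform quantizer, and 'cameraman.tif' is just a convenient 256*256 test image:

    % Illustrative stand-in for the transform + quantization pipeline of models 1-4
    Im = double(imread('cameraman.tif'));             % 256*256 gray-scale image in [0 255]
    CP = 5;                                           % smaller CP -> coarser quantization (as in the text)

    T  = blockproc(Im, [16 16], @(b) dct2(b.data));   % 16*16 block DCT
    Delta = 100/CP;                                   % quantization step grows as CP decreases
    Tq = round(T/Delta)*Delta;                        % uniform quantization (stand-in for the perceptual quantizers)

    Imd = blockproc(Tq, [16 16], @(b) idct2(b.data)); % decoded image (inverse block DCT)

    save('coeffs.mat','Tq');                          % crude symbol stream
    zip('coeffs.zip','coeffs.mat');                   % final entropy coding with the Matlab zip routine
    d = dir('coeffs.zip');
    H = 8*d.bytes/256^2;                              % rough entropy of the code in bits/pix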
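A hypothetical call, following the syntax and output fields documented above (the test image, output folder and parameter values are placeholders chosen for illustration):

    Im = double(imread('cameraman.tif'));    % 256*256 gray-scale image in [0 255]

    MODEL            = 4;                    % Malo06: divisive normalization + uniform quantization
    target_entropies = [0.5 1 2];            % target entropies in bits/pix
    Num_iterat       = 3;                    % iterations of the parameter search

    Results = vista_encoder_distortion(Im, MODEL, 'results\', 'cameraman', ...
                                       target_entropies, Num_iterat);

    % One encoded .zip file per target entropy is written to the output folder;
    % the struct array gathers the decoded images and the quality measures.
    for i = 1:length(target_entropies)
        fprintf('H = %.2f bits/pix  RMSE = %.2f  SSIM = %.3f\n', ...
                Results(i).Entropy, Results(i).RMSE, Results(i).SSIM);
        figure, imagesc(Results(i).Image), colormap gray, axis image off
    end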
function [Results]=vista_encoder_distortion(Im,algorit,directorio,ficherin,entropias,N_it)
% VISTA_ENCODER_DISTORTION (see the description above for the meaning of the
% input and output arguments).

warning('off','MATLAB:dispatcher:InexactMatch')

% Translate the user-facing MODEL number (algorit = 1..7) into the internal
% algorithm index used by the encoding routines, and turn each target entropy
% into an initial guess of the control parameter (CP for models 1-4, Epsilon
% for models 5-7) by interpolating pre-computed entropy/parameter tables.
% Targets outside the tabulated range are clamped to extreme parameter values.
if algorit==1        % MODEL 1: JPEG91
    algoritmo=6;

    Desired_entropy=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 1 1.2 1.4 1.6 1.8 2 2.2];
    Approximate_uCP=[1.5 2.6 3.3 3.8 4.4 4.9 5.3 5.7 6.5 7.3 8.1 9 9.9 11 12.2];
    for i=1:length(entropias)
        if entropias(i)>max(Desired_entropy)
            parametro(i)=12.5;
        elseif entropias(i)<min(Desired_entropy)
            parametro(i)=1.2;
        else
            parametro(i)=interp1(Desired_entropy,Approximate_uCP,entropias(i));
        end
    end
elseif algorit==2    % MODEL 2: Malo99
    algoritmo=7;

    Desired_entropy=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 1 1.2 1.4 1.6 1.8 2 2.2];
    Approximate_uCP=[1.3 2.3 3 3.3 3.7 4.1 4.5 4.8 5.3 5.8 6.3 6.8 7.4 7.8 8.3];
    for i=1:length(entropias)
        if entropias(i)>max(Desired_entropy)
            parametro(i)=8.4;
        elseif entropias(i)<min(Desired_entropy)
            parametro(i)=1.1;
        else
            parametro(i)=interp1(Desired_entropy,Approximate_uCP,entropias(i));
        end
    end
elseif algorit==3    % MODEL 3: Epifanio03 (not available at the moment: no parameter table)
    algoritmo=8;
elseif algorit==4    % MODEL 4: Malo06
    algoritmo=9;

    Desired_entropy=[0.2 0.3 0.4 0.5 0.6 0.7 0.8 1 1.2 1.4 1.6 1.8 2 2.2];
    Approximate_uCP=[0.3 1.0 1.7 2.1 2.6 2.9 3.3 3.9 4.4 4.9 5.3 5.6 6 6.3];
    for i=1:length(entropias)
        if entropias(i)>max(Desired_entropy)
            parametro(i)=6.5;
        elseif entropias(i)<min(Desired_entropy)
            parametro(i)=0.15;
        else
            parametro(i)=interp1(Desired_entropy,Approximate_uCP,entropias(i));
        end
    end
elseif algorit==5    % MODEL 5: Robinson03
    algoritmo=2;

    Desired_entropy=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 1 1.2 1.4 1.6 1.8 2 2.2];
    Approx_Epsilon=[23.6 2.9 0.46 0.69 0.012 0.037 0.002 0.016 0.013 0.0004 0.0034 0.0053 0.0016 0.0037 0.0017];
    for i=1:length(entropias)
        if entropias(i)>max(Desired_entropy)
            parametro(i)=0.001;
        elseif entropias(i)<min(Desired_entropy)
            parametro(i)=25;
        else
            parametro(i)=interp1(Desired_entropy,Approx_Epsilon,entropias(i));
        end
    end
elseif algorit==6    % MODEL 6: Gomez05
    algoritmo=3;

    Desired_entropy=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 1 1.2 1.4 1.6 1.8 2 2.2];
    Approx_Epsilon=[15.7 0.18 0.10 0.065 0.009 0.026 0.002 0.012 0.011 0.0003 0.003 0.005 0.0015 0.0033 0.0016];
    for i=1:length(entropias)
        if entropias(i)>max(Desired_entropy)
            parametro(i)=0.001;
        elseif entropias(i)<min(Desired_entropy)
            parametro(i)=18;
        else
            parametro(i)=interp1(Desired_entropy,Approx_Epsilon,entropias(i));
        end
    end
elseif algorit==7    % MODEL 7: Camps08
    algoritmo=4;

    Desired_entropy=[0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 1 1.2 1.4 1.6 1.8 2 2.2];
    Approx_Epsilon=[0.79 0.36 0.17 0.08 0.025 0.025 0.004 0.01 0.004 0.0027 0.0024 0.003 0.0036 0.0002 0.0012];
    for i=1:length(entropias)
        if entropias(i)>max(Desired_entropy)
            parametro(i)=0.001;
        elseif entropias(i)<min(Desired_entropy)
            parametro(i)=1;
        else
            parametro(i)=interp1(Desired_entropy,Approx_Epsilon,entropias(i));
        end
    end
else
    error('Not a valid algorithm selection')
end

% Dispatch to the appropriate encoder: internal indices 2-4 correspond to the
% SVM-based models (5-7) and are handled by entropy_svr; indices 6-9 correspond
% to the quantizer-based models (1-4) and are handled by entropy_ucp.
[perfil,K,exponente] = computing_parameters_entropy(algoritmo);
if algoritmo <= 5
    Results = entropy_svr(algoritmo,entropias,Im,parametro,perfil,K,exponente,directorio,ficherin,N_it);
else
    Results = entropy_ucp(algoritmo,entropias,Im,parametro,exponente,directorio,ficherin,N_it);
end
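Each branch above repeats the same clamp-and-interpolate lookup that maps a target entropy (bits/pix) to an initial guess of the control parameter (CP or Epsilon). As a side note, that pattern could be factored into a small helper; the sketch below (hypothetical function name, not part of the toolbox) shows the idea:

    function parametro = initial_parameter(entropias, Desired_entropy, Approx_param, p_below, p_above)
    % Map each target entropy to an initial control parameter by linear
    % interpolation in the tabulated entropy/parameter curve. Targets outside
    % the tabulated range are clamped to the extreme values p_below / p_above.
    parametro = zeros(size(entropias));
    for i = 1:length(entropias)
        if entropias(i) > max(Desired_entropy)
            parametro(i) = p_above;
        elseif entropias(i) < min(Desired_entropy)
            parametro(i) = p_below;
        else
            parametro(i) = interp1(Desired_entropy, Approx_param, entropias(i));
        end
    end

With such a helper, each branch would reduce to setting algoritmo and the two lookup tables for that model.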