repository_name | func_path_in_repository | func_name | whole_func_string | language | func_code_string | func_documentation_string | func_code_url |
---|---|---|---|---|---|---|---|
lrq3000/pyFileFixity | pyFileFixity/lib/brownanrs/rs.py | RSCoder.decode | def decode(self, r, nostrip=False, k=None, erasures_pos=None, only_erasures=False, return_string=True):
'''Given a received string or byte array or list r of values between
0 and gf2_charac, attempts to decode it. If it's a valid codeword, or
if there are no more than (n-k)/2 errors, the repaired message is returned.
A message always has k bytes; if a message contained fewer, it is left
padded with null bytes. When decoded, these leading null bytes are
stripped, but that can cause problems if decoding binary data. When
nostrip is True, messages returned are always k bytes long. This is
useful to make sure no data is lost when decoding binary data.
Theoretically, we have R(x) = C(x) + E(x) + V(x), where R is the received message, C is the correct message without errors or erasures, E are the errors and V the erasures. Thus the goal is to compute E and V from R, so that we can recover the original message as C(x) = R(x) - E(x) - V(x). The main problem of decoding is to solve the so-called Key Equation; here we use Berlekamp-Massey.
When stated in the language of spectral estimation, decoding consists of a Fourier transform (syndrome computer), followed by a spectral analysis (Berlekamp-Massey or the Euclidean algorithm), followed by an inverse Fourier transform (Chien search).
(see Blahut, "Algebraic Codes for Data Transmission", 2003, chapter 7.6 Decoding in Time Domain).
'''
n = self.n
if not k: k = self.k
# If we were given a string, convert to a list (important to support fields above 2^8)
if isinstance(r, _str):
r = [ord(x) for x in r]
# Turn r into a polynomial
rp = Polynomial([GF2int(x) for x in r])
if erasures_pos:
# Convert string positions to coefficient positions for the algebra to work (see _find_erasures_locator(): the ecc characters represent the first coefficients while the message is put last, which is exactly the reverse of the string order where the message is first and the ecc last, so it's just as if you read the message+ecc string in reverse)
erasures_pos = [len(r)-1-x for x in erasures_pos]
# Set erasures characters to null bytes
# Note that you can just leave the original characters as they are: you don't need to set erased characters to null bytes for the decoding to work, but it won't help either. Fake erasures (characters detected as erasures that are in fact correct) still "consume" one ecc symbol each, whether or not you set them to null bytes, because the syndrome is limited to n-k and thus you can't decode above this bound without a clever trick.
# Example string containing a fake erasure: "hello sam" -> "ooooo sam" with erasures_pos = [0, 1, 2, 3, 4]. Here the last erasure is fake because the original character was also "o", so if we detect "o" as an erasure we end up with one fake erasure. Whether or not we set it to a null byte, it still uses up one ecc symbol and is always counted as a real erasure. If you're below the n-k bound, the decoding will be ok; if you're above, you can't do anything, the decoding won't work. Maybe todo: try to find a clever list decoding algorithm to account for fake erasures.
# Note: commented out so that the resulting omega (error evaluator polynomial) is the same as the erasure evaluator polynomial when decoding the same number of errors or erasures (ie, decoding 3 erasures only will give the same result as 3 errors only, with of course the errors/erasures on the same characters).
#for erasure in erasures_pos:
#rp[erasure] = GF2int(0)
# Compute the syndromes:
sz = self._syndromes(rp, k=k)
if sz.coefficients.count(GF2int(0)) == len(sz): # the code is already valid, there's nothing to do
# The last n-k bytes are parity
ret = r[:-(n-k)]
ecc = r[-(n-k):]
if not nostrip:
ret = self._list_lstrip(r[:-(n-k)], 0)
if return_string and self.gf2_charac < 256:
ret = self._list2str(ret)
ecc = self._list2str(ecc)
return ret, ecc
# Erasures locator polynomial computation
erasures_loc = None
erasures_eval = None
erasures_count = 0
if erasures_pos:
erasures_count = len(erasures_pos)
# Compute the erasure locator polynomial
erasures_loc = self._find_erasures_locator(erasures_pos)
# Compute the erasure evaluator polynomial
erasures_eval = self._find_error_evaluator(sz, erasures_loc, k=k)
if only_erasures:
sigma = erasures_loc
omega = erasures_eval
else:
# Find the error locator polynomial and error evaluator polynomial
# using the Berlekamp-Massey algorithm
# if erasures were supplied, BM will generate the errata (errors-and-erasures) locator and evaluator polynomials
sigma, omega = self._berlekamp_massey(sz, k=k, erasures_loc=erasures_loc, erasures_eval=erasures_eval, erasures_count=erasures_count)
omega = self._find_error_evaluator(sz, sigma, k=k) # we want to make sure that omega is correct (we know that sigma is always correct, but omega not really)
# Now use Chien's procedure to find the error locations
# j is an array of integers representing the positions of the errors, 0
# being the rightmost byte
# X is a corresponding array of GF(2^8) values where X_i = alpha^(j_i)
X, j = self._chien_search(sigma)
# Sanity check: we cannot guarantee correct decoding of more than n-k errata (Singleton bound: the minimum distance is n-k+1, so at most n-k errata can be corrected), and we cannot even check whether the decoding is correct (the syndrome will always be all 0 if we try to decode above the bound), thus it's better to just return the input as-is.
if len(j) > n-k:
ret = r[:-(n-k)]
ecc = r[-(n-k):]
if not nostrip:
ret = self._list_lstrip(r[:-(n-k)], 0)
if return_string and self.gf2_charac < 256:
ret = self._list2str(ret)
ecc = self._list2str(ecc)
return ret, ecc
# And finally, find the error magnitudes with Forney's Formula
# Y is an array of GF(2^8) values corresponding to the error magnitude
# at the position given by the j array
Y = self._forney(omega, X)
# Put the error and locations together to form the error polynomial
# Note that an alternative would be to compute the error-spectrum polynomial E(x) which satisfies E(x)*Sigma(x) = 0 (mod x^n - 1) = Omega(x)(x^n - 1) -- see Blahut, Algebraic codes for data transmission
Elist = [GF2int(0)] * self.gf2_charac
if len(Y) >= len(j): # failsafe: if the number of errata is higher than the number of coefficients in the magnitude polynomial, we failed!
for i in _range(self.gf2_charac): # FIXME? is it really necessary to go up to self.gf2_charac? wouldn't len(rp) be enough? (since the goal is anyway to subtract E from rp)
if i in j:
Elist[i] = Y[j.index(i)]
E = Polynomial( Elist[::-1] ) # reverse the list because we used the coefficient degrees (j) instead of the error positions
else:
E = Polynomial()
# And we get our real codeword!
c = rp - E # Remember what we wrote above: R(x) = C(x) + E(x), so here, to get back the original codeword, C(x) = R(x) - E(x)! (V(x), the erasures, is included inside E(x) here.)
if len(c) > len(r): c = rp # failsafe: in case the correction went totally wrong (we repaired padded null bytes instead of the message! thus we end up with a longer message than what we should have), then we just return the uncorrected message. Note: we compare the length of c with r on purpose, that's not an error: if we compare with rp, if the first few characters were erased (null bytes) in r, then in rp the Polynomial will automatically skip them, thus the length will always be smaller in that case.
# Split the polynomial into two parts: the corrected message and the corrected ecc
ret = c.coefficients[:-(n-k)]
ecc = c.coefficients[-(n-k):]
if nostrip:
# Polynomial objects don't store leading 0 coefficients, so we
# actually need to pad this to k bytes
ret = self._list_rjust(ret, k, 0)
if return_string and self.gf2_charac < 256: # automatically disable return_string if the field is above 255 (chr would fail, so it's up to the user to define the mapping)
# Form it back into a string
ret = self._list2str(ret)
ecc = self._list2str(ecc)
return ret, ecc | python | def decode(self, r, nostrip=False, k=None, erasures_pos=None, only_erasures=False, return_string=True):
'''Given a received string or byte array or list r of values between
0 and gf2_charac, attempts to decode it. If it's a valid codeword, or
if there are no more than (n-k)/2 errors, the repaired message is returned.
A message always has k bytes; if a message contained fewer, it is left
padded with null bytes. When decoded, these leading null bytes are
stripped, but that can cause problems if decoding binary data. When
nostrip is True, messages returned are always k bytes long. This is
useful to make sure no data is lost when decoding binary data.
Theoretically, we have R(x) = C(x) + E(x) + V(x), where R is the received message, C is the correct message without errors or erasures, E are the errors and V the erasures. Thus the goal is to compute E and V from R, so that we can recover the original message as C(x) = R(x) - E(x) - V(x). The main problem of decoding is to solve the so-called Key Equation; here we use Berlekamp-Massey.
When stated in the language of spectral estimation, decoding consists of a Fourier transform (syndrome computer), followed by a spectral analysis (Berlekamp-Massey or the Euclidean algorithm), followed by an inverse Fourier transform (Chien search).
(see Blahut, "Algebraic Codes for Data Transmission", 2003, chapter 7.6 Decoding in Time Domain).
'''
n = self.n
if not k: k = self.k
# If we were given a string, convert to a list (important to support fields above 2^8)
if isinstance(r, _str):
r = [ord(x) for x in r]
# Turn r into a polynomial
rp = Polynomial([GF2int(x) for x in r])
if erasures_pos:
# Convert string positions to coefficient positions for the algebra to work (see _find_erasures_locator(): the ecc characters represent the first coefficients while the message is put last, which is exactly the reverse of the string order where the message is first and the ecc last, so it's just as if you read the message+ecc string in reverse)
erasures_pos = [len(r)-1-x for x in erasures_pos]
# Set erasures characters to null bytes
# Note that you can just leave the original characters as they are: you don't need to set erased characters to null bytes for the decoding to work, but it won't help either. Fake erasures (characters detected as erasures that are in fact correct) still "consume" one ecc symbol each, whether or not you set them to null bytes, because the syndrome is limited to n-k and thus you can't decode above this bound without a clever trick.
# Example string containing a fake erasure: "hello sam" -> "ooooo sam" with erasures_pos = [0, 1, 2, 3, 4]. Here the last erasure is fake because the original character was also "o", so if we detect "o" as an erasure we end up with one fake erasure. Whether or not we set it to a null byte, it still uses up one ecc symbol and is always counted as a real erasure. If you're below the n-k bound, the decoding will be ok; if you're above, you can't do anything, the decoding won't work. Maybe todo: try to find a clever list decoding algorithm to account for fake erasures.
# Note: commented out so that the resulting omega (error evaluator polynomial) is the same as the erasure evaluator polynomial when decoding the same number of errors or erasures (ie, decoding 3 erasures only will give the same result as 3 errors only, with of course the errors/erasures on the same characters).
#for erasure in erasures_pos:
#rp[erasure] = GF2int(0)
# Compute the syndromes:
sz = self._syndromes(rp, k=k)
if sz.coefficients.count(GF2int(0)) == len(sz): # the code is already valid, there's nothing to do
# The last n-k bytes are parity
ret = r[:-(n-k)]
ecc = r[-(n-k):]
if not nostrip:
ret = self._list_lstrip(r[:-(n-k)], 0)
if return_string and self.gf2_charac < 256:
ret = self._list2str(ret)
ecc = self._list2str(ecc)
return ret, ecc
# Erasures locator polynomial computation
erasures_loc = None
erasures_eval = None
erasures_count = 0
if erasures_pos:
erasures_count = len(erasures_pos)
# Compute the erasure locator polynomial
erasures_loc = self._find_erasures_locator(erasures_pos)
# Compute the erasure evaluator polynomial
erasures_eval = self._find_error_evaluator(sz, erasures_loc, k=k)
if only_erasures:
sigma = erasures_loc
omega = erasures_eval
else:
# Find the error locator polynomial and error evaluator polynomial
# using the Berlekamp-Massey algorithm
# if erasures were supplied, BM will generate the errata (errors-and-erasures) locator and evaluator polynomials
sigma, omega = self._berlekamp_massey(sz, k=k, erasures_loc=erasures_loc, erasures_eval=erasures_eval, erasures_count=erasures_count)
omega = self._find_error_evaluator(sz, sigma, k=k) # we want to make sure that omega is correct (we know that sigma is always correct, but omega not really)
# Now use Chien's procedure to find the error locations
# j is an array of integers representing the positions of the errors, 0
# being the rightmost byte
# X is a corresponding array of GF(2^8) values where X_i = alpha^(j_i)
X, j = self._chien_search(sigma)
# Sanity check: we cannot guarantee correct decoding of more than n-k errata (Singleton bound: the minimum distance is n-k+1, so at most n-k errata can be corrected), and we cannot even check whether the decoding is correct (the syndrome will always be all 0 if we try to decode above the bound), thus it's better to just return the input as-is.
if len(j) > n-k:
ret = r[:-(n-k)]
ecc = r[-(n-k):]
if not nostrip:
ret = self._list_lstrip(r[:-(n-k)], 0)
if return_string and self.gf2_charac < 256:
ret = self._list2str(ret)
ecc = self._list2str(ecc)
return ret, ecc
# And finally, find the error magnitudes with Forney's Formula
# Y is an array of GF(2^8) values corresponding to the error magnitude
# at the position given by the j array
Y = self._forney(omega, X)
# Put the error and locations together to form the error polynomial
# Note that an alternative would be to compute the error-spectrum polynomial E(x) which satisfies E(x)*Sigma(x) = 0 (mod x^n - 1) = Omega(x)(x^n - 1) -- see Blahut, Algebraic codes for data transmission
Elist = [GF2int(0)] * self.gf2_charac
if len(Y) >= len(j): # failsafe: if the number of errata is higher than the number of coefficients in the magnitude polynomial, we failed!
for i in _range(self.gf2_charac): # FIXME? is it really necessary to go up to self.gf2_charac? wouldn't len(rp) be enough? (since the goal is anyway to subtract E from rp)
if i in j:
Elist[i] = Y[j.index(i)]
E = Polynomial( Elist[::-1] ) # reverse the list because we used the coefficient degrees (j) instead of the error positions
else:
E = Polynomial()
# And we get our real codeword!
c = rp - E # Remember what we wrote above: R(x) = C(x) + E(x), so here, to get back the original codeword, C(x) = R(x) - E(x)! (V(x), the erasures, is included inside E(x) here.)
if len(c) > len(r): c = rp # failsafe: in case the correction went totally wrong (we repaired padded null bytes instead of the message! thus we end up with a longer message than what we should have), then we just return the uncorrected message. Note: we compare the length of c with r on purpose, that's not an error: if we compare with rp, if the first few characters were erased (null bytes) in r, then in rp the Polynomial will automatically skip them, thus the length will always be smaller in that case.
# Split the polynomial into two parts: the corrected message and the corrected ecc
ret = c.coefficients[:-(n-k)]
ecc = c.coefficients[-(n-k):]
if nostrip:
# Polynomial objects don't store leading 0 coefficients, so we
# actually need to pad this to k bytes
ret = self._list_rjust(ret, k, 0)
if return_string and self.gf2_charac < 256: # automatically disable return_string if the field is above 255 (chr would fail, so it's up to the user to define the mapping)
# Form it back into a string
ret = self._list2str(ret)
ecc = self._list2str(ecc)
return ret, ecc | Given a received string or byte array or list r of values between
0 and gf2_charac, attempts to decode it. If it's a valid codeword, or
if there are no more than (n-k)/2 errors, the repaired message is returned.
A message always has k bytes; if a message contained fewer, it is left
padded with null bytes. When decoded, these leading null bytes are
stripped, but that can cause problems if decoding binary data. When
nostrip is True, messages returned are always k bytes long. This is
useful to make sure no data is lost when decoding binary data.
Theoretically, we have R(x) = C(x) + E(x) + V(x), where R is the received message, C is the correct message without errors or erasures, E are the errors and V the erasures. Thus the goal is to compute E and V from R, so that we can recover the original message as C(x) = R(x) - E(x) - V(x). The main problem of decoding is to solve the so-called Key Equation; here we use Berlekamp-Massey.
When stated in the language of spectral estimation, decoding consists of a Fourier transform (syndrome computer), followed by a spectral analysis (Berlekamp-Massey or the Euclidean algorithm), followed by an inverse Fourier transform (Chien search).
(see Blahut, "Algebraic Codes for Data Transmission", 2003, chapter 7.6 Decoding in Time Domain). | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/brownanrs/rs.py#L248-L371 |
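A minimal, hedged usage sketch of the decode() method shown in this row. The `RSCoder(n, k)` constructor and the `encode()` counterpart are not part of this excerpt, so their exact signatures and return types are assumptions; the corruption example reuses the "h_ll_ world" scenario from the docstrings of this file.

```python
# Hedged usage sketch: RSCoder(n, k) and encode() are assumed from the rest of rs.py,
# and encode() is assumed to return the k message chars followed by the n-k ecc chars.
from pyFileFixity.lib.brownanrs.rs import RSCoder

coder = RSCoder(20, 11)                 # n=20 symbols total, k=11 message symbols
codeword = coder.encode("hello world")  # message + n-k = 9 ecc symbols

# Errors at unknown positions: up to (n-k)/2 = 4 of them are correctable.
received = "h_ll_ world" + codeword[11:]
msg, ecc = coder.decode(received)
print(msg)                              # -> "hello world"

# Erasures at known string positions: up to n-k = 9 are correctable.
msg, ecc = coder.decode(received, erasures_pos=[1, 4], only_erasures=True)
print(msg)                              # -> "hello world"
```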
lrq3000/pyFileFixity | pyFileFixity/lib/brownanrs/rs.py | RSCoder._list_lstrip | def _list_lstrip(self, L, val=0):
'''Left strip the specified value'''
for i in _range(len(L)):
if L[i] != val:
return L[i:] | python | def _list_lstrip(self, L, val=0):
'''Left strip the specified value'''
for i in _range(len(L)):
if L[i] != val:
return L[i:] | Left strip the specified value | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/brownanrs/rs.py#L495-L499 |
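One behaviour of the helper above worth noting: if every element equals the stripped value, the loop falls through and the method implicitly returns None rather than an empty list. A standalone, illustration-only sketch (not the library's API) making that explicit:

```python
def list_lstrip(L, val=0):
    """Drop leading occurrences of val, mirroring RSCoder._list_lstrip above."""
    for i, x in enumerate(L):
        if x != val:
            return L[i:]
    return None  # made explicit here: an all-val input strips down to nothing

print(list_lstrip([0, 0, 7, 0, 3]))  # -> [7, 0, 3]
print(list_lstrip([0, 0, 0]))        # -> None, so callers must handle this case
```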
lrq3000/pyFileFixity | pyFileFixity/lib/brownanrs/rs.py | RSCoder._list_rjust | def _list_rjust(self, L, width, fillchar=0):
'''Left pad with the specified value to obtain a list of the specified width (length)'''
length = max(0, width - len(L))
return [fillchar]*length + L | python | def _list_rjust(self, L, width, fillchar=0):
'''Left pad with the specified value to obtain a list of the specified width (length)'''
length = max(0, width - len(L))
return [fillchar]*length + L | Left pad with the specified value to obtain a list of the specified width (length) | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/brownanrs/rs.py#L501-L504 |
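Together, the two list helpers implement the padding round trip used by decode(): leading null symbols are stripped when nostrip is False and can be restored by right-justifying back to k symbols. A small illustration-only sketch:

```python
def list_rjust(L, width, fillchar=0):
    """Left-pad L with fillchar up to width, mirroring RSCoder._list_rjust above."""
    return [fillchar] * max(0, width - len(L)) + L

k = 5
padded_message = [0, 0, 0, 104, 105]   # "hi" left-padded with null symbols to k bytes
stripped = [104, 105]                  # what _list_lstrip would return for it
assert list_rjust(stripped, k) == padded_message
```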
lrq3000/pyFileFixity | pyFileFixity/lib/brownanrs/rs.py | RSCoder._syndromes | def _syndromes(self, r, k=None):
'''Given the received codeword r in the form of a Polynomial object,
computes the syndromes and returns the syndrome polynomial.
Mathematically, it's essentially equivalent to a Fourier transform (Chien search being the inverse).
'''
n = self.n
if not k: k = self.k
# Note the + [GF2int(0)]: we add a 0 coefficient for the lowest degree (the constant). This effectively shifts the syndrome, and will shift every computation that depends on the syndromes (such as the error locator polynomial, the error evaluator polynomial, etc., but not the error positions).
# This is not strictly necessary since syndromes are defined such that there are only non-zero coefficients (the only 0 is the shift of the constant here) and subsequent computations will/must account for the shift by skipping the first iteration (eg, the often seen range(1, n-k+1)); alternatively you can avoid prepending the 0 coeff and adapt every subsequent computation to start from 0 instead of 1.
return Polynomial( [r.evaluate( GF2int(self.generator)**(l+self.fcr) ) for l in _range(n-k-1, -1, -1)] + [GF2int(0)], keep_zero=True ) | python | def _syndromes(self, r, k=None):
'''Given the received codeword r in the form of a Polynomial object,
computes the syndromes and returns the syndrome polynomial.
Mathematically, it's essentially equivalent to a Fourier transform (Chien search being the inverse).
'''
n = self.n
if not k: k = self.k
# Note the + [GF2int(0)]: we add a 0 coefficient for the lowest degree (the constant). This effectively shifts the syndrome, and will shift every computation that depends on the syndromes (such as the error locator polynomial, the error evaluator polynomial, etc., but not the error positions).
# This is not strictly necessary since syndromes are defined such that there are only non-zero coefficients (the only 0 is the shift of the constant here) and subsequent computations will/must account for the shift by skipping the first iteration (eg, the often seen range(1, n-k+1)); alternatively you can avoid prepending the 0 coeff and adapt every subsequent computation to start from 0 instead of 1.
return Polynomial( [r.evaluate( GF2int(self.generator)**(l+self.fcr) ) for l in _range(n-k-1, -1, -1)] + [GF2int(0)], keep_zero=True ) | Given the received codeword r in the form of a Polynomial object,
computes the syndromes and returns the syndrome polynomial.
Mathematically, it's essentially equivalent to a Fourier transform (Chien search being the inverse). | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/brownanrs/rs.py#L506-L515 |
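For reference, the standard Reed-Solomon identity this evaluation relies on (stated with the notation of the decode() docstring; not specific to this implementation): since the codeword C(x) is a multiple of the generator polynomial, it vanishes at every root alpha^(l+fcr), so each syndrome coefficient depends only on the errata:

$$S_l = R(\alpha^{l+\mathrm{fcr}}) = C(\alpha^{l+\mathrm{fcr}}) + E(\alpha^{l+\mathrm{fcr}}) + V(\alpha^{l+\mathrm{fcr}}) = E(\alpha^{l+\mathrm{fcr}}) + V(\alpha^{l+\mathrm{fcr}}), \qquad l = 0, \dots, n-k-1.$$

In particular, all syndromes are zero when the received word is (or looks like) a valid codeword, which is the early-exit test in decode() above.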
lrq3000/pyFileFixity | pyFileFixity/lib/brownanrs/rs.py | RSCoder._find_erasures_locator | def _find_erasures_locator(self, erasures_pos):
'''Compute the erasures locator polynomial from the erasures positions (the positions must be relative to the x coefficients). Eg: "hello worldxxxxxxxxx" is tampered into "h_ll_ worldxxxxxxxxx", with xxxxxxxxx being the ecc of length n-k=9; here the string positions are [1, 4], but the coefficients are reversed since the ecc characters are placed as the first coefficients of the polynomial, thus the coefficient positions of the erased characters are n-1 - [1, 4] = [18, 15], which is the erasures_pos to be specified as an argument.'''
# See: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Error_Control_Coding/lecture7.pdf and Blahut, Richard E. "Transform techniques for error control codes." IBM Journal of Research and development 23.3 (1979): 299-315. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.600&rep=rep1&type=pdf and also a MatLab implementation here: http://www.mathworks.com/matlabcentral/fileexchange/23567-reed-solomon-errors-and-erasures-decoder/content//RS_E_E_DEC.m
erasures_loc = Polynomial([GF2int(1)]) # just to init because we will multiply, so it must be 1 so that the multiplication starts correctly without nulling any term
# erasures_loc is very simple to compute: erasures_loc = prod(1 - x*alpha**i) for i in erasures_pos, where alpha is the alpha chosen to evaluate polynomials (here in this library it's gf(3)). To generate c*x where c is a constant, we simply generate a Polynomial([c, 0]) where 0 is the constant and c is positioned to be the coefficient for x^1. See https://en.wikipedia.org/wiki/Forney_algorithm#Erasures
for i in erasures_pos:
erasures_loc = erasures_loc * (Polynomial([GF2int(1)]) - Polynomial([GF2int(self.generator)**i, 0]))
return erasures_loc | python | def _find_erasures_locator(self, erasures_pos):
'''Compute the erasures locator polynomial from the erasures positions (the positions must be relative to the x coefficients). Eg: "hello worldxxxxxxxxx" is tampered into "h_ll_ worldxxxxxxxxx", with xxxxxxxxx being the ecc of length n-k=9; here the string positions are [1, 4], but the coefficients are reversed since the ecc characters are placed as the first coefficients of the polynomial, thus the coefficient positions of the erased characters are n-1 - [1, 4] = [18, 15], which is the erasures_pos to be specified as an argument.'''
# See: http://ocw.usu.edu/Electrical_and_Computer_Engineering/Error_Control_Coding/lecture7.pdf and Blahut, Richard E. "Transform techniques for error control codes." IBM Journal of Research and development 23.3 (1979): 299-315. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.600&rep=rep1&type=pdf and also a MatLab implementation here: http://www.mathworks.com/matlabcentral/fileexchange/23567-reed-solomon-errors-and-erasures-decoder/content//RS_E_E_DEC.m
erasures_loc = Polynomial([GF2int(1)]) # just to init because we will multiply, so it must be 1 so that the multiplication starts correctly without nulling any term
# erasures_loc is very simple to compute: erasures_loc = prod(1 - x*alpha**i) for i in erasures_pos, where alpha is the alpha chosen to evaluate polynomials (here in this library it's gf(3)). To generate c*x where c is a constant, we simply generate a Polynomial([c, 0]) where 0 is the constant and c is positioned to be the coefficient for x^1. See https://en.wikipedia.org/wiki/Forney_algorithm#Erasures
for i in erasures_pos:
erasures_loc = erasures_loc * (Polynomial([GF2int(1)]) - Polynomial([GF2int(self.generator)**i, 0]))
return erasures_loc | Compute the erasures locator polynomial from the erasures positions (the positions must be relative to the x coefficients). Eg: "hello worldxxxxxxxxx" is tampered into "h_ll_ worldxxxxxxxxx", with xxxxxxxxx being the ecc of length n-k=9; here the string positions are [1, 4], but the coefficients are reversed since the ecc characters are placed as the first coefficients of the polynomial, thus the coefficient positions of the erased characters are n-1 - [1, 4] = [18, 15], which is the erasures_pos to be specified as an argument. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/brownanrs/rs.py#L538-L545 |
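A hedged, illustration-only sketch of the same product as a worked example, reusing the "h_ll_ world" scenario from the docstring. Polynomial and GF2int are the classes rs.py already uses; the generator value 3 is taken from the comment above and is an assumption about the coder instance:

```python
# Worked example: n=20, k=11, so n-k=9 ecc symbols; '_' sits at string positions [1, 4].
n = 20
erasures_pos_str = [1, 4]
coeff_pos = [n - 1 - p for p in erasures_pos_str]   # -> [18, 15], as in the docstring

alpha = GF2int(3)                                   # assumed generator (see comment above)
gamma = Polynomial([GF2int(1)])
for i in coeff_pos:
    # multiply in the factor (1 - x*alpha**i); Polynomial([c, 0]) encodes c*x
    gamma = gamma * (Polynomial([GF2int(1)]) - Polynomial([alpha**i, GF2int(0)]))
# gamma is now the erasure locator Gamma(x); its roots are alpha**(-i) for each erased position
```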
lrq3000/pyFileFixity | pyFileFixity/lib/brownanrs/rs.py | RSCoder._berlekamp_massey | def _berlekamp_massey(self, s, k=None, erasures_loc=None, erasures_eval=None, erasures_count=0):
'''Computes and returns the errata (errors+erasures) locator polynomial (sigma) and the
error evaluator polynomial (omega) at the same time.
If the erasures locator is specified, we will return an errors-and-erasures locator polynomial and an errors-and-erasures evaluator polynomial, else it will compute only errors. With erasures in addition to errors, it can simultaneously decode up to v+2e <= (n-k) where v is the number of erasures and e the number of errors.
Mathematically speaking, this is equivalent to a spectral analysis (see Blahut, "Algebraic Codes for Data Transmission", 2003, chapter 7.6 Decoding in Time Domain).
The parameter s is the syndrome polynomial (syndromes encoded in a
generator function) as returned by _syndromes.
Notes:
The error polynomial:
E(x) = E_0 + E_1 x + ... + E_(n-1) x^(n-1)
j_1, j_2, ..., j_s are the error positions. (There are at most s
errors)
Error location X_i is defined: X_i = α^(j_i)
that is, the power of α (alpha) corresponding to the error location
Error magnitude Y_i is defined: E_(j_i)
that is, the coefficient in the error polynomial at position j_i
Error locator polynomial:
sigma(z) = Product( 1 - X_i * z, i=1..s )
roots are the reciprocals of the error locations
( 1/X_1, 1/X_2, ...)
Error evaluator polynomial omega(z) is here computed at the same time as sigma, but it can also be constructed afterwards using the syndrome and sigma (see _find_error_evaluator() method).
It can be seen that the algorithm tries to iteratively solve for the error locator polynomial by
solving one equation after another and updating the error locator polynomial. If it turns out that it
cannot solve the equation at some step, then it computes the error and weights it by the last
non-zero discriminant found, and delays the weighted result to increase the polynomial degree
by 1. Ref: "Reed Solomon Decoder: TMS320C64x Implementation" by Jagadeesh Sankaran, December 2000, Application Report SPRA686
The best paper I found describing the BM algorithm for errata (errors-and-erasures) evaluator computation is in "Algebraic Codes for Data Transmission", Richard E. Blahut, 2003.
'''
# For errors-and-erasures decoding, see: "Algebraic Codes for Data Transmission", Richard E. Blahut, 2003 and (but it's less complete): Blahut, Richard E. "Transform techniques for error control codes." IBM Journal of Research and development 23.3 (1979): 299-315. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.600&rep=rep1&type=pdf and also a MatLab implementation here: http://www.mathworks.com/matlabcentral/fileexchange/23567-reed-solomon-errors-and-erasures-decoder/content//RS_E_E_DEC.m
# also see: Blahut, Richard E. "A universal Reed-Solomon decoder." IBM Journal of Research and Development 28.2 (1984): 150-158. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.2084&rep=rep1&type=pdf
# and another good alternative book with concrete programming examples: Jiang, Yuan. A practical guide to error-control coding using Matlab. Artech House, 2010.
n = self.n
if not k: k = self.k
# Initialize, depending on if we include erasures or not:
if erasures_loc:
sigma = [ Polynomial(erasures_loc.coefficients) ] # copy erasures_loc by creating a new Polynomial, so that we initialize the errata locator polynomial with the erasures locator polynomial.
B = [ Polynomial(erasures_loc.coefficients) ]
omega = [ Polynomial(erasures_eval.coefficients) ] # to compute omega (the evaluator polynomial) at the same time, we also need to initialize it with the partial erasures evaluator polynomial
A = [ Polynomial(erasures_eval.coefficients) ] # TODO: fix the initial value of the evaluator support polynomial, because currently the final omega is not correct (it contains higher order terms that should be removed by the end of BM)
else:
sigma = [ Polynomial([GF2int(1)]) ] # error locator polynomial. Also called Lambda in other notations.
B = [ Polynomial([GF2int(1)]) ] # this is the error locator support/secondary polynomial, which is a funky way to say that it's just a temporary variable that will help us construct sigma, the error locator polynomial
omega = [ Polynomial([GF2int(1)]) ] # error evaluator polynomial. We don't need to initialize it with erasures_loc, it will still work, because Delta is computed using sigma, which itself is correctly initialized with erasures if needed.
A = [ Polynomial([GF2int(0)]) ] # this is the error evaluator support/secondary polynomial, to help us construct omega
L = [ 0 ] # update flag: necessary variable to check when updating is necessary and to check bounds (to avoid wrongly eliminating the higher order terms). For more infos, see https://www.cs.duke.edu/courses/spring11/cps296.3/decoding_rs.pdf
M = [ 0 ] # optional variable to check bounds (so that we do not mistakenly overwrite the higher order terms). This is not necessary, it's only an additional safe check. For more infos, see the presentation decoding_rs.pdf by Andrew Brown in the doc folder.
# Fix the syndrome shifting: when computing the syndrome, some implementations may prepend a 0 coefficient for the lowest degree term (the constant). This is a case of syndrome shifting, and the syndrome will then be bigger than the number of ecc symbols (I don't know what purpose this shifting serves). If that's the case, we need to account for the shifting wherever we use the syndrome, such as inside BM, by skipping those prepended coefficients.
# Another way to detect the shifting is to look for 0 coefficients: by definition, a syndrome does not contain any 0 coefficient (except when there are no errors/erasures, in which case they are all 0). This however doesn't work with the modified Forney syndrome (not used in this lib but it may be implemented in the future), which sets to 0 the coefficients corresponding to erasures, leaving only the coefficients corresponding to errors.
synd_shift = 0
if len(s) > (n-k): synd_shift = len(s) - (n-k)
# Polynomial constants:
ONE = Polynomial(z0=GF2int(1))
ZERO = Polynomial(z0=GF2int(0))
Z = Polynomial(z1=GF2int(1)) # used to shift polynomials, simply multiply your poly * Z to shift
# Precaching
s2 = ONE + s
# Iteratively compute the polynomials n-k-erasures_count times. The last ones will be correct (since the algorithm refines the error/errata locator polynomial iteratively depending on the discrepancy, which is kind of a difference-from-correctness measure).
for l in _range(0, n-k-erasures_count): # skip the first erasures_count iterations because we already computed the partial errata locator polynomial (by initializing with the erasures locator polynomial)
K = erasures_count+l+synd_shift # skip the FIRST erasures_count iterations (not the last iterations, that's very important!)
# Goal for each iteration: Compute sigma[l+1] and omega[l+1] such that
# (1 + s)*sigma[l] == omega[l] in mod z^(K)
# For this particular loop iteration, we have sigma[l] and omega[l],
# and are computing sigma[l+1] and omega[l+1]
# First find Delta, the non-zero coefficient of z^(K) in
# (1 + s) * sigma[l]
# Note that adding 1 to the syndrome s is not really necessary, you can do as well without.
# This delta is valid for l (this iteration) only
Delta = ( s2 * sigma[l] ).get_coefficient(K) # Delta is also known as the Discrepancy, and is always a scalar (not a polynomial).
# Make it a polynomial of degree 0, just for ease of computation with polynomials sigma and omega.
Delta = Polynomial(x0=Delta)
# Can now compute sigma[l+1] and omega[l+1] from
# sigma[l], omega[l], B[l], A[l], and Delta
sigma.append( sigma[l] - Delta * Z * B[l] )
omega.append( omega[l] - Delta * Z * A[l] )
# Now compute the next support polynomials B and A
# There are two ways to do this
# This is based on a messy case analysis on the degrees of the four polynomials sigma, omega, A and B in order to minimize the degrees of A and B. For more infos, see https://www.cs.duke.edu/courses/spring10/cps296.3/decoding_rs_scribe.pdf
# In fact it ensures that the degree of the final polynomials aren't too large.
if Delta == ZERO or 2*L[l] > K+erasures_count \
or (2*L[l] == K+erasures_count and M[l] == 0):
#if Delta == ZERO or len(sigma[l+1]) <= len(sigma[l]): # another way to compute when to update, and it doesn't require to maintain the update flag L
# Rule A
B.append( Z * B[l] )
A.append( Z * A[l] )
L.append( L[l] )
M.append( M[l] )
elif (Delta != ZERO and 2*L[l] < K+erasures_count) \
or (2*L[l] == K+erasures_count and M[l] != 0):
# elif Delta != ZERO and len(sigma[l+1]) > len(sigma[l]): # another way to compute when to update, and it doesn't require to maintain the update flag L
# Rule B
B.append( sigma[l] // Delta )
A.append( omega[l] // Delta )
L.append( K - L[l] ) # the update flag L is tricky: in Blahut's schema, it's mandatory to use `L = K - L - erasures_count` (and indeed in a previous draft of this function, if you forgot to do `- erasures_count` it would lead to correcting only 2*(errors+erasures) <= (n-k) instead of 2*errors+erasures <= (n-k)), but in this latest draft, this will lead to a wrong decoding in some cases where it should correctly decode! Thus you should try with and without `- erasures_count` to update L on your own implementation and see which one works OK without producing wrong decoding failures.
M.append( 1 - M[l] )
else:
raise Exception("Code shouldn't have gotten here")
# Hack to fix the simultaneous computation of omega, the errata evaluator polynomial: because A (the errata evaluator support polynomial) is not correctly initialized (I could not find any info in academic papers). So at the end, we get the correct errata evaluator polynomial omega + some higher order terms that should not be present, but since we know that sigma is always correct and the maximum degree should be the same as omega, we can fix omega by truncating too high order terms.
if omega[-1].degree > sigma[-1].degree: omega[-1] = Polynomial(omega[-1].coefficients[-(sigma[-1].degree+1):])
# Debug lines, uncomment to show the result of every iteration
#print "SIGMA BM"
#for i,x in enumerate(sigma):
#print i, ":", x
# Return the last result of the iterations (since BM computes iteratively, only the last iteration is guaranteed to be correct; it may already be correct earlier, but we can't be sure)
return sigma[-1], omega[-1] | python | def _berlekamp_massey(self, s, k=None, erasures_loc=None, erasures_eval=None, erasures_count=0):
'''Computes and returns the errata (errors+erasures) locator polynomial (sigma) and the
error evaluator polynomial (omega) at the same time.
If the erasures locator is specified, we will return an errors-and-erasures locator polynomial and an errors-and-erasures evaluator polynomial, else it will compute only errors. With erasures in addition to errors, it can simultaneously decode up to v+2e <= (n-k) where v is the number of erasures and e the number of errors.
Mathematically speaking, this is equivalent to a spectral analysis (see Blahut, "Algebraic Codes for Data Transmission", 2003, chapter 7.6 Decoding in Time Domain).
The parameter s is the syndrome polynomial (syndromes encoded in a
generator function) as returned by _syndromes.
Notes:
The error polynomial:
E(x) = E_0 + E_1 x + ... + E_(n-1) x^(n-1)
j_1, j_2, ..., j_s are the error positions. (There are at most s
errors)
Error location X_i is defined: X_i = α^(j_i)
that is, the power of α (alpha) corresponding to the error location
Error magnitude Y_i is defined: E_(j_i)
that is, the coefficient in the error polynomial at position j_i
Error locator polynomial:
sigma(z) = Product( 1 - X_i * z, i=1..s )
roots are the reciprocals of the error locations
( 1/X_1, 1/X_2, ...)
Error evaluator polynomial omega(z) is here computed at the same time as sigma, but it can also be constructed afterwards using the syndrome and sigma (see _find_error_evaluator() method).
It can be seen that the algorithm tries to iteratively solve for the error locator polynomial by
solving one equation after another and updating the error locator polynomial. If it turns out that it
cannot solve the equation at some step, then it computes the error and weights it by the last
non-zero discriminant found, and delays the weighted result to increase the polynomial degree
by 1. Ref: "Reed Solomon Decoder: TMS320C64x Implementation" by Jagadeesh Sankaran, December 2000, Application Report SPRA686
The best paper I found describing the BM algorithm for errata (errors-and-erasures) evaluator computation is in "Algebraic Codes for Data Transmission", Richard E. Blahut, 2003.
'''
# For errors-and-erasures decoding, see: "Algebraic Codes for Data Transmission", Richard E. Blahut, 2003 and (but it's less complete): Blahut, Richard E. "Transform techniques for error control codes." IBM Journal of Research and development 23.3 (1979): 299-315. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.600&rep=rep1&type=pdf and also a MatLab implementation here: http://www.mathworks.com/matlabcentral/fileexchange/23567-reed-solomon-errors-and-erasures-decoder/content//RS_E_E_DEC.m
# also see: Blahut, Richard E. "A universal Reed-Solomon decoder." IBM Journal of Research and Development 28.2 (1984): 150-158. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.84.2084&rep=rep1&type=pdf
# and another good alternative book with concrete programming examples: Jiang, Yuan. A practical guide to error-control coding using Matlab. Artech House, 2010.
n = self.n
if not k: k = self.k
# Initialize, depending on if we include erasures or not:
if erasures_loc:
sigma = [ Polynomial(erasures_loc.coefficients) ] # copy erasures_loc by creating a new Polynomial, so that we initialize the errata locator polynomial with the erasures locator polynomial.
B = [ Polynomial(erasures_loc.coefficients) ]
omega = [ Polynomial(erasures_eval.coefficients) ] # to compute omega (the evaluator polynomial) at the same time, we also need to initialize it with the partial erasures evaluator polynomial
A = [ Polynomial(erasures_eval.coefficients) ] # TODO: fix the initial value of the evaluator support polynomial, because currently the final omega is not correct (it contains higher order terms that should be removed by the end of BM)
else:
sigma = [ Polynomial([GF2int(1)]) ] # error locator polynomial. Also called Lambda in other notations.
B = [ Polynomial([GF2int(1)]) ] # this is the error locator support/secondary polynomial, which is a funky way to say that it's just a temporary variable that will help us construct sigma, the error locator polynomial
omega = [ Polynomial([GF2int(1)]) ] # error evaluator polynomial. We don't need to initialize it with erasures_loc, it will still work, because Delta is computed using sigma, which itself is correctly initialized with erasures if needed.
A = [ Polynomial([GF2int(0)]) ] # this is the error evaluator support/secondary polynomial, to help us construct omega
L = [ 0 ] # update flag: necessary variable to check when updating is necessary and to check bounds (to avoid wrongly eliminating the higher order terms). For more infos, see https://www.cs.duke.edu/courses/spring11/cps296.3/decoding_rs.pdf
M = [ 0 ] # optional variable to check bounds (so that we do not mistakenly overwrite the higher order terms). This is not necessary, it's only an additional safe check. For more infos, see the presentation decoding_rs.pdf by Andrew Brown in the doc folder.
# Fix the syndrome shifting: when computing the syndrome, some implementations may prepend a 0 coefficient for the lowest degree term (the constant). This is a case of syndrome shifting, and the syndrome will then be bigger than the number of ecc symbols (I don't know what purpose this shifting serves). If that's the case, we need to account for the shifting wherever we use the syndrome, such as inside BM, by skipping those prepended coefficients.
# Another way to detect the shifting is to look for 0 coefficients: by definition, a syndrome does not contain any 0 coefficient (except when there are no errors/erasures, in which case they are all 0). This however doesn't work with the modified Forney syndrome (not used in this lib but it may be implemented in the future), which sets to 0 the coefficients corresponding to erasures, leaving only the coefficients corresponding to errors.
synd_shift = 0
if len(s) > (n-k): synd_shift = len(s) - (n-k)
# Polynomial constants:
ONE = Polynomial(z0=GF2int(1))
ZERO = Polynomial(z0=GF2int(0))
Z = Polynomial(z1=GF2int(1)) # used to shift polynomials, simply multiply your poly * Z to shift
# Precaching
s2 = ONE + s
# Iteratively compute the polynomials n-k-erasures_count times. The last ones will be correct (since the algorithm refines the error/errata locator polynomial iteratively depending on the discrepancy, which is kind of a difference-from-correctness measure).
for l in _range(0, n-k-erasures_count): # skip the first erasures_count iterations because we already computed the partial errata locator polynomial (by initializing with the erasures locator polynomial)
K = erasures_count+l+synd_shift # skip the FIRST erasures_count iterations (not the last iterations, that's very important!)
# Goal for each iteration: Compute sigma[l+1] and omega[l+1] such that
# (1 + s)*sigma[l] == omega[l] in mod z^(K)
# For this particular loop iteration, we have sigma[l] and omega[l],
# and are computing sigma[l+1] and omega[l+1]
# First find Delta, the non-zero coefficient of z^(K) in
# (1 + s) * sigma[l]
# Note that adding 1 to the syndrome s is not really necessary, you can do as well without.
# This delta is valid for l (this iteration) only
Delta = ( s2 * sigma[l] ).get_coefficient(K) # Delta is also known as the Discrepancy, and is always a scalar (not a polynomial).
# Make it a polynomial of degree 0, just for ease of computation with polynomials sigma and omega.
Delta = Polynomial(x0=Delta)
# Can now compute sigma[l+1] and omega[l+1] from
# sigma[l], omega[l], B[l], A[l], and Delta
sigma.append( sigma[l] - Delta * Z * B[l] )
omega.append( omega[l] - Delta * Z * A[l] )
# Now compute the next support polynomials B and A
# There are two ways to do this
# This is based on a messy case analysis on the degrees of the four polynomials sigma, omega, A and B in order to minimize the degrees of A and B. For more infos, see https://www.cs.duke.edu/courses/spring10/cps296.3/decoding_rs_scribe.pdf
# In fact it ensures that the degree of the final polynomials aren't too large.
if Delta == ZERO or 2*L[l] > K+erasures_count \
or (2*L[l] == K+erasures_count and M[l] == 0):
#if Delta == ZERO or len(sigma[l+1]) <= len(sigma[l]): # another way to compute when to update, and it doesn't require to maintain the update flag L
# Rule A
B.append( Z * B[l] )
A.append( Z * A[l] )
L.append( L[l] )
M.append( M[l] )
elif (Delta != ZERO and 2*L[l] < K+erasures_count) \
or (2*L[l] == K+erasures_count and M[l] != 0):
# elif Delta != ZERO and len(sigma[l+1]) > len(sigma[l]): # another way to compute when to update, and it doesn't require to maintain the update flag L
# Rule B
B.append( sigma[l] // Delta )
A.append( omega[l] // Delta )
L.append( K - L[l] ) # the update flag L is tricky: in Blahut's schema, it's mandatory to use `L = K - L - erasures_count` (and indeed in a previous draft of this function, if you forgot to do `- erasures_count` it would lead to correcting only 2*(errors+erasures) <= (n-k) instead of 2*errors+erasures <= (n-k)), but in this latest draft, this will lead to a wrong decoding in some cases where it should correctly decode! Thus you should try with and without `- erasures_count` to update L on your own implementation and see which one works OK without producing wrong decoding failures.
M.append( 1 - M[l] )
else:
raise Exception("Code shouldn't have gotten here")
# Hack to fix the simultaneous computation of omega, the errata evaluator polynomial: because A (the errata evaluator support polynomial) is not correctly initialized (I could not find any info in academic papers). So at the end, we get the correct errata evaluator polynomial omega + some higher order terms that should not be present, but since we know that sigma is always correct and the maximum degree should be the same as omega, we can fix omega by truncating too high order terms.
if omega[-1].degree > sigma[-1].degree: omega[-1] = Polynomial(omega[-1].coefficients[-(sigma[-1].degree+1):])
# Debug lines, uncomment to show the result of every iteration
#print "SIGMA BM"
#for i,x in enumerate(sigma):
#print i, ":", x
# Return the last result of the iterations (since BM computes iteratively, only the last iteration is guaranteed to be correct; it may already be correct earlier, but we can't be sure)
return sigma[-1], omega[-1] | Computes and returns the errata (errors+erasures) locator polynomial (sigma) and the
error evaluator polynomial (omega) at the same time.
If the erasures locator is specified, we will return an errors-and-erasures locator polynomial and an errors-and-erasures evaluator polynomial, else it will compute only errors. With erasures in addition to errors, it can simultaneously decode up to v+2e <= (n-k) where v is the number of erasures and e the number of errors.
Mathematically speaking, this is equivalent to a spectral analysis (see Blahut, "Algebraic Codes for Data Transmission", 2003, chapter 7.6 Decoding in Time Domain).
The parameter s is the syndrome polynomial (syndromes encoded in a
generator function) as returned by _syndromes.
Notes:
The error polynomial:
E(x) = E_0 + E_1 x + ... + E_(n-1) x^(n-1)
j_1, j_2, ..., j_s are the error positions. (There are at most s
errors)
Error location X_i is defined: X_i = α^(j_i)
that is, the power of α (alpha) corresponding to the error location
Error magnitude Y_i is defined: E_(j_i)
that is, the coefficient in the error polynomial at position j_i
Error locator polynomial:
sigma(z) = Product( 1 - X_i * z, i=1..s )
roots are the reciprocals of the error locations
( 1/X_1, 1/X_2, ...)
Error evaluator polynomial omega(z) is here computed at the same time as sigma, but it can also be constructed afterwards using the syndrome and sigma (see _find_error_evaluator() method).
It can be seen that the algorithm tries to iteratively solve for the error locator polynomial by
solving one equation after another and updating the error locator polynomial. If it turns out that it
cannot solve the equation at some step, then it computes the error and weights it by the last
non-zero discriminant found, and delays the weighted result to increase the polynomial degree
by 1. Ref: "Reed Solomon Decoder: TMS320C64x Implementation" by Jagadeesh Sankaran, December 2000, Application Report SPRA686
The best paper I found describing the BM algorithm for errata (errors-and-erasures) evaluator computation is in "Algebraic Codes for Data Transmission", Richard E. Blahut, 2003. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/brownanrs/rs.py#L547-L673 |
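The errata capacity stated in the docstring above (v + 2e <= n-k, with v erasures and e errors) can be sanity-checked with a few lines of plain arithmetic; this is just the bound, not part of the library:

```python
def can_correct(n, k, errors, erasures):
    """Errata capacity implied by the docstring: 2*errors + erasures <= n - k."""
    return 2 * errors + erasures <= n - k

assert can_correct(20, 11, errors=4, erasures=1)      # 2*4 + 1 = 9 <= 9
assert can_correct(20, 11, errors=0, erasures=9)      # erasures alone may use the whole budget
assert not can_correct(20, 11, errors=5, erasures=0)  # 10 > 9: beyond what BM can guarantee
```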
lrq3000/pyFileFixity | pyFileFixity/lib/brownanrs/rs.py | RSCoder._berlekamp_massey_fast | def _berlekamp_massey_fast(self, s, k=None, erasures_loc=None, erasures_eval=None, erasures_count=0):
'''Faster implementation of the errata (errors-and-erasures) Berlekamp-Massey algorithm.
Returns the errata locator polynomial (sigma) and the errata evaluator polynomial (omega).
'''
n = self.n
if not k: k = self.k
# Initialize, depending on if we include erasures or not:
if erasures_loc:
sigma = Polynomial(erasures_loc.coefficients) # copy erasures_loc by creating a new Polynomial, so that we initialize the errata locator polynomial with the erasures locator polynomial.
sigmaprev = Polynomial(sigma.coefficients)
B = Polynomial(sigma.coefficients)
omega = Polynomial(erasures_eval.coefficients) # to compute omega (the evaluator polynomial) at the same time, we also need to initialize it with the partial erasures evaluator polynomial
omegaprev = Polynomial(omega.coefficients)
A = Polynomial(omega.coefficients) # TODO: fix the initial value of the evaluator support polynomial, because currently the final omega is not correct (it contains higher order terms that should be removed by the end of BM)
else:
sigma = Polynomial([GF2int(1)]) # error locator polynomial. Also called Lambda in other notations.
sigmaprev = Polynomial([GF2int(1)]) # we need the previous iteration to compute the next value of the support polynomials
B = Polynomial([GF2int(1)]) # this is the error locator support/secondary polynomial, which is a funky way to say that it's just a temporary variable that will help us construct sigma, the error locator polynomial
omega = Polynomial([GF2int(1)]) # error evaluator polynomial. We don't need to initialize it with erasures_loc, it will still work, because Delta is computed using sigma, which itself is correctly initialized with erasures if needed.
omegaprev = Polynomial([GF2int(1)])
A = Polynomial([GF2int(0)]) # this is the error evaluator support/secondary polynomial, to help us construct omega
L = 0 # update flag: necessary variable to check when updating is necessary and to check bounds (to avoid wrongly eliminating the higher order terms). For more infos, see https://www.cs.duke.edu/courses/spring11/cps296.3/decoding_rs.pdf
#M = 0 # optional variable to check bounds (so that we do not mistakenly overwrite the higher order terms). This is not necessary, it's only an additional safe check. For more infos, see the presentation decoding_rs.pdf by Andrew Brown in the doc folder.
# Fix the syndrome shifting: when computing the syndrome, some implementations may prepend a 0 coefficient for the lowest degree term (the constant). This is a case of syndrome shifting, and the syndrome will then be bigger than the number of ecc symbols (I don't know what purpose this shifting serves). If that's the case, we need to account for the shifting wherever we use the syndrome, such as inside BM, by skipping those prepended coefficients.
# Another way to detect the shifting is to look for 0 coefficients: by definition, a syndrome does not contain any 0 coefficient (except when there are no errors/erasures, in which case they are all 0). This however doesn't work with the modified Forney syndrome (not used in this lib but it may be implemented in the future), which sets to 0 the coefficients corresponding to erasures, leaving only the coefficients corresponding to errors.
synd_shift = 0
if len(s) > (n-k): synd_shift = len(s) - (n-k)
# Polynomial constants:
ONE = Polynomial([GF2int(1)])
ZERO = GF2int(0)
Z = Polynomial([GF2int(1), GF2int(0)]) # used to shift polynomials, simply multiply your poly * Z to shift
# Precaching
s2 = ONE+s
# Iteratively compute the polynomials n-k-erasures_count times. The last ones will be correct (since the algorithm refines the error/errata locator polynomial iteratively depending on the discrepancy, which is kind of a difference-from-correctness measure).
for l in _range(n-k-erasures_count): # skip the first erasures_count iterations because we already computed the partial errata locator polynomial (by initializing with the erasures locator polynomial)
K = erasures_count+l+synd_shift # skip the FIRST erasures_count iterations (not the last iterations, that's very important!)
# Goal for each iteration: Compute sigma[l+1] and omega[l+1] such that
# (1 + s)*sigma[l] == omega[l] in mod z^(K)
# For this particular loop iteration, we have sigma[l] and omega[l],
# and are computing sigma[l+1] and omega[l+1]
# First find Delta, the non-zero coefficient of z^(K) in
# (1 + s) * sigma[l]
# Note that adding 1 to the syndrome s is not really necessary, you can do as well without.
# This delta is valid for l (this iteration) only
Delta = s2.mul_at(sigma, K) # Delta is also known as the Discrepancy, and is always a scalar (not a polynomial). We just need one coefficient at a specific degree, so we can optimize by computing only the polynomial multiplication at this term, and skip the others.
# Can now compute sigma[l+1] and omega[l+1] from
# sigma[l], omega[l], B[l], A[l], and Delta
sigmaprev = sigma
omegaprev = omega
sigma = sigma - (Z * B).scale(Delta)
omega = omega - (Z * A).scale(Delta)
# Now compute the next support polynomials B and A
# There are two ways to do this
# This is based on a messy case analysis on the degrees of the four polynomials sigma, omega, A and B in order to minimize the degrees of A and B. For more infos, see https://www.cs.duke.edu/courses/spring10/cps296.3/decoding_rs_scribe.pdf
# In fact it ensures that the degree of the final polynomials aren't too large.
if Delta == ZERO or 2*L > K+erasures_count:
#or (2*L == K+erasures_count and M == 0):
#if Delta == ZERO or len(sigma) <= len(sigmaprev): # another way to compute when to update, and it doesn't require to maintain the update flag L
# Rule A
B = Z * B
A = Z * A
#L = L
#M = M
else:
#elif (Delta != ZERO and 2*L < K+erasures_count) \
# or (2*L == K+erasures_count and M != 0):
# elif Delta != ZERO and len(sigma) > len(sigmaprev): # another way to compute when to update, and it doesn't require to maintain the update flag L
# Rule B
B = sigmaprev.scale(Delta.inverse())
A = omegaprev.scale(Delta.inverse())
L = K - L # the update flag L is tricky: in Blahut's schema, it's mandatory to use `L = K - L - erasures_count` (and indeed in a previous draft of this function, if you forgot to do `- erasures_count` it would lead to correcting only 2*(errors+erasures) <= (n-k) instead of 2*errors+erasures <= (n-k)), but in this latest draft, this will lead to a wrong decoding in some cases where it should correctly decode! Thus you should try with and without `- erasures_count` to update L on your own implementation and see which one works OK without producing wrong decoding failures.
#M = 1 - M
#else:
# raise Exception("Code shouldn't have gotten here")
# Hack to fix the simultaneous computation of omega, the errata evaluator polynomial: because A (the errata evaluator support polynomial) is not correctly initialized (I could not find any info in academic papers). So at the end, we get the correct errata evaluator polynomial omega + some higher order terms that should not be present, but since we know that sigma is always correct and the maximum degree should be the same as omega, we can fix omega by truncating too high order terms.
if omega.degree > sigma.degree: omega = Polynomial(omega.coefficients[-(sigma.degree+1):])
# Return the last result of the iterations (since BM computes iteratively, only the last iteration is guaranteed to be correct; it may already be correct earlier, but we can't be sure)
return sigma, omega | python | def _berlekamp_massey_fast(self, s, k=None, erasures_loc=None, erasures_eval=None, erasures_count=0):
'''Faster implementation of the errata (errors-and-erasures) Berlekamp-Massey algorithm.
Returns the errata locator polynomial (sigma) and the errata evaluator polynomial (omega).
'''
n = self.n
if not k: k = self.k
# Initialize, depending on if we include erasures or not:
if erasures_loc:
sigma = Polynomial(erasures_loc.coefficients) # copy erasures_loc by creating a new Polynomial, so that we initialize the errata locator polynomial with the erasures locator polynomial.
sigmaprev = Polynomial(sigma.coefficients)
B = Polynomial(sigma.coefficients)
omega = Polynomial(erasures_eval.coefficients) # to compute omega (the evaluator polynomial) at the same time, we also need to initialize it with the partial erasures evaluator polynomial
omegaprev = Polynomial(omega.coefficients)
A = Polynomial(omega.coefficients) # TODO: fix the initial value of the evaluator support polynomial, because currently the final omega is not correct (it contains higher order terms that should be removed by the end of BM)
else:
sigma = sigmaprev = Polynomial([GF2int(1)]) # error locator polynomial. Also called Lambda in other notations.
sigmaprev = Polynomial([GF2int(1)]) # we need the previous iteration to compute the next value of the support polynomials
B = Polynomial([GF2int(1)]) # this is the error locator support/secondary polynomial, which is a funky way to say that it's just a temporary variable that will help us construct sigma, the error locator polynomial
omega = omegaprev = Polynomial([GF2int(1)]) # error evaluator polynomial. We don't need to initialize it with erasures_loc, it will still work, because Delta is computed using sigma, which itself is correctly initialized with erasures if needed.
omegaprev = Polynomial([GF2int(1)])
A = Polynomial([GF2int(0)]) # this is the error evaluator support/secondary polynomial, to help us construct omega
L = 0 # update flag: necessary variable to check when updating is necessary and to check bounds (to avoid wrongly eliminating the higher order terms). For more infos, see https://www.cs.duke.edu/courses/spring11/cps296.3/decoding_rs.pdf
#M = 0 # optional variable to check bounds (so that we do not mistakenly overwrite the higher order terms). This is not necessary, it's only an additional safe check. For more infos, see the presentation decoding_rs.pdf by Andrew Brown in the doc folder.
# Fix the syndrome shifting: when computing the syndrome, some implementations may prepend a 0 coefficient for the lowest degree term (the constant). This is a case of syndrome shifting, thus the syndrome will be bigger than the number of ecc symbols (I don't know what purpose this shifting serves). If that's the case, then we need to account for the syndrome shifting when we use the syndrome, such as inside BM, by skipping those prepended coefficients.
# Another way to detect the shifting is to detect the 0 coefficients: by definition, a syndrome does not contain any 0 coefficient (except if there are no errors/erasures, in which case they are all 0). This however doesn't work with the modified Forney syndrome (which we do not use in this lib but may implement in the future), which sets the coefficients corresponding to erasures to 0, leaving only the coefficients corresponding to errors.
synd_shift = 0
if len(s) > (n-k): synd_shift = len(s) - (n-k)
# Polynomial constants:
ONE = Polynomial([GF2int(1)])
ZERO = GF2int(0)
Z = Polynomial([GF2int(1), GF2int(0)]) # used to shift polynomials, simply multiply your poly * Z to shift
# Precaching
s2 = ONE+s
# Iteratively compute the polynomials n-k-erasures_count times. The last ones will be correct (since the algorithm refines the error/errata locator polynomial iteratively depending on the discrepancy, which is kind of a difference-from-correctness measure).
for l in _range(n-k-erasures_count): # skip the first erasures_count iterations because we already computed the partial errata locator polynomial (by initializing with the erasures locator polynomial)
K = erasures_count+l+synd_shift # skip the FIRST erasures_count iterations (not the last iterations, that's very important!)
# Goal for each iteration: Compute sigma[l+1] and omega[l+1] such that
# (1 + s)*sigma[l] == omega[l] in mod z^(K)
# For this particular loop iteration, we have sigma[l] and omega[l],
# and are computing sigma[l+1] and omega[l+1]
# First find Delta, the non-zero coefficient of z^(K) in
# (1 + s) * sigma[l]
# Note that adding 1 to the syndrome s is not really necessary, you can do as well without.
# This delta is valid for l (this iteration) only
Delta = s2.mul_at(sigma, K) # Delta is also known as the Discrepancy, and is always a scalar (not a polynomial). We just need one coefficient at a specific degree, so we can optimize by computing only the polynomial multiplication at this term, and skip the others.
# Can now compute sigma[l+1] and omega[l+1] from
# sigma[l], omega[l], B[l], A[l], and Delta
sigmaprev = sigma
omegaprev = omega
sigma = sigma - (Z * B).scale(Delta)
omega = omega - (Z * A).scale(Delta)
# Now compute the next support polynomials B and A
# There are two ways to do this
# This is based on a messy case analysis on the degrees of the four polynomials sigma, omega, A and B in order to minimize the degrees of A and B. For more infos, see https://www.cs.duke.edu/courses/spring10/cps296.3/decoding_rs_scribe.pdf
# In fact it ensures that the degree of the final polynomials aren't too large.
if Delta == ZERO or 2*L > K+erasures_count:
#or (2*L == K+erasures_count and M == 0):
#if Delta == ZERO or len(sigma) <= len(sigmaprev): # another way to compute when to update, and it doesn't require to maintain the update flag L
# Rule A
B = Z * B
A = Z * A
#L = L
#M = M
else:
#elif (Delta != ZERO and 2*L < K+erasures_count) \
# or (2*L == K+erasures_count and M != 0):
# elif Delta != ZERO and len(sigma) > len(sigmaprev): # another way to compute when to update, and it doesn't require to maintain the update flag L
# Rule B
B = sigmaprev.scale(Delta.inverse())
A = omegaprev.scale(Delta.inverse())
L = K - L # the update flag L is tricky: in Blahut's schema, it's mandatory to use `L = K - L - erasures_count` (and indeed in a previous draft of this function, if you forgot to do `- erasures_count` it would lead to correcting only 2*(errors+erasures) <= (n-k) instead of 2*errors+erasures <= (n-k)), but in this latest draft, this will lead to a wrong decoding in some cases where it should correctly decode! Thus you should try with and without `- erasures_count` to update L on your own implementation and see which one works OK without producing wrong decoding failures.
#M = 1 - M
#else:
# raise Exception("Code shouldn't have gotten here")
# Hack to fix the simultaneous computation of omega, the errata evaluator polynomial: because A (the errata evaluator support polynomial) is not correctly initialized (I could not find any info in academic papers). So at the end, we get the correct errata evaluator polynomial omega + some higher order terms that should not be present, but since we know that sigma is always correct and the maximum degree should be the same as omega, we can fix omega by truncating too high order terms.
if omega.degree > sigma.degree: omega = Polynomial(omega.coefficients[-(sigma.degree+1):])
# Return the result of the last iteration (since BM computes iteratively, only the last iteration is guaranteed to be correct - an earlier one may already be, but we can't be sure)
return sigma, omega | Faster implementation of errata (errors-and-erasures) Berlekamp-Massey.
Returns the error locator polynomial (sigma) and the
error evaluator polynomial (omega) with a faster implementation. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/brownanrs/rs.py#L675-L767 |
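The discrepancy/update structure described in the comments above (compute Delta, then either shift the support polynomial or swap it in and rescale) is the same scheme the textbook Berlekamp-Massey algorithm uses to find the shortest LFSR generating a binary sequence. Below is a minimal illustrative sketch of that plain GF(2) version, kept separate from this library's GF(2^8) Polynomial/GF2int classes; it handles neither erasures nor omega, and the function name and "Rule A"/"Rule B" labels are only for illustration.

def berlekamp_massey_gf2(s):
    # s: list of 0/1 bits. Returns (L, C) where C are the connection
    # polynomial coefficients (C[0] == 1) of the shortest LFSR generating s.
    n = len(s)
    C = [1] + [0] * n      # current connection polynomial (plays the role of sigma)
    B = [1] + [0] * n      # copy of C from before the last length change (support polynomial)
    L, m = 0, 1            # L = current LFSR length, m = steps since the last length change
    for i in range(n):
        # Delta: what the current LFSR predicts for s[i], XORed with the actual bit
        d = s[i]
        for j in range(1, L + 1):
            d ^= C[j] & s[i - j]
        if d == 0:                 # no discrepancy: keep everything ("Rule A" flavour)
            m += 1
        elif 2 * L <= i:           # discrepancy and the register is too short ("Rule B" flavour)
            T = C[:]
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            L, B, m = i + 1 - L, T, 1
        else:                      # discrepancy but the current length already suffices
            for j in range(n + 1 - m):
                C[j + m] ^= B[j]
            m += 1
    return L, C[:L + 1]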
lrq3000/pyFileFixity | pyFileFixity/lib/brownanrs/rs.py | RSCoder._find_error_evaluator | def _find_error_evaluator(self, synd, sigma, k=None):
'''Compute the error (or erasures if you supply sigma=erasures locator polynomial) evaluator polynomial Omega from the syndrome and the error/erasures/errata locator Sigma. Omega is already computed at the same time as Sigma inside the Berlekamp-Massey implemented above, but in case you modify Sigma, you can recompute Omega afterwards using this method, or just ensure that Omega computed by BM is correct given Sigma (as long as syndrome and sigma are correct, omega will be correct).'''
n = self.n
if not k: k = self.k
# Omega(x) = [ (1 + Synd(x)) * Error_loc(x) ] mod x^(n-k+1)
# NOTE: I don't know why we do 1+Synd(x) here; from the docs it seems just Synd(x) is enough (and in practice, if you remove the "ONE +", it will still decode correctly), as advised by Blahut in Algebraic Codes for Data Transmission, but it seems to be an implementation detail here.
#ONE = Polynomial([GF2int(1)])
#return ((ONE + synd) * sigma) % Polynomial([GF2int(1)] + [GF2int(0)] * (n-k+1)) # NOT CORRECT: in practice it works flawlessly with this implementation (primitive polynomial = 3), but if you use another primitive like in reedsolo lib, it doesn't work! Thus, I guess that adding ONE is not correct for the general case.
return (synd * sigma) % Polynomial([GF2int(1)] + [GF2int(0)] * (n-k+1)) | python | def _find_error_evaluator(self, synd, sigma, k=None):
'''Compute the error (or erasures if you supply sigma=erasures locator polynomial) evaluator polynomial Omega from the syndrome and the error/erasures/errata locator Sigma. Omega is already computed at the same time as Sigma inside the Berlekamp-Massey implemented above, but in case you modify Sigma, you can recompute Omega afterwards using this method, or just ensure that Omega computed by BM is correct given Sigma (as long as syndrome and sigma are correct, omega will be correct).'''
n = self.n
if not k: k = self.k
# Omega(x) = [ (1 + Synd(x)) * Error_loc(x) ] mod x^(n-k+1)
# NOTE: I don't know why we do 1+Synd(x) here; from the docs it seems just Synd(x) is enough (and in practice, if you remove the "ONE +", it will still decode correctly), as advised by Blahut in Algebraic Codes for Data Transmission, but it seems to be an implementation detail here.
#ONE = Polynomial([GF2int(1)])
#return ((ONE + synd) * sigma) % Polynomial([GF2int(1)] + [GF2int(0)] * (n-k+1)) # NOT CORRECT: in practice it works flawlessly with this implementation (primitive polynomial = 3), but if you use another primitive like in reedsolo lib, it doesn't work! Thus, I guess that adding ONE is not correct for the general case.
return (synd * sigma) % Polynomial([GF2int(1)] + [GF2int(0)] * (n-k+1)) | Compute the error (or erasures if you supply sigma=erasures locator polynomial) evaluator polynomial Omega from the syndrome and the error/erasures/errata locator Sigma. Omega is already computed at the same time as Sigma inside the Berlekamp-Massey implemented above, but in case you modify Sigma, you can recompute Omega afterwards using this method, or just ensure that Omega computed by BM is correct given Sigma (as long as syndrome and sigma are correct, omega will be correct). | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/brownanrs/rs.py#L769-L778 |
lrq3000/pyFileFixity | pyFileFixity/lib/brownanrs/rs.py | RSCoder._find_error_evaluator_fast | def _find_error_evaluator_fast(self, synd, sigma, k=None):
'''Compute the error (or erasures if you supply sigma=erasures locator polynomial) evaluator polynomial Omega from the syndrome and the error/erasures/errata locator Sigma. Omega is already computed at the same time as Sigma inside the Berlekamp-Massey implemented above, but in case you modify Sigma, you can recompute Omega afterwards using this method, or just ensure that Omega computed by BM is correct given Sigma (as long as syndrome and sigma are correct, omega will be correct).'''
n = self.n
if not k: k = self.k
# Omega(x) = [ Synd(x) * Error_loc(x) ] mod x^(n-k+1) -- From Blahut, Algebraic codes for data transmission, 2003
return (synd * sigma)._gffastmod(Polynomial([GF2int(1)] + [GF2int(0)] * (n-k+1))) | python | def _find_error_evaluator_fast(self, synd, sigma, k=None):
'''Compute the error (or erasures if you supply sigma=erasures locator polynomial) evaluator polynomial Omega from the syndrome and the error/erasures/errata locator Sigma. Omega is already computed at the same time as Sigma inside the Berlekamp-Massey implemented above, but in case you modify Sigma, you can recompute Omega afterwards using this method, or just ensure that Omega computed by BM is correct given Sigma (as long as syndrome and sigma are correct, omega will be correct).'''
n = self.n
if not k: k = self.k
# Omega(x) = [ Synd(x) * Error_loc(x) ] mod x^(n-k+1) -- From Blahut, Algebraic codes for data transmission, 2003
return (synd * sigma)._gffastmod(Polynomial([GF2int(1)] + [GF2int(0)] * (n-k+1))) | Compute the error (or erasures if you supply sigma=erasures locator polynomial) evaluator polynomial Omega from the syndrome and the error/erasures/errata locator Sigma. Omega is already computed at the same time as Sigma inside the Berlekamp-Massey implemented above, but in case you modify Sigma, you can recompute Omega afterwards using this method, or just ensure that Omega computed by BM is correct given Sigma (as long as syndrome and sigma are correct, omega will be correct). | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/brownanrs/rs.py#L780-L786 |
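Both evaluator routines reduce the product Synd(x) * Sigma(x) modulo x^(n-k+1), i.e. only the n-k+1 lowest-order coefficients of the product are ever needed. A small illustrative sketch of that truncated product over plain integer coefficients (lowest degree first; this library instead stores coefficients highest degree first and uses GF(2^8) arithmetic):

def poly_mul_mod_xt(a, b, t):
    # a, b: coefficient lists, lowest degree first. Returns (a*b) mod x^t by
    # never computing any coefficient of degree >= t in the first place.
    out = [0] * t
    for i, ai in enumerate(a[:t]):
        for j, bj in enumerate(b[:t - i]):
            out[i + j] += ai * bj
    return out

# e.g. (1 + x) * (1 + x + x^2) mod x^3  ->  [1, 2, 2], i.e. 1 + 2x + 2x^2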
lrq3000/pyFileFixity | pyFileFixity/lib/brownanrs/rs.py | RSCoder._chien_search | def _chien_search(self, sigma):
'''Recall the definition of sigma, it has s roots. To find them, this
function evaluates sigma at all 2^(c_exp-1) (ie: 255 for GF(2^8)) non-zero points to find the roots
The inverse of the roots are X_i, the error locations
Returns a list X of error locations, and a corresponding list j of
error positions (the discrete log of the corresponding X value) The
lists are up to s elements large.
This is essentially an inverse Fourier transform.
Important technical math note: This implementation is not actually
Chien's search. Chien's search is a way to evaluate the polynomial
such that each evaluation only takes constant time. This here simply
does 255 evaluations straight up, which is much less efficient.
Said differently, we simply do a bruteforce search by trial substitution to find the zeros of this polynomial, which identifies the error locations.
'''
# TODO: find a more efficient algorithm, this is the slowest part of the whole decoding process (~2.5 ms, while any other part is only ~400microsec). Could try the Pruned FFT from "Simple Algorithms for BCH Decoding", by Jonathan Hong and Martin Vetterli, IEEE Transactions on Communications, Vol.43, No.8, August 1995
X = []
j = []
p = GF2int(self.generator)
# Try for each possible location
for l in _range(1, self.gf2_charac+1): # range 1:256 is important: if you use range 0:255, if the last byte of the ecc symbols is corrupted, it won't be correctable! You need to use the range 1,256 to include this last byte.
#l = (i+self.fcr)
# These evaluations could be more efficient, but oh well
if sigma.evaluate( p**l ) == 0: # If it's 0, then bingo! It's an error location
# Compute the error location polynomial X (will be directly used to compute the errors magnitudes inside the Forney algorithm)
X.append( p**(-l) )
# Compute the coefficient position (not the error position, it's actually the reverse: we compute the degree of the term where the error is located. To get the error position, just compute n-1-j).
# This is different than the notes, I think the notes were in error
# Notes said j values were just l, when it's actually 255-l
j.append(self.gf2_charac - l)
# Sanity check: the number of errors/errata positions found should be exactly the same as the length of the errata locator polynomial
errs_nb = len(sigma) - 1 # compute the exact number of errors/errata that this error locator should find
if len(j) != errs_nb:
raise RSCodecError("Too many (or few) errors found by Chien Search for the errata locator polynomial!")
return X, j | python | def _chien_search(self, sigma):
'''Recall the definition of sigma, it has s roots. To find them, this
function evaluates sigma at all 2^(c_exp-1) (ie: 255 for GF(2^8)) non-zero points to find the roots
The inverse of the roots are X_i, the error locations
Returns a list X of error locations, and a corresponding list j of
error positions (the discrete log of the corresponding X value) The
lists are up to s elements large.
This is essentially an inverse Fourier transform.
Important technical math note: This implementation is not actually
Chien's search. Chien's search is a way to evaluate the polynomial
such that each evaluation only takes constant time. This here simply
does 255 evaluations straight up, which is much less efficient.
Said differently, we simply do a bruteforce search by trial substitution to find the zeros of this polynomial, which identifies the error locations.
'''
# TODO: find a more efficient algorithm, this is the slowest part of the whole decoding process (~2.5 ms, while any other part is only ~400microsec). Could try the Pruned FFT from "Simple Algorithms for BCH Decoding", by Jonathan Hong and Martin Vetterli, IEEE Transactions on Communications, Vol.43, No.8, August 1995
X = []
j = []
p = GF2int(self.generator)
# Try for each possible location
for l in _range(1, self.gf2_charac+1): # range 1:256 is important: if you use range 0:255, if the last byte of the ecc symbols is corrupted, it won't be correctable! You need to use the range 1,256 to include this last byte.
#l = (i+self.fcr)
# These evaluations could be more efficient, but oh well
if sigma.evaluate( p**l ) == 0: # If it's 0, then bingo! It's an error location
# Compute the error location polynomial X (will be directly used to compute the errors magnitudes inside the Forney algorithm)
X.append( p**(-l) )
# Compute the coefficient position (not the error position, it's actually the reverse: we compute the degree of the term where the error is located. To get the error position, just compute n-1-j).
# This is different than the notes, I think the notes were in error
# Notes said j values were just l, when it's actually 255-l
j.append(self.gf2_charac - l)
# Sanity check: the number of errors/errata positions found should be exactly the same as the length of the errata locator polynomial
errs_nb = len(sigma) - 1 # compute the exact number of errors/errata that this error locator should find
if len(j) != errs_nb:
raise RSCodecError("Too many (or few) errors found by Chien Search for the errata locator polynomial!")
return X, j | Recall the definition of sigma, it has s roots. To find them, this
function evaluates sigma at all 2^(c_exp-1) (ie: 255 for GF(2^8)) non-zero points to find the roots
The inverse of the roots are X_i, the error locations
Returns a list X of error locations, and a corresponding list j of
error positions (the discrete log of the corresponding X value) The
lists are up to s elements large.
This is essentially an inverse Fourier transform.
Important technical math note: This implementation is not actually
Chien's search. Chien's search is a way to evaluate the polynomial
such that each evaluation only takes constant time. This here simply
does 255 evaluations straight up, which is much less efficient.
Said differently, we simply do a bruteforce search by trial substitution to find the zeros of this polynomial, which identifies the error locations. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/brownanrs/rs.py#L788-L826 |
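The trial-substitution idea described in the docstring, stripped of the GF(2^8) machinery, amounts to evaluating the locator polynomial at every non-zero field element and keeping the roots. A hedged stand-alone sketch over a small prime field GF(p), using plain modular arithmetic and Horner evaluation (nothing from this library):

def find_roots_by_trial(coeffs, p):
    # coeffs: polynomial coefficients over GF(p), lowest degree first.
    # Returns every non-zero x in GF(p) with poly(x) == 0, found by brute force.
    roots = []
    for x in range(1, p):
        acc = 0
        for c in reversed(coeffs):   # Horner's scheme: acc = acc*x + c
            acc = (acc * x + c) % p
        if acc == 0:
            roots.append(x)
    return roots

# e.g. x^2 - 3x + 2 over GF(7): find_roots_by_trial([2, -3, 1], 7) -> [1, 2]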
lrq3000/pyFileFixity | pyFileFixity/lib/brownanrs/rs.py | RSCoder._chien_search_fast | def _chien_search_fast(self, sigma):
'''Real chien search, we reuse the previous polynomial evaluation and just multiply by a constant polynomial. This should be faster, but it seems it's just the same speed as the other bruteforce version. However, it should easily be parallelizable.'''
# TODO: doesn't work when fcr is different than 1 (X values are incorrectly "shifted"...)
# TODO: try to mix this approach with the optimized walk on only interesting values, implemented in _chien_search_faster()
X = []
j = []
p = GF2int(self.generator)
if not hasattr(self, 'const_poly'): self.const_poly = [GF2int(self.generator)**(i+self.fcr) for i in _range(self.gf2_charac, -1, -1)] # constant polynomial that will allow us to update the previous polynomial evaluation to get the next one
const_poly = self.const_poly # caching for more efficiency since it never changes
ev_poly, ev = sigma.evaluate_array( p**1 ) # compute the first polynomial evaluation
# Try for each possible location
for l in _range(1, self.gf2_charac+1): # range 1:256 is important: if you use range 0:255, if the last byte of the ecc symbols is corrupted, it won't be correctable! You need to use the range 1,256 to include this last byte.
#l = (i+self.fcr)
# Check if it's a root for the polynomial
if ev == 0: # If it's 0, then bingo! It's an error location
# Compute the error location polynomial X (will be directly used to compute the errors magnitudes inside the Forney algorithm)
X.append( p**(-l) )
# Compute the coefficient position (not the error position, it's actually the reverse: we compute the degree of the term where the error is located. To get the error position, just compute n-1-j).
# This is different than the notes, I think the notes were in error
# Notes said j values were just l, when it's actually 255-l
j.append(self.gf2_charac - l)
# Update the polynomial evaluation for the next iteration
# we simply multiply each term[k] with alpha^k (where here alpha = p = GF2int(generator)).
# For more info, see the presentation by Andrew Brown, or this one: http://web.ntpu.edu.tw/~yshan/BCH_decoding.pdf
# TODO: parallelize this loop
for i in _range(1, len(ev_poly)+1): # TODO: maybe the fcr != 1 fix should be put here?
ev_poly[-i] *= const_poly[-i]
# Compute the new evaluation by just summing
ev = sum(ev_poly)
return X, j | python | def _chien_search_fast(self, sigma):
'''Real chien search, we reuse the previous polynomial evaluation and just multiply by a constant polynomial. This should be faster, but it seems it's just the same speed as the other bruteforce version. However, it should easily be parallelizable.'''
# TODO: doesn't work when fcr is different than 1 (X values are incorrectly "shifted"...)
# TODO: try to mix this approach with the optimized walk on only interesting values, implemented in _chien_search_faster()
X = []
j = []
p = GF2int(self.generator)
if not hasattr(self, 'const_poly'): self.const_poly = [GF2int(self.generator)**(i+self.fcr) for i in _range(self.gf2_charac, -1, -1)] # constant polynomial that will allow us to update the previous polynomial evaluation to get the next one
const_poly = self.const_poly # caching for more efficiency since it never changes
ev_poly, ev = sigma.evaluate_array( p**1 ) # compute the first polynomial evaluation
# Try for each possible location
for l in _range(1, self.gf2_charac+1): # range 1:256 is important: if you use range 0:255, if the last byte of the ecc symbols is corrupted, it won't be correctable! You need to use the range 1,256 to include this last byte.
#l = (i+self.fcr)
# Check if it's a root for the polynomial
if ev == 0: # If it's 0, then bingo! It's an error location
# Compute the error location polynomial X (will be directly used to compute the errors magnitudes inside the Forney algorithm)
X.append( p**(-l) )
# Compute the coefficient position (not the error position, it's actually the reverse: we compute the degree of the term where the error is located. To get the error position, just compute n-1-j).
# This is different than the notes, I think the notes were in error
# Notes said j values were just l, when it's actually 255-l
j.append(self.gf2_charac - l)
# Update the polynomial evaluation for the next iteration
# we simply multiply each term[k] with alpha^k (where here alpha = p = GF2int(generator)).
# For more info, see the presentation by Andrew Brown, or this one: http://web.ntpu.edu.tw/~yshan/BCH_decoding.pdf
# TODO: parallelize this loop
for i in _range(1, len(ev_poly)+1): # TODO: maybe the fcr != 1 fix should be put here?
ev_poly[-i] *= const_poly[-i]
# Compute the new evaluation by just summing
ev = sum(ev_poly)
return X, j | Real chien search, we reuse the previous polynomial evaluation and just multiply by a constant polynomial. This should be faster, but it seems it's just the same speed as the other bruteforce version. However, it should easily be parallelizable. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/brownanrs/rs.py#L828-L860 |
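The incremental trick above (keep one running value per term and multiply term i by alpha^i between evaluations, so each step costs one cheap update pass instead of a fresh evaluation) can be sketched with ordinary modular arithmetic over a prime field; alpha and p below are illustrative stand-ins for this library's generator and field size, not its actual values.

def chien_like_scan(coeffs, alpha, p):
    # coeffs: c_0..c_d over GF(p), lowest degree first. Reports every l with
    # poly(alpha^l) == 0 for l = 1..p-1, updating per-term values incrementally.
    terms = [(c * pow(alpha, i, p)) % p for i, c in enumerate(coeffs)]  # term values at x = alpha^1
    step = [pow(alpha, i, p) for i in range(len(coeffs))]               # per-term multipliers
    hits = []
    for l in range(1, p):
        if sum(terms) % p == 0:
            hits.append(l)                                   # alpha^l is a root
        terms = [(t * s) % p for t, s in zip(terms, step)]   # move on to x = alpha^(l+1)
    return hits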
lrq3000/pyFileFixity | pyFileFixity/lib/brownanrs/rs.py | RSCoder._chien_search_faster | def _chien_search_faster(self, sigma):
'''Faster chien search, processing only useful coefficients (the ones in the messages) instead of the whole 2^8 range.
Besides the speed boost, this also allows us to fix a number of issues: correctly decoding when the last ecc byte is corrupted, and accepting messages of length n > 2^8.'''
n = self.n
X = []
j = []
p = GF2int(self.generator)
# Normally we should try all 2^8 possible values, but here we optimize to just check the interesting symbols
# This also allows to accept messages where n > 2^8.
for l in _range(n):
#l = (i+self.fcr)
# These evaluations could be more efficient, but oh well
if sigma.evaluate( p**(-l) ) == 0: # If it's 0, then bingo! It's an error location
# Compute the error location polynomial X (will be directly used to compute the errors magnitudes inside the Forney algorithm)
X.append( p**l )
# Compute the coefficient position (not the error position, it's actually the reverse: we compute the degree of the term where the error is located. To get the error position, just compute n-1-j).
# This is different than the notes, I think the notes were in error
# Notes said j values were just l, when it's actually 255-l
j.append(l)
# Sanity check: the number of errors/errata positions found should be exactly the same as the length of the errata locator polynomial
errs_nb = len(sigma) - 1 # compute the exact number of errors/errata that this error locator should find
if len(j) != errs_nb:
# Note: decoding messages+ecc with length n > self.gf2_charac does work partially, but it's wrong, because you will get duplicated values, and then Chien Search cannot discriminate which root is correct and which is not. The duplication of values is normally prevented by the prime polynomial reduction when generating the field (see init_lut() in ff.py), but if you overflow the field, you have no guarantee anymore. We may try to use a bruteforce approach: the correct positions ARE in the final array j, but the problem is because we are above the Galois Field's range, there is a wraparound because of overflow so that for example if j should be [0, 1, 2, 3], we will also get [255, 256, 257, 258] (because 258 % 255 == 3, same for the other values), so we can't discriminate. The issue with that bruteforce approach is that fixing any errs_nb errors among those will always give a correct output message (in the sense that the syndrome will be all 0), so we may not even be able to check if that's correct or not, so there's clearly no way to decode a message of greater length than the field.
raise RSCodecError("Too many (or few) errors found by Chien Search for the errata locator polynomial!")
return X, j | python | def _chien_search_faster(self, sigma):
'''Faster chien search, processing only useful coefficients (the ones in the messages) instead of the whole 2^8 range.
Besides the speed boost, this also allows us to fix a number of issues: correctly decoding when the last ecc byte is corrupted, and accepting messages of length n > 2^8.'''
n = self.n
X = []
j = []
p = GF2int(self.generator)
# Normally we should try all 2^8 possible values, but here we optimize to just check the interesting symbols
# This also allows to accept messages where n > 2^8.
for l in _range(n):
#l = (i+self.fcr)
# These evaluations could be more efficient, but oh well
if sigma.evaluate( p**(-l) ) == 0: # If it's 0, then bingo! It's an error location
# Compute the error location polynomial X (will be directly used to compute the errors magnitudes inside the Forney algorithm)
X.append( p**l )
# Compute the coefficient position (not the error position, it's actually the reverse: we compute the degree of the term where the error is located. To get the error position, just compute n-1-j).
# This is different than the notes, I think the notes were in error
# Notes said j values were just l, when it's actually 255-l
j.append(l)
# Sanity check: the number of errors/errata positions found should be exactly the same as the length of the errata locator polynomial
errs_nb = len(sigma) - 1 # compute the exact number of errors/errata that this error locator should find
if len(j) != errs_nb:
# Note: decoding messages+ecc with length n > self.gf2_charac does work partially, but it's wrong, because you will get duplicated values, and then Chien Search cannot discriminate which root is correct and which is not. The duplication of values is normally prevented by the prime polynomial reduction when generating the field (see init_lut() in ff.py), but if you overflow the field, you have no guarantee anymore. We may try to use a bruteforce approach: the correct positions ARE in the final array j, but the problem is because we are above the Galois Field's range, there is a wraparound because of overflow so that for example if j should be [0, 1, 2, 3], we will also get [255, 256, 257, 258] (because 258 % 255 == 3, same for the other values), so we can't discriminate. The issue with that bruteforce approach is that fixing any errs_nb errors among those will always give a correct output message (in the sense that the syndrome will be all 0), so we may not even be able to check if that's correct or not, so there's clearly no way to decode a message of greater length than the field.
raise RSCodecError("Too many (or few) errors found by Chien Search for the errata locator polynomial!")
return X, j | Faster chien search, processing only useful coefficients (the ones in the messages) instead of the whole 2^8 range.
Besides the speed boost, this also allows us to fix a number of issues: correctly decoding when the last ecc byte is corrupted, and accepting messages of length n > 2^8. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/brownanrs/rs.py#L862-L888 |
lrq3000/pyFileFixity | pyFileFixity/lib/brownanrs/rs.py | RSCoder._old_forney | def _old_forney(self, omega, X, k=None):
'''Computes the error magnitudes (only works with errors or erasures under t = floor((n-k)/2), not with erasures above (n-k)//2)'''
# XXX Is floor division okay here? Should this be ceiling?
if not k: k = self.k
t = (self.n - k) // 2
Y = []
for l, Xl in enumerate(X):
# Compute the sequence product and multiply its inverse in
prod = GF2int(1) # just to init the product (1 is the neutral term for multiplication)
Xl_inv = Xl.inverse()
for ji in _range(t): # do not change to _range(len(X)) as can be seen in some papers, it won't give the correct result! (sometimes yes, but not always)
if ji == l:
continue
if ji < len(X):
Xj = X[ji]
else: # if above the maximum degree of the polynomial, then all coefficients above are just 0 (that's logical...)
Xj = GF2int(0)
prod = prod * (Xl - Xj)
#if (ji != l):
# prod = prod * (GF2int(1) - X[ji]*(Xl.inverse()))
# Compute Yl
Yl = Xl**t * omega.evaluate(Xl_inv) * Xl_inv * prod.inverse()
Y.append(Yl)
return Y | python | def _old_forney(self, omega, X, k=None):
'''Computes the error magnitudes (only works with errors or erasures under t = floor((n-k)/2), not with erasures above (n-k)//2)'''
# XXX Is floor division okay here? Should this be ceiling?
if not k: k = self.k
t = (self.n - k) // 2
Y = []
for l, Xl in enumerate(X):
# Compute the sequence product and multiply its inverse in
prod = GF2int(1) # just to init the product (1 is the neutral term for multiplication)
Xl_inv = Xl.inverse()
for ji in _range(t): # do not change to _range(len(X)) as can be seen in some papers, it won't give the correct result! (sometimes yes, but not always)
if ji == l:
continue
if ji < len(X):
Xj = X[ji]
else: # if above the maximum degree of the polynomial, then all coefficients above are just 0 (that's logical...)
Xj = GF2int(0)
prod = prod * (Xl - Xj)
#if (ji != l):
# prod = prod * (GF2int(1) - X[ji]*(Xl.inverse()))
# Compute Yl
Yl = Xl**t * omega.evaluate(Xl_inv) * Xl_inv * prod.inverse()
Y.append(Yl)
return Y | Computes the error magnitudes (only works with errors or erasures under t = floor((n-k)/2), not with erasures above (n-k)//2) | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/brownanrs/rs.py#L890-L918 |
lrq3000/pyFileFixity | pyFileFixity/lib/brownanrs/rs.py | RSCoder._forney | def _forney(self, omega, X):
'''Computes the error magnitudes. Works also with erasures and errors+erasures beyond the (n-k)//2 bound, here the bound is 2*e+v <= (n-k-1) with e the number of errors and v the number of erasures.'''
# XXX Is floor division okay here? Should this be ceiling?
Y = [] # the final result, the error/erasures polynomial (contain the values that we should minus on the received message to get the repaired message)
Xlength = len(X)
for l, Xl in enumerate(X):
Xl_inv = Xl.inverse()
# Compute the formal derivative of the error locator polynomial (see Blahut, Algebraic codes for data transmission, pp 196-197).
# the formal derivative of the errata locator is used as the denominator of the Forney Algorithm, which simply says that the ith error value is given by error_evaluator(gf_inverse(Xi)) / error_locator_derivative(gf_inverse(Xi)). See Blahut, Algebraic codes for data transmission, pp 196-197.
sigma_prime_tmp = [1 - Xl_inv * X[j] for j in _range(Xlength) if j != l] # TODO? maybe a faster way would be to precompute sigma_prime = sigma[len(sigma) & 1:len(sigma):2] and then just do sigma_prime.evaluate(X[j]) ? (like in reedsolo.py)
# compute the product
sigma_prime = 1
for coef in sigma_prime_tmp:
sigma_prime = sigma_prime * coef
# equivalent to: sigma_prime = functools.reduce(mul, sigma_prime, 1)
# Compute Yl
# This is a more faithful translation of the theoretical equation contrary to the old forney method. Here it is exactly copy/pasted from the included presentation decoding_rs.pdf: Yl = omega(Xl.inverse()) / prod(1 - Xj*Xl.inverse()) for j in len(X) (in the paper it's for j in s, but it's useless when len(X) < s because we compute neutral terms 1 for nothing, and wrong when correcting more than s erasures or erasures+errors since it prevents computing all required terms).
# Thus here this method works with erasures too because firstly we fixed the equation to be like the theoretical one (don't know why it was modified in _old_forney(), if it's an optimization, it doesn't enhance anything), and secondly because we removed the product bound on s, which prevented computing errors and erasures above the s=(n-k)//2 bound.
# The best resource I have found for the correct equation is https://en.wikipedia.org/wiki/Forney_algorithm -- note that in the article, fcr is defined as c.
Yl = - (Xl**(1-self.fcr) * omega.evaluate(Xl_inv) / sigma_prime) # sigma_prime is the denominator of the Forney algorithm
Y.append(Yl)
return Y | python | def _forney(self, omega, X):
'''Computes the error magnitudes. Works also with erasures and errors+erasures beyond the (n-k)//2 bound, here the bound is 2*e+v <= (n-k-1) with e the number of errors and v the number of erasures.'''
# XXX Is floor division okay here? Should this be ceiling?
Y = [] # the final result, the error/erasures polynomial (contain the values that we should minus on the received message to get the repaired message)
Xlength = len(X)
for l, Xl in enumerate(X):
Xl_inv = Xl.inverse()
# Compute the formal derivative of the error locator polynomial (see Blahut, Algebraic codes for data transmission, pp 196-197).
# the formal derivative of the errata locator is used as the denominator of the Forney Algorithm, which simply says that the ith error value is given by error_evaluator(gf_inverse(Xi)) / error_locator_derivative(gf_inverse(Xi)). See Blahut, Algebraic codes for data transmission, pp 196-197.
sigma_prime_tmp = [1 - Xl_inv * X[j] for j in _range(Xlength) if j != l] # TODO? maybe a faster way would be to precompute sigma_prime = sigma[len(sigma) & 1:len(sigma):2] and then just do sigma_prime.evaluate(X[j]) ? (like in reedsolo.py)
# compute the product
sigma_prime = 1
for coef in sigma_prime_tmp:
sigma_prime = sigma_prime * coef
# equivalent to: sigma_prime = functools.reduce(mul, sigma_prime, 1)
# Compute Yl
# This is a more faithful translation of the theoretical equation contrary to the old forney method. Here it is exactly copy/pasted from the included presentation decoding_rs.pdf: Yl = omega(Xl.inverse()) / prod(1 - Xj*Xl.inverse()) for j in len(X) (in the paper it's for j in s, but it's useless when len(X) < s because we compute neutral terms 1 for nothing, and wrong when correcting more than s erasures or erasures+errors since it prevents computing all required terms).
# Thus here this method works with erasures too because firstly we fixed the equation to be like the theoretical one (don't know why it was modified in _old_forney(), if it's an optimization, it doesn't enhance anything), and secondly because we removed the product bound on s, which prevented computing errors and erasures above the s=(n-k)//2 bound.
# The best resource I have found for the correct equation is https://en.wikipedia.org/wiki/Forney_algorithm -- note that in the article, fcr is defined as c.
Yl = - (Xl**(1-self.fcr) * omega.evaluate(Xl_inv) / sigma_prime) # sigma_prime is the denominator of the Forney algorithm
Y.append(Yl)
return Y | Computes the error magnitudes. Works also with erasures and errors+erasures beyond the (n-k)//2 bound, here the bound is 2*e+v <= (n-k-1) with e the number of errors and v the number of erasures. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/brownanrs/rs.py#L920-L946 |
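The denominator that the Forney algorithm needs is the formal derivative of the errata locator, and the comments above hint at the usual characteristic-2 shortcut (only odd-degree terms survive, cf. the sigma[1::2] remark). A minimal illustrative sketch of that derivative on a plain coefficient list, lowest degree first, not using this library's Polynomial class:

def formal_derivative_char2(coeffs):
    # d/dx of sum c_i x^i is sum i*c_i x^(i-1); in characteristic 2 the factor i
    # is 0 for even i and 1 for odd i, so only the odd-degree terms survive.
    deriv = [0] * max(len(coeffs) - 1, 0)
    for i in range(1, len(coeffs), 2):   # odd degrees only
        deriv[i - 1] = coeffs[i]
    return deriv

# e.g. sigma = 1 + a*x + b*x^2 + c*x^3  ->  sigma' = a + c*x^2 (the b term vanishes)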
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/process.py | _ProcessMemoryInfoPS.update | def update(self):
"""
Get virtual and resident size of current process via 'ps'.
This should work for MacOS X, Solaris, Linux. Returns true if it was
successful.
"""
try:
p = Popen(['/bin/ps', '-p%s' % self.pid, '-o', 'rss,vsz'],
stdout=PIPE, stderr=PIPE)
except OSError: # pragma: no cover
pass
else:
s = p.communicate()[0].split()
if p.returncode == 0 and len(s) >= 2: # pragma: no branch
self.vsz = int(s[-1]) * 1024
self.rss = int(s[-2]) * 1024
return True
return False | python | def update(self):
"""
Get virtual and resident size of current process via 'ps'.
This should work for MacOS X, Solaris, Linux. Returns true if it was
successful.
"""
try:
p = Popen(['/bin/ps', '-p%s' % self.pid, '-o', 'rss,vsz'],
stdout=PIPE, stderr=PIPE)
except OSError: # pragma: no cover
pass
else:
s = p.communicate()[0].split()
if p.returncode == 0 and len(s) >= 2: # pragma: no branch
self.vsz = int(s[-1]) * 1024
self.rss = int(s[-2]) * 1024
return True
return False | Get virtual and resident size of current process via 'ps'.
This should work for MacOS X, Solaris, Linux. Returns true if it was
successful. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/process.py#L83-L100 |
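A similar measurement can be sketched with the subprocess module; this is an illustrative stand-alone variant, not part of this library (the header-suppressing 'rss=,vsz=' format specifiers and the kilobyte units are standard ps behaviour on POSIX systems):

import os
import subprocess

def rss_vsz_via_ps(pid=None):
    # Ask ps for the resident and virtual set sizes of a process; both fields
    # are reported in kilobytes, so scale them to bytes before returning.
    pid = pid if pid is not None else os.getpid()
    out = subprocess.check_output(['ps', '-p', str(pid), '-o', 'rss=,vsz='])
    rss_kb, vsz_kb = out.split()
    return int(rss_kb) * 1024, int(vsz_kb) * 1024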
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/process.py | _ProcessMemoryInfoProc.update | def update(self):
"""
Get virtual size of current process by reading the process' stat file.
This should work for Linux.
"""
try:
stat = open('/proc/self/stat')
status = open('/proc/self/status')
except IOError: # pragma: no cover
return False
else:
stats = stat.read().split()
self.vsz = int( stats[22] )
self.rss = int( stats[23] ) * self.pagesize
self.pagefaults = int( stats[11] )
for entry in status.readlines():
key, value = entry.split(':')
size_in_bytes = lambda x: int(x.split()[0]) * 1024
if key == 'VmData':
self.data_segment = size_in_bytes(value)
elif key == 'VmExe':
self.code_segment = size_in_bytes(value)
elif key == 'VmLib':
self.shared_segment = size_in_bytes(value)
elif key == 'VmStk':
self.stack_segment = size_in_bytes(value)
key = self.key_map.get(key)
if key:
self.os_specific.append((key, value.strip()))
stat.close()
status.close()
return True | python | def update(self):
"""
Get virtual size of current process by reading the process' stat file.
This should work for Linux.
"""
try:
stat = open('/proc/self/stat')
status = open('/proc/self/status')
except IOError: # pragma: no cover
return False
else:
stats = stat.read().split()
self.vsz = int( stats[22] )
self.rss = int( stats[23] ) * self.pagesize
self.pagefaults = int( stats[11] )
for entry in status.readlines():
key, value = entry.split(':')
size_in_bytes = lambda x: int(x.split()[0]) * 1024
if key == 'VmData':
self.data_segment = size_in_bytes(value)
elif key == 'VmExe':
self.code_segment = size_in_bytes(value)
elif key == 'VmLib':
self.shared_segment = size_in_bytes(value)
elif key == 'VmStk':
self.stack_segment = size_in_bytes(value)
key = self.key_map.get(key)
if key:
self.os_specific.append((key, value.strip()))
stat.close()
status.close()
return True | Get virtual size of current process by reading the process' stat file.
This should work for Linux. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/process.py#L118-L152 |
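The /proc parsing above can also be sketched from /proc/<pid>/status alone, which reports the same Vm* segments in kilobytes. A minimal Linux-only illustration (field names come from the standard status file format, not from this library):

def read_vm_status(pid='self'):
    # Linux only: pull a few Vm* fields out of /proc/<pid>/status.
    # Values in that file are given in kB, so convert them to bytes.
    wanted = ('VmSize', 'VmRSS', 'VmData', 'VmStk', 'VmExe', 'VmLib')
    info = {}
    with open('/proc/%s/status' % pid) as f:
        for line in f:
            key, _, value = line.partition(':')
            if key in wanted:
                info[key] = int(value.split()[0]) * 1024
    return info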
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py | DataView.CreateControls | def CreateControls(self):
"""Create our sub-controls"""
wx.EVT_LIST_COL_CLICK(self, self.GetId(), self.OnReorder)
wx.EVT_LIST_ITEM_SELECTED(self, self.GetId(), self.OnNodeSelected)
wx.EVT_MOTION(self, self.OnMouseMove)
wx.EVT_LIST_ITEM_ACTIVATED(self, self.GetId(), self.OnNodeActivated)
self.CreateColumns() | python | def CreateControls(self):
"""Create our sub-controls"""
wx.EVT_LIST_COL_CLICK(self, self.GetId(), self.OnReorder)
wx.EVT_LIST_ITEM_SELECTED(self, self.GetId(), self.OnNodeSelected)
wx.EVT_MOTION(self, self.OnMouseMove)
wx.EVT_LIST_ITEM_ACTIVATED(self, self.GetId(), self.OnNodeActivated)
self.CreateColumns() | Create our sub-controls | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py#L90-L96 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py | DataView.CreateColumns | def CreateColumns( self ):
"""Create/recreate our column definitions from current self.columns"""
self.SetItemCount(0)
# clear any current columns...
for i in range( self.GetColumnCount())[::-1]:
self.DeleteColumn( i )
# now create
for i, column in enumerate(self.columns):
column.index = i
self.InsertColumn(i, column.name)
if not windows or column.targetWidth is None:
self.SetColumnWidth(i, wx.LIST_AUTOSIZE)
else:
self.SetColumnWidth(i, column.targetWidth) | python | def CreateColumns( self ):
"""Create/recreate our column definitions from current self.columns"""
self.SetItemCount(0)
# clear any current columns...
for i in range( self.GetColumnCount())[::-1]:
self.DeleteColumn( i )
# now create
for i, column in enumerate(self.columns):
column.index = i
self.InsertColumn(i, column.name)
if not windows or column.targetWidth is None:
self.SetColumnWidth(i, wx.LIST_AUTOSIZE)
else:
self.SetColumnWidth(i, column.targetWidth) | Create/recreate our column definitions from current self.columns | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py#L97-L110 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py | DataView.SetColumns | def SetColumns( self, columns, sortOrder=None ):
"""Set columns to a set of values other than the originals and recreates column controls"""
self.columns = columns
self.sortOrder = [(x.defaultOrder,x) for x in self.columns if x.sortDefault]
self.CreateColumns() | python | def SetColumns( self, columns, sortOrder=None ):
"""Set columns to a set of values other than the originals and recreates column controls"""
self.columns = columns
self.sortOrder = [(x.defaultOrder,x) for x in self.columns if x.sortDefault]
self.CreateColumns() | Set columns to a set of values other than the originals and recreates column controls | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py#L111-L115 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py | DataView.OnNodeActivated | def OnNodeActivated(self, event):
"""We have double-clicked for hit enter on a node refocus squaremap to this node"""
try:
node = self.sorted[event.GetIndex()]
except IndexError, err:
log.warn(_('Invalid index in node activated: %(index)s'),
index=event.GetIndex())
else:
wx.PostEvent(
self,
squaremap.SquareActivationEvent(node=node, point=None,
map=None)
) | python | def OnNodeActivated(self, event):
"""We have double-clicked for hit enter on a node refocus squaremap to this node"""
try:
node = self.sorted[event.GetIndex()]
except IndexError, err:
log.warn(_('Invalid index in node activated: %(index)s'),
index=event.GetIndex())
else:
wx.PostEvent(
self,
squaremap.SquareActivationEvent(node=node, point=None,
map=None)
) | We have double-clicked or hit enter on a node; refocus the squaremap to this node | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py#L117-L129 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py | DataView.OnNodeSelected | def OnNodeSelected(self, event):
"""We have selected a node with the list control, tell the world"""
try:
node = self.sorted[event.GetIndex()]
except IndexError, err:
log.warn(_('Invalid index in node selected: %(index)s'),
index=event.GetIndex())
else:
if node is not self.selected_node:
wx.PostEvent(
self,
squaremap.SquareSelectionEvent(node=node, point=None,
map=None)
) | python | def OnNodeSelected(self, event):
"""We have selected a node with the list control, tell the world"""
try:
node = self.sorted[event.GetIndex()]
except IndexError, err:
log.warn(_('Invalid index in node selected: %(index)s'),
index=event.GetIndex())
else:
if node is not self.selected_node:
wx.PostEvent(
self,
squaremap.SquareSelectionEvent(node=node, point=None,
map=None)
) | We have selected a node with the list control, tell the world | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py#L131-L144 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py | DataView.SetIndicated | def SetIndicated(self, node):
"""Set this node to indicated status"""
self.indicated_node = node
self.indicated = self.NodeToIndex(node)
self.Refresh(False)
return self.indicated | python | def SetIndicated(self, node):
"""Set this node to indicated status"""
self.indicated_node = node
self.indicated = self.NodeToIndex(node)
self.Refresh(False)
return self.indicated | Set this node to indicated status | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py#L162-L167 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py | DataView.SetSelected | def SetSelected(self, node):
"""Set our selected node"""
self.selected_node = node
index = self.NodeToIndex(node)
if index != -1:
self.Focus(index)
self.Select(index, True)
return index | python | def SetSelected(self, node):
"""Set our selected node"""
self.selected_node = node
index = self.NodeToIndex(node)
if index != -1:
self.Focus(index)
self.Select(index, True)
return index | Set our selected node | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py#L169-L176 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py | DataView.OnReorder | def OnReorder(self, event):
"""Given a request to reorder, tell us to reorder"""
column = self.columns[event.GetColumn()]
return self.ReorderByColumn( column ) | python | def OnReorder(self, event):
"""Given a request to reorder, tell us to reorder"""
column = self.columns[event.GetColumn()]
return self.ReorderByColumn( column ) | Given a request to reorder, tell us to reorder | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py#L190-L193 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py | DataView.ReorderByColumn | def ReorderByColumn( self, column ):
"""Reorder the set of records by column"""
# TODO: store current selection and re-select after sorting...
single_column = self.SetNewOrder( column )
self.reorder( single_column = True )
self.Refresh() | python | def ReorderByColumn( self, column ):
"""Reorder the set of records by column"""
# TODO: store current selection and re-select after sorting...
single_column = self.SetNewOrder( column )
self.reorder( single_column = True )
self.Refresh() | Reorder the set of records by column | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py#L195-L200 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py | DataView.SetNewOrder | def SetNewOrder( self, column ):
"""Set new sorting order based on column, return whether a simple single-column (True) or multiple (False)"""
if column.sortOn:
# multiple sorts for the click...
columns = [self.columnByAttribute(attr) for attr in column.sortOn]
diff = [(a, b) for a, b in zip(self.sortOrder, columns)
if b is not a[1]]
if not diff:
self.sortOrder[0] = (not self.sortOrder[0][0], column)
else:
self.sortOrder = [
(c.defaultOrder, c) for c in columns
] + [(a, b) for (a, b) in self.sortOrder if b not in columns]
return False
else:
if column is self.sortOrder[0][1]:
# reverse current major order
self.sortOrder[0] = (not self.sortOrder[0][0], column)
else:
self.sortOrder = [(column.defaultOrder, column)] + [
(a, b)
for (a, b) in self.sortOrder if b is not column
]
return True | python | def SetNewOrder( self, column ):
"""Set new sorting order based on column, return whether a simple single-column (True) or multiple (False)"""
if column.sortOn:
# multiple sorts for the click...
columns = [self.columnByAttribute(attr) for attr in column.sortOn]
diff = [(a, b) for a, b in zip(self.sortOrder, columns)
if b is not a[1]]
if not diff:
self.sortOrder[0] = (not self.sortOrder[0][0], column)
else:
self.sortOrder = [
(c.defaultOrder, c) for c in columns
] + [(a, b) for (a, b) in self.sortOrder if b not in columns]
return False
else:
if column is self.sortOrder[0][1]:
# reverse current major order
self.sortOrder[0] = (not self.sortOrder[0][0], column)
else:
self.sortOrder = [(column.defaultOrder, column)] + [
(a, b)
for (a, b) in self.sortOrder if b is not column
]
return True | Set new sorting order based on column, return whether a simple single-column (True) or multiple (False) | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py#L202-L225 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py | DataView.reorder | def reorder(self, single_column=False):
"""Force a reorder of the displayed items"""
if single_column:
columns = self.sortOrder[:1]
else:
columns = self.sortOrder
for ascending,column in columns[::-1]:
# Python 2.2+ guarantees stable sort, so sort by each column in reverse
# order will order by the assigned columns
self.sorted.sort( key=column.get, reverse=(not ascending)) | python | def reorder(self, single_column=False):
"""Force a reorder of the displayed items"""
if single_column:
columns = self.sortOrder[:1]
else:
columns = self.sortOrder
for ascending,column in columns[::-1]:
# Python 2.2+ guarantees stable sort, so sort by each column in reverse
# order will order by the assigned columns
self.sorted.sort( key=column.get, reverse=(not ascending)) | Force a reorder of the displayed items | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py#L227-L236 |
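The reorder method relies on Python's sort being stable: sorting repeatedly, from the least significant column to the most significant one, yields the combined multi-column ordering. A small self-contained sketch of the same idea with plain tuples (the data and sort spec are made up for illustration):

rows = [('b', 2), ('a', 2), ('b', 1), ('a', 1)]
# Sort spec with the most significant column first: column 0 ascending, then column 1 descending.
order = [(0, False), (1, True)]
for col, descending in reversed(order):          # apply the major key last
    rows.sort(key=lambda r, c=col: r[c], reverse=descending)
# rows is now [('a', 2), ('a', 1), ('b', 2), ('b', 1)]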
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py | DataView.integrateRecords | def integrateRecords(self, functions):
"""Integrate records from the loader"""
self.SetItemCount(len(functions))
self.sorted = functions[:]
self.reorder()
self.Refresh() | python | def integrateRecords(self, functions):
"""Integrate records from the loader"""
self.SetItemCount(len(functions))
self.sorted = functions[:]
self.reorder()
self.Refresh() | Integrate records from the loader | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py#L238-L243 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py | DataView.OnGetItemAttr | def OnGetItemAttr(self, item):
"""Retrieve ListItemAttr for the given item (index)"""
if self.indicated > -1 and item == self.indicated:
return self.indicated_attribute
return None | python | def OnGetItemAttr(self, item):
"""Retrieve ListItemAttr for the given item (index)"""
if self.indicated > -1 and item == self.indicated:
return self.indicated_attribute
return None | Retrieve ListItemAttr for the given item (index) | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py#L248-L252 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py | DataView.OnGetItemText | def OnGetItemText(self, item, col):
"""Retrieve text for the item and column respectively"""
# TODO: need to format for rjust and the like...
try:
column = self.columns[col]
value = column.get(self.sorted[item])
except IndexError, err:
return None
else:
if value is None:
return u''
if column.percentPossible and self.percentageView and self.total:
value = value / float(self.total) * 100.00
if column.format:
try:
return column.format % (value,)
except Exception, err:
log.warn('Column %s could not format %r value: %r',
column.name, type(value), value
)
value = column.get(self.sorted[item] )
if isinstance(value,(unicode,str)):
return value
return unicode(value)
else:
if isinstance(value,(unicode,str)):
return value
return unicode(value) | python | def OnGetItemText(self, item, col):
"""Retrieve text for the item and column respectively"""
# TODO: need to format for rjust and the like...
try:
column = self.columns[col]
value = column.get(self.sorted[item])
except IndexError, err:
return None
else:
if value is None:
return u''
if column.percentPossible and self.percentageView and self.total:
value = value / float(self.total) * 100.00
if column.format:
try:
return column.format % (value,)
except Exception, err:
log.warn('Column %s could not format %r value: %r',
column.name, type(value), value
)
value = column.get(self.sorted[item] )
if isinstance(value,(unicode,str)):
return value
return unicode(value)
else:
if isinstance(value,(unicode,str)):
return value
return unicode(value) | Retrieve text for the item and column respectively | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/listviews.py#L254-L281 |
lrq3000/pyFileFixity | pyFileFixity/lib/brownanrs/imageencode.py | encode | def encode(input, output_filename):
"""Encodes the input data with reed-solomon error correction in 223 byte
blocks, and outputs each block along with 32 parity bytes to a new file by
the given filename.
input is a file-like object
The outputted image will be in png format, and will be 255 by x pixels with
one color channel. X is the number of 255 byte blocks from the input. Each
block of data will be one row, therefore, the data can be recovered if no
more than 16 pixels per row are altered.
"""
coder = rs.RSCoder(255,223)
output = []
while True:
block = input.read(223)
if not block: break
code = coder.encode_fast(block)
output.append(code)
sys.stderr.write(".")
sys.stderr.write("\n")
out = Image.new("L", (rowstride,len(output)))
out.putdata("".join(output))
out.save(output_filename) | python | def encode(input, output_filename):
"""Encodes the input data with reed-solomon error correction in 223 byte
blocks, and outputs each block along with 32 parity bytes to a new file by
the given filename.
input is a file-like object
The outputted image will be in png format, and will be 255 by x pixels with
one color channel. X is the number of 255 byte blocks from the input. Each
block of data will be one row, therefore, the data can be recovered if no
more than 16 pixels per row are altered.
"""
coder = rs.RSCoder(255,223)
output = []
while True:
block = input.read(223)
if not block: break
code = coder.encode_fast(block)
output.append(code)
sys.stderr.write(".")
sys.stderr.write("\n")
out = Image.new("L", (rowstride,len(output)))
out.putdata("".join(output))
out.save(output_filename) | Encodes the input data with reed-solomon error correction in 223 byte
blocks, and outputs each block along with 32 parity bytes to a new file by
the given filename.
input is a file-like object
The outputted image will be in png format, and will be 255 by x pixels with
one color channel. X is the number of 255 byte blocks from the input. Each
block of data will be one row, therefore, the data can be recovered if no
more than 16 pixels per row are altered. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/brownanrs/imageencode.py#L8-L35 |
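A minimal usage sketch for the encode() helper above (the import path and file names are assumptions, not verified against the package layout):

from pyFileFixity.lib.brownanrs import imageencode

# Each 223-byte block read from "data.bin" becomes one 255-byte row
# (223 data bytes + 32 parity bytes) in the output PNG.
with open("data.bin", "rb") as f:
    imageencode.encode(f, "data.png")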
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | redirect | def redirect(url, code=303):
""" Aborts execution and causes a 303 redirect """
scriptname = request.environ.get('SCRIPT_NAME', '').rstrip('/') + '/'
location = urljoin(request.url, urljoin(scriptname, url))
raise HTTPResponse("", status=code, header=dict(Location=location)) | python | def redirect(url, code=303):
""" Aborts execution and causes a 303 redirect """
scriptname = request.environ.get('SCRIPT_NAME', '').rstrip('/') + '/'
location = urljoin(request.url, urljoin(scriptname, url))
raise HTTPResponse("", status=code, header=dict(Location=location)) | Aborts execution and causes a 303 redirect | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L916-L920 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | parse_date | def parse_date(ims):
""" Parse rfc1123, rfc850 and asctime timestamps and return UTC epoch. """
try:
ts = email.utils.parsedate_tz(ims)
return time.mktime(ts[:8] + (0,)) - (ts[9] or 0) - time.timezone
except (TypeError, ValueError, IndexError):
return None | python | def parse_date(ims):
""" Parse rfc1123, rfc850 and asctime timestamps and return UTC epoch. """
try:
ts = email.utils.parsedate_tz(ims)
return time.mktime(ts[:8] + (0,)) - (ts[9] or 0) - time.timezone
except (TypeError, ValueError, IndexError):
return None | Parse rfc1123, rfc850 and asctime timestamps and return UTC epoch. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L986-L992 |
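For reference, a self-contained sketch of the same conversion using only the standard library (the sample date string is arbitrary):

import email.utils
import time

ims = "Sun, 06 Nov 1994 08:49:37 GMT"   # RFC 1123 style timestamp
ts = email.utils.parsedate_tz(ims)       # 10-tuple, or None if unparsable
epoch = time.mktime(ts[:8] + (0,)) - (ts[9] or 0) - time.timezone
# epoch now holds the UTC POSIX timestamp for the parsed date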
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | run | def run(app=None, server=WSGIRefServer, host='127.0.0.1', port=8080,
interval=1, reloader=False, **kargs):
""" Runs bottle as a web server. """
app = app if app else default_app()
quiet = bool(kargs.get('quiet', False))
# Instantiate server, if it is a class instead of an instance
if isinstance(server, type):
server = server(host=host, port=port, **kargs)
if not isinstance(server, ServerAdapter):
raise RuntimeError("Server must be a subclass of WSGIAdapter")
if not quiet and isinstance(server, ServerAdapter): # pragma: no cover
if not reloader or os.environ.get('BOTTLE_CHILD') == 'true':
print("Bottle server starting up (using %s)..." % repr(server))
print("Listening on http://%s:%d/" % (server.host, server.port))
print("Use Ctrl-C to quit.")
print()
else:
print("Bottle auto reloader starting up...")
try:
if reloader and interval:
reloader_run(server, app, interval)
else:
server.run(app)
except KeyboardInterrupt:
if not quiet: # pragma: no cover
print("Shutting Down...") | python | def run(app=None, server=WSGIRefServer, host='127.0.0.1', port=8080,
interval=1, reloader=False, **kargs):
""" Runs bottle as a web server. """
app = app if app else default_app()
quiet = bool(kargs.get('quiet', False))
# Instantiate server, if it is a class instead of an instance
if isinstance(server, type):
server = server(host=host, port=port, **kargs)
if not isinstance(server, ServerAdapter):
raise RuntimeError("Server must be a subclass of WSGIAdapter")
if not quiet and isinstance(server, ServerAdapter): # pragma: no cover
if not reloader or os.environ.get('BOTTLE_CHILD') == 'true':
print("Bottle server starting up (using %s)..." % repr(server))
print("Listening on http://%s:%d/" % (server.host, server.port))
print("Use Ctrl-C to quit.")
print()
else:
print("Bottle auto reloader starting up...")
try:
if reloader and interval:
reloader_run(server, app, interval)
else:
server.run(app)
except KeyboardInterrupt:
if not quiet: # pragma: no cover
print("Shutting Down...") | Runs bottle as a web server. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L1239-L1264 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | template | def template(tpl, template_adapter=SimpleTemplate, **kwargs):
'''
Get a rendered template as a string iterator.
You can use a name, a filename or a template string as first parameter.
'''
if tpl not in TEMPLATES or DEBUG:
settings = kwargs.get('template_settings',{})
lookup = kwargs.get('template_lookup', TEMPLATE_PATH)
if isinstance(tpl, template_adapter):
TEMPLATES[tpl] = tpl
if settings: TEMPLATES[tpl].prepare(settings)
elif "\n" in tpl or "{" in tpl or "%" in tpl or '$' in tpl:
TEMPLATES[tpl] = template_adapter(source=tpl, lookup=lookup, settings=settings)
else:
TEMPLATES[tpl] = template_adapter(name=tpl, lookup=lookup, settings=settings)
if not TEMPLATES[tpl]:
abort(500, 'Template (%s) not found' % tpl)
kwargs['abort'] = abort
kwargs['request'] = request
kwargs['response'] = response
return TEMPLATES[tpl].render(**kwargs) | python | def template(tpl, template_adapter=SimpleTemplate, **kwargs):
'''
Get a rendered template as a string iterator.
You can use a name, a filename or a template string as first parameter.
'''
if tpl not in TEMPLATES or DEBUG:
settings = kwargs.get('template_settings',{})
lookup = kwargs.get('template_lookup', TEMPLATE_PATH)
if isinstance(tpl, template_adapter):
TEMPLATES[tpl] = tpl
if settings: TEMPLATES[tpl].prepare(settings)
elif "\n" in tpl or "{" in tpl or "%" in tpl or '$' in tpl:
TEMPLATES[tpl] = template_adapter(source=tpl, lookup=lookup, settings=settings)
else:
TEMPLATES[tpl] = template_adapter(name=tpl, lookup=lookup, settings=settings)
if not TEMPLATES[tpl]:
abort(500, 'Template (%s) not found' % tpl)
kwargs['abort'] = abort
kwargs['request'] = request
kwargs['response'] = response
return TEMPLATES[tpl].render(**kwargs) | Get a rendered template as a string iterator.
You can use a name, a filename or a template string as first parameter. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L1566-L1586 |
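A usage sketch for template() (the {{name}} placeholder syntax of SimpleTemplate is assumed here; file lookup goes through TEMPLATE_PATH):

# Inline template source: anything containing '{', '%', '$' or a newline is
# treated as the template text itself and cached in TEMPLATES.
html = template('Hello {{name}}!', name='World')

# Plain name: looked up as a template file (e.g. "hello.tpl") on TEMPLATE_PATH.
html = template('hello', name='World')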
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Route.tokens | def tokens(self):
""" Return a list of (type, value) tokens. """
if not self._tokens:
self._tokens = list(self.tokenise(self.route))
return self._tokens | python | def tokens(self):
""" Return a list of (type, value) tokens. """
if not self._tokens:
self._tokens = list(self.tokenise(self.route))
return self._tokens | Return a list of (type, value) tokens. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L204-L208 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Route.tokenise | def tokenise(cls, route):
''' Split a string into an iterator of (type, value) tokens. '''
match = None
for match in cls.syntax.finditer(route):
pre, name, rex = match.groups()
if pre: yield ('TXT', pre.replace('\\:',':'))
if rex and name: yield ('VAR', (rex, name))
elif name: yield ('VAR', (cls.default, name))
elif rex: yield ('ANON', rex)
if not match:
yield ('TXT', route.replace('\\:',':'))
elif match.end() < len(route):
yield ('TXT', route[match.end():].replace('\\:',':')) | python | def tokenise(cls, route):
''' Split a string into an iterator of (type, value) tokens. '''
match = None
for match in cls.syntax.finditer(route):
pre, name, rex = match.groups()
if pre: yield ('TXT', pre.replace('\\:',':'))
if rex and name: yield ('VAR', (rex, name))
elif name: yield ('VAR', (cls.default, name))
elif rex: yield ('ANON', rex)
if not match:
yield ('TXT', route.replace('\\:',':'))
elif match.end() < len(route):
yield ('TXT', route[match.end():].replace('\\:',':')) | Split a string into an iterator of (type, value) tokens. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L211-L223 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Route.group_re | def group_re(self):
''' Return a regexp pattern with named groups '''
out = ''
for token, data in self.tokens():
if token == 'TXT': out += re.escape(data)
elif token == 'VAR': out += '(?P<%s>%s)' % (data[1], data[0])
elif token == 'ANON': out += '(?:%s)' % data
return out | python | def group_re(self):
''' Return a regexp pattern with named groups '''
out = ''
for token, data in self.tokens():
if token == 'TXT': out += re.escape(data)
elif token == 'VAR': out += '(?P<%s>%s)' % (data[1], data[0])
elif token == 'ANON': out += '(?:%s)' % data
return out | Return a regexp pattern with named groups | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L225-L232 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Route.format_str | def format_str(self):
''' Return a format string with named fields. '''
if self.static:
return self.route.replace('%','%%')
out, i = '', 0
for token, value in self.tokens():
if token == 'TXT': out += value.replace('%','%%')
elif token == 'ANON': out += '%%(anon%d)s' % i; i+=1
elif token == 'VAR': out += '%%(%s)s' % value[1]
return out | python | def format_str(self):
''' Return a format string with named fields. '''
if self.static:
return self.route.replace('%','%%')
out, i = '', 0
for token, value in self.tokens():
if token == 'TXT': out += value.replace('%','%%')
elif token == 'ANON': out += '%%(anon%d)s' % i; i+=1
elif token == 'VAR': out += '%%(%s)s' % value[1]
return out | Return a format string with named fields. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L238-L247 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Route.is_dynamic | def is_dynamic(self):
''' Return true if the route contains dynamic parts '''
if not self._static:
for token, value in self.tokens():
if token != 'TXT':
return True
self._static = True
return False | python | def is_dynamic(self):
''' Return true if the route contains dynamic parts '''
if not self._static:
for token, value in self.tokens():
if token != 'TXT':
return True
self._static = True
return False | Return true if the route contains dynamic parts | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L253-L260 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Router.match | def match(self, uri):
''' Matches a URL and returns a (handler, target) tuple '''
if uri in self.static:
return self.static[uri], {}
for combined, subroutes in self.dynamic:
match = combined.match(uri)
if not match: continue
target, groups = subroutes[match.lastindex - 1]
groups = groups.match(uri).groupdict() if groups else {}
return target, groups
return None, {} | python | def match(self, uri):
''' Matches a URL and returns a (handler, target) tuple '''
if uri in self.static:
return self.static[uri], {}
for combined, subroutes in self.dynamic:
match = combined.match(uri)
if not match: continue
target, groups = subroutes[match.lastindex - 1]
groups = groups.match(uri).groupdict() if groups else {}
return target, groups
return None, {} | Matches a URL and returns a (handler, target) tuple | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L308-L318
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Router.build | def build(self, route_name, **args):
''' Builds a URL out of a named route and some parameters.'''
try:
return self.named[route_name] % args
except KeyError:
raise RouteBuildError("No route found with name '%s'." % route_name) | python | def build(self, route_name, **args):
''' Builds a URL out of a named route and some parameters.'''
try:
return self.named[route_name] % args
except KeyError:
raise RouteBuildError("No route found with name '%s'." % route_name) | Builds a URL out of a named route and some parameters. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L320-L325
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Bottle.add_filter | def add_filter(self, ftype, func):
''' Register a new output filter. Whenever bottle hits a handler output
matching `ftype`, `func` is applied to it. '''
if not isinstance(ftype, type):
raise TypeError("Expected type object, got %s" % type(ftype))
self.castfilter = [(t, f) for (t, f) in self.castfilter if t != ftype]
self.castfilter.append((ftype, func))
self.castfilter.sort() | python | def add_filter(self, ftype, func):
''' Register a new output filter. Whenever bottle hits a handler output
matching `ftype`, `func` is applied to it. '''
if not isinstance(ftype, type):
raise TypeError("Expected type object, got %s" % type(ftype))
self.castfilter = [(t, f) for (t, f) in self.castfilter if t != ftype]
self.castfilter.append((ftype, func))
self.castfilter.sort() | Register a new output filter. Whenever bottle hits a handler output
matching `ftype`, `func` is applied to it. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L371-L378
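The idea behind the cast-filter list, as a standalone sketch (illustration only, not the Bottle API itself): keep one (type, callable) pair per output type and apply the first pair whose type matches the handler output.

import json

castfilter = []

def add_filter(ftype, func):
    # Drop any existing filter registered for this type, then append the new one.
    global castfilter
    castfilter = [(t, f) for (t, f) in castfilter if t is not ftype]
    castfilter.append((ftype, func))

def apply_filters(out):
    for ftype, func in castfilter:
        if isinstance(out, ftype):
            return func(out)
    return out

add_filter(dict, json.dumps)
print(apply_filters({'status': 'ok'}))   # -> {"status": "ok"}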
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Bottle.match_url | def match_url(self, path, method='GET'):
""" Find a callback bound to a path and a specific HTTP method.
Return (callback, param) tuple or (None, {}).
method: HEAD falls back to GET. HEAD and GET fall back to ALL.
"""
path = path.strip().lstrip('/')
handler, param = self.routes.match(method + ';' + path)
if handler: return handler, param
if method == 'HEAD':
handler, param = self.routes.match('GET;' + path)
if handler: return handler, param
handler, param = self.routes.match('ANY;' + path)
if handler: return handler, param
return None, {} | python | def match_url(self, path, method='GET'):
""" Find a callback bound to a path and a specific HTTP method.
Return (callback, param) tuple or (None, {}).
method: HEAD falls back to GET. HEAD and GET fall back to ALL.
"""
path = path.strip().lstrip('/')
handler, param = self.routes.match(method + ';' + path)
if handler: return handler, param
if method == 'HEAD':
handler, param = self.routes.match('GET;' + path)
if handler: return handler, param
handler, param = self.routes.match('ANY;' + path)
if handler: return handler, param
return None, {} | Find a callback bound to a path and a specific HTTP method.
Return (callback, param) tuple or (None, {}).
method: HEAD falls back to GET. HEAD and GET fall back to ALL. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L380-L393 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Bottle.get_url | def get_url(self, routename, **kargs):
""" Return a string that matches a named route """
return '/' + self.routes.build(routename, **kargs).split(';', 1)[1] | python | def get_url(self, routename, **kargs):
""" Return a string that matches a named route """
return '/' + self.routes.build(routename, **kargs).split(';', 1)[1] | Return a string that matches a named route | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L395-L397 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Bottle.route | def route(self, path=None, method='GET', **kargs):
""" Decorator: Bind a function to a GET request path.
If the path parameter is None, the signature of the decorated
function is used to generate the path. See yieldroutes()
for details.
The method parameter (default: GET) specifies the HTTP request
method to listen to. You can specify a list of methods.
"""
if isinstance(method, str): #TODO: Test this
method = method.split(';')
def wrapper(callback):
paths = [] if path is None else [path.strip().lstrip('/')]
if not paths: # Lets generate the path automatically
paths = yieldroutes(callback)
for p in paths:
for m in method:
route = m.upper() + ';' + p
self.routes.add(route, callback, **kargs)
return callback
return wrapper | python | def route(self, path=None, method='GET', **kargs):
""" Decorator: Bind a function to a GET request path.
If the path parameter is None, the signature of the decorated
function is used to generate the path. See yieldroutes()
for details.
The method parameter (default: GET) specifies the HTTP request
method to listen to. You can specify a list of methods.
"""
if isinstance(method, str): #TODO: Test this
method = method.split(';')
def wrapper(callback):
paths = [] if path is None else [path.strip().lstrip('/')]
if not paths: # Lets generate the path automatically
paths = yieldroutes(callback)
for p in paths:
for m in method:
route = m.upper() + ';' + p
self.routes.add(route, callback, **kargs)
return callback
return wrapper | Decorator: Bind a function to a GET request path.
If the path parameter is None, the signature of the decorated
function is used to generate the path. See yieldroutes()
for details.
The method parameter (default: GET) specifies the HTTP request
method to listen to. You can specify a list of methods. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L399-L420 |
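A decorator usage sketch (assuming a Bottle() instance; the old-style :name placeholder syntax is used, matching Route.tokenise above):

app = Bottle()

@app.route('/hello/:name')      # explicit path with one dynamic segment
def hello(name):
    return 'Hello %s!' % name

@app.route(method='GET')        # path=None: generated from the function signature
def status():
    return 'ok'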
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Request.bind | def bind(self, environ, app=None):
""" Bind a new WSGI enviroment and clear out all previously computed
attributes.
This is done automatically for the global `bottle.request`
instance on every request.
"""
if isinstance(environ, Request): # Recycle already parsed content
for key in self.__dict__: #TODO: Test this
setattr(self, key, getattr(environ, key))
self.app = app
return
self._GET = self._POST = self._GETPOST = self._COOKIES = None
self._body = self._header = None
self.environ = environ
self.app = app
# These attributes are used anyway, so it is ok to compute them here
self.path = '/' + environ.get('PATH_INFO', '/').lstrip('/')
self.method = environ.get('REQUEST_METHOD', 'GET').upper() | python | def bind(self, environ, app=None):
""" Bind a new WSGI enviroment and clear out all previously computed
attributes.
This is done automatically for the global `bottle.request`
instance on every request.
"""
if isinstance(environ, Request): # Recycle already parsed content
for key in self.__dict__: #TODO: Test this
setattr(self, key, getattr(environ, key))
self.app = app
return
self._GET = self._POST = self._GETPOST = self._COOKIES = None
self._body = self._header = None
self.environ = environ
self.app = app
# These attributes are used anyway, so it is ok to compute them here
self.path = '/' + environ.get('PATH_INFO', '/').lstrip('/')
self.method = environ.get('REQUEST_METHOD', 'GET').upper() | Bind a new WSGI environment and clear out all previously computed
attributes.
This is done automatically for the global `bottle.request`
instance on every request. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L553-L571 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Request.path_shift | def path_shift(self, count=1):
''' Shift some levels of PATH_INFO into SCRIPT_NAME and return the
moved part. count defaults to 1'''
#/a/b/ /c/d --> 'a','b' 'c','d'
if count == 0: return ''
pathlist = self.path.strip('/').split('/')
scriptlist = self.environ.get('SCRIPT_NAME','/').strip('/').split('/')
if pathlist and pathlist[0] == '': pathlist = []
if scriptlist and scriptlist[0] == '': scriptlist = []
if count > 0 and count <= len(pathlist):
moved = pathlist[:count]
scriptlist = scriptlist + moved
pathlist = pathlist[count:]
elif count < 0 and count >= -len(scriptlist):
moved = scriptlist[count:]
pathlist = moved + pathlist
scriptlist = scriptlist[:count]
else:
empty = 'SCRIPT_NAME' if count < 0 else 'PATH_INFO'
raise AssertionError("Cannot shift. Nothing left from %s" % empty)
self['PATH_INFO'] = self.path = '/' + '/'.join(pathlist) \
+ ('/' if self.path.endswith('/') and pathlist else '')
self['SCRIPT_NAME'] = '/' + '/'.join(scriptlist)
return '/'.join(moved) | python | def path_shift(self, count=1):
''' Shift some levels of PATH_INFO into SCRIPT_NAME and return the
moved part. count defaults to 1'''
#/a/b/ /c/d --> 'a','b' 'c','d'
if count == 0: return ''
pathlist = self.path.strip('/').split('/')
scriptlist = self.environ.get('SCRIPT_NAME','/').strip('/').split('/')
if pathlist and pathlist[0] == '': pathlist = []
if scriptlist and scriptlist[0] == '': scriptlist = []
if count > 0 and count <= len(pathlist):
moved = pathlist[:count]
scriptlist = scriptlist + moved
pathlist = pathlist[count:]
elif count < 0 and count >= -len(scriptlist):
moved = scriptlist[count:]
pathlist = moved + pathlist
scriptlist = scriptlist[:count]
else:
empty = 'SCRIPT_NAME' if count < 0 else 'PATH_INFO'
raise AssertionError("Cannot shift. Nothing left from %s" % empty)
self['PATH_INFO'] = self.path = '/' + '/'.join(pathlist) \
+ ('/' if self.path.endswith('/') and pathlist else '')
self['SCRIPT_NAME'] = '/' + '/'.join(scriptlist)
return '/'.join(moved) | Shift some levels of PATH_INFO into SCRIPT_NAME and return the
moved part. count defaults to 1 | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L577-L600 |
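A simplified, self-contained sketch of what a positive shift does (illustration only, not the method itself):

def shift_path(script_name, path_info, count=1):
    # Move `count` leading segments of PATH_INFO onto the end of SCRIPT_NAME.
    path = [s for s in path_info.strip('/').split('/') if s]
    script = [s for s in script_name.strip('/').split('/') if s]
    moved, path = path[:count], path[count:]
    script += moved
    return '/' + '/'.join(script), '/' + '/'.join(path), '/'.join(moved)

print(shift_path('/app', '/a/b/c'))   # -> ('/app/a', '/b/c', 'a')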
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Request.url | def url(self):
""" Full URL as requested by the client (computed).
This value is constructed out of different environment variables
and includes scheme, host, port, scriptname, path and query string.
"""
scheme = self.environ.get('wsgi.url_scheme', 'http')
host = self.environ.get('HTTP_X_FORWARDED_HOST', self.environ.get('HTTP_HOST', None))
if not host:
host = self.environ.get('SERVER_NAME')
port = self.environ.get('SERVER_PORT', '80')
if scheme + port not in ('https443', 'http80'):
host += ':' + port
parts = (scheme, host, urlquote(self.fullpath), self.query_string, '')
return urlunsplit(parts) | python | def url(self):
""" Full URL as requested by the client (computed).
This value is constructed out of different environment variables
and includes scheme, host, port, scriptname, path and query string.
"""
scheme = self.environ.get('wsgi.url_scheme', 'http')
host = self.environ.get('HTTP_X_FORWARDED_HOST', self.environ.get('HTTP_HOST', None))
if not host:
host = self.environ.get('SERVER_NAME')
port = self.environ.get('SERVER_PORT', '80')
if scheme + port not in ('https443', 'http80'):
host += ':' + port
parts = (scheme, host, urlquote(self.fullpath), self.query_string, '')
return urlunsplit(parts) | Full URL as requested by the client (computed).
This value is constructed out of different environment variables
and includes scheme, host, port, scriptname, path and query string. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L625-L639 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Request.POST | def POST(self):
""" The HTTP POST body parsed into a MultiDict.
This supports urlencoded and multipart POST requests. Multipart
is commonly used for file uploads and may result in some of the
values being cgi.FieldStorage objects instead of strings.
Multiple values per key are possible. See MultiDict for details.
"""
if self._POST is None:
save_env = dict() # Build a save environment for cgi
for key in ('REQUEST_METHOD', 'CONTENT_TYPE', 'CONTENT_LENGTH'):
if key in self.environ:
save_env[key] = self.environ[key]
save_env['QUERY_STRING'] = '' # Without this, sys.argv is called!
if TextIOWrapper:
fb = TextIOWrapper(self.body, encoding='ISO-8859-1')
else:
fb = self.body
data = cgi.FieldStorage(fp=fb, environ=save_env)
self._POST = MultiDict()
for item in data.list:
self._POST[item.name] = item if item.filename else item.value
return self._POST | python | def POST(self):
""" The HTTP POST body parsed into a MultiDict.
This supports urlencoded and multipart POST requests. Multipart
is commonly used for file uploads and may result in some of the
values being cgi.FieldStorage objects instead of strings.
Multiple values per key are possible. See MultiDict for details.
"""
if self._POST is None:
save_env = dict() # Build a save environment for cgi
for key in ('REQUEST_METHOD', 'CONTENT_TYPE', 'CONTENT_LENGTH'):
if key in self.environ:
save_env[key] = self.environ[key]
save_env['QUERY_STRING'] = '' # Without this, sys.argv is called!
if TextIOWrapper:
fb = TextIOWrapper(self.body, encoding='ISO-8859-1')
else:
fb = self.body
data = cgi.FieldStorage(fp=fb, environ=save_env)
self._POST = MultiDict()
for item in data.list:
self._POST[item.name] = item if item.filename else item.value
return self._POST | The HTTP POST body parsed into a MultiDict.
This supports urlencoded and multipart POST requests. Multipart
is commonly used for file uploads and may result in some of the
values being cgi.FieldStorage objects instead of strings.
Multiple values per key are possible. See MultiDict for details. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L676-L699 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Request.params | def params(self):
""" A combined MultiDict with POST and GET parameters. """
if self._GETPOST is None:
self._GETPOST = MultiDict(self.GET)
self._GETPOST.update(dict(self.POST))
return self._GETPOST | python | def params(self):
""" A combined MultiDict with POST and GET parameters. """
if self._GETPOST is None:
self._GETPOST = MultiDict(self.GET)
self._GETPOST.update(dict(self.POST))
return self._GETPOST | A combined MultiDict with POST and GET parameters. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L702-L707 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Request.get_cookie | def get_cookie(self, *args):
""" Return the (decoded) value of a cookie. """
value = self.COOKIES.get(*args)
sec = self.app.config['securecookie.key']
dec = cookie_decode(value, sec)
return dec or value | python | def get_cookie(self, *args):
""" Return the (decoded) value of a cookie. """
value = self.COOKIES.get(*args)
sec = self.app.config['securecookie.key']
dec = cookie_decode(value, sec)
return dec or value | Return the (decoded) value of a cookie. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L753-L758 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Response.copy | def copy(self):
''' Returns a copy of self '''
copy = Response(self.app)
copy.status = self.status
copy.headers = self.headers.copy()
copy.content_type = self.content_type
return copy | python | def copy(self):
''' Returns a copy of self '''
copy = Response(self.app)
copy.status = self.status
copy.headers = self.headers.copy()
copy.content_type = self.content_type
return copy | Returns a copy of self | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L776-L782 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Response.wsgiheader | def wsgiheader(self):
''' Returns a WSGI-conformant list of header/value pairs. '''
for c in list(self.COOKIES.values()):
if c.OutputString() not in self.headers.getall('Set-Cookie'):
self.headers.append('Set-Cookie', c.OutputString())
return list(self.headers.iterallitems()) | python | def wsgiheader(self):
''' Returns a WSGI-conformant list of header/value pairs. '''
for c in list(self.COOKIES.values()):
if c.OutputString() not in self.headers.getall('Set-Cookie'):
self.headers.append('Set-Cookie', c.OutputString())
return list(self.headers.iterallitems()) | Returns a WSGI-conformant list of header/value pairs. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L784-L789
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py | Response.set_cookie | def set_cookie(self, key, value, **kargs):
""" Add a new cookie with various options.
If the cookie value is not a string, a secure cookie is created.
Possible options are:
expires, path, comment, domain, max_age, secure, version, httponly
See http://de.wikipedia.org/wiki/HTTP-Cookie#Aufbau for details
"""
if not isinstance(value, str):
sec = self.app.config['securecookie.key']
value = cookie_encode(value, sec).decode('ascii') #2to3 hack
self.COOKIES[key] = value
for k, v in kargs.items():
self.COOKIES[key][k.replace('_', '-')] = v | python | def set_cookie(self, key, value, **kargs):
""" Add a new cookie with various options.
If the cookie value is not a string, a secure cookie is created.
Possible options are:
expires, path, comment, domain, max_age, secure, version, httponly
See http://de.wikipedia.org/wiki/HTTP-Cookie#Aufbau for details
"""
if not isinstance(value, str):
sec = self.app.config['securecookie.key']
value = cookie_encode(value, sec).decode('ascii') #2to3 hack
self.COOKIES[key] = value
for k, v in kargs.items():
self.COOKIES[key][k.replace('_', '-')] = v | Add a new cookie with various options.
If the cookie value is not a string, a secure cookie is created.
Possible options are:
expires, path, comment, domain, max_age, secure, version, httponly
See http://de.wikipedia.org/wiki/HTTP-Cookie#Aufbau for details | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/pympler/util/bottle3.py#L809-L823 |
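A usage sketch (cookie name, value and options are examples; a non-string value requires app.config['securecookie.key'] to be set, per the docstring):

# Plain cookie with standard attributes (max_age maps to the Max-Age attribute):
response.set_cookie('session', 'abc123', path='/', max_age=3600, httponly=True)

# Non-string value: encoded and stored as a signed "secure cookie".
response.set_cookie('prefs', {'theme': 'dark'})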
lrq3000/pyFileFixity | pyFileFixity/lib/gooey/python_bindings/source_parser.py | parse_source_file | def parse_source_file(file_name):
"""
Parses the AST of a Python file for lines containing
references to the argparse module.
Returns the collection of ast objects found.
Example client code:
1. parser = ArgumentParser(desc="My help Message")
2. parser.add_argument('filename', help="Name of the file to load")
3. parser.add_argument('-f', '--format', help='Format of output \nOptions: ['md', 'html']
4. args = parser.parse_args()
Variables:
* nodes Primary syntax tree object
* argparse_assignments The assignment of the ArgumentParser (line 1 in example code)
* add_arg_assignments Calls to add_argument() (lines 2-3 in example code)
* parser_var_name The instance variable of the ArgumentParser (line 1 in example code)
* ast_source The curated collection of all parser related nodes in the client code
"""
nodes = ast.parse(_openfile(file_name))
module_imports = get_nodes_by_instance_type(nodes, _ast.Import)
specific_imports = get_nodes_by_instance_type(nodes, _ast.ImportFrom)
assignment_objs = get_nodes_by_instance_type(nodes, _ast.Assign)
call_objects = get_nodes_by_instance_type(nodes, _ast.Call)
argparse_assignments = get_nodes_by_containing_attr(assignment_objs, 'ArgumentParser')
add_arg_assignments = get_nodes_by_containing_attr(call_objects, 'add_argument')
parse_args_assignment = get_nodes_by_containing_attr(call_objects, 'parse_args')
ast_argparse_source = chain(
module_imports,
specific_imports,
argparse_assignments,
add_arg_assignments
# parse_args_assignment
)
return ast_argparse_source | python | def parse_source_file(file_name):
"""
Parses the AST of a Python file for lines containing
references to the argparse module.
Returns the collection of ast objects found.
Example client code:
1. parser = ArgumentParser(desc="My help Message")
2. parser.add_argument('filename', help="Name of the file to load")
3. parser.add_argument('-f', '--format', help='Format of output \nOptions: ['md', 'html']
4. args = parser.parse_args()
Variables:
* nodes Primary syntax tree object
* argparse_assignments The assignment of the ArgumentParser (line 1 in example code)
* add_arg_assignments Calls to add_argument() (lines 2-3 in example code)
* parser_var_name The instance variable of the ArgumentParser (line 1 in example code)
* ast_source The curated collection of all parser related nodes in the client code
"""
nodes = ast.parse(_openfile(file_name))
module_imports = get_nodes_by_instance_type(nodes, _ast.Import)
specific_imports = get_nodes_by_instance_type(nodes, _ast.ImportFrom)
assignment_objs = get_nodes_by_instance_type(nodes, _ast.Assign)
call_objects = get_nodes_by_instance_type(nodes, _ast.Call)
argparse_assignments = get_nodes_by_containing_attr(assignment_objs, 'ArgumentParser')
add_arg_assignments = get_nodes_by_containing_attr(call_objects, 'add_argument')
parse_args_assignment = get_nodes_by_containing_attr(call_objects, 'parse_args')
ast_argparse_source = chain(
module_imports,
specific_imports,
argparse_assignments,
add_arg_assignments
# parse_args_assignment
)
return ast_argparse_source | Parses the AST of a Python file for lines containing
references to the argparse module.
Returns the collection of ast objects found.
Example client code:
1. parser = ArgumentParser(desc="My help Message")
2. parser.add_argument('filename', help="Name of the file to load")
3. parser.add_argument('-f', '--format', help='Format of output \nOptions: ['md', 'html']
4. args = parser.parse_args()
Variables:
* nodes Primary syntax tree object
* argparse_assignments The assignment of the ArgumentParser (line 1 in example code)
* add_arg_assignments Calls to add_argument() (lines 2-3 in example code)
* parser_var_name The instance variable of the ArgumentParser (line 1 in example code)
* ast_source The curated collection of all parser related nodes in the client code | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/gooey/python_bindings/source_parser.py#L19-L60 |
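A reduced, self-contained sketch of the same idea using only the standard ast module (the sample source is made up):

import ast
import textwrap

sample = textwrap.dedent('''
    from argparse import ArgumentParser
    parser = ArgumentParser(description="demo")
    parser.add_argument("filename", help="file to load")
    args = parser.parse_args()
''')

tree = ast.parse(sample)
calls = [node for node in ast.walk(tree) if isinstance(node, ast.Call)]
add_arg_calls = [node for node in calls
                 if isinstance(node.func, ast.Attribute) and node.func.attr == 'add_argument']
print(len(add_arg_calls))   # -> 1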
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py | memory_usage | def memory_usage(proc=-1, interval=.1, timeout=None, timestamps=False,
include_children=False, max_usage=False, retval=False,
stream=None):
"""
Return the memory usage of a process or piece of code
Parameters
----------
proc : {int, string, tuple, subprocess.Popen}, optional
The process to monitor. Can be given by an integer/string
representing a PID, by a Popen object or by a tuple
representing a Python function. The tuple contains three
values (f, args, kw) and specifies to run the function
f(*args, **kw).
Set to -1 (default) for current process.
interval : float, optional
Interval at which measurements are collected.
timeout : float, optional
Maximum amount of time (in seconds) to wait before returning.
max_usage : bool, optional
Only return the maximum memory usage (default False)
retval : bool, optional
For profiling python functions. Save the return value of the profiled
function. Return value of memory_usage becomes a tuple:
(mem_usage, retval)
timestamps : bool, optional
if True, timestamps of memory usage measurement are collected as well.
stream : File
if stream is a File opened with write access, then results are written
to this file instead of stored in memory and returned at the end of
the subprocess. Useful for long-running processes.
Implies timestamps=True.
Returns
-------
mem_usage : list of floating-point values
memory usage, in MiB. Its length is always < timeout / interval
if max_usage is given, returns two elements: the maximum memory and
the number of measurements taken
ret : return value of the profiled function
Only returned if retval is set to True
"""
if stream is not None:
timestamps = True
if not max_usage:
ret = []
else:
ret = -1
if timeout is not None:
max_iter = int(timeout / interval)
elif isinstance(proc, int):
# external process and no timeout
max_iter = 1
else:
# for a Python function wait until it finishes
max_iter = float('inf')
if hasattr(proc, '__call__'):
proc = (proc, (), {})
if isinstance(proc, (list, tuple)):
if len(proc) == 1:
f, args, kw = (proc[0], (), {})
elif len(proc) == 2:
f, args, kw = (proc[0], proc[1], {})
elif len(proc) == 3:
f, args, kw = (proc[0], proc[1], proc[2])
else:
raise ValueError
while True:
child_conn, parent_conn = Pipe() # this will store MemTimer's results
p = MemTimer(os.getpid(), interval, child_conn, timestamps=timestamps,
max_usage=max_usage, include_children=include_children)
p.start()
parent_conn.recv() # wait until we start getting memory
returned = f(*args, **kw)
parent_conn.send(0) # finish timing
ret = parent_conn.recv()
n_measurements = parent_conn.recv()
if retval:
ret = ret, returned
p.join(5 * interval)
if n_measurements > 4 or interval < 1e-6:
break
interval /= 10.
elif isinstance(proc, subprocess.Popen):
# external process, launched from Python
line_count = 0
while True:
if not max_usage:
mem_usage = _get_memory(proc.pid, timestamps=timestamps,
include_children=include_children)
if stream is not None:
stream.write("MEM {0:.6f} {1:.4f}\n".format(*mem_usage))
else:
ret.append(mem_usage)
else:
ret = max([ret,
_get_memory(proc.pid,
include_children=include_children)])
time.sleep(interval)
line_count += 1
# flush every 50 lines. Make 'tail -f' usable on profile file
if line_count > 50:
line_count = 0
if stream is not None:
stream.flush()
if timeout is not None:
max_iter -= 1
if max_iter == 0:
break
if proc.poll() is not None:
break
else:
# external process
if max_iter == -1:
max_iter = 1
counter = 0
while counter < max_iter:
counter += 1
if not max_usage:
mem_usage = _get_memory(proc, timestamps=timestamps,
include_children=include_children)
if stream is not None:
stream.write("MEM {0:.6f} {1:.4f}\n".format(*mem_usage))
else:
ret.append(mem_usage)
else:
ret = max([ret,
_get_memory(proc, include_children=include_children)
])
time.sleep(interval)
# Flush every 50 lines.
if counter % 50 == 0 and stream is not None:
stream.flush()
if stream:
return None
return ret | python | def memory_usage(proc=-1, interval=.1, timeout=None, timestamps=False,
include_children=False, max_usage=False, retval=False,
stream=None):
"""
Return the memory usage of a process or piece of code
Parameters
----------
proc : {int, string, tuple, subprocess.Popen}, optional
The process to monitor. Can be given by an integer/string
representing a PID, by a Popen object or by a tuple
representing a Python function. The tuple contains three
values (f, args, kw) and specifies to run the function
f(*args, **kw).
Set to -1 (default) for current process.
interval : float, optional
Interval at which measurements are collected.
timeout : float, optional
Maximum amount of time (in seconds) to wait before returning.
max_usage : bool, optional
Only return the maximum memory usage (default False)
retval : bool, optional
For profiling python functions. Save the return value of the profiled
function. Return value of memory_usage becomes a tuple:
(mem_usage, retval)
timestamps : bool, optional
if True, timestamps of memory usage measurement are collected as well.
stream : File
if stream is a File opened with write access, then results are written
to this file instead of stored in memory and returned at the end of
the subprocess. Useful for long-running processes.
Implies timestamps=True.
Returns
-------
mem_usage : list of floating-point values
memory usage, in MiB. Its length is always < timeout / interval
if max_usage is given, returns two elements: the maximum memory and
the number of measurements taken
ret : return value of the profiled function
Only returned if retval is set to True
"""
if stream is not None:
timestamps = True
if not max_usage:
ret = []
else:
ret = -1
if timeout is not None:
max_iter = int(timeout / interval)
elif isinstance(proc, int):
# external process and no timeout
max_iter = 1
else:
# for a Python function wait until it finishes
max_iter = float('inf')
if hasattr(proc, '__call__'):
proc = (proc, (), {})
if isinstance(proc, (list, tuple)):
if len(proc) == 1:
f, args, kw = (proc[0], (), {})
elif len(proc) == 2:
f, args, kw = (proc[0], proc[1], {})
elif len(proc) == 3:
f, args, kw = (proc[0], proc[1], proc[2])
else:
raise ValueError
while True:
child_conn, parent_conn = Pipe() # this will store MemTimer's results
p = MemTimer(os.getpid(), interval, child_conn, timestamps=timestamps,
max_usage=max_usage, include_children=include_children)
p.start()
parent_conn.recv() # wait until we start getting memory
returned = f(*args, **kw)
parent_conn.send(0) # finish timing
ret = parent_conn.recv()
n_measurements = parent_conn.recv()
if retval:
ret = ret, returned
p.join(5 * interval)
if n_measurements > 4 or interval < 1e-6:
break
interval /= 10.
elif isinstance(proc, subprocess.Popen):
# external process, launched from Python
line_count = 0
while True:
if not max_usage:
mem_usage = _get_memory(proc.pid, timestamps=timestamps,
include_children=include_children)
if stream is not None:
stream.write("MEM {0:.6f} {1:.4f}\n".format(*mem_usage))
else:
ret.append(mem_usage)
else:
ret = max([ret,
_get_memory(proc.pid,
include_children=include_children)])
time.sleep(interval)
line_count += 1
# flush every 50 lines. Make 'tail -f' usable on profile file
if line_count > 50:
line_count = 0
if stream is not None:
stream.flush()
if timeout is not None:
max_iter -= 1
if max_iter == 0:
break
if proc.poll() is not None:
break
else:
# external process
if max_iter == -1:
max_iter = 1
counter = 0
while counter < max_iter:
counter += 1
if not max_usage:
mem_usage = _get_memory(proc, timestamps=timestamps,
include_children=include_children)
if stream is not None:
stream.write("MEM {0:.6f} {1:.4f}\n".format(*mem_usage))
else:
ret.append(mem_usage)
else:
ret = max([ret,
_get_memory(proc, include_children=include_children)
])
time.sleep(interval)
# Flush every 50 lines.
if counter % 50 == 0 and stream is not None:
stream.flush()
if stream:
return None
return ret | Return the memory usage of a process or piece of code
Parameters
----------
proc : {int, string, tuple, subprocess.Popen}, optional
The process to monitor. Can be given by an integer/string
representing a PID, by a Popen object or by a tuple
representing a Python function. The tuple contains three
values (f, args, kw) and specifies to run the function
f(*args, **kw).
Set to -1 (default) for current process.
interval : float, optional
Interval at which measurements are collected.
timeout : float, optional
Maximum amount of time (in seconds) to wait before returning.
max_usage : bool, optional
Only return the maximum memory usage (default False)
retval : bool, optional
For profiling python functions. Save the return value of the profiled
function. Return value of memory_usage becomes a tuple:
(mem_usage, retval)
timestamps : bool, optional
if True, timestamps of memory usage measurement are collected as well.
stream : File
if stream is a File opened with write access, then results are written
to this file instead of stored in memory and returned at the end of
the subprocess. Useful for long-running processes.
Implies timestamps=True.
Returns
-------
mem_usage : list of floating-point values
memory usage, in MiB. Its length is always < timeout / interval
if max_usage is given, returns two elements: the maximum memory and
the number of measurements taken
ret : return value of the profiled function
Only returned if retval is set to True | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py#L144-L290 |
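Two usage sketches (the standalone memory_profiler package exposes the same memory_usage() function; the profiled callable and its argument are examples):

from memory_profiler import memory_usage

# Sample the current process every 0.2 s for about one second:
samples = memory_usage(-1, interval=0.2, timeout=1)

# Profile a Python callable and also keep its return value:
def build(n):
    return [0] * n

mem_samples, result = memory_usage((build, (10**6,), {}), retval=True)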
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py | _find_script | def _find_script(script_name):
""" Find the script.
If the input is not a file, then $PATH will be searched.
"""
if os.path.isfile(script_name):
return script_name
path = os.getenv('PATH', os.defpath).split(os.pathsep)
for folder in path:
if not folder:
continue
fn = os.path.join(folder, script_name)
if os.path.isfile(fn):
return fn
sys.stderr.write('Could not find script {0}\n'.format(script_name))
raise SystemExit(1) | python | def _find_script(script_name):
""" Find the script.
If the input is not a file, then $PATH will be searched.
"""
if os.path.isfile(script_name):
return script_name
path = os.getenv('PATH', os.defpath).split(os.pathsep)
for folder in path:
if not folder:
continue
fn = os.path.join(folder, script_name)
if os.path.isfile(fn):
return fn
sys.stderr.write('Could not find script {0}\n'.format(script_name))
raise SystemExit(1) | Find the script.
If the input is not a file, then $PATH will be searched. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py#L296-L312 |
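For comparison, the standard library offers a similar $PATH lookup; shutil.which() returns None instead of exiting when nothing is found:

import shutil

path = shutil.which('python3')       # absolute path, or None
if path is None:
    raise SystemExit('script not found on $PATH')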
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py | magic_mprun | def magic_mprun(self, parameter_s=''):
""" Execute a statement under the line-by-line memory profiler from the
memory_profiler module.
Usage:
%mprun -f func1 -f func2 <statement>
The given statement (which doesn't require quote marks) is run via the
LineProfiler. Profiling is enabled for the functions specified by the -f
options. The statistics will be shown side-by-side with the code through
the pager once the statement has completed.
Options:
-f <function>: LineProfiler only profiles functions and methods it is told
to profile. This option tells the profiler about these functions. Multiple
-f options may be used. The argument may be any expression that gives
a Python function or method object. However, one must be careful to avoid
spaces that may confuse the option parser. Additionally, functions defined
in the interpreter at the In[] prompt or via %run currently cannot be
displayed. Write these functions out to a separate file and import them.
One or more -f options are required to get any useful results.
-T <filename>: dump the text-formatted statistics with the code
side-by-side out to a text file.
-r: return the LineProfiler object after it has completed profiling.
-c: If present, add the memory usage of any children process to the report.
"""
try:
from StringIO import StringIO
except ImportError: # Python 3.x
from io import StringIO
# Local imports to avoid hard dependency.
from distutils.version import LooseVersion
import IPython
ipython_version = LooseVersion(IPython.__version__)
if ipython_version < '0.11':
from IPython.genutils import page
from IPython.ipstruct import Struct
from IPython.ipapi import UsageError
else:
from IPython.core.page import page
from IPython.utils.ipstruct import Struct
from IPython.core.error import UsageError
# Escape quote markers.
opts_def = Struct(T=[''], f=[])
parameter_s = parameter_s.replace('"', r'\"').replace("'", r"\'")
opts, arg_str = self.parse_options(parameter_s, 'rf:T:c', list_all=True)
opts.merge(opts_def)
global_ns = self.shell.user_global_ns
local_ns = self.shell.user_ns
# Get the requested functions.
funcs = []
for name in opts.f:
try:
funcs.append(eval(name, global_ns, local_ns))
except Exception as e:
raise UsageError('Could not find function %r.\n%s: %s' % (name,
e.__class__.__name__, e))
include_children = 'c' in opts
profile = LineProfiler(include_children=include_children)
for func in funcs:
profile(func)
# Add the profiler to the builtins for @profile.
try:
import builtins
except ImportError: # Python 2.x
import __builtin__ as builtins
if 'profile' in builtins.__dict__:
had_profile = True
old_profile = builtins.__dict__['profile']
else:
had_profile = False
old_profile = None
builtins.__dict__['profile'] = profile
try:
try:
profile.runctx(arg_str, global_ns, local_ns)
message = ''
except SystemExit:
message = "*** SystemExit exception caught in code being profiled."
except KeyboardInterrupt:
message = ("*** KeyboardInterrupt exception caught in code being "
"profiled.")
finally:
if had_profile:
builtins.__dict__['profile'] = old_profile
# Trap text output.
stdout_trap = StringIO()
show_results(profile, stdout_trap)
output = stdout_trap.getvalue()
output = output.rstrip()
if ipython_version < '0.11':
page(output, screen_lines=self.shell.rc.screen_length)
else:
page(output)
print(message,)
text_file = opts.T[0]
if text_file:
with open(text_file, 'w') as pfile:
pfile.write(output)
print('\n*** Profile printout saved to text file %s. %s' % (text_file,
message))
return_value = None
if 'r' in opts:
return_value = profile
return return_value | python | def magic_mprun(self, parameter_s=''):
""" Execute a statement under the line-by-line memory profiler from the
memory_profiler module.
Usage:
%mprun -f func1 -f func2 <statement>
The given statement (which doesn't require quote marks) is run via the
LineProfiler. Profiling is enabled for the functions specified by the -f
options. The statistics will be shown side-by-side with the code through
the pager once the statement has completed.
Options:
-f <function>: LineProfiler only profiles functions and methods it is told
to profile. This option tells the profiler about these functions. Multiple
-f options may be used. The argument may be any expression that gives
a Python function or method object. However, one must be careful to avoid
spaces that may confuse the option parser. Additionally, functions defined
in the interpreter at the In[] prompt or via %run currently cannot be
displayed. Write these functions out to a separate file and import them.
One or more -f options are required to get any useful results.
-T <filename>: dump the text-formatted statistics with the code
side-by-side out to a text file.
-r: return the LineProfiler object after it has completed profiling.
-c: If present, add the memory usage of any children process to the report.
"""
try:
from StringIO import StringIO
except ImportError: # Python 3.x
from io import StringIO
# Local imports to avoid hard dependency.
from distutils.version import LooseVersion
import IPython
ipython_version = LooseVersion(IPython.__version__)
if ipython_version < '0.11':
from IPython.genutils import page
from IPython.ipstruct import Struct
from IPython.ipapi import UsageError
else:
from IPython.core.page import page
from IPython.utils.ipstruct import Struct
from IPython.core.error import UsageError
# Escape quote markers.
opts_def = Struct(T=[''], f=[])
parameter_s = parameter_s.replace('"', r'\"').replace("'", r"\'")
opts, arg_str = self.parse_options(parameter_s, 'rf:T:c', list_all=True)
opts.merge(opts_def)
global_ns = self.shell.user_global_ns
local_ns = self.shell.user_ns
# Get the requested functions.
funcs = []
for name in opts.f:
try:
funcs.append(eval(name, global_ns, local_ns))
except Exception as e:
raise UsageError('Could not find function %r.\n%s: %s' % (name,
e.__class__.__name__, e))
include_children = 'c' in opts
profile = LineProfiler(include_children=include_children)
for func in funcs:
profile(func)
# Add the profiler to the builtins for @profile.
try:
import builtins
except ImportError: # Python 2.x
import __builtin__ as builtins
if 'profile' in builtins.__dict__:
had_profile = True
old_profile = builtins.__dict__['profile']
else:
had_profile = False
old_profile = None
builtins.__dict__['profile'] = profile
try:
try:
profile.runctx(arg_str, global_ns, local_ns)
message = ''
except SystemExit:
message = "*** SystemExit exception caught in code being profiled."
except KeyboardInterrupt:
message = ("*** KeyboardInterrupt exception caught in code being "
"profiled.")
finally:
if had_profile:
builtins.__dict__['profile'] = old_profile
# Trap text output.
stdout_trap = StringIO()
show_results(profile, stdout_trap)
output = stdout_trap.getvalue()
output = output.rstrip()
if ipython_version < '0.11':
page(output, screen_lines=self.shell.rc.screen_length)
else:
page(output)
print(message,)
text_file = opts.T[0]
if text_file:
with open(text_file, 'w') as pfile:
pfile.write(output)
print('\n*** Profile printout saved to text file %s. %s' % (text_file,
message))
return_value = None
if 'r' in opts:
return_value = profile
return return_value | Execute a statement under the line-by-line memory profiler from the
memory_profiler module.
Usage:
%mprun -f func1 -f func2 <statement>
The given statement (which doesn't require quote marks) is run via the
LineProfiler. Profiling is enabled for the functions specified by the -f
options. The statistics will be shown side-by-side with the code through
the pager once the statement has completed.
Options:
-f <function>: LineProfiler only profiles functions and methods it is told
to profile. This option tells the profiler about these functions. Multiple
-f options may be used. The argument may be any expression that gives
a Python function or method object. However, one must be careful to avoid
spaces that may confuse the option parser. Additionally, functions defined
in the interpreter at the In[] prompt or via %run currently cannot be
displayed. Write these functions out to a separate file and import them.
One or more -f options are required to get any useful results.
-T <filename>: dump the text-formatted statistics with the code
side-by-side out to a text file.
-r: return the LineProfiler object after it has completed profiling.
-c: If present, add the memory usage of any children process to the report. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py#L580-L701 |
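A typical IPython session using this magic (module and function names are examples; as the docstring notes, the profiled functions must live in an importable file):

# In IPython / Jupyter, after the extension is loaded:
#   %load_ext memory_profiler
#   from mymodule import my_func
#   %mprun -f my_func my_func(1000)                  # line-by-line memory report in the pager
#   %mprun -T report.txt -f my_func my_func(1000)    # additionally save the report to report.txt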
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py | magic_memit | def magic_memit(self, line=''):
"""Measure memory usage of a Python statement
Usage, in line mode:
%memit [-r<R>t<T>i<I>] statement
Options:
-r<R>: repeat the loop iteration <R> times and take the best result.
Default: 1
-t<T>: timeout after <T> seconds. Default: None
-i<I>: Get time information at an interval of I times per second.
Defaults to 0.1 so that there are ten measurements per second.
-c: If present, add the memory usage of any children process to the report.
Examples
--------
::
In [1]: import numpy as np
In [2]: %memit np.zeros(1e7)
maximum of 1: 76.402344 MiB per loop
In [3]: %memit np.ones(1e6)
maximum of 1: 7.820312 MiB per loop
In [4]: %memit -r 10 np.empty(1e8)
maximum of 10: 0.101562 MiB per loop
"""
opts, stmt = self.parse_options(line, 'r:t:i:c', posix=False, strict=False)
repeat = int(getattr(opts, 'r', 1))
if repeat < 1:
repeat = 1
timeout = int(getattr(opts, 't', 0))
if timeout <= 0:
timeout = None
interval = float(getattr(opts, 'i', 0.1))
include_children = 'c' in opts
# I've noticed we get less noisy measurements if we run
# a garbage collection first
import gc
gc.collect()
mem_usage = 0
counter = 0
baseline = memory_usage()[0]
while counter < repeat:
counter += 1
tmp = memory_usage((_func_exec, (stmt, self.shell.user_ns)),
timeout=timeout, interval=interval, max_usage=True,
include_children=include_children)
mem_usage = max(mem_usage, tmp[0])
if mem_usage:
print('peak memory: %.02f MiB, increment: %.02f MiB' %
(mem_usage, mem_usage - baseline))
else:
print('ERROR: could not read memory usage, try with a lower interval '
'or more iterations') | python | def magic_memit(self, line=''):
"""Measure memory usage of a Python statement
Usage, in line mode:
%memit [-r<R>t<T>i<I>] statement
Options:
-r<R>: repeat the loop iteration <R> times and take the best result.
Default: 1
-t<T>: timeout after <T> seconds. Default: None
-i<I>: Get memory usage readings at an interval of <I> seconds.
Defaults to 0.1, so that there are ten measurements per second.
-c: If present, add the memory usage of any children process to the report.
Examples
--------
::
In [1]: import numpy as np
In [2]: %memit np.zeros(1e7)
maximum of 1: 76.402344 MiB per loop
In [3]: %memit np.ones(1e6)
maximum of 1: 7.820312 MiB per loop
In [4]: %memit -r 10 np.empty(1e8)
maximum of 10: 0.101562 MiB per loop
"""
opts, stmt = self.parse_options(line, 'r:t:i:c', posix=False, strict=False)
repeat = int(getattr(opts, 'r', 1))
if repeat < 1:
repeat = 1
timeout = int(getattr(opts, 't', 0))
if timeout <= 0:
timeout = None
interval = float(getattr(opts, 'i', 0.1))
include_children = 'c' in opts
# I've noticed we get less noisy measurements if we run
# a garbage collection first
import gc
gc.collect()
mem_usage = 0
counter = 0
baseline = memory_usage()[0]
while counter < repeat:
counter += 1
tmp = memory_usage((_func_exec, (stmt, self.shell.user_ns)),
timeout=timeout, interval=interval, max_usage=True,
include_children=include_children)
mem_usage = max(mem_usage, tmp[0])
if mem_usage:
print('peak memory: %.02f MiB, increment: %.02f MiB' %
(mem_usage, mem_usage - baseline))
else:
print('ERROR: could not read memory usage, try with a lower interval '
'or more iterations') | Measure memory usage of a Python statement
Usage, in line mode:
%memit [-r<R>t<T>i<I>] statement
Options:
-r<R>: repeat the loop iteration <R> times and take the best result.
Default: 1
-t<T>: timeout after <T> seconds. Default: None
-i<I>: Get memory usage readings at an interval of <I> seconds.
Defaults to 0.1, so that there are ten measurements per second.
-c: If present, add the memory usage of any children process to the report.
Examples
--------
::
In [1]: import numpy as np
In [2]: %memit np.zeros(1e7)
maximum of 1: 76.402344 MiB per loop
In [3]: %memit np.ones(1e6)
maximum of 1: 7.820312 MiB per loop
In [4]: %memit -r 10 np.empty(1e8)
maximum of 10: 0.101562 MiB per loop | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py#L712-L775 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py | profile | def profile(func, stream=None):
"""
Decorator that will run the function and print a line-by-line profile
"""
def wrapper(*args, **kwargs):
prof = LineProfiler()
val = prof(func)(*args, **kwargs)
show_results(prof, stream=stream)
return val
return wrapper | python | def profile(func, stream=None):
"""
Decorator that will run the function and print a line-by-line profile
"""
def wrapper(*args, **kwargs):
prof = LineProfiler()
val = prof(func)(*args, **kwargs)
show_results(prof, stream=stream)
return val
return wrapper | Decorator that will run the function and print a line-by-line profile | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py#L784-L793 |
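A minimal sketch of applying the profile decorator defined above; the profiled function and its allocation are made up for illustration, and the import path is an assumption:
from memory_profiler import profile      # assuming the decorator above is importable this way

@profile
def allocate():                          # hypothetical function to profile
    data = [0] * (10 ** 6)               # allocate a large list
    return sum(data)

allocate()                               # prints the line-by-line report via show_results()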
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py | TimeStamper.timestamp | def timestamp(self, name="<block>"):
"""Returns a context manager for timestamping a block of code."""
# Make a fake function
func = lambda x: x
func.__module__ = ""
func.__name__ = name
self.add_function(func)
timestamps = []
self.functions[func].append(timestamps)
# A new object is required each time, since there can be several
# nested context managers.
return _TimeStamperCM(timestamps) | python | def timestamp(self, name="<block>"):
"""Returns a context manager for timestamping a block of code."""
# Make a fake function
func = lambda x: x
func.__module__ = ""
func.__name__ = name
self.add_function(func)
timestamps = []
self.functions[func].append(timestamps)
# A new object is required each time, since there can be several
# nested context managers.
return _TimeStamperCM(timestamps) | Returns a context manager for timestamping a block of code. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py#L347-L358 |
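A hypothetical sketch of the timestamping context manager above; the TimeStamper constructor and the rest of its API are not shown in this listing, so the setup is an assumption:
ts = TimeStamper()                        # assumed no-argument constructor
with ts.timestamp("build squares"):       # records memory timestamps around the block
    squares = [i * i for i in range(10 ** 6)]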
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py | TimeStamper.wrap_function | def wrap_function(self, func):
""" Wrap a function to timestamp it.
"""
def f(*args, **kwds):
# Start time
timestamps = [_get_memory(os.getpid(), timestamps=True)]
self.functions[func].append(timestamps)
try:
result = func(*args, **kwds)
finally:
# end time
timestamps.append(_get_memory(os.getpid(), timestamps=True))
return result
return f | python | def wrap_function(self, func):
""" Wrap a function to timestamp it.
"""
def f(*args, **kwds):
# Start time
timestamps = [_get_memory(os.getpid(), timestamps=True)]
self.functions[func].append(timestamps)
try:
result = func(*args, **kwds)
finally:
# end time
timestamps.append(_get_memory(os.getpid(), timestamps=True))
return result
return f | Wrap a function to timestamp it. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py#L364-L377 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py | LineProfiler.add_function | def add_function(self, func):
""" Record line profiling information for the given Python function.
"""
try:
# func_code does not exist in Python3
code = func.__code__
except AttributeError:
warnings.warn("Could not extract a code object for the object %r"
% func)
else:
self.add_code(code) | python | def add_function(self, func):
""" Record line profiling information for the given Python function.
"""
try:
# func_code does not exist in Python3
code = func.__code__
except AttributeError:
warnings.warn("Could not extract a code object for the object %r"
% func)
else:
self.add_code(code) | Record line profiling information for the given Python function. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py#L416-L426 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py | LineProfiler.run | def run(self, cmd):
""" Profile a single executable statement in the main namespace.
"""
# TODO: can this be removed ?
import __main__
main_dict = __main__.__dict__
return self.runctx(cmd, main_dict, main_dict) | python | def run(self, cmd):
""" Profile a single executable statement in the main namespace.
"""
# TODO: can this be removed ?
import __main__
main_dict = __main__.__dict__
return self.runctx(cmd, main_dict, main_dict) | Profile a single executable statement in the main namespace. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py#L441-L447 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py | LineProfiler.trace_memory_usage | def trace_memory_usage(self, frame, event, arg):
"""Callback for sys.settrace"""
if (event in ('call', 'line', 'return')
and frame.f_code in self.code_map):
if event != 'call':
# "call" event just saves the lineno but not the memory
mem = _get_memory(-1, include_children=self.include_children)
# if there is already a measurement for that line get the max
old_mem = self.code_map[frame.f_code].get(self.prevline, 0)
self.code_map[frame.f_code][self.prevline] = max(mem, old_mem)
self.prevline = frame.f_lineno
if self._original_trace_function is not None:
(self._original_trace_function)(frame, event, arg)
return self.trace_memory_usage | python | def trace_memory_usage(self, frame, event, arg):
"""Callback for sys.settrace"""
if (event in ('call', 'line', 'return')
and frame.f_code in self.code_map):
if event != 'call':
# "call" event just saves the lineno but not the memory
mem = _get_memory(-1, include_children=self.include_children)
# if there is already a measurement for that line get the max
old_mem = self.code_map[frame.f_code].get(self.prevline, 0)
self.code_map[frame.f_code][self.prevline] = max(mem, old_mem)
self.prevline = frame.f_lineno
if self._original_trace_function is not None:
(self._original_trace_function)(frame, event, arg)
return self.trace_memory_usage | Callback for sys.settrace | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/memory_profiler/memory_profiler.py#L475-L490 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py | PStatsLoader.get_root | def get_root( self, key ):
"""Retrieve a given declared root by root-type-key"""
if key not in self.roots:
function = getattr( self, 'load_%s'%(key,) )()
self.roots[key] = function
return self.roots[key] | python | def get_root( self, key ):
"""Retrieve a given declared root by root-type-key"""
if key not in self.roots:
function = getattr( self, 'load_%s'%(key,) )()
self.roots[key] = function
return self.roots[key] | Retrieve a given declared root by root-type-key | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py#L21-L26 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py | PStatsLoader.get_rows | def get_rows( self, key ):
"""Get the set of rows for the type-key"""
if key not in self.roots:
self.get_root( key )
if key == 'location':
return self.location_rows
else:
return self.rows | python | def get_rows( self, key ):
"""Get the set of rows for the type-key"""
if key not in self.roots:
self.get_root( key )
if key == 'location':
return self.location_rows
else:
return self.rows | Get the set of rows for the type-key | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py#L27-L34 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py | PStatsLoader.load | def load( self, stats ):
"""Build a squaremap-compatible model from a pstats class"""
rows = self.rows
for func, raw in stats.iteritems():
try:
rows[func] = row = PStatRow( func,raw )
except ValueError, err:
log.info( 'Null row: %s', func )
for row in rows.itervalues():
row.weave( rows )
return self.find_root( rows ) | python | def load( self, stats ):
"""Build a squaremap-compatible model from a pstats class"""
rows = self.rows
for func, raw in stats.iteritems():
try:
rows[func] = row = PStatRow( func,raw )
except ValueError, err:
log.info( 'Null row: %s', func )
for row in rows.itervalues():
row.weave( rows )
return self.find_root( rows ) | Build a squaremap-compatible model from a pstats class | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py#L44-L54 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py | PStatsLoader.find_root | def find_root( self, rows ):
"""Attempt to find/create a reasonable root node from list/set of rows
rows -- key: PStatRow mapping
TODO: still need more robustness here, particularly in the case of
threaded programs. Should be tracing back each row to root, breaking
cycles by sorting on cumulative time, and then collecting the traced
roots (or, if they are all on the same root, use that).
"""
maxes = sorted( rows.values(), key = lambda x: x.cumulative )
if not maxes:
raise RuntimeError( """Null results!""" )
root = maxes[-1]
roots = [root]
for key,value in rows.items():
if not value.parents:
log.debug( 'Found node root: %s', value )
if value not in roots:
roots.append( value )
if len(roots) > 1:
root = PStatGroup(
directory='*',
filename='*',
name=_("<profiling run>"),
children= roots,
)
root.finalize()
self.rows[ root.key ] = root
self.roots['functions'] = root
return root | python | def find_root( self, rows ):
"""Attempt to find/create a reasonable root node from list/set of rows
rows -- key: PStatRow mapping
TODO: still need more robustness here, particularly in the case of
threaded programs. Should be tracing back each row to root, breaking
cycles by sorting on cumulative time, and then collecting the traced
roots (or, if they are all on the same root, use that).
"""
maxes = sorted( rows.values(), key = lambda x: x.cumulative )
if not maxes:
raise RuntimeError( """Null results!""" )
root = maxes[-1]
roots = [root]
for key,value in rows.items():
if not value.parents:
log.debug( 'Found node root: %s', value )
if value not in roots:
roots.append( value )
if len(roots) > 1:
root = PStatGroup(
directory='*',
filename='*',
name=_("<profiling run>"),
children= roots,
)
root.finalize()
self.rows[ root.key ] = root
self.roots['functions'] = root
return root | Attempt to find/create a reasonable root node from list/set of rows
rows -- key: PStatRow mapping
TODO: still need more robustness here, particularly in the case of
threaded programs. Should be tracing back each row to root, breaking
cycles by sorting on cumulative time, and then collecting the traced
roots (or, if they are all on the same root, use that). | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py#L61-L91 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py | PStatsLoader._load_location | def _load_location( self ):
"""Build a squaremap-compatible model for location-based hierarchy"""
directories = {}
files = {}
root = PStatLocation( '/', 'PYTHONPATH' )
self.location_rows = self.rows.copy()
for child in self.rows.values():
current = directories.get( child.directory )
directory, filename = child.directory, child.filename
if current is None:
if directory == '':
current = root
else:
current = PStatLocation( directory, '' )
self.location_rows[ current.key ] = current
directories[ directory ] = current
if filename == '~':
filename = '<built-in>'
file_current = files.get( (directory,filename) )
if file_current is None:
file_current = PStatLocation( directory, filename )
self.location_rows[ file_current.key ] = file_current
files[ (directory,filename) ] = file_current
current.children.append( file_current )
file_current.children.append( child )
# now link the directories...
for key,value in directories.items():
if value is root:
continue
found = False
while key:
new_key,rest = os.path.split( key )
if new_key == key:
break
key = new_key
parent = directories.get( key )
if parent:
if value is not parent:
parent.children.append( value )
found = True
break
if not found:
root.children.append( value )
# lastly, finalize all of the directory records...
root.finalize()
return root | python | def _load_location( self ):
"""Build a squaremap-compatible model for location-based hierarchy"""
directories = {}
files = {}
root = PStatLocation( '/', 'PYTHONPATH' )
self.location_rows = self.rows.copy()
for child in self.rows.values():
current = directories.get( child.directory )
directory, filename = child.directory, child.filename
if current is None:
if directory == '':
current = root
else:
current = PStatLocation( directory, '' )
self.location_rows[ current.key ] = current
directories[ directory ] = current
if filename == '~':
filename = '<built-in>'
file_current = files.get( (directory,filename) )
if file_current is None:
file_current = PStatLocation( directory, filename )
self.location_rows[ file_current.key ] = file_current
files[ (directory,filename) ] = file_current
current.children.append( file_current )
file_current.children.append( child )
# now link the directories...
for key,value in directories.items():
if value is root:
continue
found = False
while key:
new_key,rest = os.path.split( key )
if new_key == key:
break
key = new_key
parent = directories.get( key )
if parent:
if value is not parent:
parent.children.append( value )
found = True
break
if not found:
root.children.append( value )
# lastly, finalize all of the directory records...
root.finalize()
return root | Build a squaremap-compatible model for location-based hierarchy | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py#L97-L142 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py | PStatGroup.finalize | def finalize( self, already_done=None ):
"""Finalize our values (recursively) taken from our children"""
if already_done is None:
already_done = {}
if already_done.has_key( self ):
return True
already_done[self] = True
self.filter_children()
children = self.children
for child in children:
if hasattr( child, 'finalize' ):
child.finalize( already_done)
child.parents.append( self )
self.calculate_totals( self.children, self.local_children ) | python | def finalize( self, already_done=None ):
"""Finalize our values (recursively) taken from our children"""
if already_done is None:
already_done = {}
if already_done.has_key( self ):
return True
already_done[self] = True
self.filter_children()
children = self.children
for child in children:
if hasattr( child, 'finalize' ):
child.finalize( already_done)
child.parents.append( self )
self.calculate_totals( self.children, self.local_children ) | Finalize our values (recursively) taken from our children | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py#L230-L243 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py | PStatGroup.calculate_totals | def calculate_totals( self, children, local_children=None ):
"""Calculate our cumulative totals from children and/or local children"""
for field,local_field in (('recursive','calls'),('cumulative','local')):
values = []
for child in children:
if isinstance( child, PStatGroup ) or not self.LOCAL_ONLY:
values.append( getattr( child, field, 0 ) )
elif isinstance( child, PStatRow ) and self.LOCAL_ONLY:
values.append( getattr( child, local_field, 0 ) )
value = sum( values )
setattr( self, field, value )
if self.recursive:
self.cumulativePer = self.cumulative/float(self.recursive)
else:
self.recursive = 0
if local_children:
for field in ('local','calls'):
value = sum([ getattr( child, field, 0 ) for child in children] )
setattr( self, field, value )
if self.calls:
self.localPer = self.local / self.calls
else:
self.local = 0
self.calls = 0
self.localPer = 0 | python | def calculate_totals( self, children, local_children=None ):
"""Calculate our cumulative totals from children and/or local children"""
for field,local_field in (('recursive','calls'),('cumulative','local')):
values = []
for child in children:
if isinstance( child, PStatGroup ) or not self.LOCAL_ONLY:
values.append( getattr( child, field, 0 ) )
elif isinstance( child, PStatRow ) and self.LOCAL_ONLY:
values.append( getattr( child, local_field, 0 ) )
value = sum( values )
setattr( self, field, value )
if self.recursive:
self.cumulativePer = self.cumulative/float(self.recursive)
else:
self.recursive = 0
if local_children:
for field in ('local','calls'):
value = sum([ getattr( child, field, 0 ) for child in children] )
setattr( self, field, value )
if self.calls:
self.localPer = self.local / self.calls
else:
self.local = 0
self.calls = 0
self.localPer = 0 | Calculate our cumulative totals from children and/or local children | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py#L246-L270 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py | PStatLocation.filter_children | def filter_children( self ):
"""Filter our children into regular and local children sets"""
real_children = []
for child in self.children:
if child.name == '<module>':
self.local_children.append( child )
else:
real_children.append( child )
self.children = real_children | python | def filter_children( self ):
"""Filter our children into regular and local children sets"""
real_children = []
for child in self.children:
if child.name == '<module>':
self.local_children.append( child )
else:
real_children.append( child )
self.children = real_children | Filter our children into regular and local children sets | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/pstatsloader.py#L284-L292 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | split_box | def split_box( fraction, x,y, w,h ):
"""Return set of two boxes where first is the fraction given"""
if w >= h:
new_w = int(w*fraction)
if new_w:
return (x,y,new_w,h),(x+new_w,y,w-new_w,h)
else:
return None,None
else:
new_h = int(h*fraction)
if new_h:
return (x,y,w,new_h),(x,y+new_h,w,h-new_h)
else:
return None,None | python | def split_box( fraction, x,y, w,h ):
"""Return set of two boxes where first is the fraction given"""
if w >= h:
new_w = int(w*fraction)
if new_w:
return (x,y,new_w,h),(x+new_w,y,w-new_w,h)
else:
return None,None
else:
new_h = int(h*fraction)
if new_h:
return (x,y,w,new_h),(x,y+new_h,w,h-new_h)
else:
return None,None | Return set of two boxes where first is the fraction given | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L418-L431 |
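A quick worked example of split_box, with the box dimensions chosen for illustration:
# A 100x50 box is wider than tall, so the split happens along the x axis.
head, tail = split_box(0.6, 0, 0, 100, 50)
# head == (0, 0, 60, 50)    -> first 60% of the width
# tail == (60, 0, 40, 50)   -> remaining 40%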
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | split_by_value | def split_by_value( total, nodes, headdivisor=2.0 ):
"""Produce, (sum,head),(sum,tail) for nodes to attempt binary partition"""
head_sum,tail_sum = 0,0
divider = 0
for node in nodes[::-1]:
if head_sum < total/headdivisor:
head_sum += node[0]
divider -= 1
else:
break
return (head_sum,nodes[divider:]),(total-head_sum,nodes[:divider]) | python | def split_by_value( total, nodes, headdivisor=2.0 ):
"""Produce, (sum,head),(sum,tail) for nodes to attempt binary partition"""
head_sum,tail_sum = 0,0
divider = 0
for node in nodes[::-1]:
if head_sum < total/headdivisor:
head_sum += node[0]
divider -= 1
else:
break
return (head_sum,nodes[divider:]),(total-head_sum,nodes[:divider]) | Produce, (sum,head),(sum,tail) for nodes to attempt binary partition | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L433-L443 |
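A small worked example of split_by_value; the (value, node) pairs are illustrative and sorted ascending, as LayoutChildren prepares them:
nodes = [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]
(head_sum, head), (tail_sum, tail) = split_by_value(10, nodes)
# head_sum == 7, head == [(3, 'c'), (4, 'd')]   -> largest nodes until roughly half the total
# tail_sum == 3, tail == [(1, 'a'), (2, 'b')]   -> the remainder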
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | HotMapNavigator.findNode | def findNode(class_, hot_map, targetNode, parentNode=None):
''' Find the target node in the hot_map. '''
for index, (rect, node, children) in enumerate(hot_map):
if node == targetNode:
return parentNode, hot_map, index
result = class_.findNode(children, targetNode, node)
if result:
return result
return None | python | def findNode(class_, hot_map, targetNode, parentNode=None):
''' Find the target node in the hot_map. '''
for index, (rect, node, children) in enumerate(hot_map):
if node == targetNode:
return parentNode, hot_map, index
result = class_.findNode(children, targetNode, node)
if result:
return result
return None | Find the target node in the hot_map. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L16-L24 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | HotMapNavigator.firstChild | def firstChild(hot_map, index):
''' Return the first child of the node indicated by index. '''
children = hot_map[index][2]
if children:
return children[0][1]
else:
return hot_map[index][1] | python | def firstChild(hot_map, index):
''' Return the first child of the node indicated by index. '''
children = hot_map[index][2]
if children:
return children[0][1]
else:
return hot_map[index][1] | Return the first child of the node indicated by index. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L46-L52 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | HotMapNavigator.nextChild | def nextChild(hotmap, index):
''' Return the next sibling of the node indicated by index. '''
nextChildIndex = min(index + 1, len(hotmap) - 1)
return hotmap[nextChildIndex][1] | python | def nextChild(hotmap, index):
''' Return the next sibling of the node indicated by index. '''
nextChildIndex = min(index + 1, len(hotmap) - 1)
return hotmap[nextChildIndex][1] | Return the next sibling of the node indicated by index. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L55-L58 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | HotMapNavigator.lastNode | def lastNode(class_, hot_map):
''' Return the very last node (recursively) in the hot map. '''
children = hot_map[-1][2]
if children:
return class_.lastNode(children)
else:
return hot_map[-1][1] | python | def lastNode(class_, hot_map):
''' Return the very last node (recursively) in the hot map. '''
children = hot_map[-1][2]
if children:
return class_.lastNode(children)
else:
return hot_map[-1][1] | Return the very last node (recursively) in the hot map. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L72-L78 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | SquareMap.OnMouse | def OnMouse( self, event ):
"""Handle mouse-move event by selecting a given element"""
node = HotMapNavigator.findNodeAtPosition(self.hot_map, event.GetPosition())
self.SetHighlight( node, event.GetPosition() ) | python | def OnMouse( self, event ):
"""Handle mouse-move event by selecting a given element"""
node = HotMapNavigator.findNodeAtPosition(self.hot_map, event.GetPosition())
self.SetHighlight( node, event.GetPosition() ) | Handle mouse-move event by selecting a given element | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L137-L140 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | SquareMap.OnClickRelease | def OnClickRelease( self, event ):
"""Release over a given square in the map"""
node = HotMapNavigator.findNodeAtPosition(self.hot_map, event.GetPosition())
self.SetSelected( node, event.GetPosition() ) | python | def OnClickRelease( self, event ):
"""Release over a given square in the map"""
node = HotMapNavigator.findNodeAtPosition(self.hot_map, event.GetPosition())
self.SetSelected( node, event.GetPosition() ) | Release over a given square in the map | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L142-L145 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | SquareMap.OnDoubleClick | def OnDoubleClick(self, event):
"""Double click on a given square in the map"""
node = HotMapNavigator.findNodeAtPosition(self.hot_map, event.GetPosition())
if node:
wx.PostEvent( self, SquareActivationEvent( node=node, point=event.GetPosition(), map=self ) ) | python | def OnDoubleClick(self, event):
"""Double click on a given square in the map"""
node = HotMapNavigator.findNodeAtPosition(self.hot_map, event.GetPosition())
if node:
wx.PostEvent( self, SquareActivationEvent( node=node, point=event.GetPosition(), map=self ) ) | Double click on a given square in the map | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L147-L151 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | SquareMap.SetSelected | def SetSelected( self, node, point=None, propagate=True ):
"""Set the given node selected in the square-map"""
if node == self.selectedNode:
return
self.selectedNode = node
self.UpdateDrawing()
if node:
wx.PostEvent( self, SquareSelectionEvent( node=node, point=point, map=self ) ) | python | def SetSelected( self, node, point=None, propagate=True ):
"""Set the given node selected in the square-map"""
if node == self.selectedNode:
return
self.selectedNode = node
self.UpdateDrawing()
if node:
wx.PostEvent( self, SquareSelectionEvent( node=node, point=point, map=self ) ) | Set the given node selected in the square-map | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L185-L192 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | SquareMap.SetHighlight | def SetHighlight( self, node, point=None, propagate=True ):
"""Set the currently-highlighted node"""
if node == self.highlightedNode:
return
self.highlightedNode = node
# TODO: restrict refresh to the squares for previous node and new node...
self.UpdateDrawing()
if node and propagate:
wx.PostEvent( self, SquareHighlightEvent( node=node, point=point, map=self ) ) | python | def SetHighlight( self, node, point=None, propagate=True ):
"""Set the currently-highlighted node"""
if node == self.highlightedNode:
return
self.highlightedNode = node
# TODO: restrict refresh to the squares for previous node and new node...
self.UpdateDrawing()
if node and propagate:
wx.PostEvent( self, SquareHighlightEvent( node=node, point=point, map=self ) ) | Set the currently-highlighted node | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L194-L202 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | SquareMap.SetModel | def SetModel( self, model, adapter=None ):
"""Set our model object (root of the tree)"""
self.model = model
if adapter is not None:
self.adapter = adapter
self.UpdateDrawing() | python | def SetModel( self, model, adapter=None ):
"""Set our model object (root of the tree)"""
self.model = model
if adapter is not None:
self.adapter = adapter
self.UpdateDrawing() | Set our model object (root of the tree) | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L204-L209 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | SquareMap.Draw | def Draw(self, dc):
''' Draw the tree map on the device context. '''
self.hot_map = []
dc.BeginDrawing()
brush = wx.Brush( self.BackgroundColour )
dc.SetBackground( brush )
dc.Clear()
if self.model:
self.max_depth_seen = 0
font = self.FontForLabels(dc)
dc.SetFont(font)
self._em_size_ = dc.GetFullTextExtent( 'm', font )[0]
w, h = dc.GetSize()
self.DrawBox( dc, self.model, 0,0,w,h, hot_map = self.hot_map )
dc.EndDrawing() | python | def Draw(self, dc):
''' Draw the tree map on the device context. '''
self.hot_map = []
dc.BeginDrawing()
brush = wx.Brush( self.BackgroundColour )
dc.SetBackground( brush )
dc.Clear()
if self.model:
self.max_depth_seen = 0
font = self.FontForLabels(dc)
dc.SetFont(font)
self._em_size_ = dc.GetFullTextExtent( 'm', font )[0]
w, h = dc.GetSize()
self.DrawBox( dc, self.model, 0,0,w,h, hot_map = self.hot_map )
dc.EndDrawing() | Draw the tree map on the device context. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L232-L246 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | SquareMap.FontForLabels | def FontForLabels(self, dc):
''' Return the default GUI font, scaled for printing if necessary. '''
font = wx.SystemSettings_GetFont(wx.SYS_DEFAULT_GUI_FONT)
scale = dc.GetPPI()[0] / wx.ScreenDC().GetPPI()[0]
font.SetPointSize(scale*font.GetPointSize())
return font | python | def FontForLabels(self, dc):
''' Return the default GUI font, scaled for printing if necessary. '''
font = wx.SystemSettings_GetFont(wx.SYS_DEFAULT_GUI_FONT)
scale = dc.GetPPI()[0] / wx.ScreenDC().GetPPI()[0]
font.SetPointSize(scale*font.GetPointSize())
return font | Return the default GUI font, scaled for printing if necessary. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L248-L253 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | SquareMap.BrushForNode | def BrushForNode( self, node, depth=0 ):
"""Create brush to use to display the given node"""
if node == self.selectedNode:
color = wx.SystemSettings_GetColour(wx.SYS_COLOUR_HIGHLIGHT)
elif node == self.highlightedNode:
color = wx.Colour( red=0, green=255, blue=0 )
else:
color = self.adapter.background_color(node, depth)
if not color:
red = (depth * 10)%255
green = 255-((depth * 5)%255)
blue = (depth * 25)%255
color = wx.Colour( red, green, blue )
return wx.Brush( color ) | python | def BrushForNode( self, node, depth=0 ):
"""Create brush to use to display the given node"""
if node == self.selectedNode:
color = wx.SystemSettings_GetColour(wx.SYS_COLOUR_HIGHLIGHT)
elif node == self.highlightedNode:
color = wx.Colour( red=0, green=255, blue=0 )
else:
color = self.adapter.background_color(node, depth)
if not color:
red = (depth * 10)%255
green = 255-((depth * 5)%255)
blue = (depth * 25)%255
color = wx.Colour( red, green, blue )
return wx.Brush( color ) | Create brush to use to display the given node | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L255-L268 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | SquareMap.PenForNode | def PenForNode( self, node, depth=0 ):
"""Determine the pen to use to display the given node"""
if node == self.selectedNode:
return self.SELECTED_PEN
return self.DEFAULT_PEN | python | def PenForNode( self, node, depth=0 ):
"""Determine the pen to use to display the given node"""
if node == self.selectedNode:
return self.SELECTED_PEN
return self.DEFAULT_PEN | Determine the pen to use to display the given node | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L270-L274 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | SquareMap.TextForegroundForNode | def TextForegroundForNode(self, node, depth=0):
"""Determine the text foreground color to use to display the label of
the given node"""
if node == self.selectedNode:
fg_color = wx.SystemSettings_GetColour(wx.SYS_COLOUR_HIGHLIGHTTEXT)
else:
fg_color = self.adapter.foreground_color(node, depth)
if not fg_color:
fg_color = wx.SystemSettings_GetColour(wx.SYS_COLOUR_WINDOWTEXT)
return fg_color | python | def TextForegroundForNode(self, node, depth=0):
"""Determine the text foreground color to use to display the label of
the given node"""
if node == self.selectedNode:
fg_color = wx.SystemSettings_GetColour(wx.SYS_COLOUR_HIGHLIGHTTEXT)
else:
fg_color = self.adapter.foreground_color(node, depth)
if not fg_color:
fg_color = wx.SystemSettings_GetColour(wx.SYS_COLOUR_WINDOWTEXT)
return fg_color | Determine the text foreground color to use to display the label of
the given node | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L276-L285 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | SquareMap.DrawBox | def DrawBox( self, dc, node, x,y,w,h, hot_map, depth=0 ):
"""Draw a model-node's box and all children nodes"""
log.debug( 'Draw: %s to (%s,%s,%s,%s) depth %s',
node, x,y,w,h, depth,
)
if self.max_depth and depth > self.max_depth:
return
self.max_depth_seen = max( (self.max_depth_seen,depth))
dc.SetBrush( self.BrushForNode( node, depth ) )
dc.SetPen( self.PenForNode( node, depth ) )
# drawing offset by margin within the square...
dx,dy,dw,dh = x+self.margin,y+self.margin,w-(self.margin*2),h-(self.margin*2)
if sys.platform == 'darwin':
# Macs don't like drawing small rounded rects...
if w < self.padding*2 or h < self.padding*2:
dc.DrawRectangle( dx,dy,dw,dh )
else:
dc.DrawRoundedRectangle( dx,dy,dw,dh, self.padding )
else:
dc.DrawRoundedRectangle( dx,dy,dw,dh, self.padding*3 )
# self.DrawIconAndLabel(dc, node, x, y, w, h, depth)
children_hot_map = []
hot_map.append( (wx.Rect( int(x),int(y),int(w),int(h)), node, children_hot_map ) )
x += self.padding
y += self.padding
w -= self.padding*2
h -= self.padding*2
empty = self.adapter.empty( node )
icon_drawn = False
if self.max_depth and depth == self.max_depth:
self.DrawIconAndLabel(dc, node, x, y, w, h, depth)
icon_drawn = True
elif empty:
# is a fraction of the space which is empty...
log.debug( ' empty space fraction: %s', empty )
new_h = h * (1.0-empty)
self.DrawIconAndLabel(dc, node, x, y, w, h-new_h, depth)
icon_drawn = True
y += (h-new_h)
h = new_h
if w >self.padding*2 and h> self.padding*2:
children = self.adapter.children( node )
if children:
log.debug( ' children: %s', children )
self.LayoutChildren( dc, children, node, x,y,w,h, children_hot_map, depth+1 )
else:
log.debug( ' no children' )
if not icon_drawn:
self.DrawIconAndLabel(dc, node, x, y, w, h, depth)
else:
log.debug( ' not enough space: children skipped' ) | python | def DrawBox( self, dc, node, x,y,w,h, hot_map, depth=0 ):
"""Draw a model-node's box and all children nodes"""
log.debug( 'Draw: %s to (%s,%s,%s,%s) depth %s',
node, x,y,w,h, depth,
)
if self.max_depth and depth > self.max_depth:
return
self.max_depth_seen = max( (self.max_depth_seen,depth))
dc.SetBrush( self.BrushForNode( node, depth ) )
dc.SetPen( self.PenForNode( node, depth ) )
# drawing offset by margin within the square...
dx,dy,dw,dh = x+self.margin,y+self.margin,w-(self.margin*2),h-(self.margin*2)
if sys.platform == 'darwin':
# Macs don't like drawing small rounded rects...
if w < self.padding*2 or h < self.padding*2:
dc.DrawRectangle( dx,dy,dw,dh )
else:
dc.DrawRoundedRectangle( dx,dy,dw,dh, self.padding )
else:
dc.DrawRoundedRectangle( dx,dy,dw,dh, self.padding*3 )
# self.DrawIconAndLabel(dc, node, x, y, w, h, depth)
children_hot_map = []
hot_map.append( (wx.Rect( int(x),int(y),int(w),int(h)), node, children_hot_map ) )
x += self.padding
y += self.padding
w -= self.padding*2
h -= self.padding*2
empty = self.adapter.empty( node )
icon_drawn = False
if self.max_depth and depth == self.max_depth:
self.DrawIconAndLabel(dc, node, x, y, w, h, depth)
icon_drawn = True
elif empty:
# is a fraction of the space which is empty...
log.debug( ' empty space fraction: %s', empty )
new_h = h * (1.0-empty)
self.DrawIconAndLabel(dc, node, x, y, w, h-new_h, depth)
icon_drawn = True
y += (h-new_h)
h = new_h
if w >self.padding*2 and h> self.padding*2:
children = self.adapter.children( node )
if children:
log.debug( ' children: %s', children )
self.LayoutChildren( dc, children, node, x,y,w,h, children_hot_map, depth+1 )
else:
log.debug( ' no children' )
if not icon_drawn:
self.DrawIconAndLabel(dc, node, x, y, w, h, depth)
else:
log.debug( ' not enough space: children skipped' ) | Draw a model-node's box and all children nodes | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L287-L340 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | SquareMap.DrawIconAndLabel | def DrawIconAndLabel(self, dc, node, x, y, w, h, depth):
''' Draw the icon, if any, and the label, if any, of the node. '''
if w-2 < self._em_size_//2 or h-2 < self._em_size_ //2:
return
dc.SetClippingRegion(x+1, y+1, w-2, h-2) # Don't draw outside the box
try:
icon = self.adapter.icon(node, node==self.selectedNode)
if icon and h >= icon.GetHeight() and w >= icon.GetWidth():
iconWidth = icon.GetWidth() + 2
dc.DrawIcon(icon, x+2, y+2)
else:
iconWidth = 0
if self.labels and h >= dc.GetTextExtent('ABC')[1]:
dc.SetTextForeground(self.TextForegroundForNode(node, depth))
dc.DrawText(self.adapter.label(node), x + iconWidth + 2, y+2)
finally:
dc.DestroyClippingRegion() | python | def DrawIconAndLabel(self, dc, node, x, y, w, h, depth):
''' Draw the icon, if any, and the label, if any, of the node. '''
if w-2 < self._em_size_//2 or h-2 < self._em_size_ //2:
return
dc.SetClippingRegion(x+1, y+1, w-2, h-2) # Don't draw outside the box
try:
icon = self.adapter.icon(node, node==self.selectedNode)
if icon and h >= icon.GetHeight() and w >= icon.GetWidth():
iconWidth = icon.GetWidth() + 2
dc.DrawIcon(icon, x+2, y+2)
else:
iconWidth = 0
if self.labels and h >= dc.GetTextExtent('ABC')[1]:
dc.SetTextForeground(self.TextForegroundForNode(node, depth))
dc.DrawText(self.adapter.label(node), x + iconWidth + 2, y+2)
finally:
dc.DestroyClippingRegion() | Draw the icon, if any, and the label, if any, of the node. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L342-L358 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | SquareMap.LayoutChildren | def LayoutChildren( self, dc, children, parent, x,y,w,h, hot_map, depth=0, node_sum=None ):
"""Layout the set of children in the given rectangle
node_sum -- if provided, we are a recursive call that already has sizes and sorting,
so skip those operations
"""
if node_sum is None:
nodes = [ (self.adapter.value(node,parent),node) for node in children ]
nodes.sort(key=operator.itemgetter(0))
total = self.adapter.children_sum( children,parent )
else:
nodes = children
total = node_sum
if total:
if self.square_style and len(nodes) > 5:
# new handling to make parents with large numbers of children a little less
# "sliced" looking (i.e. more square)
(head_sum,head),(tail_sum,tail) = split_by_value( total, nodes )
if head and tail:
# split into two sub-boxes and render each...
head_coord,tail_coord = split_box( head_sum/float(total), x,y,w,h )
if head_coord:
self.LayoutChildren(
dc, head, parent, head_coord[0],head_coord[1],head_coord[2],head_coord[3],
hot_map, depth,
node_sum = head_sum,
)
if tail_coord and coord_bigger_than_padding( tail_coord, self.padding+self.margin ):
self.LayoutChildren(
dc, tail, parent, tail_coord[0],tail_coord[1],tail_coord[2],tail_coord[3],
hot_map, depth,
node_sum = tail_sum,
)
return
(firstSize,firstNode) = nodes[-1]
fraction = firstSize/float(total)
head_coord,tail_coord = split_box( firstSize/float(total), x,y,w,h )
if head_coord:
self.DrawBox(
dc, firstNode, head_coord[0],head_coord[1],head_coord[2],head_coord[3],
hot_map, depth+1
)
else:
return # no other node will show up as non-0 either
if len(nodes) > 1 and tail_coord and coord_bigger_than_padding( tail_coord, self.padding+self.margin ):
self.LayoutChildren(
dc, nodes[:-1], parent,
tail_coord[0],tail_coord[1],tail_coord[2],tail_coord[3],
hot_map, depth,
node_sum = total - firstSize,
) | python | def LayoutChildren( self, dc, children, parent, x,y,w,h, hot_map, depth=0, node_sum=None ):
"""Layout the set of children in the given rectangle
node_sum -- if provided, we are a recursive call that already has sizes and sorting,
so skip those operations
"""
if node_sum is None:
nodes = [ (self.adapter.value(node,parent),node) for node in children ]
nodes.sort(key=operator.itemgetter(0))
total = self.adapter.children_sum( children,parent )
else:
nodes = children
total = node_sum
if total:
if self.square_style and len(nodes) > 5:
# new handling to make parents with large numbers of children a little less
# "sliced" looking (i.e. more square)
(head_sum,head),(tail_sum,tail) = split_by_value( total, nodes )
if head and tail:
# split into two sub-boxes and render each...
head_coord,tail_coord = split_box( head_sum/float(total), x,y,w,h )
if head_coord:
self.LayoutChildren(
dc, head, parent, head_coord[0],head_coord[1],head_coord[2],head_coord[3],
hot_map, depth,
node_sum = head_sum,
)
if tail_coord and coord_bigger_than_padding( tail_coord, self.padding+self.margin ):
self.LayoutChildren(
dc, tail, parent, tail_coord[0],tail_coord[1],tail_coord[2],tail_coord[3],
hot_map, depth,
node_sum = tail_sum,
)
return
(firstSize,firstNode) = nodes[-1]
fraction = firstSize/float(total)
head_coord,tail_coord = split_box( firstSize/float(total), x,y,w,h )
if head_coord:
self.DrawBox(
dc, firstNode, head_coord[0],head_coord[1],head_coord[2],head_coord[3],
hot_map, depth+1
)
else:
return # no other node will show up as non-0 either
if len(nodes) > 1 and tail_coord and coord_bigger_than_padding( tail_coord, self.padding+self.margin ):
self.LayoutChildren(
dc, nodes[:-1], parent,
tail_coord[0],tail_coord[1],tail_coord[2],tail_coord[3],
hot_map, depth,
node_sum = total - firstSize,
) | Layout the set of children in the given rectangle
node_sum -- if provided, we are a recursive call that already has sizes and sorting,
so skip those operations | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L359-L411 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | DefaultAdapter.overall | def overall( self, node ):
"""Calculate overall size of the node including children and empty space"""
return sum( [self.value(value,node) for value in self.children(node)] ) | python | def overall( self, node ):
"""Calculate overall size of the node including children and empty space"""
return sum( [self.value(value,node) for value in self.children(node)] ) | Calculate overall size of the node including children and empty space | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L457-L459 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | DefaultAdapter.children_sum | def children_sum( self, children,node ):
"""Calculate children's total sum"""
return sum( [self.value(value,node) for value in children] ) | python | def children_sum( self, children,node ):
"""Calculate children's total sum"""
return sum( [self.value(value,node) for value in children] ) | Calculate children's total sum | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L460-L462 |
lrq3000/pyFileFixity | pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py | DefaultAdapter.empty | def empty( self, node ):
"""Calculate empty space as a fraction of total space"""
overall = self.overall( node )
if overall:
return (overall - self.children_sum( self.children(node), node))/float(overall)
return 0 | python | def empty( self, node ):
"""Calculate empty space as a fraction of total space"""
overall = self.overall( node )
if overall:
return (overall - self.children_sum( self.children(node), node))/float(overall)
return 0 | Calculate empty space as a fraction of total space | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/profilers/visual/runsnakerun/squaremap/squaremap.py#L463-L468 |
lrq3000/pyFileFixity | pyFileFixity/lib/tqdm/tqdm.py | format_sizeof | def format_sizeof(num, suffix='bytes'):
'''Readable size format, courtesy of Sridhar Ratnakumar'''
for unit in ['','K','M','G','T','P','E','Z']:
if abs(num) < 1000.0:
return "%3.1f%s%s" % (num, unit, suffix)
num /= 1000.0
return "%.1f%s%s" % (num, 'Y', suffix) | python | def format_sizeof(num, suffix='bytes'):
'''Readable size format, courtesy of Sridhar Ratnakumar'''
for unit in ['','K','M','G','T','P','E','Z']:
if abs(num) < 1000.0:
return "%3.1f%s%s" % (num, unit, suffix)
num /= 1000.0
return "%.1f%s%s" % (num, 'Y', suffix) | Readable size format, courtesy of Sridhar Ratnakumar | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/tqdm/tqdm.py#L20-L26 |
lrq3000/pyFileFixity | pyFileFixity/lib/tqdm/tqdm.py | tqdm.update | def update(self, n=1):
"""
Manually update the progress bar. Useful for streams such as file reading: pass total=filesize at init, then call update(len(current_buffer)) in the reading loop.
Parameters
----------
n : int
Increment to add to the internal counter of iterations.
"""
if n < 1:
n = 1
self.n += n
delta_it = self.n - self.last_print_n
if delta_it >= self.miniters:
# We check the counter first, to reduce the overhead of time.time()
cur_t = time.time()
if cur_t - self.last_print_t >= self.mininterval:
self.sp.print_status(format_meter(self.n, self.total, cur_t-self.start_t, self.ncols, self.prefix, self.unit, self.unit_format, self.ascii))
if self.dynamic_miniters: self.miniters = max(self.miniters, delta_it)
self.last_print_n = self.n
self.last_print_t = cur_t | python | def update(self, n=1):
"""
Manually update the progress bar. Useful for streams such as file reading: pass total=filesize at init, then call update(len(current_buffer)) in the reading loop.
Parameters
----------
n : int
Increment to add to the internal counter of iterations.
"""
if n < 1:
n = 1
self.n += n
delta_it = self.n - self.last_print_n
if delta_it >= self.miniters:
# We check the counter first, to reduce the overhead of time.time()
cur_t = time.time()
if cur_t - self.last_print_t >= self.mininterval:
self.sp.print_status(format_meter(self.n, self.total, cur_t-self.start_t, self.ncols, self.prefix, self.unit, self.unit_format, self.ascii))
if self.dynamic_miniters: self.miniters = max(self.miniters, delta_it)
self.last_print_n = self.n
self.last_print_t = cur_t | Manually update the progress bar. Useful for streams such as file reading: pass total=filesize at init, then call update(len(current_buffer)) in the reading loop.
Parameters
----------
n : int
Increment to add to the internal counter of iterations. | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/tqdm/tqdm.py#L239-L260 |
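Following the docstring's own hint, a sketch of driving the bar manually while reading a file in chunks (the file name and chunk size are illustrative; only the total=filesize construction is taken from the docstring):
import os
filename = 'data.bin'                     # hypothetical input file
pbar = tqdm(total=os.path.getsize(filename))
with open(filename, 'rb') as f:
    for chunk in iter(lambda: f.read(65536), b''):
        pbar.update(len(chunk))           # advance by the number of bytes just read
pbar.close()                              # force the final refresh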
lrq3000/pyFileFixity | pyFileFixity/lib/tqdm/tqdm.py | tqdm.close | def close(self):
"""
Call this method to force print the last progress bar update based on the latest n value
"""
if self.leave:
if self.last_print_n < self.n:
cur_t = time.time()
self.sp.print_status(format_meter(self.n, self.total, cur_t-self.start_t, self.ncols, self.prefix, self.unit, self.unit_format, self.ascii))
self.file.write('\n')
else:
self.sp.print_status('')
self.file.write('\r') | python | def close(self):
"""
Call this method to force print the last progress bar update based on the latest n value
"""
if self.leave:
if self.last_print_n < self.n:
cur_t = time.time()
self.sp.print_status(format_meter(self.n, self.total, cur_t-self.start_t, self.ncols, self.prefix, self.unit, self.unit_format, self.ascii))
self.file.write('\n')
else:
self.sp.print_status('')
self.file.write('\r') | Call this method to force print the last progress bar update based on the latest n value | https://github.com/lrq3000/pyFileFixity/blob/fd5ef23bb13835faf1e3baa773619b86a1cc9bdf/pyFileFixity/lib/tqdm/tqdm.py#L262-L273 |