Please provide a description of the function:
def _repr__base(self, rich_output=False):

    # Make a dictionary which will then be transformed in a list
    repr_dict = collections.OrderedDict()

    key = '%s (point source)' % self.name

    repr_dict[key] = collections.OrderedDict()
    repr_dict[key]['position'] = self._sky_position.to_dict(minimal=True)
    repr_dict[key]['spectrum'] = collections.OrderedDict()

    for component_name, component in self.components.iteritems():

        repr_dict[key]['spectrum'][component_name] = component.to_dict(minimal=True)

    return dict_to_list(repr_dict, rich_output)
[ "\n Representation of the object\n\n :param rich_output: if True, generates HTML, otherwise text\n :return: the representation\n " ]
Please provide a description of the function:
def get_flux(self, energies):

    results = [component.shape(energies) for component in self.components.values()]

    return numpy.sum(results, 0)
[ "Get the total flux of this particle source at the given energies (summed over the components)" ]
Please provide a description of the function:
def free_parameters(self):

    free_parameters = collections.OrderedDict()

    for component in self._components.values():

        for par in component.shape.parameters.values():

            if par.free:

                free_parameters[par.path] = par

    return free_parameters
[ "\n Returns a dictionary of free parameters for this source.\n We use the parameter path as the key because it's \n guaranteed to be unique, unlike the parameter name.\n\n :return:\n " ]
Please provide a description of the function:
def parameters(self):

    all_parameters = collections.OrderedDict()

    for component in self._components.values():

        for par in component.shape.parameters.values():

            all_parameters[par.path] = par

    return all_parameters
[ "\n Returns a dictionary of all parameters for this source.\n We use the parameter path as the key because it's \n guaranteed to be unique, unlike the parameter name.\n\n :return:\n " ]
Please provide a description of the function:
def get_total_spatial_integral(self, z=None):

    if isinstance(z, u.Quantity):

        z = z.value

    return np.ones_like(z)
[ "\n Returns the total integral (for 2D functions) or the integral over the spatial components (for 3D functions).\n needs to be implemented in subclasses.\n\n :return: an array of values of the integral (same dimension as z).\n " ]
Please provide a description of the function:
def get_function(function_name, composite_function_expression=None):

    # Check whether this is a composite function or a simple function
    if composite_function_expression is not None:

        # Composite function
        return _parse_function_expression(composite_function_expression)

    else:

        if function_name in _known_functions:

            return _known_functions[function_name]()

        else:

            # Maybe this is a template
            # NOTE: import here to avoid circular import
            from astromodels.functions.template_model import TemplateModel, MissingDataFile

            try:

                instance = TemplateModel(function_name)

            except MissingDataFile:

                raise UnknownFunction("Function %s is not known. Known functions are: %s" %
                                      (function_name, ",".join(_known_functions.keys())))

            else:

                return instance
[ "\n Returns the function \"name\", which must be among the known functions or a composite function.\n\n :param function_name: the name of the function (use 'composite' if the function is a composite function)\n :param composite_function_expression: composite function specification such as\n ((((powerlaw{1} + (sin{2} * 3)) + (sin{2} * 25)) - (powerlaw{1} * 16)) + (sin{2} ** 3.0))\n :return: the an instance of the requested class\n\n " ]
Please provide a description of the function:
def get_function_class(function_name):

    if function_name in _known_functions:

        return _known_functions[function_name]

    else:

        raise UnknownFunction("Function %s is not known. Known functions are: %s" %
                              (function_name, ",".join(_known_functions.keys())))
[ "\n Return the type for the requested function\n\n :param function_name: the function to return\n :return: the type for that function (i.e., this is a class, not an instance)\n " ]
Please provide a description of the function:
def _parse_function_expression(function_specification):

    # NOTE FOR SECURITY
    # This function has some security concerns. Security issues could arise if the user tries to read a model
    # file which has been maliciously formatted to contain harmful code. In this function we close all the doors
    # to a similar attack, except for those attacks which assume that the user has full access to a python
    # environment. Indeed, if that is the case, then the user can already do harm to the system, and so there is
    # no point in safeguarding against that from here. For example, the user could format a subclass of the
    # Function class which performs malicious operations in the constructor, add that to the dictionary of known
    # functions, and then interpret it with this code. However, if the user can instantiate malicious classes,
    # then why would he use astromodels to carry out the attack? Instead, what we explicitly check is the content
    # of the function_specification string, so that it cannot by itself do any harm (by for example containing
    # instructions such as os.remove).

    # This can be an arbitrarily complex specification, like
    # ((((powerlaw{1} + (sin{2} * 3)) + (sin{2} * 25)) - (powerlaw{1} * 16)) + (sin{2} ** 3.0))

    # Use regular expressions to extract the set of functions like function_name{number},
    # then build the set of unique functions by using the constructor set()
    unique_functions = set(re.findall(r'\b([a-zA-Z0-9_]+)\{([0-9]?)\}', function_specification))

    # NB: unique_functions is a set like:
    # {('powerlaw', '1'), ('sin', '2')}

    # Create instances of the unique functions
    instances = {}

    # Loop over the unique functions and create instances
    for (unique_function, number) in unique_functions:

        complete_function_specification = "%s{%s}" % (unique_function, number)

        # As first safety measure, check that the unique function is in the dictionary of _known_functions.
        # This could still be easily hacked, so it won't be the only check
        if unique_function in _known_functions:

            # Get the function class and check that it is indeed a proper Function class
            function_class = _known_functions[unique_function]

            if issubclass(function_class, Function):

                # Ok, let's create the instance
                instance = function_class()

                # Append the instance to the list
                instances[complete_function_specification] = instance

            else:

                raise FunctionDefinitionError("The function specification %s does not contain a proper function"
                                              % unique_function)

        else:

            # It might be a template
            # This import is here to avoid circular dependency between this module and TemplateModel.py
            import astromodels.functions.template_model

            try:

                instance = astromodels.functions.template_model.TemplateModel(unique_function)

            except astromodels.functions.template_model.MissingDataFile:

                # It's not a template
                raise UnknownFunction("Function %s in expression %s is unknown. If this is a template model, "
                                      "you are probably missing the data file"
                                      % (unique_function, function_specification))

            else:

                # It's a template
                instances[complete_function_specification] = instance

    # Check that we have found at least one instance.
    if len(instances) == 0:

        raise DesignViolation("No known function in function specification")

    # The following presents a slight security problem if the model file that has been parsed comes from an
    # untrusted source. Indeed, the use of eval could make it possible to execute things like os.remove.
    # In order to avoid this, first we substitute the function instances with numbers and remove the operators
    # like +,-,/ and so on. Then we try to execute the string with ast.literal_eval, which according to its
    # documentation:
    #
    # Safely evaluate an expression node or a Unicode or Latin-1 encoded string containing a Python literal or
    # container display. The string or node provided may only consist of the following Python literal structures:
    # strings, numbers, tuples, lists, dicts, booleans, and None. This can be used for safely evaluating strings
    # containing Python values from untrusted sources without the need to parse the values oneself.
    # It is not capable of evaluating arbitrarily complex expressions, for example involving operators or
    # indexing.
    #
    # If literal_eval cannot parse the string, it means that it contains unsafe input.

    # Create a copy of the function_specification
    string_for_literal_eval = function_specification

    # Remove from the function_specification all the known operators and function_expressions, and substitute
    # them with a 0 and a space

    # Let's start from the function expressions
    for function_expression in instances.keys():

        string_for_literal_eval = string_for_literal_eval.replace(function_expression, '0 ')

    # Now remove all the known operators
    for operator in _operations.keys():

        string_for_literal_eval = string_for_literal_eval.replace(operator, '0 ')

    # The string at this point should contain only numbers and parentheses separated by one or more spaces
    if re.match('''([a-zA-Z]+)''', string_for_literal_eval):

        raise DesignViolation("Extraneous input in function specification")

    # By using split() we separate all the numbers and parentheses in a list, then we join them
    # with a comma, to end up with a comma-separated list of parentheses and numbers like:
    # ((((0,0,(0,0,3)),0,(0,0,25)),0,(0,0,16)),0,(0,0,0,3.0))
    # This string can be parsed by literal_eval as a tuple containing other tuples, which is fine.
    # If the user has inserted some malicious content, like os.remove or more weird stuff like code objects,
    # the parsing will fail
    string_for_literal_eval = ",".join(string_for_literal_eval.split())

    # At this point the string should be just a comma-separated list of numbers

    # Now try to execute the string
    try:

        ast.literal_eval(string_for_literal_eval)

    except (ValueError, SyntaxError):

        raise DesignViolation("The given expression is not a valid function expression")

    else:

        # The expression is safe, let's eval it

        # First substitute the references to the functions (like 'powerlaw{1}') with a string
        # corresponding to the instances dictionary
        sanitized_function_specification = function_specification

        for function_expression in instances.keys():

            sanitized_function_specification = sanitized_function_specification.replace(
                function_expression, 'instances["%s"]' % function_expression)

        # Now eval it. As a safety measure, remove all globals; the only local is the 'instances' dictionary
        composite_function = eval(sanitized_function_specification, {}, {'instances': instances})

        return composite_function
[ "\n Parse a complex function expression like:\n\n ((((powerlaw{1} + (sin{2} * 3)) + (sin{2} * 25)) - (powerlaw{1} * 16)) + (sin{2} ** 3.0))\n\n and return a composite function instance\n\n :param function_specification:\n :return: a composite function instance\n " ]
Please provide a description of the function:
def check_calling_sequence(name, function_name, function, possible_variables):

    # Get calling sequence
    # If the function has been memoized, it will have a "input_object" member
    try:

        calling_sequence = inspect.getargspec(function.input_object).args

    except AttributeError:

        # This might happen if the function is without memoization
        calling_sequence = inspect.getargspec(function).args

    assert calling_sequence[0] == 'self', "Wrong syntax for 'evaluate' in %s. The first argument " \
                                          "should be called 'self'." % name

    # Figure out how many variables are used
    variables = filter(lambda var: var in possible_variables, calling_sequence)

    # Check that they actually make sense. They must be used in the same order
    # as specified in possible_variables
    assert len(variables) > 0, "The name of the variables for 'evaluate' in %s must be one or more " \
                               "among %s, instead of %s" % (name, ','.join(possible_variables), ",".join(variables))

    if variables != possible_variables[:len(variables)]:

        raise AssertionError("The variables %s are out of order in '%s' of %s. Should be %s."
                             % (",".join(variables), function_name, name, possible_variables[:len(variables)]))

    other_parameters = filter(lambda var: var not in variables and var != 'self', calling_sequence)

    return variables, other_parameters
[ "\n Check the calling sequence for the function looking for the variables specified.\n One or more of the variables can be in the calling sequence. Note that the\n order of the variables will be enforced.\n It will also enforce that the first parameter in the calling sequence is called 'self'.\n\n :param function: the function to check\n :param possible_variables: a list of variables to check, The order is important, and will be enforced\n :return: a tuple containing the list of found variables, and the name of the other parameters in the calling\n sequence\n " ]
Please provide a description of the function:
def free_parameters(self):

    free_parameters = collections.OrderedDict([(k, v) for k, v in self.parameters.iteritems() if v.free])

    return free_parameters
[ "\n Returns a dictionary of free parameters for this function\n\n :return: dictionary of free parameters\n " ]
Please provide a description of the function:
def evaluate_at(self, *args, **parameter_specification):  # pragma: no cover

    # Set the parameters to the provided values
    for parameter in parameter_specification:

        self._get_child(parameter).value = parameter_specification[parameter]

    return self(*args)
[ "\n Evaluate the function at the given x(,y,z) for the provided parameters, explicitly provided as part of the\n parameter_specification keywords.\n\n :param *args:\n :param **parameter_specification:\n :return:\n " ]
Please provide a description of the function:
def from_unit_cube(self, x):

    mu = self.mu.value
    sigma = self.sigma.value

    sqrt_two = 1.414213562

    if x < 1e-16 or (1 - x) < 1e-16:

        res = -1e32

    else:

        res = mu + sigma * sqrt_two * erfcinv(2 * (1 - x))

    return res
[ "\n Used by multinest\n\n :param x: 0 < x < 1\n :param lower_bound:\n :param upper_bound:\n :return:\n " ]
Please provide a description of the function:
def from_unit_cube(self, x):

    x0 = self.x0.value
    gamma = self.gamma.value

    half_pi = 1.57079632679

    res = np.tan(np.pi * x - half_pi) * gamma + x0

    return res
[ "\n Used by multinest\n\n :param x: 0 < x < 1\n :param lower_bound:\n :param upper_bound:\n :return:\n " ]
Please provide a description of the function:
def from_unit_cube(self, x):

    cosdec_min = np.cos(deg2rad * (90.0 + self.lower_bound.value))
    cosdec_max = np.cos(deg2rad * (90.0 + self.upper_bound.value))

    v = x * (cosdec_max - cosdec_min)
    v += cosdec_min

    v = np.clip(v, -1.0, 1.0)

    # Now this generates on [0,pi)
    dec = np.arccos(v)

    # convert to degrees
    dec = rad2deg * dec

    # now in range [-90,90.0)
    dec -= 90.0

    return dec
[ "\n Used by multinest\n\n :param x: 0 < x < 1\n :param lower_bound:\n :param upper_bound:\n :return:\n " ]
Please provide a description of the function:
def from_unit_cube(self, x):

    lower_bound = self.lower_bound.value
    upper_bound = self.upper_bound.value

    low = lower_bound
    spread = float(upper_bound - lower_bound)

    par = x * spread + low

    return par
[ "\n Used by multinest\n\n :param x: 0 < x < 1\n :param lower_bound:\n :param upper_bound:\n :return:\n " ]
Please provide a description of the function:
def from_unit_cube(self, x):

    low = math.log10(self.lower_bound.value)
    up = math.log10(self.upper_bound.value)

    spread = up - low
    par = 10 ** (x * spread + low)

    return par
[ "\n Used by multinest\n\n :param x: 0 < x < 1\n :param lower_bound:\n :param upper_bound:\n :return:\n " ]
Please provide a description of the function:
def _get_data_file_path(data_file):

    try:

        file_path = pkg_resources.resource_filename("astromodels", 'data/%s' % data_file)

    except KeyError:

        raise IOError("Could not read or find data file %s. Try reinstalling astromodels. If this does not fix "
                      "your problem, open an issue on github." % (data_file))

    else:

        return os.path.abspath(file_path)
[ "\n Returns the absolute path to the required data files.\n\n :param data_file: relative path to the data file, relative to the astromodels/data path.\n So to get the path to data/dark_matter/gammamc_dif.dat you need to use data_file=\"dark_matter/gammamc_dif.dat\"\n :return: absolute path of the data file\n " ]
Please provide a description of the function:
def _setup(self):

    tablepath = _get_data_file_path("dark_matter/gammamc_dif.dat")
    self._data = np.loadtxt(tablepath)

    channel_index_mapping = {
        1: 8,    # ee
        2: 6,    # mumu
        3: 3,    # tautau
        4: 1,    # bb
        5: 2,    # tt
        6: 7,    # gg
        7: 4,    # ww
        8: 5,    # zz
        9: 0,    # cc
        10: 10,  # uu
        11: 11,  # dd
        12: 9,   # ss
    }

    # Number of decades in x = log10(E/M)
    ndec = 10.0
    xedge = np.linspace(0, 1.0, 251)
    self._x = 0.5 * (xedge[1:] + xedge[:-1]) * ndec - ndec

    ichan = channel_index_mapping[int(self.channel.value)]

    # These are the mass points
    self._mass = np.array([2.0, 4.0, 6.0, 8.0, 10.0,
                           25.0, 50.0, 80.3, 91.2, 100.0,
                           150.0, 176.0, 200.0, 250.0, 350.0,
                           500.0, 750.0, 1000.0, 1500.0, 2000.0,
                           3000.0, 5000.0, 7000.0, 1E4])

    self._dn = self._data.reshape((12, 24, 250))

    self._dn_interp = RegularGridInterpolator([self._mass, self._x],
                                              self._dn[ichan, :, :],
                                              bounds_error=False,
                                              fill_value=None)

    if self.mass.value > 10000:

        print "Warning: DMFitFunction only appropriate for masses <= 10 TeV"
        print "To model DM from 2 GeV < mass < 1 PeV use DMSpectra"
[ "\n Mapping between the channel codes and the rows in the gammamc file\n\n 1 : 8, # ee\n 2 : 6, # mumu\n 3 : 3, # tautau\n 4 : 1, # bb\n 5 : 2, # tt\n 6 : 7, # gg\n 7 : 4, # ww\n 8 : 5, # zz\n 9 : 0, # cc\n 10 : 10, # uu\n 11 : 11, # dd\n 12 : 9, # ss\n " ]
Please provide a description of the function:
def _setup(self):

    # Get and open the two data files
    tablepath_h = _get_data_file_path("dark_matter/dmSpecTab.npy")
    self._data_h = np.load(tablepath_h)

    tablepath_f = _get_data_file_path("dark_matter/gammamc_dif.dat")
    self._data_f = np.loadtxt(tablepath_f)

    channel_index_mapping = {
        1: 8,    # ee
        2: 6,    # mumu
        3: 3,    # tautau
        4: 1,    # bb
        5: 2,    # tt
        6: 7,    # gg
        7: 4,    # ww
        8: 5,    # zz
        9: 0,    # cc
        10: 10,  # uu
        11: 11,  # dd
        12: 9,   # ss
    }

    # Number of decades in x = log10(E/M)
    ndec = 10.0
    xedge = np.linspace(0, 1.0, 251)
    self._x = 0.5 * (xedge[1:] + xedge[:-1]) * ndec - ndec

    ichan = channel_index_mapping[int(self.channel.value)]

    # These are the mass points in GeV
    self._mass_h = np.array([50., 61.2, 74.91, 91.69, 112.22, 137.36, 168.12, 205.78, 251.87, 308.29,
                             377.34, 461.86, 565.31, 691.93, 846.91, 1036.6, 1268.78, 1552.97, 1900.82,
                             2326.57, 2847.69, 3485.53, 4266.23, 5221.81, 6391.41, 7823.0, 9575.23,
                             11719.94, 14345.03, 17558.1, 21490.85, 26304.48, 32196.3, 39407.79,
                             48234.54, 59038.36, 72262.07, 88447.7, 108258.66, 132506.99, 162186.57,
                             198513.95, 242978.11, 297401.58, 364015.09, 445549.04, 545345.37,
                             667494.6, 817003.43, 1000000.])

    # These are the mass points in GeV
    self._mass_f = np.array([2.0, 4.0, 6.0, 8.0, 10.0,
                             25.0, 50.0, 80.3, 91.2, 100.0,
                             150.0, 176.0, 200.0, 250.0, 350.0,
                             500.0, 750.0, 1000.0, 1500.0, 2000.0,
                             3000.0, 5000.0, 7000.0, 1E4])

    self._mass = np.append(self._mass_f, self._mass_h[27:])

    self._dn_f = self._data_f.reshape((12, 24, 250))

    # Is this really used?
    self._dn_h = self._data_h

    self._dn = np.zeros((12, len(self._mass), 250))
    self._dn[:, 0:24, :] = self._dn_f
    self._dn[:, 24:, :] = self._dn_h[:, 27:, :]

    self._dn_interp = RegularGridInterpolator([self._mass, self._x],
                                              self._dn[ichan, :, :],
                                              bounds_error=False,
                                              fill_value=None)

    if self.channel.value in [1, 6, 7] and self.mass.value > 10000.:

        print "ERROR: currently spectra for selected channel and mass not implemented."
        print "Spectra for channels ['ee','gg','WW'] currently not available for mass > 10 TeV"
[ "\n Mapping between the channel codes and the rows in the gammamc file\n dmSpecTab.npy created to match this mapping too\n\n 1 : 8, # ee\n 2 : 6, # mumu\n 3 : 3, # tautau\n 4 : 1, # bb\n 5 : 2, # tt\n 6 : 7, # gg\n 7 : 4, # ww\n 8 : 5, # zz\n 9 : 0, # cc\n 10 : 10, # uu\n 11 : 11, # dd\n 12 : 9, # ss\n " ]
Please provide a description of the function:
def vincenty(lon0, lat0, a1, s):

    lon0 = np.deg2rad(lon0)
    lat0 = np.deg2rad(lat0)
    a1 = np.deg2rad(a1)
    s = np.deg2rad(s)

    sina = np.cos(lat0) * np.sin(a1)

    num1 = np.sin(lat0) * np.cos(s) + np.cos(lat0) * np.sin(s) * np.cos(a1)

    # den1 is cos(lat) at the destination; note the cos(s) factor in the second term,
    # which is required so that num1**2 + den1**2 == 1
    den1 = np.sqrt(sina ** 2 + (np.sin(lat0) * np.sin(s) - np.cos(lat0) * np.cos(s) * np.cos(a1)) ** 2)

    lat = np.rad2deg(np.arctan2(num1, den1))

    num2 = np.sin(s) * np.sin(a1)
    den2 = np.cos(lat0) * np.cos(s) - np.sin(lat0) * np.sin(s) * np.cos(a1)

    L = np.arctan2(num2, den2)

    lon = np.rad2deg(lon0 + L)

    return lon, lat
[ "\n Returns the coordinates of a new point that is a given angular distance s away from a starting point (lon0, lat0) at bearing (angle from north) a1), to within a given precision\n\n Note that this calculation is a simplified version of the full vincenty problem, which solves for the coordinates on the surface on an arbitrary ellipsoid. Here we only care about the surface of a sphere.\n\n Note: All parameters are assumed to be given in DEGREES\n :param lon0: float, longitude of starting point\n :param lat0: float, latitude of starting point\n :param a1: float, bearing to second point, i.e. angle between due north and line connecting 2 points\n :param s: float, angular distance between the two points\n :return: coordinates of second point in degrees\n " ]
Please provide a description of the function:
def is_valid_variable_name(string_to_check):

    try:

        parse('{} = None'.format(string_to_check))

        return True

    except (SyntaxError, ValueError, TypeError):

        return False
[ "\n Returns whether the provided name is a valid variable name in Python\n\n :param string_to_check: the string to be checked\n :return: True or False\n " ]
Please provide a description of the function:
def _check_unit(new_unit, old_unit):

    try:

        new_unit.physical_type

    except AttributeError:

        raise UnitMismatch("The provided unit (%s) has no physical type. Was expecting a unit for %s"
                           % (new_unit, old_unit.physical_type))

    if new_unit.physical_type != old_unit.physical_type:

        raise UnitMismatch("Physical type mismatch: you provided a unit for %s instead of a unit for %s"
                           % (new_unit.physical_type, old_unit.physical_type))
[ "\n Check that the new unit is compatible with the old unit for the quantity described by variable_name\n\n :param new_unit: instance of astropy.units.Unit\n :param old_unit: instance of astropy.units.Unit\n :return: nothin\n " ]
Please provide a description of the function:
def peak_energy(self):

    # Eq. 6 in Massaro et al. 2004
    # (http://adsabs.harvard.edu/abs/2004A%26A...413..489M)
    return self.piv.value * pow(10, ((2 + self.alpha.value) * np.log(10)) / (2 * self.beta.value))
[ "\n Returns the peak energy in the nuFnu spectrum\n\n :return: peak energy in keV\n " ]
Please provide a description of the function:
def get_spatially_integrated_flux(self, energies):

    if not isinstance(energies, np.ndarray):

        energies = np.array(energies, ndmin=1)

    # Get the differential flux from the spectral components
    results = [self.spatial_shape.get_total_spatial_integral(energies) * component.shape(energies)
               for component in self.components.values()]

    if isinstance(energies, u.Quantity):

        # Slow version with units

        # We need to sum like this (slower) because using np.sum will not preserve the units
        # (thanks astropy.units)
        differential_flux = sum(results)

    else:

        # Fast version without units, where x is supposed to be in the same units as currently defined in
        # units.get_units()
        differential_flux = np.sum(results, 0)

    return differential_flux
[ "\n Returns total flux of source at the given energy\n :param energies: energies (array or float)\n :return: differential flux at given energy\n " ]
Please provide a description of the function:
def get_ra(self):

    try:

        return self.ra.value

    except AttributeError:

        # Transform from L,B to R.A., Dec
        return self.sky_coord.transform_to('icrs').ra.value
[ "\n Get R.A. corresponding to the current position (ICRS, J2000)\n\n :return: Right Ascension\n " ]
Please provide a description of the function:
def get_dec(self):

    try:

        return self.dec.value

    except AttributeError:

        # Transform from L,B to R.A., Dec
        return self.sky_coord.transform_to('icrs').dec.value
[ "\n Get Dec. corresponding to the current position (ICRS, J2000)\n\n :return: Declination\n " ]
Please provide a description of the function:
def get_l(self):

    try:

        return self.l.value

    except AttributeError:

        # Transform from R.A., Dec to L,B
        return self.sky_coord.transform_to('galactic').l.value
[ "\n Get Galactic Longitude (l) corresponding to the current position\n\n :return: Galactic Longitude\n " ]
Please provide a description of the function:
def get_b(self):

    try:

        return self.b.value

    except AttributeError:

        # Transform from R.A., Dec to L,B
        return self.sky_coord.transform_to('galactic').b.value
[ "\n Get Galactic latitude (b) corresponding to the current position\n\n :return: Latitude\n " ]
Please provide a description of the function:
def parameters(self):

    if self._coord_type == 'galactic':

        return collections.OrderedDict((('l', self.l), ('b', self.b)))

    else:

        return collections.OrderedDict((('ra', self.ra), ('dec', self.dec)))
[ "\n Get the dictionary of parameters (either ra,dec or l,b)\n\n :return: dictionary of parameters\n " ]
Please provide a description of the function:
def fix(self):

    if self._coord_type == 'equatorial':

        self.ra.fix = True
        self.dec.fix = True

    else:

        self.l.fix = True
        self.b.fix = True
[ "\n Fix the parameters with the coordinates (either ra,dec or l,b depending on how the class\n has been instanced)\n \n " ]
Please provide a description of the function:
def free(self):

    if self._coord_type == 'equatorial':

        self.ra.fix = False
        self.dec.fix = False

    else:

        self.l.fix = False
        self.b.fix = False
[ "\n Free the parameters with the coordinates (either ra,dec or l,b depending on how the class\n has been instanced)\n \n " ]
Please provide a description of the function:
def _custom_init_(self, model_name, other_name=None, log_interp=True):

    # Get the data directory
    data_dir_path = get_user_data_path()

    # Sanitize the data file
    filename_sanitized = os.path.abspath(os.path.join(data_dir_path, '%s.h5' % model_name))

    if not os.path.exists(filename_sanitized):

        raise MissingDataFile("The data file %s does not exist. Did you use the "
                              "TemplateFactory?" % (filename_sanitized))

    # Open the template definition and read from it
    self._data_file = filename_sanitized

    with HDFStore(filename_sanitized) as store:

        self._data_frame = store['data_frame']

        self._parameters_grids = collections.OrderedDict()

        processed_parameters = 0

        for key in store.keys():

            match = re.search('p_([0-9]+)_(.+)', key)

            if match is None:

                continue

            else:

                tokens = match.groups()

                this_parameter_number = int(tokens[0])
                this_parameter_name = str(tokens[1])

                assert this_parameter_number == processed_parameters, "Parameters out of order!"

                self._parameters_grids[this_parameter_name] = store[key]

                processed_parameters += 1

        self._energies = store['energies']

        # Now get the metadata
        metadata = store.get_storer('data_frame').attrs.metadata

        description = metadata['description']
        name = metadata['name']

        self._interpolation_degree = metadata['interpolation_degree']
        self._spline_smoothing_factor = metadata['spline_smoothing_factor']

    # Make the dictionary of parameters
    function_definition = collections.OrderedDict()

    function_definition['description'] = description
    function_definition['latex'] = 'n.a.'

    # Now build the parameters according to the content of the parameter grid
    parameters = collections.OrderedDict()

    parameters['K'] = Parameter('K', 1.0)
    parameters['scale'] = Parameter('scale', 1.0)

    for parameter_name in self._parameters_grids.keys():

        grid = self._parameters_grids[parameter_name]

        parameters[parameter_name] = Parameter(parameter_name, grid.median(),
                                               min_value=grid.min(),
                                               max_value=grid.max())

    if other_name is None:

        super(TemplateModel, self).__init__(name, function_definition, parameters)

    else:

        super(TemplateModel, self).__init__(other_name, function_definition, parameters)

    # Finally prepare the interpolators
    self._prepare_interpolators(log_interp)
[ "\n Custom initialization for this model\n \n :param model_name: the name of the model, corresponding to the root of the .h5 file in the data directory\n :param other_name: (optional) the name to be used as name of the model when used in astromodels. If None \n (default), use the same name as model_name\n :return: none\n " ]
Please provide a description of the function:
def accept_quantity(input_type=float, allow_none=False):

    def accept_quantity_wrapper(method):

        def handle_quantity(instance, value, *args, **kwargs):

            # For speed reasons, first run the case where the input is not a quantity, and fall back to the
            # handling of quantities if that fails. The part that fails if the input is a Quantity is the
            # conversion input_type(value). This could have been handled more elegantly with a "finally"
            # clause, but that would have a 40 percent speed impact...

            try:

                new_value = input_type(value)

                return method(instance, new_value, *args, **kwargs)

            except TypeError:

                # Slow for slow, check that we actually have a quantity or None (if allowed)

                if isinstance(value, u.Quantity):

                    new_value = value.to(instance.unit).value

                    return method(instance, new_value, *args, **kwargs)

                elif value is None:

                    if allow_none:

                        return method(instance, None, *args, **kwargs)

                    else:  # pragma: no cover

                        raise TypeError("You cannot pass None as argument for "
                                        "method %s of %s" % (method.__name__, instance.name))

                else:  # pragma: no cover

                    raise TypeError("You need to pass either a %s or a astropy.Quantity "
                                    "to method %s of %s" % (input_type.__name__, method.__name__, instance.name))

        return handle_quantity

    return accept_quantity_wrapper
[ "\n A class-method decorator which allow a given method (typically the set_value method) to receive both a\n astropy.Quantity or a simple float, but to be coded like it's always receiving a pure float in the right units.\n This is to give a way to avoid the huge bottleneck that are astropy.units\n\n :param input_type: the expected type for the input (float, int)\n :param allow_none : whether to allow or not the passage of None as argument (default: False)\n :return: a decorator for the particular type\n " ]
Please provide a description of the function:
def in_unit_of(self, unit, as_quantity=False):

    new_unit = u.Unit(unit)

    new_quantity = self.as_quantity.to(new_unit)

    if as_quantity:

        return new_quantity

    else:

        return new_quantity.value
[ "\n Return the current value transformed to the new units\n\n :param unit: either an astropy.Unit instance, or a string which can be converted to an astropy.Unit\n instance, like \"1 / (erg cm**2 s)\"\n :param as_quantity: if True, the method return an astropy.Quantity, if False just a floating point number.\n Default is False\n :return: either a floating point or a astropy.Quantity depending on the value of \"as_quantity\"\n " ]
Please provide a description of the function:
def internal_to_external_delta(self, internal_value, internal_delta):

    external_value = self.transformation.backward(internal_value)

    bound_internal = internal_value + internal_delta

    bound_external = self.transformation.backward(bound_internal)

    external_delta = bound_external - external_value

    return external_value, external_delta
[ "\n Transform an interval from the internal to the external reference (through the transformation). It is useful\n if you have for example a confidence interval in internal reference and you want to transform it to the\n external reference\n\n :param interval_value: value in internal reference\n :param internal_delta: delta in internal reference\n :return: value and delta in external reference\n " ]
Please provide a description of the function:
def _get_value(self):

    # This is going to be true (possibly) only for derived classes. It is here to make the code cleaner
    # and also to avoid infinite recursion
    if self._aux_variable:

        return self._aux_variable['law'](self._aux_variable['variable'].value)

    if self._transformation is None:

        return self._internal_value

    else:

        # A transformation is set. Transform back from internal value to true value
        #
        # print("Internal value is %s" % self._internal_value)
        # print("Returning %s" % self._transformation.backward(self._internal_value))

        return self._transformation.backward(self._internal_value)
[ "Return current parameter value" ]
Please provide a description of the function:
def _set_value(self, new_value):

    if self.min_value is not None and new_value < self.min_value:

        raise SettingOutOfBounds(
            "Trying to set parameter {0} = {1}, which is less than the minimum allowed {2}".format(
                self.name, new_value, self.min_value))

    if self.max_value is not None and new_value > self.max_value:

        raise SettingOutOfBounds(
            "Trying to set parameter {0} = {1}, which is more than the maximum allowed {2}".format(
                self.name, new_value, self.max_value))

    # Issue a warning if there is an auxiliary variable, as the setting does not have any effect
    if self.has_auxiliary_variable():

        with warnings.catch_warnings():

            warnings.simplefilter("always", RuntimeWarning)

            warnings.warn("You are trying to assign to a parameter which is either linked or "
                          "has auxiliary variables. The assignment has no effect.", RuntimeWarning)

    # Save the value as a pure floating point to avoid the overhead of the astropy.units machinery when
    # not needed
    if self._transformation is None:

        new_internal_value = new_value

    else:

        new_internal_value = self._transformation.forward(new_value)

    # If the parameter has changed, update its value and call the callbacks if needed
    if new_internal_value != self._internal_value:

        # Update
        self._internal_value = new_internal_value

        # Call the callbacks (if any)
        for callback in self._callbacks:

            try:

                callback(self)

            except:

                raise NotCallableOrErrorInCall("Could not call callback for parameter %s" % self.name)
[ "Sets the current value of the parameter, ensuring that it is within the allowed range." ]
Please provide a description of the function:
def _set_internal_value(self, new_internal_value):

    if new_internal_value != self._internal_value:

        self._internal_value = new_internal_value

        # Call callbacks if any
        for callback in self._callbacks:

            callback(self)
[ "\n This is supposed to be only used by fitting engines\n\n :param new_internal_value: new value in internal representation\n :return: none\n " ]
Please provide a description of the function:
def _set_min_value(self, min_value):

    # Check that the min value can be transformed if a transformation is present
    if self._transformation is not None:

        if min_value is not None:

            try:

                _ = self._transformation.forward(min_value)

            except FloatingPointError:

                raise ValueError("The provided minimum %s cannot be transformed with the transformation %s "
                                 "which is defined for the parameter %s"
                                 % (min_value, type(self._transformation), self.path))

    # Store the minimum as a pure float
    self._external_min_value = min_value

    # Check that the current value of the parameter is still within the boundaries. If not, issue a warning
    if self._external_min_value is not None and self.value < self._external_min_value:

        warnings.warn("The current value of the parameter %s (%s) "
                      "was below the new minimum %s." % (self.name, self.value, self._external_min_value),
                      exceptions.RuntimeWarning)

        self.value = self._external_min_value
[ "Sets current minimum allowed value" ]
Please provide a description of the function:
def _get_internal_min_value(self):

    if self.min_value is None:

        # No minimum set
        return None

    else:

        # There is a minimum. If there is a transformation, use it, otherwise just return the minimum
        if self._transformation is None:

            return self._external_min_value

        else:

            return self._transformation.forward(self._external_min_value)
[ "\n This is supposed to be only used by fitting engines to get the minimum value in internal representation.\n It is supposed to be called only once before doing the minimization/sampling, to set the range of the parameter\n\n :return: minimum value in internal representation (or None if there is no minimum)\n " ]
Please provide a description of the function:
def _set_max_value(self, max_value):

    self._external_max_value = max_value

    # Check that the current value of the parameter is still within the boundaries. If not, issue a warning
    if self._external_max_value is not None and self.value > self._external_max_value:

        warnings.warn("The current value of the parameter %s (%s) "
                      "was above the new maximum %s." % (self.name, self.value, self._external_max_value),
                      exceptions.RuntimeWarning)

        self.value = self._external_max_value
[ "Sets current maximum allowed value" ]
Please provide a description of the function:
def _get_internal_max_value(self):

    if self.max_value is None:

        # No maximum set
        return None

    else:

        # There is a maximum. If there is a transformation, use it, otherwise just return the maximum
        if self._transformation is None:

            return self._external_max_value

        else:

            return self._transformation.forward(self._external_max_value)
[ "\n This is supposed to be only used by fitting engines to get the maximum value in internal representation.\n It is supposed to be called only once before doing the minimization/sampling, to set the range of the parameter\n\n :return: maximum value in internal representation (or None if there is no minimum)\n " ]
Please provide a description of the function:
def _set_bounds(self, bounds):

    # Use the properties so that the checks and the handling of units are made automatically
    min_value, max_value = bounds

    # Remove old boundaries to avoid problems with the new ones, if the current value was within the old
    # boundaries but is not within the new ones (it will then be adjusted automatically later)
    self.min_value = None
    self.max_value = None

    self.min_value = min_value
    self.max_value = max_value
[ "Sets the boundaries for this parameter to min_value and max_value" ]
Please provide a description of the function:
def to_dict(self, minimal=False):

    data = collections.OrderedDict()

    if minimal:

        # In the minimal representation we just output the value
        data['value'] = self._to_python_type(self.value)

    else:

        # In the complete representation we output everything that is needed to re-build the object
        data['value'] = self._to_python_type(self.value)
        data['desc'] = str(self.description)
        data['min_value'] = self._to_python_type(self.min_value)
        data['max_value'] = self._to_python_type(self.max_value)

        # We use our own thread-safe format for the unit
        data['unit'] = self.unit.to_string(format='threadsafe')

    return data
[ "Returns the representation for serialization" ]
Please provide a description of the function:
def _get_internal_delta(self):

    if self._transformation is None:

        return self._delta

    else:

        delta_int = None

        for i in range(2):

            # Try using the low bound
            low_bound_ext = self.value - self.delta

            # Make sure we are within the margins
            if low_bound_ext > self.min_value:

                # Ok, let's use that for the delta
                low_bound_int = self._transformation.forward(low_bound_ext)

                delta_int = abs(low_bound_int - self._get_internal_value())

                break

            else:

                # Nope, try with the hi bound
                hi_bound_ext = self.value + self._delta

                if hi_bound_ext < self.max_value:

                    # Ok, let's use it
                    hi_bound_int = self._transformation.forward(hi_bound_ext)

                    delta_int = abs(hi_bound_int - self._get_internal_value())

                    break

                else:

                    # Fix delta
                    self.delta = abs(self.value - self.min_value) / 4.0

                    if self.delta == 0:

                        # Parameter at the minimum
                        self.delta = abs(self.value - self.max_value) / 4.0

                    # Try again
                    continue

        assert delta_int is not None, "Bug"

        return delta_int
[ "\n This is only supposed to be used by fitting/sampling engine, to get the initial step in internal representation\n\n :return: initial delta in internal representation\n " ]
Please provide a description of the function:
def _set_prior(self, prior):

    if prior is None:

        # Removing prior
        self._prior = None

    else:

        # Try and call the prior with the current value of the parameter
        try:

            _ = prior(self.value)

        except:

            raise NotCallableOrErrorInCall("Could not call the provided prior. " +
                                           "Is it a function accepting the current value of the parameter?")

        try:

            prior.set_units(self.unit, u.dimensionless_unscaled)

        except AttributeError:

            raise NotCallableOrErrorInCall("It looks like the provided prior is not an astromodels function.")

        self._prior = prior
[ "Set prior for this parameter. The prior must be a function accepting the current value of the parameter\n as input and giving the probability density as output." ]
Please provide a description of the function:
def set_uninformative_prior(self, prior_class):

    prior_instance = prior_class()

    if self.min_value is None:

        raise ParameterMustHaveBounds("Parameter %s does not have a defined minimum. Set one first, then "
                                      "re-run set_uninformative_prior" % self.path)

    else:

        try:

            prior_instance.lower_bound = self.min_value

        except SettingOutOfBounds:

            raise SettingOutOfBounds("Cannot use minimum of %s for prior %s"
                                     % (self.min_value, prior_instance.name))

    if self.max_value is None:

        raise ParameterMustHaveBounds("Parameter %s does not have a defined maximum. Set one first, then "
                                      "re-run set_uninformative_prior" % self.path)

    else:  # pragma: no cover

        try:

            prior_instance.upper_bound = self.max_value

        except SettingOutOfBounds:

            raise SettingOutOfBounds("Cannot use maximum of %s for prior %s"
                                     % (self.max_value, prior_instance.name))

    assert np.isfinite(prior_instance.upper_bound.value), \
        "The parameter %s must have a finite maximum" % self.name
    assert np.isfinite(prior_instance.lower_bound.value), \
        "The parameter %s must have a finite minimum" % self.name

    self._set_prior(prior_instance)
[ "\n Sets the prior for the parameter to a uniform prior between the current minimum and maximum, or a\n log-uniform prior between the current minimum and maximum.\n\n NOTE: if the current minimum and maximum are not defined, the default bounds for the prior class will be used.\n\n :param prior_class : the class to be used as prior (either Log_uniform_prior or Uniform_prior, or a class which\n provide a lower_bound and an upper_bound properties)\n :return: (none)\n " ]
Please provide a description of the function:
def remove_auxiliary_variable(self):

    if not self.has_auxiliary_variable():

        # do nothing, but print a warning
        warnings.warn("Cannot remove a non-existing auxiliary variable", RuntimeWarning)

    else:

        # Remove the law from the children
        self._remove_child(self._aux_variable['law'].name)

        # Clean up the dictionary
        self._aux_variable = {}

        # Set the parameter to the status it had before the auxiliary variable was created
        self.free = self._old_free
[ "\n Remove an existing auxiliary variable\n\n :return:\n " ]
Please provide a description of the function:
def to_dict(self, minimal=False):

    data = super(Parameter, self).to_dict()

    # Add whether it is a normalization or not
    data['is_normalization'] = self._is_normalization

    if minimal:

        # No need to add anything
        pass

    else:

        # In the complete representation we output everything that is needed to re-build the object

        if self.has_auxiliary_variable():

            # Store the function and the auxiliary variable

            data['value'] = 'f(%s)' % self._aux_variable['variable']._get_path()

            aux_variable_law_data = collections.OrderedDict()
            aux_variable_law_data[self._aux_variable['law'].name] = self._aux_variable['law'].to_dict()

            data['law'] = aux_variable_law_data

        # delta and free are attributes of Parameter, but not of ParameterBase

        data['delta'] = self._to_python_type(self._delta)
        data['free'] = self.free

        if self.has_prior():

            data['prior'] = {self.prior.name: self.prior.to_dict()}

    return data
[ "Returns the representation for serialization" ]
Please provide a description of the function:
def get_total_spatial_integral(self, z=None):

    # Longitude extent of the region, wrapping around 360 degrees if needed
    dL = (self.l_max.value - self.l_min.value
          if self.l_max.value > self.l_min.value
          else 360 + self.l_max.value - self.l_min.value)

    # integral -inf to inf exp(-b**2 / 2*sigma_b**2 ) db = sqrt(2pi)*sigma_b
    # Note that K refers to the peak diffuse flux (at b = 0) per square degree.
    integral = np.sqrt(2 * np.pi) * self.sigma_b.value * self.K.value * dL

    if isinstance(z, u.Quantity):

        z = z.value

    return integral * np.power(180. / np.pi, -2) * np.ones_like(z)
[ "\n Returns the total integral (for 2D functions) or the integral over the spatial components (for 3D functions).\n needs to be implemented in subclasses.\n\n :return: an array of values of the integral (same dimension as z).\n " ]
Please provide a description of the function:
def _get_child_from_path(self, path):

    keys = path.split(".")

    this_child = self

    for key in keys:

        try:

            this_child = this_child._get_child(key)

        except KeyError:

            raise KeyError("Child %s not found" % path)

    return this_child
[ "\n Return a children below this level, starting from a path of the kind \"this_level.something.something.name\"\n\n :param path: the key\n :return: the child\n " ]
Please provide a description of the function:
def _find_instances(self, cls):

    instances = collections.OrderedDict()

    for child_name, child in self._children.iteritems():

        if isinstance(child, cls):

            key_name = ".".join(child._get_path())

            instances[key_name] = child

            # Now check if the instance has children,
            # and if it does go deeper in the tree

            # NOTE: an empty dictionary evaluates as False

            if child._children:

                instances.update(child._find_instances(cls))

        else:

            instances.update(child._find_instances(cls))

    return instances
[ "\n Find all the instances of cls below this node.\n\n :return: a dictionary of instances of cls\n " ]
Please provide a description of the function:
def clone_model(model_instance):

    data = model_instance.to_dict_with_types()

    parser = ModelParser(model_dict=data)

    return parser.get_model()
[ "\n Returns a copy of the given model with all objects cloned. This is equivalent to saving the model to\n a file and reload it, but it doesn't require writing or reading to/from disk. The original model is not touched.\n\n :param model: model to be cloned\n :return: a cloned copy of the given model\n " ]
Please provide a description of the function:
def sanitize_lib_name(library_path):

    lib_name = os.path.basename(library_path)

    # Some regexp magic needed to extract in a system-independent (mac/linux) way the library name
    tokens = re.findall("lib(.+)(\.so|\.dylib|\.a)(.+)?", lib_name)

    if not tokens:

        raise RuntimeError('Attempting to find %s in directory %s but there are no libraries in this directory'
                           % (lib_name, library_path))

    return tokens[0][0]
[ "\n Get a fully-qualified library name, like /usr/lib/libgfortran.so.3.0, and returns the lib name needed to be\n passed to the linker in the -l option (for example gfortran)\n\n :param library_path:\n :return:\n " ]
Please provide a description of the function:
def find_library(library_root, additional_places=None):

    # find_library searches for all system paths in a system independent way (but NOT those defined in
    # LD_LIBRARY_PATH or DYLD_LIBRARY_PATH)
    first_guess = ctypes.util.find_library(library_root)

    if first_guess is not None:

        # Found in one of the system paths

        if sys.platform.lower().find("linux") >= 0:

            # On linux the linker already knows about these paths, so we
            # can return None as path
            return sanitize_lib_name(first_guess), None

        elif sys.platform.lower().find("darwin") >= 0:

            # On Mac we still need to return the path, because the linker sometimes
            # does not look into it
            return sanitize_lib_name(first_guess), os.path.dirname(first_guess)

        else:

            # Windows is not supported
            raise NotImplementedError("Platform %s is not supported" % sys.platform)

    else:

        # Could not find it. Let's examine LD_LIBRARY_PATH or DYLD_LIBRARY_PATH
        # (if they are not defined, possible_locations will become [""] which will
        # be handled by the next loop)

        if sys.platform.lower().find("linux") >= 0:

            # Unix / linux
            possible_locations = os.environ.get("LD_LIBRARY_PATH", "").split(":")

        elif sys.platform.lower().find("darwin") >= 0:

            # Mac
            possible_locations = os.environ.get("DYLD_LIBRARY_PATH", "").split(":")

        else:

            raise NotImplementedError("Platform %s is not supported" % sys.platform)

        if additional_places is not None:

            possible_locations.extend(additional_places)

        # Now look into the search paths
        library_name = None
        library_dir = None

        for search_path in possible_locations:

            if search_path == "":

                # This can happen if there are more than one :, or if neither LD_LIBRARY_PATH
                # nor DYLD_LIBRARY_PATH is defined (because of the default used above for os.environ.get)
                continue

            results = glob.glob(os.path.join(search_path, "lib%s*" % library_root))

            if len(results) >= 1:

                # results contains things like libXS.so, libXSPlot.so, libXSpippo.so
                # If we are looking for libXS.so, we need to make sure that we get the right one!

                for result in results:

                    if re.match("lib%s[\-_\.]" % library_root, os.path.basename(result)) is None:

                        continue

                    else:

                        # FOUND IT

                        # This is the full path of the library, like /usr/lib/libcfitsio_1.2.3.4
                        library_name = result
                        library_dir = search_path

                        break

            else:

                continue

            if library_name is not None:

                break

        if library_name is None:

            return None, None

        else:

            # Sanitize the library name to get from the fully-qualified path to just the library name
            # (/usr/lib/libgfortran.so.3.0 becomes gfortran)
            return sanitize_lib_name(library_name), library_dir
[ "\n Returns the name of the library without extension\n\n :param library_root: root of the library to search, for example \"cfitsio_\" will match libcfitsio_1.2.3.4.so\n :return: the name of the library found (NOTE: this is *not* the path), and a directory path if the library is not\n in the system paths (and None otherwise). The name of libcfitsio_1.2.3.4.so will be cfitsio_1.2.3.4, in other words,\n it will be what is needed to be passed to the linker during a c/c++ compilation, in the -l option\n " ]
Please provide a description of the function:
def dict_to_table(dictionary, list_of_keys=None):

    # assert len(dictionary.values()) > 0, "Dictionary cannot be empty"

    # Create an empty table
    table = Table()

    # If the dictionary is not empty, fill the table
    if len(dictionary) > 0:

        # Add the names as first column
        table['name'] = dictionary.keys()

        # Now add all other properties

        # Use the first parameter as prototype
        prototype = dictionary.values()[0]

        column_names = prototype.keys()

        # If we have a white list for the columns, use it
        if list_of_keys is not None:

            column_names = filter(lambda key: key in list_of_keys, column_names)

        # Fill the table
        for column_name in column_names:

            table[column_name] = map(lambda x: x[column_name], dictionary.values())

    return table
[ "\n Return a table representing the dictionary.\n\n :param dictionary: the dictionary to represent\n :param list_of_keys: optionally, only the keys in this list will be inserted in the table\n :return: a Table instance\n " ]
Please provide a description of the function:
def _base_repr_(self, html=False, show_name=True, **kwargs):

    table_id = 'table{id}'.format(id=id(self))

    data_lines, outs = self.formatter._pformat_table(self,
                                                     tableid=table_id,
                                                     html=html,
                                                     max_width=(-1 if html else None),
                                                     show_name=show_name,
                                                     show_unit=None,
                                                     show_dtype=False)

    out = '\n'.join(data_lines)

    # if astropy.table.six.PY2 and isinstance(out, astropy.table.six.text_type):
    #     out = out.encode('utf-8')

    return out
[ "\n Override the method in the astropy.Table class\n to avoid displaying the description, and the format\n of the columns\n " ]
Please provide a description of the function:
def fetch_cache_key(request):

    m = hashlib.md5()
    m.update(request.body)

    return m.hexdigest()
[ " Returns a hashed cache key. " ]
Please provide a description of the function:
def dispatch(self, request, *args, **kwargs):

    if not graphql_api_settings.CACHE_ACTIVE:

        return self.super_call(request, *args, **kwargs)

    cache = caches["default"]

    operation_ast = self.get_operation_ast(request)

    if operation_ast and operation_ast.operation == "mutation":

        # Mutations invalidate the whole cache
        cache.clear()

        return self.super_call(request, *args, **kwargs)

    cache_key = "_graplql_{}".format(self.fetch_cache_key(request))

    response = cache.get(cache_key)

    if not response:

        response = self.super_call(request, *args, **kwargs)

        # cache key and value
        cache.set(cache_key, response, timeout=graphql_api_settings.CACHE_TIMEOUT)

    return response
[ " Fetches queried data from graphql and returns cached & hashed key. " ]
Please provide a description of the function:
def _parse(partial_dt):

    dt = None

    try:

        if isinstance(partial_dt, datetime):

            dt = partial_dt

        if isinstance(partial_dt, date):

            dt = _combine_date_time(partial_dt, time(0, 0, 0))

        if isinstance(partial_dt, time):

            dt = _combine_date_time(date.today(), partial_dt)

        if isinstance(partial_dt, (int, float)):

            dt = datetime.fromtimestamp(partial_dt)

        if isinstance(partial_dt, (str, bytes)):

            dt = parser.parse(partial_dt, default=timezone.now())

        if dt is not None and timezone.is_naive(dt):

            dt = timezone.make_aware(dt)

        return dt

    except ValueError:

        return None
[ "\n parse a partial datetime object to a complete datetime object\n " ]
Please provide a description of the function:
def get_obj(app_label, model_name, object_id):

    try:

        model = apps.get_model("{}.{}".format(app_label, model_name))

        assert is_valid_django_model(model), ("Model {}.{} does not exist.").format(app_label, model_name)

        obj = get_Object_or_None(model, pk=object_id)

        return obj

    except model.DoesNotExist:

        return None

    except LookupError:

        pass

    except ValidationError as e:

        raise ValidationError(e.__str__())

    except TypeError as e:

        raise TypeError(e.__str__())

    except Exception as e:

        raise Exception(e.__str__())
[ "\n Function used to get a object\n :param app_label: A valid Django Model or a string with format: <app_label>.<model_name>\n :param model_name: Key into kwargs that contains de data: new_person\n :param object_id:\n :return: instance\n " ]
Please provide a description of the function:
def create_obj(django_model, new_obj_key=None, *args, **kwargs):

    try:

        if isinstance(django_model, six.string_types):

            django_model = apps.get_model(django_model)

        assert is_valid_django_model(django_model), (
            "You need to pass a valid Django Model or a string with format: "
            '<app_label>.<model_name> to "create_obj"'
            ' function, received "{}".'
        ).format(django_model)

        data = kwargs.get(new_obj_key, None) if new_obj_key else kwargs

        new_obj = django_model(**data)
        new_obj.full_clean()
        new_obj.save()

        return new_obj

    except LookupError:

        pass

    except ValidationError as e:

        raise ValidationError(e.__str__())

    except TypeError as e:

        raise TypeError(e.__str__())

    except Exception as e:

        return e.__str__()
[ "\n Function used by my on traditional Mutations to create objs\n :param django_model: A valid Django Model or a string with format:\n <app_label>.<model_name>\n :param new_obj_key: Key into kwargs that contains de data: new_person\n :param args:\n :param kwargs: Dict with model attributes values\n :return: instance of model after saved it\n " ]
Please provide a description of the function:
def clean_dict(d):

    if not isinstance(d, (dict, list)):

        return d

    if isinstance(d, list):

        return [v for v in (clean_dict(v) for v in d) if v]

    return OrderedDict(
        [(k, v) for k, v in ((k, clean_dict(v)) for k, v in list(d.items())) if v]
    )
[ "\n Remove all empty fields in a nested dict\n " ]
Please provide a description of the function:
def _get_queryset(klass):

    if isinstance(klass, QuerySet):

        return klass

    elif isinstance(klass, Manager):

        manager = klass

    elif isinstance(klass, ModelBase):

        manager = klass._default_manager

    else:

        if isinstance(klass, type):

            klass__name = klass.__name__

        else:

            klass__name = klass.__class__.__name__

        raise ValueError(
            "Object is of type '{}', but must be a Django Model, "
            "Manager, or QuerySet".format(klass__name)
        )

    return manager.all()
[ "\n Returns a QuerySet from a Model, Manager, or QuerySet. Created to make\n get_object_or_404 and get_list_or_404 more DRY.\n\n Raises a ValueError if klass is not a Model, Manager, or QuerySet.\n " ]
Please provide a description of the function:
def get_Object_or_None(klass, *args, **kwargs):

    queryset = _get_queryset(klass)

    try:

        if args:

            return queryset.using(args[0]).get(**kwargs)

        else:

            return queryset.get(*args, **kwargs)

    except queryset.model.DoesNotExist:

        return None
[ "\n Uses get() to return an object, or None if the object does not exist.\n\n klass may be a Model, Manager, or QuerySet object. All other passed\n arguments and keyword arguments are used in the get() query.\n\n Note: Like with get(), an MultipleObjectsReturned will be raised\n if more than one object is found.\n Ex: get_Object_or_None(User, db, id=1)\n " ]
Please provide a description of the function:
def find_schema_paths(schema_files_path=DEFAULT_SCHEMA_FILES_PATH):

    paths = []

    for path in schema_files_path:

        if os.path.isdir(path):

            paths.append(path)

    if paths:

        return paths

    raise SchemaFilesNotFound("Searched " + os.pathsep.join(schema_files_path))
[ "Searches the locations in the `SCHEMA_FILES_PATH` to\n try to find where the schema SQL files are located.\n " ]
Please provide a description of the function:
def execute(self, cmd, *args, **kwargs):

    self.cursor.execute(cmd, *args, **kwargs)
[ " Execute the SQL command and return the data rows as tuples\n " ]
Please provide a description of the function:
def select(self, cmd, *args, **kwargs):

    self.cursor.execute(cmd, *args, **kwargs)

    return self.cursor.fetchall()
[ " Execute the SQL command and return the data rows as tuples\n " ]
Please provide a description of the function:
def run():

    # create an arg parser and configure it.
    parser = argparse.ArgumentParser(description='SharQ Server.')
    parser.add_argument('-c', '--config', action='store', required=True,
                        help='Absolute path of the SharQ configuration file.',
                        dest='sharq_config')
    parser.add_argument('-gc', '--gunicorn-config', action='store', required=False,
                        help='Gunicorn configuration file.',
                        dest='gunicorn_config')
    parser.add_argument('--version', action='version',
                        version='SharQ Server %s' % __version__)
    args = parser.parse_args()

    # read the configuration file and set gunicorn options.
    config_parser = ConfigParser.SafeConfigParser()

    # get the full path of the config file.
    sharq_config = os.path.abspath(args.sharq_config)
    config_parser.read(sharq_config)

    host = config_parser.get('sharq-server', 'host')
    port = config_parser.get('sharq-server', 'port')
    bind = '%s:%s' % (host, port)

    try:
        workers = config_parser.get('sharq-server', 'workers')
    except ConfigParser.NoOptionError:
        workers = number_of_workers()

    try:
        accesslog = config_parser.get('sharq-server', 'accesslog')
    except ConfigParser.NoOptionError:
        accesslog = None

    options = {
        'bind': bind,
        'workers': workers,
        'worker_class': 'gevent'  # required for sharq to function.
    }

    if accesslog:
        options.update({
            'accesslog': accesslog
        })

    if args.gunicorn_config:
        gunicorn_config = os.path.abspath(args.gunicorn_config)
        options.update({
            'config': gunicorn_config
        })

    print """
    ___ _                ___  ___
   / __| |_  __ _ _ _   / _ \ / __| ___ _ ___ _____ _ _
   \__ \ ' \/ _` | '_| | (_) | \__ \/ -_) '_\ V / -_) '_|
   |___/_||_\__,_|_|    \__\_\ |___/\___|_|  \_/\___|_|

    Version: %s

    Listening on: %s
    """ % (__version__, bind)

    server = setup_server(sharq_config)
    SharQServerApplicationRunner(server.app, options).run()
[ "Exposes a CLI to configure the SharQ Server and runs the server.", "\n ___ _ ___ ___\n / __| |_ __ _ _ _ / _ \\ / __| ___ _ ___ _____ _ _\n \\__ \\ ' \\/ _` | '_| (_) | \\__ \\/ -_) '_\\ V / -_) '_|\n |___/_||_\\__,_|_| \\__\\_\\ |___/\\___|_| \\_/\\___|_|\n\n Version: %s\n\n Listening on: %s\n " ]
Please provide a description of the function:def setup_server(config_path): # configure the SharQ server server = SharQServer(config_path) # start the requeue loop gevent.spawn(server.requeue) return server
[ "Configure SharQ server, start the requeue loop\n and return the server." ]
Please provide a description of the function:def requeue(self): job_requeue_interval = float( self.config.get('sharq', 'job_requeue_interval')) while True: self.sq.requeue() gevent.sleep(job_requeue_interval / 1000.00)
[ "Loop endlessly and requeue expired jobs." ]
Please provide a description of the function:def _view_enqueue(self, queue_type, queue_id): response = { 'status': 'failure' } try: request_data = json.loads(request.data) except Exception, e: response['message'] = e.message return jsonify(**response), 400 request_data.update({ 'queue_type': queue_type, 'queue_id': queue_id }) try: response = self.sq.enqueue(**request_data) except Exception, e: response['message'] = e.message return jsonify(**response), 400 return jsonify(**response), 201
[ "Enqueues a job into SharQ." ]
Please provide a description of the function:def _view_dequeue(self, queue_type): response = { 'status': 'failure' } request_data = { 'queue_type': queue_type } try: response = self.sq.dequeue(**request_data) if response['status'] == 'failure': return jsonify(**response), 404 except Exception, e: response['message'] = e.message return jsonify(**response), 400 return jsonify(**response)
[ "Dequeues a job from SharQ." ]
Please provide a description of the function:def _view_finish(self, queue_type, queue_id, job_id): response = { 'status': 'failure' } request_data = { 'queue_type': queue_type, 'queue_id': queue_id, 'job_id': job_id } try: response = self.sq.finish(**request_data) if response['status'] == 'failure': return jsonify(**response), 404 except Exception, e: response['message'] = e.message return jsonify(**response), 400 return jsonify(**response)
[ "Marks a job as finished in SharQ." ]
Please provide a description of the function:def _view_interval(self, queue_type, queue_id): response = { 'status': 'failure' } try: request_data = json.loads(request.data) interval = request_data['interval'] except Exception, e: response['message'] = e.message return jsonify(**response), 400 request_data = { 'queue_type': queue_type, 'queue_id': queue_id, 'interval': interval } try: response = self.sq.interval(**request_data) if response['status'] == 'failure': return jsonify(**response), 404 except Exception, e: response['message'] = e.message return jsonify(**response), 400 return jsonify(**response)
[ "Updates the queue interval in SharQ." ]
Please provide a description of the function:def _view_metrics(self, queue_type, queue_id): response = { 'status': 'failure' } request_data = {} if queue_type: request_data['queue_type'] = queue_type if queue_id: request_data['queue_id'] = queue_id try: response = self.sq.metrics(**request_data) except Exception, e: response['message'] = e.message return jsonify(**response), 400 return jsonify(**response)
[ "Gets SharQ metrics based on the params." ]
Please provide a description of the function:def _view_clear_queue(self, queue_type, queue_id): response = { 'status': 'failure' } try: request_data = json.loads(request.data) except Exception, e: response['message'] = e.message return jsonify(**response), 400 request_data.update({ 'queue_type': queue_type, 'queue_id': queue_id }) try: response = self.sq.clear_queue(**request_data) except Exception, e: response['message'] = e.message return jsonify(**response), 400 return jsonify(**response)
[ "remove queueu from SharQ based on the queue_type and queue_id." ]
Please provide a description of the function:def _get_from_path(import_path): # type: (str) -> Callable module_name, obj_name = import_path.rsplit('.', 1) module = import_module(module_name) return getattr(module, obj_name)
[ "\n Kwargs:\n import_path: full import path (to a mock factory function)\n\n Returns:\n (the mock factory function)\n " ]
Please provide a description of the function:def register(func_path, factory=mock.MagicMock): # type: (str, Callable) -> Callable global _factory_map _factory_map[func_path] = factory def decorator(decorated_factory): _factory_map[func_path] = decorated_factory return decorated_factory return decorator
[ "\n Kwargs:\n func_path: import path to mock (as you would give to `mock.patch`)\n factory: function that returns a mock for the patched func\n\n Returns:\n (decorator)\n\n Usage:\n\n automock.register('path.to.func.to.mock') # default MagicMock\n automock.register('path.to.func.to.mock', CustomMockFactory)\n\n @automock.register('path.to.func.to.mock')\n def custom_mock(result):\n return mock.MagicMock(return_value=result)\n " ]
Please provide a description of the function:def start_patching(name=None): # type: (Optional[str]) -> None global _factory_map, _patchers, _mocks if _patchers and name is None: warnings.warn('start_patching() called again, already patched') _pre_import() if name is not None: factory = _factory_map[name] items = [(name, factory)] else: items = _factory_map.items() for name, factory in items: patcher = mock.patch(name, new=factory()) mocked = patcher.start() _patchers[name] = patcher _mocks[name] = mocked
[ "\n Initiate mocking of the functions listed in `_factory_map`.\n\n For this to work reliably all mocked helper functions should be imported\n and used like this:\n\n import dp_paypal.client as paypal\n res = paypal.do_paypal_express_checkout(...)\n\n (i.e. don't use `from dp_paypal.client import x` import style)\n\n Kwargs:\n name (Optional[str]): if given, only patch the specified path, else all\n defined default mocks\n " ]
Please provide a description of the function:def stop_patching(name=None): # type: (Optional[str]) -> None global _patchers, _mocks if not _patchers: warnings.warn('stop_patching() called again, already stopped') if name is not None: items = [(name, _patchers[name])] else: items = list(_patchers.items()) for name, patcher in items: patcher.stop() del _patchers[name] del _mocks[name]
[ "\n Finish the mocking initiated by `start_patching`\n\n Kwargs:\n name (Optional[str]): if given, only unpatch the specified path, else all\n defined default mocks\n " ]
Please provide a description of the function:def standardize_back(xs, offset, scale): try: offset = float(offset) except: raise ValueError('The argument offset is not None or float.') try: scale = float(scale) except: raise ValueError('The argument scale is not None or float.') try: xs = np.array(xs, dtype="float64") except: raise ValueError('The argument xs is not numpy array or similar.') return xs*scale + offset
[ "\n This is function for de-standarization of input series.\n\n **Args:**\n\n * `xs` : standardized input (1 dimensional array)\n\n * `offset` : offset to add (float).\n\n * `scale` : scale (float).\n \n **Returns:**\n\n * `x` : original (destandardised) series\n\n " ]
Please provide a description of the function:def standardize(x, offset=None, scale=None): if offset == None: offset = np.array(x).mean() else: try: offset = float(offset) except: raise ValueError('The argument offset is not None or float') if scale == None: scale = np.array(x).std() else: try: scale = float(scale) except: raise ValueError('The argument scale is not None or float') try: x = np.array(x, dtype="float64") except: raise ValueError('The argument x is not numpy array or similar.') return (x - offset) / scale
[ " \n This is function for standarization of input series.\n\n **Args:**\n\n * `x` : series (1 dimensional array)\n\n **Kwargs:**\n\n * `offset` : offset to remove (float). If not given, \\\n the mean value of `x` is used.\n\n * `scale` : scale (float). If not given, \\\n the standard deviation of `x` is used.\n \n **Returns:**\n\n * `xs` : standardized series\n " ]
Please provide a description of the function:def input_from_history(a, n, bias=False): if not type(n) == int: raise ValueError('The argument n must be int.') if not n > 0: raise ValueError('The argument n must be greater than 0') try: a = np.array(a, dtype="float64") except: raise ValueError('The argument a is not numpy array or similar.') x = np.array([a[i:i+n] for i in range(len(a)-n+1)]) if bias: x = np.vstack((x.T, np.ones(len(x)))).T return x
[ "\n This is function for creation of input matrix.\n\n **Args:**\n\n * `a` : series (1 dimensional array)\n\n * `n` : size of input matrix row (int). It means how many samples \\\n of previous history you want to use \\\n as the filter input. It also represents the filter length.\n\n **Kwargs:**\n\n * `bias` : decides if the bias is used (Boolean). If True, \\\n array of all ones is appended as a last column to matrix `x`. \\\n So matrix `x` has `n`+1 columns.\n\n **Returns:**\n\n * `x` : input matrix (2 dimensional array) \\\n constructed from an array `a`. The length of `x` \\\n is calculated as length of `a` - `n` + 1. \\\n If the `bias` is used, then the amount of columns is `n` if not then \\\n amount of columns is `n`+1).\n\n " ]
Please provide a description of the function:def init_weights(self, w, n=-1): if n == -1: n = self.n if type(w) == str: if w == "random": w = np.random.normal(0, 0.5, n) elif w == "zeros": w = np.zeros(n) else: raise ValueError('Impossible to understand the w') elif len(w) == n: try: w = np.array(w, dtype="float64") except: raise ValueError('Impossible to understand the w') else: raise ValueError('Impossible to understand the w') self.w = w
[ "\n This function initialises the adaptive weights of the filter.\n\n **Args:**\n\n * `w` : initial weights of filter. Possible values are:\n \n * array with initial weights (1 dimensional array) of filter size\n \n * \"random\" : create random weights\n \n * \"zeros\" : create zero value weights\n\n \n **Kwargs:**\n\n * `n` : size of filter (int) - number of filter coefficients.\n\n **Returns:**\n\n * `y` : output value (float) calculated from input array.\n\n " ]
Please provide a description of the function:def predict(self, x): y = np.dot(self.w, x) return y
[ "\n This function calculates the new output value `y` from input array `x`.\n\n **Args:**\n\n * `x` : input vector (1 dimension array) in length of filter.\n\n **Returns:**\n\n * `y` : output value (float) calculated from input array.\n\n " ]
Please provide a description of the function:def pretrained_run(self, d, x, ntrain=0.5, epochs=1): Ntrain = int(len(d)*ntrain) # train for epoch in range(epochs): self.run(d[:Ntrain], x[:Ntrain]) # test y, e, w = self.run(d[Ntrain:], x[Ntrain:]) return y, e, w
[ "\n This function sacrifices part of the data for few epochs of learning.\n \n **Args:**\n\n * `d` : desired value (1 dimensional array)\n\n * `x` : input matrix (2-dimensional array). Rows are samples,\n columns are input arrays.\n \n **Kwargs:**\n\n * `ntrain` : train to test ratio (float), default value is 0.5\n (that means 50% of data is used for training)\n \n * `epochs` : number of training epochs (int), default value is 1.\n This number describes how many times the training will be repeated\n on dedicated part of data.\n\n **Returns:**\n \n * `y` : output value (1 dimensional array).\n The size corresponds with the desired value.\n\n * `e` : filter error for every sample (1 dimensional array).\n The size corresponds with the desired value.\n\n * `w` : vector of final weights (1 dimensional array). \n " ]
Please provide a description of the function:def explore_learning(self, d, x, mu_start=0, mu_end=1., steps=100, ntrain=0.5, epochs=1, criteria="MSE", target_w=False): mu_range = np.linspace(mu_start, mu_end, steps) errors = np.zeros(len(mu_range)) for i, mu in enumerate(mu_range): # init self.init_weights("zeros") self.mu = mu # run y, e, w = self.pretrained_run(d, x, ntrain=ntrain, epochs=epochs) if type(target_w) != bool: errors[i] = get_mean_error(w[-1]-target_w, function=criteria) else: errors[i] = get_mean_error(e, function=criteria) return errors, mu_range
[ "\n Test what learning rate is the best.\n\n **Args:**\n\n * `d` : desired value (1 dimensional array)\n\n * `x` : input matrix (2-dimensional array). Rows are samples,\n columns are input arrays.\n \n **Kwargs:**\n \n * `mu_start` : starting learning rate (float)\n \n * `mu_end` : final learning rate (float)\n \n * `steps` : how many learning rates should be tested between `mu_start`\n and `mu_end`.\n\n * `ntrain` : train to test ratio (float), default value is 0.5\n (that means 50% of data is used for training)\n \n * `epochs` : number of training epochs (int), default value is 1.\n This number describes how many times the training will be repeated\n on dedicated part of data.\n \n * `criteria` : how should be measured the mean error (str),\n default value is \"MSE\".\n \n * `target_w` : target weights (str or 1d array), default value is False.\n If False, the mean error is estimated from prediction error.\n If an array is provided, the error between weights and `target_w`\n is used.\n\n **Returns:**\n \n * `errors` : mean error for tested learning rates (1 dimensional array).\n\n * `mu_range` : range of used learning rates (1d array). Every value\n corresponds with one value from `errors`\n\n " ]
Please provide a description of the function:def check_float_param(self, param, low, high, name): try: param = float(param) except: raise ValueError( 'Parameter {} is not float or similar'.format(name) ) if low != None or high != None: if not low <= param <= high: raise ValueError('Parameter {} is not in range <{}, {}>' .format(name, low, high)) return param
[ "\n Check if the value of the given parameter is in the given range\n and a float.\n Designed for testing parameters like `mu` and `eps`.\n To pass this function the variable `param` must be able to be converted\n into a float with a value between `low` and `high`.\n\n **Args:**\n\n * `param` : parameter to check (float or similar)\n\n * `low` : lowest allowed value (float), or None\n\n * `high` : highest allowed value (float), or None\n\n * `name` : name of the parameter (string), it is used for an error message\n \n **Returns:**\n\n * `param` : checked parameter converted to float\n\n " ]
Please provide a description of the function:def check_int(self, param, error_msg): if type(param) == int: return int(param) else: raise ValueError(error_msg)
[ "\n This function check if the parameter is int.\n If yes, the function returns the parameter,\n if not, it raises error message.\n \n **Args:**\n \n * `param` : parameter to check (int or similar)\n\n * `error_ms` : lowest allowed value (int), or None \n \n **Returns:**\n \n * `param` : parameter (int)\n " ]
Please provide a description of the function:def check_int_param(self, param, low, high, name):
    try:
        param = int(param)
    except:
        raise ValueError(
            'Parameter {} is not int or similar'.format(name)
            )
    if low != None or high != None:
        # compare against each bound only when it is given, so that a
        # one-sided range (low or high equal to None) works as documented
        if (low != None and param < low) or (high != None and param > high):
            raise ValueError('Parameter {} is not in range <{}, {}>'
                .format(name, low, high))
    return param
[ "\n Check if the value of the given parameter is in the given range\n and an int.\n Designed for testing parameters like `mu` and `eps`.\n To pass this function the variable `param` must be able to be converted\n into a float with a value between `low` and `high`.\n\n **Args:**\n\n * `param` : parameter to check (int or similar)\n\n * `low` : lowest allowed value (int), or None\n\n * `high` : highest allowed value (int), or None\n\n * `name` : name of the parameter (string), it is used for an error message\n \n **Returns:**\n\n * `param` : checked parameter converted to float\n\n " ]
Please provide a description of the function:def adapt(self, d, x): y = np.dot(self.w, x) e = d - y nu = self.mu / (self.eps + np.dot(x, x)) self.w += nu * x * e**3
[ "\n Adapt weights according one desired value and its input.\n\n **Args:**\n\n * `d` : desired value (float)\n\n * `x` : input array (1-dimensional array)\n " ]
Please provide a description of the function:def run(self, d, x):
    # measure the data and check if the dimensions agree
    N = len(x)
    if not len(d) == N:
        raise ValueError('The length of vector d and matrix x must agree.')
    self.n = len(x[0])
    # prepare data
    try:
        x = np.array(x)
        d = np.array(d)
    except:
        raise ValueError('Impossible to convert x or d to a numpy array')
    # create empty arrays
    y = np.zeros(N)
    e = np.zeros(N)
    self.w_history = np.zeros((N,self.n))
    # adaptation loop
    for k in range(N):
        self.w_history[k,:] = self.w
        y[k] = np.dot(self.w, x[k])
        e[k] = d[k] - y[k]
        nu = self.mu / (self.eps + np.dot(x[k], x[k]))
        dw = nu * x[k] * e[k]**3
        self.w += dw
    return y, e, self.w_history
[ "\n This function filters multiple samples in a row.\n\n **Args:**\n\n * `d` : desired value (1 dimensional array)\n\n * `x` : input matrix (2-dimensional array). Rows are samples,\n columns are input arrays.\n\n **Returns:**\n\n * `y` : output value (1 dimensional array).\n The size corresponds with the desired value.\n\n * `e` : filter error for every sample (1 dimensional array).\n The size corresponds with the desired value.\n\n * `w` : history of all weights (2 dimensional array).\n Every row is set of the weights for given sample.\n " ]
Please provide a description of the function:def get_valid_error(x1, x2=-1): # just error if type(x2) == int and x2 == -1: try: e = np.array(x1) except: raise ValueError('Impossible to convert series to a numpy array') # two series else: try: x1 = np.array(x1) x2 = np.array(x2) except: raise ValueError('Impossible to convert one of series to a numpy array') if not len(x1) == len(x2): raise ValueError('The length of both series must agree.') e = x1 - x2 return e
[ "\n Function that validates:\n\n * x1 is possible to convert to numpy array\n\n * x2 is possible to convert to numpy array (if exists)\n\n * x1 and x2 have the same length (if both exist)\n " ]
Please provide a description of the function:def logSE(x1, x2=-1): e = get_valid_error(x1, x2) return 10*np.log10(e**2)
[ "\n 10 * log10(e**2) \n This function accepts two series of data or directly\n one series with error.\n\n **Args:**\n\n * `x1` - first data series or error (1d array)\n\n **Kwargs:**\n\n * `x2` - second series (1d array) if first series was not error directly,\\\\\n then this should be the second series\n\n **Returns:**\n\n * `e` - logSE of error (1d array) obtained directly from `x1`, \\\\\n or as a difference of `x1` and `x2`. The values are in dB!\n\n " ]
Please provide a description of the function:def MAE(x1, x2=-1): e = get_valid_error(x1, x2) return np.sum(np.abs(e)) / float(len(e))
[ "\n Mean absolute error - this function accepts two series of data or directly\n one series with error.\n\n **Args:**\n\n * `x1` - first data series or error (1d array)\n\n **Kwargs:**\n\n * `x2` - second series (1d array) if first series was not error directly,\\\\\n then this should be the second series\n\n **Returns:**\n\n * `e` - MAE of error (float) obtained directly from `x1`, \\\\\n or as a difference of `x1` and `x2`\n\n " ]
Please provide a description of the function:def MSE(x1, x2=-1): e = get_valid_error(x1, x2) return np.dot(e, e) / float(len(e))
[ "\n Mean squared error - this function accepts two series of data or directly\n one series with error.\n\n **Args:**\n\n * `x1` - first data series or error (1d array)\n\n **Kwargs:**\n\n * `x2` - second series (1d array) if first series was not error directly,\\\\\n then this should be the second series\n\n **Returns:**\n\n * `e` - MSE of error (float) obtained directly from `x1`, \\\\\n or as a difference of `x1` and `x2`\n\n " ]
Please provide a description of the function:def RMSE(x1, x2=-1): e = get_valid_error(x1, x2) return np.sqrt(np.dot(e, e) / float(len(e)))
[ "\n Root-mean-square error - this function accepts two series of data\n or directly one series with error.\n\n **Args:**\n\n * `x1` - first data series or error (1d array)\n\n **Kwargs:**\n\n * `x2` - second series (1d array) if first series was not error directly,\\\\\n then this should be the second series\n\n **Returns:**\n\n * `e` - RMSE of error (float) obtained directly from `x1`, \\\\\n or as a difference of `x1` and `x2`\n\n " ]
Please provide a description of the function:def get_mean_error(x1, x2=-1, function="MSE"): if function == "MSE": return MSE(x1, x2) elif function == "MAE": return MAE(x1, x2) elif function == "RMSE": return RMSE(x1, x2) else: raise ValueError('The provided error function is not known')
[ "\n This function returns desired mean error. Options are: MSE, MAE, RMSE\n \n **Args:**\n\n * `x1` - first data series or error (1d array)\n\n **Kwargs:**\n\n * `x2` - second series (1d array) if first series was not error directly,\\\\\n then this should be the second series\n\n **Returns:**\n\n * `e` - mean error value (float) obtained directly from `x1`, \\\\\n or as a difference of `x1` and `x2`\n " ]
Please provide a description of the function:def ELBND(w, e, function="max"):
    # check if the function is known
    if function not in ["max", "sum"]:
        raise ValueError('Unknown output function')
    # get length of data and number of parameters
    N = w.shape[0]
    n = w.shape[1]
    # get abs dw from w
    dw = np.zeros(w.shape)
    dw[:-1] = np.abs(np.diff(w, axis=0))
    # absolute values of product of increments and error
    elbnd = np.abs((dw.T*e).T)
    # apply output function
    if function == "max":
        elbnd = np.max(elbnd, axis=1)
    elif function == "sum":
        elbnd = np.sum(elbnd, axis=1)
    # return output
    return elbnd
[ "\n This function estimates Error and Learning Based Novelty Detection measure\n from given data.\n\n **Args:**\n\n * `w` : history of adaptive parameters of an adaptive model (2d array),\n every row represents parameters in given time index.\n\n * `e` : error of adaptive model (1d array)\n\n **Kwargs:**\n\n * `functions` : output function (str). The way how to produce single\n value for every sample (from all parameters)\n \n * `max` - maximal value\n \n * `sum` - sum of values\n\n **Returns:**\n\n * ELBND values (1d array). This vector has same lenght as `w`.\n\n " ]