Dataset columns (per record, in order):
code: string, 66 to 870k characters
docstring: string, 19 to 26.7k characters
func_name: string, 1 to 138 characters
language: string, 1 distinct value
repo: string, 7 to 68 characters
path: string, 5 to 324 characters
url: string, 46 to 389 characters
license: string, 7 distinct values
def getOutputWeightMatrix(self):
    """ Returns the weight matrix of the output layer's input connection. """
    c = self.getOutputConnection()
    p = c.getParameters()
    return reshape(p, (c.outdim, c.indim))
Returns the weight matrix of the output layer's input connection.
getOutputWeightMatrix
python
pybrain/pybrain
pybrain/supervised/evolino/networkwrapper.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/evolino/networkwrapper.py
BSD-3-Clause
def injectBackproject(self, injection):
    """ Injects a vector into the recurrent connection. This will be used
        in the Evolino training phase, where the target values need to be
        backprojected instead of the real output of the net.

        :key injection: vector of length self.network.outdim
    """
    outlayer = self.getOutputLayer()
    outlayer.outputbuffer[self.network.offset - 1][:] = injection
Injects a vector into the recurrent connection. This will be used in the Evolino training phase, where the target values need to be backprojected instead of the real output of the net. :key injection: vector of length self.network.outdim
injectBackproject
python
pybrain/pybrain
pybrain/supervised/evolino/networkwrapper.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/evolino/networkwrapper.py
BSD-3-Clause
def getOutputConnection(self):
    """ Returns the input connection of the output layer. """
    if self._output_connection is None:
        outlayer = self.getOutputLayer()
        lastlayer = self.getLastHiddenLayer()
        for c in self.getConnections():
            if c.outmod is outlayer:
                assert c.inmod is lastlayer
                self._output_connection = c
    return self._output_connection
Returns the input connection of the output layer.
getOutputConnection
python
pybrain/pybrain
pybrain/supervised/evolino/networkwrapper.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/evolino/networkwrapper.py
BSD-3-Clause
def _getInputConnectionsOfLayer(self, layer):
    """ Returns a list of all input connections for the layer. """
    connections = []
    for c in sum(list(self.network.connections.values()), []):
        if c.outmod is layer:
            if not isinstance(c, FullConnection):
                raise NotImplementedError(
                    "Only FullConnection is supported at this time")
            connections.append(c)
    return connections
Returns a list of all input connections for the layer.
_getInputConnectionsOfLayer
python
pybrain/pybrain
pybrain/supervised/evolino/networkwrapper.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/evolino/networkwrapper.py
BSD-3-Clause
def getHiddenLayers(self):
    """ Returns a list of all hidden layers. """
    layers = []
    network = self.network
    for m in network.modules:
        if m not in network.inmodules and m not in network.outmodules:
            layers.append(m)
    return layers
Returns a list of all hidden layers.
getHiddenLayers
python
pybrain/pybrain
pybrain/supervised/evolino/networkwrapper.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/evolino/networkwrapper.py
BSD-3-Clause
def __init__(self, individual, subPopulationSize, nCombinations=1,
             valueInitializer=Randomization(-0.1, 0.1), **kwargs):
    """ :key individual: A prototype individual which is used to determine
            the structure of the genome.
        :key subPopulationSize: integer describing the size of the
            subpopulations
    """
    Population.__init__(self)
    self._subPopulations = []
    self.nCombinations = nCombinations
    ap = KWArgsProcessor(self, kwargs)
    ap.add('verbosity', default=0)
    genome = individual.getGenome()
    for chromosome in genome:
        self._subPopulations.append(
            EvolinoSubPopulation(chromosome, subPopulationSize,
                                 valueInitializer))
:key individual: A prototype individual which is used to determine the structure of the genome. :key subPopulationSize: integer describing the size of the subpopulations
__init__
python
pybrain/pybrain
pybrain/supervised/evolino/population.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/evolino/population.py
BSD-3-Clause
def getIndividuals(self):
    """ Returns a set of individuals of type EvolinoIndividual. The
        individuals are generated on the fly.

        Note that each subpopulation has the same size, so the number of
        resulting EvolinoIndividuals is nCombinations * subPopulationSize,
        since each chromosome of each subpopulation is assembled
        nCombinations times.

        The subpopulation container is a sequence with strict order. This
        sequence is iterated subPopulationSize times. In each iteration one
        random EvolinoSubIndividual is taken from each subpopulation. After
        each iteration the resulting sequence of subindividuals is supplied
        to the constructor of a new EvolinoIndividual. All
        EvolinoIndividuals are collected in a set, which is finally
        returned.
    """
    assert len(self._subPopulations)
    individuals = set()
    for _ in range(self.nCombinations):
        subIndividualsList = [list(sp.getIndividuals())
                              for sp in self._subPopulations]
        nIndividuals = len(subIndividualsList[0])
        for _ in range(nIndividuals):
            subIndividualCombination = []
            for subIndividuals in subIndividualsList:
                sub_individual = subIndividuals.pop(
                    randrange(len(subIndividuals)))
                subIndividualCombination.append(sub_individual)
            individuals.add(EvolinoIndividual(subIndividualCombination))
    return individuals
Returns a set of individuals of type EvolinoIndividual. The individuals are generated on the fly. Note that each subpopulation has the same size, so the number of resulting EvolinoIndividuals is nCombinations * subPopulationSize, since each chromosome of each subpopulation is assembled nCombinations times. The subpopulation container is a sequence with strict order. This sequence is iterated subPopulationSize times. In each iteration one random EvolinoSubIndividual is taken from each subpopulation. After each iteration the resulting sequence of subindividuals is supplied to the constructor of a new EvolinoIndividual. All EvolinoIndividuals are collected in a set, which is finally returned.
getIndividuals
python
pybrain/pybrain
pybrain/supervised/evolino/population.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/evolino/population.py
BSD-3-Clause
def setIndividualFitness(self, individual, fitness):
    """ The fitness value is not stored directly inside this population,
        but is propagated to the subpopulations of all the subindividuals
        of which the individual consists. The individual's fitness value
        is only adjusted if it is bigger than the old value. To reset
        these values use clearFitness().
    """
    # additive fitness distribution
    # subIndividuals = individual.getSubIndividuals()
    # for i, sp in enumerate(self._subPopulations):
    #     sp.addIndividualFitness(subIndividuals[i], fitness)

    # max fitness distribution
    subIndividuals = individual.getSubIndividuals()
    for i, sp in enumerate(self._subPopulations):
        sub_individual = subIndividuals[i]
        old_fitness = sp.getIndividualFitness(sub_individual)
        if old_fitness < fitness:
            sp.setIndividualFitness(sub_individual, fitness)
The fitness value is not stored directly inside this population, but is propagated to the subpopulations of all the subindividuals of which the individual consists. The individual's fitness value is only adjusted if it is bigger than the old value. To reset these values use clearFitness().
setIndividualFitness
python
pybrain/pybrain
pybrain/supervised/evolino/population.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/evolino/population.py
BSD-3-Clause
def __init__(self, chromosome, maxNIndividuals,
             valueInitializer=Randomization(-0.1, 0.1), **kwargs):
    """ :key chromosome: The prototype chromosome
        :key maxNIndividuals: The maximum allowed number of individuals
    """
    SimplePopulation.__init__(self)
    self._prototype = EvolinoSubIndividual(chromosome)
    self._maxNIndividuals = maxNIndividuals
    self._valueInitializer = valueInitializer
    self.setArgs(**kwargs)
    for _ in range(maxNIndividuals):
        self.addIndividual(self._prototype.copy())
    self._valueInitializer.apply(self)
:key chromosome: The prototype chromosome :key maxNIndividuals: The maximum allowed number of individuals
__init__
python
pybrain/pybrain
pybrain/supervised/evolino/population.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/evolino/population.py
BSD-3-Clause
def __init__(self, min_val=0., max_val=1.):
    """ Initializes the uniform variate with a min and a max value. """
    self._min_val = min_val
    self._max_val = max_val
Initializes the uniform variate with a min and a max value.
__init__
python
pybrain/pybrain
pybrain/supervised/evolino/variate.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/evolino/variate.py
BSD-3-Clause
def __init__(self, x0=0., alpha=1.):
    """ :key x0: Median and mode of the Cauchy distribution
        :key alpha: scale
    """
    self.x0 = x0
    self.alpha = alpha
:key x0: Median and mode of the Cauchy distribution :key alpha: scale
__init__
python
pybrain/pybrain
pybrain/supervised/evolino/variate.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/evolino/variate.py
BSD-3-Clause
def __init__(self, x0=0., alpha=1.):
    """ :key x0: Mean
        :key alpha: standard deviation
    """
    self.x0 = x0
    self.alpha = alpha
:key x0: Mean :key alpha: standard deviation
__init__
python
pybrain/pybrain
pybrain/supervised/evolino/variate.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/evolino/variate.py
BSD-3-Clause
def arrayPermutation(permutation):
    """Return a permutation function. The function permutes any array as
    specified by the supplied permutation.
    """
    assert permutation.ndim == 1, \
        "Only one-dimensional permutation arrays are supported"

    def permute(arr):
        assert arr.ndim == 1, "Only one-dimensional arrays are supported"
        assert arr.shape == permutation.shape, "Array shapes don't match"
        return array([arr[i] for i in permutation])
    return permute
Return a permutation function. The function permutes any array as specified by the supplied permutation.
arrayPermutation
python
pybrain/pybrain
pybrain/supervised/knn/lsh/minhash.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/knn/lsh/minhash.py
BSD-3-Clause
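A quick sketch of the returned closure in action; the import path is taken from this record's path field and is otherwise an assumption of this example.

from numpy import array
from pybrain.supervised.knn.lsh.minhash import arrayPermutation

permute = arrayPermutation(array([2, 0, 1]))
# element i of the output is taken from index permutation[i] of the input
print(permute(array([10, 20, 30])))  # -> [30 10 20]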
def jacardCoefficient(a, b):
    """Return the Jaccard coefficient of a and b.

    The Jaccard coefficient is defined here as the overlap between two
    sets: the number of equal elements divided by the size of the sets.
    Mind that a and b must be in Hamming space, so every element must be
    either 1 or 0.
    """
    if a.shape != b.shape:
        raise ValueError("Arrays must be of same shape")
    length = a.shape[0]
    a = a.astype(bool)
    b = b.astype(bool)
    return float((a == b).sum()) / length
Return the Jaccard coefficient of a and b. The Jaccard coefficient is defined here as the overlap between two sets: the number of equal elements divided by the size of the sets. Mind that a and b must be in Hamming space, so every element must be either 1 or 0.
jacardCoefficient
python
pybrain/pybrain
pybrain/supervised/knn/lsh/minhash.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/knn/lsh/minhash.py
BSD-3-Clause
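A worked example of the formula actually implemented above. Note that counting 0-0 agreements as matches makes this the simple matching coefficient rather than the set-overlap Jaccard index in the strict textbook sense; the snippet only illustrates the computation as written.

import numpy as np

a = np.array([1, 0, 1, 1])
b = np.array([1, 1, 1, 0])
# positions 0 and 2 agree -> 2 matching positions out of 4
print(float((a.astype(bool) == b.astype(bool)).sum()) / len(a))  # 0.5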
def __init__(self, dim, nPermutations):
    """Create a hash structure that can hold arrays of size dim and
    hashes with nPermutations permutations. The number of buckets is
    dim * nPermutations."""
    self.dim = dim
    self.permutations = array([permutation(dim)
                               for _ in range(nPermutations)])
    self.buckets = defaultdict(lambda: [])
Create a hash structure that can hold arrays of size dim and hashes with nPermutations permutations. The number of buckets is dim * nPermutations.
__init__
python
pybrain/pybrain
pybrain/supervised/knn/lsh/minhash.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/knn/lsh/minhash.py
BSD-3-Clause
def _firstOne(self, arr):
    """Return the index of the first 1 in the array, or len(arr) if the
    array contains no 1."""
    for i, elem in enumerate(arr):
        if elem == 1:
            return i
    # No 1 found: return the array length as an out-of-range marker.
    return len(arr)
Return the index of the first 1 in the array, or len(arr) if the array contains no 1.
_firstOne
python
pybrain/pybrain
pybrain/supervised/knn/lsh/minhash.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/knn/lsh/minhash.py
BSD-3-Clause
def _hash(self, item):
    """Return a hash for item based on the internal permutations.

    That hash is a tuple of ints.
    """
    self._checkItem(item)
    result = []
    for perm in self._permFuncs:
        permuted = perm(item)
        result.append(self._firstOne(permuted))
    return tuple(result)
Return a hash for item based on the internal permutations. That hash is a tuple of ints.
_hash
python
pybrain/pybrain
pybrain/supervised/knn/lsh/minhash.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/knn/lsh/minhash.py
BSD-3-Clause
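A small stand-alone illustration of how one component of the hash tuple is computed: permute the binary item, then take the index of the first 1. The permutation used here is arbitrary.

from numpy import array

item = array([0, 1, 0, 1])
perm = array([3, 2, 1, 0])                  # one fixed index permutation
permuted = array([item[i] for i in perm])   # -> [1, 0, 1, 0]
first_one = next(i for i, e in enumerate(permuted) if e == 1)
print(first_one)  # 0; with several permutations these indices form the tuple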
def put(self, item, satellite):
    """Put an item into the hash structure and attach any object
    satellite to it."""
    self._checkItem(item)
    item = item.astype(bool)
    bucket = self._hash(item)
    self.buckets[bucket].append((item, satellite))
Put an item into the hash structure and attach any object satellite to it.
put
python
pybrain/pybrain
pybrain/supervised/knn/lsh/minhash.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/knn/lsh/minhash.py
BSD-3-Clause
def knn(self, item, k):
    """Return the k nearest neighbours of the item in the current hash.

    Mind that, due to the probabilistic nature of the data structure, a
    nearest neighbour might not be returned at all.
    """
    self._checkItem(item)
    candidates = self.buckets[self._hash(item)]
    candidates.sort(key=lambda x: jacardCoefficient(x[0], item),
                    reverse=True)
    return candidates[:k]
Return the k nearest neighbours of the item in the current hash. Mind that, due to the probabilistic nature of the data structure, a nearest neighbour might not be returned at all.
knn
python
pybrain/pybrain
pybrain/supervised/knn/lsh/minhash.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/knn/lsh/minhash.py
BSD-3-Clause
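A hedged end-to-end sketch of the structure these methods belong to, assuming the MinHash class defined in pybrain/supervised/knn/lsh/minhash.py with the constructor shown above; being an LSH scheme, knn may return fewer than k items.

from numpy import array
from pybrain.supervised.knn.lsh.minhash import MinHash

mh = MinHash(dim=8, nPermutations=4)
mh.put(array([1, 0, 1, 0, 1, 0, 1, 0]), 'even')
mh.put(array([0, 1, 0, 1, 0, 1, 0, 1]), 'odd')
# query with a vector close to the first item; the result is a list of
# (item, satellite) pairs, possibly empty due to the probabilistic hashing
print(mh.knn(array([1, 0, 1, 0, 1, 0, 0, 0]), 1))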
def __init__(self, dim, omega=4, prob=0.8):
    """Create a hash for arrays of dimension dim.

    The hyperspace will be split into hypercubes with a sidelength of
    omega * sqrt(sqrt(dim)), that is omega * radius. Every point in the
    dim-dimensional euclidean space will be hashed to its correct bucket
    with a probability of prob.
    """
    message = ("Creating Hash with %i dimensions, sidelength %.2f and "
               + "cNN-probability %.2f") % (dim, omega, prob)
    logging.debug(message)
    self.dim = dim
    self.omega = omega
    self.prob = prob
    self.radius = sqrt(sqrt(min(dim, self.lowerDimensionBound)))
    logging.debug("Radius set to %.2f" % self.radius)
    self._initializeGrids()
    self._initializeProjection()
    self.balls = defaultdict(lambda: [])
Create a hash for arrays of dimension dim. The hyperspace will be split into hypercubes with a sidelength of omega * sqrt(sqrt(dim)), that is omega * radius. Every point in the dim-dimensional euclidean space will be hashed to its correct bucket with a probability of prob.
__init__
python
pybrain/pybrain
pybrain/supervised/knn/lsh/nearoptimal.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/knn/lsh/nearoptimal.py
BSD-3-Clause
def _findHypercube(self, point):
    """Return the hypercube a point lies in, and where in that hypercube
    it lies.

    The result is a pair of two arrays. The first array is an array of
    integers that indicate the multidimensional index of the hypercube
    the point is in. The second array is an array of floats, specifying
    the coordinates of the point within that hypercube.
    """
    offset = self.omega * self.radius
    divmods = (divmod(p, offset) for p in point)
    hypercube_indices, relative_point = [], []
    for index, rest in divmods:
        hypercube_indices.append(index)
        relative_point.append(rest)
    return array(hypercube_indices, dtype=int), array(relative_point)
Return the hypercube a point lies in, and where in that hypercube it lies. The result is a pair of two arrays. The first array is an array of integers that indicate the multidimensional index of the hypercube the point is in. The second array is an array of floats, specifying the coordinates of the point within that hypercube.
_findHypercube
python
pybrain/pybrain
pybrain/supervised/knn/lsh/nearoptimal.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/knn/lsh/nearoptimal.py
BSD-3-Clause
def _findLocalBall_noinline(self, point):
    """Return the index of the ball that the point lies in."""
    for i, ball in enumerate(self.gridBalls):
        distance = point - ball
        if dot(distance.T, distance) <= self.radiusSquared:
            return i
Return the index of the ball that the point lies in.
_findLocalBall_noinline
python
pybrain/pybrain
pybrain/supervised/knn/lsh/nearoptimal.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/knn/lsh/nearoptimal.py
BSD-3-Clause
def _findLocalBall_inline(self, point):
    """Return the index of the ball that the point lies in."""
    balls = self.gridBalls
    nBalls, dim = balls.shape           #@UnusedVariable
    radiusSquared = self.radiusSquared  #@UnusedVariable

    code = """
    #line 121 "nearoptimal.py"
    return_val = -1;
    for (long i = 0; i < nBalls; i++)
    {
        double distance = 0.0;
        for (long j = 0; j < dim; j++)
        {
            double diff = balls(i, j) - point(j);
            distance += diff * diff;
        }
        if (distance <= radiusSquared) {
            return_val = i;
            break;
        }
    }
    """

    variables = 'point', 'balls', 'nBalls', 'dim', 'radiusSquared',
    result = weave.inline(
        code,
        variables,
        type_converters=weave.converters.blitz,
        compiler='gcc')
    return result if result != -1 else None
Return the index of the ball that the point lies in.
_findLocalBall_inline
python
pybrain/pybrain
pybrain/supervised/knn/lsh/nearoptimal.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/knn/lsh/nearoptimal.py
BSD-3-Clause
def insert(self, point, satellite):
    """Put a point and its satellite information into the hash
    structure."""
    point = dot(self.projection, point)
    index = self.findBall(point)
    self.balls[index].append((point, satellite))
Put a point and its satellite information into the hash structure.
insert
python
pybrain/pybrain
pybrain/supervised/knn/lsh/nearoptimal.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/knn/lsh/nearoptimal.py
BSD-3-Clause
def _findKnnCandidates(self, point):
    """Return a set of candidates that might be nearest neighbours of a
    query point."""
    index = self.findBall(point)
    logging.debug("Found %i candidates for cNN" % len(self.balls[index]))
    return self.balls[index]
Return a set of candidates that might be nearest neighbours of a query point.
_findKnnCandidates
python
pybrain/pybrain
pybrain/supervised/knn/lsh/nearoptimal.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/knn/lsh/nearoptimal.py
BSD-3-Clause
def knn(self, point, k):
    """Return the k approximate nearest neighbours of the item in the
    current hash. Mind that, due to the probabilistic nature of the data
    structure, a nearest neighbour might not be returned at all, and the
    ones returned need not be the true nearest neighbours."""
    candidates = self._findKnnCandidates(point)

    def sortKey(candidate):
        point_, satellite_ = candidate  # the satellite is ignored here
        distance = point - point_
        return -dot(distance.T, distance)

    return nlargest(k, candidates, key=sortKey)
Return the k approximate nearest neighbours of the item in the current hash. Mind that, due to the probabilistic nature of the data structure, a nearest neighbour might not be returned at all, and the ones returned need not be the true nearest neighbours.
knn
python
pybrain/pybrain
pybrain/supervised/knn/lsh/nearoptimal.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/knn/lsh/nearoptimal.py
BSD-3-Clause
def __init__(self, module, dataset=None, learningrate=0.01, lrdecay=1.0,
             momentum=0., verbose=False, batchlearning=False,
             weightdecay=0.):
    """Create a BackpropTrainer to train the specified `module` on the
    specified `dataset`.

    The learning rate gives the ratio by which parameters are changed in
    the direction of the gradient. The learning rate decreases by
    `lrdecay`, which is used to multiply the learning rate after each
    training step. The parameters are also adjusted with respect to
    `momentum`, which is the ratio by which the gradient of the last
    timestep is used.

    If `batchlearning` is set, the parameters are updated only at the end
    of each epoch. Default is False.

    `weightdecay` corresponds to the weight decay rate, where 0 is no
    weight decay at all.
    """
    Trainer.__init__(self, module)
    self.setData(dataset)
    self.verbose = verbose
    self.batchlearning = batchlearning
    self.weightdecay = weightdecay
    self.epoch = 0
    self.totalepochs = 0
    # set up gradient descender
    self.descent = GradientDescent()
    self.descent.alpha = learningrate
    self.descent.momentum = momentum
    self.descent.alphadecay = lrdecay
    self.descent.init(module.params)
Create a BackpropTrainer to train the specified `module` on the specified `dataset`. The learning rate gives the ratio by which parameters are changed in the direction of the gradient. The learning rate decreases by `lrdecay`, which is used to multiply the learning rate after each training step. The parameters are also adjusted with respect to `momentum`, which is the ratio by which the gradient of the last timestep is used. If `batchlearning` is set, the parameters are updated only at the end of each epoch. Default is False. `weightdecay` corresponds to the weight decay rate, where 0 is no weight decay at all.
__init__
python
pybrain/pybrain
pybrain/supervised/trainers/backprop.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/backprop.py
BSD-3-Clause
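For orientation, a minimal usage sketch in the style of the pybrain tutorials (XOR); buildNetwork and SupervisedDataSet come from the standard pybrain API and are not part of this record.

from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

net = buildNetwork(2, 3, 1)          # 2 inputs, 3 hidden units, 1 output
ds = SupervisedDataSet(2, 1)
for inp, target in [((0, 0), (0,)), ((0, 1), (1,)),
                    ((1, 0), (1,)), ((1, 1), (0,))]:
    ds.addSample(inp, target)

trainer = BackpropTrainer(net, ds, learningrate=0.01, momentum=0.9)
for _ in range(100):
    err = trainer.train()            # one epoch; returns the average error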
def train(self):
    """Train the associated module for one epoch."""
    assert len(self.ds) > 0, "Dataset cannot be empty."
    self.module.resetDerivatives()
    errors = 0
    ponderation = 0.
    shuffledSequences = []
    for seq in self.ds._provideSequences():
        shuffledSequences.append(seq)
    shuffle(shuffledSequences)
    for seq in shuffledSequences:
        e, p = self._calcDerivs(seq)
        errors += e
        ponderation += p
        if not self.batchlearning:
            gradient = self.module.derivs - \
                self.weightdecay * self.module.params
            new = self.descent(gradient, errors)
            if new is not None:
                self.module.params[:] = new
            self.module.resetDerivatives()
    if self.verbose:
        print("Total error: {z: .12g}".format(z=errors / ponderation))
    if self.batchlearning:
        self.module._setParameters(self.descent(self.module.derivs))
    self.epoch += 1
    self.totalepochs += 1
    return errors / ponderation
Train the associated module for one epoch.
train
python
pybrain/pybrain
pybrain/supervised/trainers/backprop.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/backprop.py
BSD-3-Clause
def _calcDerivs(self, seq):
    """Calculate error function and backpropagate output errors to yield
    the gradient."""
    self.module.reset()
    for sample in seq:
        self.module.activate(sample[0])
    error = 0
    ponderation = 0.
    for offset, sample in reversed(list(enumerate(seq))):
        # need to make a distinction here between datasets containing
        # importance, and others
        target = sample[1]
        outerr = target - self.module.outputbuffer[offset]
        if len(sample) > 2:
            importance = sample[2]
            error += 0.5 * dot(importance, outerr ** 2)
            ponderation += sum(importance)
            self.module.backActivate(outerr * importance)
        else:
            error += 0.5 * sum(outerr ** 2)
            ponderation += len(target)
            # FIXME: the next line keeps arac from producing NaNs. I don't
            # know why that is, but somehow the __str__ method of the
            # ndarray class fixes something,
            str(outerr)
            self.module.backActivate(outerr)
    return error, ponderation
Calculate error function and backpropagate output errors to yield the gradient.
_calcDerivs
python
pybrain/pybrain
pybrain/supervised/trainers/backprop.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/backprop.py
BSD-3-Clause
def _checkGradient(self, dataset=None, silent=False):
    """Numeric check of the computed gradient for debugging purposes."""
    if dataset:
        self.setData(dataset)
    res = []
    for seq in self.ds._provideSequences():
        self.module.resetDerivatives()
        self._calcDerivs(seq)
        e = 1e-6
        analyticalDerivs = self.module.derivs.copy()
        numericalDerivs = []
        for p in range(self.module.paramdim):
            storedoldval = self.module.params[p]
            self.module.params[p] += e
            righterror, dummy = self._calcDerivs(seq)
            self.module.params[p] -= 2 * e
            lefterror, dummy = self._calcDerivs(seq)
            approxderiv = (righterror - lefterror) / (2 * e)
            self.module.params[p] = storedoldval
            numericalDerivs.append(approxderiv)
        r = list(zip(analyticalDerivs, numericalDerivs))
        res.append(r)
        if not silent:
            print(r)
    return res
Numeric check of the computed gradient for debugging purposes.
_checkGradient
python
pybrain/pybrain
pybrain/supervised/trainers/backprop.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/backprop.py
BSD-3-Clause
def testOnData(self, dataset=None, verbose=False):
    """Compute the MSE of the module performance on the given dataset.

    If no dataset is supplied, the one passed upon Trainer initialization
    is used."""
    if dataset is None:
        dataset = self.ds
    dataset.reset()
    if verbose:
        print('\nTesting on data:')
    errors = []
    importances = []
    ponderatedErrors = []
    for seq in dataset._provideSequences():
        self.module.reset()
        e, i = dataset._evaluateSequence(self.module.activate, seq,
                                         verbose)
        importances.append(i)
        errors.append(e)
        ponderatedErrors.append(e / i)
    if verbose:
        print(('All errors:', ponderatedErrors))
    assert sum(importances) > 0
    avgErr = sum(errors) / sum(importances)
    if verbose:
        print(('Average error:', avgErr))
        print(('Max error:', max(ponderatedErrors), 'Median error:',
               sorted(ponderatedErrors)[len(errors) // 2]))
    return avgErr
Compute the MSE of the module performance on the given dataset. If no dataset is supplied, the one passed upon Trainer initialization is used.
testOnData
python
pybrain/pybrain
pybrain/supervised/trainers/backprop.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/backprop.py
BSD-3-Clause
def testOnClassData(self, dataset=None, verbose=False,
                    return_targets=False):
    """Return winner-takes-all classification output on a given dataset.

    If no dataset is given, the dataset passed during Trainer
    initialization is used. If return_targets is set, also return
    corresponding target classes.
    """
    if dataset is None:
        dataset = self.ds
    dataset.reset()
    out = []
    targ = []
    for seq in dataset._provideSequences():
        self.module.reset()
        for input, target in seq:
            res = self.module.activate(input)
            out.append(argmax(res))
            targ.append(argmax(target))
    if return_targets:
        return out, targ
    else:
        return out
Return winner-takes-all classification output on a given dataset. If no dataset is given, the dataset passed during Trainer initialization is used. If return_targets is set, also return corresponding target classes.
testOnClassData
python
pybrain/pybrain
pybrain/supervised/trainers/backprop.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/backprop.py
BSD-3-Clause
def trainUntilConvergence(self, dataset=None, maxEpochs=None, verbose=None,
                          continueEpochs=10, validationProportion=0.25,
                          trainingData=None, validationData=None,
                          convergence_threshold=10):
    """Train the module on the dataset until it converges.

    Return the module with the parameters that gave the minimal
    validation error.

    If no dataset is given, the dataset passed during Trainer
    initialization is used. validationProportion is the ratio of the
    dataset that is used for the validation dataset. If training and
    validation data are given explicitly, validationProportion is
    ignored.

    If maxEpochs is given, at most that many epochs are trained. Each
    time validation error hits a minimum, try for continueEpochs epochs
    to find a better one."""
    epochs = 0
    if dataset is None:
        dataset = self.ds
    if verbose is None:
        verbose = self.verbose
    if trainingData is None or validationData is None:
        # Split the dataset randomly: validationProportion of the samples
        # for validation.
        trainingData, validationData = (
            dataset.splitWithProportion(1 - validationProportion))
    if not (len(trainingData) > 0 and len(validationData)):
        raise ValueError("Provided dataset too small to be split into "
                         "training and validation sets with proportion "
                         + str(validationProportion))
    self.ds = trainingData
    bestweights = self.module.params.copy()
    bestverr = self.testOnData(validationData)
    bestepoch = 0
    self.trainingErrors = []
    self.validationErrors = [bestverr]
    while True:
        trainingError = self.train()
        validationError = self.testOnData(validationData)
        if isnan(trainingError) or isnan(validationError):
            raise Exception("Training produced NaN results")
        self.trainingErrors.append(trainingError)
        self.validationErrors.append(validationError)
        if epochs == 0 or self.validationErrors[-1] < bestverr:
            # one update is always done
            bestverr = self.validationErrors[-1]
            bestweights = self.module.params.copy()
            bestepoch = epochs
        if maxEpochs is not None and epochs >= maxEpochs:
            self.module.params[:] = bestweights
            break
        epochs += 1
        if len(self.validationErrors) >= continueEpochs * 2:
            # have the validation errors started going up again?
            # compare the average of the last few to the previous few
            old = self.validationErrors[-continueEpochs * 2:-continueEpochs]
            new = self.validationErrors[-continueEpochs:]
            if min(new) > max(old):
                self.module.params[:] = bestweights
                break
            lastnew = round(new[-1], convergence_threshold)
            if sum(round(y, convergence_threshold) - lastnew
                   for y in new) == 0:
                self.module.params[:] = bestweights
                break
    #self.trainingErrors.append(self.testOnData(trainingData))
    self.ds = dataset
    if verbose:
        print(('train-errors:', fListToString(self.trainingErrors, 6)))
        print(('valid-errors:', fListToString(self.validationErrors, 6)))
    return (self.trainingErrors[:bestepoch],
            self.validationErrors[:1 + bestepoch])
Train the module on the dataset until it converges. Return the module with the parameters that gave the minimal validation error. If no dataset is given, the dataset passed during Trainer initialization is used. validationProportion is the ratio of the dataset that is used for the validation dataset. If training and validation data are given explicitly, validationProportion is ignored. If maxEpochs is given, at most that many epochs are trained. Each time validation error hits a minimum, try for continueEpochs epochs to find a better one.
trainUntilConvergence
python
pybrain/pybrain
pybrain/supervised/trainers/backprop.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/backprop.py
BSD-3-Clause
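A minimal sketch of this early-stopping entry point, using the same XOR-style setup as in the earlier BackpropTrainer example; the training/validation split happens internally.

from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import BackpropTrainer

net = buildNetwork(2, 3, 1)
ds = SupervisedDataSet(2, 1)
for inp, target in [((0, 0), (0,)), ((0, 1), (1,)),
                    ((1, 0), (1,)), ((1, 1), (0,))]:
    ds.addSample(inp, target)

trainer = BackpropTrainer(net, ds)
trainErrs, valErrs = trainer.trainUntilConvergence(
    maxEpochs=1000, continueEpochs=10, validationProportion=0.25)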
def __init__(self, evolino_network, dataset, **kwargs):
    """ :key subPopulationSize: Size of the subpopulations.
        :key nCombinations: Number of times each chromosome is built into
            an individual. default=4
        :key nParents: Number of individuals left in a subpopulation
            after selection.
        :key initialWeightRange: Range of the weights of the RNN after
            initialization. default=(-0.1,0.1)
        :key weightInitializer: Initializer object for the weights of the
            RNN. default=Randomization(...)
        :key mutationAlpha: The mutation's intensity. default=0.01
        :key mutationVariate: The variate used for mutation.
            default=CauchyVariate(...)
        :key wtRatio: The quotient washout-time/training-time. Needed to
            split the sequences into washout phase and training phase.
        :key nBurstMutationEpochs: Number of epochs without increase of
            fitness in a row, before burst mutation is applied.
            default=Infinity
        :key backprojectionFactor: Weight of the backprojection. Usually
            supplied through evolino_network.
        :key selection: Selection object for Evolino
        :key reproduction: Reproduction object for Evolino
        :key burstMutation: BurstMutation object for Evolino
        :key evaluation: Evaluation object for Evolino
        :key verbosity: verbosity level
    """
    Trainer.__init__(self, evolino_network)
    self.network = evolino_network
    self.setData(dataset)
    ap = KWArgsProcessor(self, kwargs)

    # misc
    ap.add('verbosity', default=0)

    # population
    ap.add('subPopulationSize', private=True, default=8)
    ap.add('nCombinations', private=True, default=4)
    ap.add('nParents', private=True, default=None)
    ap.add('initialWeightRange', private=True, default=(-0.1, 0.1))
    ap.add('weightInitializer', private=True,
           default=Randomization(self._initialWeightRange[0],
                                 self._initialWeightRange[1]))

    # mutation
    ap.add('mutationAlpha', private=True, default=0.01)
    ap.add('mutationVariate', private=True,
           default=CauchyVariate(0, self._mutationAlpha))

    # evaluation
    ap.add('wtRatio', private=True, default=(1, 3))

    # burst mutation
    ap.add('nBurstMutationEpochs', default=Infinity)

    # network
    ap.add('backprojectionFactor', private=True,
           default=float(evolino_network.backprojectionFactor))
    evolino_network.backprojectionFactor = self._backprojectionFactor

    # aggregated objects
    ap.add('selection', default=EvolinoSelection())
    ap.add('reproduction',
           default=EvolinoReproduction(mutationVariate=self.mutationVariate))
    ap.add('burstMutation', default=EvolinoBurstMutation())
    ap.add('evaluation',
           default=EvolinoEvaluation(evolino_network, self.ds, **kwargs))

    self.selection.nParents = self.nParents

    self._population = EvolinoPopulation(
        EvolinoSubIndividual(evolino_network.getGenome()),
        self._subPopulationSize,
        self._nCombinations,
        self._weightInitializer)

    filters = []
    filters.append(self.evaluation)
    filters.append(self.selection)
    filters.append(self.reproduction)
    self._filters = filters

    self.totalepochs = 0
    self._max_fitness = self.evaluation.max_fitness
    self._max_fitness_epoch = self.totalepochs
:key subPopulationSize: Size of the subpopulations. :key nCombinations: Number of times each chromosome is built into an individual. default=4 :key nParents: Number of individuals left in a subpopulation after selection. :key initialWeightRange: Range of the weights of the RNN after initialization. default=(-0.1,0.1) :key weightInitializer: Initializer object for the weights of the RNN. default=Randomization(...) :key mutationAlpha: The mutation's intensity. default=0.01 :key mutationVariate: The variate used for mutation. default=CauchyVariate(...) :key wtRatio: The quotient washout-time/training-time. Needed to split the sequences into washout phase and training phase. :key nBurstMutationEpochs: Number of epochs without increase of fitness in a row, before burst mutation is applied. default=Infinity :key backprojectionFactor: Weight of the backprojection. Usually supplied through evolino_network. :key selection: Selection object for Evolino :key reproduction: Reproduction object for Evolino :key burstMutation: BurstMutation object for Evolino :key evaluation: Evaluation object for Evolino :key verbosity: verbosity level
__init__
python
pybrain/pybrain
pybrain/supervised/trainers/evolino.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/evolino.py
BSD-3-Clause
def gaussian(x, mean, stddev):
    """ Return the value of an isotropic (homogeneous) Gaussian at the
    given vector point. x: vector, mean: vector, stddev: scalar """
    tmp = -0.5 * sum(((x - mean) / stddev) ** 2)
    return np.exp(tmp) / (np.power(2. * np.pi, 0.5 * len(x)) * stddev)
Return the value of an isotropic (homogeneous) Gaussian at the given vector point. x: vector, mean: vector, stddev: scalar
gaussian
python
pybrain/pybrain
pybrain/supervised/trainers/mixturedensity.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/mixturedensity.py
BSD-3-Clause
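A quick numeric check of the formula above: in one dimension it reduces to the familiar normal density. The function body is copied here so the snippet runs stand-alone. Note that the denominator uses stddev to the first power, not stddev**len(x), so for len(x) > 1 it matches the textbook isotropic normal only when stddev is 1.

import numpy as np

def gaussian(x, mean, stddev):
    tmp = -0.5 * sum(((x - mean) / stddev) ** 2)
    return np.exp(tmp) / (np.power(2. * np.pi, 0.5 * len(x)) * stddev)

x = np.array([0.0])
print(gaussian(x, np.array([0.0]), 1.0))  # ~0.39894, i.e. 1/sqrt(2*pi)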
def _calcDerivs(self, seq):
    """ calculate derivatives assuming we have a Network with a
    MixtureDensityLayer as output """
    assert isinstance(self.module.modulesSorted[-1], MixtureDensityLayer)
    self.module.reset()
    for time, sample in enumerate(seq):
        input = sample[0]
        self.module.inputbuffer[time] = input
        self.module.forward()
    error = 0
    nDims = self.module.modulesSorted[-1].nDims
    nGauss = self.module.modulesSorted[-1].nGaussians
    for time, sample in reversed(list(enumerate(seq))):
        # Should these three lines be inside this 'for' block
        # or outside? I moved them inside - Jack
        gamma = []
        means = []
        stddevs = []
        dummy, target = sample
        par = self.module.outputbuffer[time]  # parameters for mixture
        # calculate error contributions from all Gaussians in the mixture
        for k in range(nGauss):
            coeff = par[k]
            stddevs.append(par[k + nGauss])
            idxm = 2 * nGauss + k * nDims
            means.append(par[idxm:idxm + nDims])
            gamma.append(coeff * gaussian(target, means[-1], stddevs[-1]))
        # calculate error for this pattern, and posterior for target
        sumg = sum(gamma)
        error -= np.log(sumg)
        gamma = np.array(gamma) / sumg
        invvariance = 1. / par[nGauss:2 * nGauss] ** 2
        invstddev = 1. / np.array(stddevs)
        # calculate gradient wrt. mixture coefficients
        grad_c = par[0:nGauss] - gamma
        # calculate gradients wrt. means and standard deviations
        grad_m = []
        grad_s = []
        for k in range(nGauss):
            delta = means[k] - target
            grad_m.append(gamma[k] * delta * invvariance[k])
            grad_s.append(-gamma[k] * (np.dot(delta, delta)
                                       * invvariance[k] * invstddev[k]
                                       - invstddev[k]))
        self.module.outputerror[time] = -np.r_[
            grad_c, grad_s, np.array(grad_m).flatten()]
        self.module.backward()
    return error, 1.0
calculate derivatives assuming we have a Network with a MixtureDensityLayer as output
_calcDerivs
python
pybrain/pybrain
pybrain/supervised/trainers/mixturedensity.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/mixturedensity.py
BSD-3-Clause
def __init__(self, module, etaminus=0.5, etaplus=1.2, deltamin=1.0e-6,
             deltamax=5.0, delta0=0.1, **kwargs):
    """ Set up training algorithm parameters, and objects associated with
    the trainer.

    :arg module: the module whose parameters should be trained.
    :key etaminus: factor by which step width is decreased when
        overstepping (0.5)
    :key etaplus: factor by which step width is increased when following
        gradient (1.2)
    :key delta: step width for each weight
    :key deltamin: minimum step width (1e-6)
    :key deltamax: maximum step width (5.0)
    :key delta0: initial step width (0.1)
    """
    BackpropTrainer.__init__(self, module, **kwargs)
    self.epoch = 0
    # set descender to RPROP mode and update parameters
    self.descent.rprop = True
    self.descent.etaplus = etaplus
    self.descent.etaminus = etaminus
    self.descent.deltamin = deltamin
    self.descent.deltamax = deltamax
    self.descent.deltanull = delta0
    self.descent.init(module.params)  # reinitialize, since mode changed
Set up training algorithm parameters, and objects associated with the trainer. :arg module: the module whose parameters should be trained. :key etaminus: factor by which step width is decreased when overstepping (0.5) :key etaplus: factor by which step width is increased when following gradient (1.2) :key delta: step width for each weight :key deltamin: minimum step width (1e-6) :key deltamax: maximum step width (5.0) :key delta0: initial step width (0.1)
__init__
python
pybrain/pybrain
pybrain/supervised/trainers/rprop.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/rprop.py
BSD-3-Clause
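A hedged drop-in sketch: RPropMinusTrainer accepts the same dataset handling as BackpropTrainer (via **kwargs) but adapts a per-weight step width instead of using a global learning rate.

from pybrain.tools.shortcuts import buildNetwork
from pybrain.datasets import SupervisedDataSet
from pybrain.supervised.trainers import RPropMinusTrainer

net = buildNetwork(2, 4, 1)
ds = SupervisedDataSet(2, 1)
ds.addSample((0, 0), (0,))
ds.addSample((1, 1), (1,))

trainer = RPropMinusTrainer(net, dataset=ds, etaminus=0.5, etaplus=1.2)
err = trainer.train()   # one epoch; no learning rate to tune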
def train(self):
    """ Train the network for one epoch """
    self.module.resetDerivatives()
    errors = 0
    ponderation = 0
    for seq in self.ds._provideSequences():
        e, p = self._calcDerivs(seq)
        errors += e
        ponderation += p
    if self.verbose:
        print(("epoch {epoch:6d} total error {error:12.5g} "
               "avg weight {weight:12.5g}".format(
                   epoch=self.epoch,
                   error=errors / ponderation,
                   weight=sqrt((self.module.params ** 2).mean()))))
    self.module._setParameters(
        self.descent(self.module.derivs -
                     self.weightdecay * self.module.params))
    self.epoch += 1
    self.totalepochs += 1
    return errors / ponderation
Train the network for one epoch
train
python
pybrain/pybrain
pybrain/supervised/trainers/rprop.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/rprop.py
BSD-3-Clause
def __init__(self, svmunit, dataset, modelfile=None, plot=False):
    """ Initialize data and unit to be trained, and load the model, if
    provided.

    The passed `svmunit` has to be an object of class :class:`SVMUnit`
    that is going to be trained on the :class:`ClassificationDataSet`
    object dataset. Compared to FNN training we do not use a test data
    set, instead 5-fold cross-validation is performed if needed.

    If `modelfile` is provided, this model is loaded instead of training.
    If `plot` is True, a grid search is performed and the resulting
    pattern is plotted."""
    self.svm = svmunit
    self.ds = dataset
    self.svmtarget = dataset['target'].flatten()
    self.plot = plot
    self.searchlog = 'gridsearch_results.txt'
    # set default parameters for training
    self.params = {'kernel_type': RBF}
    if modelfile is not None:
        self.load(modelfile)
Initialize data and unit to be trained, and load the model, if provided. The passed `svmunit` has to be an object of class :class:`SVMUnit` that is going to be trained on the :class:`ClassificationDataSet` object dataset. Compared to FNN training we do not use a test data set, instead 5-fold cross-validation is performed if needed. If `modelfile` is provided, this model is loaded instead of training. If `plot` is True, a grid search is performed and the resulting pattern is plotted.
__init__
python
pybrain/pybrain
pybrain/supervised/trainers/svmtrainer.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/svmtrainer.py
BSD-3-Clause
def train(self, search=False, **kwargs):
    """ Train the SVM on the dataset. For RBF kernels (the default), an
    optional meta-parameter search can be performed.

    :key search: optional name of grid search class to use for RBF
        kernels: 'GridSearch' or 'GridSearchDOE'
    :key log2g: base 2 log of the RBF width parameter
    :key log2C: base 2 log of the slack parameter
    :key searchlog: filename into which to dump the search log
    :key others: ...are passed through to the grid search and/or libsvm
    """
    self.setParams(**kwargs)
    problem = svm_problem(self.ds['target'].flatten(),
                          self.ds['input'].tolist())
    if search:
        # this is a bit of a hack...
        model = eval(search + "(problem, self.svmtarget, cmin=[0,-7],"
                     "cmax=[25,1], cstep=[0.5,0.2],plotflag=self.plot,"
                     "searchlog=self.searchlog,**self.params)")
    else:
        param = svm_parameter(**self.params)
        model = svm_model(problem, param)
        logging.info("Training completed with parameters:")
        logging.info(repr(param))
    self.svm.setModel(model)
Train the SVM on the dataset. For RBF kernels (the default), an optional meta-parameter search can be performed. :key search: optional name of grid search class to use for RBF kernels: 'GridSearch' or 'GridSearchDOE' :key log2g: base 2 log of the RBF width parameter :key log2C: base 2 log of the slack parameter :key searchlog: filename into which to dump the search log :key others: ...are passed through to the grid search and/or libsvm
train
python
pybrain/pybrain
pybrain/supervised/trainers/svmtrainer.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/svmtrainer.py
BSD-3-Clause
def setParams(self, **kwargs):
    """ Set parameters for SVM training. Apart from the ones below, you
    can use all parameters defined for the LIBSVM svm_model class, see
    their documentation.

    :key searchlog: Save a list of coordinates and the achieved CV
        accuracy to this file."""
    if 'weight' in kwargs:
        self.params['nr_weight'] = len(kwargs['weight'])
    if 'log2C' in kwargs:
        self.params['C'] = 2 ** kwargs['log2C']
        kwargs.pop('log2C')
    if 'log2g' in kwargs:
        self.params['gamma'] = 2 ** kwargs['log2g']
        kwargs.pop('log2g')
    if 'searchlog' in kwargs:
        self.searchlog = kwargs['searchlog']
        kwargs.pop('searchlog')
    self.params.update(kwargs)
Set parameters for SVM training. Apart from the ones below, you can use all parameters defined for the LIBSVM svm_model class, see their documentation. :key searchlog: Save a list of coordinates and the achieved CV accuracy to this file.
setParams
python
pybrain/pybrain
pybrain/supervised/trainers/svmtrainer.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/svmtrainer.py
BSD-3-Clause
def __init__(self, problem, targets, cmin, cmax, cstep=None, crossval=5,
             plotflag=False, maxdepth=8,
             searchlog='gridsearch_results.txt', **params):
    """ Set up (log) grid search over the two RBF kernel parameters C and
    gamma.

    :arg problem: the LIBSVM svm_problem to be optimized, i.e. the input
        and target data
    :arg targets: unfortunately, the targets used in the problem
        definition have to be given again here
    :arg cmin: lower left corner of the log2C/log2gamma window to search
    :arg cmax: upper right corner of the log2C/log2gamma window to search
    :key cstep: step width for log2C and log2gamma (ignored for DOE
        search)
    :key crossval: split dataset into this many parts for
        cross-validation
    :key plotflag: if True, plot the error surface contour (regular) or
        search pattern (DOE)
    :key maxdepth: maximum window bisection depth (DOE only)
    :key searchlog: Save a list of coordinates and the achieved CV
        accuracy to this file
    :key others: ...are passed through to the cross_validation method of
        LIBSVM
    """
    self.nPars = len(cmin)
    self.usermin = cmin
    self.usermax = cmax
    self.userstep = cstep
    self.crossval = crossval
    self.plotflag = plotflag
    self.maxdepth = maxdepth  # number of zoom-in steps (DOE search only!)
    # set default parameters for training
    self.params = params

    if self.plotflag:
        import pylab as p
        p.ion()
        p.figure(figsize=[12, 8])

    assert isinstance(problem, svm_problem)
    self.problem = problem
    self.targets = targets
    self.resfile = open(searchlog, 'w')

    # do the parameter searching
    param = self.search()

    if self.plotflag:
        p.ioff()
        p.show()

    self.resfile.close()
    svm_model.__init__(self, problem, param)
Set up (log) grid search over the two RBF kernel parameters C and gamma. :arg problem: the LIBSVM svm_problem to be optimized, i.e. the input and target data :arg targets: unfortunately, the targets used in the problem definition have to be given again here :arg cmin: lower left corner of the log2C/log2gamma window to search :arg cmax: upper right corner of the log2C/log2gamma window to search :key cstep: step width for log2C and log2gamma (ignored for DOE search) :key crossval: split dataset into this many parts for cross-validation :key plotflag: if True, plot the error surface contour (regular) or search pattern (DOE) :key maxdepth: maximum window bisection depth (DOE only) :key searchlog: Save a list of coordinates and the achieved CV accuracy to this file :key others: ...are passed through to the cross_validation method of LIBSVM
__init__
python
pybrain/pybrain
pybrain/supervised/trainers/svmtrainer.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/svmtrainer.py
BSD-3-Clause
def search(self):
    """ iterate successive parameter grid refinement and evaluation;
    adapted from LIBSVM grid search tool """
    jobs = self.calculate_jobs()
    scores = []
    for line in jobs:
        for (c, g) in line:
            # run cross-validation for this point
            self.setParams(C=2 ** c, gamma=2 ** g)
            param = svm_parameter(**self.params)
            cvresult = array(cross_validation(self.problem, param,
                                              self.crossval))
            corr, = where(cvresult == self.targets)
            res = (c, g, float(corr.size) / self.targets.size)
            scores.append(res)
            self._save_points(res)
        self._redraw(scores)
    scores = array(scores)
    # pick the (log2C, log2gamma) pair with the highest CV accuracy
    best = scores[scores[:, 2].argmax(), :2]
    self.setParams(C=2 ** best[0], gamma=2 ** best[1])
    logging.info("best log2C=%12.7g, log2g=%11.7g " % (best[0], best[1]))
    param = svm_parameter(**self.params)
    return param
iterate successive parameter grid refinement and evaluation; adapted from LIBSVM grid search tool
search
python
pybrain/pybrain
pybrain/supervised/trainers/svmtrainer.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/svmtrainer.py
BSD-3-Clause
def _permute_sequence(self, seq):
    """ helper function to create a nice sequence of refined regular
    grids; from LIBSVM grid search tool """
    n = len(seq)
    if n <= 1:
        return seq
    mid = int(n / 2)
    left = self._permute_sequence(seq[:mid])
    right = self._permute_sequence(seq[mid + 1:])
    ret = [seq[mid]]
    while left or right:
        if left:
            ret.append(left.pop(0))
        if right:
            ret.append(right.pop(0))
    return ret
helper function to create a nice sequence of refined regular grids; from LIBSVM grid search tool
_permute_sequence
python
pybrain/pybrain
pybrain/supervised/trainers/svmtrainer.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/svmtrainer.py
BSD-3-Clause
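To see what "nice sequence" means here: the method yields a coarse-to-fine ordering (midpoint first, then midpoints of the halves), so early grid evaluations already cover the whole range. A stand-alone copy of the logic, assuming a plain list input:

def permute_sequence(seq):
    n = len(seq)
    if n <= 1:
        return seq
    mid = int(n / 2)
    left = permute_sequence(seq[:mid])
    right = permute_sequence(seq[mid + 1:])
    ret = [seq[mid]]        # midpoint first
    while left or right:    # then interleave the refined halves
        if left:
            ret.append(left.pop(0))
        if right:
            ret.append(right.pop(0))
    return ret

print(permute_sequence([0, 1, 2, 3, 4]))  # [2, 1, 4, 0, 3]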
def _range_f(self, begin, end, step):
    """ like range, but works on non-integer too; from LIBSVM grid
    search tool """
    seq = []
    while True:
        if step > 0 and begin > end:
            break
        if step < 0 and begin < end:
            break
        seq.append(begin)
        begin = begin + step
    return seq
like range, but works on non-integer too; from LIBSVM grid search tool
_range_f
python
pybrain/pybrain
pybrain/supervised/trainers/svmtrainer.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/svmtrainer.py
BSD-3-Clause
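Unlike the built-in range, this helper is float-friendly and includes the end point when the step lands on it exactly; a stand-alone copy:

def range_f(begin, end, step):
    seq = []
    while True:
        if step > 0 and begin > end:
            break
        if step < 0 and begin < end:
            break
        seq.append(begin)
        begin = begin + step
    return seq

print(range_f(0.0, 1.0, 0.25))  # [0.0, 0.25, 0.5, 0.75, 1.0]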
def calculate_jobs(self):
    """ generate a coarse-to-fine sequence of (log2C, log2gamma) grid
    points to evaluate; from LIBSVM grid search tool """
    c_seq = self._permute_sequence(
        self._range_f(self.usermin[0], self.usermax[0], self.userstep[0]))
    g_seq = self._permute_sequence(
        self._range_f(self.usermin[1], self.usermax[1], self.userstep[1]))
    nr_c = float(len(c_seq))
    nr_g = float(len(g_seq))
    global total_points
    total_points = (nr_g + 1) * (nr_g)
    i = 0
    j = 0
    jobs = []
    while i < nr_c or j < nr_g:
        if i / nr_c < j / nr_g:
            # increase C resolution
            line = []
            for k in range(0, j):
                line.append((c_seq[i], g_seq[k]))
            i = i + 1
            jobs.append(line)
        else:
            # increase g resolution
            line = []
            for k in range(0, i):
                line.append((c_seq[k], g_seq[j]))
            j = j + 1
            jobs.append(line)
    return jobs
generate a coarse-to-fine sequence of (log2C, log2gamma) grid points to evaluate; from LIBSVM grid search tool
calculate_jobs
python
pybrain/pybrain
pybrain/supervised/trainers/svmtrainer.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/svmtrainer.py
BSD-3-Clause
def _save_points(self, res):
    """ save the list of points and corresponding scores into a file """
    self.resfile.write("%g, %g, %g\n" % res)
    logging.info("log2C=%g, log2g=%g, res=%g" % res)
    self.resfile.flush()
save the list of points and corresponding scores into a file
_save_points
python
pybrain/pybrain
pybrain/supervised/trainers/svmtrainer.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/svmtrainer.py
BSD-3-Clause
def search(self, cmin=None, cmax=None):
    """ iterate parameter grid refinement and evaluation recursively """
    if self.depth > self.maxdepth:
        # maximum search depth reached - finish up
        best = self.allPts[self.allScores.argmax(), :]
        logging.info("best log2C=%12.7g, log2g=%11.7g " %
                     (best[0], best[1]))
        self.setParams(C=2 ** best[0], gamma=2 ** best[1])
        param = svm_parameter(**self.params)
        logging.info("Grid search completed! Final parameters:")
        logging.info(repr(param))
        return param

    # generate DOE gridpoints using current range
    if cmin is None:
        # use initial values, if none given
        cmin = array(self.usermin)
        cmax = array(self.usermax)
    points = self.refineGrid(cmin, cmax)

    # calculate scores for all grid points using n-fold cross-validation
    scores = []
    isnew = array([True] * self.nPts)
    for i in range(self.nPts):
        idx = self._findIndex(points[i, :])
        if idx >= 0:
            # point already exists
            isnew[i] = False
            scores.append(self.allScores[idx])
        else:
            # new point, run cross-validation
            self.setParams(C=2 ** points[i, 0], gamma=2 ** points[i, 1])
            param = svm_parameter(**self.params)
            cvresult = array(cross_validation(self.problem, param,
                                              self.crossval))
            # save cross validation result as "% correct"
            corr, = where(cvresult == self.targets)
            corr = float(corr.size) / self.targets.size
            scores.append(corr)
            self._save_points((points[i, 0], points[i, 1], corr))
    scores = array(scores)

    # find max and new ranges by halving the old ones, whereby
    # entire search region must lie within original search range
    newctr = points[scores.argmax(), :].copy()
    newdiff = (cmax - cmin) / 4.0
    for i in range(self.nPars):
        newctr[i] = min([max([newctr[i], self.usermin[i] + newdiff[i]]),
                         self.usermax[i] - newdiff[i]])
    cmin = newctr - newdiff
    cmax = newctr + newdiff
    logging.info("depth:\t%3d\tcrange:\t%g\tscore:\t%g" %
                 (self.depth, cmax[0] - cmin[0], scores.max()))

    # append points and scores to the full list
    if self.depth == 0:
        self.allPts = points[isnew, :].copy()
        self.allScores = scores[isnew].copy()
    else:
        self.allPts = append(self.allPts, points[isnew, :], axis=0)
        self.allScores = append(self.allScores, scores[isnew], axis=0)

    if self.plotflag:
        import pylab as p
        if self.depth == 0:
            self.oPlot = p.plot(self.allPts[:, 0], self.allPts[:, 1],
                                'o')[0]
        # insert new data into plot
        self.oPlot.set_data(self.allPts[:, 0], self.allPts[:, 1])
        p.draw()

    # recursively call ourselves
    self.depth += 1
    return self.search(cmin, cmax)
iterate parameter grid refinement and evaluation recursively
search
python
pybrain/pybrain
pybrain/supervised/trainers/svmtrainer.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/svmtrainer.py
BSD-3-Clause
def refineGrid(self, cmin, cmax):
    """ given grid boundaries, generate the corresponding DOE pattern
    from template"""
    diff = array((cmax - cmin).tolist() * self.nPts).reshape(self.nPts,
                                                             self.nPars)
    return (self.doepat * diff +
            array(cmin.tolist() * self.nPts).reshape(self.nPts,
                                                     self.nPars))
given grid boundaries, generate the corresponding DOE pattern from template
refineGrid
python
pybrain/pybrain
pybrain/supervised/trainers/svmtrainer.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/svmtrainer.py
BSD-3-Clause
def _findIndex(self, point):
    """ determines whether the given point already exists in the list of
    all calculated points. Raises an exception if more than one matching
    point is found; returns -1 if no point is found """
    if self.depth == 0:
        return -1
    check = self.allPts[:, 0] == point[0]
    for i in range(1, point.size):
        check = check & (self.allPts[:, i] == point[i])
    idx, = where(check)
    if idx.size == 0:
        return -1
    elif idx.size > 1:
        logging.error("Something went wrong - found more than one "
                      "matching point!")
        logging.error(str(point))
        logging.error(str(self.allPts))
        # a bare `raise` would fail outside an except block, so raise an
        # explicit exception
        raise ValueError("found more than one matching point")
    else:
        return idx[0]
determines whether the given point already exists in the list of all calculated points. Raises an exception if more than one matching point is found; returns -1 if no point is found
_findIndex
python
pybrain/pybrain
pybrain/supervised/trainers/svmtrainer.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/svmtrainer.py
BSD-3-Clause
def setData(self, dataset):
    """Associate the given dataset with the trainer."""
    self.ds = dataset
    if dataset:
        assert dataset.indim == self.module.indim
        assert dataset.outdim == self.module.outdim
Associate the given dataset with the trainer.
setData
python
pybrain/pybrain
pybrain/supervised/trainers/trainer.py
https://github.com/pybrain/pybrain/blob/master/pybrain/supervised/trainers/trainer.py
BSD-3-Clause
def epsilonCheck(x, epsilon=1e-6):
    """Checks that x is in (-epsilon, epsilon)."""
    epsilon = abs(epsilon)
    return -epsilon < x < epsilon
Checks that x is in (-epsilon, epsilon).
epsilonCheck
python
pybrain/pybrain
pybrain/tests/helpers.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/helpers.py
BSD-3-Clause
def buildAppropriateDataset(module):
    """ Build a sequential dataset with 2 sequences of 3 samples, with
    random input and target values, but the appropriate dimensions to be
    used on the provided module. """
    if module.sequential:
        d = SequentialDataSet(module.indim, module.outdim)
        for dummy in range(2):
            d.newSequence()
            for dummy in range(3):
                d.addSample(randn(module.indim), randn(module.outdim))
    else:
        d = SupervisedDataSet(module.indim, module.outdim)
        for dummy in range(3):
            d.addSample(randn(module.indim), randn(module.outdim))
    return d
Build a sequential dataset with 2 sequences of 3 samples, with random input and target values, but the appropriate dimensions to be used on the provided module.
buildAppropriateDataset
python
pybrain/pybrain
pybrain/tests/helpers.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/helpers.py
BSD-3-Clause
def gradientCheck(module, tolerance=0.0001, dataset=None):
    """ check the gradient of a module with a randomly generated dataset,
    (and, in the case of a network, determine which modules contain
    incorrect derivatives). """
    if module.paramdim == 0:
        print('Module has no parameters')
        return True
    if dataset:
        d = dataset
    else:
        d = buildAppropriateDataset(module)
    b = BackpropTrainer(module)
    res = b._checkGradient(d, True)
    # compute average precision on every parameter
    precision = zeros(module.paramdim)
    for seqres in res:
        for i, p in enumerate(seqres):
            if p[0] == 0 and p[1] == 0:
                precision[i] = 0
            else:
                precision[i] += abs((p[0] + p[1]) / (p[0] - p[1]))
    precision /= len(res)
    if max(precision) < tolerance:
        print('Perfect gradient')
        return True
    else:
        print(('Incorrect gradient', precision))
        if isinstance(module, Network):
            index = 0
            for m in module._containerIterator():
                if max(precision[index:index + m.paramdim]) > tolerance:
                    print(('Incorrect module:', m,
                           res[-1][index:index + m.paramdim]))
                index += m.paramdim
        else:
            print(res)
        return False
check the gradient of a module with a randomly generated dataset, (and, in the case of a network, determine which modules contain incorrect derivatives).
gradientCheck
python
pybrain/pybrain
pybrain/tests/helpers.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/helpers.py
BSD-3-Clause
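A hedged sketch of running the check on a small feed-forward net; buildNetwork comes from the standard pybrain shortcuts and is not part of this record. The helper builds a random dataset itself when none is given.

from pybrain.tools.shortcuts import buildNetwork
from pybrain.tests.helpers import gradientCheck

net = buildNetwork(3, 5, 2)
ok = gradientCheck(net)  # prints 'Perfect gradient' and returns True on success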
def xmlInvariance(n, forwardpasses = 1): """ try writing a network to an xml file, reading it, rewrite it, reread it, and compare if the result looks the same (compare string representation, and forward processing of some random inputs) """ # We only use this for file creation. tmpfile = tempfile.NamedTemporaryFile(dir='.') f = tmpfile.name tmpfile.close() NetworkWriter.writeToFile(n, f) tmpnet = NetworkReader.readFrom(f) NetworkWriter.writeToFile(tmpnet, f) endnet = NetworkReader.readFrom(f) # Unlink temporary file. os.unlink(f) netCompare(tmpnet, endnet, forwardpasses, True)
try writing a network to an xml file, reading it back, rewriting and rereading it, and compare whether the results look the same (compare the string representation, and the forward processing of some random inputs)
xmlInvariance
python
pybrain/pybrain
pybrain/tests/helpers.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/helpers.py
BSD-3-Clause
def testInterface(algo):
    """ Tests whether the algorithm properly implements the correct
    blackbox-optimization interface."""
    # without any arguments, initialization has to work
    emptyalgo = algo()
    try:
        # but not learning
        emptyalgo.learn(0)
        return "Failed to throw missing evaluator error?"
    except AssertionError:
        pass
    emptyalgo.setEvaluator(sf, xa1)
    # now it can run
    emptyalgo.learn(0)
    # simple functions don't check for dimension mismatch
    algo(sf, xa1)
    algo(sf, xa100)
    # for these, either an initial point or a dimension parameter is required
    algo(sf, numParameters=2)
    try:
        algo(sf)
        return "Failed to throw unknown dimension error"
    except ValueError:
        pass
    # FitnessEvaluators do not require that
    algo(ife1)
    # parameter containers can be used too
    algo(ife2, pc2)
    return True
Tests whether the algorithm properly implements the correct blackbox-optimization interface.
testInterface
python
pybrain/pybrain
pybrain/tests/optimizationtest.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/optimizationtest.py
BSD-3-Clause
def testContinuousInterface(algo):
    """ Test the specifics of the interface for ContinuousOptimizers """
    if not issubclass(algo, bbo.ContinuousOptimizer):
        return True
    # list starting points are internally converted to arrays
    x = algo(sf, xlist2)
    assert isinstance(x.bestEvaluable, ndarray), 'not converted to array'
    # check for dimension mismatch
    try:
        algo(ife1, xa2)
        return "Failed to throw dimension mismatch error"
    except ValueError:
        pass
    return True
Test the specifics of the interface for ContinuousOptimizers
testContinuousInterface
python
pybrain/pybrain
pybrain/tests/optimizationtest.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/optimizationtest.py
BSD-3-Clause
def testMinMax(algo):
    """ Verify that the algorithm is doing the minimization/maximization consistently. """
    if (issubclass(algo, bbo.TopologyOptimizer)
            or algo == allopts.StochasticHillClimber):
        # TODO
        return True
    xa1[0] = 2
    evalx = sf(xa1)
    amax1 = algo(sf, xa1, minimize=False)
    amax2 = algo(sf, xa1)
    amax2.minimize = False
    amax3 = algo()
    amax3.setEvaluator(sf, xa1)
    amax3.minimize = False
    amax4 = algo()
    amax4.minimize = False
    amax4.setEvaluator(sf, xa1)
    for i, amax in enumerate([amax1, amax2, amax3, amax4]):
        assert amax.minimize is False or amax.mustMinimize, \
            'Max: Attribute not set correctly.' \
            + str(amax.minimize) + str(amax.mustMinimize) + str(i)
        x, xv = amax.learn(1)
        assert sf(x) == xv, 'Evaluation does not fit: ' + str((sf(x), xv))
        assert xv >= evalx, 'Evaluation did not increase: ' + str(xv) + ' (init: ' + str(evalx) + ')'
    xa1[0] = 2
    amin1 = algo(sf, xa1, minimize=True)
    amin2 = algo(sf, xa1)
    amin2.minimize = True
    amin3 = algo()
    amin3.setEvaluator(sf, xa1)
    amin3.minimize = True
    amin4 = algo()
    amin4.minimize = True
    amin4.setEvaluator(sf, xa1)
    for i, amin in enumerate([amin1, amin2, amin3, amin4]):
        assert amin.minimize is True or amin.mustMaximize, \
            'Min: Attribute not set correctly.' \
            + str(amin.minimize) + str(amin.mustMaximize) + str(i)
        x, xv = amin.learn(1)
        assert sf(x) == xv, 'Evaluation does not fit: ' + str((sf(x), xv)) + str(i)
        assert xv <= evalx, 'Evaluation did not decrease: ' + str(xv) + ' (init: ' + str(evalx) + ')' + str(i)
    assert ((amin.minimize is not amax.minimize)
            or not (amin._wasOpposed is amax._wasOpposed)), 'Inconsistent flags.'
    return True
Verify that the algorithm is doing the minimization/maximization consistently.
testMinMax
python
pybrain/pybrain
pybrain/tests/optimizationtest.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/optimizationtest.py
BSD-3-Clause
def testImport(module_name):
    """Tell whether a module can be imported.

    This function has a cache, so modules are only tested once on
    importability.
    """
    try:
        return testImport.cache[module_name]
    except KeyError:
        try:
            __import__(module_name)
        except ImportError:
            result = False
        else:
            result = True
        testImport.cache[module_name] = result
        return result
Tell whether a module can be imported. This function has a cache, so modules are only tested once on importability.
testImport
python
pybrain/pybrain
pybrain/tests/runtests.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/runtests.py
BSD-3-Clause
def missingDependencies(target_module):
    """Returns a list of dependencies of the module that the current
    interpreter cannot import.

    This does not inspect the code, but instead checks for a list of strings
    called _dependencies in the target_module. This list should contain
    module names that the module depends on."""
    dependencies = getattr(target_module, '_dependencies', [])
    return [i for i in dependencies if not testImport(i)]
Returns a list of dependencies of the module that the current interpreter cannot import. This does not inspect the code, but instead checks for a list of strings called _dependencies in the target_module. This list should contain module names that the module depends on.
missingDependencies
python
pybrain/pybrain
pybrain/tests/runtests.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/runtests.py
BSD-3-Clause
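A sketch of the convention missingDependencies relies on: a test module declares its optional imports in a plain list of strings named _dependencies (the module and dependency names here are hypothetical).

# inside a unittest module, e.g. test_something.py (hypothetical)
_dependencies = ['scipy', 'matplotlib']

# the runner can then skip the module when anything is missing:
#   missingDependencies(test_something)  ->  ['matplotlib']   (if absent)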
def getSubDirectories(testdir):
    """Recursively builds a list of all subdirectories in the test suite."""
    subdirs = [os.path.join(testdir, d) for d in os.listdir(testdir)
               if os.path.isdir(os.path.join(testdir, d))]
    for d in copy(subdirs):
        # d is already a full path below testdir, so recurse on it directly
        subdirs.extend(getSubDirectories(d))
    return subdirs
Recursively builds a list of all subdirectories in the test suite.
getSubDirectories
python
pybrain/pybrain
pybrain/tests/runtests.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/runtests.py
BSD-3-Clause
def make_test_suite():
    """Load unittests placed in pybrain/tests/unittests, then return a
    TestSuite object of those."""
    # [...]/pybrain/pybrain [cut] /tests/runtests.py
    path = os.path.abspath(__file__).rsplit(os.sep + 'tests', 1)[0]
    sys.path.append(path.rstrip('pybrain'))
    top_testdir = os.path.join(path, 'tests', 'unittests')
    testdirs = getSubDirectories(top_testdir)

    # Initialize the testsuite to add to
    suite = TestSuite()
    optionflags = doctest.ELLIPSIS | doctest.NORMALIZE_WHITESPACE | doctest.IGNORE_EXCEPTION_DETAIL

    for testdir in testdirs:
        # All unittest modules have to start with 'test_' and have to be,
        # of course, python files
        module_names = [f[:-3] for f in os.listdir(testdir)
                        if f.startswith('test_') and f.endswith('.py')]
        if not module_names:
            logging.info('No tests found in %s' % testdir)
            continue

        # "Magically" import the tests package and the test modules that
        # we've found
        test_package_path = 'pybrain.tests.unittests'
        sub_path = os.path.relpath(testdir, top_testdir).split(os.sep)
        test_package_path = '.'.join([test_package_path] + sub_path)
        test_package = __import__(test_package_path, fromlist=module_names)

        # Put the test modules in a list that can be passed to the testsuite
        modules = (getattr(test_package, n) for n in module_names)
        modules = [(m, missingDependencies(m)) for m in modules]
        untests = [(m, md) for m, md in modules if md]
        modules = [m for m, md in modules if not md]

        # Print out modules that are missing dependencies
        for module, miss_dep in untests:
            # Mr Dep is not around, though
            logging.warning('Module %s is missing dependencies: %s' % (
                module.__name__, ', '.join(miss_dep)))

        # Print out a list of the tests that were found
        for m in modules:
            logging.info('Tests found: %s' % m.__name__)

        # Build up the testsuite
        suite.addTests([TestLoader().loadTestsFromModule(m) for m in modules])

        # Add doctests from the unittest modules to the suite
        for mod in modules:
            try:
                suite.addTest(doctest.DocTestSuite(mod, optionflags=optionflags))
            except ValueError:
                # No tests found.
                pass
    return suite
Load unittests placed in pybrain/tests/unittests, then return a TestSuite object of those.
make_test_suite
python
pybrain/pybrain
pybrain/tests/runtests.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/runtests.py
BSD-3-Clause
def runModuleTestSuite(module):
    """Runs a test suite for all local tests."""
    suite = TestSuite([TestLoader().loadTestsFromModule(module)])
    # Add local doctests
    optionflags = ELLIPSIS | NORMALIZE_WHITESPACE | REPORT_ONLY_FIRST_FAILURE | IGNORE_EXCEPTION_DETAIL
    try:
        # pass the flags to the DocTestSuite itself, not to addTest()
        suite.addTest(DocTestSuite(module, optionflags=optionflags))
    except ValueError:
        # No tests have been found in that module.
        pass
    TextTestRunner().run(suite)
Runs a test suite for all local tests.
runModuleTestSuite
python
pybrain/pybrain
pybrain/tests/testsuites.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/testsuites.py
BSD-3-Clause
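A usage sketch following the idiom used by this repo's test modules: the module hands itself to runModuleTestSuite.

from pybrain.tests import runModuleTestSuite

if __name__ == '__main__':
    runModuleTestSuite(__import__('__main__'))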
def buildSharedCrossedNetwork():
    """ build a network with shared connections. Two hidden modules are
    symmetrically linked, but to a different input neuron than the output
    neuron. The weights are fixed (1 and 2). """
    N = FeedForwardNetwork('shared-crossed')
    h = 1
    a = LinearLayer(2, name='a')
    b = LinearLayer(h, name='b')
    c = LinearLayer(h, name='c')
    d = LinearLayer(2, name='d')
    N.addInputModule(a)
    N.addModule(b)
    N.addModule(c)
    N.addOutputModule(d)
    m1 = MotherConnection(h)
    m1.params[:] = scipy.array((1,))
    m2 = MotherConnection(h)
    m2.params[:] = scipy.array((2,))
    N.addConnection(SharedFullConnection(m1, a, b, inSliceTo=1))
    N.addConnection(SharedFullConnection(m1, a, c, inSliceFrom=1))
    N.addConnection(SharedFullConnection(m2, b, d, outSliceFrom=1))
    N.addConnection(SharedFullConnection(m2, c, d, outSliceTo=1))
    N.sortModules()
    return N
build a network with shared connections. Two hidden modules are symmetrically linked, but to a different input neuron than the output neuron. The weights are fixed (1 and 2).
buildSharedCrossedNetwork
python
pybrain/pybrain
pybrain/tests/unittests/structure/connections/test_shared_connections.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/unittests/structure/connections/test_shared_connections.py
BSD-3-Clause
def buildSlicedNetwork():
    """ build a network with sliced connections: the first input neuron is
    connected to the second output neuron, and vice versa. The weights are
    random. """
    N = FeedForwardNetwork('sliced')
    a = LinearLayer(2, name='a')
    b = LinearLayer(2, name='b')
    N.addInputModule(a)
    N.addOutputModule(b)
    N.addConnection(FullConnection(a, b, inSliceTo=1, outSliceFrom=1))
    N.addConnection(FullConnection(a, b, inSliceFrom=1, outSliceTo=1))
    N.sortModules()
    return N
build a network with sliced connections: the first input neuron is connected to the second output neuron, and vice versa. The weights are random.
buildSlicedNetwork
python
pybrain/pybrain
pybrain/tests/unittests/structure/connections/test_sliced_connections.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/unittests/structure/connections/test_sliced_connections.py
BSD-3-Clause
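A quick sanity sketch of the crossing: with the slices above, output 0 depends only on input 1 and output 1 only on input 0, so changing the first input can only move the second output.

n = buildSlicedNetwork()
base = n.activate([1.0, 1.0]).copy()
bumped = n.activate([2.0, 1.0])   # change only the first input
assert bumped[0] == base[0]       # output 0 is fed by input 1 alone
# bumped[1] will generally differ, since input 0 feeds output 1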
def buildSimpleBorderSwipingNet(size=3, dim=3, hsize=1, predefined={}):
    """ build a simple swiping network of the given size and dimension, using
    linear input and output layers """
    # assuming identical size in all dimensions
    dims = tuple([size] * dim)
    # also includes one dimension for the swipes
    hdims = tuple(list(dims) + [2 ** dim])
    inmod = LinearLayer(size ** dim, name='input')
    inmesh = ModuleMesh.viewOnFlatLayer(inmod, dims, 'inmesh')
    outmod = LinearLayer(size ** dim, name='output')
    outmesh = ModuleMesh.viewOnFlatLayer(outmod, dims, 'outmesh')
    hiddenmesh = ModuleMesh.constructWithLayers(TanhLayer, hsize, hdims, 'hidden')
    return BorderSwipingNetwork(inmesh, hiddenmesh, outmesh, predefined=predefined)
build a simple swiping network of the given size and dimension, using linear input and output layers
buildSimpleBorderSwipingNet
python
pybrain/pybrain
pybrain/tests/unittests/structure/networks/test_borderswipingnetwork.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/unittests/structure/networks/test_borderswipingnetwork.py
BSD-3-Clause
def buildCyclicNetwork(recurrent):
    """ build a cyclic network with 4 modules

    :key recurrent: make one of the connections recurrent """
    Network = RecurrentNetwork if recurrent else FeedForwardNetwork
    N = Network('cyc')
    a = LinearLayer(1, name='a')
    b = LinearLayer(2, name='b')
    c = LinearLayer(3, name='c')
    d = LinearLayer(4, name='d')
    N.addInputModule(a)
    N.addModule(b)
    N.addModule(d)
    N.addOutputModule(c)
    N.addConnection(FullConnection(a, b))
    N.addConnection(FullConnection(b, c))
    N.addConnection(FullConnection(c, d))
    if recurrent:
        N.addRecurrentConnection(FullConnection(d, a))
    else:
        N.addConnection(FullConnection(d, a))
    N.sortModules()
    return N
build a cyclic network with 4 modules :key recurrent: make one of the connections recurrent
buildCyclicNetwork
python
pybrain/pybrain
pybrain/tests/unittests/structure/networks/test_cyclic_network.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/unittests/structure/networks/test_cyclic_network.py
BSD-3-Clause
def buildMixedNestedNetwork():
    """ build a nested network with the inner one being a feed-forward
    network and the outer one being recurrent. """
    N = RecurrentNetwork('outer')
    a = LinearLayer(1, name='a')
    b = LinearLayer(2, name='b')
    c = buildNetwork(2, 3, 1)
    c.name = 'inner'
    N.addInputModule(a)
    N.addModule(c)
    N.addOutputModule(b)
    N.addConnection(FullConnection(a, b))
    N.addConnection(FullConnection(b, c))
    N.addRecurrentConnection(FullConnection(c, c))
    N.sortModules()
    return N
build a nested network with the inner one being a feed-forward network and the outer one being recurrent.
buildMixedNestedNetwork
python
pybrain/pybrain
pybrain/tests/unittests/structure/networks/test_nested_ffn_and_rnn.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/unittests/structure/networks/test_nested_ffn_and_rnn.py
BSD-3-Clause
def buildDecomposableNetwork():
    """ three hidden neurons, with 2 in- and 2 outconnections each. """
    n = buildNetwork(2, 3, 2, bias=False)
    ndc = NeuronDecomposableNetwork.convertNormalNetwork(n)
    # set all the weights to 1
    ndc._setParameters(ones(12))
    return ndc
three hidden neurons, with 2 in- and 2 outconnections each.
buildDecomposableNetwork
python
pybrain/pybrain
pybrain/tests/unittests/structure/networks/test_network_decomposition.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/unittests/structure/networks/test_network_decomposition.py
BSD-3-Clause
def buildSomeConnections(modules):
    """ add a connection from every second to every third module """
    res = []
    for i in range(len(modules) // 3 - 1):
        res.append(FullConnection(modules[i * 2], modules[i * 3 + 1]))
    return res
add a connection from every second to every third module
buildSomeConnections
python
pybrain/pybrain
pybrain/tests/unittests/structure/networks/test_network_sort.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tests/unittests/structure/networks/test_network_sort.py
BSD-3-Clause
def convertSequenceToTimeWindows(DSseq, NewClass, winsize):
    """ Converts a sequential classification dataset into time windows of
    fixed length. Assumes the correct class is given at the last timestep of
    each sequence. Incomplete windows at the sequence end are pruned. No
    overlap between windows.

    :arg DSseq: the sequential data set to cut up
    :arg winsize: size of the data window
    :arg NewClass: class of the windowed data set to be returned (gets
        initialised with indim*winsize, outdim)"""
    assert isinstance(DSseq, SequentialDataSet)
    DSwin = NewClass(DSseq.indim * winsize, DSseq.outdim)
    nsamples = 0
    nseqs = 0
    si = r_[DSseq['sequence_index'].flatten(), DSseq.endmarker['sequence_index']]
    for i in range(DSseq.getNumSequences()):
        # get one sequence as arrays
        input = DSseq['input'][si[i]:si[i + 1], :]
        target = DSseq['target'][si[i]:si[i + 1], :]
        nseqs += 1
        # cut this sequence into windows, assuming the class is given at the
        # last step of each sequence
        for k in range(winsize, input.shape[0], winsize):
            inp_win = input[k - winsize:k, :]
            tar_win = target[k - 1, :]
            DSwin.addSample(inp_win.flatten(), tar_win.flatten())
            nsamples += 1
    print("samples in original dataset: ", len(DSseq))
    print("window size * nsamples = ", winsize * nsamples)
    print("total data points in original data: ", len(DSseq) * DSseq.indim)
    print("total data points in windowed dataset: ", len(DSwin) * DSwin.indim)
    return DSwin
Converts a sequential classification dataset into time windows of fixed length. Assumes the correct class is given at the last timestep of each sequence. Incomplete windows at the sequence end are pruned. No overlap between windows. :arg DSseq: the sequential data set to cut up :arg winsize: size of the data window :arg NewClass: class of the windowed data set to be returned (gets initialised with indim*winsize, outdim)
convertSequenceToTimeWindows
python
pybrain/pybrain
pybrain/tools/datasettools.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/datasettools.py
BSD-3-Clause
def windowSequenceEval(DS, winsz, result):
    """ take the results of a window-based classification and assess/plot
    them on the sequence. WARNING: NOT TESTED!"""
    si_old = 0
    idx = 0
    x = []
    y = []
    seq_res = []
    for i, si in enumerate(DS['sequence_index'][1:].astype(int)):
        tar = DS['target'][si - 1]
        curr_x = si_old
        correct = 0.
        wrong = 0.
        while curr_x < si:
            x.append(curr_x)
            if result[idx] == tar:
                correct += 1.
                y += [1., 1.]
            else:
                wrong += 1.
                y += [0., 0.]
            idx += 1
            curr_x += winsz
        x.append(curr_x)
        si_old = si  # advance to the start of the next sequence
        seq_res.append(100. * correct / (correct + wrong))
        print("sequence %d correct: %12.2f%%" % (i, seq_res[-1]))
    seq_res = array(seq_res)
    # a sequence counts as correct when at least half of its windows were
    # classified correctly (seq_res holds percentages)
    print("total fraction of correct sequences: ",
          100. * float((seq_res >= 50.).sum()) / seq_res.size)
take results of a window-based classification and assess/plot them on the sequence WARNING: NOT TESTED!
windowSequenceEval
python
pybrain/pybrain
pybrain/tools/datasettools.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/datasettools.py
BSD-3-Clause
def normalize(self, ds, field='input'):
    """ normalize a dataset or vector with respect to the stored parameters
    (mean/std or min/max) """
    if self.dim <= 0:
        raise IndexError("No normalization parameters defined!")
    dsdim = ds[field].shape[1]
    if self.dim != dsdim:
        raise IndexError("Dimension of normalization params does not match DataSet field!")
    newfeat = ds[field]
    if self.meanstd:
        for i in range(dsdim):
            divisor = self.par2[i] if self.par2[i] > 0 else 1.0
            newfeat[:, i] = (newfeat[:, i] - self.par1[i]) / divisor
    else:
        for i in range(dsdim):
            scale = self.scale[i] if isfinite(self.scale[i]) else 1.0
            newfeat[:, i] = (newfeat[:, i] - self.par1[i]) * scale + self.newmin
    ds.setField(field, newfeat)
normalize a dataset or vector with respect to the stored parameters (mean/std or min/max)
normalize
python
pybrain/pybrain
pybrain/tools/datasettools.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/datasettools.py
BSD-3-Clause
def getAllFilesIn(dir, tag='', extension='.pickle'):
    """ return a list of all filenames in the specified directory (with the
    given tag and/or extension). """
    allfiles = os.listdir(dir)
    res = []
    for f in allfiles:
        if f.endswith(extension) and f.startswith(tag):
            res.append(f[:-len(extension)])
    return res
return a list of all filenames in the specified directory (with the given tag and/or extension).
getAllFilesIn
python
pybrain/pybrain
pybrain/tools/filehandling.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/filehandling.py
BSD-3-Clause
def selectSome(strings, requiredsubstrings=[], requireAll=True):
    """ Filter the list of strings to contain only those that include the
    required substrings (all of them if requireAll is set, otherwise at
    least one). """
    if len(requiredsubstrings) == 0:
        return strings
    res = []
    for s in strings:
        if requireAll:
            bad = False
            for rs in requiredsubstrings:
                if s.find(rs) < 0:
                    bad = True
                    break
            if not bad:
                res.append(s)
        else:
            for rs in requiredsubstrings:
                if s.find(rs) >= 0:
                    res.append(s)
                    break
    return res
Filter the list of strings to contain only those that include the required substrings (all of them if requireAll is set, otherwise at least one).
selectSome
python
pybrain/pybrain
pybrain/tools/filehandling.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/filehandling.py
BSD-3-Clause
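Worked examples of the two matching modes (illustrative strings, assuming selectSome is imported from pybrain.tools.filehandling):

>>> selectSome(['exp_a_1', 'exp_b_1', 'run_c'], ['exp'])
['exp_a_1', 'exp_b_1']
>>> selectSome(['exp_a_1', 'exp_b_1', 'run_c'], ['exp', 'b'], requireAll=True)
['exp_b_1']
>>> selectSome(['exp_a_1', 'exp_b_1', 'run_c'], ['exp', 'c'], requireAll=False)
['exp_a_1', 'exp_b_1', 'run_c']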
def pickleReadDict(name):
    """ pickle-read a (by default: dictionary) variable from a file """
    try:
        # pickle files must be opened in binary mode
        f = open(name + '.pickle', 'rb')
        val = pickle.load(f)
        f.close()
    except Exception as e:
        print('Nothing read from', name, ':', str(e))
        val = {}
    return val
pickle-read a (by default: dictionary) variable from a file
pickleReadDict
python
pybrain/pybrain
pybrain/tools/filehandling.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/filehandling.py
BSD-3-Clause
def calcFisherInformation(sigma, invSigma=None, factorSigma=None):
    """ Compute the exact Fisher Information Matrix of a Gaussian
    distribution, given its covariance matrix.
    Returns a list of the diagonal blocks. """
    # use 'is None' rather than '== None': the arguments may be arrays
    if invSigma is None:
        invSigma = inv(sigma)
    if factorSigma is None:
        factorSigma = cholesky(sigma)
    dim = sigma.shape[0]
    fim = [invSigma]
    for k in range(dim):
        D = invSigma[k:, k:].copy()
        D[0, 0] += factorSigma[k, k] ** -2
        fim.append(D)
    return fim
Compute the exact Fisher Information Matrix of a Gaussian distribution, given its covariance matrix. Returns a list of the diagonal blocks.
calcFisherInformation
python
pybrain/pybrain
pybrain/tools/fisher.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/fisher.py
BSD-3-Clause
def calcInvFisher(sigma, invSigma=None, factorSigma=None):
    """ Efficiently compute the exact inverse of the FIM of a Gaussian.
    Returns a list of the diagonal blocks. """
    # use 'is None' rather than '== None': the arguments may be arrays
    if invSigma is None:
        invSigma = inv(sigma)
    if factorSigma is None:
        factorSigma = cholesky(sigma)
    dim = sigma.shape[0]
    invF = [mat(1 / (invSigma[-1, -1] + factorSigma[-1, -1] ** -2))]
    invD = 1 / invSigma[-1, -1]
    for k in reversed(list(range(dim - 1))):
        v = invSigma[k + 1:, k]
        w = invSigma[k, k]
        wr = w + factorSigma[k, k] ** -2
        u = dot(invD, v)
        s = dot(v, u)
        q = 1 / (w - s)
        qr = 1 / (wr - s)
        t = -(1 + q * s) / w
        tr = -(1 + qr * s) / wr
        invF.append(blockCombine([[qr, tr * u],
                                  [mat(tr * u).T, invD + qr * outer(u, u)]]))
        invD = blockCombine([[q, t * u],
                             [mat(t * u).T, invD + q * outer(u, u)]])
    invF.append(sigma)
    invF.reverse()
    return invF
Efficiently compute the exact inverse of the FIM of a Gaussian. Returns a list of the diagonal blocks.
calcInvFisher
python
pybrain/pybrain
pybrain/tools/fisher.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/fisher.py
BSD-3-Clause
def semilinear(x):
    """ This function ensures that the values of the array are always
    positive. It is x+1 for x >= 0 and exp(x) for x < 0. """
    try:
        # assume x is a numpy array
        shape = x.shape
        x = x.flatten()  # flatten() returns a copy, so rebind the result
        x = x.tolist()
    except AttributeError:
        # no, it wasn't: build shape from length of list
        shape = (1, len(x))

    def f(val):
        if val < 0:
            # exponential function for x < 0
            return safeExp(val)
        else:
            # linear function for x >= 0
            return val + 1.0
    return array(list(map(f, x))).reshape(shape)
This function ensures that the values of the array are always positive. It is x+1 for x >= 0 and exp(x) for x < 0.
semilinear
python
pybrain/pybrain
pybrain/tools/functions.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/functions.py
BSD-3-Clause
def semilinearPrime(x):
    """ This function is the first derivative of the semilinear function
    (above). It is needed for the backward pass of the module. """
    try:
        # assume x is a numpy array
        shape = x.shape
        x = x.flatten()  # flatten() returns a copy, so rebind the result
        x = x.tolist()
    except AttributeError:
        # no, it wasn't: build shape from length of list
        shape = (1, len(x))

    def f(val):
        if val < 0:
            # derivative of exp(x) for x < 0
            return safeExp(val)
        else:
            # derivative of x + 1 for x >= 0
            return 1.0
    return array(list(map(f, x))).reshape(shape)
This function is the first derivative of the semilinear function (above). It is needed for the backward pass of the module.
semilinearPrime
python
pybrain/pybrain
pybrain/tools/functions.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/functions.py
BSD-3-Clause
def ranking(R):
    """ Produces a linear ranking of the values in R. """
    # sorted(..., cmp=...) is Python 2 only; use key functions instead
    l = sorted(enumerate(R), key=lambda a: a[1])
    l = sorted(enumerate(l), key=lambda a: a[1])
    return array([kv[0] for kv in l])
Produces a linear ranking of the values in R.
ranking
python
pybrain/pybrain
pybrain/tools/functions.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/functions.py
BSD-3-Clause
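A worked example: each position receives the 0-based rank of its value in sorted order, so 1.0 gets rank 0 (smallest), 2.0 rank 1, and 3.0 rank 2 (largest).

>>> from numpy import array
>>> ranking(array([3.0, 1.0, 2.0]))
array([2, 0, 1])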
def expln(x):
    """ This continuous function ensures that the values of the array are
    always positive. It is ln(x+1)+1 for x >= 0 and exp(x) for x < 0. """
    def f(val):
        if val < 0:
            # exponential function for x < 0
            return exp(val)
        else:
            # natural log function for x >= 0
            return log(val + 1.0) + 1
    try:
        result = array(list(map(f, x)))
    except TypeError:
        # x is a scalar
        result = array(f(x))
    return result
This continuous function ensures that the values of the array are always positive. It is ln(x+1)+1 for x >= 0 and exp(x) for x < 0.
expln
python
pybrain/pybrain
pybrain/tools/functions.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/functions.py
BSD-3-Clause
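The two branches of expln are chosen to meet smoothly at zero; a quick check (sketch), also covering the derivative defined in the next record:

from pybrain.tools.functions import expln, explnPrime

# the value branches meet at x = 0: exp(0) = 1 and ln(0 + 1) + 1 = 1
assert abs(expln(0.0) - 1.0) < 1e-12
# the derivative is continuous there too: exp(0) = 1 and 1 / (0 + 1) = 1
assert abs(explnPrime(0.0) - 1.0) < 1e-12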
def explnPrime(x):
    """ This function is the first derivative of the expln function (above).
    It is needed for the backward pass of the module. """
    def f(val):
        if val < 0:
            # derivative of exp(x) for x < 0
            return exp(val)
        else:
            # derivative of ln(x+1)+1 for x >= 0
            return 1.0 / (val + 1.0)
    try:
        result = array(list(map(f, x)))
    except TypeError:
        # x is a scalar
        result = array(f(x))
    return result
This function is the first derivative of the expln function (above). It is needed for the backward pass of the module.
explnPrime
python
pybrain/pybrain
pybrain/tools/functions.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/functions.py
BSD-3-Clause
def multivariateNormalPdf(z, x, sigma):
    """ The pdf of a multivariate normal distribution (not in scipy).
    The sample z and the mean x should be 1-dim-arrays, and sigma a square
    2-dim-array. """
    assert (len(z.shape) == 1 and len(x.shape) == 1
            and len(x) == len(z) and sigma.shape == (len(x), len(z)))
    tmp = -0.5 * dot(dot((z - x), inv(sigma)), (z - x))
    res = (1. / power(2.0 * pi, len(z) / 2.)) * (1. / sqrt(det(sigma))) * exp(tmp)
    return res
The pdf of a multivariate normal distribution (not in scipy). The sample z and the mean x should be 1-dim-arrays, and sigma a square 2-dim-array.
multivariateNormalPdf
python
pybrain/pybrain
pybrain/tools/functions.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/functions.py
BSD-3-Clause
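The docstring predates scipy.stats.multivariate_normal; where a modern SciPy is available, the two should agree, as in this sketch (assuming the function above is imported from pybrain.tools.functions):

from numpy import array, eye, allclose
from scipy.stats import multivariate_normal

z = array([0.5, -0.2])
x = array([0.0, 0.0])
sigma = eye(2)
assert allclose(multivariateNormalPdf(z, x, sigma),
                multivariate_normal(mean=x, cov=sigma).pdf(z))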
def simpleMultivariateNormalPdf(z, detFactorSigma):
    """ Assuming z has been transformed to a mean of zero and an identity
    matrix of covariances. Needs to provide the determinant of the
    factorized (real) covariance matrix. """
    dim = len(z)
    return exp(-0.5 * dot(z, z)) / (power(2.0 * pi, dim / 2.) * detFactorSigma)
Assuming z has been transformed to a mean of zero and an identity matrix of covariances. Needs to provide the determinant of the factorized (real) covariance matrix.
simpleMultivariateNormalPdf
python
pybrain/pybrain
pybrain/tools/functions.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/functions.py
BSD-3-Clause
def multivariateCauchy(mu, sigma, onlyDiagonal=True):
    """ Generates a sample according to a given multivariate Cauchy distribution. """
    if not onlyDiagonal:
        u, s, d = svd(sigma)
        coeffs = sqrt(s)
    else:
        coeffs = diag(sigma)
    r = rand(len(mu))
    res = coeffs * tan(pi * (r - 0.5))
    if not onlyDiagonal:
        res = dot(d, dot(res, u))
    return res + mu
Generates a sample according to a given multivariate Cauchy distribution.
multivariateCauchy
python
pybrain/pybrain
pybrain/tools/functions.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/functions.py
BSD-3-Clause
def approxChiFunction(dim):
    """ Returns an approximation of Chi (the expected length of a standard
    normal random vector), according to: Ostermeier 1997. """
    dim = float(dim)
    return sqrt(dim) * (1 - 1 / (4 * dim) + 1 / (21 * dim ** 2))
Returns an approximation of Chi (the expected length of a standard normal random vector), according to: Ostermeier 1997.
approxChiFunction
python
pybrain/pybrain
pybrain/tools/functions.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/functions.py
BSD-3-Clause
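A Monte-Carlo sanity sketch (assuming approxChiFunction is imported from pybrain.tools.functions): the approximation should land close to the empirical mean length of standard normal vectors; approxChiFunction(10) is roughly 3.085, against a true value near 3.084.

from numpy import mean
from numpy.linalg import norm
from numpy.random import randn

dim = 10
empirical = mean([norm(randn(dim)) for _ in range(10000)])
assert abs(approxChiFunction(dim) - empirical) < 0.05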
def sqrtm(M):
    """ Returns the symmetric positive semi-definite square root of a matrix. """
    r = real_if_close(expm(0.5 * logm(M)), 1e-8)
    return (r + r.T) / 2
Returns the symmetric positive semi-definite square root of a matrix.
sqrtm
python
pybrain/pybrain
pybrain/tools/functions.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/functions.py
BSD-3-Clause
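A quick check (sketch) on a symmetric positive definite matrix, assuming sqrtm above is in scope: the returned root, multiplied by itself, should reproduce the input.

from numpy import allclose, array, dot

M = array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
r = sqrtm(M)
assert allclose(dot(r, r), M)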
def __init__(self, min_params, max_params, n_steps=7, **kwargs):
    """ :key min_params: Tuple of two elements specifying the minima of
            the two metaparameters
        :key max_params: Tuple of two elements specifying the maxima of
            the two metaparameters
        :key n_steps: Tuple of two elements, specifying the number of steps
            between the minimum and maximum of each search dimension.
            Alternatively, specify a scalar to set the same granularity for
            each dimension.
        :key **kwargs: See setArgs()
    """
    assert len(min_params) == len(max_params)
    self._min_params = array(min_params, float)
    self._max_params = array(max_params, float)
    self._n_dim = len(min_params)
    self._n_steps = append([], n_steps) * ones(self._n_dim)
    self._range = self._max_params - self._min_params
    self._performances = {}
    self._verbosity = 0
    self.setArgs(**kwargs)
:key min_params: Tuple of two elements specifying the minima of the two metaparameters :key max_params: Tuple of two elements specifying the maxima of the two metaparameters :key n_steps: Tuple of two elements, specifying the number of steps between the minimum and maximum of each search dimension. Alternatively, specify a scalar to set the same granularity for each dimension. :key **kwargs: See setArgs()
__init__
python
pybrain/pybrain
pybrain/tools/gridsearch.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/gridsearch.py
BSD-3-Clause
def search(self):
    """ The main search method, which validates all calculated metaparameter
    settings (=jobs) by calling the abstract _validate() method. After enough
    new jobs have been validated to visualize a grid, the _onStep() callback
    method is called. """
    jobs = self._calculateJobs()
    perfs = self._performances
    for line in jobs:
        for params in line:
            perf = self._validate(params)
            perfs[params] = perf
            if self._verbosity > 0:
                print("validated:", params, " performance = ", perf)
        self._onStep()
    max_idx = array(list(perfs.values())).argmax()
    return list(perfs.keys())[max_idx]
The main search method, which validates all calculated metaparameter settings (=jobs) by calling the abstract _validate() method. After enough new jobs have been validated to visualize a grid, the _onStep() callback method is called.
search
python
pybrain/pybrain
pybrain/tools/gridsearch.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/gridsearch.py
BSD-3-Clause
def _calculateJobs(self):
    """ Calculate and return the metaparameter settings to be validated (=jobs). """
    ndim = len(self._min_params)
    linspaces = []
    for i in range(ndim):
        linspaces.append(self._permuteSequence(
            list(linspace(self._min_params[i], self._max_params[i], self._n_steps[i]))))
    nr_c = len(linspaces[0])
    nr_g = len(linspaces[1])
    i = 0
    j = 0
    jobs = []
    while i < nr_c or j < nr_g:
        if i / float(nr_c) < j / float(nr_g):
            line = []
            for k in range(0, j):
                line.append((linspaces[0][i], linspaces[1][k]))
            i += 1
            jobs.append(line)
        else:
            line = []
            for k in range(0, i):
                line.append((linspaces[0][k], linspaces[1][j]))
            j += 1
            jobs.append(line)
    return jobs
Calculate and return the metaparameter settings to be validated (=jobs).
_calculateJobs
python
pybrain/pybrain
pybrain/tools/gridsearch.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/gridsearch.py
BSD-3-Clause
def _permuteSequence(self, seq):
    """ Helper function for calculating the job list """
    n = len(seq)
    if n <= 1:
        return seq
    mid = int(n / 2)
    left = self._permuteSequence(seq[:mid])
    right = self._permuteSequence(seq[mid + 1:])
    ret = [seq[mid]]
    while left or right:
        if left:
            ret.append(left.pop(0))
        if right:
            ret.append(right.pop(0))
    return ret
Helper function for calculating the job list
_permuteSequence
python
pybrain/pybrain
pybrain/tools/gridsearch.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/gridsearch.py
BSD-3-Clause
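The recursion visits the midpoint first and then interleaves the permuted halves, which spreads early grid evaluations across the whole parameter range. A worked trace (treating the method as a plain function on a list of indices):

# _permuteSequence([0, 1, 2, 3, 4, 5, 6])
#   mid = 3, left  = _permuteSequence([0, 1, 2]) = [1, 0, 2]
#            right = _permuteSequence([4, 5, 6]) = [5, 4, 6]
#   -> [3, 1, 5, 0, 4, 2, 6]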
def search(self):
    """ The main search method, which validates all calculated metaparameter
    settings by calling the abstract _validate() method. """
    self._n_params = len(self._min_params)
    center = self._min_params + self._range / 2.
    for level in range(self._n_iterations):
        grid = self._calcGrid(center, level)
        local_perf = apply_along_axis(self._validateWrapper, 1, grid)
        max_idx = local_perf.argmax()
        center = grid[max_idx]
        if self._verbosity > 0:
            print()
            print("Found maximum at:", center, " performance = ", local_perf[max_idx])
            print()
    return center
The main search method, which validates all calculated metaparameter settings by calling the abstract _validate() method.
search
python
pybrain/pybrain
pybrain/tools/gridsearch.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/gridsearch.py
BSD-3-Clause
def _validateWrapper(self, params):
    """ Helper function that wraps the _validate() method. """
    perf = self._validate(params)
    if self._verbosity > 0:
        print("validated:", params, " performance = ", perf)
    self._performances[tuple(params)] = perf
    return perf
Helper function that wraps the _validate() method.
_validateWrapper
python
pybrain/pybrain
pybrain/tools/gridsearch.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/gridsearch.py
BSD-3-Clause
def _calcGrid(self, center, level):
    """ Calculate the next grid to validate.

    :arg center: The central position of the grid
    :arg level: The iteration number """
    local_range = self._range / (self._refine_factor ** level)
    scale = local_range / 2
    translation = center
    grid = self._doe_pat * scale + translation
    grid = self._moveGridIntoBounds(grid)
    return grid
Calculate the next grid to validate. :arg center: The central position of the grid :arg level: The iteration number
_calcGrid
python
pybrain/pybrain
pybrain/tools/gridsearch.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/gridsearch.py
BSD-3-Clause
def _moveGridIntoBounds(self, grid):
    """ If the calculated grid is out of bounds, this method moves it back
    inside, and returns the new grid. """
    grid = array(grid)
    local_min_params = grid.min(axis=0)
    local_max_params = grid.max(axis=0)
    toosmall_idxs, = where(local_min_params < self._min_params)
    toogreat_idxs, = where(local_max_params > self._max_params)
    translation = zeros(self._n_params)
    for idx in toosmall_idxs:
        translation[idx] = self._min_params[idx] - local_min_params[idx]
    for idx in toogreat_idxs:
        translation[idx] = self._max_params[idx] - local_max_params[idx]
    grid += translation
    return grid
If the calculated grid is out of bounds, this method moves it back inside, and returns the new grid.
_moveGridIntoBounds
python
pybrain/pybrain
pybrain/tools/gridsearch.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/gridsearch.py
BSD-3-Clause
def __init__(self, trainer, dataset, min_params=[-5, -15], max_params=[15, 3],
             n_steps=7, **kwargs):
    """ The parameter boundaries are specified in log2-space.

    :arg trainer: The SVM trainer including the SVM module. (Could be any
        kind of trainer and module.)
    :arg dataset: Dataset used for crossvalidation
    """
    GridSearch2D.__init__(self, min_params, max_params, n_steps)
    self._trainer = trainer
    self._dataset = dataset
    self._validator_kwargs = {}
    self._n_folds = 5
    self.setArgs(**kwargs)
The parameter boundaries are specified in log2-space. :arg trainer: The SVM trainer including the SVM module. (Could be any kind of trainer and module) :arg dataset: Dataset used for crossvalidation
__init__
python
pybrain/pybrain
pybrain/tools/gridsearch.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/gridsearch.py
BSD-3-Clause
def setArgs(self, **kwargs):
    """ :key **kwargs:
            nfolds    : Number of folds of crossvalidation
            max_epochs: Maximum number of epochs for training
            verbosity : set verbosity """
    for key, value in list(kwargs.items()):
        if key in ("folds", "nfolds"):
            self._n_folds = int(value)
        elif key == "max_epochs":
            # note: 'key in ("max_epochs")' would be a substring test on a
            # plain string, so compare directly instead
            self._validator_kwargs['max_epochs'] = value
        elif key in ("verbose", "ver", "v"):
            self._verbosity = value
        else:
            GridSearch2D.setArgs(self, **{key: value})
:key **kwargs: nfolds : Number of folds of crossvalidation max_epochs: Maximum number of epochs for training verbosity : set verbosity
setArgs
python
pybrain/pybrain
pybrain/tools/gridsearch.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/gridsearch.py
BSD-3-Clause
def _validate(self, params):
    """ The overridden validate function, that uses cross-validation in
    order to determine the params' performance value. """
    trainer = self._getTrainerForParams(params)
    return CrossValidator(trainer, self._dataset, self._n_folds,
                          **self._validator_kwargs).validate()
The overridden validate function, that uses cross-validation in order to determine the params' performance value.
_validate
python
pybrain/pybrain
pybrain/tools/gridsearch.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/gridsearch.py
BSD-3-Clause
def _getTrainerForParams(self, params):
    """ Returns a trainer, loaded with the supplied metaparameters. """
    trainer = copy.deepcopy(self._trainer)
    trainer.setArgs(cost=2 ** params[0], gamma=2 ** params[1], ver=0)
    return trainer
Returns a trainer, loaded with the supplied metaparameters.
_getTrainerForParams
python
pybrain/pybrain
pybrain/tools/gridsearch.py
https://github.com/pybrain/pybrain/blob/master/pybrain/tools/gridsearch.py
BSD-3-Clause