Dataset columns: desc (docstring text; string lengths 3 to 26.7k), decl (function declaration; string lengths 11 to 7.89k), bodies (function body; string lengths 8 to 553k).
'Serializes or deserializes a value by given name. This operator saves or loads a value by given name. If this is a serializer, then the value is simply saved at the key. Note that some type information might be missed depending on the implementation (and the target file format). If this is a deserializer, then the value is loaded by the key. The deserialization differently works on scalars and arrays. For scalars, the ``value`` argument is used just for determining the type of restored value to be converted, and the converted value is returned. For arrays, the restored elements are directly copied into the ``value`` argument. String values are treated like scalars. .. note:: As of v2.0.0, serializers and deserializers are required to correctly handle the ``None`` value. When ``value`` is ``None``, serializers save it in format-dependent ways, and deserializers just return the loaded value. When the saved ``None`` value is loaded by a deserializer, it should quietly return the ``None`` value without modifying the ``value`` object. Args: key (str): Name of the serialization entry. value (scalar, array, None, or str): Object to be (de)serialized. ``None`` is only supported by deserializers. Returns: Serialized or deserialized value.'
def __call__(self, key, value):
raise NotImplementedError
'Saves an object by this serializer. This is equivalent to ``obj.serialize(self)``. Args: obj: Target object to be serialized.'
def save(self, obj):
obj.serialize(self)
'Loads an object from this deserializer. This is equivalent to ``obj.serialize(self)``. Args: obj: Target object to be deserialized.'
def load(self, obj):
obj.serialize(self)
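A minimal usage sketch of the save/load pair above, assuming the NPZ serializers shipped with Chainer (``chainer.serializers.save_npz`` / ``load_npz``); the link and the file name are placeholders:

import chainer.links as L
from chainer import serializers

model = L.Linear(3, 2)
# save_npz builds a serializer internally and calls model.serialize(...)
serializers.save_npz('model.npz', model)
# load_npz builds a deserializer and restores the parameters in place
serializers.load_npz('model.npz', model)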
'The transition in the forward and backward algorithms is represented as a matrix. See also https://blog.wtf.sg/2014/10/06/connectionist-temporal-classification-ctc-with-theano/'
def recurrence_relation(self, label, path_length, max_length, dtype, xp):
    (batch, lab) = label.shape
    repeat_mask = xp.ones((batch, lab * 2 + 1))
    repeat_mask[:, 1::2] = (
        label != xp.take(
            label,
            (xp.arange(-1, lab - 1) % lab) +
            xp.arange(0, batch * lab, lab)[:, None]))
    repeat_mask[:, 1] = 1
    rr = (
        xp.eye(max_length, dtype=dtype)[None, :] +
        xp.eye(max_length, k=1, dtype=dtype)[None, :] +
        xp.eye(max_length, k=2, dtype=dtype) *
        (xp.arange(max_length, dtype=dtype) % dtype(2))[None, :] *
        repeat_mask[:, None])
    return self.log_matrix(
        rr * (path_length[:, None] > xp.arange(max_length))[..., None], xp)
'Callback function invoked before forward propagation. Args: function(~chainer.Function): Function object to which the function hook is registered. in_data(tuple of numpy.ndarray or tuple of cupy.ndarray): Input data of forward propagation.'
def forward_preprocess(self, function, in_data):
pass
'Callback function invoked after forward propagation. Args: function(~chainer.Function): Function object to which the function hook is registered. in_data(tuple of numpy.ndarray or tuple of cupy.ndarray): Input data of forward propagation.'
def forward_postprocess(self, function, in_data):
pass
'Callback function invoked before backward propagation. Args: function(~chainer.Function): Function object to which the function hook is registered. in_data(tuple of numpy.ndarray or tuple of cupy.ndarray): Input data of forward propagation. out_grad(tuple of numpy.ndarray or tuple of cupy.ndarray): Gradient data of backward propagation.'
def backward_preprocess(self, function, in_data, out_grad):
pass
'Callback function invoked after backward propagation. Args: function(~chainer.Function): Function object to which the function hook is registered. in_data(tuple of numpy.ndarray or tuple of cupy.ndarray): Input of forward propagation. out_grad(tuple of numpy.ndarray or tuple of cupy.ndarray): Gradient data of backward propagation.'
def backward_postprocess(self, function, in_data, out_grad):
pass
'show(file=sys.stdout) Prints the global config entries. The entries are sorted in the lexicographical order of the entry name. Args: file: Output file-like object.'
def show(self, file=sys.stdout):
    keys = sorted(self.__dict__)
    _print_attrs(self, keys, file)
'show(file=sys.stdout) Prints the config entries. The entries are sorted in the lexicographical order of the entry names. Args: file: Output file-like object. .. admonition:: Example You can easily print the list of configurations used in the current thread. >>> chainer.config.show() # doctest: +SKIP debug False enable_backprop True train True type_check True'
def show(self, file=sys.stdout):
    keys = sorted(set(self._global.__dict__) | set(self._local.__dict__))
    _print_attrs(self, keys, file)
'Makes this reporter object current.'
def __enter__(self):
_reporters.append(self)
'Recovers the previous reporter object to the current.'
def __exit__(self, exc_type, exc_value, traceback):
_reporters.pop()
'Creates a scope to report observed values to ``observation``. This is a context manager to be passed to ``with`` statements. In this scope, the observation dictionary is changed to the given one. It also makes this reporter object current. Args: observation (dict): Observation dictionary. All observations reported inside of the ``with`` statement are written to this dictionary.'
@contextlib.contextmanager def scope(self, observation):
    old = self.observation
    self.observation = observation
    self.__enter__()
    yield
    self.__exit__(None, None, None)
    self.observation = old
'Registers an observer of values. Observer defines a scope of names for observed values. Values observed with the observer are registered with names prefixed by the observer name. Args: name (str): Name of the observer. observer: The observer object. Note that the reporter distinguishes the observers by their object ids (i.e., ``id(owner)``), rather than the object equality.'
def add_observer(self, name, observer):
self._observer_names[id(observer)] = name
'Registers multiple observers at once. This is a convenient method to register multiple objects at once. Args: prefix (str): Prefix of each name of observers. observers: Iterator of name and observer pairs.'
def add_observers(self, prefix, observers):
for (name, observer) in observers: self._observer_names[id(observer)] = (prefix + name)
'Reports observed values. The values are written with the key, prefixed by the name of the observer object if given. .. note:: As of v2.0.0, if a value is of type :class:`~chainer.Variable`, the variable is copied without preserving the computational graph and the new variable object purged from the graph is stored to the observer. This behavior can be changed by setting ``chainer.config.keep_graph_on_report`` to ``True``. Args: values (dict): Dictionary of observed values. observer: Observer object. Its object ID is used to retrieve the observer name, which is used as the prefix of the registration name of the observed value.'
def report(self, values, observer=None):
    if not configuration.config.keep_graph_on_report:
        values = {k: _copy_variable(v) for k, v in six.iteritems(values)}
    if observer is not None:
        observer_id = id(observer)
        if observer_id not in self._observer_names:
            raise KeyError(
                'Given observer is not registered to the reporter.')
        observer_name = self._observer_names[observer_id]
        for key, value in six.iteritems(values):
            name = '%s/%s' % (observer_name, key)
            self.observation[name] = value
    else:
        self.observation.update(values)
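A short sketch of how the observer machinery above is typically used; the observer name and the reported value are illustrative only:

import chainer
import chainer.links as L

reporter = chainer.Reporter()
observer = L.Linear(3, 3)
reporter.add_observer('main', observer)   # values from this link get the 'main/' prefix
with reporter:
    chainer.report({'loss': 0.5}, observer)
print(reporter.observation)               # {'main/loss': 0.5}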
'Adds a scalar value. Args: value: Scalar value to accumulate. It is either a NumPy scalar or a zero-dimensional array (on CPU or GPU).'
def add(self, value):
    with _get_device(value):
        self._x += value
        self._x2 += value * value
        self._n += 1
'Computes the mean.'
def compute_mean(self):
    x, n = self._x, self._n
    with _get_device(x):
        return x / n
'Computes and returns the mean and standard deviation values. Returns: tuple: Mean and standard deviation values.'
def make_statistics(self):
    x, n = self._x, self._n
    xp = cuda.get_array_module(x)
    with _get_device(x):
        mean = x / n
        var = self._x2 / n - mean * mean
        std = xp.sqrt(var)
        return mean, std
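For reference, a tiny sketch of how this accumulator behaves (the values are arbitrary):

import numpy
from chainer.reporter import Summary

summary = Summary()
for v in [1.0, 2.0, 3.0]:
    summary.add(numpy.float32(v))
mean, std = summary.make_statistics()  # mean == 2.0, std == sqrt(2/3)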
'Adds a dictionary of scalars. Args: d (dict): Dictionary of scalars to accumulate. Only elements of scalars, zero-dimensional arrays, and variables of zero-dimensional arrays are accumulated.'
def add(self, d):
    summaries = self._summaries
    for k, v in six.iteritems(d):
        if isinstance(v, variable.Variable):
            v = v.data
        if numpy.isscalar(v) or getattr(v, 'ndim', -1) == 0:
            summaries[k].add(v)
'Creates a dictionary of mean values. It returns a single dictionary that holds a mean value for each entry added to the summary. Returns: dict: Dictionary of mean values.'
def compute_mean(self):
return {name: summary.compute_mean() for (name, summary) in six.iteritems(self._summaries)}
'Creates a dictionary of statistics. It returns a single dictionary that holds mean and standard deviation values for every entry added to the summary. For an entry of name ``\'key\'``, these values are added to the dictionary by names ``\'key\'`` and ``\'key.std\'``, respectively. Returns: dict: Dictionary of statistics of all entries.'
def make_statistics(self):
    stats = {}
    for name, summary in six.iteritems(self._summaries):
        mean, std = summary.make_statistics()
        stats[name] = mean
        stats[name + '.std'] = std
    return stats
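And a matching sketch for the dictionary variant, again with made-up numbers:

from chainer.reporter import DictSummary

summary = DictSummary()
summary.add({'loss': 0.5, 'accuracy': 0.8})
summary.add({'loss': 0.3, 'accuracy': 0.9})
summary.compute_mean()     # {'loss': 0.4, 'accuracy': 0.85}
summary.make_statistics()  # also contains 'loss.std' and 'accuracy.std'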
'Ordered dictionary of registered function hooks. Contrary to ``chainer.thread_local.function_hooks``, which registers its elements to all functions, function hooks in this property are specific to this function.'
@property def local_function_hooks(self):
    if self._local_function_hooks is None:
        self._local_function_hooks = collections.OrderedDict()
    return self._local_function_hooks
'Short text that represents the function. The default implementation returns its type name. Each function should override it to give more information.'
@property def label(self):
return self.__class__.__name__
'A tuple of the retained output arrays. This property is mainly used by :class:`Function`. Users basically do not have to use this property; use :meth:`get_retained_outputs` instead.'
@property def output_data(self):
    if self._retained_output_data is None:
        raise RuntimeError('retained output data is gone')
    out_data = [None] * len(self.outputs)
    for index, data in six.moves.zip(self._output_indexes_to_retain,
                                     self._retained_output_data):
        out_data[index] = data
    return tuple(out_data)
'Computes output variables and grows the computational graph. Basic behavior is expressed in the documentation of :class:`FunctionNode`. .. note:: If the :data:`~Variable.data` attribute of input variables exist on a GPU device, that device is made current before calling :meth:`forward`, so implementors do not need to take care of device selection in most cases. Args: inputs: Tuple of input variables. Each element can be either :class:`Variable`, :class:`numpy.ndarray`, or :class:`cupy.ndarray`. If the element is an ndarray, it is automatically wrapped with :class:`Variable`. Returns: A tuple of output :class:`Variable` objects.'
def apply(self, inputs):
input_vars = [(x if isinstance(x, variable.Variable) else variable.Variable(x, requires_grad=False)) for x in inputs] in_data = tuple([x.data for x in input_vars]) requires_grad = any([x.requires_grad for x in input_vars]) if chainer.is_debug(): self.stack = traceback.extract_stack() if configuration.config.type_check: self._check_data_type_forward(in_data) hooks = chainer.get_function_hooks() if (self._n_local_function_hooks > 0): hooks = collections.OrderedDict(hooks) hooks.update(self.local_function_hooks) hooks = hooks.values() for hook in hooks: hook.forward_preprocess(self, in_data) with cuda.get_device_from_array(*in_data): self._input_indexes_to_retain = None self._output_indexes_to_retain = None outputs = self.forward(in_data) assert (type(outputs) is tuple) for hook in hooks: hook.forward_postprocess(self, in_data) if chainer.is_debug(): if any((((out.dtype.kind == 'f') and cuda.get_array_module(out).isnan(out).any()) for out in outputs)): msg = 'NaN is detected on forward computation' raise RuntimeError(msg) ret = tuple([variable.Variable(y, requires_grad=requires_grad) for y in outputs]) if configuration.config.enable_backprop: self.rank = (max([x.rank for x in input_vars]) if input_vars else 0) for (i, y) in enumerate(ret): y.creator_node = self self.inputs = tuple([x.node for x in input_vars]) self.outputs = tuple([weakref.ref(y.node) for y in ret]) if (self._input_indexes_to_retain is not None): for index in self._input_indexes_to_retain: input_vars[index].retain_data() if (self._output_indexes_to_retain is not None): retained_data = [] for index in self._output_indexes_to_retain: ret[index].retain_data() retained_data.append(outputs[index]) self._retained_output_data = tuple(retained_data) return ret
'Checks types of input data before forward propagation. This method is called before :meth:`forward` and validates the types of input variables using :ref:`the type checking utilities <type-check-utils>`. Args: in_types (~chainer.utils.type_check.TypeInfoTuple): The type information of input variables for :meth:`forward`.'
def check_type_forward(self, in_types):
pass
'Computes the output arrays from the input arrays. It delegates the procedure to :meth:`forward_cpu` or :meth:`forward_gpu` by default. Which of them this method selects is determined by the type of input arrays. Implementations of :class:`FunctionNode` must implement either CPU/GPU methods or this method. Args: inputs: Tuple of input array(s). Returns: Tuple of output array(s). .. warning:: Implementations of :class:`FunctionNode` must take care that the return value must be a tuple even if it returns only one array.'
def forward(self, inputs):
    assert len(inputs) > 0
    if isinstance(inputs[0], cuda.ndarray):
        return self.forward_gpu(inputs)
    return self.forward_cpu(inputs)
'Computes the output arrays from the input NumPy arrays. Args: inputs: Tuple of input :class:`numpy.ndarray` objects. Returns: Tuple of output arrays. Each element can be NumPy or CuPy arrays. .. warning:: Implementation of :class:`FunctionNode` must take care that the return value must be a tuple even if it returns only one array.'
def forward_cpu(self, inputs):
raise NotImplementedError
'Computes the output arrays from the input CuPy arrays. Args: inputs: Tuple of input :class:`cupy.ndarray` objects. Returns: Tuple of output arrays. Each element can be NumPy or CuPy arrays. .. warning:: Implementation of :class:`FunctionNode` must take care that the return value must be a tuple even if it returns only one array.'
def forward_gpu(self, inputs):
raise NotImplementedError
'Lets specified input variable nodes keep data arrays. By calling this method from :meth:`forward`, the function node can specify which inputs are required for backprop. The input variables with retained arrays can be obtained by :meth:`get_retained_inputs` from :meth:`backward`. Unlike :class:`Function`, the function node **DOES NOT** keep input arrays by default. If you want to keep some or all input arrays, do not forget to call this method. Note that **this method must not be called from outside of the forward method.** Args: indexes (iterable of int): Indexes of input variables that the function requires for backprop.'
def retain_inputs(self, indexes):
self._input_indexes_to_retain = indexes
'Lets specified output variable nodes keep data arrays. By calling this method from :meth:`forward`, the function node can specify which outputs are required for backprop. If this method is not called, no output variable is marked to keep its data array at the point of returning from :meth:`apply`. The output variables with retained arrays can be obtained by :meth:`get_retained_outputs` from :meth:`backward`. .. note:: It is recommended to use this method if the function requires some or all output arrays in backprop. The function can also use output arrays just by keeping references to them directly, though this might affect the performance of later function applications to the output variables. Note that **this method must not be called from outside of the forward method.** Args: indexes (iterable of int): Indexes of output variables that the function requires for backprop.'
def retain_outputs(self, indexes):
self._output_indexes_to_retain = indexes
'Computes gradients w.r.t. specified inputs given output gradients. This method is used to compute one step of the backpropagation corresponding to the forward computation of this function node. Given the gradients w.r.t. output variables, this method computes the gradients w.r.t. specified input variables. Note that this method does not need to compute any input gradients not specified by ``target_input_indices``. Unlike :meth:`Function.backward`, gradients are given as :class:`Variable` objects and this method itself has to return input gradients as :class:`Variable` objects. It enables the function node to return the input gradients with the full computational history, in which case it supports *differentiable backpropagation* or *higher-order differentiation*. The default implementation returns ``None`` s, which means the function is not differentiable. Args: target_input_indexes (tuple of int): Indices of the input variables w.r.t. which the gradients are required. It is guaranteed that this tuple contains at least one element. grad_outputs (tuple of Variable): Gradients w.r.t. the output variables. If the gradient w.r.t. an output variable is not given, the corresponding element is ``None``. Returns: Tuple of variables that represent the gradients w.r.t. specified input variables. The length of the tuple can be same as either ``len(target_input_indexes)`` or the number of inputs. In the latter case, the elements not specified by ``target_input_indexes`` will be discarded. .. seealso:: :meth:`backward_accumulate` provides an alternative interface that allows you to implement the backward computation fused with the gradient accumulation.'
def backward(self, target_input_indexes, grad_outputs):
return ((None,) * len(target_input_indexes))
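To illustrate the interface described above, here is a minimal differentiable FunctionNode sketch; it is not part of the library, and the ``Square`` name is made up:

import numpy
import chainer
from chainer import function_node


class Square(function_node.FunctionNode):

    def forward(self, inputs):
        x, = inputs
        self.retain_inputs((0,))         # keep x so backward can use it
        return x * x,

    def backward(self, target_input_indexes, grad_outputs):
        x, = self.get_retained_inputs()  # retained input, as a Variable
        gy, = grad_outputs
        return 2 * x * gy,               # the gradient is itself a Variable


x = chainer.Variable(numpy.array([1., 2.], dtype=numpy.float32))
y, = Square().apply((x,))
y.grad = numpy.ones_like(y.data)
y.backward()                             # x.grad == [2., 4.]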
'Computes gradients w.r.t. specified inputs and accumulates them. This method provides a way to fuse the backward computation and the gradient accumulations in the case that the multiple functions are applied to the same variable. Users have to override either of this method or :meth:`backward`. It is often simpler to implement :meth:`backward` and is recommended if you do not need to provide efficient gradient accumulation. Args: target_input_indexes (tuple of int): Indices of the input variables w.r.t. which the gradients are required. It is guaranteed that this tuple contains at least one element. grad_outputs (tuple of Variable): Gradients w.r.t. the output variables. If the gradient w.r.t. an output variable is not given, the corresponding element is ``None``. grad_inputs (tuple of Variable): Gradients w.r.t. the input variables specified by ``target_input_indexes``. These values are computed by other computation paths. If there is no gradient value existing for the variable, the corresponding element is ``None``. See also the note below. Returns: Tuple of variables that represent the gradients w.r.t. specified input variables. Unlike :meth:`backward`, the length of the tuple **must** be same as that of ``target_input_indices``. .. note:: When the same variable is passed to the multiple input arguments of a function, only the first position of ``grad_inputs`` corresponding to these input arguments may contain the gradient variable corresponding to that input variable, and other entries are set to ``None``. This is an implementation-detail convention to avoid the complication of correctly accumulating gradients in such a case. This behavior might be changed in a future version.'
def backward_accumulate(self, target_input_indexes, grad_outputs, grad_inputs):
    gxs = self.backward(target_input_indexes, grad_outputs)
    len_gxs = len(gxs)
    if len_gxs == len(self.inputs):
        gxs = tuple([gxs[i] for i in target_input_indexes])
    elif len_gxs != len(target_input_indexes):
        raise ValueError(
            'number of gradients returned by %s (%s) is incorrect.'
            % (self._impl_name, self.label))
    return tuple([gx if g_input is None else
                  g_input if gx is None else
                  gx + g_input
                  for gx, g_input in six.moves.zip(gxs, grad_inputs)])
'Returns a tuple of retained input variables. This method is used to retrieve the input variables retained in :meth:`forward`. Returns: A tuple of retained input variables.'
def get_retained_inputs(self):
    inputs = self.inputs
    return tuple([inputs[index].get_variable()
                  for index in self._input_indexes_to_retain])
'Returns a tuple of retained output variables. This method is used to retrieve the output variables retained in :meth:`forward`. Returns: A tuple of retained output variables. .. note:: This method does a tricky thing to support the case of an output node garbage-collected before this method is called; in this case, this method creates a fresh variable node that acts as an output node of the function node.'
def get_retained_outputs(self):
    ret = []
    outputs = self.outputs
    new_outputs = list(outputs)
    outputs_modified = False
    for index, data in six.moves.zip(self._output_indexes_to_retain,
                                     self._retained_output_data):
        output = outputs[index]()
        if output is None:
            output_var = variable.Variable(data)
            output_var.creator_node = self
            new_outputs[index] = weakref.ref(output_var)
            outputs_modified = True
        else:
            output_var = output.get_variable()
        ret.append(output_var)
    if outputs_modified:
        self.outputs = tuple(new_outputs)
    return ret
'Purges in/out nodes and this function node itself from the graph.'
def unchain(self):
    for y in self.outputs:
        y_ref = y()
        if y_ref is not None:
            y_ref.unchain()
    self.inputs = None
'Registers a function hook. Args: hook (~chainer.function.FunctionHook): Function hook to be registered. name (str): Name of the function hook. The name must be unique among function hooks registered to this function. If ``None``, the default name of the function hook is used.'
def add_hook(self, hook, name=None):
    if not isinstance(hook, function_hook.FunctionHook):
        raise TypeError('Hook must be of type FunctionHook')
    if name is None:
        name = hook.name
    hooks = self.local_function_hooks
    if name in hooks:
        raise KeyError('Hook %s already exists' % name)
    hooks[name] = hook
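A quick sketch of using a built-in hook; this assumes ``chainer.function_hooks.TimerHook``, registered globally through a ``with`` block rather than through ``add_hook``:

import numpy
import chainer
import chainer.functions as F
from chainer import function_hooks

x = chainer.Variable(numpy.zeros((2, 3), dtype=numpy.float32))
with function_hooks.TimerHook() as hook:  # the hook sees every function call in this block
    F.relu(x)
print(hook.total_time())                  # accumulated forward time in seconds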
'Unregisters the function hook. Args: name (str): The name of the function hook to be unregistered.'
def delete_hook(self, name):
del self.local_function_hooks[name]
'Computes the loss value for given input and ground truth labels. Args: x (~chainer.Variable): Input of the weight matrix multiplication. t (~chainer.Variable): Batch of ground truth labels. reduce (str): Reduction option. Its value must be either ``\'sum\'`` or ``\'no\'``. Otherwise, :class:`ValueError` is raised. Returns: ~chainer.Variable: Loss value.'
def __call__(self, x, t, reduce='sum'):
return negative_sampling.negative_sampling(x, t, self.W, self.sampler.sample, self.sample_size, reduce=reduce)
'Makes a Huffman tree from a dictionary containing word counts. This method creates a binary Huffman tree, which is required for :class:`BinaryHierarchicalSoftmax`. For example, ``{0: 8, 1: 5, 2: 6, 3: 4}`` is converted to ``((3, 1), (2, 0))``. Args: word_counts (dict of int key and int or float values): Dictionary representing counts of words. Returns: Binary Huffman tree with tuples and keys of ``word_counts``.'
@staticmethod def create_huffman_tree(word_counts):
    if len(word_counts) == 0:
        raise ValueError('Empty vocabulary')
    q = six.moves.queue.PriorityQueue()
    for uid, (w, c) in enumerate(six.iteritems(word_counts)):
        q.put((c, uid, w))
    while q.qsize() >= 2:
        (count1, id1, word1) = q.get()
        (count2, id2, word2) = q.get()
        count = count1 + count2
        tree = (word1, word2)
        q.put((count, min(id1, id2), tree))
    return q.get()[2]
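The example from the docstring, spelled out as runnable code:

from chainer.links import BinaryHierarchicalSoftmax

tree = BinaryHierarchicalSoftmax.create_huffman_tree(
    {0: 8, 1: 5, 2: 6, 3: 4})
print(tree)  # ((3, 1), (2, 0))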
'Computes the loss value for given input and ground truth labels. Args: x (~chainer.Variable): Input to the classifier at each node. t (~chainer.Variable): Batch of ground truth labels. Returns: ~chainer.Variable: Loss value.'
def __call__(self, x, t):
    f = copy.copy(self._func)
    return f(x, t, self.W)
'Computes a state that maximizes a joint probability. Args: xs (list of Variable): Input vector for each label. Returns: tuple: A tuple of :class:`~chainer.Variable` representing each log-likelihood and a list representing the argmax path. .. seealso:: See :func:`~chainer.functions.argmax_crf1d` for more detail.'
def argmax(self, xs):
return crf1d.argmax_crf1d(self.cost, xs)
'Computes the loss value for given input and ground truth labels. Args: x (~chainer.Variable): Input of the weight matrix multiplication. t (~chainer.Variable): Batch of ground truth labels. Returns: ~chainer.Variable: Loss value.'
def __call__(self, x, t):
    batch_size = x.shape[0]
    if hasattr(self, 'sample_data'):
        sample_data = self.sample_data
    else:
        shape = (batch_size, self.sample_size)
        sample_data = self.sampler.sample(shape)
    samples = variable.Variable(sample_data)
    return black_out.black_out(x, t, self.W, samples)
'__call__(self, inputs, outputs, disable=()) Executes a sub-network of the network. This function acts as an interpreter of the network definition for Caffe. On execution, it interprets each layer one by one, and if the bottom blobs are already computed, then emulates the layer and stores output blobs as :class:`~chainer.Variable` objects. .. warning:: ``train`` argument is not supported anymore since v2. Instead, use ``chainer.using_config(\'train\', train)``. See :func:`chainer.using_config`. Args: inputs (dict): A dictionary whose key-value pairs indicate initial correspondences between blob names and :class:`~chainer.Variable` objects. outputs (Iterable): A list of blob names whose corresponding :class:`~chainer.Variable` objects are returned. disable (Iterable): A list of layer names that will be ignored during the forward computation. Returns: tuple: A tuple of output :class:`~chainer.Variable` objects corresponding to elements of the `outputs` argument.'
def __call__(self, inputs, outputs, disable=(), **kwargs):
argument.check_unexpected_kwargs(kwargs, train='train argument is not supported anymore. Use chainer.using_config') argument.assert_kwargs_empty(kwargs) variables = dict(inputs) for (func_name, bottom, top) in self.layers: if ((func_name in disable) or (func_name not in self.forwards) or any(((blob not in variables) for blob in bottom))): continue func = self.forwards[func_name] input_vars = tuple((variables[blob] for blob in bottom)) output_vars = func(*input_vars) if (not isinstance(output_vars, collections.Iterable)): output_vars = (output_vars,) for (var, name) in zip(output_vars, top): variables[name] = var self.variables = variables return tuple((variables[blob] for blob in outputs))
'Applies the simplified dropconnect layer. Args: x (chainer.Variable or :class:`numpy.ndarray` or cupy.ndarray): Batch of input vectors. Its first dimension ``n`` is assumed to be the *minibatch dimension*. train (bool): If ``True``, executes simplified dropconnect. Otherwise, simplified dropconnect link works as a linear unit. mask (None or chainer.Variable or numpy.ndarray or cupy.ndarray): If ``None``, randomized simplified dropconnect mask is generated. Otherwise, The mask must be ``(n, M, N)`` or ``(M, N)`` shaped array, and `use_batchwise_mask` is ignored. Main purpose of this option is debugging. `mask` array will be used as a dropconnect mask. use_batchwise_mask (bool): If ``True``, dropped connections depend on each sample in mini-batch. Returns: ~chainer.Variable: Output of the simplified dropconnect layer.'
def __call__(self, x, train=True, mask=None, use_batchwise_mask=True):
    if self.W.data is None:
        self._initialize_params(x.size // len(x.data))
    if mask is not None and 'mask' not in self.__dict__:
        self.add_persistent('mask', mask)
    return simplified_dropconnect.simplified_dropconnect(
        x, self.W, self.b, self.ratio, train, mask, use_batchwise_mask)
'Applies the parametric ReLU activation function. Args: x (~chainer.Variable): Input variable. Returns: ~chainer.Variable: Output of the parametric ReLU function.'
def __call__(self, x):
return prelu.prelu(x, self.W)
'Applies the maxout layer. Args: x (~chainer.Variable): Batch of input vectors. Returns: ~chainer.Variable: Output of the maxout layer.'
def __call__(self, x):
    y = self.linear(x)
    return maxout.maxout(y, self.pool_size)
'Converts a pre-trained caffemodel to a chainer model. Args: path_caffemodel (str): Path of the pre-trained caffemodel. path_npz (str): Path of the converted chainer model.'
@classmethod def convert_caffemodel_to_npz(cls, path_caffemodel, path_npz, n_layers=50):
    from chainer.links.caffe.caffe_function import CaffeFunction
    caffemodel = CaffeFunction(path_caffemodel)
    chainermodel = cls(pretrained_model=None, n_layers=n_layers)
    if n_layers == 50:
        _transfer_resnet50(caffemodel, chainermodel)
    elif n_layers == 101:
        _transfer_resnet101(caffemodel, chainermodel)
    elif n_layers == 152:
        _transfer_resnet152(caffemodel, chainermodel)
    else:
        raise ValueError(
            'The n_layers argument should be either 50, 101, or 152, '
            'but {} was given.'.format(n_layers))
    npz.save_npz(path_npz, chainermodel, compression=False)
'__call__(self, x, layers=[\'prob\']) Computes all the feature maps specified by ``layers``. .. warning:: ``test`` argument is not supported anymore since v2. Instead, use ``chainer.using_config(\'train\', train)``. See :func:`chainer.using_config`. Args: x (~chainer.Variable): Input variable. layers (list of str): The list of layer names you want to extract. Returns: Dictionary of ~chainer.Variable: A dictionary in which the key contains the layer name and the value contains the corresponding feature map variable.'
def __call__(self, x, layers=['prob'], **kwargs):
argument.check_unexpected_kwargs(kwargs, test='test argument is not supported anymore. Use chainer.using_config') argument.assert_kwargs_empty(kwargs) h = x activations = {} target_layers = set(layers) for (key, funcs) in self.functions.items(): if (len(target_layers) == 0): break for func in funcs: h = func(h) if (key in target_layers): activations[key] = h target_layers.remove(key) return activations
'extract(self, images, layers=[\'pool5\'], size=(224, 224)) Extracts all the feature maps of given images. The difference of directly executing ``__call__`` is that it directly accepts images as an input and automatically transforms them to a proper variable. That is, it is also interpreted as a shortcut method that implicitly calls ``prepare`` and ``__call__`` functions. .. warning:: ``test`` and ``volatile`` arguments are not supported anymore since v2. Instead, use ``chainer.using_config(\'train\', train)`` and ``chainer.using_config(\'enable_backprop\', not volatile)`` respectively. See :func:`chainer.using_config`. Args: images (iterable of PIL.Image or numpy.ndarray): Input images. layers (list of str): The list of layer names you want to extract. size (pair of ints): The resolution of resized images used as an input of CNN. All the given images are not resized if this argument is ``None``, but the resolutions of all the images should be the same. Returns: Dictionary of ~chainer.Variable: A directory in which the key contains the layer name and the value contains the corresponding feature map variable.'
def extract(self, images, layers=['pool5'], size=(224, 224), **kwargs):
argument.check_unexpected_kwargs(kwargs, test='test argument is not supported anymore. Use chainer.using_config', volatile='volatile argument is not supported anymore. Use chainer.using_config') argument.assert_kwargs_empty(kwargs) x = concat_examples([prepare(img, size=size) for img in images]) x = Variable(self.xp.asarray(x)) return self(x, layers=layers)
'Computes all the probabilities of given images. Args: images (iterable of PIL.Image or numpy.ndarray): Input images. oversample (bool): If ``True``, it averages results across center, corners, and mirrors. Otherwise, it uses only the center. Returns: ~chainer.Variable: Output that contains the class probabilities of given images.'
def predict(self, images, oversample=True):
x = concat_examples([prepare(img, size=(256, 256)) for img in images]) if oversample: x = imgproc.oversample(x, crop_dims=(224, 224)) else: x = x[:, :, 16:240, 16:240] with function.no_backprop_mode(): x = Variable(self.xp.asarray(x)) y = self(x, layers=['prob'])['prob'] if oversample: n = (y.data.shape[0] // 10) y_shape = y.data.shape[1:] y = reshape(y, ((n, 10) + y_shape)) y = (sum(y, axis=1) / 10) return y
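An end-to-end sketch for the extract/predict pair above, assuming a pretrained ``ResNet50Layers`` model is already set up and ``cat.jpg`` exists locally:

from PIL import Image
import chainer.links as L

model = L.ResNet50Layers()
img = Image.open('cat.jpg')                             # placeholder image file
pool5 = model.extract([img], layers=['pool5'])['pool5']
probs = model.predict([img], oversample=True)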
'Converts a pre-trained caffemodel to a chainer model. Args: path_caffemodel (str): Path of the pre-trained caffemodel. path_npz (str): Path of the converted chainer model.'
@classmethod def convert_caffemodel_to_npz(cls, path_caffemodel, path_npz):
    from chainer.links.caffe.caffe_function import CaffeFunction
    caffemodel = CaffeFunction(path_caffemodel)
    npz.save_npz(path_npz, caffemodel, compression=False)
'__call__(self, x, layers=[\'prob\']) Computes all the feature maps specified by ``layers``. .. warning:: ``test`` argument is not supported anymore since v2. Instead, use ``chainer.using_config(\'train\', train)``. See :func:`chainer.using_config`. Args: x (~chainer.Variable): Input variable. layers (list of str): The list of layer names you want to extract. Returns: Dictionary of ~chainer.Variable: A dictionary in which the key contains the layer name and the value contains the corresponding feature map variable.'
def __call__(self, x, layers=['prob'], **kwargs):
argument.check_unexpected_kwargs(kwargs, test='test argument is not supported anymore. Use chainer.using_config') argument.assert_kwargs_empty(kwargs) h = x activations = {} target_layers = set(layers) for (key, funcs) in self.functions.items(): if (len(target_layers) == 0): break for func in funcs: h = func(h) if (key in target_layers): activations[key] = h target_layers.remove(key) return activations
'extract(self, images, layers=[\'fc7\'], size=(224, 224)) Extracts all the feature maps of given images. The difference of directly executing ``__call__`` is that it directly accepts images as an input and automatically transforms them to a proper variable. That is, it is also interpreted as a shortcut method that implicitly calls ``prepare`` and ``__call__`` functions. .. warning:: ``test`` and ``volatile`` arguments are not supported anymore since v2. Instead, use ``chainer.using_config(\'train\', train)`` and ``chainer.using_config(\'enable_backprop\', not volatile)`` respectively. See :func:`chainer.using_config`. Args: images (iterable of PIL.Image or numpy.ndarray): Input images. layers (list of str): The list of layer names you want to extract. size (pair of ints): The resolution of resized images used as an input of CNN. All the given images are not resized if this argument is ``None``, but the resolutions of all the images should be the same. Returns: Dictionary of ~chainer.Variable: A directory in which the key contains the layer name and the value contains the corresponding feature map variable.'
def extract(self, images, layers=['fc7'], size=(224, 224), **kwargs):
argument.check_unexpected_kwargs(kwargs, test='test argument is not supported anymore. Use chainer.using_config', volatile='volatile argument is not supported anymore. Use chainer.using_config') argument.assert_kwargs_empty(kwargs) x = concat_examples([prepare(img, size=size) for img in images]) x = Variable(self.xp.asarray(x)) return self(x, layers=layers)
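A similar sketch for the VGG variant, assuming ``chainer.links.VGG16Layers`` and a local image file:

from PIL import Image
import chainer.links as L

model = L.VGG16Layers()
img = Image.open('sample.jpg')                          # placeholder image file
fc7 = model.extract([img], layers=['fc7'])['fc7']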
'Computes all the probabilities of given images. Args: images (iterable of PIL.Image or numpy.ndarray): Input images. oversample (bool): If ``True``, it averages results across center, corners, and mirrors. Otherwise, it uses only the center. Returns: ~chainer.Variable: Output that contains the class probabilities of given images.'
def predict(self, images, oversample=True):
x = concat_examples([prepare(img, size=(256, 256)) for img in images]) if oversample: x = imgproc.oversample(x, crop_dims=(224, 224)) else: x = x[:, :, 16:240, 16:240] with function.no_backprop_mode(): x = Variable(self.xp.asarray(x)) y = self(x, layers=['prob'])['prob'] if oversample: n = (y.data.shape[0] // 10) y_shape = y.data.shape[1:] y = reshape(y, ((n, 10) + y_shape)) y = (sum(y, axis=1) / 10) return y
'Converts a pre-trained caffemodel to a chainer model. Args: path_caffemodel (str): Path of the pre-trained caffemodel. path_npz (str): Path of the converted chainer model.'
@classmethod def convert_caffemodel_to_npz(cls, path_caffemodel, path_npz):
    from chainer.links.caffe.caffe_function import CaffeFunction
    caffemodel = CaffeFunction(path_caffemodel)
    chainermodel = cls(pretrained_model=None)
    _transfer_googlenet(caffemodel, chainermodel)
    npz.save_npz(path_npz, chainermodel, compression=False)
'__call__(self, x, layers=[\'prob\']) Computes all the feature maps specified by ``layers``. .. warning:: ``train`` argument is not supported anymore since v2. Instead, use ``chainer.using_config(\'train\', train)``. See :func:`chainer.using_config`. Args: x (~chainer.Variable): Input variable. It should be prepared by the ``prepare`` function. layers (list of str): The list of layer names you want to extract. Returns: Dictionary of ~chainer.Variable: A dictionary in which the key contains the layer name and the value contains the corresponding feature map variable.'
def __call__(self, x, layers=['prob'], **kwargs):
argument.check_unexpected_kwargs(kwargs, train='train argument is not supported anymore. Use chainer.using_config') argument.assert_kwargs_empty(kwargs) h = x activations = {} inception_4a_cache = None inception_4d_cache = None target_layers = set(layers) for (key, funcs) in self.functions.items(): if (len(target_layers) == 0): break if (key == 'loss1_fc2'): h = inception_4a_cache elif (key == 'loss2_fc2'): h = inception_4d_cache for func in funcs: h = func(h) if (key in target_layers): activations[key] = h target_layers.remove(key) if (key == 'inception_4a'): inception_4a_cache = h elif (key == 'inception_4d'): inception_4d_cache = h return activations
'extract(self, images, layers=[\'pool5\'], size=(224, 224)) Extracts all the feature maps of given images. The difference of directly executing ``__call__`` is that it directly accepts images as an input and automatically transforms them to a proper variable. That is, it is also interpreted as a shortcut method that implicitly calls ``prepare`` and ``__call__`` functions. .. warning:: ``train`` and ``volatile`` arguments are not supported anymore since v2. Instead, use ``chainer.using_config(\'train\', train)`` and ``chainer.using_config(\'enable_backprop\', not volatile)`` respectively. See :func:`chainer.using_config`. Args: images (iterable of PIL.Image or numpy.ndarray): Input images. layers (list of str): The list of layer names you want to extract. size (pair of ints): The resolution of resized images used as an input of CNN. All the given images are not resized if this argument is ``None``, but the resolutions of all the images should be the same. Returns: Dictionary of ~chainer.Variable: A directory in which the key contains the layer name and the value contains the corresponding feature map variable.'
def extract(self, images, layers=['pool5'], size=(224, 224), **kwargs):
argument.check_unexpected_kwargs(kwargs, train='train argument is not supported anymore. Use chainer.using_config', volatile='volatile argument is not supported anymore. Use chainer.using_config') argument.assert_kwargs_empty(kwargs) x = concat_examples([prepare(img, size=size) for img in images]) x = Variable(self.xp.asarray(x)) return self(x, layers=layers)
'Computes all the probabilities of given images. Args: images (iterable of PIL.Image or numpy.ndarray): Input images. oversample (bool): If ``True``, it averages results across center, corners, and mirrors. Otherwise, it uses only the center. Returns: ~chainer.Variable: Output that contains the class probabilities of given images.'
def predict(self, images, oversample=True):
x = concat_examples([prepare(img, size=(256, 256)) for img in images]) if oversample: x = imgproc.oversample(x, crop_dims=(224, 224)) else: x = x[:, :, 16:240, 16:240] with function.no_backprop_mode(): x = Variable(self.xp.asarray(x)) y = self(x, layers=['prob'])['prob'] if oversample: n = (y.data.shape[0] // 10) y_shape = y.data.shape[1:] y = reshape(y, ((n, 10) + y_shape)) y = average(y, axis=1) return y
'Computes the loss value for an input and label pair. It also computes accuracy and stores it to the attribute. Args: args (list of ~chainer.Variable): Input minibatch. kwargs (dict of ~chainer.Variable): Input minibatch. When ``label_key`` is ``int``, the corresponding element in ``args`` is treated as ground truth labels. And when it is ``str``, the element in ``kwargs`` is used. All the elements of ``args`` and ``kwargs`` except the ground truth labels are features. It feeds features to the predictor and compares the result with ground truth labels. Returns: ~chainer.Variable: Loss value.'
def __call__(self, *args, **kwargs):
    if isinstance(self.label_key, int):
        if not (-len(args) <= self.label_key < len(args)):
            msg = 'Label key %d is out of bounds' % self.label_key
            raise ValueError(msg)
        t = args[self.label_key]
        if self.label_key == -1:
            args = args[:-1]
        else:
            args = args[:self.label_key] + args[self.label_key + 1:]
    elif isinstance(self.label_key, str):
        if self.label_key not in kwargs:
            msg = 'Label key "%s" is not found' % self.label_key
            raise ValueError(msg)
        t = kwargs[self.label_key]
        del kwargs[self.label_key]
    self.y = None
    self.loss = None
    self.accuracy = None
    self.y = self.predictor(*args, **kwargs)
    self.loss = self.lossfun(self.y, t)
    reporter.report({'loss': self.loss}, self)
    if self.compute_accuracy:
        self.accuracy = self.accfun(self.y, t)
        reporter.report({'accuracy': self.accuracy}, self)
    return self.loss
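A minimal sketch of how this wrapper is typically used; the shapes and data below are placeholders:

import numpy
import chainer.links as L

model = L.Classifier(L.Linear(None, 10))
x = numpy.random.rand(4, 5).astype(numpy.float32)
t = numpy.array([0, 1, 2, 3], dtype=numpy.int32)
loss = model(x, t)  # also reports 'loss' and 'accuracy' to the current reporter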
'Applies layer normalization to the given input. Args: x (~chainer.Variable): Batch vectors. Shape of this value must be `(batch_size, unit_size)`, e.g., the output of :func:`~chainer.functions.linear`. Returns: ~chainer.Variable: Output of the layer normalization.'
def __call__(self, x):
    if self.gamma.data is None:
        self._initialize_params(x.size // x.shape[0])
    return layer_normalization.layer_normalization(
        x, self.gamma, self.beta, self.eps)
'__call__(self, x, finetune=False) Invokes the forward propagation of BatchNormalization. In training mode, the BatchNormalization computes moving averages of mean and variance for evaluation during training, and normalizes the input using batch statistics. .. warning:: ``test`` argument is not supported anymore since v2. Instead, use ``chainer.using_config(\'train\', train)``. See :func:`chainer.using_config`. Args: x (Variable): Input variable. finetune (bool): If it is in the training mode and ``finetune`` is ``True``, BatchNormalization runs in fine-tuning mode; it accumulates the input array to compute population statistics for normalization, and normalizes the input using batch statistics.'
def __call__(self, x, **kwargs):
argument.check_unexpected_kwargs(kwargs, test='test argument is not supported anymore. Use chainer.using_config') (finetune,) = argument.parse_kwargs(kwargs, ('finetune', False)) if hasattr(self, 'gamma'): gamma = self.gamma else: with cuda.get_device_from_id(self._device_id): gamma = variable.Variable(self.xp.ones(self.avg_mean.shape, dtype=x.dtype)) if hasattr(self, 'beta'): beta = self.beta else: with cuda.get_device_from_id(self._device_id): beta = variable.Variable(self.xp.zeros(self.avg_mean.shape, dtype=x.dtype)) if configuration.config.train: if finetune: self.N += 1 decay = (1.0 - (1.0 / self.N)) else: decay = self.decay func = batch_normalization.BatchNormalizationFunction(self.eps, self.avg_mean, self.avg_var, decay) ret = func(x, gamma, beta) self.avg_mean[:] = func.running_mean self.avg_var[:] = func.running_var else: mean = variable.Variable(self.avg_mean) var = variable.Variable(self.avg_var) ret = batch_normalization.fixed_batch_normalization(x, gamma, beta, mean, var, self.eps) return ret
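A short sketch of the train/test switch described in the warning above; ``chainer.using_config`` decides whether batch or population statistics are used:

import numpy
import chainer
import chainer.links as L

bn = L.BatchNormalization(3)
x = numpy.random.rand(5, 3).astype(numpy.float32)
y_train = bn(x)                              # batch statistics; updates the moving averages
with chainer.using_config('train', False):
    y_test = bn(x)                           # uses the accumulated population statistics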
'Resets the population count for collecting population statistics. This method can be skipped if it is the first time to use the fine-tuning mode. Otherwise, this method should be called before starting the fine-tuning mode again.'
def start_finetuning(self):
self.N = 0
'Returns the parameter variable. Args: volatile (~chainer.Flag): The volatility of the returned variable. Returns: ~chainer.Variable: A copy of the parameter variable with given volatility.'
def __call__(self, volatile='off'):
    W = identity.identity(self.W)
    W.volatile = volatile
    return identity.identity(W)
'Returns new cell state and updated output of LSTM. Args: c (~chainer.Variable): Cell states of LSTM units. h (~chainer.Variable): Output at the previous time step. x (~chainer.Variable): A new batch from the input sequence. Returns: tuple of ~chainer.Variable: Returns ``(c_new, h_new)``, where ``c_new`` represents new cell state, and ``h_new`` is updated output of LSTM units.'
def __call__(self, c, h, x):
if (self.upward.W.data is None): in_size = (x.size // x.shape[0]) with cuda.get_device_from_id(self._device_id): self.upward._initialize_params(in_size) self._initialize_params() lstm_in = self.upward(x) if (h is not None): lstm_in += self.lateral(h) if (c is None): xp = self.xp with cuda.get_device_from_id(self._device_id): c = variable.Variable(xp.zeros((x.shape[0], self.state_size), dtype=x.dtype)) return lstm.lstm(c, lstm_in)
'Sets the internal state. It sets the :attr:`c` and :attr:`h` attributes. Args: c (~chainer.Variable): A new cell states of LSTM units. h (~chainer.Variable): A new output at the previous time step.'
def set_state(self, c, h):
    assert isinstance(c, variable.Variable)
    assert isinstance(h, variable.Variable)
    c_ = c
    h_ = h
    if self.xp == numpy:
        c_.to_cpu()
        h_.to_cpu()
    else:
        c_.to_gpu(self._device_id)
        h_.to_gpu(self._device_id)
    self.c = c_
    self.h = h_
'Resets the internal state. It sets ``None`` to the :attr:`c` and :attr:`h` attributes.'
def reset_state(self):
self.c = self.h = None
'Updates the internal state and returns the LSTM outputs. Args: x (~chainer.Variable): A new batch from the input sequence. Returns: ~chainer.Variable: Outputs of updated LSTM units.'
def __call__(self, x):
    if self.upward.W.data is None:
        with cuda.get_device_from_id(self._device_id):
            in_size = x.size // x.shape[0]
            self.upward._initialize_params(in_size)
            self._initialize_params()
    batch = x.shape[0]
    lstm_in = self.upward(x)
    h_rest = None
    if self.h is not None:
        h_size = self.h.shape[0]
        if batch == 0:
            h_rest = self.h
        elif h_size < batch:
            msg = ('The batch size of x must be equal to or less than '
                   'the size of the previous state h.')
            raise TypeError(msg)
        elif h_size > batch:
            h_update, h_rest = split_axis.split_axis(
                self.h, [batch], axis=0)
            lstm_in += self.lateral(h_update)
        else:
            lstm_in += self.lateral(self.h)
    if self.c is None:
        xp = self.xp
        with cuda.get_device_from_id(self._device_id):
            self.c = variable.Variable(
                xp.zeros((batch, self.state_size), dtype=x.dtype))
    self.c, y = lstm.lstm(self.c, lstm_in)
    if h_rest is None:
        self.h = y
    elif len(y.data) == 0:
        self.h = h_rest
    else:
        self.h = concat.concat([y, h_rest], axis=0)
    return y
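A usage sketch for the stateful LSTM above; the sizes are arbitrary:

import numpy
import chainer.links as L

lstm = L.LSTM(None, 20)   # in_size is inferred on the first call
x = numpy.random.rand(3, 10).astype(numpy.float32)
y1 = lstm(x)              # the state is created and updated here
y2 = lstm(x)              # uses the state left by the previous call
lstm.reset_state()        # clears c and h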
'Applies the convolution layer. Args: x (~chainer.Variable): Input image. Returns: ~chainer.Variable: Output of the convolution.'
def __call__(self, x):
    if self.W.data is None:
        self._initialize_params(x.shape[1])
    return dilated_convolution_2d.dilated_convolution_2d(
        x, self.W, self.b, self.stride, self.pad, self.dilate)
'Applies the linear layer. Args: x (~chainer.Variable): Batch of input vectors. Returns: ~chainer.Variable: Output of the linear layer.'
def __call__(self, x):
    if self.W.data is None:
        self._initialize_params(x.size // x.shape[0])
    return linear.linear(x, self.W, self.b)
'Applies broadcasted elementwise product. Args: xs (list of Variables): Input variables whose length should be one if the link has a learnable weight parameter, otherwise should be two.'
def __call__(self, *xs):
    axis = self.axis
    if hasattr(self, 'W'):
        if chainer.is_debug():
            assert len(xs) == 1
        x, = xs
        W = self.W
        z = scale.scale(x, W, axis)
    else:
        if chainer.is_debug():
            assert len(xs) == 2
        x, y = xs
        z = scale.scale(x, y, axis)
    if hasattr(self, 'bias'):
        return self.bias(z)
    else:
        return z
'__call__(self, x) Does forward propagation.'
def __call__(self, *args):
    n_args = len(args)
    msg = ('Invalid argument. The length of GRU.__call__ must be 1. '
           'But %d is given. ' % n_args)
    if n_args == 0 or n_args >= 3:
        raise ValueError(msg)
    elif n_args == 2:
        msg += ('In Chainer v2, chainer.links.GRU is changed from stateless '
                'to stateful. One possibility is you assume GRU to be '
                'stateless. Use chainer.links.StatelessGRU instead.')
        raise ValueError(msg)
    return super(GRU, self).__call__(args[0])
'Applies broadcasted elementwise summation. Args: xs (list of Variables): Input variables whose length should be one if the link has a learnable bias parameter, otherwise should be two.'
def __call__(self, *xs):
    axis = self.axis
    if hasattr(self, 'b'):
        if chainer.is_debug():
            assert len(xs) == 1
        x, = xs
        b = self.b
        return bias.bias(x, b, axis)
    else:
        if chainer.is_debug():
            assert len(xs) == 2
        x, y = xs
        return bias.bias(x, y, axis)
'__call__(self, hx, cx, xs) Calculates all hidden states and cell states. .. warning:: ``train`` argument is not supported anymore since v2. Instead, use ``chainer.using_config(\'train\', train)``. See :func:`chainer.using_config`. Args: hx (~chainer.Variable or None): Initial hidden states. If ``None`` is specified, a zero vector is used. cx (~chainer.Variable or None): Initial cell states. If ``None`` is specified, a zero vector is used. xs (list of ~chainer.Variable): List of input sequences. Each element ``xs[i]`` is a :class:`chainer.Variable` holding a sequence.'
def __call__(self, hx, cx, xs, **kwargs):
argument.check_unexpected_kwargs(kwargs, train='train argument is not supported anymore. Use chainer.using_config') argument.assert_kwargs_empty(kwargs) assert isinstance(xs, (list, tuple)) indices = n_step_rnn.argsort_list_descent(xs) xs = n_step_rnn.permutate_list(xs, indices, inv=False) if (hx is None): hx = self.init_hx(xs) else: hx = permutate.permutate(hx, indices, axis=1, inv=False) if (cx is None): cx = self.init_hx(xs) else: cx = permutate.permutate(cx, indices, axis=1, inv=False) trans_x = transpose_sequence.transpose_sequence(xs) ws = [[w.w0, w.w1, w.w2, w.w3, w.w4, w.w5, w.w6, w.w7] for w in self] bs = [[w.b0, w.b1, w.b2, w.b3, w.b4, w.b5, w.b6, w.b7] for w in self] (hy, cy, trans_y) = self.rnn(self.n_layers, self.dropout, hx, cx, ws, bs, trans_x) hy = permutate.permutate(hy, indices, axis=1, inv=True) cy = permutate.permutate(cy, indices, axis=1, inv=True) ys = transpose_sequence.transpose_sequence(trans_y) ys = n_step_rnn.permutate_list(ys, indices, inv=True) return (hy, cy, ys)
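A sketch for the sequence-level API above, with made-up sequence lengths:

import numpy
import chainer.links as L

rnn = L.NStepLSTM(n_layers=1, in_size=8, out_size=16, dropout=0.0)
xs = [numpy.random.rand(n, 8).astype(numpy.float32) for n in (5, 3, 4)]
hy, cy, ys = rnn(None, None, xs)  # None lets the link create zero initial states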
'Extracts the word embedding of given IDs. Args: x (~chainer.Variable): Batch vectors of IDs. Returns: ~chainer.Variable: Batch of corresponding embeddings.'
def __call__(self, x):
return embed_id.embed_id(x, self.W, ignore_label=self.ignore_label)
'Returns new cell state and output of Child-Sum TreeLSTM. Args: cshsx (list of :class:`~chainer.Variable`): Variable arguments which include all cell vectors and all output vectors of variable children, and an input vector. Returns: tuple of ~chainer.Variable: Returns :math:`(c_{new}, h_{new})`, where :math:`c_{new}` represents new cell state vector, and :math:`h_{new}` is new output vector.'
def __call__(self, *cshsx):
cs = cshsx[:(len(cshsx) // 2)] hs = cshsx[(len(cshsx) // 2):(-1)] x = cshsx[(-1)] assert (len(cs) >= 1) assert (len(hs) >= 1) assert (len(cs) == len(hs)) if (x is None): if any(((c is not None) for c in cs)): base = [c for c in cs if (c is not None)][0] elif any(((h is not None) for h in hs)): base = [h for h in hs if (h is not None)][0] else: raise ValueError('All inputs are None.') (batchsize, dtype) = (base.shape[0], base.dtype) x = self.xp.zeros((batchsize, self.in_size), dtype=dtype) W_x_in = self.W_x(x) (W_x_aio_in, W_x_f_in) = split_axis.split_axis(W_x_in, [(3 * self.state_size)], axis=1) hs = self._pad_zero_nodes(hs, (x.shape[0], self.state_size), dtype=x.dtype) cs = self._pad_zero_nodes(cs, (x.shape[0], self.state_size), dtype=x.dtype) aio_in = (self.W_h_aio(sum(hs)) + W_x_aio_in) W_h_fs_in = concat.concat(split_axis.split_axis(self.W_h_f(concat.concat(hs, axis=0)), len(hs), axis=0), axis=1) f_in = (W_h_fs_in + concat.concat(([W_x_f_in] * len(hs)), axis=1)) tree_lstm_in = concat.concat([aio_in, f_in], axis=1) return tree_lstm.tree_lstm(*(cs + (tree_lstm_in,)))
'Returns new cell state and output of N-ary TreeLSTM. Args: cshsx (list of :class:`~chainer.Variable`): Arguments which include all cell vectors and all output vectors of fixed-length children, and an input vector. The number of arguments must be same as ``n_ary * 2 + 1``. Returns: tuple of ~chainer.Variable: Returns :math:`(c_{new}, h_{new})`, where :math:`c_{new}` represents new cell state vector, and :math:`h_{new}` is new output vector.'
def __call__(self, *cshsx):
assert (len(cshsx) == ((self.n_ary * 2) + 1)) cs = cshsx[:self.n_ary] hs = cshsx[self.n_ary:(-1)] x = cshsx[(-1)] if (x is None): if any(((c is not None) for c in cs)): base = [c for c in cs if (c is not None)][0] elif any(((h is not None) for h in hs)): base = [h for h in hs if (h is not None)][0] else: raise ValueError('All inputs are None.') (batchsize, dtype) = (base.shape[0], base.dtype) x = self.xp.zeros((batchsize, self.in_size), dtype=dtype) tree_lstm_in = self.W_x(x) for (i, h) in enumerate(hs, start=1): if (h is not None): tree_lstm_in += getattr(self, 'W_h{}'.format(i))(h) cs = self._pad_zero_nodes(cs, (x.shape[0], self.state_size), dtype=x.dtype) return tree_lstm.tree_lstm(*(cs + (tree_lstm_in,)))
'Applies the depthwise convolution layer. Args: x (chainer.Variable or :class:`numpy.ndarray` or cupy.ndarray): Input image. Returns: ~chainer.Variable: Output of the depthwise convolution.'
def __call__(self, x):
    if self.W.data is None:
        self._initialize_params(x.shape[1])
    return depthwise_convolution_2d.depthwise_convolution_2d(
        x, self.W, self.b, self.stride, self.pad)
'Computes the output of the mlpconv layer. Args: x (~chainer.Variable): Input image. Returns: ~chainer.Variable: Output of the mlpconv layer.'
def __call__(self, x):
    f = self.activation
    for l in self[:-1]:
        x = f(l(x))
    return self[-1](x)
'Applies N-dimensional convolution layer. Args: x (~chainer.Variable): Input image. Returns: ~chainer.Variable: Output of convolution.'
def __call__(self, x):
return convolution_nd.convolution_nd(x, self.W, self.b, self.stride, self.pad, cover_all=self.cover_all)
'Applies the convolution layer. Args: x (~chainer.Variable): Input image. Returns: ~chainer.Variable: Output of the convolution.'
def __call__(self, x):
    if self.W.data is None:
        self._initialize_params(x.shape[1])
    return convolution_2d.convolution_2d(
        x, self.W, self.b, self.stride, self.pad)
'__call__(self, hx, xs) Calculates all hidden states and cell states. .. warning:: ``train`` argument is not supported anymore since v2. Instead, use ``chainer.using_config(\'train\', train)``. See :func:`chainer.using_config`. Args: hx (~chainer.Variable or None): Initial hidden states. If ``None`` is specified, a zero vector is used. xs (list of ~chainer.Variable): List of input sequences. Each element ``xs[i]`` is a :class:`chainer.Variable` holding a sequence.'
def __call__(self, hx, xs, **kwargs):
argument.check_unexpected_kwargs(kwargs, train='train argument is not supported anymore. Use chainer.using_config') argument.assert_kwargs_empty(kwargs) assert isinstance(xs, (list, tuple)) indices = argsort_list_descent(xs) xs = permutate_list(xs, indices, inv=False) if (hx is None): hx = self.init_hx(xs) else: hx = permutate.permutate(hx, indices, axis=1, inv=False) trans_x = transpose_sequence.transpose_sequence(xs) ws = [[w.w0, w.w1] for w in self] bs = [[w.b0, w.b1] for w in self] (hy, trans_y) = self.rnn(self.n_layers, self.dropout, hx, ws, bs, trans_x, activation=self.activation) hy = permutate.permutate(hy, indices, axis=1, inv=True) ys = transpose_sequence.transpose_sequence(trans_y) ys = permutate_list(ys, indices, inv=True) return (hy, ys)
'__call__(self, hx, xs) Calculates all hidden states and cell states. .. warning:: ``train`` argument is not supported anymore since v2. Instead, use ``chainer.using_config(\'train\', train)``. See :func:`chainer.using_config`. Args: hx (~chainer.Variable or None): Initial hidden states. If ``None`` is specified, a zero vector is used. xs (list of ~chainer.Variable): List of input sequences. Each element ``xs[i]`` is a :class:`chainer.Variable` holding a sequence.'
def __call__(self, hx, xs, **kwargs):
argument.check_unexpected_kwargs(kwargs, train='train argument is not supported anymore. Use chainer.using_config') argument.assert_kwargs_empty(kwargs) assert isinstance(xs, (list, tuple)) indices = argsort_list_descent(xs) xs = permutate_list(xs, indices, inv=False) if (hx is None): hx = self.init_hx(xs) else: hx = permutate.permutate(hx, indices, axis=1, inv=False) trans_x = transpose_sequence.transpose_sequence(xs) ws = [[w.w0, w.w1, w.w2, w.w3, w.w4, w.w5] for w in self] bs = [[w.b0, w.b1, w.b2, w.b3, w.b4, w.b5] for w in self] (hy, trans_y) = self.rnn(self.n_layers, self.dropout, hx, ws, bs, trans_x) hy = permutate.permutate(hy, indices, axis=1, inv=True) ys = transpose_sequence.transpose_sequence(trans_y) ys = permutate_list(ys, indices, inv=True) return (hy, ys)
'Applies the bilinear function to inputs and the internal parameters. Args: e1 (~chainer.Variable): Left input. e2 (~chainer.Variable): Right input. Returns: ~chainer.Variable: Output variable.'
def __call__(self, e1, e2):
if self.nobias:
    return bilinear.bilinear(e1, e2, self.W)
else:
    return bilinear.bilinear(e1, e2, self.W, self.V1, self.V2, self.b)
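A hedged usage sketch of a bilinear layer: each output unit computes a quadratic form of the two inputs, plus linear terms and a bias when ``nobias`` is ``False`` (sizes below are illustrative):

import numpy as np
import chainer.links as L

bl = L.Bilinear(3, 4, 5)                  # left size 3, right size 4, out size 5
e1 = np.random.rand(8, 3).astype(np.float32)
e2 = np.random.rand(8, 4).astype(np.float32)
y = bl(e1, e2)
print(y.shape)                            # (8, 5)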
'Resets the internal states. It sets ``None`` to the :attr:`c` and :attr:`h` attributes.'
def reset_state(self):
self.c = self.h = None
'Updates the internal state and returns the LSTM outputs. Args: x (~chainer.Variable): A new batch from the input sequence. Returns: ~chainer.Variable: Outputs of updated LSTM units.'
def __call__(self, x):
lstm_in = self.upward(x)
if self.h is not None:
    lstm_in += self.lateral(self.h)
if self.c is None:
    xp = self.xp
    with cuda.get_device_from_id(self._device_id):
        self.c = variable.Variable(
            xp.zeros((x.shape[0], self.state_size), dtype=x.dtype))
lstm_in = reshape.reshape(
    lstm_in, (len(lstm_in.data), lstm_in.shape[1] // 4, 4))
a, i, f, o = split_axis.split_axis(lstm_in, 4, 2)
a = reshape.reshape(a, (len(a.data), a.shape[1]))
i = reshape.reshape(i, (len(i.data), i.shape[1]))
f = reshape.reshape(f, (len(f.data), f.shape[1]))
o = reshape.reshape(o, (len(o.data), o.shape[1]))
peep_in_i = self.peep_i(self.c)
peep_in_f = self.peep_f(self.c)
a = tanh.tanh(a)
i = sigmoid.sigmoid(i + peep_in_i)
f = sigmoid.sigmoid(f + peep_in_f)
self.c = a * i + f * self.c
peep_in_o = self.peep_o(self.c)
o = sigmoid.sigmoid(o + peep_in_o)
self.h = o * tanh.tanh(self.c)
return self.h
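A hedged usage sketch of a stateful peephole LSTM: the link keeps ``c`` and ``h`` across calls, so each call consumes one time step and ``reset_state()`` starts a fresh sequence. The link name `chainer.links.StatefulPeepholeLSTM` and the sizes are assumptions:

import numpy as np
import chainer.links as L

lstm = L.StatefulPeepholeLSTM(10, 20)
lstm.reset_state()
for t in range(5):
    x = np.random.rand(3, 10).astype(np.float32)   # one step for a batch of 3
    h = lstm(x)                                     # h.shape == (3, 20)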
'Computes the output of the Highway module. Args: x (~chainer.Variable): Input variable. Returns: Variable: Output variable. Its array has the same spatial size and the same minibatch size as the input array.'
def __call__(self, x):
out_plain = self.activate(self.plain(x))
out_transform = sigmoid.sigmoid(self.transform(x))
y = out_plain * out_transform + x * (1 - out_transform)
return y
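A hedged sketch of using a highway layer: the gate g = sigmoid(transform(x)) mixes the transformed and untransformed input as y = g * f(plain(x)) + (1 - g) * x, so input and output sizes must match. The link name `chainer.links.Highway` is assumed:

import numpy as np
import chainer.links as L

hw = L.Highway(32)
x = np.random.rand(16, 32).astype(np.float32)
y = hw(x)
print(y.shape)    # (16, 32), same shape as the input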
'Sets the internal state. It sets the :attr:`c` and :attr:`h` attributes. Args: c (~chainer.Variable): A new cell state of LSTM units. h (~chainer.Variable): A new output at the previous time step.'
def set_state(self, c, h):
assert isinstance(c, variable.Variable)
assert isinstance(h, variable.Variable)
c_ = c
h_ = h
if self.xp is numpy:
    c_.to_cpu()
    h_.to_cpu()
else:
    c_.to_gpu(self._device_id)
    h_.to_gpu(self._device_id)
self.c = c_
self.h = h_
'Resets the internal state. It sets ``None`` to the :attr:`c` and :attr:`h` attributes.'
def reset_state(self):
self.c = self.h = None
'Updates the internal state and returns the LSTM outputs. Args: x (~chainer.Variable): A new batch from the input sequence. Returns: ~chainer.Variable: Outputs of updated LSTM units.'
def __call__(self, x):
lstm_in = self.upward(x)
if self.h is not None:
    lstm_in += self.lateral(self.h)
else:
    xp = self.xp
    with cuda.get_device_from_id(self._device_id):
        self.h = variable.Variable(
            xp.zeros((len(x.data), self.state_size), dtype=x.data.dtype))
if self.c is None:
    xp = self.xp
    with cuda.get_device_from_id(self._device_id):
        self.c = variable.Variable(
            xp.zeros((len(x.data), self.state_size), dtype=x.data.dtype))
lstm_in = reshape.reshape(
    lstm_in, (len(lstm_in.data), lstm_in.data.shape[1] // 4, 4))
a, i, f, o = split_axis.split_axis(lstm_in, 4, 2)
a = reshape.reshape(a, (len(a.data), self.state_size))
i = reshape.reshape(i, (len(i.data), self.state_size))
f = reshape.reshape(f, (len(f.data), self.state_size))
o = reshape.reshape(o, (len(o.data), self.state_size))
c_tmp = tanh.tanh(a) * sigmoid.sigmoid(i) + sigmoid.sigmoid(f) * self.c
self.c = zoneout.zoneout(self.c, c_tmp, self.c_ratio)
self.h = zoneout.zoneout(
    self.h, sigmoid.sigmoid(o) * tanh.tanh(c_tmp), self.h_ratio)
return self.h
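The two ``zoneout`` calls above blend the previous state with the newly computed one. A rough numpy illustration of that regularizer (a sketch of the idea, not the library implementation) might look like:

import numpy as np

def zoneout_sketch(prev, new, ratio, train=True):
    # During training, each unit keeps its previous value with probability
    # `ratio`; at test time the two states are interpolated deterministically.
    if train:
        mask = np.random.rand(*prev.shape) < ratio
        return np.where(mask, prev, new)
    return ratio * prev + (1 - ratio) * new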
'Computes the output of the Inception module. Args: x (~chainer.Variable): Input variable. Returns: Variable: Output variable. Its array has the same spatial size and the same minibatch size as the input array. The channel dimension has size ``out1 + out3 + out5 + proj_pool``.'
def __call__(self, x):
out1 = self.conv1(x)
out3 = self.conv3(relu.relu(self.proj3(x)))
out5 = self.conv5(relu.relu(self.proj5(x)))
pool = self.projp(max_pooling_2d.max_pooling_2d(x, 3, stride=1, pad=1))
y = relu.relu(concat.concat((out1, out3, out5, pool), axis=1))
return y
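A hedged usage sketch of a GoogLeNet-style Inception block; the argument order (in_channels, out1, proj3, out3, proj5, out5, proj_pool) and the concrete sizes are assumptions chosen for illustration:

import numpy as np
import chainer.links as L

inc = L.Inception(192, 64, 96, 128, 16, 32, 32)
x = np.random.rand(1, 192, 28, 28).astype(np.float32)
y = inc(x)
print(y.shape[1])   # 64 + 128 + 32 + 32 = 256 output channels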
'Function object that created this variable node. When the function is implemented with the old-style API (i.e., it uses :class:`Function` class), this property returns the :class:`Function` object. The object is extracted from the :class:`FunctionAdapter` object, so the returned object is not the function node, but instead the actual implementation of forward and backward procedures. When the function is implemented with the new-style API (i.e., it uses :class:`FunctionNode` class), this property returns the function node object. In this case, the returned object is the same as :attr:`creator_node`. .. warning:: As of v3.0.0, when the creator is an old-style function, the following code is invalid: .. code-block:: python creator = v.creator v.creator = None v.creator = creator The point is that :class:`FunctionNode` objects are used as nodes in the computational graph instead of :class:`Function`, and each :class:`Function` object only holds a *weak reference* to the corresponding :class:`FunctionNode`. Since ``creator`` returns the :class:`Function` object, the :class:`FunctionNode` object is not kept by preserving ``creator``. The above code should be fixed as follows. .. code-block:: python creator_node = v.creator_node v.creator_node = None v.creator_node = creator_node'
@property def creator(self):
node = self._creator_node
if node is None:
    return None
if isinstance(node, chainer.function.FunctionAdapter):
    return node.function
return node
'Function node that has this variable as an output. See :class:`FunctionNode` for the definition of a function node.'
@property def creator_node(self):
return self._creator_node
'Data array of the corresponding variable. If the data is not available, it returns ``None``.'
@property def data(self):
return self._data
'Gradient array of the corresponding variable. If the variable is not available, it returns ``None``.'
@property def grad(self):
var = self.get_variable()
return None if var is None else var.grad
'Gradient variable of the corresponding variable. If the corresponding variable is not available, it return ``None``.'
@property def grad_var(self):
var = self.get_variable()
return None if var is None else var._grad_var
'Short text that represents the variable node.'
@property def label(self):
if self.shape == ():
    return str(self.dtype)
return '(%s), %s' % (', '.join(map(str, self.shape)), str(self.dtype))
'It indicates that ``grad`` will be set in backward calculation.'
@property def requires_grad(self):
return self._requires_grad
'Returns the corresponding :class:`Variable` object. A VariableNode object holds a weak reference to the variable object. If the reference is alive, the variable is returned by this method. Otherwise, this method creates a new :class:`Variable` object from this node object and returns it. Returns: Variable: The variable object that refers to this node.'
def get_variable(self):
var = self._variable()
if var is not None:
    return var

var = Variable(self.data, name=self.name,
               requires_grad=self._requires_grad)
var._node = self
return var
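A hedged illustration of the weak-reference behaviour described above: while the original Variable is alive, get_variable() returns that exact object; once it is gone, a fresh Variable wrapping the node's data is created. Access to the node via ``Variable.node`` is assumed:

import numpy as np
import chainer

v = chainer.Variable(np.arange(3, dtype=np.float32))
node = v.node
assert node.get_variable() is v   # weak reference still alive
del v
v2 = node.get_variable()          # a new Variable built from the node's data
print(v2.data)                    # [0. 1. 2.]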