'Sets a :class:`Function` object that created this node. This method is equivalent to ``self.creator = creator``. A :class:`FunctionNode` object can also be passed. Args: creator (Function or FunctionNode): Function that has created this variable.'
def set_creator(self, creator):
self.creator = creator
'Sets a :class:`FunctionNode` object that created this node. This method is equivalent to ``self.creator_node = creator_node``. A :class:`Function` object can also be passed, in which case the :attr:`~Function.node` object is extracted. Args: creator_node (FunctionNode or Function): Function node that has this variable as an output.'
def set_creator_node(self, creator_node):
self.creator_node = creator_node
'Deletes the reference to the creator of this variable node. This method is equivalent to ``self.creator_node = None``.'
def unchain(self):
self.creator_node = None
'Lets the node hold a reference to the underlying data array. This method gets the data array of the corresponding variable and keeps it. If the weak reference to the corresponding variable is dead, it raises an error.'
def retain_data(self):
variable = self._variable()
if variable is not None:
    self.data = variable.data
else:
    raise RuntimeError('cannot retain variable data: the variable has been already released')
'Display a summary of the stored data and location of the Variable'
def debug_print(self):
msg = '{summary}\n- device: {device}\n- backend: {background}\n- shape: {shape}\n- dtype: {dtype}\n- statistics: {stats}\n- grad: {grad}'
stats_msg = 'mean={0:.8f}, std={1:.8f}'
try:
    device = self.data.device
except AttributeError:
    device = 'CPU'
with cuda.get_device_from_array(self.data) as dev:
    xp = numpy if int(dev) == -1 else cuda.cupy
    if self.grad is None:
        grad = None
    elif xp.all(self.grad == 0):
        grad = 0
    else:
        grad = stats_msg.format(float(xp.mean(self.grad)),
                                float(xp.std(self.grad)))
    stats = stats_msg.format(float(xp.mean(self.data)),
                             float(xp.std(self.data)))
return msg.format(summary=self.summary(), grad=grad, shape=self.data.shape,
                  background=type(self.data), dtype=self.data.dtype,
                  device=device, stats=stats)
'Returns the first dimension of the data array. Returns: int: Number of the first dimension of the data array.'
def __len__(self):
return len(self.data)
'Short text that represents the variable.'
@property
def label(self):
return self._node.label
'Function implementation that created this variable. When this variable has been created by an old-style function (i.e., it is implemented as a subclass of :class:`Function`), this property returns that :class:`Function` object. When this variable has been created by a new-style function (i.e., it is implemented as a subclass of :class:`FunctionNode` class), this property returns that node object.'
@property
def creator(self):
return self._node.creator
':class:`FunctionNode` object that created this variable. This property has a setter to which ``None`` can be set. Setting ``None`` to this property is equivalent to calling :meth:`unchain`; it purges the variable from the function that created this variable. The setter also accepts the original :class:`FunctionNode` object that created this variable. For example, you can once set ``None`` to this property and then set the original value again. .. note:: Setting an irrelevant :class:`FunctionNode` object does not emit any error immediately, but the behavior is undefined. Do not set a :class:`FunctionNode` object that did not create this variable object.'
@property
def creator_node(self):
return self._node._creator_node
'Gradient array of this variable. Note that this property returns the underlying array of the gradient variable instead of the gradient variable itself; to get/set the gradient variable, use :attr:`grad_var` instead.'
@property
def grad(self):
gv = self._grad_var
return None if gv is None else gv.data
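# Usage sketch (not part of the library source): the practical difference between
# ``grad`` (raw array) and ``grad_var`` (Variable). Assumes only that chainer and
# numpy are importable.
import numpy as np
import chainer
import chainer.functions as F

x = chainer.Variable(np.array([1.0, 2.0], dtype=np.float32))
y = F.sum(x * x)
y.backward()
print(type(x.grad))      # numpy.ndarray holding [2., 4.]
print(type(x.grad_var))  # chainer.Variable wrapping the same array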
'It indicates that ``grad`` will be set in backward calculation.'
@property
def requires_grad(self):
return self._requires_grad
'Copies the data and gradient arrays to CPU.'
def to_cpu(self):
if self.data is None:
    return
self._data = [cuda.to_cpu(self.data)]
if self._grad_var is not None:
    self._grad_var.to_cpu()
node = self._node
if node._data is not None:
    node.retain_data()
'Copies the data and gradient arrays to specified GPU. Args: device: Target device specifier. If omitted, the current device is used.'
def to_gpu(self, device=None):
if self.data is None:
    self._initial_device = cuda.Device().id if device is None else device
else:
    self._data = [cuda.to_gpu(self.data, device)]
    if self._grad_var is not None:
        self._grad_var.to_gpu(device)
    node = self._node
    if node._data is not None:
        node.retain_data()
'Clears the gradient array.'
def cleargrad(self):
self._grad_var = None
'Initializes the gradient array by zeros. Note that the gradient variable is unchained from the computational graph by this method because this operation breaks the backprop validity. .. deprecated:: v1.15 Use :meth:`cleargrad` instead.'
def zerograd(self):
warnings.warn(
    'Variable.zerograd is deprecated. Use Variable.cleargrad instead.',
    DeprecationWarning)
if self.data is None:
    return
with cuda.get_device_from_array(self.data) as dev:
    gv = self._grad_var
    if gv is None:
        xp = numpy if dev.id == -1 else cuda.cupy
        self.grad = xp.zeros_like(self.data)
    else:
        gv.unchain()
        gv.data.fill(0)
'Copies the data array from given source variable. This method copies the data array from given variable to this variable. The copy is done even if the arrays reside on different devices, including across the host and a GPU device. If this variable has an uninitialized data array, this method initializes it by the data array of the given variable. Similarly, if the given variable has an uninitialized data array, this method initializes it by the data array of this variable (``self``). If both are uninitialized, this method does nothing. Args: var (Variable): Source variable.'
def copydata(self, var):
src = var.data
dst = self.data
if src is None:
    if dst is None:
        return
    var.initialize(self.shape)
    src = var.data
elif dst is None:
    self.initialize(src.shape)
    dst = self.data
src_xp = cuda.get_array_module(src)
dst_xp = cuda.get_array_module(dst)
if dst_xp is src_xp:
    dst_xp.copyto(dst, src)
elif dst_xp is numpy:
    dst_xp.copyto(dst, src.get())
else:
    dst.set(src)
'Accumulates the gradient array from given source variable. This method adds the gradient of a given variable to the gradient of this variable. The accumulation is even done across the host and different devices. If this variable has uninitialized data/grad arrays, this method initializes it with the shape of the given variable and then accumulates the gradient. Args: var (Variable): Source variable.'
def addgrad(self, var):
src = var._grad_var
if src is None:
    return
if self.data is None:
    self.initialize(var.shape)
dst = self._grad_var
src_dev = cuda.get_device_from_array(src.data)
dst_dev = cuda.get_device_from_array(self.data)
if src_dev.id != dst_dev.id:
    src = chainer.functions.copy(src, dst_dev.id)
self._grad_var = src if dst is None else src + dst
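# Usage sketch (not part of the library source): copying data and accumulating
# gradients between two variables with ``copydata`` and ``addgrad``.
import numpy as np
import chainer

a = chainer.Variable(np.zeros(3, dtype=np.float32))
b = chainer.Variable(np.arange(3, dtype=np.float32))
a.copydata(b)                             # a.data is now [0., 1., 2.]

a.grad = np.full(3, 2, dtype=np.float32)
b.grad = np.ones(3, dtype=np.float32)
a.addgrad(b)                              # a.grad becomes [3., 3., 3.]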
'Notifies the variable that the given function is its creator. Args: gen_func (Function): Function object that creates this variable as one of its outputs.'
def set_creator(self, gen_func):
self._node.set_creator(gen_func)
'Notifies the variable that the given node is its creator. Args: fnode (FunctionNode): Function node that has this variable as an output.'
def set_creator_node(self, fnode):
self._node.set_creator_node(fnode)
'Runs error backpropagation (a.k.a. backprop) from this variable. On backprop, :meth:`FunctionNode.backward` is called on each :class:`FunctionNode` object appearing in the backward graph starting from this variable. The backward graph is represented by backward references from variable nodes to their creators, and from function nodes to their input variable nodes. The backprop stops at all root nodes. Some function nodes set ``None`` as gradients of some inputs, where further backprop does not take place at such inputs. This method uses :data:`grad` as the initial error array. User can manually set a gradient array before calling this method. If :data:`data` contains only one element (i.e., it is scalar) and :data:`grad` is ``None``, then this method automatically complements 1.0 as the initial error. This is useful on starting backprop from some scalar loss value. Note that this method does not support *differentiable backprop*. Use :func:`grad` to compute the gradient of gradients. Args: retain_grad (bool): If ``True``, the gradient arrays of all intermediate variables are kept. Otherwise, :data:`grad` of the intermediate variables are set to ``None`` on appropriate timing, which may reduce the maximum memory consumption. In most cases of training some models, the purpose of backprop is to compute gradients of parameters, not of all variables, and therefore it is recommended to set this flag ``False``.'
def backward(self, retain_grad=False):
self._node._check_old_style_gradient()
if self.creator_node is None:
    return
initial_device = None
if cuda.available and isinstance(self.data, cuda.cupy.ndarray):
    try:
        initial_device = cuda.Device()
    except cuda.cupy.cuda.runtime.CUDARuntimeError as e:
        if e.status != 38:
            raise

is_debug = chainer.is_debug()

cand_funcs = []
seen_set = set()
grads = {}

if self.data.size == 1 and self._grad_var is None:
    with cuda.get_device_from_array(self.data) as device:
        if device is cuda.DummyDevice:
            self.grad = numpy.ones_like(self.data)
        else:
            self.grad = cuda.cupy.ones_like(self.data)
grads[self._node] = self._grad_var

def add_cand(cand):
    if cand not in seen_set:
        heapq.heappush(cand_funcs, (-cand.rank, len(seen_set), cand))
        seen_set.add(cand)

add_cand(self.creator_node)

def get_grad(node):
    if node is None:
        return None
    if node in grads:
        return grads[node]
    return node.grad_var

while cand_funcs:
    _, _, func = heapq.heappop(cand_funcs)
    inputs = func.inputs
    outputs = [y() for y in func.outputs]

    in_data = tuple([x.data for x in inputs])
    out_grad = tuple([get_grad(y) for y in outputs])
    out_grad_data = tuple([None if g is None else g.data for g in out_grad])
    hooks = chainer.get_function_hooks()
    if func._n_local_function_hooks != 0:
        hooks = collections.OrderedDict(hooks)
        hooks.update(func.local_function_hooks)
    hooks = hooks.values()

    cuda.get_device_from_array(*in_data).use()
    for hook in hooks:
        hook.backward_preprocess(func, in_data, out_grad_data)

    target_input_indexes = [i for i, x in enumerate(inputs) if x.requires_grad]
    target_inputs = [inputs[i] for i in target_input_indexes]
    in_grad = []
    for i, index_i in enumerate(target_input_indexes):
        x = inputs[index_i]
        if x in target_inputs[:i]:
            gx = None
        elif x in grads:
            gx = grads[x]
        elif x.creator_node is None:
            x._check_old_style_gradient()
            gx = x.grad_var
        else:
            gx = None
        in_grad.append(gx)

    gxs = func.backward_accumulate(target_input_indexes, out_grad, in_grad)

    assert len(gxs) == len(in_grad)
    for hook in hooks:
        hook.backward_postprocess(func, in_data, out_grad_data)

    if is_debug:
        for gx in gxs:
            if gx is None:
                continue
            gx_data = gx.data
            cuda.get_device_from_array(gx_data).use()
            if cuda.get_array_module(gx_data).isnan(gx_data).any():
                msg = 'NaN is detected on backward computation'
                raise RuntimeError(msg)

    if not retain_grad:
        for y in outputs:
            if y is not None and y is not self.node:
                grads[y] = None
                y_var = y.get_variable()
                if y_var is not None:
                    y_var._grad_var = None

    for i, gx in enumerate(gxs):
        if gx is None:
            continue
        x = target_inputs[i]
        if not x.requires_grad:
            continue
        _check_grad_type(func, x, gx.data)
        if x in target_inputs[:i]:
            cur_gx = grads[x]
            grads[x] = gx if cur_gx is None else gx + cur_gx
        else:
            grads[x] = gx
        x_var = x.get_variable()
        if x_var is not None:
            x_var._grad_var = grads[x]
        if x.creator_node is not None:
            add_cand(x.creator_node)
    del gxs

if initial_device is not None:
    initial_device.use()
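# Usage sketch (not part of the library source): running backprop from a scalar
# loss. With the default ``retain_grad=False`` only leaf gradients are kept;
# ``retain_grad=True`` also keeps gradients of intermediate variables.
import numpy as np
import chainer
import chainer.functions as F

x = chainer.Variable(np.array([3.0], dtype=np.float32))
h = 2 * x
y = F.sum(h * h)                 # scalar loss: 4 * x ** 2
y.backward(retain_grad=True)
print(x.grad)                    # [24.]  (dy/dx = 8x)
print(h.grad)                    # [12.]  (dy/dh = 2h), kept by retain_grad=True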
'Returns a variable of a different shape and the same content. .. seealso:: :func:`chainer.functions.reshape` for full documentation.'
def reshape(self, *shape):
if len(shape) == 1 and isinstance(shape[0], (tuple, list)):
    shape = shape[0]
return chainer.functions.reshape(self, shape)
'Permute the dimensions of an input variable without copy. .. seealso:: :func:`chainer.functions.transpose` for full documentation.'
def transpose(self, *axes):
if len(axes) == 0:
    axes = None
elif len(axes) == 1 and (isinstance(axes[0], (tuple, list)) or axes[0] is None):
    axes = axes[0]
return chainer.functions.transpose(self, axes)
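# Usage sketch (not part of the library source): the convenience methods simply
# forward to chainer.functions.reshape / chainer.functions.transpose.
import numpy as np
import chainer

v = chainer.Variable(np.arange(6, dtype=np.float32))
w = v.reshape(2, 3)      # same as chainer.functions.reshape(v, (2, 3))
u = w.transpose()        # same as chainer.functions.transpose(w)
print(u.shape)           # (3, 2)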
'Deletes the reference to the creator of this variable. This method deletes the reference to the creator from the corresponding variable node. Unlike :meth:`unchain_backward`, it does not backtrack the graph. This method is equivalent to ``self.creator_node = None``.'
def unchain(self):
self.creator_node = None
'Deletes references between variable nodes and functions backward. After this method completes, intermediate variable nodes and functions that are not referenced from anywhere are deallocated by reference count GC. Also this variable itself deletes the reference to its creator function from the node, i.e. the node becomes root in the computation graph. It indicates that backprop after unchaining stops at this variable. This behavior is useful to implement truncated BPTT.'
def unchain_backward(self):
cand_funcs = []
seen_set = set()

def add_cand(cand):
    if cand is not None and cand not in seen_set:
        cand_funcs.append(cand)
        seen_set.add(cand)

add_cand(self.creator_node)

while cand_funcs:
    func = cand_funcs.pop()
    for var in func.inputs:
        add_cand(var.creator_node)
    func.unchain()
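# Usage sketch (not part of the library source): truncated BPTT with
# ``unchain_backward``. The LSTM model, the optimizer, and the random input
# sequence are illustrative stand-ins.
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L

rnn = L.LSTM(4, 4)
optimizer = chainer.optimizers.SGD()
optimizer.setup(rnn)

seq = [np.random.rand(1, 4).astype(np.float32) for _ in range(60)]
loss = 0
for i, x in enumerate(seq):
    y = rnn(x)
    loss += F.sum(y * y)
    if i % 30 == 29:             # backprop every 30 steps
        rnn.cleargrads()
        loss.backward()
        loss.unchain_backward()  # cut the graph; older history is freed
        optimizer.update()
        loss = 0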
'Lets the corresponding variable node keep the underlying array.'
def retain_data(self):
self._node.data = self._data[0]
'Initializes the uninitialized variable. Uninitialized variable is a variable created with the data array set to None. This method creates and initializes the data array. The shape of the variable can be left unknown until this method is called. Args: shape (tuple of int): Shape of the data array.'
def initialize(self, shape):
xp = numpy if self._initial_device is None else cuda.cupy
with cuda.get_device_from_id(self._initial_device):
    data = initializers.generate_array(self.initializer, shape, xp)
    ginit = self._grad_initializer
    grad = None if ginit is None else initializers.generate_array(ginit, shape, xp)
self._data[0] = data
self.grad = grad
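# Usage sketch (not part of the library source): a Parameter created without a
# shape stays uninitialized until ``initialize`` is called (directly, or on the
# first forward pass of a link).
import chainer

p = chainer.Parameter(chainer.initializers.Normal(0.05))
print(p.data)            # None: still uninitialized
p.initialize((3, 4))     # allocates the array and applies the initializer
print(p.shape)           # (3, 4)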
'Updates the data array using the gradient and the update rule. This method updates the parameter using the attached update rule.'
def update(self):
if (self.update_rule is not None): self.update_rule.update(self)
'Initializes given array. This method destructively changes the value of array. The derived class is required to implement this method. The algorithms used to make the new values depend on the concrete derived classes. Args: array (numpy.ndarray or cupy.ndarray): An array to be initialized by this initializer.'
def __call__(self, array):
raise NotImplementedError()
'Returns self.'
def __iter__(self):
return self
'Returns the next batch. This is a part of the iterator protocol of Python. It may raise the :class:`StopIteration` exception when it stops the iteration.'
def __next__(self):
raise NotImplementedError
'Python2 alternative of ``__next__``. It calls :meth:`__next__` by default.'
def next(self):
return self.__next__()
'Finalizes the iterator and possibly releases the resources. This method does nothing by default. Implementation may override it to better handle the internal resources.'
def finalize(self):
pass
'Serializes the internal state of the iterator. This is a method to support the serializer protocol of Chainer. .. note:: It should only serialize the internal state that changes over the iteration. It should not serialize what is set manually by users, such as the batch size.'
def serialize(self, serializer):
pass
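# Usage sketch (not part of the library source): ``SerialIterator`` is a concrete
# implementation of this iterator interface.
from chainer import iterators

it = iterators.SerialIterator(list(range(10)), batch_size=4,
                              repeat=False, shuffle=False)
for batch in it:
    print(batch)    # [0, 1, 2, 3], then [4, 5, 6, 7], then [8, 9]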
'Returns an example or a sequence of examples. It implements the standard Python indexing and one-dimensional integer array indexing. It uses the :meth:`get_example` method by default, but it may be overridden by the implementation to, for example, improve the slicing performance. Args: index (int, slice, list or numpy.ndarray): An index of an example or indexes of examples. Returns: If index is int, returns an example created by `get_example`. If index is either slice or one-dimensional list or numpy.ndarray, returns a list of examples created by `get_example`. .. admonition:: Example >>> import numpy >>> from chainer import dataset >>> class SimpleDataset(dataset.DatasetMixin): ... def __init__(self, values): ... self.values = values ... def __len__(self): ... return len(self.values) ... def get_example(self, i): ... return self.values[i] >>> ds = SimpleDataset([0, 1, 2, 3, 4, 5]) >>> ds[1] # Access by int 1 >>> ds[1:3] # Access by slice [1, 2] >>> ds[[4, 0]] # Access by one-dimensional integer list [4, 0] >>> index = numpy.arange(3) >>> ds[index] # Access by one-dimensional integer numpy.ndarray [0, 1, 2]'
def __getitem__(self, index):
if isinstance(index, slice):
    current, stop, step = index.indices(len(self))
    return [self.get_example(i)
            for i in six.moves.range(current, stop, step)]
elif isinstance(index, list) or isinstance(index, numpy.ndarray):
    return [self.get_example(i) for i in index]
else:
    return self.get_example(index)
'Returns the number of data points.'
def __len__(self):
raise NotImplementedError
'Returns the i-th example. Implementations should override it. It should raise :class:`IndexError` if the index is invalid. Args: i (int): The index of the example. Returns: The i-th example.'
def get_example(self, i):
raise NotImplementedError
'Parent hyperparameter object.'
@property
def parent(self):
return self._parent
'Converts the hyperparameter into a dictionary. Returns: Dictionary containing all entries that can be referred by this hyperparameter object.'
def get_dict(self):
d = {} if self._parent is None else self._parent.get_dict()
for k, v in six.iteritems(self.__dict__):
    if k != '_parent':
        d[k] = v
return d
'State dictionary.'
@property
def state(self):
return self._state
'Adds a hook function. The hook function is called before any updates. Args: hook (callable): Hook function to be added. It takes two arguments: the update rule object and the parameter variable. name (str): Name of the hook function. The name attribute of the hook function is used by default.'
def add_hook(self, hook, name=None):
if not callable(hook):
    raise TypeError('hook function must be callable')
if name is None:
    name = getattr(hook, 'name', getattr(hook, '__name__', None))
    if name is None:
        raise ValueError('the name of the hook function is not specified')
if name in self._hooks:
    raise ValueError('hook "{}" already exists'.format(name))
self._hooks[name] = hook
'Removes the specified hook function. Args: name (str): Name of the hook function to be removed. The hook function registered with this name will be removed.'
def remove_hook(self, name):
del self._hooks[name]
'Invokes hook functions and updates the parameter. Args: param (~chainer.Variable): Variable to be updated.'
def update(self, param):
if not self.enabled:
    return
self.t += 1
self._prepare(param)
for hook in six.itervalues(self._hooks):
    hook(self, param)
self.update_core(param)
'Updates the parameter. Implementation of UpdateRule should override this method or both of :meth:`_update_core_cpu` and :meth:`_update_core_gpu`. Args: param (~chainer.Variable): Variable to be updated.'
def update_core(self, param):
with cuda.get_device_from_array(param.data) as dev:
    if int(dev) == -1:
        self.update_core_cpu(param)
    else:
        self.update_core_gpu(param)
'Updates the parameter on CPU. See :meth:`update_core` for details. Args: param (~chainer.Variable): Variable to be updated.'
def update_core_cpu(self, param):
raise NotImplementedError
'Updates the parameter on GPU. See :meth:`update_core` for details. Args: param (~chainer.Variable): Variable to be updated.'
def update_core_gpu(self, param):
raise NotImplementedError
'Initializes the state. Any implementations that use the state should override this method. This method is called at the first update. Args: param (~chainer.Variable): Parameter variable. It can be used to extract the shape and the data type of the parameter.'
def init_state(self, param):
pass
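# Minimal sketch (not part of the library source): a custom CPU-only update rule
# implementing plain SGD by overriding ``update_core_cpu``; GPU updates would
# additionally need ``update_core_gpu``.
from chainer.optimizer import UpdateRule

class VanillaSGDRule(UpdateRule):
    def __init__(self, lr=0.01):
        super(VanillaSGDRule, self).__init__()
        self.lr = lr

    def update_core_cpu(self, param):
        grad = param.grad
        if grad is not None:
            param.data -= self.lr * grad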
'Serializes the update rule state. Be careful that this method only saves/loads the state of the update rule. The parameters of the target link are not saved/loaded by this method, and so you need to serialize the target link separately if you want to fully recover the training state including parameters. Args: serializer (~chainer.AbstractSerializer): Serializer object.'
def serialize(self, serializer):
if self.state is None:
    if isinstance(serializer, serializer_module.Deserializer):
        self._state = {}
        self_copy = copy.copy(self)
        arr = numpy.empty(1, dtype=numpy.float32)
        self_copy.init_state(variable.Variable(arr, grad=arr))
        for key in self._state:
            self._state[key] = serializer(key, None)
else:
    for key in self._state:
        self._state[key] = serializer(key, self._state[key])
'Sets a target link and initializes the optimizer states. Given link is set to the :attr:`target` attribute. It also prepares the optimizer state dictionaries corresponding to all parameters in the link hierarchy. The existing states are discarded. Args: link (~chainer.Link): Target link object.'
def setup(self, link):
if not isinstance(link, link_module.Link):
    raise TypeError('optimization target must be a link')
self.target = link
self.t = 0
self.epoch = 0
self._hooks = collections.OrderedDict()
'Updates the parameters. This method updates the parameters of the target link. The behavior of this method is different for the cases either ``lossfun`` is given or not. If ``lossfun`` is given, this method typically clears the gradients, calls the loss function with given extra arguments, and calls the :meth:`~chainer.Variable.backward` method of its output to compute the gradients. The actual implementation might call ``lossfun`` more than once. If ``lossfun`` is not given, then this method assumes that the gradients of all parameters are already computed. An implementation that requires multiple gradient computations might raise an error on this case. In both cases, this method invokes the update procedure for all parameters. Args: lossfun (function): Loss function. It accepts arbitrary arguments and returns one :class:`~chainer.Variable` object that represents the loss (or objective) value. This argument can be omitted for single gradient-based methods. In this case, this method assumes gradient arrays computed. args, kwds: Arguments for the loss function.'
def update(self, lossfun=None, *args, **kwds):
raise NotImplementedError
'Starts a new epoch. This method increments the :attr:`epoch` count. Note that if the optimizer depends on the epoch count, then user should call this method appropriately at the beginning of each epoch.'
def new_epoch(self):
self.epoch += 1
'Registers a hook function. Hook function is typically called right after the gradient computation, though the timing depends on the optimization method. Args: hook (function): Hook function. If ``hook.call_for_each_param`` is true, this hook function is called for each parameter by passing the update rule and the parameter. Otherwise, this hook function is called only once each iteration by passing the optimizer. name (str): Name of the registration. If omitted, ``hook.name`` is used by default.'
def add_hook(self, hook, name=None):
if not callable(hook):
    raise TypeError('hook function is not callable')
if not hasattr(self, '_hooks'):
    raise RuntimeError('call `setup` method before `add_hook` method')
if name is None:
    name = hook.name
if name in self._hooks:
    raise KeyError('hook %s already exists' % name)
self._hooks[name] = hook
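# Usage sketch (not part of the library source): registering a built-in hook.
# WeightDecay sets ``call_for_each_param``, so it runs once per parameter.
import chainer
import chainer.links as L
from chainer.optimizer import WeightDecay

model = L.Linear(3, 2)
optimizer = chainer.optimizers.SGD(lr=0.01)
optimizer.setup(model)
optimizer.add_hook(WeightDecay(1e-4))   # registered under hook.name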
'Removes a hook function. Args: name (str): Registered name of the hook function to remove.'
def remove_hook(self, name):
del self._hooks[name]
'Invokes hook functions in registration order.'
def call_hooks(self):
for hook in six.itervalues(self._hooks): self._call_hook(hook)
'Serializes or deserializes the optimizer. It only saves or loads the following things: - Optimizer states - Global states (:attr:`t` and :attr:`epoch`) **It does not save or load the parameters of the target link.** They should be separately saved or loaded. Args: serializer (~chainer.AbstractSerializer): Serializer or deserializer object.'
def serialize(self, serializer):
self.t = serializer('t', self.t)
self.epoch = serializer('epoch', self.epoch)
for name, param in self.target.namedparams():
    rule = getattr(param, 'update_rule', None)
    if rule is not None:
        rule.serialize(serializer[name])
'Reallocate gradients cleared by :meth:`~chainer.Variable.cleargrad`. This method allocates arrays for all gradients which have :obj:`None`. This method is called before and after every optimizer hook. If an inheriting optimizer does not require this allocation, the optimizer can override this method with a blank function.'
def reallocate_cleared_grads(self):
for name, param in self.target.namedparams(False):
    if param.grad is None:
        with cuda.get_device_from_array(param.data):
            xp = cuda.get_array_module(param.data)
            param.grad = xp.zeros_like(param.data)
'Invokes hook functions in registration order.'
def call_hooks(self):
for hook in six.itervalues(self._hooks):
    self._call_hook(hook)
    self.reallocate_cleared_grads()
'Updates parameters based on a loss function or computed gradients. This method runs in two ways. - If ``lossfun`` is given, then it is used as a loss function to compute gradients. - Otherwise, this method assumes that the gradients are already computed. In both cases, the computed gradients are used to update parameters. The actual update routines are defined by the update rule of each parameter.'
def update(self, lossfun=None, *args, **kwds):
if lossfun is not None:
    use_cleargrads = getattr(self, '_use_cleargrads', True)
    loss = lossfun(*args, **kwds)
    if use_cleargrads:
        self.target.cleargrads()
    else:
        self.target.zerograds()
    loss.backward()
    del loss

self.reallocate_cleared_grads()
self.call_hooks()

self.t += 1
for param in self.target.params():
    param.update()
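# Usage sketch (not part of the library source): the two calling conventions of
# ``update`` for gradient-based optimizers.
import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L

model = L.Linear(4, 2)
optimizer = chainer.optimizers.SGD(lr=0.1)
optimizer.setup(model)

x = np.random.rand(5, 4).astype(np.float32)
t = np.random.randint(0, 2, size=5).astype(np.int32)

# 1) Pass a loss function: cleargrads/backward are handled internally.
optimizer.update(lambda: F.softmax_cross_entropy(model(x), t))

# 2) Compute gradients yourself, then call update() with no arguments.
model.cleargrads()
loss = F.softmax_cross_entropy(model(x), t)
loss.backward()
optimizer.update()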
'Enables or disables use of :func:`~chainer.Link.cleargrads` in `update`. Args: use (bool): If ``True``, this function enables use of `cleargrads`. If ``False``, disables use of `cleargrads` (`zerograds` is used). .. deprecated:: v2.0 Note that :meth:`update` calls :meth:`~Link.cleargrads` by default. :meth:`~Link.cleargrads` is more efficient than :meth:`~Link.zerograds`, so one does not have to call :meth:`use_cleargrads`. This method remains for backward compatibility.'
def use_cleargrads(self, use=True):
warnings.warn(
    'GradientMethod.use_cleargrads is deprecated.', DeprecationWarning)
self._use_cleargrads = use
'Creates a new update rule object. This method creates an update rule object. It is called by :meth:`setup` to set up an update rule of each parameter. Each implementation of the gradient method should override this method to provide the default update rule implementation. Return: UpdateRule: Update rule object.'
def create_update_rule(self):
raise NotImplementedError
'Array module for this link. Depending on which of CPU/GPU this link is on, this property returns :mod:`numpy` or :mod:`cupy`.'
@property
def xp(self):
return (numpy if self._cpu else cuda.cupy)
'True if the current code is inside of an initialization scope. See :meth:`init_scope` for the details of the initialization scope.'
@property
def within_init_scope(self):
return getattr(self, '_within_init_scope', False)
'Creates an initialization scope. This method returns a context manager object that enables registration of parameters (and links for :class:`~chainer.Chain`) by an assignment. A :class:`~chainer.Parameter` object can be automatically registered by assigning it to an attribute under this context manager. .. admonition:: Example In most cases, the parameter registration is done in the initializer method. Using the ``init_scope`` method, we can simply assign a :class:`~chainer.Parameter` object to register it to the link. .. code-block:: python class MyLink(chainer.Link): def __init__(self): super().__init__() with self.init_scope(): self.W = chainer.Parameter(0, (10, 5)) self.b = chainer.Parameter(0, (5,))'
@contextlib.contextmanager
def init_scope(self):
old_flag = self.within_init_scope
self._within_init_scope = True
try:
    yield
finally:
    self._within_init_scope = old_flag
'Registers a parameter to the link. .. deprecated:: v2.0.0 Assign a :class:`~chainer.Parameter` object directly to an attribute within :meth:`an initialization scope <init_scope>` instead. For example, the following code .. code-block:: python link.add_param(\'W\', shape=(5, 3)) can be replaced by the following assignment. .. code-block:: python with self.init_scope(): link.W = chainer.Parameter(None, (5, 3)) The latter one is easier for IDEs to keep track of the attribute\'s type. Args: name (str): Name of the parameter. This name is also used as the attribute name. shape (int or tuple of ints): Shape of the parameter array. If it is omitted, the parameter variable is left uninitialized. dtype: Data type of the parameter array. initializer: If it is not ``None``, the data is initialized with the given initializer. If it is an array, the data is directly initialized by it. If it is callable, it is used as a weight initializer. Note that in these cases, ``dtype`` argument is ignored.'
def add_param(self, name, shape=None, dtype=numpy.float32, initializer=None):
warnings.warn('Parameter registeration via Link.__init__ and Link.add_param are deprecated.\nAssign a Parameter object directly to an attribute within a "with Link.init_scope():" block instead.\n', DeprecationWarning)
if name in self.__dict__:
    raise AttributeError(
        'cannot register a new parameter %s: attribute exists' % name)
if initializer is None:
    initializer = initializers.NaN(dtype)
param = variable.Parameter(initializer, shape)
with self.init_scope():
    setattr(self, name, param)
'Registers a persistent value to the link. The registered value is saved and loaded on serialization and deserialization. The value is set to an attribute of the link. Args: name (str): Name of the persistent value. This name is also used for the attribute name. value: Value to be registered.'
def add_persistent(self, name, value):
d = self.__dict__
if name in d:
    raise AttributeError(
        'cannot register a new persistent value %s: attribute exists' % name)
self._persistent.add(name)
self._params.discard(name)
d[name] = value
'Registers an attribute of a given name as a persistent value. This is a convenient method to register an existing attribute as a persistent value. If ``name`` has been already registered as a parameter, this method removes it from the list of parameter names and re-registers it as a persistent value. Args: name (str): Name of the attribute to be registered.'
def register_persistent(self, name):
if not hasattr(self, name):
    raise AttributeError(
        'cannot register non-existent attribute %s as a persistent value' % name)
self._persistent.add(name)
self._params.discard(name)
'Copies the link hierarchy to new one. The whole hierarchy rooted by this link is copied. The copy is basically shallow, except that the parameter variables are also shallowly copied. It means that the parameter variables of copied one are different from ones of original link, while they share the data and gradient arrays. The name of the link is reset on the copy, since the copied instance does not belong to the original parent chain (even if exists). Returns: Link: Copied link object.'
def copy(self):
ret = copy.copy(self)
ret._params = set(self._params)
ret._persistent = set(self._persistent)
ret.name = None
d = ret.__dict__
for name in ret._params:
    d[name] = copy.copy(d[name])
    d[name].grad = None
return ret
'Copies parameter variables and persistent values to CPU. This method does not handle non-registered attributes. If some of such attributes must be copied to CPU, the link implementation must override this method to do so. Returns: self'
def to_cpu(self):
if self._cpu:
    return self
d = self.__dict__
for name in self._params:
    d[name].to_cpu()
for name in self._persistent:
    value = d[name]
    if isinstance(value, cuda.ndarray):
        d[name] = value.get()
self._cpu = True
self._device_id = None
return self
'Copies parameter variables and persistent values to GPU. This method does not handle non-registered attributes. If some of such attributes must be copied to GPU, the link implementation must override this method to do so. Args: device: Target device specifier. If omitted, the current device is used. Returns: self'
def to_gpu(self, device=None):
cuda.check_cuda_available()
if not self._cpu:
    return self
d = self.__dict__
with cuda._get_device(device):
    for name in self._params:
        d[name].to_gpu()
    for name in self._persistent:
        value = d[name]
        if isinstance(value, numpy.ndarray):
            d[name] = cuda.to_gpu(value)
    self._device_id = cuda.cupy.cuda.get_device_id()
self._cpu = False
return self
'Returns a generator of all parameters under the link hierarchy. Args: include_uninit (bool): If ``True``, it also generates uninitialized parameters. Returns: A generator object that generates all parameters.'
def params(self, include_uninit=True):
d = self.__dict__
for name in self._params:
    if include_uninit or d[name].data is not None:
        yield d[name]
'Returns a generator of all (path, param) pairs under the hierarchy. Args: include_uninit (bool): If ``True``, it also generates uninitialized parameters. Returns: A generator object that generates all (path, parameter) pairs. The paths are relative from this link.'
def namedparams(self, include_uninit=True):
d = self.__dict__
for name in self._params:
    if include_uninit or d[name].data is not None:
        yield ('/' + name, d[name])
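# Usage sketch (not part of the library source): iterating parameters of a small
# chain; the yielded paths are relative to the root link.
import chainer
import chainer.links as L

class MLP(chainer.Chain):
    def __init__(self):
        super(MLP, self).__init__()
        with self.init_scope():
            self.l1 = L.Linear(4, 3)
            self.l2 = L.Linear(3, 2)

for path, param in MLP().namedparams():
    print(path, param.shape)   # e.g. /l1/W (3, 4), /l1/b (3,), /l2/W (2, 3), ...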
'Returns a generator of all links under the hierarchy. Args: skipself (bool): If ``True``, then the generator skips this link and starts with the first child link. Returns: A generator object that generates all links.'
def links(self, skipself=False):
if (not skipself): (yield self)
'Returns a generator of all (path, link) pairs under the hierarchy. Args: skipself (bool): If ``True``, then the generator skips this link and starts with the first child link. Returns: A generator object that generates all (path, link) pairs.'
def namedlinks(self, skipself=False):
if (not skipself): (yield ('/', self))
'Returns a generator of all child links. Returns: A generator object that generates all child links.'
def children(self):
if 0: (yield)
'Copies all parameters from given link. This method copies data arrays of all parameters in the hierarchy. The copy is even done across the host and devices. Note that this method does not copy the gradient arrays. Args: link (Link): Source link object.'
def copyparams(self, link):
src = link.__dict__
dst = self.__dict__
for name in self._params:
    dst[name].copydata(src[name])
'Clears all gradient arrays. This method should be called before the backward computation at every iteration of the optimization.'
def cleargrads(self):
for param in self.params(): param.cleargrad()
'Initializes all gradient arrays by zero. This method can be used for the same purpose as cleargrads, but is less efficient. It is left for backward compatibility. .. deprecated:: v1.15 Use :meth:`cleargrads` instead.'
def zerograds(self):
warnings.warn(
    'Link.zerograds is deprecated. Use Link.cleargrads instead.',
    DeprecationWarning)
for param in self.params():
    param.zerograd()
'Accumulates gradient values from given link. This method adds each gradient array of the given link to corresponding gradient array of this link. The accumulation is even done across host and different devices. Args: link (Link): Source link object.'
def addgrads(self, link):
src = link.__dict__
dst = self.__dict__
for name in self._params:
    dst[name].addgrad(src[name])
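# Sketch (not part of the library source) of the classic data-parallel pattern
# built on ``addgrads`` and ``copyparams``: accumulate worker gradients into a
# master link, update it, then broadcast parameters back to the worker.
import chainer.links as L

master = L.Linear(4, 2)
worker = master.copy()        # replica; each replica computes its own gradients

# ... forward/backward on a separate minibatch for each replica ...

master.addgrads(worker)       # sum the worker's gradients into the master
# run the optimizer on `master` here
worker.copyparams(master)     # push the updated parameters back to the worker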
'Enables update rules of all parameters under the link hierarchy. This method sets the :attr:`~chainer.UpdateRule.enabled` flag of the update rule of each parameter variable to ``True``.'
def enable_update(self):
for param in self.params():
    rule = param.update_rule
    if rule is not None:
        rule.enabled = True
'Disables update rules of all parameters under the link hierarchy. This method sets the :attr:`~chainer.UpdateRule.enabled` flag of the update rule of each parameter variable to ``False``.'
def disable_update(self):
for param in self.params():
    rule = param.update_rule
    if rule is not None:
        rule.enabled = False
'``True`` if at least one parameter has an update rule enabled.'
@property
def update_enabled(self):
for param in self.params():
    rule = param.update_rule
    if rule is not None and rule.enabled:
        return True
return False
'Serializes the link object. Args: serializer (~chainer.AbstractSerializer): Serializer object.'
def serialize(self, serializer):
d = self.__dict__
for name in self._params:
    param = d[name]
    data = serializer(name, param.data)
    if param.data is None and data is not None:
        param.initialize(data.shape)
        if isinstance(param.data, numpy.ndarray):
            numpy.copyto(param.data, data)
        else:
            param.data.set(numpy.asarray(data))
for name in self._persistent:
    d[name] = serializer(name, d[name])
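# Usage sketch (not part of the library source): ``Link.serialize`` is what the
# standard serializers drive when saving and loading a model.
import chainer.links as L
from chainer import serializers

model = L.Linear(3, 2)
serializers.save_npz('model.npz', model)             # save parameters/persistents
serializers.load_npz('model.npz', L.Linear(3, 2))    # load them into a fresh link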
'Equivalent to getattr.'
def __getitem__(self, name):
return getattr(self, name)
'Registers a child link to this chain. .. deprecated:: v2.0.0 Assign the child link directly to an attribute within :meth:`an initialization scope <chainer.Link.init_scope>`, instead. For example, the following code .. code-block:: python chain.add_link(\'l1\', L.Linear(3, 5)) can be replaced by the following line. .. code-block:: python with self.init_scope(): chain.l1 = L.Linear(3, 5) The latter one is easier for IDEs to keep track of the attribute\'s type. Args: name (str): Name of the child link. This name is also used as the attribute name. link (Link): The link object to be registered.'
def add_link(self, name, link):
warnings.warn('Child link registeration via Chain.__init__ and Chain.add_link are deprecated.\nAssign a Link object directly to an attribute within a "with link.init_scope():" block instead.\n ', DeprecationWarning)
if name in self.__dict__:
    raise AttributeError(
        'cannot register a new link %s: attribute exists' % name)
if not isinstance(link, Link):
    raise TypeError('cannot register a non-link object as a child')
with self.init_scope():
    setattr(self, name, link)
'Returns the child at given index. Args: index (int): Index of the child in the list. Returns: Link: The ``index``-th child link.'
def __getitem__(self, index):
return self._children[index]
'Returns the number of children.'
def __len__(self):
return len(self._children)
'Registers a child link and adds it to the tail of the list. This is equivalent to :meth:`add_link`. This method has been added to emulate the ``list`` interface. Args: link (Link): The link object to be registered.'
def append(self, link):
self.add_link(link)
'Registers a child link and adds it to the tail of the list. Args: link (Link): The link object to be registered.'
def add_link(self, link):
link.name = str(len(self._children))
self._children.append(link)
'Returns total elapsed time in seconds.'
def total_time(self):
return self._total_time
'Returns a summary of time profiling in functions. Returns: A summarized dictionary whose keys are function names and values are dictionaries of `elapsed_time` and `occurrence`.'
def summary(self):
summary = {}
for func, elapsed_time in self.call_history:
    function_name = func._impl_name
    if function_name not in summary:
        summary[function_name] = {'elapsed_time': 0, 'occurrence': 0}
    record = summary[function_name]
    record['elapsed_time'] += elapsed_time
    record['occurrence'] += 1
return summary
'Returns a human readable time.'
def _humanized_time(self, second):
for unit in ['sec', 'ms', 'us']:
    if second >= 1:
        return '%3.2f%s' % (second, unit)
    second *= 1000.0
return '%.2f%s' % (second, 'ns')
'Prints a summary report of time profiling in functions.'
def print_report(self, file=sys.stdout):
entries = [['FunctionName', 'ElapsedTime', 'Occurrence']]
for function_name, record in self.summary().items():
    elapsed_time = self._humanized_time(record['elapsed_time'])
    occurrence = str(record['occurrence'])
    entries.append([function_name, elapsed_time, occurrence])
entry_widths = []
entry_widths.append(max(len(f) for f, _, _ in entries))
entry_widths.append(max(len(e) for _, e, _ in entries))
entry_widths.append(max(len(o) for _, _, o in entries))
template = ' '.join('{:>%d}' % w for w in entry_widths)
for function_name, elapsed_time, occurrence in entries:
    line = template.format(function_name, elapsed_time, occurrence)
    file.write(line)
    file.write('\n')
file.flush()
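# Usage sketch (not part of the library source): collecting per-function timings
# with ``TimerHook`` and printing the report defined above.
import numpy as np
import chainer
import chainer.functions as F
from chainer.function_hooks import TimerHook

x = chainer.Variable(np.random.rand(100, 100).astype(np.float32))
hook = TimerHook()
with hook:
    y = F.relu(F.matmul(x, x))
    y.grad = np.ones_like(y.data)
    y.backward()
hook.print_report()   # one row per function: name, elapsed time, call count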
'The :class:`Function` object that this adapter is wrapping.'
@property
def function(self):
func = self._function
if func is not None:
    return func
weak_func = self._weak_function
return weak_func and weak_func()
'Applies forward propagation with chaining backward references. This method creates a new :class:`FunctionAdapter` object and runs the forward propagation using it. See :class:`FunctionNode` for the detailed behavior of building the computational graph. Args: inputs: Tuple of input :class:`Variable`, :class:`numpy.ndarray` or :class:`cupy.ndarray` objects. If the input is an :class:`numpy.ndarray` or a :class:`cupy.ndarray`, it is automatically wrapped with :class:`Variable`. Returns: One :class:`Variable` object or a tuple of multiple :class:`Variable` objects.'
def __call__(self, *inputs):
node = self.node
node._function = self
node._weak_function = None
self._node = weakref.ref(node)
self._owned_node = None
ret = node.apply(inputs)
if len(ret) == 1:
    return ret[0]
else:
    return tuple(ret)
'The input nodes of the function.'
@property
def inputs(self):
return self.node.inputs
'Weak references to the output nodes of the function.'
@property
def outputs(self):
return self.node.outputs
'The :class:`FunctionAdapter` object that wraps this Function. If the Function does not have a node object, this property automatically creates a new one.'
@property
def node(self):
noderef = self._node
nd = (noderef and noderef()) or self._owned_node
if nd is not None:
    return nd
nd = FunctionAdapter(self)
self._owned_node = nd
return nd
'Ordered Dictionary of registered function hooks. See :attr:`FunctionNode.local_function_hooks` for the detail.'
@property
def local_function_hooks(self):
return self.node.local_function_hooks
'Short text that represents the function. The default implementation returns its type name. Each function should override it to give more information.'
@property
def label(self):
return self.__class__.__name__
'A tuple of the retained output arrays. It has the same length as the :attr:`outputs`. Elements that are not retained are set to ``None``.'
@property
def output_data(self):
return self.node.output_data
'The topological ordinal of the corresponding function node.'
@property
def rank(self):
return self.node.rank