'Connects the module into the graph, with input Tensor `inputs`. Args: inputs: A 4D Tensor of shape: [batch_size, input_height, input_width, input_channels]. Returns: A 4D Tensor of shape: [batch_size, output_height, output_width, output_channels]. Raises: ValueError: If connecting the module into the graph any time after the first time and the inferred input size does not match previous invocations. ValueError: If `channel_multiplier` * `input_channels` > `output_channels`, which means that the separable convolution is overparameterized. base.IncompatibleShapeError: If the input tensor has the wrong number of dimensions; or if the input tensor has an unknown `input_channels`. TypeError: If input Tensor dtype is not tf.float32.'
def _build(self, inputs):
self._input_shape = tuple(inputs.get_shape().as_list())
if len(self._input_shape) != 4:
  raise base.IncompatibleShapeError('Input Tensor must have shape (batch_size, input_height, input_width, input_channels)')
if self._input_shape[3] is None:
  raise base.IncompatibleShapeError('Number of input channels must be known at module build time')
self._input_channels = self._input_shape[3]
if inputs.dtype != tf.float32:
  raise TypeError('Input must have dtype tf.float32, but dtype was ' + inputs.dtype.name)

depthwise_weight_shape = (self._kernel_shape[0], self._kernel_shape[1], self._input_channels, self._channel_multiplier)
pointwise_input_size = self._channel_multiplier * self._input_channels
pointwise_weight_shape = (1, 1, pointwise_input_size, self._output_channels)
bias_shape = (self._output_channels,)

if 'w_dw' not in self._initializers:
  fan_in_shape = depthwise_weight_shape[:3]
  self._initializers['w_dw'] = create_weight_initializer(fan_in_shape)
if 'w_pw' not in self._initializers:
  fan_in_shape = pointwise_weight_shape[:3]
  self._initializers['w_pw'] = create_weight_initializer(fan_in_shape)
if 'b' not in self._initializers and self._use_bias:
  self._initializers['b'] = create_bias_initializer(bias_shape)

self._w_dw = tf.get_variable('w_dw', shape=depthwise_weight_shape, initializer=self._initializers['w_dw'], partitioner=self._partitioners.get('w_dw', None), regularizer=self._regularizers.get('w_dw', None))
self._w_pw = tf.get_variable('w_pw', shape=pointwise_weight_shape, initializer=self._initializers['w_pw'], partitioner=self._partitioners.get('w_pw', None), regularizer=self._regularizers.get('w_pw', None))

outputs = tf.nn.separable_conv2d(inputs, self._w_dw, self._w_pw, strides=self._stride, padding=self._padding)

if self._use_bias:
  self._b = tf.get_variable('b', shape=bias_shape, initializer=self._initializers['b'], partitioner=self._partitioners.get('b', None), regularizer=self._regularizers.get('b', None))
  outputs += self._b

return outputs
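The shape bookkeeping above (a depthwise filter of shape `[kh, kw, input_channels, channel_multiplier]`, then a 1x1 pointwise filter of shape `[1, 1, input_channels * channel_multiplier, output_channels]`) can be checked directly against `tf.nn.separable_conv2d`. A minimal sketch, assuming TensorFlow 1.x; all sizes are illustrative:

```python
import tensorflow as tf

# Hypothetical sizes, chosen only to illustrate the shape contract above.
batch, height, width, in_channels = 8, 32, 32, 3
channel_multiplier, output_channels = 2, 16

inputs = tf.placeholder(tf.float32, [batch, height, width, in_channels])
# Depthwise filter: one 3x3 kernel per (input channel, multiplier) pair.
w_dw = tf.get_variable('w_dw', [3, 3, in_channels, channel_multiplier])
# Pointwise filter mixes the in_channels * channel_multiplier intermediate
# channels into output_channels with a 1x1 convolution.
w_pw = tf.get_variable('w_pw', [1, 1, in_channels * channel_multiplier, output_channels])

outputs = tf.nn.separable_conv2d(inputs, w_dw, w_pw, strides=[1, 1, 1, 1], padding='SAME')
print(outputs.get_shape())  # (8, 32, 32, 16)
```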
'Returns the number of input channels.'
@property def input_channels(self):
self._ensure_is_connected()
return self._input_channels
'Returns the number of output channels.'
@property def output_channels(self):
return self._output_channels
'Returns the channel multiplier.'
@property def channel_multiplier(self):
return self._channel_multiplier
'Returns the input shape.'
@property def input_shape(self):
self._ensure_is_connected()
return self._input_shape
'Returns the kernel shape.'
@property def kernel_shape(self):
return self._kernel_shape
'Returns the stride.'
@property def stride(self):
return self._stride
'Returns the padding algorithm.'
@property def padding(self):
return self._padding
'Returns the Variable containing the depthwise weight matrix.'
@property def w_dw(self):
self._ensure_is_connected()
return self._w_dw
'Returns the Variable containing the pointwise weight matrix.'
@property def w_pw(self):
self._ensure_is_connected()
return self._w_pw
'Returns the Variable containing the bias. Returns: Variable object containing the bias, from the most recent __call__. Raises: base.NotConnectedError: If the module has not been connected to the graph yet, meaning the variables do not exist. AttributeError: If the module does not use bias.'
@property def b(self):
self._ensure_is_connected()
if not self._use_bias:
  raise AttributeError('No bias Variable in SeparableConv2D Module when `use_bias=False`.')
return self._b
'Returns `True` if bias Variable is present in the module.'
@property def has_bias(self):
return self._use_bias
'Returns the initializers dictionary.'
@property def initializers(self):
return self._initializers
'Returns the partitioners dictionary.'
@property def partitioners(self):
return self._partitioners
'Returns the regularizers dictionary.'
@property def regularizers(self):
return self._regularizers
'Constructs a Conv3D module. See the following documentation for an explanation of VALID versus SAME padding modes: https://www.tensorflow.org/api_guides/python/nn#Convolution Args: output_channels: Number of output channels. `output_channels` can be either a number or a callable. In the latter case, since the function invocation is deferred to graph construction time, the user must only ensure that output_channels can be called, returning an integer, when `build` is called. kernel_shape: Sequence of kernel sizes (of size 3), or integer that is used to define kernel size in all dimensions. stride: Sequence of kernel strides (of size 3), or integer that is used to define stride in all dimensions. rate: Sequence of dilation rates (of size 3), or integer that is used to define dilation rate in all dimensions. 1 corresponds to standard 3D convolution, `rate > 1` corresponds to dilated convolution. Cannot be > 1 if any of `stride` is also > 1. padding: Padding algorithm, either `snt.SAME` or `snt.VALID`. use_bias: Whether to include bias parameters. Default `True`. initializers: Optional dict containing ops to initialize the filters (with key \'w\') or biases (with key \'b\'). The default initializer for the weights is a truncated normal initializer, which is commonly used when the inputs are zero centered (see https://arxiv.org/pdf/1502.03167v3.pdf). The default initializer for the bias is a zero initializer. partitioners: Optional dict containing partitioners to partition weights (with key \'w\') or biases (with key \'b\'). As a default, no partitioners are used. regularizers: Optional dict containing regularizers for the filters (with key \'w\') and the biases (with key \'b\'). As a default, no regularizers are used. A regularizer should be a function that takes a single `Tensor` as an input and returns a scalar `Tensor` output, e.g. the L1 and L2 regularizers in `tf.contrib.layers`. custom_getter: Callable or dictionary of callables to use as custom getters inside the module. If a dictionary, the keys correspond to regexes to match variable names. See the `tf.get_variable` documentation for information about the custom_getter API. name: Name of the module. Raises: base.IncompatibleShapeError: If the given kernel shape is not an integer; or if the given kernel shape is not a sequence of three integers. base.IncompatibleShapeError: If the given stride is not an integer; or if the given stride is not a sequence of three or five integers. base.IncompatibleShapeError: If the given rate is not an integer; or if the given rate is not a sequence of three integers. base.NotSupportedError: If rate in any dimension and the stride in any dimension are simultaneously > 1. ValueError: If the given padding is not `snt.VALID` or `snt.SAME`. KeyError: If `initializers`, `partitioners` or `regularizers` contain any keys other than \'w\' or \'b\'. TypeError: If any of the given initializers, partitioners or regularizers are not callable.'
def __init__(self, output_channels, kernel_shape, stride=1, rate=1, padding=SAME, use_bias=True, initializers=None, partitioners=None, regularizers=None, custom_getter=None, name='conv_3d'):
super(Conv3D, self).__init__(custom_getter=custom_getter, name=name)
self._output_channels = output_channels
self._input_shape = None
self._kernel_shape = _fill_and_verify_parameter_shape(kernel_shape, 3, 'kernel')
if isinstance(stride, collections.Iterable) and len(stride) == 5:
  self._stride = tuple(stride)[1:-1]
else:
  self._stride = _fill_and_verify_parameter_shape(stride, 3, 'stride')
self._rate = _fill_and_verify_parameter_shape(rate, 3, 'rate')
if any(x > 1 for x in self._stride) and any(x > 1 for x in self._rate):
  raise base.NotSupportedError('Cannot have stride > 1 with rate > 1')
self._padding = _verify_padding(padding)
self._use_bias = use_bias
self.possible_keys = self.get_possible_initializer_keys(use_bias=use_bias)
self._initializers = util.check_initializers(initializers, self.possible_keys)
self._partitioners = util.check_partitioners(partitioners, self.possible_keys)
self._regularizers = util.check_regularizers(regularizers, self.possible_keys)
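As a usage sketch of the constructor documented above (not from the source; sizes are illustrative, assuming the standard `snt` import alias used elsewhere in these examples):

```python
import tensorflow as tf
import sonnet as snt

# Illustrative sizes only: a batch of 4 volumes of 16x32x32 voxels, 1 channel.
inputs = tf.placeholder(tf.float32, [4, 16, 32, 32, 1])

conv = snt.Conv3D(output_channels=8,   # may also be a callable returning an int
                  kernel_shape=3,      # expanded to (3, 3, 3)
                  stride=1,
                  rate=1,
                  padding=snt.SAME)
outputs = conv(inputs)                 # shape [4, 16, 32, 32, 8] with SAME padding
```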
'Connects the Conv3D module into the graph, with input Tensor `inputs`. If this is not the first time the module has been connected to the graph, the input Tensor provided here must have the same final dimension (i.e. `input_channels`), in order for the existing variables to be the correct size for the multiplication. The batch size may differ for each connection. Args: inputs: A 5D Tensor of shape `[batch_size, input_depth, input_height, input_width, input_channels]`. Returns: A 5D Tensor of shape `[batch_size, output_depth, output_height, output_width, output_channels]`. Raises: ValueError: If connecting the module into the graph any time after the first time and the inferred size of the input does not match previous invocations. base.IncompatibleShapeError: If the input tensor has the wrong number of dimensions. base.UnderspecifiedError: If the input tensor has an unknown `input_channels`. TypeError: If input Tensor dtype is not `tf.float32`.'
def _build(self, inputs):
self._input_shape = tuple(inputs.get_shape().as_list())
if len(self._input_shape) != 5:
  raise base.IncompatibleShapeError('Input Tensor must have shape (batch_size, input_depth, input_height, input_width, input_channels)')
if self._input_shape[4] is None:
  raise base.UnderspecifiedError('Number of input channels must be known at module build time')
else:
  input_channels = self._input_shape[4]
if inputs.dtype != tf.float32:
  raise TypeError('Input must have dtype tf.float32, but dtype was {}'.format(inputs.dtype))

weight_shape = (self._kernel_shape[0], self._kernel_shape[1], self._kernel_shape[2], input_channels, self.output_channels)
bias_shape = (self.output_channels,)

if 'w' not in self._initializers:
  self._initializers['w'] = create_weight_initializer(weight_shape[:4])
if 'b' not in self._initializers and self._use_bias:
  self._initializers['b'] = create_bias_initializer(bias_shape)

self._w = tf.get_variable('w', shape=weight_shape, initializer=self._initializers['w'], partitioner=self._partitioners.get('w', None), regularizer=self._regularizers.get('w', None))

outputs = tf.nn.convolution(inputs, self._w, strides=self._stride, padding=self._padding, dilation_rate=self._rate)

if self._use_bias:
  self._b = tf.get_variable('b', shape=bias_shape, initializer=self._initializers['b'], partitioner=self._partitioners.get('b', None), regularizer=self._regularizers.get('b', None))
  outputs += self._b

return outputs
'Returns the number of output channels.'
@property def output_channels(self):
if callable(self._output_channels):
  self._output_channels = self._output_channels()
return self._output_channels
'Returns the input shape.'
@property def input_shape(self):
self._ensure_is_connected()
return self._input_shape
'Returns the kernel shape.'
@property def kernel_shape(self):
return self._kernel_shape
'Returns the stride.'
@property def stride(self):
return (((1,) + self._stride) + (1,))
'Returns the padding algorithm.'
@property def padding(self):
return self._padding
'Returns the Variable containing the weight matrix.'
@property def w(self):
self._ensure_is_connected()
return self._w
'Returns the Variable containing the bias.'
@property def b(self):
self._ensure_is_connected()
if not self._use_bias:
  raise AttributeError('No bias Variable in Conv3D Module when `use_bias=False`.')
return self._b
'Returns `True` if bias Variable is present in the module.'
@property def has_bias(self):
return self._use_bias
'Returns the initializers dictionary.'
@property def initializers(self):
return self._initializers
'Returns the partitioners dictionary.'
@property def partitioners(self):
return self._partitioners
'Returns the regularizers dictionary.'
@property def regularizers(self):
return self._regularizers
'Returns matching `Conv3DTranspose` module. Args: name: Optional string assigning name of transpose module. The default name is constructed by appending "_transpose" to `self.name`. Returns: `Conv3DTranspose` module. Raises: base.NotSupportedError: If `rate` in any dimension > 1.'
def transpose(self, name=None):
if any(x > 1 for x in self._rate):
  raise base.NotSupportedError('Cannot transpose a dilated convolution module.')
if name is None:
  name = self.module_name + '_transpose'
return Conv3DTranspose(output_channels=lambda: self.input_shape[-1],
                       output_shape=lambda: self.input_shape[1:-1],
                       kernel_shape=self.kernel_shape,
                       stride=self.stride,
                       padding=self.padding,
                       use_bias=self._use_bias,
                       initializers=self.initializers,
                       partitioners=self.partitioners,
                       regularizers=self.regularizers,
                       custom_getter=self._custom_getter,
                       name=name)
'Constructs a `Conv3DTranspose` module. See the following documentation for an explanation of VALID versus SAME padding modes: https://www.tensorflow.org/api_guides/python/nn#Convolution Args: output_channels: Number of output channels. `output_channels` can be either a number or a callable. In the latter case, since the function invocation is deferred to graph construction time, the user must only ensure `output_channels` can be called, returning an integer, when `build` is called. output_shape: Output shape of transpose convolution. Can be either an iterable of integers or a callable. In the latter case, since the function invocation is deferred to graph construction time, the user must only ensure that `output_shape` can be called, returning an iterable of format `(out_depth, out_height, out_width)` when `build` is called. Note that `output_shape` defines the size of output signal domain, as opposed to the shape of the output `Tensor`. If a None value is given, a default shape is automatically calculated (see docstring of _default_transpose_size function for more details). kernel_shape: Sequence of kernel sizes (of size 3), or integer that is used to define kernel size in all dimensions. stride: Sequence of kernel strides (of size 3), or integer that is used to define stride in all dimensions. padding: Padding algorithm, either `snt.SAME` or `snt.VALID`. use_bias: Whether to include bias parameters. Default `True`. initializers: Optional dict containing ops to initialize the filters (with key \'w\') or biases (with key \'b\'). partitioners: Optional dict containing partitioners to partition weights (with key \'w\') or biases (with key \'b\'). As a default, no partitioners are used. regularizers: Optional dict containing regularizers for the filters (with key \'w\') and the biases (with key \'b\'). As a default, no regularizers are used. A regularizer should be a function that takes a single `Tensor` as an input and returns a scalar `Tensor` output, e.g. the L1 and L2 regularizers in `tf.contrib.layers`. custom_getter: Callable or dictionary of callables to use as custom getters inside the module. If a dictionary, the keys correspond to regexes to match variable names. See the `tf.get_variable` documentation for information about the custom_getter API. name: Name of the module. Raises: module.IncompatibleShapeError: If the given kernel shape is neither an integer nor a sequence of three integers. module.IncompatibleShapeError: If the given stride is neither an integer nor a sequence of three or five integers. ValueError: If the given padding is not `snt.VALID` or `snt.SAME`. ValueError: If the given kernel_shape is `None`. KeyError: If `initializers`, `partitioners` or `regularizers` contain any keys other than \'w\' or \'b\'. TypeError: If any of the given initializers, partitioners or regularizers are not callable.'
def __init__(self, output_channels, output_shape=None, kernel_shape=None, stride=1, padding=SAME, use_bias=True, initializers=None, partitioners=None, regularizers=None, custom_getter=None, name='conv_3d_transpose'):
super(Conv3DTranspose, self).__init__(custom_getter=custom_getter, name=name)
self._output_channels = output_channels
if output_shape is None:
  self._output_shape = None
  self._use_default_output_shape = True
else:
  self._use_default_output_shape = False
  if callable(output_shape):
    self._output_shape = output_shape
  else:
    self._output_shape = _fill_and_verify_parameter_shape(output_shape, 3, 'output_shape')
self._input_shape = None
if kernel_shape is None:
  raise ValueError('`kernel_shape` cannot be None.')
self._kernel_shape = _fill_and_verify_parameter_shape(kernel_shape, 3, 'kernel')
if isinstance(stride, collections.Iterable) and len(stride) == 5:
  if not stride[0] == stride[3] == 1:
    raise base.IncompatibleShapeError('Invalid stride: First and last element must be 1.')
  self._stride = tuple(stride)
else:
  self._stride = _fill_and_one_pad_stride(stride, 3)
self._padding = _verify_padding(padding)
self._use_bias = use_bias
self.possible_keys = self.get_possible_initializer_keys(use_bias=use_bias)
self._initializers = util.check_initializers(initializers, self.possible_keys)
self._partitioners = util.check_partitioners(partitioners, self.possible_keys)
self._regularizers = util.check_regularizers(regularizers, self.possible_keys)
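A brief usage sketch of the constructor above (illustrative sizes, assuming the `snt` alias): `output_shape` fixes the spatial extent of the result, while `output_channels` fixes its channel count.

```python
import tensorflow as tf
import sonnet as snt

inputs = tf.placeholder(tf.float32, [2, 8, 8, 8, 16])  # illustrative sizes

deconv = snt.Conv3DTranspose(output_channels=4,
                             output_shape=(16, 16, 16),  # (out_depth, out_height, out_width)
                             kernel_shape=3,
                             stride=2,
                             padding=snt.SAME)
outputs = deconv(inputs)  # shape [2, 16, 16, 16, 4]
```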
'Connects the Conv3DTranspose module into the graph. If this is not the first time the module has been connected to the graph, the input Tensor provided here must have the same final dimension (i.e. `input_channels`), in order for the existing variables to be the correct size for the multiplication. The batch size may differ for each connection. Args: inputs: A 5D Tensor of shape [batch_size, input_depth, input_height, input_width, input_channels]. Returns: A 5D Tensor of shape [batch_size, output_depth, output_height, output_width, output_channels]. Raises: ValueError: If connecting the module into the graph any time after the first time and the inferred size of the input does not match previous invocations. module.IncompatibleShapeError: If the input tensor has the wrong number of dimensions; or if the input tensor has an unknown `input_channels`; or if `output_shape` is an iterable and is not in the format `(out_depth, out_height, out_width)`. TypeError: If input Tensor dtype is not `tf.float32`.'
def _build(self, inputs):
self._input_shape = tuple(inputs.get_shape().as_list())
if len(self._input_shape) != 5:
  raise base.IncompatibleShapeError('Input Tensor must have shape (batch_size, input_depth, input_height, input_width, input_channels)')
if self._input_shape[4] is None:
  raise base.IncompatibleShapeError('Number of input channels must be known at module build time')
input_channels = self._input_shape[4]
if inputs.dtype != tf.float32:
  raise TypeError('Input must have dtype tf.float32, but dtype was ' + inputs.dtype.name)

if self._use_default_output_shape:
  self._output_shape = lambda: _default_transpose_size(self._input_shape[1:-1], self.stride[1:-1], kernel_shape=self.kernel_shape, padding=self.padding)
if len(self.output_shape) != 3:
  raise base.IncompatibleShapeError('Output shape must be specified as (output_depth, output_height, output_width)')

weight_shape = (self._kernel_shape[0], self._kernel_shape[1], self._kernel_shape[2], self.output_channels, input_channels)
bias_shape = (self.output_channels,)

if 'w' not in self._initializers:
  fan_in = weight_shape[:3] + (weight_shape[4],)
  stddev = 1 / math.sqrt(np.prod(fan_in))
  self._initializers['w'] = tf.truncated_normal_initializer(stddev=stddev)
if 'b' not in self._initializers and self._use_bias:
  stddev = 1 / math.sqrt(np.prod(bias_shape))
  self._initializers['b'] = tf.truncated_normal_initializer(stddev=stddev)

self._w = tf.get_variable('w', shape=weight_shape, initializer=self._initializers['w'], partitioner=self._partitioners.get('w', None), regularizer=self._regularizers.get('w', None))

batch_size = tf.expand_dims(tf.shape(inputs)[0], 0)
conv_output_shape = tf.convert_to_tensor(tuple(self.output_shape) + (self.output_channels,))
output_shape = tf.concat([batch_size, conv_output_shape], 0)

outputs = tf.nn.conv3d_transpose(inputs, self._w, output_shape, strides=self._stride, padding=self._padding)

if self._use_bias:
  self._b = tf.get_variable('b', shape=bias_shape, initializer=self._initializers['b'], partitioner=self._partitioners.get('b', None), regularizer=self._regularizers.get('b', None))
  outputs += self._b

batch_size_value = inputs.get_shape()[0]
output_shape_value = (batch_size_value,) + self.output_shape + (self.output_channels,)
outputs.set_shape(output_shape_value)

return outputs
'Returns the number of output channels.'
@property def output_channels(self):
if callable(self._output_channels):
  self._output_channels = self._output_channels()
return self._output_channels
'Returns the kernel shape.'
@property def kernel_shape(self):
return self._kernel_shape
'Returns the stride.'
@property def stride(self):
return self._stride
'Returns the output shape.'
@property def output_shape(self):
if self._output_shape is None:
  self._ensure_is_connected()
if callable(self._output_shape):
  self._output_shape = tuple(self._output_shape())
return self._output_shape
'Returns the padding algorithm.'
@property def padding(self):
return self._padding
'Returns the Variable containing the weight matrix.'
@property def w(self):
self._ensure_is_connected()
return self._w
'Returns the Variable containing the bias. Returns: Variable object containing the bias, from the most recent __call__. Raises: module.NotConnectedError: If the module has not been connected to the graph yet, meaning the variables do not exist. AttributeError: If the module does not use bias.'
@property def b(self):
self._ensure_is_connected()
if not self._use_bias:
  raise AttributeError('No bias Variable in Conv3DTranspose Module when `use_bias=False`.')
return self._b
'Returns `True` if bias Variable is present in the module.'
@property def has_bias(self):
return self._use_bias
'Returns the initializers dictionary.'
@property def initializers(self):
return self._initializers
'Returns the partitioners dictionary.'
@property def partitioners(self):
return self._partitioners
'Returns the regularizers dictionary.'
@property def regularizers(self):
return self._regularizers
'Returns the input shape.'
@property def input_shape(self):
self._ensure_is_connected()
return self._input_shape
'Returns transposed Conv3DTranspose module, i.e. a Conv3D module.'
def transpose(self, name=None):
if name is None:
  name = self.module_name + '_transpose'
return Conv3D(output_channels=lambda: self.input_shape[-1],
              kernel_shape=self.kernel_shape,
              stride=self.stride[1:-1],
              padding=self.padding,
              use_bias=self._use_bias,
              initializers=self.initializers,
              partitioners=self.partitioners,
              regularizers=self.regularizers,
              custom_getter=self._custom_getter,
              name=name)
'Tests that the op can be instantiated twice with appropriate results. Implementations with inappropriate global registration of gradients will fail this test.'
def testTwoOps(self):
x = tf.placeholder(tf.float32, [1])
y = x * x
y = snt.scale_gradient(y, 0.1)
y = snt.scale_gradient(y, 0.1)
dydx = tf.gradients([y], [x])[0]

with self.test_session() as sess:
  dydx_, y_ = sess.run([dydx, y], feed_dict={x: [3.0]})
  self.assertAlmostEqual(dydx_[0], 2 * (0.1 ** 2) * 3.0, places=6)
  self.assertAlmostEqual(y_[0], 3.0 ** 2, places=6)
'Constructs a BatchNorm module. By default reduces over all input tensor dimensions apart from the final dimension. This has the effect of treating pixels in 1D/2D/3D images as additional elements of the minibatch. If this is not the desired behaviour, the user can specify the tensor indices to reduce over with `axis`. Args: axis: Optional iterable of indices of dimensions to reduce over. By default `None` and all dimensions except the last are reduced over. offset: Optional boolean to specify whether or not to apply a trained component-wise bias after the batch normalization and scaling. scale: Optional boolean to specify whether or not to apply a trained component-wise scale after the batch normalization. decay_rate: Decay rate of the exponential moving averages of the mean and variance. eps: Small number to avoid dividing by zero when dividing by the standard deviation. initializers: Optional dict containing ops to initialize the weights of the affine transform (`gamma` and `beta`). partitioners: Optional dict containing partitioners to partition the weights of the affine transform (`gamma` and `beta`). regularizers: Optional dict containing regularizers for the weights of the affine transform (\'gamma\' and \'beta\'). As a default, no regularizers are used. A regularizer should be a function that takes a single `Tensor` as an input and returns a scalar `Tensor` output, e.g. the L1 and L2 regularizers in `tf.contrib.layers`. update_ops_collection: Name of TensorFlow variable collection to add the moving average update ops to. If `None`, we instead add the update ops as control dependencies of the output of the module. This may result in some slowdown, as the feed-forward of the network is now blocked. By default, `tf.GraphKeys.UPDATE_OPS`. fused: Use nn.fused_batch_norm if True, nn.batch_normalization otherwise. name: Name of the module. Raises: KeyError: If `initializers` contains any keys other than `gamma`, `beta`, `moving_mean` or `moving_variance`. KeyError: If `partitioners` or `regularizers` contains any keys other than `gamma` or `beta`. TypeError: If any of the given initializers, partitioners or regularizers are not callable.'
def __init__(self, axis=None, offset=True, scale=False, decay_rate=0.999, eps=0.001, initializers=None, partitioners=None, regularizers=None, update_ops_collection='update_ops', fused=False, name='batch_norm'):
super(BatchNorm, self).__init__(name=name)
self._axis = axis
self._offset = offset
self._scale = scale
self._decay_rate = decay_rate
self._eps = eps
self._update_ops_collection = update_ops_collection
self._fused = fused
self._initializers = util.check_initializers(initializers, self.POSSIBLE_INITIALIZER_KEYS)
self._partitioners = util.check_partitioners(partitioners, self.POSSIBLE_PARTITIONER_KEYS)
self._regularizers = util.check_regularizers(regularizers, self.POSSIBLE_REGULARIZER_KEYS)
'Builds the statistics part of the graph when using moving variance. Args: input_batch: Input batch Tensor. axis: Indices of `input_batch` to reduce over. use_batch_stats: Boolean to indicate if batch statistics should be calculated, otherwise moving averages are returned. dtype: TensorFlow datatype to use for the moving mean and variance. Returns: Tuple of (mean, variance).'
def _build_statistics(self, input_batch, axis, use_batch_stats, dtype):
if self.MOVING_MEAN not in self._initializers:
  self._initializers[self.MOVING_MEAN] = create_mean_initializer()
self._moving_mean = tf.get_variable('moving_mean', dtype=dtype, shape=self._mean_shape, collections=[tf.GraphKeys.MOVING_AVERAGE_VARIABLES, tf.GraphKeys.GLOBAL_VARIABLES], initializer=self._initializers[self.MOVING_MEAN], trainable=False)

if self.MOVING_VARIANCE not in self._initializers:
  self._initializers[self.MOVING_VARIANCE] = create_variance_initializer()
self._moving_variance = tf.get_variable('moving_variance', dtype=dtype, shape=self._mean_shape, collections=[tf.GraphKeys.MOVING_AVERAGE_VARIABLES, tf.GraphKeys.GLOBAL_VARIABLES], initializer=self._initializers[self.MOVING_VARIANCE], trainable=False)

def build_batch_stats():
  'Builds the batch statistics calculation ops.'
  mean, variance = tf.nn.moments(input_batch, axis, keep_dims=True, name='normalize_moments')
  return mean, variance

def build_moving_stats():
  return (tf.identity(self._moving_mean), tf.identity(self._moving_variance))

mean, variance = utils.smart_cond(use_batch_stats, build_batch_stats, build_moving_stats)
return mean, variance
'Builds the moving average update ops when using moving variance. Args: mean: The mean value to update with. variance: The variance value to update with. is_training: Boolean Tensor to indicate if we\'re currently in training mode. Returns: Tuple of `(update_mean_op, update_variance_op)` when `is_training` is or could be `True`. Returns `None` when `is_training=False`.'
def _build_update_ops(self, mean, variance, is_training):
def build_update_ops():
  'Builds the exponential moving average update ops.'
  update_mean_op = moving_averages.assign_moving_average(variable=self._moving_mean, value=mean, decay=self._decay_rate, zero_debias=False, name='update_moving_mean').op
  update_variance_op = moving_averages.assign_moving_average(variable=self._moving_variance, value=variance, decay=self._decay_rate, zero_debias=False, name='update_moving_variance').op
  return update_mean_op, update_variance_op

def build_no_ops():
  return (tf.no_op(), tf.no_op())

is_training_const = utils.constant_value(is_training)
if is_training_const is None or is_training_const:
  update_mean_op, update_variance_op = utils.smart_cond(is_training, build_update_ops, build_no_ops)
  return update_mean_op, update_variance_op
else:
  return None
'Infers the data format for the fused batch norm. It uses the axis option to infer this information. Specifically, the axis value (0, 1, 2) corresponds to data format NHWC and the axis value (0, 2, 3) to data format NCHW. Args: input_batch: A Tensor of arbitrary dimension. Returns: A string description of the data format NHWC or NCHW. Raises: NotImplementedError: for input of dimensionality different from 4. ValueError: for axis configuration different from (0, 1, 2) and (0, 2, 3).'
def _infer_fused_data_format(self, input_batch):
input_shape = input_batch.get_shape().as_list()
input_shape_len = len(input_shape)
if input_shape_len != 4:
  raise NotImplementedError('fused batch norm supports only input with 4 dimensions, it received input of dimensionality {:d}'.format(input_shape_len))
axis = range(input_shape_len)[:-1] if self._axis is None else self._axis
axis = tuple(axis)
if axis == (0, 1, 2):
  return 'NHWC'
elif axis == (0, 2, 3):
  return 'NCHW'
else:
  raise ValueError('Invalid axis option {}. This does not correspond to either the NHWC format (0, 1, 2) or the NCHW (0, 2, 3).'.format(axis))
'Creates a fused batch normalization op.'
def _fused_batch_norm_op(self, input_batch, mean, variance, use_batch_stats):
gamma_flatten = tf.reshape(self._gamma, shape=(-1,))
beta_flatten = tf.reshape(self._beta, shape=(-1,))
flatten_mean = tf.reshape(mean, shape=(-1,))
flatten_variance = tf.reshape(variance, shape=(-1,))
use_batch_stats = tf.convert_to_tensor(use_batch_stats)

common_args = {'scale': gamma_flatten, 'offset': beta_flatten, 'epsilon': self._eps, 'data_format': self._infer_fused_data_format(input_batch), 'name': 'batch_norm'}

def use_batch_stats_fused_batch_norm():
  return tf.nn.fused_batch_norm(input_batch, mean=None, variance=None, is_training=True, **common_args)

def moving_average_fused_batch_norm():
  return tf.nn.fused_batch_norm(input_batch, mean=flatten_mean, variance=flatten_variance, is_training=False, **common_args)

batch_norm_op, mean, variance = utils.smart_cond(use_batch_stats, use_batch_stats_fused_batch_norm, moving_average_fused_batch_norm)

return batch_norm_op, mean, variance
'Creates a batch normalization op. It uses the tf.nn.batch_normalization op by default and the tf.nn.fused_batch_norm op to support fused batch normalization. Args: input_batch: An input Tensor of arbitrary dimension. mean: A mean tensor. variance: A variance tensor. use_batch_stats: A bool value that indicates whether the operation should use the batch statistics. Returns: A batch normalization operation. The current mean tensor. The current variance tensor.'
def _batch_norm_op(self, input_batch, mean, variance, use_batch_stats):
if self._fused:
  mean_shape = mean.get_shape()
  variance_shape = variance.get_shape()
  batch_norm_op, mean, variance = self._fused_batch_norm_op(input_batch, mean, variance, use_batch_stats)
  mean = tf.reshape(mean, mean_shape)
  variance = tf.reshape(variance, variance_shape)
else:
  batch_norm_op = tf.nn.batch_normalization(input_batch, mean, variance, self._beta, self._gamma, self._eps, name='batch_norm')
return batch_norm_op, mean, variance
'Sets up optional scale and offset factors.'
def _build_scale_offset(self, dtype):
self._beta = None
if self._offset or self._fused:
  if self.BETA not in self._initializers:
    self._initializers[self.BETA] = create_beta_initializer()
  self._beta = tf.get_variable(self.BETA, dtype=dtype, shape=self._mean_shape, initializer=self._initializers[self.BETA], partitioner=self._partitioners.get(self.BETA, None), regularizer=self._regularizers.get(self.BETA, None), trainable=self._offset)

self._gamma = None
if self._scale or self._fused:
  if self.GAMMA not in self._initializers:
    self._initializers[self.GAMMA] = create_gamma_initializer()
  self._gamma = tf.get_variable(self.GAMMA, dtype=dtype, shape=self._mean_shape, initializer=self._initializers[self.GAMMA], partitioner=self._partitioners.get(self.GAMMA, None), regularizer=self._regularizers.get(self.GAMMA, None), trainable=self._scale)
'Connects the BatchNorm module into the graph. Args: input_batch: A Tensor of arbitrary dimension. By default, the final dimension is not reduced over when computing the minibatch statistics. is_training: A boolean to indicate if the module should be connected in training mode, meaning the moving averages are updated. Can be a Tensor. test_local_stats: A boolean to indicate if local batch statistics should be used when `is_training=False`. If not, moving averages are used. By default `True`. Can be a Tensor. Returns: A tensor with the same shape as `input_batch`. Raises: base.IncompatibleShapeError: If `axis` is not valid for the input shape or has negative entries. base.NotSupportedError: If `input_batch` has data type of `tf.float16`.'
def _build(self, input_batch, is_training, test_local_stats=True):
input_shape = input_batch.get_shape()

if self._axis is not None:
  if len(self._axis) > len(input_shape):
    raise base.IncompatibleShapeError('Too many indices specified in axis: len({}) > len({}).'.format(self._axis, input_shape))
  if max(self._axis) >= len(input_shape):
    raise base.IncompatibleShapeError('One or more index in axis is too large for input shape: {} >= {:d}.'.format(self._axis, len(input_shape)))
  if min(self._axis) < 0:
    raise base.IncompatibleShapeError('Indices in axis must be non-negative: {} < 0.'.format(self._axis))
  axis = self._axis
else:
  axis = tuple(range(len(input_shape))[:-1])

dtype = input_batch.dtype
if dtype == tf.float16:
  raise base.NotSupportedError('BatchNorm does not support `tf.float16`, insufficient precision for calculating sufficient statistics.')

self._mean_shape = input_batch.get_shape().as_list()
for index in axis:
  self._mean_shape[index] = 1

use_batch_stats = is_training | test_local_stats

mean, variance = self._build_statistics(input_batch, axis, use_batch_stats, dtype)

self._build_scale_offset(dtype)

out, mean, variance = self._batch_norm_op(input_batch, mean, variance, use_batch_stats)

update_ops = self._build_update_ops(mean, variance, is_training)
if update_ops:
  if self._update_ops_collection:
    for update_op in update_ops:
      tf.add_to_collection(self._update_ops_collection, update_op)
  else:
    with tf.control_dependencies(update_ops):
      out = tf.identity(out)

return out
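Because the moving-average update ops are added to a collection by default rather than attached as control dependencies, callers are expected to run them alongside the training step. A minimal sketch, assuming the `snt` alias and the default `update_ops_collection`; the loss and sizes are placeholders for illustration:

```python
import tensorflow as tf
import sonnet as snt

inputs = tf.placeholder(tf.float32, [None, 64])   # illustrative feature batch
is_training = tf.placeholder(tf.bool, [])

bn = snt.BatchNorm()
normalized = bn(inputs, is_training=is_training)

loss = tf.reduce_mean(tf.square(normalized))       # stand-in loss for the sketch
optimizer = tf.train.GradientDescentOptimizer(0.1)

# The moving mean/variance updates live in the update-ops collection by
# default, so group them with the training step.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
  train_op = optimizer.minimize(loss)
```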
'Constructs an Embed module. Args: vocab_size: int. Number of unique tokens to embed. If not provided, an existing vocabulary matrix from which vocab_size can be inferred must be provided as existing_vocab. embed_dim: int or None. Number of dimensions to assign to each embedding. If not specified, a sensible default is chosen based on `vocab_size`. If an existing vocabulary matrix initializes the module, this should not be provided as it will be inferred. existing_vocab: a [vocab_size, embed_dim] vocabulary matrix. Will be converted to a tf.float32 tensor. If provided, neither vocab_size nor embed_dim should be provided, as they are inferred. initializers: Optional dict containing initializers for embeddings (with key \'embeddings\'). As a default, embeddings are initialized via a truncated normal distribution. partitioners: Optional dict containing partitioners for embeddings (with key \'embeddings\'). As a default, no partitioners are used. regularizers: Optional dict containing regularizers for embeddings (with key \'embeddings\'). As a default, no regularizers are used. A regularizer should be a function that takes a single `Tensor` as an input and returns a scalar `Tensor` output, e.g. the L1 and L2 regularizers in `tf.contrib.layers`. trainable: if True, the embeddings will be updated during training. If False, they are fixed to their initial values. If `trainable=False` and a regularizer is given, the resulting loss stays constant. name: string. Name for this module. Raises: ValueError: if neither vocab_size nor existing_vocab is provided, or if existing_vocab is provided along with vocab_size, embedding_dim, initializers, partitioners or regularizers (as these should be inferred).'
def __init__(self, vocab_size=None, embed_dim=None, existing_vocab=None, initializers=None, partitioners=None, regularizers=None, trainable=True, name='embed'):
if vocab_size is None and existing_vocab is None:
  raise ValueError('Must provide one of vocab_size or existing_vocab.')
if existing_vocab is not None and not all(x is None for x in [vocab_size, embed_dim, initializers, partitioners]):
  raise ValueError('If existing_vocab is provided, none of vocab_size, embedding_dim, initializers, or partitioners is needed.')

super(Embed, self).__init__(name=name)

self._existing_vocab = None
if existing_vocab is None:
  self._vocab_size = vocab_size
  self._embed_dim = embed_dim or _embedding_dim(self._vocab_size)
else:
  self._existing_vocab = tf.convert_to_tensor(existing_vocab, dtype=tf.float32)
  existing_vocab_shape = self._existing_vocab.get_shape().with_rank(2)
  existing_vocab_shape.assert_is_fully_defined()
  self._vocab_size, self._embed_dim = existing_vocab_shape.as_list()

self._initializers = util.check_initializers(initializers, self.POSSIBLE_INITIALIZER_KEYS)
self._partitioners = util.check_partitioners(partitioners, self.POSSIBLE_INITIALIZER_KEYS)
self._regularizers = util.check_regularizers(regularizers, self.POSSIBLE_INITIALIZER_KEYS)
self._trainable = trainable
'Lookup embeddings. Looks up an embedding vector for each value in `ids`. All ids must be within [0, vocab_size), else an `InvalidArgumentError` is raised at runtime. Args: ids: Tensor of dtype int64. Returns: Tensor of tf.shape(ids) + [embedding_dim] and dtype float32.'
def _build(self, ids):
if self._existing_vocab is None:
  if self.EMBEDDINGS not in self._initializers:
    self._initializers[self.EMBEDDINGS] = basic.create_linear_initializer(self._vocab_size)
  self._embeddings = tf.get_variable('embeddings', shape=[self._vocab_size, self._embed_dim], dtype=tf.float32, initializer=self._initializers[self.EMBEDDINGS], partitioner=self._partitioners.get(self.EMBEDDINGS, None), regularizer=self._regularizers.get(self.EMBEDDINGS, None), trainable=self._trainable)
else:
  self._embeddings = tf.get_variable('embeddings', dtype=tf.float32, initializer=self._existing_vocab, regularizer=self._regularizers.get(self.EMBEDDINGS, None), trainable=self._trainable)

return tf.nn.embedding_lookup(self._embeddings, ids, name='embedding_lookup')
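A short usage sketch of the lookup contract described above (vocabulary size, embedding size and ids are illustrative only):

```python
import tensorflow as tf
import sonnet as snt

embed = snt.Embed(vocab_size=1000, embed_dim=16)   # illustrative sizes

# ids may have any shape; the embedding dimension is appended to it.
ids = tf.constant([[3, 7, 7], [0, 999, 42]], dtype=tf.int64)
vectors = embed(ids)                               # shape [2, 3, 16], dtype float32
```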
'Size of input vocabulary.'
@property def vocab_size(self):
return self._vocab_size
'Size of embedding vectors.'
@property def embed_dim(self):
return self._embed_dim
'Returns the Variable containing embeddings. Returns: A 2D Variable containing one embedding vector per row, constructed in the most recent __call__. Raises: base.NotConnectedError: If the module has not been connected to the graph yet, meaning the variables do not exist.'
@property def embeddings(self):
self._ensure_is_connected()
return self._embeddings
'Construct a SkipConnectionCore. Args: base_core: Base RNNCore to wrap. input_shape: Shape of the input as tuple, excluding the batch size. name: Name of the module.'
def __init__(self, base_core, input_shape=None, name='skip_connection_core'):
super(SkipConnectionCore, self).__init__(name=name)
self._base_core = base_core
self._input_shape = input_shape
'Check that custom getters work appropriately.'
def testCustomGetter(self):
def custom_getter(getter, *args, **kwargs):
  kwargs['trainable'] = False
  return getter(*args, **kwargs)

inputs = tf.placeholder(tf.float32, shape=[self.batch_size, self.in_size])

lin1 = snt.Linear(output_size=self.out_size, custom_getter=custom_getter)
lin1(inputs)
self.assertEqual(0, len(tf.trainable_variables()))
self.assertEqual(2, len(tf.global_variables()))

lin2 = snt.Linear(output_size=self.out_size, custom_getter={'w': custom_getter})
lin2(inputs)
self.assertEqual(1, len(tf.trainable_variables()))
self.assertEqual(4, len(tf.global_variables()))
'Tests a particular device (e.g. gpu, cpu) placement. This test ensures that the following device placement is possible: * The Linear module is on the gpu, * the optimizer is declared to be on the cpu, * but when calling minimize on the optimizer, we pass True to colocate_gradients_with_ops. The test exists because, while one may expect `tf.matmul(X, w) + b` to be equivalent to `tf.nn.xw_plus_b(X, w, b)`, the latter results in an InvalidArgumentError under this placement. Warning: if there is no gpu available to tensorflow this test will be skipped with just a warning! This is because the test requires that tensorflow has access to a gpu, but often this is not the case.'
def testGradientColocation(self):
if not any(x.device_type == 'GPU' for x in device_lib.list_local_devices()):
  tf.logging.warn('Skipping the gradient colocation test as there is no gpu available to tensorflow.')
  return

n_outputs = 5
n_inputs = 3
batch_size = 7
linear = snt.Linear(n_outputs)

with tf.device('/cpu:*'):
  inputs = tf.placeholder(tf.float32, [batch_size, n_inputs])
  labels = tf.to_int64(np.ones(batch_size))

  with tf.device('/gpu:*'):
    outputs = linear(inputs)

  cross_entropy = tf.contrib.nn.deprecated_flipped_sparse_softmax_cross_entropy_with_logits(outputs, labels, name='xentropy')
  loss = tf.reduce_mean(cross_entropy, name='xentropy_mean')
  optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
  optimizer.minimize(loss, colocate_gradients_with_ops=True)

init = tf.global_variables_initializer()

try:
  with self.test_session(force_gpu=True) as sess:
    sess.run(init)
except tf.errors.InvalidArgumentError as e:
  self.fail('Cannot start the session. Details:\n' + e.message)
'Test where idx is an integer.'
def testBasicSelect(self):
shape0 = [2, 3]
shape1 = [2, 3, 4]
input0 = tf.random_uniform(shape=shape0)
input1 = tf.random_uniform(shape=shape1)
mod = snt.SelectInput(idx=0)
output = mod(input0, input1)
output0 = tf.identity(input0)
with self.test_session() as sess:
  out = sess.run([output, output0])
  self.assertAllEqual(out[0], out[1])
'Test where idx is a tuple.'
def testTupleSelect(self):
shape0 = [1, 2]
shape1 = [1, 2, 3]
shape2 = [1, 2, 3, 4]
input0 = tf.random_uniform(shape=shape0)
input1 = tf.random_uniform(shape=shape1)
input2 = tf.random_uniform(shape=shape2)
mod = snt.SelectInput(idx=(0, 2))
output = mod(input0, input1, input2)
output0 = tf.identity(input0)
output2 = tf.identity(input2)
with self.test_session() as sess:
  out = sess.run([output, [output0, output2]])
  self.assertAllEqual(out[0][0], out[1][0])
  self.assertAllEqual(out[0][1], out[1][1])
'Test where idx is a nested list.'
def testNestedListSelect(self):
shape0 = [1, 2]
shape1 = [1, 2, 3]
shape2 = [1, 2, 3, 4]
input0 = tf.random_uniform(shape=shape0)
input1 = tf.random_uniform(shape=shape1)
input2 = tf.random_uniform(shape=shape2)
mod = snt.SelectInput(idx=[2, [1, 0, 1]])
output = mod(input0, input1, input2)
output0 = tf.identity(input0)
output1 = tf.identity(input1)
output2 = tf.identity(input2)
with self.test_session() as sess:
  out = sess.run([output, [output2, [output1, output0, output1]]])
  self.assertAllEqual(out[0][0], out[1][0])
  self.assertAllEqual(out[0][1][0], out[1][1][0])
  self.assertAllEqual(out[0][1][1], out[1][1][1])
  self.assertAllEqual(out[0][1][2], out[1][1][2])
'Checks error on invalid idx value.'
def testInvalidIdxValue(self):
input1 = tf.placeholder(tf.float32, shape=[2, 3, 4, 5, 6])
input2 = tf.placeholder(tf.float32, shape=[7, 8])
invalid_idx = 2
mod = snt.SelectInput(idx=[invalid_idx])
err = '`idx` contains out of bound entries \\(they should be in the range \\[0, 2\\)\\)'
with self.assertRaisesRegexp(ValueError, err):
  mod(input1, input2)
'Checks error on invalid idx type.'
def testInvalidIdxType(self):
invalid_idx = 0.5
err = '`idx` should be a \\(nested\\) array/tuple, or an integer.'
with self.assertRaisesRegexp(TypeError, err):
  snt.SelectInput(idx=invalid_idx)
'Constructs a Linear module. Args: output_size: Output dimensionality. `output_size` can be either an integer or a callable. In the latter case, since the function invocation is deferred to graph construction time, the user must only ensure that output_size can be called, returning an integer, when build is called. use_bias: Whether to include bias parameters. Default `True`. initializers: Optional dict containing initializers to initialize the weights (with key \'w\') or biases (with key \'b\'). The default initializer for the weights is a truncated normal initializer, which is commonly used when the inputs are zero centered (see https://arxiv.org/pdf/1502.03167v3.pdf). The default initializer for the bias is a zero initializer. partitioners: Optional dict containing partitioners to partition weights (with key \'w\') or biases (with key \'b\'). As a default, no partitioners are used. regularizers: Optional dict containing regularizers for the weights (with key \'w\') and the biases (with key \'b\'). As a default, no regularizers are used. A regularizer should be a function that takes a single `Tensor` as an input and returns a scalar `Tensor` output, e.g. the L1 and L2 regularizers in `tf.contrib.layers`. custom_getter: Callable or dictionary of callables to use as custom getters inside the module. If a dictionary, the keys correspond to regexes to match variable names. See the `tf.get_variable` documentation for information about the custom_getter API. name: Name of the module. Raises: KeyError: If `initializers`, `partitioners` or `regularizers` contains any keys other than \'w\' or \'b\'. TypeError: If any of the given initializers, partitioners or regularizers are not callable.'
def __init__(self, output_size, use_bias=True, initializers=None, partitioners=None, regularizers=None, custom_getter=None, name='linear'):
super(Linear, self).__init__(custom_getter=custom_getter, name=name)
self._output_size = output_size
self._use_bias = use_bias
self._input_shape = None
self._w = None
self._b = None
self.possible_keys = self.get_possible_initializer_keys(use_bias=use_bias)
self._initializers = util.check_initializers(initializers, self.possible_keys)
self._partitioners = util.check_partitioners(partitioners, self.possible_keys)
self._regularizers = util.check_regularizers(regularizers, self.possible_keys)
'Connects the Linear module into the graph, with input Tensor `inputs`. If this is not the first time the module has been connected to the graph, the Tensor provided here must have the same final dimension, in order for the existing variables to be the correct size for the multiplication. The batch size may differ for each connection. Args: inputs: A 2D Tensor of size [batch_size, input_size]. Returns: A 2D Tensor of size [batch_size, output_size]. Raises: base.IncompatibleShapeError: If the input is not a 2-D `Tensor` with the size of the second dimension specified. base.IncompatibleShapeError: If reconnecting an already connected module into the graph, and the shape of the input is not compatible with previous inputs.'
def _build(self, inputs):
input_shape = tuple(inputs.get_shape().as_list())

if len(input_shape) != 2:
  raise base.IncompatibleShapeError('{}: rank of shape must be 2 not: {}'.format(self.scope_name, len(input_shape)))

if input_shape[1] is None:
  raise base.IncompatibleShapeError('{}: Input size must be specified at module build time'.format(self.scope_name))

if self._input_shape is not None and input_shape[1] != self._input_shape[1]:
  raise base.IncompatibleShapeError('{}: Input shape must be [batch_size, {}] not: [batch_size, {}]'.format(self.scope_name, self._input_shape[1], input_shape[1]))

self._input_shape = input_shape
dtype = inputs.dtype

if 'w' not in self._initializers:
  self._initializers['w'] = create_linear_initializer(self._input_shape[1], dtype)
if 'b' not in self._initializers and self._use_bias:
  self._initializers['b'] = create_bias_initializer(self._input_shape[1], dtype)

weight_shape = (self._input_shape[1], self.output_size)
self._w = tf.get_variable('w', shape=weight_shape, dtype=dtype, initializer=self._initializers['w'], partitioner=self._partitioners.get('w', None), regularizer=self._regularizers.get('w', None))
outputs = tf.matmul(inputs, self._w)

if self._use_bias:
  bias_shape = (self.output_size,)
  self._b = tf.get_variable('b', shape=bias_shape, dtype=dtype, initializer=self._initializers['b'], partitioner=self._partitioners.get('b', None), regularizer=self._regularizers.get('b', None))
  outputs += self._b

return outputs
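A brief usage sketch of the connection semantics described above (sizes illustrative): reconnecting the same module reuses its variables, so both outputs below share one weight matrix.

```python
import tensorflow as tf
import sonnet as snt

linear = snt.Linear(output_size=32)                  # illustrative output size

train_inputs = tf.placeholder(tf.float32, [64, 128])
eval_inputs = tf.placeholder(tf.float32, [16, 128])  # batch size may differ

train_out = linear(train_inputs)   # creates 'w' with shape [128, 32] and 'b' with shape [32]
eval_out = linear(eval_inputs)     # reuses the same variables
```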
'Returns the Variable containing the weight matrix. Returns: Variable object containing the weights, from the most recent __call__. Raises: base.NotConnectedError: If the module has not been connected to the graph yet, meaning the variables do not exist.'
@property def w(self):
self._ensure_is_connected()
return self._w
'Returns the Variable containing the bias. Returns: Variable object containing the bias, from the most recent __call__. Raises: base.NotConnectedError: If the module has not been connected to the graph yet, meaning the variables do not exist. AttributeError: If the module does not use bias.'
@property def b(self):
self._ensure_is_connected()
if not self._use_bias:
  raise AttributeError('No bias Variable in Linear Module when `use_bias=False`.')
return self._b
'Returns the module output size.'
@property def output_size(self):
if callable(self._output_size):
  self._output_size = self._output_size()
return self._output_size
'Returns `True` if bias Variable is present in the module.'
@property def has_bias(self):
return self._use_bias
'Returns the initializers dictionary.'
@property def initializers(self):
return self._initializers
'Returns the partitioners dictionary.'
@property def partitioners(self):
return self._partitioners
'Returns the regularizers dictionary.'
@property def regularizers(self):
return self._regularizers
'Returns a cloned `Linear` module. Args: name: Optional string assigning name of cloned module. The default name is constructed by appending "_clone" to `self.module_name`. Returns: Cloned `Linear` module.'
def clone(self, name=None):
if name is None:
  name = self.module_name + '_clone'
return Linear(output_size=self.output_size,
              use_bias=self._use_bias,
              initializers=self._initializers,
              partitioners=self._partitioners,
              regularizers=self._regularizers,
              name=name)
'Returns shape of input `Tensor` passed at last call to `build`.'
@property def input_shape(self):
self._ensure_is_connected()
return self._input_shape
'Returns transposed `Linear` module. Args: name: Optional string assigning name of transpose module. The default name is constructed by appending "_transpose" to `self.module_name`. Returns: Transposed `Linear` module.'
def transpose(self, name=None):
if name is None:
  name = self.module_name + '_transpose'
return Linear(output_size=lambda: self.input_shape[1],
              use_bias=self._use_bias,
              initializers=self._initializers,
              partitioners=self._partitioners,
              regularizers=self._regularizers,
              name=name)
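The deferred `output_size` (a lambda over `input_shape`) means the transposed module only needs the original to have been connected by the time the transpose itself is connected. A sketch of typical use (names and sizes illustrative); note that the transpose shares shapes, not weights:

```python
import tensorflow as tf
import sonnet as snt

encoder = snt.Linear(output_size=20)
decoder = encoder.transpose()        # output_size resolved later from encoder.input_shape

inputs = tf.placeholder(tf.float32, [None, 100])   # illustrative width
code = encoder(inputs)               # [batch, 20]
reconstruction = decoder(code)       # [batch, 100], sized from the encoder's input
```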
'Constructs an AddBias module that supports broadcasting. Args: output_shape: Output dimensionality. `output_shape` can be either `None`, a `tuple`, or a `callable`. In the latter case, since the function invocation is deferred to graph construction time, the user must only ensure that `output_shape` can be called, returning a tuple, when build is called. If `output_shape` is left as `None`, the size will be directly inferred by the input. bias_dims: List of which dimensions to retain from the input shape when constructing the bias. The remaining dimensions will get broadcasted over (given size of 1), and leading dimensions will be removed completely. For example, for an input of [batch_size, dim1_size, dim2_size, dim3_size] and `bias_dims=[1, 3]`, the resulting bias will have shape [dim1_size, 1, dim3_size]. The default is to retain all dimensions apart from the minibatch dimension. Trying to retain the bias shape over the minibatch dimension, e.g. `bias_dims=[0]`, will result in an error at build time. See the \'Example Usage\' section below for more information. initializers: Optional dict containing ops to initialize the biases (with key \'b\'). The default initializer for the bias is a zero initializer. partitioners: Optional dict containing a partitioner to partition the bias (with key \'b\'). As a default, no partitioner is used. regularizers: Optional dict containing regularizers of the biases (with key \'b\'). As a default, no regularizers are used. A regularizer should be a function that takes a single `Tensor` as an input and returns a scalar `Tensor` output, e.g. the L1 and L2 regularizers in `tf.contrib.layers`. name: Name of the module. Example Usage: ```python # Create a 4D input Tensor. input = tf.random_normal( shape=(batch_size, dim1_size, dim2_size, dim3_size)) # Create a scalar bias: scalar_bias = snt.AddBias(bias_dims=[]) scalar_bias_output = scalar_bias(input) scalar_bias.b.get_shape() # () # Create a bias over all non-minibatch dimensions: all_bias = snt.AddBias() # or snt.AddBias(bias_dims=None) all_bias_output = all_bias(input) all_bias.b.get_shape() # (dim1_size, dim2_size, dim3_size) # Create a bias over the last non-minibatch dimension: last_bias = snt.AddBias(bias_dims=[-1]) last_bias_output = last_bias(input) last_bias.b.get_shape() # (dim3_size) # Create a bias over the first non-minibatch dimension: first_bias = snt.AddBias(bias_dims=[1]) first_bias_output = first_bias(input) first_bias.b.get_shape() # (dim1_size, 1, 1) # Subtract and later add the same learned bias: bias = snt.AddBias() hidden1 = bias(input, multiplier=-1) reconstructed_input = bias(hidden4) ``` Raises: KeyError: If `initializers` contains any keys other than \'b\'. KeyError: If `partitioners` contains any keys other than \'b\'. KeyError: If `regularizers` contains any keys other than \'b\'. TypeError: If any of the given initializers are not callable. TypeError: If any of the given partitioners are not callable. TypeError: If any of the given regularizers are not callable.'
def __init__(self, output_shape=None, bias_dims=None, initializers=None, partitioners=None, regularizers=None, name='add'):
super(AddBias, self).__init__(name=name)
self._output_shape = output_shape
self._input_shape = None
self._bias_dims = bias_dims
self._b = None
self._initializers = util.check_initializers(initializers, self.POSSIBLE_INITIALIZER_KEYS)
self._partitioners = util.check_partitioners(partitioners, self.POSSIBLE_INITIALIZER_KEYS)
self._regularizers = util.check_regularizers(regularizers, self.POSSIBLE_INITIALIZER_KEYS)
'Connects the Add module into the graph, with input Tensor `inputs`. Args: inputs: A Tensor of size `[batch_size, input_size1, ...]`. multiplier: A scalar or Tensor which the bias term is multiplied by before adding it to `inputs`. Anything which works in the expression `bias * multiplier` is acceptable here. This may be useful if you want to add a bias in one place and subtract the same bias in another place via `multiplier=-1`. Returns: A Tensor of size `[batch_size, input_size1, ...]`. Raises: base.IncompatibleShapeError: If the input is not a >= 2D `Tensor`. base.IncompatibleShapeError: If connecting the module into the graph any time after the first time, and the inferred size of the input does not match previous invocations. base.IncompatibleShapeError: If the `output_shape` has been specified but it does not match the `input_shape`. base.ParentNotBuiltError: If the module is transposed and the original untransposed module has not been built.'
def _build(self, inputs, multiplier=1):
input_shape = tuple(inputs.get_shape().as_list()) bias_shape = calculate_bias_shape(input_shape, self._bias_dims) if (len(input_shape) < 2): raise base.IncompatibleShapeError('Rank of input shape must be >=2 not: {}.'.format(len(input_shape))) if ((self._input_shape is not None) and (input_shape[1:] != self._input_shape[1:])): raise base.IncompatibleShapeError('Input shape has changed.') if callable(self._output_shape): self._output_shape = self._output_shape() if (self._output_shape is None): raise base.ParentNotBuiltError('Build the original untransposed module before building this one.') if ((self._output_shape is not None) and (self._output_shape[1:] != input_shape[1:])): raise base.IncompatibleShapeError('Input shape must be {} not: {}.'.format(self._output_shape, input_shape)) self._input_shape = input_shape dtype = inputs.dtype if ('b' not in self._initializers): self._initializers['b'] = create_bias_initializer(bias_shape, dtype) self._b = tf.get_variable('b', shape=bias_shape, dtype=dtype, initializer=self._initializers['b'], partitioner=self._partitioners.get('b', None), regularizer=self._regularizers.get('b', None)) bias = self._b if (multiplier != 1): bias *= multiplier outputs = (inputs + bias) return outputs
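The bias shape comes from `calculate_bias_shape`, defined elsewhere in this file. As a rough illustration of the rule described in the constructor docstring (retain the listed dims, broadcast the rest with size 1, drop leading dims), here is a minimal standalone sketch; `calculate_bias_shape_sketch` is a hypothetical name and the real helper may differ in details such as validation:

```python
def calculate_bias_shape_sketch(input_shape, bias_dims):
  """Hypothetical re-implementation of the bias-shape rule described above."""
  input_rank = len(input_shape)
  if bias_dims is None:
    # Default: retain everything except the leading (minibatch) dimension.
    return tuple(input_shape[1:])
  if not bias_dims:
    # Empty list: a scalar bias.
    return ()
  # Normalise negative indices; the minibatch dimension may not be retained.
  dims = sorted(d % input_rank for d in bias_dims)
  if dims[0] == 0:
    raise ValueError("Cannot retain the minibatch dimension (dim 0).")
  return tuple(input_shape[d] if d in dims else 1
               for d in range(dims[0], input_rank))

# calculate_bias_shape_sketch([32, 4, 5, 6], [1, 3]) == (4, 1, 6)
# calculate_bias_shape_sketch([32, 4, 5, 6], [-1]) == (6,)
```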
'Returns the Variable containing the bias. Returns: Variable object containing the bias, from the most recent __call__. Raises: base.NotConnectedError: If the module has not been connected to the graph yet, meaning the variables do not exist.'
@property def b(self):
self._ensure_is_connected() return self._b
'Returns shape of input `Tensor` passed at last call to `build`.'
@property def input_shape(self):
self._ensure_is_connected() return self._input_shape
'Returns transposed `AddBias` module. Args: name: Optional string assigning name of transpose module. The default name is constructed by appending "_transpose" to `self.module_name`. Returns: Transposed `AddBias` module.'
def transpose(self, name=None):
if (name is None): name = (self.module_name + '_transpose') return AddBias(output_shape=(lambda : self._input_shape), bias_dims=self._bias_dims, initializers=self._initializers, regularizers=self._regularizers, name=name)
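Because the transposed module takes its `output_shape` from a lambda over `self._input_shape`, the original module must be connected before the transposed one is built. A minimal usage sketch, assuming TF1 graph mode with `tf` and `snt` imported; the tensor names and shapes are illustrative:

```python
import tensorflow as tf
import sonnet as snt

inputs = tf.placeholder(tf.float32, shape=[None, 16])
bias = snt.AddBias()
centered = bias(inputs, multiplier=-1)  # build the original module first
bias_t = bias.transpose()
restored = bias_t(centered)             # a new bias with the matching shape
```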
'Constructs a BatchReshape module. Args: shape: Shape to reshape the input Tensor to while preserving its first `preserve_dims` dimensions; `shape` can be either a tuple/list, or a callable that returns the actual shape. The callable does not need to be ready to return something meaningful at construction time, but it will be required to be able to do so when the module is connected to the graph. When the special value -1 appears in `shape` the corresponding size is automatically inferred. Note that -1 can only appear once in `shape`. To flatten all non-batch dimensions, the snt.BatchFlatten module can also be used. preserve_dims: Number of leading dimensions that will not be reshaped. For example, given an input Tensor with shape `[B, H, W, C, D]`, and argument `shape` equal to `(-1, D)`: * `preserve_dims=1` will return a Tensor with shape `[B, H*W*C, D]`. * `preserve_dims=2` will return a Tensor with shape `[B, H, W*C, D]`. * `preserve_dims=3` will return a Tensor with shape `[B, H, W, C, D]`. * `preserve_dims=4` will return a Tensor with shape `[B, H, W, C, 1, D]`. * `preserve_dims>=5` will throw an error on build unless D=1. The preserved dimensions can be unknown at building time. name: Name of the module. Raises: ValueError: If `preserve_dims <= 0`.'
def __init__(self, shape, preserve_dims=1, name='batch_reshape'):
super(BatchReshape, self).__init__(name=name) self._input_shape = None self._shape = shape self._preserve_dims = preserve_dims if (preserve_dims <= 0): raise ValueError('Argument preserve_dims should be >= 1.') if (not callable(self._shape)): self._shape = tuple(self._shape)
'Replaces the -1 wildcard in the output shape vector. This function infers the correct output shape given the input dimensions. Args: dimensions: List of input non-batch dimensions. Returns: Tuple of non-batch output dimensions.'
def _infer_shape(self, dimensions):
n = np.prod(dimensions) m = np.prod(abs(np.array(self._shape))) v = np.array(self._shape) v[(v == (-1))] = (n // m) return tuple(v)
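As a worked example of the inference above: with non-preserved input dimensions `[4, 5, 6]` and desired shape `(-1, 10)`, `n = 120` and `m = 10`, so the wildcard becomes `12`. A standalone NumPy sketch of the same arithmetic:

```python
import numpy as np

dimensions = [4, 5, 6]            # non-preserved input dims, product n = 120
desired = np.array([-1, 10])      # at most one wildcard allowed
inferred = desired.copy()
inferred[desired == -1] = np.prod(dimensions) // np.prod(np.abs(desired))
print(tuple(inferred))            # (12, 10)
```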
'Connects the module into the graph, with input Tensor `inputs`. Args: inputs: A Tensor of shape [b_1, b_2, ..., b_preserve_dims, b_preserve_dims+1, ...]. Returns: A Tensor of shape [b_1, b_2, ..., b_preserve_dims, b_reshape_1, b_reshape_2, ...], with reshaping defined by the constructor `shape` parameter. Raises: ValueError: If output shape is incompatible with input shape; or if shape array contains non numeric entries; or if shape array contains more than 1 wildcard -1; or if the input array contains unknown, non-preserved dimensions (except when the unknown dimension is the only non-preserved dimension and doesn\'t actually need reshaping).'
def _build(self, inputs):
full_input_shape = inputs.get_shape().as_list() if (len(full_input_shape) < self._preserve_dims): raise ValueError('Input tensor has {} dimensions, should have at least as many as preserve_dims={}'.format(len(full_input_shape), self._preserve_dims)) self._input_shape = full_input_shape[self._preserve_dims:] if callable(self._shape): self._shape = tuple(self._shape()) if ((len(self._input_shape) == 1) and (len(self._shape) == 1)): if ((self._shape[0] == (-1)) or (self._shape[0] == self._input_shape[0])): return inputs elif (self._input_shape[0] is None): raise ValueError('Unknown non-preserved dimensions are not allowed in the input to BatchReshape unless it is only one and the desired shape is (-1,).') else: raise ValueError('Output shape is incompatible with input shape') if (not all([(isinstance(x, numbers.Integral) and ((x > 0) or (x == (-1)))) for x in self._shape])): raise ValueError('Desired shape can only contain positive integral numbers and the wildcard -1. Given shape {}'.format(self._shape)) if (self._shape.count((-1)) > 1): raise ValueError('Wildcard -1 can appear only once in desired output shape. Given shape {}'.format(self._shape)) preserved_shape = tf.shape(inputs)[:self._preserve_dims] preserved_shape_list = inputs.get_shape()[:self._preserve_dims] if (None in self._input_shape): raise ValueError('Unknown non-preserved dimensions are not allowed in the input to BatchReshape unless it is only one and the desired shape is (-1,). The offending non-preserved input shape is {}'.format(self._input_shape)) if (self._shape.count((-1)) > 0): trailing_shape = self._infer_shape(self._input_shape) else: trailing_shape = self._shape if (np.prod(self._input_shape) != np.prod(trailing_shape)): raise ValueError('Output shape is incompatible with input shape') shape = tf.concat([preserved_shape, trailing_shape], 0) output = tf.reshape(inputs, shape) shape_list = preserved_shape_list.concatenate(trailing_shape) output.set_shape(shape_list) return output
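A minimal usage sketch, assuming TF1 graph mode with `tf` and `snt` imported; the shapes are illustrative:

```python
import tensorflow as tf
import sonnet as snt

# [B, 28, 28, 3] -> [B, 784, 3]: the wildcard is inferred as 28 * 28 = 784.
images = tf.placeholder(tf.float32, shape=[None, 28, 28, 3])
reshaper = snt.BatchReshape(shape=(-1, 3), preserve_dims=1)
rows = reshaper(images)            # shape [None, 784, 3]
```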
'Returns transpose batch reshape.'
def transpose(self, name=None):
if (name is None): name = (self.module_name + '_transpose') return BatchReshape(shape=(lambda : self.input_shape), preserve_dims=self._preserve_dims, name=name)
'Constructs a BatchFlatten module. Args: preserve_dims: Number of leading dimensions that will not be reshaped. For example, given an input Tensor with shape `[B, H, W, C]`: * `preserve_dims=1` will return a Tensor with shape `[B, H*W*C]`. * `preserve_dims=2` will return a Tensor with shape `[B, H, W*C]`. * `preserve_dims=3` will return the input itself, shape `[B, H, W, C]`. * `preserve_dims=4` will return a Tensor with shape `[B, H, W, C, 1]`. * `preserve_dims>=5` will throw an error on build. The preserved dimensions can be unknown at building time. name: Name of the module.'
def __init__(self, preserve_dims=1, name='batch_flatten'):
super(BatchFlatten, self).__init__(shape=((-1),), preserve_dims=preserve_dims, name=name)
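`BatchFlatten` is simply `BatchReshape` with `shape=(-1,)`, and its inherited `transpose()` rebuilds the original trailing shape. A minimal sketch under the same assumptions as above:

```python
import tensorflow as tf
import sonnet as snt

images = tf.placeholder(tf.float32, shape=[None, 28, 28, 3])
flatten = snt.BatchFlatten()
flat = flatten(images)                  # shape [None, 2352]
restored = flatten.transpose()(flat)    # shape [None, 28, 28, 3]
```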
'Constructs a FlattenTrailingDimensions module. For example, given an input Tensor with shape `[B, H, W, C]`: * `dim_from=1` will return a Tensor with shape `[B, H*W*C]`. * `dim_from=2` will return a Tensor with shape `[B, H, W*C]`. * `dim_from=3` will return the input itself. * `dim_from=4` will return a Tensor with shape `[B, H, W, C, 1]`. * `dim_from>=5` will generate a ValueError when building the module. The preserved dimensions can be unknown at building time. Equivalent to BatchFlatten(preserve_dims=dim_from, name=name). Args: dim_from: All dimensions after and including `dim_from` will be flattened into a single dimension. name: Name of the module. Raises: ValueError: If `dim_from <= 0`.'
def __init__(self, dim_from, name='batch_dim_from'):
if (dim_from <= 0): raise ValueError('Argument dim_from should be >= 1.') super(FlattenTrailingDimensions, self).__init__(shape=((-1),), preserve_dims=dim_from, name=name)
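A minimal sketch showing the `dim_from` semantics (illustrative shapes, TF1 graph mode assumed):

```python
import tensorflow as tf
import sonnet as snt

# [B, T, H, W, C]: keep batch and time, merge the per-frame dims.
video = tf.placeholder(tf.float32, shape=[None, 16, 28, 28, 3])
merge_frames = snt.FlattenTrailingDimensions(dim_from=2)
frames_flat = merge_frames(video)  # shape [None, 16, 2352]
```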
'Constructs a TrainableVariable module. Args: shape: Tensor shape. dtype: Tensor data type. initializers: Optional dictionary containing ops to initialize the weight Tensor, with key \'w\'. partitioners: Optional dict containing a partitioner to partition the weight (with key \'w\'). As a default, no partitioner is used. regularizers: Optional dict containing regularizers for the weights (with key \'w\'). As a default, no regularizers are used. A regularizer should be a function that takes a single `Tensor` as an input and returns a scalar `Tensor` output, e.g. the L1 and L2 regularizers in `tf.contrib.layers`. name: Name of the module. Raises: KeyError: If `initializers` contains any keys other than \'w\'. KeyError: If `partitioners` contains any keys other than \'w\'. KeyError: If `regularizers` contains any keys other than \'w\'. TypeError: If any of the given initializers are not callable. TypeError: If any of the given partitioners are not callable. TypeError: If any of the given regularizers are not callable.'
def __init__(self, shape, dtype=tf.float32, initializers=None, partitioners=None, regularizers=None, name='trainable_variable'):
super(TrainableVariable, self).__init__(name=name) self._shape = tuple(shape) self._dtype = dtype self._initializers = util.check_initializers(initializers, self.POSSIBLE_INITIALIZER_KEYS) self._partitioners = util.check_partitioners(partitioners, self.POSSIBLE_INITIALIZER_KEYS) self._regularizers = util.check_regularizers(regularizers, self.POSSIBLE_INITIALIZER_KEYS)
'Connects the TrainableVariable module into the graph. Returns: A Tensor of shape as determined in the constructor.'
def _build(self):
if ('w' not in self._initializers): stddev = (1 / math.sqrt(np.prod(self._shape))) self._initializers['w'] = tf.truncated_normal_initializer(stddev=stddev) self._w = tf.get_variable('w', shape=self._shape, dtype=self._dtype, initializer=self._initializers['w'], partitioner=self._partitioners.get('w', None), regularizer=self._regularizers.get('w', None)) return self._w
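A minimal usage sketch; the default weight initializer is the truncated normal with `stddev = 1/sqrt(prod(shape))` set up in the body above (TF1 graph mode assumed):

```python
import tensorflow as tf
import sonnet as snt

var_module = snt.TrainableVariable(shape=(10, 4))
w = var_module()                    # a [10, 4] trainable Tensor
loss = tf.reduce_sum(tf.square(w))  # consumable by any downstream op
```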
'Returns the Variable containing the weights Tensor. Returns: Variable object containing the weights, from the most recent __call__. Raises: base.NotConnectedError: If the module has not been connected to the graph yet, meaning the variables do not exist.'
@property def w(self):
self._ensure_is_connected() return self._w
'Constructor of the module. Args: module_or_op: Module or tensorflow op to apply to an input tensor. n_dims: Number of dimensions to merge before using module on the input of BatchApply. input_example_index: Index of input that has same shape for the first `n_dims` dimensions as `module_or_op` output(s). This is used for unflattening the output(s) if static shape inference is not possible. name: Name of the module. Raises: TypeError: If n_dims is not an integer. ValueError: If n_dims is not greater than zero.'
def __init__(self, module_or_op, n_dims=2, input_example_index=0, name='batch_apply'):
super(BatchApply, self).__init__(name=name) if (not isinstance(n_dims, int)): raise TypeError(('n_dims should be an integer, it is a %s instead.' % type(n_dims))) if (n_dims <= 0): raise ValueError('n_dims should be greater than zero.') self._module = module_or_op self._n_dims = n_dims self._input_example_index = input_example_index
'Connects the BatchApply module into the graph. Args: *args: a Tensor or a nested list or dictionary of Tensors. The input tensors will have their first dimensions merged, then an op or a module will be called on the input. The first dimension of the output tensor(s) will be split again based on the leading dimensions of the first input tensor. **kwargs: Dictionary of named arguments; used in the same way as `*args`. Returns: A Tensor or nested list or dictionary of Tensors as a result of applying the process above. ("None" return values are also supported.)'
def _build(self, *args, **kwargs):
flattened = nest.flatten_iterable([args, kwargs]) merged_flattened = [(merge_leading_dims(inp, self._n_dims) if (inp is not None) else None) for inp in flattened] (merged_args, merged_kwargs) = nest.pack_iterable_as([args, kwargs], merged_flattened) results = self._module(*merged_args, **merged_kwargs) example_input = tf.convert_to_tensor(flattened[self._input_example_index]) def _split_to_original_leading_dims(result): if (result is None): return None else: return split_leading_dim(result, example_input, self._n_dims) flat_results = nest.flatten_iterable(results) flat_unmerged_results = [_split_to_original_leading_dims(result) for result in flat_results] return nest.pack_iterable_as(results, flat_unmerged_results)
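A minimal usage sketch: applying a rank-2 module (`snt.Linear`) to a time-major `[T, B, D]` Tensor by merging the first two dimensions, applying the module, and splitting the leading dimensions back out (TF1 graph mode assumed; shapes are illustrative):

```python
import tensorflow as tf
import sonnet as snt

sequence = tf.placeholder(tf.float32, shape=[20, 32, 128])  # [T, B, D]
linear = snt.Linear(output_size=64)
outputs = snt.BatchApply(linear, n_dims=2)(sequence)        # [20, 32, 64]
```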
'Constructs the `SliceByDim` module. Args: dims: The dimensions to slice along, as a list of unique integers. Negative integers index from the final dimension backwards, as in python arrays. begin: The beginning indices of the slicing, as a list of integers. Must be the same length as the `dims` list. size: The size of the slices, as a list of integers. Must be the same length as the `dims` list. name: The name of the module. Raises: ValueError: If `dims` has non-unique integers, or if the size of `begin` is different from the size of `dims`, or if the size of `size` is different from the size of `dims`.'
def __init__(self, dims, begin, size, name='slice_by_dim'):
super(SliceByDim, self).__init__(name=name) self._dims = dims self._begin = begin self._size = size if (np.unique(dims).size != len(dims)): raise ValueError('dims must not have any repeated integers.') if (len(begin) != len(dims)): raise ValueError('begin must have the same length as dims: {}.'.format(len(dims))) if (len(size) != len(dims)): raise ValueError('size must have the same length as dims: {}.'.format(len(dims)))
'Connects the SliceByDim module into the graph. Args: inputs: `Tensor` to slice. Its rank must be at least `max(dims) + 1`, since dimensions are 0-indexed. Returns: The sliced tensor. Raises: ValueError: If `inputs` tensor has insufficient rank.'
def _build(self, inputs):
shape_inputs = inputs.get_shape().as_list() rank = len(shape_inputs) max_dim = (np.max(self._dims) + 1) if (rank < max_dim): raise ValueError('Rank of inputs must be at least {}.'.format(max_dim)) full_begin = ([0] * rank) full_size = ([(-1)] * rank) for (dim, begin, size) in zip(self._dims, self._begin, self._size): full_begin[dim] = begin full_size[dim] = size return tf.slice(inputs, begin=full_begin, size=full_size)
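A minimal usage sketch (illustrative shapes, TF1 graph mode assumed): slicing along dims 1 and 2 while leaving the batch and channel dims untouched:

```python
import tensorflow as tf
import sonnet as snt

x = tf.placeholder(tf.float32, shape=[None, 4, 6, 8])
slicer = snt.SliceByDim(dims=[1, 2], begin=[1, 2], size=[2, 3])
cropped = slicer(x)                # shape [None, 2, 3, 8]
```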
'Constructs the `TileByDim` module. Args: dims: The dimensions to tile along, as a list of unique integers. multiples: The multiple of the tiling, as a list of integers. Must be the same length as the `dims` list. name: The name of the module. Raises: ValueError: If `dims` has non-unique integers, or if the size of `multiples` is different from the size of `dims`.'
def __init__(self, dims, multiples, name='tile_by_dim'):
super(TileByDim, self).__init__(name=name) self._dims = dims self._multiples = multiples if (np.unique(dims).size != len(dims)): raise ValueError('dims must not have any repeated integers.') if (len(multiples) != len(dims)): raise ValueError('multiples must have the same length as dims: {}.'.format(len(dims)))
'Connects the `TileByDim` module into the graph. Args: inputs: `Tensor` to tile. Returns: The tiled tensor.'
def _build(self, inputs):
shape_inputs = inputs.get_shape().as_list() rank = len(shape_inputs) full_multiples = ([1] * rank) for (dim, multiple) in zip(self._dims, self._multiples): full_multiples[dim] = multiple return tf.tile(inputs, multiples=full_multiples)
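A minimal usage sketch (illustrative shapes, TF1 graph mode assumed): tiling by 2 along dim 1 and by 3 along dim 3:

```python
import tensorflow as tf
import sonnet as snt

x = tf.placeholder(tf.float32, shape=[None, 4, 6, 8])
tiler = snt.TileByDim(dims=[1, 3], multiples=[2, 3])
tiled = tiler(x)                   # shape [None, 8, 6, 24]
```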
'Constructs the MergeDims module. Args: start: Start of the range of dimensions to merge. size: Size of the range of dimensions to merge. name: The name of the module. Raises: ValueError: If `size` is not strictly greater than 1.'
def __init__(self, start, size, name='merge_dims'):
super(MergeDims, self).__init__(name=name) self._start = start self._size = size if (size <= 1): raise ValueError('`size` should be strictly greater than 1.')
'Connects the MergeDims module into the graph. Args: inputs: Tensor or a nested list of Tensors to merge. Its rank must be greater than or equal to `start` + `size`. Returns: The merged Tensor or a nested list of merged Tensors. Raises: ValueError: If any of the `inputs` tensors has insufficient rank.'
def _build(self, inputs):
if nest.is_sequence(inputs): merged_tensors = [self._merge(tensor) for tensor in nest.flatten(inputs)] return nest.pack_sequence_as(inputs, merged_tensors) return self._merge(inputs)
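A minimal usage sketch (illustrative shapes, TF1 graph mode assumed): merging dims 1 and 2 of a 5D Tensor; nested structures of Tensors are handled the same way, element-wise:

```python
import tensorflow as tf
import sonnet as snt

x = tf.placeholder(tf.float32, shape=[None, 5, 7, 7, 16])  # [B, T, H, W, C]
merger = snt.MergeDims(start=1, size=2)
merged = merger(x)                 # shape [None, 35, 7, 16]
```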