<jupyter_start><jupyter_text>Convolutional autoencoder for image denoising**Author:** [Santiago L. Valdarrama](https://twitter.com/svpino)**Date created:** 2021/03/01**Last modified:** 2021/03/01**Description:** How to train a deep convolutional autoencoder for image denoising. Introduction This example demonstrates how to implement a deep convolutional autoencoder for image denoising, mapping noisy digit images from the MNIST dataset to clean digit images. This implementation is based on an original blog post titled [Building Autoencoders in Keras](https://blog.keras.io/building-autoencoders-in-keras.html) by [François Chollet](https://twitter.com/fchollet). Setup<jupyter_code>import numpy as np
import matplotlib.pyplot as plt
from keras import layers
from keras.datasets import mnist
from keras.models import Model
def preprocess(array):
"""Normalizes the supplied array and reshapes it."""
array = array.astype("float32") / 255.0
array = np.reshape(array, (len(array), 28, 28, 1))
return array
def noise(array):
"""Adds random noise to each image in the supplied array."""
noise_factor = 0.4
noisy_array = array + noise_factor * np.random.normal(
loc=0.0, scale=1.0, size=array.shape
)
return np.clip(noisy_array, 0.0, 1.0)
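# Editor's note (addition): with noise_factor = 0.4 the added Gaussian noise
# has a standard deviation of 0.4 in pixel units, and np.clip keeps the
# corrupted pixels inside the valid [0, 1] range used by the clean targets.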
def display(array1, array2):
"""Displays ten random images from each array."""
n = 10
indices = np.random.randint(len(array1), size=n)
images1 = array1[indices, :]
images2 = array2[indices, :]
plt.figure(figsize=(20, 4))
for i, (image1, image2) in enumerate(zip(images1, images2)):
ax = plt.subplot(2, n, i + 1)
plt.imshow(image1.reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
ax = plt.subplot(2, n, i + 1 + n)
plt.imshow(image2.reshape(28, 28))
plt.gray()
ax.get_xaxis().set_visible(False)
ax.get_yaxis().set_visible(False)
plt.show()<jupyter_output><empty_output><jupyter_text>Prepare the data<jupyter_code># Since we only need images from the dataset to encode and decode, we
# won't use the labels.
(train_data, _), (test_data, _) = mnist.load_data()
# Normalize and reshape the data
train_data = preprocess(train_data)
test_data = preprocess(test_data)
# Create a copy of the data with added noise
noisy_train_data = noise(train_data)
noisy_test_data = noise(test_data)
# Display the train data and a version of it with added noise
display(train_data, noisy_train_data)<jupyter_output><empty_output><jupyter_text>Build the autoencoder We are going to use the Functional API to build our convolutional autoencoder.<jupyter_code>input = layers.Input(shape=(28, 28, 1))
# Encoder
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(input)
x = layers.MaxPooling2D((2, 2), padding="same")(x)
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
x = layers.MaxPooling2D((2, 2), padding="same")(x)
# Decoder
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(x)
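# Editor's note (addition): the encoder halves the spatial resolution twice
# (28 -> 14 -> 7), the two strided Conv2DTranspose layers restore it
# (7 -> 14 -> 28), and the final sigmoid maps pixel values back into [0, 1].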
# Autoencoder
autoencoder = Model(input, x)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.summary()<jupyter_output><empty_output><jupyter_text>Now we can train our autoencoder using `train_data` as both our input data and target. Notice we are setting up the validation data using the same format.<jupyter_code>autoencoder.fit(
x=train_data,
y=train_data,
epochs=50,
batch_size=128,
shuffle=True,
validation_data=(test_data, test_data),
)<jupyter_output><empty_output><jupyter_text>Let's predict on our test dataset and display the original image together with the prediction from our autoencoder. Notice how the predictions are pretty close to the original images, although not quite the same.<jupyter_code>predictions = autoencoder.predict(test_data)
display(test_data, predictions)<jupyter_output><empty_output><jupyter_text>Now that we know that our autoencoder works, let's retrain it using the noisy data as our input and the clean data as our target. We want our autoencoder to learn how to denoise the images.<jupyter_code>autoencoder.fit(
x=noisy_train_data,
y=train_data,
epochs=100,
batch_size=128,
shuffle=True,
validation_data=(noisy_test_data, test_data),
)<jupyter_output><empty_output><jupyter_text>Let's now predict on the noisy data and display the results of our autoencoder. Notice how the autoencoder does an amazing job at removing the noise from the input images.<jupyter_code>predictions = autoencoder.predict(noisy_test_data)
display(noisy_test_data, predictions)<jupyter_output><empty_output>
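<jupyter_text>Editor's addition (a hedged sketch, not part of the original example): we can also quantify the denoising quality numerically with the peak signal-to-noise ratio (PSNR), where higher values indicate a closer match to the clean images. The `psnr` helper below is a hypothetical name introduced here for illustration.<jupyter_code># Per-image PSNR in decibels for arrays scaled to [0, 1].
def psnr(clean, estimate, max_val=1.0):
    mse = np.mean((clean - estimate) ** 2, axis=(1, 2, 3))
    return 10.0 * np.log10((max_val**2) / mse)


print(f"Mean PSNR (noisy inputs):     {psnr(test_data, noisy_test_data).mean():.2f} dB")
print(f"Mean PSNR (denoised outputs): {psnr(test_data, predictions).mean():.2f} dB")<jupyter_output><empty_output>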
<jupyter_start><jupyter_text>FixRes: Fixing train-test resolution discrepancy**Author:** [Sayak Paul](https://twitter.com/RisingSayak)**Date created:** 2021/10/08**Last modified:** 2021/10/10**Description:** Mitigating resolution discrepancy between training and test sets. Introduction It is a common practice to use the same input image resolution while training and testing vision models. However, as investigated in [Fixing the train-test resolution discrepancy](https://arxiv.org/abs/1906.06423) (Touvron et al.), this practice leads to suboptimal performance. Data augmentation is an indispensable part of the training process of deep neural networks. For vision models, we typically use random resized crops during training and center crops during inference. This introduces a discrepancy in the object sizes seen during training and inference. As shown by Touvron et al., if we can fix this discrepancy, we can significantly boost model performance. In this example, we implement the **FixRes** techniques introduced by Touvron et al. to fix this discrepancy. Imports<jupyter_code>import keras
from keras import layers
import tensorflow as tf # just for image processing and pipeline
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
import matplotlib.pyplot as plt<jupyter_output><empty_output><jupyter_text>Load the `tf_flowers` dataset<jupyter_code>train_dataset, val_dataset = tfds.load(
"tf_flowers", split=["train[:90%]", "train[90%:]"], as_supervised=True
)
num_train = train_dataset.cardinality()
num_val = val_dataset.cardinality()
print(f"Number of training examples: {num_train}")
print(f"Number of validation examples: {num_val}")<jupyter_output><empty_output><jupyter_text>Data preprocessing utilities We create three datasets:1. A dataset with a smaller resolution - 128x128.2. Two datasets with a larger resolution - 224x224.We will apply different augmentation transforms to the larger-resolution datasets.The idea of FixRes is to first train a model on a smaller resolution dataset and then fine-tuneit on a larger resolution dataset. This simple yet effective recipe leads to non-trivial performanceimprovements. Please refer to the [original paper](https://arxiv.org/abs/1906.06423) forresults.<jupyter_code># Reference: https://github.com/facebookresearch/FixRes/blob/main/transforms_v2.py.
batch_size = 32
auto = tf.data.AUTOTUNE
smaller_size = 128
bigger_size = 224
size_for_resizing = int((bigger_size / smaller_size) * bigger_size)
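# Editor's note (addition): with smaller_size = 128 and bigger_size = 224,
# this resizes images to 392x392 before the 224x224 center crop, so the
# apparent object size during fine-tuning keeps the same ratio as the
# random crops seen during the initial low-resolution training.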
central_crop_layer = layers.CenterCrop(bigger_size, bigger_size)
def preprocess_initial(train, image_size):
"""Initial preprocessing function for training on smaller resolution.
For training, do random_horizontal_flip -> random_crop.
For validation, just resize.
No color-jittering has been used.
"""
def _pp(image, label, train):
if train:
channels = image.shape[-1]
begin, size, _ = tf.image.sample_distorted_bounding_box(
tf.shape(image),
tf.zeros([0, 0, 4], tf.float32),
area_range=(0.05, 1.0),
min_object_covered=0,
use_image_if_no_bounding_boxes=True,
)
image = tf.slice(image, begin, size)
image.set_shape([None, None, channels])
image = tf.image.resize(image, [image_size, image_size])
image = tf.image.random_flip_left_right(image)
else:
image = tf.image.resize(image, [image_size, image_size])
return image, label
return _pp
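# Editor's note (addition): passing an empty bounding box tensor to
# tf.image.sample_distorted_bounding_box (together with
# use_image_if_no_bounding_boxes=True) makes it draw a random crop covering
# 5%-100% of the image area, i.e. the usual "random resized crop" transform.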
def preprocess_finetune(image, label, train):
"""Preprocessing function for fine-tuning on a higher resolution.
For training, resize to a bigger resolution to maintain the ratio ->
random_horizontal_flip -> center_crop.
For validation, do the same without any horizontal flipping.
No color-jittering has been used.
"""
image = tf.image.resize(image, [size_for_resizing, size_for_resizing])
if train:
image = tf.image.random_flip_left_right(image)
image = central_crop_layer(image[None, ...])[0]
return image, label
def make_dataset(
dataset: tf.data.Dataset,
train: bool,
image_size: int = smaller_size,
fixres: bool = True,
num_parallel_calls=auto,
):
if image_size not in [smaller_size, bigger_size]:
raise ValueError(f"{image_size} resolution is not supported.")
# Determine which preprocessing function we are using.
if image_size == smaller_size:
preprocess_func = preprocess_initial(train, image_size)
elif not fixres and image_size == bigger_size:
preprocess_func = preprocess_initial(train, image_size)
else:
preprocess_func = preprocess_finetune
dataset = dataset.map(
lambda x, y: preprocess_func(x, y, train),
num_parallel_calls=num_parallel_calls,
)
dataset = dataset.batch(batch_size)
if train:
dataset = dataset.shuffle(batch_size * 10)
return dataset.prefetch(num_parallel_calls)<jupyter_output><empty_output><jupyter_text>Notice how the augmentation transforms vary for the kind of dataset we are preparing. Prepare datasets<jupyter_code>initial_train_dataset = make_dataset(train_dataset, train=True, image_size=smaller_size)
initial_val_dataset = make_dataset(val_dataset, train=False, image_size=smaller_size)
finetune_train_dataset = make_dataset(train_dataset, train=True, image_size=bigger_size)
finetune_val_dataset = make_dataset(val_dataset, train=False, image_size=bigger_size)
vanilla_train_dataset = make_dataset(
train_dataset, train=True, image_size=bigger_size, fixres=False
)
vanilla_val_dataset = make_dataset(
val_dataset, train=False, image_size=bigger_size, fixres=False
)<jupyter_output><empty_output><jupyter_text>Visualize the datasets<jupyter_code>def visualize_dataset(batch_images):
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(batch_images[n].numpy().astype("int"))
plt.axis("off")
plt.show()
print(f"Batch shape: {batch_images.shape}.")
# Smaller resolution.
initial_sample_images, _ = next(iter(initial_train_dataset))
visualize_dataset(initial_sample_images)
# Bigger resolution, only for fine-tuning.
finetune_sample_images, _ = next(iter(finetune_train_dataset))
visualize_dataset(finetune_sample_images)
# Bigger resolution, with the same augmentation transforms as
# the smaller resolution dataset.
vanilla_sample_images, _ = next(iter(vanilla_train_dataset))
visualize_dataset(vanilla_sample_images)<jupyter_output><empty_output><jupyter_text>Model training utilities We train multiple variants of ResNet50V2 ([He et al.](https://arxiv.org/abs/1603.05027)): 1. On the smaller resolution dataset (128x128). It will be trained from scratch. 2. Then fine-tune the model from 1 on the larger resolution (224x224) dataset. 3. Train another ResNet50V2 from scratch on the larger resolution dataset. As a reminder, the larger resolution datasets differ in terms of their augmentation transforms.<jupyter_code>def get_training_model(num_classes=5):
inputs = layers.Input((None, None, 3))
resnet_base = keras.applications.ResNet50V2(
include_top=False, weights=None, pooling="avg"
)
resnet_base.trainable = True
x = layers.Rescaling(scale=1.0 / 127.5, offset=-1)(inputs)
x = resnet_base(x)
outputs = layers.Dense(num_classes, activation="softmax")(x)
return keras.Model(inputs, outputs)
def train_and_evaluate(
model,
train_ds,
val_ds,
epochs,
learning_rate=1e-3,
use_early_stopping=False,
):
optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
model.compile(
optimizer=optimizer,
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
if use_early_stopping:
es_callback = keras.callbacks.EarlyStopping(patience=5)
callbacks = [es_callback]
else:
callbacks = None
model.fit(
train_ds,
validation_data=val_ds,
epochs=epochs,
callbacks=callbacks,
)
_, accuracy = model.evaluate(val_ds)
print(f"Top-1 accuracy on the validation set: {accuracy*100:.2f}%.")
return model<jupyter_output><empty_output><jupyter_text>Experiment 1: Train on 128x128 and then fine-tune on 224x224<jupyter_code>epochs = 30
smaller_res_model = get_training_model()
smaller_res_model = train_and_evaluate(
smaller_res_model, initial_train_dataset, initial_val_dataset, epochs
)<jupyter_output><empty_output><jupyter_text>Freeze all the layers except for the final Batch Normalization layer For fine-tuning, we train only two layers: * The final Batch Normalization ([Ioffe et al.](https://arxiv.org/abs/1502.03167)) layer. * The classification layer. We are unfreezing the final Batch Normalization layer to compensate for the change in activation statistics before the global average pooling layer. As shown in [the paper](https://arxiv.org/abs/1906.06423), unfreezing the final Batch Normalization layer is enough. For a comprehensive guide on fine-tuning models in Keras, refer to [this tutorial](https://keras.io/guides/transfer_learning/).<jupyter_code>for layer in smaller_res_model.layers[2].layers:
layer.trainable = False
smaller_res_model.layers[2].get_layer("post_bn").trainable = True
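# Editor's sketch (addition): sanity-check that "post_bn" is the only layer
# left trainable inside the ResNet backbone before fine-tuning starts.
backbone = smaller_res_model.layers[2]
print([layer.name for layer in backbone.layers if layer.trainable])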
epochs = 10
# Use a lower learning rate during fine-tuning.
bigger_res_model = train_and_evaluate(
smaller_res_model,
finetune_train_dataset,
finetune_val_dataset,
epochs,
learning_rate=1e-4,
)<jupyter_output><empty_output><jupyter_text>Experiment 2: Train a model on 224x224 resolution from scratch Now, we train another model from scratch on the larger resolution dataset. Recall that the augmentation transforms used in this dataset are different from before.<jupyter_code>epochs = 30
vanilla_bigger_res_model = get_training_model()
vanilla_bigger_res_model = train_and_evaluate(
vanilla_bigger_res_model, vanilla_train_dataset, vanilla_val_dataset, epochs
)<jupyter_output><empty_output>
<jupyter_start><jupyter_text>Image classification with Perceiver**Author:** [Khalid Salama](https://www.linkedin.com/in/khalid-salama-24403144/)**Date created:** 2021/04/30**Last modified:** 2023/12/30**Description:** Implementing the Perceiver model for image classification. Introduction This example implements the [Perceiver: General Perception with Iterative Attention](https://arxiv.org/abs/2103.03206) model by Andrew Jaegle et al. for image classification, and demonstrates it on the CIFAR-100 dataset. The Perceiver model leverages an asymmetric attention mechanism to iteratively distill inputs into a tight latent bottleneck, allowing it to scale to handle very large inputs. In other words: let's assume that your input data array (e.g. image) has `M` elements (i.e. patches), where `M` is large. In a standard Transformer model, a self-attention operation is performed for the `M` elements. The complexity of this operation is `O(M^2)`. However, the Perceiver model creates a latent array of size `N` elements, where `N << M`, and performs two operations iteratively: 1. Cross-attention Transformer between the latent array and the data array - The complexity of this operation is `O(M.N)`. 2. Self-attention Transformer on the latent array - The complexity of this operation is `O(N^2)`. This example requires Keras 3.0 or higher. Setup<jupyter_code>import keras
from keras import layers, activations, ops<jupyter_output><empty_output><jupyter_text>Prepare the data<jupyter_code>num_classes = 100
input_shape = (32, 32, 3)
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar100.load_data()
print(f"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}")
print(f"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}")<jupyter_output><empty_output><jupyter_text>Configure the hyperparameters<jupyter_code>learning_rate = 0.001
weight_decay = 0.0001
batch_size = 64
num_epochs = 2 # It is recommended to run 50 epochs to observe improvements in accuracy
dropout_rate = 0.2
image_size = 64 # We'll resize input images to this size.
patch_size = 2 # Size of the patches to be extract from the input images.
num_patches = (image_size // patch_size) ** 2 # Size of the data array.
latent_dim = 256 # Size of the latent array.
projection_dim = 256 # Embedding size of each element in the data and latent arrays.
num_heads = 8 # Number of Transformer heads.
ffn_units = [
projection_dim,
projection_dim,
] # Size of the Transformer Feedforward network.
num_transformer_blocks = 4
num_iterations = 2 # Repetitions of the cross-attention and Transformer modules.
classifier_units = [
projection_dim,
num_classes,
] # Size of the Feedforward network of the final classifier.
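# Editor's sketch (addition): a back-of-the-envelope comparison of the
# attention costs discussed in the introduction, using this example's sizes.
# M and N are hypothetical helper names introduced here for illustration.
M, N = num_patches, latent_dim  # M = 1024 data elements, N = 256 latents
print(f"Standard self-attention cost per layer:  O(M^2) = {M * M:,}")
print(f"Perceiver cross + latent attention cost: O(M.N) + O(N^2) = {M * N + N * N:,}")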
print(f"Image size: {image_size} X {image_size} = {image_size ** 2}")
print(f"Patch size: {patch_size} X {patch_size} = {patch_size ** 2} ")
print(f"Patches per image: {num_patches}")
print(f"Elements per patch (3 channels): {(patch_size ** 2) * 3}")
print(f"Latent array shape: {latent_dim} X {projection_dim}")
print(f"Data array shape: {num_patches} X {projection_dim}")<jupyter_output><empty_output><jupyter_text>Note that, in order to use each pixel as an individual input in the data array,set `patch_size` to 1. Use data augmentation<jupyter_code>data_augmentation = keras.Sequential(
[
layers.Normalization(),
layers.Resizing(image_size, image_size),
layers.RandomFlip("horizontal"),
layers.RandomZoom(height_factor=0.2, width_factor=0.2),
],
name="data_augmentation",
)
# Compute the mean and the variance of the training data for normalization.
data_augmentation.layers[0].adapt(x_train)<jupyter_output><empty_output><jupyter_text>Implement Feedforward network (FFN)<jupyter_code>def create_ffn(hidden_units, dropout_rate):
ffn_layers = []
for units in hidden_units[:-1]:
ffn_layers.append(layers.Dense(units, activation=activations.gelu))
ffn_layers.append(layers.Dense(units=hidden_units[-1]))
ffn_layers.append(layers.Dropout(dropout_rate))
ffn = keras.Sequential(ffn_layers)
return ffn<jupyter_output><empty_output><jupyter_text>Implement patch creation as a layer<jupyter_code>class Patches(layers.Layer):
def __init__(self, patch_size):
super().__init__()
self.patch_size = patch_size
def call(self, images):
batch_size = ops.shape(images)[0]
patches = ops.image.extract_patches(
image=images,
size=(self.patch_size, self.patch_size),
strides=(self.patch_size, self.patch_size),
dilation_rate=1,
padding="valid",
)
patch_dims = patches.shape[-1]
patches = ops.reshape(patches, [batch_size, -1, patch_dims])
return patches<jupyter_output><empty_output><jupyter_text>Implement the patch encoding layer The `PatchEncoder` layer will linearly transform a patch by projecting it into a vector of size `projection_dim`. In addition, it adds a learnable position embedding to the projected vector. Note that the original Perceiver paper uses Fourier feature positional encodings.<jupyter_code>class PatchEncoder(layers.Layer):
def __init__(self, num_patches, projection_dim):
super().__init__()
self.num_patches = num_patches
self.projection = layers.Dense(units=projection_dim)
self.position_embedding = layers.Embedding(
input_dim=num_patches, output_dim=projection_dim
)
def call(self, patches):
positions = ops.arange(start=0, stop=self.num_patches, step=1)
encoded = self.projection(patches) + self.position_embedding(positions)
return encoded<jupyter_output><empty_output><jupyter_text>Build the Perceiver model The Perceiver consists of two modules: a cross-attention module and a standard Transformer with self-attention. Cross-attention module The cross-attention expects a `(latent_dim, projection_dim)` latent array, and the `(data_dim, projection_dim)` data array as inputs, to produce a `(latent_dim, projection_dim)` latent array as an output. To apply cross-attention, the `query` vectors are generated from the latent array, while the `key` and `value` vectors are generated from the encoded image. Note that the data array in this example is the image, where the `data_dim` is set to the `num_patches`.<jupyter_code>def create_cross_attention_module(
latent_dim, data_dim, projection_dim, ffn_units, dropout_rate
):
inputs = {
        # Receive the latent array as an input of shape [1, latent_dim, projection_dim].
"latent_array": layers.Input(
shape=(latent_dim, projection_dim), name="latent_array"
),
        # Receive the data_array (encoded image) as an input of shape [batch_size, data_dim, projection_dim].
"data_array": layers.Input(shape=(data_dim, projection_dim), name="data_array"),
}
# Apply layer norm to the inputs
latent_array = layers.LayerNormalization(epsilon=1e-6)(inputs["latent_array"])
data_array = layers.LayerNormalization(epsilon=1e-6)(inputs["data_array"])
# Create query tensor: [1, latent_dim, projection_dim].
query = layers.Dense(units=projection_dim)(latent_array)
# Create key tensor: [batch_size, data_dim, projection_dim].
key = layers.Dense(units=projection_dim)(data_array)
# Create value tensor: [batch_size, data_dim, projection_dim].
value = layers.Dense(units=projection_dim)(data_array)
# Generate cross-attention outputs: [batch_size, latent_dim, projection_dim].
attention_output = layers.Attention(use_scale=True, dropout=0.1)(
[query, key, value], return_attention_scores=False
)
# Skip connection 1.
attention_output = layers.Add()([attention_output, latent_array])
# Apply layer norm.
attention_output = layers.LayerNormalization(epsilon=1e-6)(attention_output)
# Apply Feedforward network.
ffn = create_ffn(hidden_units=ffn_units, dropout_rate=dropout_rate)
outputs = ffn(attention_output)
# Skip connection 2.
outputs = layers.Add()([outputs, attention_output])
# Create the Keras model.
model = keras.Model(inputs=inputs, outputs=outputs)
return model<jupyter_output><empty_output><jupyter_text>Transformer module The Transformer expects the output latent vector from the cross-attention module as an input, applies multi-head self-attention to its `latent_dim` elements, followed by a feedforward network, to produce another `(latent_dim, projection_dim)` latent array.<jupyter_code>def create_transformer_module(
latent_dim,
projection_dim,
num_heads,
num_transformer_blocks,
ffn_units,
dropout_rate,
):
# input_shape: [1, latent_dim, projection_dim]
inputs = layers.Input(shape=(latent_dim, projection_dim))
x0 = inputs
# Create multiple layers of the Transformer block.
for _ in range(num_transformer_blocks):
# Apply layer normalization 1.
x1 = layers.LayerNormalization(epsilon=1e-6)(x0)
# Create a multi-head self-attention layer.
attention_output = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=projection_dim, dropout=0.1
)(x1, x1)
# Skip connection 1.
x2 = layers.Add()([attention_output, x0])
# Apply layer normalization 2.
x3 = layers.LayerNormalization(epsilon=1e-6)(x2)
# Apply Feedforward network.
ffn = create_ffn(hidden_units=ffn_units, dropout_rate=dropout_rate)
x3 = ffn(x3)
# Skip connection 2.
x0 = layers.Add()([x3, x2])
# Create the Keras model.
model = keras.Model(inputs=inputs, outputs=x0)
return model<jupyter_output><empty_output><jupyter_text>Perceiver model The Perceiver model repeats the cross-attention and Transformer modules `num_iterations` times (with shared weights and skip connections) to allow the latent array to iteratively extract information from the input image as it is needed.<jupyter_code>class Perceiver(keras.Model):
def __init__(
self,
patch_size,
data_dim,
latent_dim,
projection_dim,
num_heads,
num_transformer_blocks,
ffn_units,
dropout_rate,
num_iterations,
classifier_units,
):
super().__init__()
self.latent_dim = latent_dim
self.data_dim = data_dim
self.patch_size = patch_size
self.projection_dim = projection_dim
self.num_heads = num_heads
self.num_transformer_blocks = num_transformer_blocks
self.ffn_units = ffn_units
self.dropout_rate = dropout_rate
self.num_iterations = num_iterations
self.classifier_units = classifier_units
def build(self, input_shape):
# Create latent array.
self.latent_array = self.add_weight(
shape=(self.latent_dim, self.projection_dim),
initializer="random_normal",
trainable=True,
)
# Create patching module.
self.patcher = Patches(self.patch_size)
# Create patch encoder.
self.patch_encoder = PatchEncoder(self.data_dim, self.projection_dim)
        # Create cross-attention module.
self.cross_attention = create_cross_attention_module(
self.latent_dim,
self.data_dim,
self.projection_dim,
self.ffn_units,
self.dropout_rate,
)
# Create Transformer module.
self.transformer = create_transformer_module(
self.latent_dim,
self.projection_dim,
self.num_heads,
self.num_transformer_blocks,
self.ffn_units,
self.dropout_rate,
)
# Create global average pooling layer.
self.global_average_pooling = layers.GlobalAveragePooling1D()
# Create a classification head.
self.classification_head = create_ffn(
hidden_units=self.classifier_units, dropout_rate=self.dropout_rate
)
super().build(input_shape)
def call(self, inputs):
# Augment data.
augmented = data_augmentation(inputs)
# Create patches.
patches = self.patcher(augmented)
# Encode patches.
encoded_patches = self.patch_encoder(patches)
# Prepare cross-attention inputs.
cross_attention_inputs = {
"latent_array": ops.expand_dims(self.latent_array, 0),
"data_array": encoded_patches,
}
# Apply the cross-attention and the Transformer modules iteratively.
for _ in range(self.num_iterations):
# Apply cross-attention from the latent array to the data array.
latent_array = self.cross_attention(cross_attention_inputs)
# Apply self-attention Transformer to the latent array.
latent_array = self.transformer(latent_array)
# Set the latent array of the next iteration.
cross_attention_inputs["latent_array"] = latent_array
        # Apply global average pooling to generate a [batch_size, projection_dim] representation tensor.
representation = self.global_average_pooling(latent_array)
# Generate logits.
logits = self.classification_head(representation)
return logits<jupyter_output><empty_output><jupyter_text>Compile, train, and evaluate the model<jupyter_code>def run_experiment(model):
    # Create an Adam optimizer instead of LAMB with weight decay (LAMB isn't supported yet).
optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
# Compile the model.
model.compile(
optimizer=optimizer,
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=[
keras.metrics.SparseCategoricalAccuracy(name="acc"),
keras.metrics.SparseTopKCategoricalAccuracy(5, name="top5-acc"),
],
)
# Create a learning rate scheduler callback.
reduce_lr = keras.callbacks.ReduceLROnPlateau(
monitor="val_loss", factor=0.2, patience=3
)
# Create an early stopping callback.
early_stopping = keras.callbacks.EarlyStopping(
monitor="val_loss", patience=15, restore_best_weights=True
)
# Fit the model.
history = model.fit(
x=x_train,
y=y_train,
batch_size=batch_size,
epochs=num_epochs,
validation_split=0.1,
callbacks=[early_stopping, reduce_lr],
)
_, accuracy, top_5_accuracy = model.evaluate(x_test, y_test)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
print(f"Test top 5 accuracy: {round(top_5_accuracy * 100, 2)}%")
# Return history to plot learning curves.
return history<jupyter_output><empty_output><jupyter_text>Note that training the perceiver model with the current settings on a V100 GPU takes around 200 seconds.<jupyter_code>perceiver_classifier = Perceiver(
patch_size,
num_patches,
latent_dim,
projection_dim,
num_heads,
num_transformer_blocks,
ffn_units,
dropout_rate,
num_iterations,
classifier_units,
)
history = run_experiment(perceiver_classifier)<jupyter_output><empty_output>
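<jupyter_text>Editor's note (a hedged addition, not in the original example): since the cross-attention and Transformer sub-models are built once in `build()` and reused on every iteration, their weights are shared across the `num_iterations` repetitions. Printing the model summary after training confirms that the parameter count does not grow with the number of iterations.<jupyter_code>perceiver_classifier.summary()<jupyter_output><empty_output>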
<jupyter_start><jupyter_text>Supervised Contrastive Learning**Author:** [Khalid Salama](https://www.linkedin.com/in/khalid-salama-24403144/)**Date created:** 2020/11/30**Last modified:** 2020/11/30**Description:** Using supervised contrastive learning for image classification. Introduction [Supervised Contrastive Learning](https://arxiv.org/abs/2004.11362) (Prannay Khosla et al.) is a training methodology that outperforms supervised training with crossentropy on classification tasks. Essentially, training an image classification model with Supervised Contrastive Learning is performed in two phases: 1. Training an encoder to learn to produce vector representations of input images such that representations of images in the same class will be more similar compared to representations of images in different classes. 2. Training a classifier on top of the frozen encoder. Note that this example requires [TensorFlow Addons](https://www.tensorflow.org/addons), which you can install with `pip install tensorflow-addons`. Setup<jupyter_code>import tensorflow as tf
import tensorflow_addons as tfa
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers<jupyter_output><empty_output><jupyter_text>Prepare the data<jupyter_code>num_classes = 10
input_shape = (32, 32, 3)
# Load the train and test data splits
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
# Display shapes of train and test datasets
print(f"x_train shape: {x_train.shape} - y_train shape: {y_train.shape}")
print(f"x_test shape: {x_test.shape} - y_test shape: {y_test.shape}")<jupyter_output><empty_output><jupyter_text>Using image data augmentation<jupyter_code>data_augmentation = keras.Sequential(
[
layers.Normalization(),
layers.RandomFlip("horizontal"),
layers.RandomRotation(0.02),
]
)
# Setting the state of the normalization layer.
data_augmentation.layers[0].adapt(x_train)<jupyter_output><empty_output><jupyter_text>Build the encoder model The encoder model takes the image as input and turns it into a 2048-dimensional feature vector.<jupyter_code>def create_encoder():
resnet = keras.applications.ResNet50V2(
include_top=False, weights=None, input_shape=input_shape, pooling="avg"
)
inputs = keras.Input(shape=input_shape)
augmented = data_augmentation(inputs)
outputs = resnet(augmented)
model = keras.Model(inputs=inputs, outputs=outputs, name="cifar10-encoder")
return model
encoder = create_encoder()
encoder.summary()
learning_rate = 0.001
batch_size = 265
hidden_units = 512
projection_units = 128
num_epochs = 50
dropout_rate = 0.5
temperature = 0.05<jupyter_output><empty_output><jupyter_text>Build the classification model The classification model adds a fully-connected layer on top of the encoder, plus a softmax layer with the target classes.<jupyter_code>def create_classifier(encoder, trainable=True):
for layer in encoder.layers:
layer.trainable = trainable
inputs = keras.Input(shape=input_shape)
features = encoder(inputs)
features = layers.Dropout(dropout_rate)(features)
features = layers.Dense(hidden_units, activation="relu")(features)
features = layers.Dropout(dropout_rate)(features)
outputs = layers.Dense(num_classes, activation="softmax")(features)
model = keras.Model(inputs=inputs, outputs=outputs, name="cifar10-classifier")
model.compile(
optimizer=keras.optimizers.Adam(learning_rate),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
return model<jupyter_output><empty_output><jupyter_text>Experiment 1: Train the baseline classification model In this experiment, a baseline classifier is trained as usual, i.e., the encoder and the classifier parts are trained together as a single model to minimize the crossentropy loss.<jupyter_code>encoder = create_encoder()
classifier = create_classifier(encoder)
classifier.summary()
history = classifier.fit(x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs)
accuracy = classifier.evaluate(x_test, y_test)[1]
print(f"Test accuracy: {round(accuracy * 100, 2)}%")<jupyter_output><empty_output><jupyter_text>Experiment 2: Use supervised contrastive learningIn this experiment, the model is trained in two phases. In the first phase,the encoder is pretrained to optimize the supervised contrastive loss,described in [Prannay Khosla et al.](https://arxiv.org/abs/2004.11362).In the second phase, the classifier is trained using the trained encoder withits weights freezed; only the weights of fully-connected layers with thesoftmax are optimized. 1. Supervised contrastive learning loss function<jupyter_code>class SupervisedContrastiveLoss(keras.losses.Loss):
def __init__(self, temperature=1, name=None):
super().__init__(name=name)
self.temperature = temperature
def __call__(self, labels, feature_vectors, sample_weight=None):
# Normalize feature vectors
feature_vectors_normalized = tf.math.l2_normalize(feature_vectors, axis=1)
# Compute logits
logits = tf.divide(
tf.matmul(
feature_vectors_normalized, tf.transpose(feature_vectors_normalized)
),
self.temperature,
)
return tfa.losses.npairs_loss(tf.squeeze(labels), logits)
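# Editor's note (addition): npairs_loss treats samples that share a label as
# positive pairs, so dividing the pairwise cosine similarities by a small
# temperature (0.05 here) sharpens the distribution over pairs and
# strengthens the contrastive signal.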
def add_projection_head(encoder):
inputs = keras.Input(shape=input_shape)
features = encoder(inputs)
outputs = layers.Dense(projection_units, activation="relu")(features)
model = keras.Model(
inputs=inputs, outputs=outputs, name="cifar-encoder_with_projection-head"
)
return model<jupyter_output><empty_output><jupyter_text>2. Pretrain the encoder<jupyter_code>encoder = create_encoder()
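# Editor's note (addition): the projection head below is used only while
# optimizing the contrastive loss; the classifier trained afterwards is
# attached to the 2048-dimensional encoder output, not to the
# 128-dimensional projection.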
encoder_with_projection_head = add_projection_head(encoder)
encoder_with_projection_head.compile(
optimizer=keras.optimizers.Adam(learning_rate),
loss=SupervisedContrastiveLoss(temperature),
)
encoder_with_projection_head.summary()
history = encoder_with_projection_head.fit(
x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs
)<jupyter_output><empty_output><jupyter_text>3. Train the classifier with the frozen encoder<jupyter_code>classifier = create_classifier(encoder, trainable=False)
history = classifier.fit(x=x_train, y=y_train, batch_size=batch_size, epochs=num_epochs)
accuracy = classifier.evaluate(x_test, y_test)[1]
print(f"Test accuracy: {round(accuracy * 100, 2)}%")<jupyter_output><empty_output> | keras-io/examples/vision/ipynb/supervised-contrastive-learning.ipynb/0 | {
"file_path": "keras-io/examples/vision/ipynb/supervised-contrastive-learning.ipynb",
"repo_id": "keras-io",
"token_count": 2263
} | 102 |
# 3D image classification from CT scans
**Author:** [Hasib Zunair](https://twitter.com/hasibzunair)<br>
**Date created:** 2020/09/23<br>
**Last modified:** 2024/01/11<br>
**Description:** Train a 3D convolutional neural network to predict presence of pneumonia.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/3D_image_classification.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/vision/3D_image_classification.py)
---
## Introduction
This example will show the steps needed to build a 3D convolutional neural network (CNN)
to predict the presence of viral pneumonia in computer tomography (CT) scans. 2D CNNs are
commonly used to process RGB images (3 channels). A 3D CNN is simply the 3D
equivalent: it takes as input a 3D volume or a sequence of 2D frames (e.g. slices in a CT scan).
3D CNNs are powerful models for learning representations of volumetric data.
---
## References
- [A survey on Deep Learning Advances on Different 3D DataRepresentations](https://arxiv.org/abs/1808.01462)
- [VoxNet: A 3D Convolutional Neural Network for Real-Time Object Recognition](https://www.ri.cmu.edu/pub_files/2015/9/voxnet_maturana_scherer_iros15.pdf)
- [FusionNet: 3D Object Classification Using MultipleData Representations](https://arxiv.org/abs/1607.05695)
- [Uniformizing Techniques to Process CT scans with 3D CNNs for Tuberculosis Prediction](https://arxiv.org/abs/2007.13224)
---
## Setup
```python
import os
import zipfile
import numpy as np
import tensorflow as tf # for data preprocessing
import keras
from keras import layers
```
---
## Downloading the MosMedData: Chest CT Scans with COVID-19 Related Findings
In this example, we use a subset of the
[MosMedData: Chest CT Scans with COVID-19 Related Findings](https://www.medrxiv.org/content/10.1101/2020.05.20.20100362v1).
This dataset consists of lung CT scans with COVID-19 related findings, as well as without such findings.
We will be using the associated radiological findings of the CT scans as labels to build
a classifier to predict presence of viral pneumonia.
Hence, the task is a binary classification problem.
```python
# Download url of normal CT scans.
url = "https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-0.zip"
filename = os.path.join(os.getcwd(), "CT-0.zip")
keras.utils.get_file(filename, url)
# Download url of abnormal CT scans.
url = "https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-23.zip"
filename = os.path.join(os.getcwd(), "CT-23.zip")
keras.utils.get_file(filename, url)
# Make a directory to store the data.
os.makedirs("MosMedData")
# Unzip data in the newly created directory.
with zipfile.ZipFile("CT-0.zip", "r") as z_fp:
z_fp.extractall("./MosMedData/")
with zipfile.ZipFile("CT-23.zip", "r") as z_fp:
z_fp.extractall("./MosMedData/")
```
<div class="k-default-codeblock">
```
Downloading data from https://github.com/hasibzunair/3D-image-classification-tutorial/releases/download/v0.2/CT-0.zip
1045162547/1045162547 ━━━━━━━━━━━━━━━━━━━━ 4s 0us/step
```
</div>
---
## Loading data and preprocessing
The files are provided in Nifti format with the extension .nii. To read the
scans, we use the `nibabel` package.
You can install the package via `pip install nibabel`. CT scans store raw voxel
intensity in Hounsfield units (HU). They range from -1024 to above 2000 in this dataset.
Values above 400 HU correspond to bones of varying radiointensity, so 400 is used as the upper
bound. A window between -1000 and 400 HU is commonly used to normalize CT scans.
To process the data, we do the following:
* We first rotate the volumes by 90 degrees, so the orientation is fixed
* We scale the HU values to be between 0 and 1.
* We resize width, height and depth.
Here we define several helper functions to process the data. These functions
will be used when building training and validation datasets.
```python
import nibabel as nib
from scipy import ndimage
def read_nifti_file(filepath):
"""Read and load volume"""
# Read file
scan = nib.load(filepath)
# Get raw data
scan = scan.get_fdata()
return scan
def normalize(volume):
"""Normalize the volume"""
min = -1000
max = 400
volume[volume < min] = min
volume[volume > max] = max
volume = (volume - min) / (max - min)
volume = volume.astype("float32")
return volume
def resize_volume(img):
"""Resize across z-axis"""
# Set the desired depth
desired_depth = 64
desired_width = 128
desired_height = 128
# Get current depth
current_depth = img.shape[-1]
current_width = img.shape[0]
current_height = img.shape[1]
# Compute depth factor
depth = current_depth / desired_depth
width = current_width / desired_width
height = current_height / desired_height
depth_factor = 1 / depth
width_factor = 1 / width
height_factor = 1 / height
# Rotate
img = ndimage.rotate(img, 90, reshape=False)
# Resize across z-axis
img = ndimage.zoom(img, (width_factor, height_factor, depth_factor), order=1)
return img
def process_scan(path):
"""Read and resize volume"""
# Read scan
volume = read_nifti_file(path)
# Normalize
volume = normalize(volume)
# Resize width, height and depth
volume = resize_volume(volume)
return volume
```
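As a quick, hedged check (an editor's addition, not in the original example): the `normalize` helper clips intensities to the [-1000, 400] HU window described above and rescales them to [0, 1], which we can verify on a small synthetic array.

```python
# Synthetic HU values spanning the clipping window.
sample = np.array([-2000.0, -1000.0, 0.0, 400.0, 2000.0])
print(normalize(sample))  # [0. 0. 0.7142857 1. 1.]
```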
Let's read the paths of the CT scans from the class directories.
```python
# Folder "CT-0" consist of CT scans having normal lung tissue,
# no CT-signs of viral pneumonia.
normal_scan_paths = [
os.path.join(os.getcwd(), "MosMedData/CT-0", x)
for x in os.listdir("MosMedData/CT-0")
]
# Folder "CT-23" consist of CT scans having several ground-glass opacifications,
# involvement of lung parenchyma.
abnormal_scan_paths = [
os.path.join(os.getcwd(), "MosMedData/CT-23", x)
for x in os.listdir("MosMedData/CT-23")
]
print("CT scans with normal lung tissue: " + str(len(normal_scan_paths)))
print("CT scans with abnormal lung tissue: " + str(len(abnormal_scan_paths)))
```
<div class="k-default-codeblock">
```
CT scans with normal lung tissue: 100
CT scans with abnormal lung tissue: 100
```
</div>
---
## Build train and validation datasets
Read the scans from the class directories and assign labels. Downsample the scans to have
shape of 128x128x64. Rescale the raw HU values to the range 0 to 1.
Lastly, split the dataset into train and validation subsets.
```python
# Read and process the scans.
# Each scan is resized across height, width, and depth and rescaled.
abnormal_scans = np.array([process_scan(path) for path in abnormal_scan_paths])
normal_scans = np.array([process_scan(path) for path in normal_scan_paths])
# For the CT scans having presence of viral pneumonia
# assign 1, for the normal ones assign 0.
abnormal_labels = np.array([1 for _ in range(len(abnormal_scans))])
normal_labels = np.array([0 for _ in range(len(normal_scans))])
# Split data in the ratio 70-30 for training and validation.
x_train = np.concatenate((abnormal_scans[:70], normal_scans[:70]), axis=0)
y_train = np.concatenate((abnormal_labels[:70], normal_labels[:70]), axis=0)
x_val = np.concatenate((abnormal_scans[70:], normal_scans[70:]), axis=0)
y_val = np.concatenate((abnormal_labels[70:], normal_labels[70:]), axis=0)
print(
"Number of samples in train and validation are %d and %d."
% (x_train.shape[0], x_val.shape[0])
)
```
<div class="k-default-codeblock">
```
Number of samples in train and validation are 140 and 60.
```
</div>
---
## Data augmentation
The CT scans are also augmented by rotating them at random angles during training. Since
the data is stored in rank-4 tensors of shape `(samples, height, width, depth)`,
we add a dimension of size 1 at axis 4 to be able to perform 3D convolutions on
the data. The new shape is thus `(samples, height, width, depth, 1)`. There are
different kinds of preprocessing and augmentation techniques out there,
this example shows a few simple ones to get started.
```python
import random
from scipy import ndimage
def rotate(volume):
"""Rotate the volume by a few degrees"""
def scipy_rotate(volume):
# define some rotation angles
angles = [-20, -10, -5, 5, 10, 20]
# pick angles at random
angle = random.choice(angles)
# rotate volume
volume = ndimage.rotate(volume, angle, reshape=False)
volume[volume < 0] = 0
volume[volume > 1] = 1
return volume
augmented_volume = tf.numpy_function(scipy_rotate, [volume], tf.float32)
return augmented_volume
def train_preprocessing(volume, label):
"""Process training data by rotating and adding a channel."""
# Rotate volume
volume = rotate(volume)
volume = tf.expand_dims(volume, axis=3)
return volume, label
def validation_preprocessing(volume, label):
"""Process validation data by only adding a channel."""
volume = tf.expand_dims(volume, axis=3)
return volume, label
```
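There are other simple volumetric augmentations that can be added in exactly the same way. As a hedged sketch (an editor's addition, not part of the original recipe), the helper below applies a random flip along the width axis, reusing the same `tf.numpy_function` pattern as `rotate`; `random_flip` is a hypothetical name introduced here for illustration.

```python
def random_flip(volume):
    """Randomly flip the volume along the width axis."""

    def scipy_flip(volume):
        # Flip with 50% probability and keep the array contiguous.
        if random.random() < 0.5:
            volume = np.ascontiguousarray(volume[::-1, :, :])
        return volume

    augmented_volume = tf.numpy_function(scipy_flip, [volume], tf.float32)
    return augmented_volume
```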
While defining the train and validation data loaders, the training data is passed through
an augmentation function which randomly rotates the volumes at different angles. Note that both
training and validation data are already rescaled to have values between 0 and 1.
```python
# Define data loaders.
train_loader = tf.data.Dataset.from_tensor_slices((x_train, y_train))
validation_loader = tf.data.Dataset.from_tensor_slices((x_val, y_val))
batch_size = 2
# Augment the scans on the fly during training.
train_dataset = (
train_loader.shuffle(len(x_train))
.map(train_preprocessing)
.batch(batch_size)
.prefetch(2)
)
# Only rescale.
validation_dataset = (
validation_loader.shuffle(len(x_val))
.map(validation_preprocessing)
.batch(batch_size)
.prefetch(2)
)
```
Visualize an augmented CT scan.
```python
import matplotlib.pyplot as plt
data = train_dataset.take(1)
images, labels = list(data)[0]
images = images.numpy()
image = images[0]
print("Dimension of the CT scan is:", image.shape)
plt.imshow(np.squeeze(image[:, :, 30]), cmap="gray")
```
<div class="k-default-codeblock">
```
Dimension of the CT scan is: (128, 128, 64, 1)
<matplotlib.image.AxesImage at 0x7fc5b9900d50>
```
</div>

Since a CT scan has many slices, let's visualize a montage of the slices.
```python
def plot_slices(num_rows, num_columns, width, height, data):
"""Plot a montage of 20 CT slices"""
data = np.rot90(np.array(data))
data = np.transpose(data)
data = np.reshape(data, (num_rows, num_columns, width, height))
rows_data, columns_data = data.shape[0], data.shape[1]
heights = [slc[0].shape[0] for slc in data]
widths = [slc.shape[1] for slc in data[0]]
fig_width = 12.0
fig_height = fig_width * sum(heights) / sum(widths)
f, axarr = plt.subplots(
rows_data,
columns_data,
figsize=(fig_width, fig_height),
gridspec_kw={"height_ratios": heights},
)
for i in range(rows_data):
for j in range(columns_data):
axarr[i, j].imshow(data[i][j], cmap="gray")
axarr[i, j].axis("off")
plt.subplots_adjust(wspace=0, hspace=0, left=0, right=1, bottom=0, top=1)
plt.show()
# Visualize montage of slices.
# 4 rows and 10 columns for 40 slices of the CT scan.
plot_slices(4, 10, 128, 128, image[:, :, :40])
```

---
## Define a 3D convolutional neural network
To make the model easier to understand, we structure it into blocks.
The architecture of the 3D CNN used in this example
is based on [this paper](https://arxiv.org/abs/2007.13224).
```python
def get_model(width=128, height=128, depth=64):
"""Build a 3D convolutional neural network model."""
inputs = keras.Input((width, height, depth, 1))
x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(inputs)
x = layers.MaxPool3D(pool_size=2)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(x)
x = layers.MaxPool3D(pool_size=2)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv3D(filters=128, kernel_size=3, activation="relu")(x)
x = layers.MaxPool3D(pool_size=2)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv3D(filters=256, kernel_size=3, activation="relu")(x)
x = layers.MaxPool3D(pool_size=2)(x)
x = layers.BatchNormalization()(x)
x = layers.GlobalAveragePooling3D()(x)
x = layers.Dense(units=512, activation="relu")(x)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(units=1, activation="sigmoid")(x)
# Define the model.
model = keras.Model(inputs, outputs, name="3dcnn")
return model
# Build model.
model = get_model(width=128, height=128, depth=64)
model.summary()
```
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold">Model: "3dcnn"</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃<span style="font-weight: bold"> Layer (type) </span>┃<span style="font-weight: bold"> Output Shape </span>┃<span style="font-weight: bold"> Param # </span>┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ input_layer (<span style="color: #0087ff; text-decoration-color: #0087ff">InputLayer</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ conv3d (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv3D</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">126</span>, <span style="color: #00af00; text-decoration-color: #00af00">126</span>, <span style="color: #00af00; text-decoration-color: #00af00">62</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">1,792</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ max_pooling3d (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling3D</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">63</span>, <span style="color: #00af00; text-decoration-color: #00af00">63</span>, <span style="color: #00af00; text-decoration-color: #00af00">31</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ batch_normalization │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">63</span>, <span style="color: #00af00; text-decoration-color: #00af00">63</span>, <span style="color: #00af00; text-decoration-color: #00af00">31</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">256</span> │
│ (<span style="color: #0087ff; text-decoration-color: #0087ff">BatchNormalization</span>) │ │ │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ conv3d_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv3D</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">61</span>, <span style="color: #00af00; text-decoration-color: #00af00">61</span>, <span style="color: #00af00; text-decoration-color: #00af00">29</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">110,656</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ max_pooling3d_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling3D</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">30</span>, <span style="color: #00af00; text-decoration-color: #00af00">30</span>, <span style="color: #00af00; text-decoration-color: #00af00">14</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ batch_normalization_1 │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">30</span>, <span style="color: #00af00; text-decoration-color: #00af00">30</span>, <span style="color: #00af00; text-decoration-color: #00af00">14</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">256</span> │
│ (<span style="color: #0087ff; text-decoration-color: #0087ff">BatchNormalization</span>) │ │ │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ conv3d_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv3D</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">28</span>, <span style="color: #00af00; text-decoration-color: #00af00">28</span>, <span style="color: #00af00; text-decoration-color: #00af00">12</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">221,312</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ max_pooling3d_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling3D</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">14</span>, <span style="color: #00af00; text-decoration-color: #00af00">14</span>, <span style="color: #00af00; text-decoration-color: #00af00">6</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ batch_normalization_2 │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">14</span>, <span style="color: #00af00; text-decoration-color: #00af00">14</span>, <span style="color: #00af00; text-decoration-color: #00af00">6</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">512</span> │
│ (<span style="color: #0087ff; text-decoration-color: #0087ff">BatchNormalization</span>) │ │ │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ conv3d_3 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv3D</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">12</span>, <span style="color: #00af00; text-decoration-color: #00af00">12</span>, <span style="color: #00af00; text-decoration-color: #00af00">4</span>, <span style="color: #00af00; text-decoration-color: #00af00">256</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">884,992</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ max_pooling3d_3 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling3D</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">6</span>, <span style="color: #00af00; text-decoration-color: #00af00">6</span>, <span style="color: #00af00; text-decoration-color: #00af00">2</span>, <span style="color: #00af00; text-decoration-color: #00af00">256</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ batch_normalization_3 │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">6</span>, <span style="color: #00af00; text-decoration-color: #00af00">6</span>, <span style="color: #00af00; text-decoration-color: #00af00">2</span>, <span style="color: #00af00; text-decoration-color: #00af00">256</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">1,024</span> │
│ (<span style="color: #0087ff; text-decoration-color: #0087ff">BatchNormalization</span>) │ │ │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ global_average_pooling3d │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">256</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
│ (<span style="color: #0087ff; text-decoration-color: #0087ff">GlobalAveragePooling3D</span>) │ │ │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ dense (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">512</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">131,584</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ dropout (<span style="color: #0087ff; text-decoration-color: #0087ff">Dropout</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">512</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ dense_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">513</span> │
└─────────────────────────────────┴───────────────────────────┴────────────┘
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Total params: </span><span style="color: #00af00; text-decoration-color: #00af00">1,352,897</span> (5.16 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">1,351,873</span> (5.16 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Non-trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">1,024</span> (4.00 KB)
</pre>
---
## Train model
```python
# Compile model.
initial_learning_rate = 0.0001
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
model.compile(
loss="binary_crossentropy",
optimizer=keras.optimizers.Adam(learning_rate=lr_schedule),
metrics=["acc"],
run_eagerly=True,
)
# Define callbacks.
checkpoint_cb = keras.callbacks.ModelCheckpoint(
"3d_image_classification.keras", save_best_only=True
)
early_stopping_cb = keras.callbacks.EarlyStopping(monitor="val_acc", patience=15)
# Train the model, doing validation at the end of each epoch
epochs = 100
model.fit(
train_dataset,
validation_data=validation_dataset,
epochs=epochs,
shuffle=True,
verbose=2,
callbacks=[checkpoint_cb, early_stopping_cb],
)
```
<div class="k-default-codeblock">
```
Epoch 1/100
70/70 - 40s - 568ms/step - acc: 0.5786 - loss: 0.7128 - val_acc: 0.5000 - val_loss: 0.8744
Epoch 2/100
70/70 - 26s - 370ms/step - acc: 0.6000 - loss: 0.6760 - val_acc: 0.5000 - val_loss: 1.2741
Epoch 3/100
70/70 - 26s - 373ms/step - acc: 0.5643 - loss: 0.6768 - val_acc: 0.5000 - val_loss: 1.4767
Epoch 4/100
70/70 - 26s - 376ms/step - acc: 0.6643 - loss: 0.6671 - val_acc: 0.5000 - val_loss: 1.2609
Epoch 5/100
70/70 - 26s - 374ms/step - acc: 0.6714 - loss: 0.6274 - val_acc: 0.5667 - val_loss: 0.6470
Epoch 6/100
70/70 - 26s - 372ms/step - acc: 0.5929 - loss: 0.6492 - val_acc: 0.6667 - val_loss: 0.6022
Epoch 7/100
70/70 - 26s - 374ms/step - acc: 0.5929 - loss: 0.6601 - val_acc: 0.5667 - val_loss: 0.6788
Epoch 8/100
70/70 - 26s - 378ms/step - acc: 0.6000 - loss: 0.6559 - val_acc: 0.6667 - val_loss: 0.6090
Epoch 9/100
70/70 - 26s - 373ms/step - acc: 0.6357 - loss: 0.6423 - val_acc: 0.6000 - val_loss: 0.6535
Epoch 10/100
70/70 - 26s - 374ms/step - acc: 0.6500 - loss: 0.6127 - val_acc: 0.6500 - val_loss: 0.6204
Epoch 11/100
70/70 - 26s - 374ms/step - acc: 0.6714 - loss: 0.5994 - val_acc: 0.7000 - val_loss: 0.6218
Epoch 12/100
70/70 - 26s - 374ms/step - acc: 0.6714 - loss: 0.5980 - val_acc: 0.7167 - val_loss: 0.5069
Epoch 13/100
70/70 - 26s - 369ms/step - acc: 0.7214 - loss: 0.6003 - val_acc: 0.7833 - val_loss: 0.5182
Epoch 14/100
70/70 - 26s - 372ms/step - acc: 0.6643 - loss: 0.6076 - val_acc: 0.7167 - val_loss: 0.5613
Epoch 15/100
70/70 - 26s - 373ms/step - acc: 0.6571 - loss: 0.6359 - val_acc: 0.6167 - val_loss: 0.6184
Epoch 16/100
70/70 - 26s - 374ms/step - acc: 0.6429 - loss: 0.6053 - val_acc: 0.7167 - val_loss: 0.5258
Epoch 17/100
70/70 - 26s - 370ms/step - acc: 0.6786 - loss: 0.6119 - val_acc: 0.5667 - val_loss: 0.8481
Epoch 18/100
70/70 - 26s - 372ms/step - acc: 0.6286 - loss: 0.6298 - val_acc: 0.6667 - val_loss: 0.5709
Epoch 19/100
70/70 - 26s - 372ms/step - acc: 0.7214 - loss: 0.5979 - val_acc: 0.5833 - val_loss: 0.6730
Epoch 20/100
70/70 - 26s - 372ms/step - acc: 0.7571 - loss: 0.5224 - val_acc: 0.7167 - val_loss: 0.5710
Epoch 21/100
70/70 - 26s - 372ms/step - acc: 0.7357 - loss: 0.5606 - val_acc: 0.7167 - val_loss: 0.5444
Epoch 22/100
70/70 - 26s - 372ms/step - acc: 0.7357 - loss: 0.5334 - val_acc: 0.5667 - val_loss: 0.7919
Epoch 23/100
70/70 - 26s - 373ms/step - acc: 0.7071 - loss: 0.5337 - val_acc: 0.5167 - val_loss: 0.9527
Epoch 24/100
70/70 - 26s - 371ms/step - acc: 0.7071 - loss: 0.5635 - val_acc: 0.7167 - val_loss: 0.5333
Epoch 25/100
70/70 - 26s - 373ms/step - acc: 0.7643 - loss: 0.4787 - val_acc: 0.6333 - val_loss: 1.0172
Epoch 26/100
70/70 - 26s - 372ms/step - acc: 0.7357 - loss: 0.5535 - val_acc: 0.6500 - val_loss: 0.6926
Epoch 27/100
70/70 - 26s - 370ms/step - acc: 0.7286 - loss: 0.5608 - val_acc: 0.5000 - val_loss: 3.3032
Epoch 28/100
70/70 - 26s - 370ms/step - acc: 0.7429 - loss: 0.5436 - val_acc: 0.6500 - val_loss: 0.6438
<keras.src.callbacks.history.History at 0x7fc5b923e810>
```
</div>
It is important to note that the number of samples is very small (only 200) and we don't
specify a random seed. As such, you can expect significant variance in the results. The full dataset,
which consists of over 1000 CT scans, can be found [here](https://www.medrxiv.org/content/10.1101/2020.05.20.20100362v1). Using the full
dataset, an accuracy of 83% was achieved. A variability of 6-7% in the classification
performance is observed in both cases.
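If you need more repeatable runs, one option (not used in this example) is to fix the
random seeds before building and training the model. A minimal sketch, assuming a recent
Keras/TensorFlow version:

```python
import keras

# Seeds Python's `random`, NumPy, and the backend framework in one call.
# Full determinism on GPU may additionally require enabling deterministic ops.
keras.utils.set_random_seed(42)
```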
---
## Visualizing model performance
Here the model accuracy and loss for the training and the validation sets are plotted.
Since the validation set is class-balanced, accuracy provides an unbiased representation
of the model's performance.
```python
fig, ax = plt.subplots(1, 2, figsize=(20, 3))
ax = ax.ravel()
for i, metric in enumerate(["acc", "loss"]):
ax[i].plot(model.history.history[metric])
ax[i].plot(model.history.history["val_" + metric])
ax[i].set_title("Model {}".format(metric))
ax[i].set_xlabel("epochs")
ax[i].set_ylabel(metric)
ax[i].legend(["train", "val"])
```

---
## Make predictions on a single CT scan
```python
# Load best weights.
model.load_weights("3d_image_classification.keras")
prediction = model.predict(np.expand_dims(x_val[0], axis=0))[0]
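# The final sigmoid unit outputs the probability of the "abnormal" class;
# its complement is the probability of the "normal" class.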
scores = [1 - prediction[0], prediction[0]]
class_names = ["normal", "abnormal"]
for score, name in zip(scores, class_names):
print(
"This model is %.2f percent confident that CT scan is %s"
% ((100 * score), name)
)
```
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 479ms/step
<div class="k-default-codeblock">
```
This model is 32.99 percent confident that CT scan is normal
This model is 67.01 percent confident that CT scan is abnormal
```
</div>
# Monocular depth estimation
**Author:** [Victor Basu](https://www.linkedin.com/in/victor-basu-520958147)<br>
**Date created:** 2021/08/30<br>
**Last modified:** 2021/08/30<br>
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/depth_estimation.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/vision/depth_estimation.py)
**Description:** Implement a depth estimation model with a convnet.
---
## Introduction
_Depth estimation_ is a crucial step towards inferring scene geometry from 2D images.
The goal in _monocular depth estimation_ is to predict the depth value of each pixel
(i.e., to infer depth information) given only a single RGB image as input.
This example will show an approach to build a depth estimation model with a convnet
and simple loss functions.

---
## Setup
```python
import os
import sys
import tensorflow as tf
from tensorflow.keras import layers
import pandas as pd
import numpy as np
import cv2
import matplotlib.pyplot as plt
tf.random.set_seed(123)
```
---
## Downloading the dataset
We will be using the dataset **DIODE: A Dense Indoor and Outdoor Depth Dataset** for this
tutorial. However, we use the validation set to generate the training and evaluation subsets
for our model. We use the validation set rather than the training set of the original dataset because
the training set consists of 81GB of data, which is much harder to download than
the 2.6GB validation set.
Other datasets that you could use are
**[NYU-v2](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html)**
and **[KITTI](http://www.cvlibs.net/datasets/kitti/)**.
```python
annotation_folder = "/dataset/"
if not os.path.exists(os.path.abspath(".") + annotation_folder):
annotation_zip = tf.keras.utils.get_file(
"val.tar.gz",
cache_subdir=os.path.abspath("."),
origin="http://diode-dataset.s3.amazonaws.com/val.tar.gz",
extract=True,
)
```
<div class="k-default-codeblock">
```
Downloading data from http://diode-dataset.s3.amazonaws.com/val.tar.gz
2774630400/2774625282 [==============================] - 90s 0us/step
2774638592/2774625282 [==============================] - 90s 0us/step
```
</div>
---
## Preparing the dataset
We only use the indoor images to train our depth estimation model.
```python
path = "val/indoors"
filelist = []
for root, dirs, files in os.walk(path):
for file in files:
filelist.append(os.path.join(root, file))
filelist.sort()
data = {
"image": [x for x in filelist if x.endswith(".png")],
"depth": [x for x in filelist if x.endswith("_depth.npy")],
"mask": [x for x in filelist if x.endswith("_depth_mask.npy")],
}
df = pd.DataFrame(data)
df = df.sample(frac=1, random_state=42)
```
---
## Preparing hyperparameters
```python
HEIGHT = 256
WIDTH = 256
LR = 0.0002
EPOCHS = 30
BATCH_SIZE = 32
```
---
## Building a data pipeline
1. The pipeline takes a dataframe containing the path for the RGB images,
as well as the depth and depth mask files.
2. It reads and resizes the RGB images.
3. It reads the depth and depth mask files, processes them to generate the depth map image, and
resizes it.
4. It returns the RGB images and the depth map images for a batch.
```python
class DataGenerator(tf.keras.utils.Sequence):
def __init__(self, data, batch_size=6, dim=(768, 1024), n_channels=3, shuffle=True):
"""
Initialization
"""
self.data = data
self.indices = self.data.index.tolist()
self.dim = dim
self.n_channels = n_channels
self.batch_size = batch_size
self.shuffle = shuffle
self.min_depth = 0.1
self.on_epoch_end()
def __len__(self):
return int(np.ceil(len(self.data) / self.batch_size))
    def __getitem__(self, index):
        if (index + 1) * self.batch_size > len(self.indices):
            self.batch_size = len(self.indices) - index * self.batch_size
        # Generate one batch of data: take the (shuffled) positions for
        # this batch and map them back to dataframe indices.
        positions = self.index[index * self.batch_size : (index + 1) * self.batch_size]
        batch = [self.indices[k] for k in positions]
        x, y = self.data_generation(batch)
        return x, y
def on_epoch_end(self):
"""
Updates indexes after each epoch
"""
self.index = np.arange(len(self.indices))
if self.shuffle == True:
np.random.shuffle(self.index)
def load(self, image_path, depth_map, mask):
"""Load input and target image."""
image_ = cv2.imread(image_path)
image_ = cv2.cvtColor(image_, cv2.COLOR_BGR2RGB)
image_ = cv2.resize(image_, self.dim)
image_ = tf.image.convert_image_dtype(image_, tf.float32)
depth_map = np.load(depth_map).squeeze()
mask = np.load(mask)
mask = mask > 0
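        # Clip depth to [min_depth, 99th percentile], move to log space for
        # a more uniform target distribution, and mask out invalid pixels.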
max_depth = min(300, np.percentile(depth_map, 99))
depth_map = np.clip(depth_map, self.min_depth, max_depth)
depth_map = np.log(depth_map, where=mask)
depth_map = np.ma.masked_where(~mask, depth_map)
depth_map = np.clip(depth_map, 0.1, np.log(max_depth))
depth_map = cv2.resize(depth_map, self.dim)
depth_map = np.expand_dims(depth_map, axis=2)
depth_map = tf.image.convert_image_dtype(depth_map, tf.float32)
return image_, depth_map
def data_generation(self, batch):
x = np.empty((self.batch_size, *self.dim, self.n_channels))
y = np.empty((self.batch_size, *self.dim, 1))
for i, batch_id in enumerate(batch):
            x[i], y[i] = self.load(
self.data["image"][batch_id],
self.data["depth"][batch_id],
self.data["mask"][batch_id],
)
return x, y
```
---
## Visualizing samples
```python
def visualize_depth_map(samples, test=False, model=None):
input, target = samples
cmap = plt.cm.jet
cmap.set_bad(color="black")
if test:
pred = model.predict(input)
fig, ax = plt.subplots(6, 3, figsize=(50, 50))
for i in range(6):
ax[i, 0].imshow((input[i].squeeze()))
ax[i, 1].imshow((target[i].squeeze()), cmap=cmap)
ax[i, 2].imshow((pred[i].squeeze()), cmap=cmap)
else:
fig, ax = plt.subplots(6, 2, figsize=(50, 50))
for i in range(6):
ax[i, 0].imshow((input[i].squeeze()))
ax[i, 1].imshow((target[i].squeeze()), cmap=cmap)
visualize_samples = next(
iter(DataGenerator(data=df, batch_size=6, dim=(HEIGHT, WIDTH)))
)
visualize_depth_map(visualize_samples)
```

---
## 3D point cloud visualization
```python
depth_vis = np.flipud(visualize_samples[1][1].squeeze()) # target
img_vis = np.flipud(visualize_samples[0][1].squeeze()) # input
fig = plt.figure(figsize=(15, 10))
ax = plt.axes(projection="3d")
STEP = 3
for x in range(0, img_vis.shape[0], STEP):
for y in range(0, img_vis.shape[1], STEP):
ax.scatter(
[depth_vis[x, y]] * 3,
[y] * 3,
[x] * 3,
c=tuple(img_vis[x, y, :3] / 255),
s=3,
)
ax.view_init(45, 135)
```

---
## Building the model
1. The basic model is from U-Net.
2. Additive skip-connections are implemented in the downscaling block.
```python
class DownscaleBlock(layers.Layer):
def __init__(
self, filters, kernel_size=(3, 3), padding="same", strides=1, **kwargs
):
super().__init__(**kwargs)
self.convA = layers.Conv2D(filters, kernel_size, strides, padding)
self.convB = layers.Conv2D(filters, kernel_size, strides, padding)
self.reluA = layers.LeakyReLU(alpha=0.2)
self.reluB = layers.LeakyReLU(alpha=0.2)
self.bn2a = tf.keras.layers.BatchNormalization()
self.bn2b = tf.keras.layers.BatchNormalization()
self.pool = layers.MaxPool2D((2, 2), (2, 2))
def call(self, input_tensor):
d = self.convA(input_tensor)
x = self.bn2a(d)
x = self.reluA(x)
x = self.convB(x)
x = self.bn2b(x)
x = self.reluB(x)
x += d
p = self.pool(x)
return x, p
class UpscaleBlock(layers.Layer):
def __init__(
self, filters, kernel_size=(3, 3), padding="same", strides=1, **kwargs
):
super().__init__(**kwargs)
self.us = layers.UpSampling2D((2, 2))
self.convA = layers.Conv2D(filters, kernel_size, strides, padding)
self.convB = layers.Conv2D(filters, kernel_size, strides, padding)
self.reluA = layers.LeakyReLU(alpha=0.2)
self.reluB = layers.LeakyReLU(alpha=0.2)
self.bn2a = tf.keras.layers.BatchNormalization()
self.bn2b = tf.keras.layers.BatchNormalization()
self.conc = layers.Concatenate()
def call(self, x, skip):
x = self.us(x)
concat = self.conc([x, skip])
x = self.convA(concat)
x = self.bn2a(x)
x = self.reluA(x)
x = self.convB(x)
x = self.bn2b(x)
x = self.reluB(x)
return x
class BottleNeckBlock(layers.Layer):
def __init__(
self, filters, kernel_size=(3, 3), padding="same", strides=1, **kwargs
):
super().__init__(**kwargs)
self.convA = layers.Conv2D(filters, kernel_size, strides, padding)
self.convB = layers.Conv2D(filters, kernel_size, strides, padding)
self.reluA = layers.LeakyReLU(alpha=0.2)
self.reluB = layers.LeakyReLU(alpha=0.2)
def call(self, x):
x = self.convA(x)
x = self.reluA(x)
x = self.convB(x)
x = self.reluB(x)
return x
```
---
## Defining the loss
We will optimize 3 losses in our model.
1. Structural similarity index (SSIM).
2. L1-loss, or point-wise depth in our case.
3. Depth smoothness loss.
Out of the three loss functions, SSIM contributes the most to improving model performance.
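Putting these together, the total objective implemented below is the weighted sum

$$
L = 0.85\,L_{\mathrm{SSIM}} + 0.1\,L_{1} + 0.9\,L_{\mathrm{smoothness}}
$$

where the weights correspond to the `ssim_loss_weight`, `l1_loss_weight`, and
`edge_loss_weight` attributes of the model.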
```python
class DepthEstimationModel(tf.keras.Model):
def __init__(self):
super().__init__()
self.ssim_loss_weight = 0.85
self.l1_loss_weight = 0.1
self.edge_loss_weight = 0.9
self.loss_metric = tf.keras.metrics.Mean(name="loss")
f = [16, 32, 64, 128, 256]
self.downscale_blocks = [
DownscaleBlock(f[0]),
DownscaleBlock(f[1]),
DownscaleBlock(f[2]),
DownscaleBlock(f[3]),
]
self.bottle_neck_block = BottleNeckBlock(f[4])
self.upscale_blocks = [
UpscaleBlock(f[3]),
UpscaleBlock(f[2]),
UpscaleBlock(f[1]),
UpscaleBlock(f[0]),
]
self.conv_layer = layers.Conv2D(1, (1, 1), padding="same", activation="tanh")
def calculate_loss(self, target, pred):
# Edges
dy_true, dx_true = tf.image.image_gradients(target)
dy_pred, dx_pred = tf.image.image_gradients(pred)
weights_x = tf.exp(tf.reduce_mean(tf.abs(dx_true)))
weights_y = tf.exp(tf.reduce_mean(tf.abs(dy_true)))
# Depth smoothness
smoothness_x = dx_pred * weights_x
smoothness_y = dy_pred * weights_y
depth_smoothness_loss = tf.reduce_mean(abs(smoothness_x)) + tf.reduce_mean(
abs(smoothness_y)
)
# Structural similarity (SSIM) index
ssim_loss = tf.reduce_mean(
1
- tf.image.ssim(
target, pred, max_val=WIDTH, filter_size=7, k1=0.01 ** 2, k2=0.03 ** 2
)
)
# Point-wise depth
l1_loss = tf.reduce_mean(tf.abs(target - pred))
loss = (
(self.ssim_loss_weight * ssim_loss)
+ (self.l1_loss_weight * l1_loss)
+ (self.edge_loss_weight * depth_smoothness_loss)
)
return loss
@property
def metrics(self):
return [self.loss_metric]
def train_step(self, batch_data):
input, target = batch_data
with tf.GradientTape() as tape:
pred = self(input, training=True)
loss = self.calculate_loss(target, pred)
gradients = tape.gradient(loss, self.trainable_variables)
self.optimizer.apply_gradients(zip(gradients, self.trainable_variables))
self.loss_metric.update_state(loss)
return {
"loss": self.loss_metric.result(),
}
def test_step(self, batch_data):
input, target = batch_data
pred = self(input, training=False)
loss = self.calculate_loss(target, pred)
self.loss_metric.update_state(loss)
return {
"loss": self.loss_metric.result(),
}
def call(self, x):
c1, p1 = self.downscale_blocks[0](x)
c2, p2 = self.downscale_blocks[1](p1)
c3, p3 = self.downscale_blocks[2](p2)
c4, p4 = self.downscale_blocks[3](p3)
bn = self.bottle_neck_block(p4)
u1 = self.upscale_blocks[0](bn, c4)
u2 = self.upscale_blocks[1](u1, c3)
u3 = self.upscale_blocks[2](u2, c2)
u4 = self.upscale_blocks[3](u3, c1)
return self.conv_layer(u4)
```
---
## Model training
```python
optimizer = tf.keras.optimizers.Adam(
learning_rate=LR,
amsgrad=False,
)
model = DepthEstimationModel()
# Compile the model
model.compile(optimizer)
train_loader = DataGenerator(
    data=df[:260].reset_index(drop=True), batch_size=BATCH_SIZE, dim=(HEIGHT, WIDTH)
)
validation_loader = DataGenerator(
    data=df[260:].reset_index(drop=True), batch_size=BATCH_SIZE, dim=(HEIGHT, WIDTH)
)
model.fit(
train_loader,
epochs=EPOCHS,
validation_data=validation_loader,
)
```
<div class="k-default-codeblock">
```
Epoch 1/30
9/9 [==============================] - 18s 1s/step - loss: 1.1543 - val_loss: 1.4281
Epoch 2/30
9/9 [==============================] - 3s 390ms/step - loss: 0.8727 - val_loss: 1.0686
Epoch 3/30
9/9 [==============================] - 4s 428ms/step - loss: 0.6659 - val_loss: 0.7884
Epoch 4/30
9/9 [==============================] - 3s 334ms/step - loss: 0.6462 - val_loss: 0.6198
Epoch 5/30
9/9 [==============================] - 3s 355ms/step - loss: 0.5689 - val_loss: 0.6207
Epoch 6/30
9/9 [==============================] - 3s 361ms/step - loss: 0.5067 - val_loss: 0.4876
Epoch 7/30
9/9 [==============================] - 3s 357ms/step - loss: 0.4680 - val_loss: 0.4698
Epoch 8/30
9/9 [==============================] - 3s 325ms/step - loss: 0.4622 - val_loss: 0.7249
Epoch 9/30
9/9 [==============================] - 3s 393ms/step - loss: 0.4215 - val_loss: 0.3826
Epoch 10/30
9/9 [==============================] - 3s 337ms/step - loss: 0.3788 - val_loss: 0.3289
Epoch 11/30
9/9 [==============================] - 3s 345ms/step - loss: 0.3347 - val_loss: 0.3032
Epoch 12/30
9/9 [==============================] - 3s 327ms/step - loss: 0.3488 - val_loss: 0.2631
Epoch 13/30
9/9 [==============================] - 3s 326ms/step - loss: 0.3315 - val_loss: 0.2383
Epoch 14/30
9/9 [==============================] - 3s 331ms/step - loss: 0.3349 - val_loss: 0.2379
Epoch 15/30
9/9 [==============================] - 3s 333ms/step - loss: 0.3394 - val_loss: 0.2151
Epoch 16/30
9/9 [==============================] - 3s 337ms/step - loss: 0.3073 - val_loss: 0.2243
Epoch 17/30
9/9 [==============================] - 3s 355ms/step - loss: 0.3951 - val_loss: 0.2627
Epoch 18/30
9/9 [==============================] - 3s 335ms/step - loss: 0.3657 - val_loss: 0.2175
Epoch 19/30
9/9 [==============================] - 3s 321ms/step - loss: 0.3404 - val_loss: 0.2073
Epoch 20/30
9/9 [==============================] - 3s 320ms/step - loss: 0.3549 - val_loss: 0.1972
Epoch 21/30
9/9 [==============================] - 3s 317ms/step - loss: 0.2802 - val_loss: 0.1936
Epoch 22/30
9/9 [==============================] - 3s 316ms/step - loss: 0.2632 - val_loss: 0.1893
Epoch 23/30
9/9 [==============================] - 3s 318ms/step - loss: 0.2862 - val_loss: 0.1807
Epoch 24/30
9/9 [==============================] - 3s 328ms/step - loss: 0.3083 - val_loss: 0.1923
Epoch 25/30
9/9 [==============================] - 3s 312ms/step - loss: 0.3666 - val_loss: 0.1795
Epoch 26/30
9/9 [==============================] - 3s 316ms/step - loss: 0.2928 - val_loss: 0.1753
Epoch 27/30
9/9 [==============================] - 3s 325ms/step - loss: 0.2945 - val_loss: 0.1790
Epoch 28/30
9/9 [==============================] - 3s 325ms/step - loss: 0.2642 - val_loss: 0.1775
Epoch 29/30
9/9 [==============================] - 3s 333ms/step - loss: 0.2546 - val_loss: 0.1810
Epoch 30/30
9/9 [==============================] - 3s 315ms/step - loss: 0.2650 - val_loss: 0.1795
<keras.callbacks.History at 0x7f5151799fd0>
```
</div>
---
## Visualizing model output
We visualize the model output over the validation set.
The first image is the RGB image, the second image is the ground truth depth map image
and the third one is the predicted depth map image.
```python
test_loader = next(
iter(
DataGenerator(
            data=df[265:].reset_index(drop=True), batch_size=6, dim=(HEIGHT, WIDTH)
)
)
)
visualize_depth_map(test_loader, test=True, model=model)
test_loader = next(
iter(
DataGenerator(
            data=df[300:].reset_index(drop=True), batch_size=6, dim=(HEIGHT, WIDTH)
)
)
)
visualize_depth_map(test_loader, test=True, model=model)
```


---
## Possible improvements
1. You can improve this model by replacing the encoding part of the U-Net with a
pretrained DenseNet or ResNet (a sketch follows this list).
2. Loss functions play an important role in solving this problem.
Tuning the loss functions may yield significant improvement.
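As a rough illustration of the first point, the sketch below is **not** part of the
original example; the chosen backbone and skip-layer names are assumptions, and a real
swap needs decoder stages matched to these feature resolutions:

```python
import tensorflow as tf

# Hypothetical encoder swap: a frozen, pretrained DenseNet121 provides
# multi-scale feature maps in place of the hand-built downscale blocks.
backbone = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=(256, 256, 3)
)
backbone.trainable = False

# Assumed layer names for the skip connections; pick outputs at the
# resolutions your upscale blocks expect.
skip_names = ["conv1/relu", "pool2_relu", "pool3_relu"]
skips = [backbone.get_layer(name).output for name in skip_names]
encoder = tf.keras.Model(backbone.input, skips + [backbone.output])
```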
---
## References
The following papers go deeper into possible approaches for depth estimation.
1. [Depth Prediction Without the Sensors: Leveraging Structure for Unsupervised Learning from Monocular Videos](https://arxiv.org/pdf/1811.06152v1.pdf)
2. [Digging Into Self-Supervised Monocular Depth Estimation](https://openaccess.thecvf.com/content_ICCV_2019/papers/Godard_Digging_Into_Self-Supervised_Monocular_Depth_Estimation_ICCV_2019_paper.pdf)
3. [Deeper Depth Prediction with Fully Convolutional Residual Networks](https://arxiv.org/pdf/1606.00373v2.pdf)
You can also find helpful implementations on the Papers With Code depth estimation task page.
You can use the trained model hosted on [Hugging Face Hub](https://huggingface.co/keras-io/monocular-depth-estimation) and try the demo on [Hugging Face Spaces](https://huggingface.co/spaces/keras-io/Monocular-Depth-Estimation).
# Involutional neural networks
**Author:** [Aritra Roy Gosthipaty](https://twitter.com/ariG23498)<br>
**Date created:** 2021/07/25<br>
**Last modified:** 2021/07/25<br>
**Description:** Deep dive into location-specific and channel-agnostic "involution" kernels.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/involution.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/vision/involution.py)
---
## Introduction
Convolution has been the basis of most modern neural
networks for computer vision. A convolution kernel is
spatial-agnostic and channel-specific. Because of this, it isn't able
to adapt to different visual patterns with respect to
different spatial locations. Along with location-related problems, the
receptive field of convolution creates challenges with regard to capturing
long-range spatial interactions.
To address the above issues, Li et al. rethink the properties
of convolution in
[Involution: Inverting the Inherence of Convolution for Visual Recognition](https://arxiv.org/abs/2103.06255).
The authors propose the "involution kernel", which is location-specific and
channel-agnostic. Due to the location-specific nature of the operation,
the authors say that self-attention falls under the design paradigm of
involution.
This example describes the involution kernel, compares two image
classification models, one with convolution and the other with
involution, and also tries drawing a parallel with the self-attention
layer.
---
## Setup
```python
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import tensorflow as tf
import keras
import matplotlib.pyplot as plt
# Set seed for reproducibility.
tf.random.set_seed(42)
```
---
## Convolution
Convolution remains the mainstay of deep neural networks for computer vision.
To understand Involution, it is necessary to talk about the
convolution operation.

Consider an input tensor **X** with dimensions **H**, **W** and
**C_in**. We take a collection of **C_out** convolution kernels each of
shape **K**, **K**, **C_in**. With the multiply-add operation between
the input tensor and the kernels we obtain an output tensor **Y** with
dimensions **H**, **W**, **C_out**.
In the diagram above `C_out=3`. This makes the output tensor of shape H,
W and 3. One can notice that the convolution kernel does not depend on
the spatial position of the input tensor, which makes it
**location-agnostic**. On the other hand, each channel in the output
tensor is based on a specific convolution filter, which makes it
**channel-specific**.
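A quick shape check (a toy snippet, separate from the model code below) makes these
two properties concrete:

```python
import tensorflow as tf
import keras

# A toy input: batch=1, H=32, W=32, C_in=4.
x = tf.random.normal((1, 32, 32, 4))

# C_out=3 kernels, each of shape K x K x C_in, shared across every
# spatial position (location-agnostic); each output channel comes from
# its own kernel (channel-specific).
conv = keras.layers.Conv2D(filters=3, kernel_size=5, padding="same")
print(conv(x).shape)  # (1, 32, 32, 3)
```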
---
## Involution
The idea is to have an operation that is both **location-specific**
and **channel-agnostic**. Trying to implement these specific properties poses
a challenge. With a fixed number of involution kernels (for each
spatial position) we will **not** be able to process variable-resolution
input tensors.
To solve this problem, the authors have considered *generating* each
kernel conditioned on specific spatial positions. With this method, we
should be able to process variable-resolution input tensors with ease.
The diagram below provides an intuition on this kernel generation
method.

```python
class Involution(keras.layers.Layer):
def __init__(
self, channel, group_number, kernel_size, stride, reduction_ratio, name
):
super().__init__(name=name)
# Initialize the parameters.
self.channel = channel
self.group_number = group_number
self.kernel_size = kernel_size
self.stride = stride
self.reduction_ratio = reduction_ratio
def build(self, input_shape):
# Get the shape of the input.
(_, height, width, num_channels) = input_shape
# Scale the height and width with respect to the strides.
height = height // self.stride
width = width // self.stride
# Define a layer that average pools the input tensor
# if stride is more than 1.
self.stride_layer = (
keras.layers.AveragePooling2D(
pool_size=self.stride, strides=self.stride, padding="same"
)
if self.stride > 1
else tf.identity
)
# Define the kernel generation layer.
self.kernel_gen = keras.Sequential(
[
keras.layers.Conv2D(
filters=self.channel // self.reduction_ratio, kernel_size=1
),
keras.layers.BatchNormalization(),
keras.layers.ReLU(),
keras.layers.Conv2D(
filters=self.kernel_size * self.kernel_size * self.group_number,
kernel_size=1,
),
]
)
# Define reshape layers
self.kernel_reshape = keras.layers.Reshape(
target_shape=(
height,
width,
self.kernel_size * self.kernel_size,
1,
self.group_number,
)
)
self.input_patches_reshape = keras.layers.Reshape(
target_shape=(
height,
width,
self.kernel_size * self.kernel_size,
num_channels // self.group_number,
self.group_number,
)
)
self.output_reshape = keras.layers.Reshape(
target_shape=(height, width, num_channels)
)
def call(self, x):
# Generate the kernel with respect to the input tensor.
# B, H, W, K*K*G
kernel_input = self.stride_layer(x)
kernel = self.kernel_gen(kernel_input)
        # Reshape the kernel.
# B, H, W, K*K, 1, G
kernel = self.kernel_reshape(kernel)
# Extract input patches.
# B, H, W, K*K*C
input_patches = tf.image.extract_patches(
images=x,
sizes=[1, self.kernel_size, self.kernel_size, 1],
strides=[1, self.stride, self.stride, 1],
rates=[1, 1, 1, 1],
padding="SAME",
)
# Reshape the input patches to align with later operations.
# B, H, W, K*K, C//G, G
input_patches = self.input_patches_reshape(input_patches)
# Compute the multiply-add operation of kernels and patches.
# B, H, W, K*K, C//G, G
output = tf.multiply(kernel, input_patches)
# B, H, W, C//G, G
output = tf.reduce_sum(output, axis=3)
# Reshape the output kernel.
# B, H, W, C
output = self.output_reshape(output)
# Return the output tensor and the kernel.
return output, kernel
```
---
## Testing the Involution layer
```python
# Define the input tensor.
input_tensor = tf.random.normal((32, 256, 256, 3))
# Compute involution with stride 1.
output_tensor, _ = Involution(
channel=3, group_number=1, kernel_size=5, stride=1, reduction_ratio=1, name="inv_1"
)(input_tensor)
print(f"with stride 1 ouput shape: {output_tensor.shape}")
# Compute involution with stride 2.
output_tensor, _ = Involution(
channel=3, group_number=1, kernel_size=5, stride=2, reduction_ratio=1, name="inv_2"
)(input_tensor)
print(f"with stride 2 ouput shape: {output_tensor.shape}")
# Compute involution with stride 1, channel 16 and reduction ratio 2.
output_tensor, _ = Involution(
channel=16, group_number=1, kernel_size=5, stride=1, reduction_ratio=2, name="inv_3"
)(input_tensor)
print(
"with channel 16 and reduction ratio 2 ouput shape: {}".format(output_tensor.shape)
)
```
<div class="k-default-codeblock">
```
with stride 1 output shape: (32, 256, 256, 3)
with stride 2 output shape: (32, 128, 128, 3)
with channel 16 and reduction ratio 2 output shape: (32, 256, 256, 3)
```
</div>
---
## Image Classification
In this section, we will build an image-classifier model. There will
be two models: one with convolutions and the other with involutions.
The image-classification model is heavily inspired by this
[Convolutional Neural Network (CNN)](https://www.tensorflow.org/tutorials/images/cnn)
tutorial from Google.
---
## Get the CIFAR10 Dataset
```python
# Load the CIFAR10 dataset.
print("loading the CIFAR10 dataset...")
(
(train_images, train_labels),
(
test_images,
test_labels,
),
) = keras.datasets.cifar10.load_data()
# Normalize pixel values to be between 0 and 1.
(train_images, test_images) = (train_images / 255.0, test_images / 255.0)
# Shuffle and batch the dataset.
train_ds = (
tf.data.Dataset.from_tensor_slices((train_images, train_labels))
.shuffle(256)
.batch(256)
)
test_ds = tf.data.Dataset.from_tensor_slices((test_images, test_labels)).batch(256)
```
<div class="k-default-codeblock">
```
loading the CIFAR10 dataset...
```
</div>
---
## Visualize the data
```python
class_names = [
"airplane",
"automobile",
"bird",
"cat",
"deer",
"dog",
"frog",
"horse",
"ship",
"truck",
]
plt.figure(figsize=(10, 10))
for i in range(25):
plt.subplot(5, 5, i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
plt.imshow(train_images[i])
plt.xlabel(class_names[train_labels[i][0]])
plt.show()
```

---
## Convolutional Neural Network
```python
# Build the conv model.
print("building the convolution model...")
conv_model = keras.Sequential(
[
keras.layers.Conv2D(32, (3, 3), input_shape=(32, 32, 3), padding="same"),
keras.layers.ReLU(name="relu1"),
keras.layers.MaxPooling2D((2, 2)),
keras.layers.Conv2D(64, (3, 3), padding="same"),
keras.layers.ReLU(name="relu2"),
keras.layers.MaxPooling2D((2, 2)),
keras.layers.Conv2D(64, (3, 3), padding="same"),
keras.layers.ReLU(name="relu3"),
keras.layers.Flatten(),
keras.layers.Dense(64, activation="relu"),
keras.layers.Dense(10),
]
)
# Compile the model with the necessary loss function and optimizer.
print("compiling the convolution model...")
conv_model.compile(
optimizer="adam",
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=["accuracy"],
)
# Train the model.
print("conv model training...")
conv_hist = conv_model.fit(train_ds, epochs=20, validation_data=test_ds)
```
<div class="k-default-codeblock">
```
building the convolution model...
compiling the convolution model...
conv model training...
Epoch 1/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 6s 15ms/step - accuracy: 0.3068 - loss: 1.9000 - val_accuracy: 0.4861 - val_loss: 1.4593
Epoch 2/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - accuracy: 0.5153 - loss: 1.3603 - val_accuracy: 0.5741 - val_loss: 1.1913
Epoch 3/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.5949 - loss: 1.1517 - val_accuracy: 0.6095 - val_loss: 1.0965
Epoch 4/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.6414 - loss: 1.0330 - val_accuracy: 0.6260 - val_loss: 1.0635
Epoch 5/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.6690 - loss: 0.9485 - val_accuracy: 0.6622 - val_loss: 0.9833
Epoch 6/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.6951 - loss: 0.8764 - val_accuracy: 0.6783 - val_loss: 0.9413
Epoch 7/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.7122 - loss: 0.8167 - val_accuracy: 0.6856 - val_loss: 0.9134
Epoch 8/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - accuracy: 0.7299 - loss: 0.7709 - val_accuracy: 0.7001 - val_loss: 0.8792
Epoch 9/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - accuracy: 0.7467 - loss: 0.7288 - val_accuracy: 0.6992 - val_loss: 0.8821
Epoch 10/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - accuracy: 0.7591 - loss: 0.6982 - val_accuracy: 0.7235 - val_loss: 0.8237
Epoch 11/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - accuracy: 0.7725 - loss: 0.6550 - val_accuracy: 0.7115 - val_loss: 0.8521
Epoch 12/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.7808 - loss: 0.6302 - val_accuracy: 0.7051 - val_loss: 0.8823
Epoch 13/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.7860 - loss: 0.6101 - val_accuracy: 0.7122 - val_loss: 0.8635
Epoch 14/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.7998 - loss: 0.5786 - val_accuracy: 0.7214 - val_loss: 0.8348
Epoch 15/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.8117 - loss: 0.5473 - val_accuracy: 0.7139 - val_loss: 0.8835
Epoch 16/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.8168 - loss: 0.5267 - val_accuracy: 0.7155 - val_loss: 0.8840
Epoch 17/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.8266 - loss: 0.5022 - val_accuracy: 0.7239 - val_loss: 0.8576
Epoch 18/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.8374 - loss: 0.4750 - val_accuracy: 0.7262 - val_loss: 0.8756
Epoch 19/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.8452 - loss: 0.4505 - val_accuracy: 0.7235 - val_loss: 0.9049
Epoch 20/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - accuracy: 0.8531 - loss: 0.4283 - val_accuracy: 0.7304 - val_loss: 0.8962
```
</div>
---
## Involutional Neural Network
```python
# Build the involution model.
print("building the involution model...")
inputs = keras.Input(shape=(32, 32, 3))
x, _ = Involution(
channel=3, group_number=1, kernel_size=3, stride=1, reduction_ratio=2, name="inv_1"
)(inputs)
x = keras.layers.ReLU()(x)
x = keras.layers.MaxPooling2D((2, 2))(x)
x, _ = Involution(
channel=3, group_number=1, kernel_size=3, stride=1, reduction_ratio=2, name="inv_2"
)(x)
x = keras.layers.ReLU()(x)
x = keras.layers.MaxPooling2D((2, 2))(x)
x, _ = Involution(
channel=3, group_number=1, kernel_size=3, stride=1, reduction_ratio=2, name="inv_3"
)(x)
x = keras.layers.ReLU()(x)
x = keras.layers.Flatten()(x)
x = keras.layers.Dense(64, activation="relu")(x)
outputs = keras.layers.Dense(10)(x)
inv_model = keras.Model(inputs=[inputs], outputs=[outputs], name="inv_model")
# Compile the model with the necessary loss function and optimizer.
print("compiling the involution model...")
inv_model.compile(
optimizer="adam",
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=["accuracy"],
)
# Train the model.
print("inv model training...")
inv_hist = inv_model.fit(train_ds, epochs=20, validation_data=test_ds)
```
<div class="k-default-codeblock">
```
building the involution model...
compiling the involution model...
inv model training...
Epoch 1/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 9s 25ms/step - accuracy: 0.1369 - loss: 2.2728 - val_accuracy: 0.2716 - val_loss: 2.1041
Epoch 2/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.2922 - loss: 1.9489 - val_accuracy: 0.3478 - val_loss: 1.8275
Epoch 3/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.3477 - loss: 1.8098 - val_accuracy: 0.3782 - val_loss: 1.7435
Epoch 4/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - accuracy: 0.3741 - loss: 1.7420 - val_accuracy: 0.3901 - val_loss: 1.6943
Epoch 5/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.3931 - loss: 1.6942 - val_accuracy: 0.4007 - val_loss: 1.6639
Epoch 6/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.4057 - loss: 1.6622 - val_accuracy: 0.4108 - val_loss: 1.6494
Epoch 7/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - accuracy: 0.4134 - loss: 1.6374 - val_accuracy: 0.4202 - val_loss: 1.6363
Epoch 8/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - accuracy: 0.4200 - loss: 1.6166 - val_accuracy: 0.4312 - val_loss: 1.6062
Epoch 9/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.4286 - loss: 1.5949 - val_accuracy: 0.4316 - val_loss: 1.6018
Epoch 10/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.4346 - loss: 1.5794 - val_accuracy: 0.4346 - val_loss: 1.5963
Epoch 11/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - accuracy: 0.4395 - loss: 1.5641 - val_accuracy: 0.4388 - val_loss: 1.5831
Epoch 12/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 5ms/step - accuracy: 0.4445 - loss: 1.5502 - val_accuracy: 0.4443 - val_loss: 1.5826
Epoch 13/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - accuracy: 0.4493 - loss: 1.5391 - val_accuracy: 0.4497 - val_loss: 1.5574
Epoch 14/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - accuracy: 0.4528 - loss: 1.5255 - val_accuracy: 0.4547 - val_loss: 1.5433
Epoch 15/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 4ms/step - accuracy: 0.4575 - loss: 1.5148 - val_accuracy: 0.4548 - val_loss: 1.5438
Epoch 16/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - accuracy: 0.4599 - loss: 1.5072 - val_accuracy: 0.4581 - val_loss: 1.5323
Epoch 17/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - accuracy: 0.4664 - loss: 1.4957 - val_accuracy: 0.4598 - val_loss: 1.5321
Epoch 18/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - accuracy: 0.4701 - loss: 1.4863 - val_accuracy: 0.4575 - val_loss: 1.5302
Epoch 19/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - accuracy: 0.4737 - loss: 1.4790 - val_accuracy: 0.4676 - val_loss: 1.5233
Epoch 20/20
196/196 ━━━━━━━━━━━━━━━━━━━━ 1s 6ms/step - accuracy: 0.4771 - loss: 1.4740 - val_accuracy: 0.4719 - val_loss: 1.5096
```
</div>
---
## Comparisons
In this section, we will look at both models and compare a
few aspects.
### Parameters
One can see that with a similar architecture the parameters in a CNN
are much larger than those of an INN (Involutional Neural Network).
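As a sanity check, the 26 parameters reported for each `Involution` layer can be
derived from its kernel-generation network (`channel=3`, `reduction_ratio=2`,
`kernel_size=3`, `group_number=1`): a 1×1 convolution mapping 3 → 1 channels, a batch
normalization over 1 channel, and a 1×1 convolution mapping 1 → 9 channels:

$$
\underbrace{3 \cdot 1 + 1}_{\text{1×1 conv}} + \underbrace{4}_{\text{batch norm}} + \underbrace{1 \cdot 9 + 9}_{\text{1×1 conv}} = 26
$$

Two of each batch norm's parameters (the moving statistics) are non-trainable, which
accounts for the 6 non-trainable parameters across the three involution layers.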
```python
conv_model.summary()
inv_model.summary()
```
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold">Model: "sequential_3"</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃<span style="font-weight: bold"> Layer (type) </span>┃<span style="font-weight: bold"> Output Shape </span>┃<span style="font-weight: bold"> Param # </span>┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ conv2d_6 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">896</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ relu1 (<span style="color: #0087ff; text-decoration-color: #0087ff">ReLU</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ max_pooling2d (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ conv2d_7 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">18,496</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ relu2 (<span style="color: #0087ff; text-decoration-color: #0087ff">ReLU</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ max_pooling2d_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">8</span>, <span style="color: #00af00; text-decoration-color: #00af00">8</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ conv2d_8 (<span style="color: #0087ff; text-decoration-color: #0087ff">Conv2D</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">8</span>, <span style="color: #00af00; text-decoration-color: #00af00">8</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">36,928</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ relu3 (<span style="color: #0087ff; text-decoration-color: #0087ff">ReLU</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">8</span>, <span style="color: #00af00; text-decoration-color: #00af00">8</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ flatten (<span style="color: #0087ff; text-decoration-color: #0087ff">Flatten</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">4096</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ dense (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">262,208</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ dense_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">10</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">650</span> │
└─────────────────────────────────┴───────────────────────────┴────────────┘
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Total params: </span><span style="color: #00af00; text-decoration-color: #00af00">957,536</span> (3.65 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">319,178</span> (1.22 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Non-trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">0</span> (0.00 B)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Optimizer params: </span><span style="color: #00af00; text-decoration-color: #00af00">638,358</span> (2.44 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold">Model: "inv_model"</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃<span style="font-weight: bold"> Layer (type) </span>┃<span style="font-weight: bold"> Output Shape </span>┃<span style="font-weight: bold"> Param # </span>┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ input_layer_4 (<span style="color: #0087ff; text-decoration-color: #0087ff">InputLayer</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>, <span style="color: #00af00; text-decoration-color: #00af00">3</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ inv_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Involution</span>) │ [(<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>, <span style="color: #00af00; text-decoration-color: #00af00">3</span>), │ <span style="color: #00af00; text-decoration-color: #00af00">26</span> │
│ │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>, <span style="color: #00af00; text-decoration-color: #00af00">9</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>)] │ │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ re_lu_4 (<span style="color: #0087ff; text-decoration-color: #0087ff">ReLU</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>, <span style="color: #00af00; text-decoration-color: #00af00">32</span>, <span style="color: #00af00; text-decoration-color: #00af00">3</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ max_pooling2d_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">3</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ inv_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">Involution</span>) │ [(<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">3</span>), │ <span style="color: #00af00; text-decoration-color: #00af00">26</span> │
│ │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">9</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>)] │ │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ re_lu_6 (<span style="color: #0087ff; text-decoration-color: #0087ff">ReLU</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">16</span>, <span style="color: #00af00; text-decoration-color: #00af00">3</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ max_pooling2d_3 (<span style="color: #0087ff; text-decoration-color: #0087ff">MaxPooling2D</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">8</span>, <span style="color: #00af00; text-decoration-color: #00af00">8</span>, <span style="color: #00af00; text-decoration-color: #00af00">3</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ inv_3 (<span style="color: #0087ff; text-decoration-color: #0087ff">Involution</span>) │ [(<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">8</span>, <span style="color: #00af00; text-decoration-color: #00af00">8</span>, <span style="color: #00af00; text-decoration-color: #00af00">3</span>), (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, │ <span style="color: #00af00; text-decoration-color: #00af00">26</span> │
│ │ <span style="color: #00af00; text-decoration-color: #00af00">8</span>, <span style="color: #00af00; text-decoration-color: #00af00">8</span>, <span style="color: #00af00; text-decoration-color: #00af00">9</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>, <span style="color: #00af00; text-decoration-color: #00af00">1</span>)] │ │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ re_lu_8 (<span style="color: #0087ff; text-decoration-color: #0087ff">ReLU</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">8</span>, <span style="color: #00af00; text-decoration-color: #00af00">8</span>, <span style="color: #00af00; text-decoration-color: #00af00">3</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ flatten_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">Flatten</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">192</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ dense_2 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">64</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">12,352</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ dense_3 (<span style="color: #0087ff; text-decoration-color: #0087ff">Dense</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">10</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">650</span> │
└─────────────────────────────────┴───────────────────────────┴────────────┘
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Total params: </span><span style="color: #00af00; text-decoration-color: #00af00">39,230</span> (153.25 KB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">13,074</span> (51.07 KB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Non-trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">6</span> (24.00 B)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Optimizer params: </span><span style="color: #00af00; text-decoration-color: #00af00">26,150</span> (102.15 KB)
</pre>
### Loss and Accuracy Plots
Here, the loss and the accuracy plots demonstrate that INNs are slow
learners (with fewer parameters).
```python
plt.figure(figsize=(20, 5))
plt.subplot(1, 2, 1)
plt.title("Convolution Loss")
plt.plot(conv_hist.history["loss"], label="loss")
plt.plot(conv_hist.history["val_loss"], label="val_loss")
plt.legend()
plt.subplot(1, 2, 2)
plt.title("Involution Loss")
plt.plot(inv_hist.history["loss"], label="loss")
plt.plot(inv_hist.history["val_loss"], label="val_loss")
plt.legend()
plt.show()
plt.figure(figsize=(20, 5))
plt.subplot(1, 2, 1)
plt.title("Convolution Accuracy")
plt.plot(conv_hist.history["accuracy"], label="accuracy")
plt.plot(conv_hist.history["val_accuracy"], label="val_accuracy")
plt.legend()
plt.subplot(1, 2, 2)
plt.title("Involution Accuracy")
plt.plot(inv_hist.history["accuracy"], label="accuracy")
plt.plot(inv_hist.history["val_accuracy"], label="val_accuracy")
plt.legend()
plt.show()
```


---
## Visualizing Involution Kernels
To visualize the kernels, we take the sum of **K×K** values from each
involution kernel. **The representatives at all the spatial locations
together form the corresponding heat map.**
The authors mention:
"Our proposed involution is reminiscent of self-attention and
essentially could become a generalized version of it."
With the visualization of the kernels, we can indeed obtain an attention
map of the image. The learned involution kernels provide attention to
individual spatial positions of the input tensor. The
**location-specific** property makes involution a generic space of models
to which self-attention belongs.
```python
layer_names = ["inv_1", "inv_2", "inv_3"]
outputs = [inv_model.get_layer(name).output[1] for name in layer_names]
vis_model = keras.Model(inv_model.input, outputs)
fig, axes = plt.subplots(nrows=10, ncols=4, figsize=(10, 30))
for ax, test_image in zip(axes, test_images[:10]):
(inv1_kernel, inv2_kernel, inv3_kernel) = vis_model.predict(test_image[None, ...])
inv1_kernel = tf.reduce_sum(inv1_kernel, axis=[-1, -2, -3])
inv2_kernel = tf.reduce_sum(inv2_kernel, axis=[-1, -2, -3])
inv3_kernel = tf.reduce_sum(inv3_kernel, axis=[-1, -2, -3])
ax[0].imshow(keras.utils.array_to_img(test_image))
ax[0].set_title("Input Image")
ax[1].imshow(keras.utils.array_to_img(inv1_kernel[0, ..., None]))
ax[1].set_title("Involution Kernel 1")
ax[2].imshow(keras.utils.array_to_img(inv2_kernel[0, ..., None]))
ax[2].set_title("Involution Kernel 2")
ax[3].imshow(keras.utils.array_to_img(inv3_kernel[0, ..., None]))
ax[3].set_title("Involution Kernel 3")
```
<div class="k-default-codeblock">
```
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 503ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 11ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 11ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 11ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 10ms/step
1/1 ━━━━━━━━━━━━━━━━━━━━ 0s 9ms/step
```
</div>

---
## Conclusions
In this example, the main focus was to build an `Involution` layer which
can be easily reused. While our comparisons were based on a specific
task, feel free to use the layer for different tasks and report your
results.
In my opinion, the key takeaway of involution is its
relationship with self-attention. The intuition behind location-specific
and channel-specific processing makes sense in many tasks.
Moving forward one can:
- Look at [Yannick's video](https://youtu.be/pH2jZun8MoY) on
involution for a better understanding.
- Experiment with the various hyperparameters of the involution layer.
- Build different models with the involution layer.
- Try building a different kernel generation method altogether, as in the sketch below.
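For instance, here is a minimal, hypothetical sketch of an alternative
kernel-generation head that mixes in a wider spatial context with a dilated
depthwise convolution instead of relying on pointwise convolutions alone. The
function name and arguments are assumptions for illustration; wire the
returned module into your own `Involution` implementation wherever the
kernels are generated.

```python
import keras
from keras import layers


def dilated_kernel_gen(channel, group_number, kernel_size, reduction_ratio=4):
    """Emits K * K * G kernel values for every spatial position."""
    return keras.Sequential(
        [
            # Bottleneck the channels first to keep the head cheap.
            layers.Conv2D(channel // reduction_ratio, kernel_size=1),
            layers.BatchNormalization(),
            layers.ReLU(),
            # Mix in a wider spatial context before emitting the kernels.
            layers.DepthwiseConv2D(3, dilation_rate=2, padding="same"),
            # Project to K * K * G values per spatial location.
            layers.Conv2D(kernel_size * kernel_size * group_number, kernel_size=1),
        ]
    )
```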
You can use the trained model hosted on [Hugging Face Hub](https://huggingface.co/keras-io/involution)
and try the demo on [Hugging Face Spaces](https://huggingface.co/spaces/keras-io/involution).
| keras-io/examples/vision/md/involution.md/0 | {
"file_path": "keras-io/examples/vision/md/involution.md",
"repo_id": "keras-io",
"token_count": 16589
} | 105 |
"""
Title: Metric learning for image similarity search using TensorFlow Similarity
Author: [Owen Vallis](https://twitter.com/owenvallis)
Date created: 2021/09/30
Last modified: 2022/02/29
Description: Example of using similarity metric learning on CIFAR-10 images.
Accelerator: GPU
"""
"""
## Overview
This example is based on the
["Metric learning for image similarity search" example](https://keras.io/examples/vision/metric_learning/).
We aim to use the same data set but implement the model using
[TensorFlow Similarity](https://github.com/tensorflow/similarity).
Metric learning aims to train models that can embed inputs into a
high-dimensional space such that "similar" inputs are pulled closer to each
other and "dissimilar" inputs are pushed farther apart. Once trained, these
models can produce embeddings for downstream systems where such similarity is
useful, for instance as a ranking signal for search or as a form of pretrained
embedding model for another supervised problem.
For a more detailed overview of metric learning, see:
* [What is metric learning?](http://contrib.scikit-learn.org/metric-learn/introduction.html)
* ["Using crossentropy for metric learning" tutorial](https://www.youtube.com/watch?v=Jb4Ewl5RzkI)
"""
"""
## Setup
This tutorial will use the [TensorFlow Similarity](https://github.com/tensorflow/similarity) library
to learn and evaluate the similarity embedding.
TensorFlow Similarity provides components that:
* Make training contrastive models simple and fast.
* Make it easier to ensure that batches contain pairs of examples.
* Enable the evaluation of the quality of the embedding.
TensorFlow Similarity can be installed easily via pip, as follows:
```
pip -q install tensorflow_similarity
```
"""
import random
from matplotlib import pyplot as plt
from mpl_toolkits import axes_grid1
import numpy as np
import tensorflow as tf
from tensorflow import keras
import tensorflow_similarity as tfsim
tfsim.utils.tf_cap_memory()
print("TensorFlow:", tf.__version__)
print("TensorFlow Similarity:", tfsim.__version__)
"""
## Dataset samplers
We will be using the
[CIFAR-10](https://www.tensorflow.org/datasets/catalog/cifar10)
dataset for this tutorial.
For a similarity model to learn efficiently, each batch must contain at least 2
examples of each class.
To make this easy, TensorFlow Similarity offers `Sampler` objects that enable you to set both
the number of classes and the minimum number of examples of each class per
batch.
The training and validation datasets will be created using the
`TFDatasetMultiShotMemorySampler` object. This creates a sampler that loads datasets
from [TensorFlow Datasets](https://www.tensorflow.org/datasets) and yields
batches containing a target number of classes and a target number of examples
per class. Additionally, we can restrict the sampler to only yield the subset of
classes defined in `class_list`, enabling us to train on a subset of the classes
and then test how the embedding generalizes to the unseen classes. This can be
useful when working on few-shot learning problems.
The following cell creates a `train_ds` sampler that:
* Loads the CIFAR-10 dataset from TFDS and then takes the `examples_per_class_per_batch`.
* Ensures the sampler restricts the classes to those defined in `class_list`.
* Ensures each batch contains 10 different classes with 8 examples each.
We also create a validation dataset in the same way, but we limit the total number of
examples per class to 100 and the examples per class per batch is set to the
default of 2.
"""
# This determines the number of classes used during training.
# Here we are using all the classes.
num_known_classes = 10
class_list = random.sample(population=range(10), k=num_known_classes)
classes_per_batch = 10
# Passing multiple examples per class per batch ensures that each example has
# multiple positive pairs. This can be useful when performing triplet mining or
# when using losses like `MultiSimilarityLoss` or `CircleLoss` as these can
# take a weighted mix of all the positive pairs. In general, more examples per
# class will lead to more information for the positive pairs, while more classes
# per batch will provide more varied information in the negative pairs. However,
# the losses compute the pairwise distance between the examples in a batch so
# the upper limit of the batch size is restricted by the memory.
examples_per_class_per_batch = 8
print(
"Batch size is: "
f"{min(classes_per_batch, num_known_classes) * examples_per_class_per_batch}"
)
print(" Create Training Data ".center(34, "#"))
train_ds = tfsim.samplers.TFDatasetMultiShotMemorySampler(
"cifar10",
classes_per_batch=min(classes_per_batch, num_known_classes),
splits="train",
steps_per_epoch=4000,
examples_per_class_per_batch=examples_per_class_per_batch,
class_list=class_list,
)
print("\n" + " Create Validation Data ".center(34, "#"))
val_ds = tfsim.samplers.TFDatasetMultiShotMemorySampler(
"cifar10",
classes_per_batch=classes_per_batch,
splits="test",
total_examples_per_class=100,
)
"""
## Visualize the dataset
The samplers will shuffle the dataset, so we can get a sense of the dataset by
plotting the first 25 images.
The samplers provide a `get_slice(begin, size)` method that allows us to easily
select a block of samples.
Alternatively, we can use the `generate_batch()` method to yield a batch. This
can allow us to check that a batch contains the expected number of classes and
examples per class, as shown in the sanity check after the plot below.
"""
num_cols = num_rows = 5
# Get the first 25 examples.
x_slice, y_slice = train_ds.get_slice(begin=0, size=num_cols * num_rows)
fig = plt.figure(figsize=(6.0, 6.0))
grid = axes_grid1.ImageGrid(fig, 111, nrows_ncols=(num_cols, num_rows), axes_pad=0.1)
for ax, im, label in zip(grid, x_slice, y_slice):
ax.imshow(im)
ax.axis("off")
"""
## Embedding model
Next we define a `SimilarityModel` using the Keras Functional API. The model
is a standard convnet with the addition of a `MetricEmbedding` layer that
applies L2 normalization. The metric embedding layer is helpful when using
`Cosine` distance as we only care about the angle between the vectors.
Additionally, the `SimilarityModel` provides a number of helper methods for:
* Indexing embedded examples
* Performing example lookups
* Evaluating the classification
* Evaluating the quality of the embedding space
See the [TensorFlow Similarity documentation](https://github.com/tensorflow/similarity)
for more details.
"""
embedding_size = 256
inputs = keras.layers.Input((32, 32, 3))
x = keras.layers.Rescaling(scale=1.0 / 255)(inputs)
x = keras.layers.Conv2D(64, 3, activation="relu")(x)
x = keras.layers.BatchNormalization()(x)
x = keras.layers.Conv2D(128, 3, activation="relu")(x)
x = keras.layers.BatchNormalization()(x)
x = keras.layers.MaxPool2D((4, 4))(x)
x = keras.layers.Conv2D(256, 3, activation="relu")(x)
x = keras.layers.BatchNormalization()(x)
x = keras.layers.Conv2D(256, 3, activation="relu")(x)
x = keras.layers.GlobalMaxPool2D()(x)
outputs = tfsim.layers.MetricEmbedding(embedding_size)(x)
# building model
model = tfsim.models.SimilarityModel(inputs, outputs)
model.summary()
"""
## Similarity loss
The similarity loss expects batches containing at least 2 examples of each
class, from which it computes the loss over the pairwise positive and negative
distances. Here we are using `MultiSimilarityLoss()`
([paper](https://arxiv.org/abs/1904.06627)), one of several losses in
[TensorFlow Similarity](https://github.com/tensorflow/similarity). This loss
attempts to use all informative pairs in the batch, taking into account the
self-similarity, positive-similarity, and the negative-similarity.
"""
epochs = 3
learning_rate = 0.002
val_steps = 50
# init similarity loss
loss = tfsim.losses.MultiSimilarityLoss()
# compiling and training
model.compile(
optimizer=keras.optimizers.Adam(learning_rate),
loss=loss,
steps_per_execution=10,
)
history = model.fit(
train_ds, epochs=epochs, validation_data=val_ds, validation_steps=val_steps
)
"""
## Indexing
Now that we have trained our model, we can create an index of examples. Here we
batch index the first 200 validation examples by passing the x and y to the index
along with storing the image in the data parameter. The `x_index` values are
embedded and then added to the index to make them searchable. The `y_index` and
data parameters are optional but allow the user to associate metadata with the
embedded example.
"""
x_index, y_index = val_ds.get_slice(begin=0, size=200)
model.reset_index()
model.index(x_index, y_index, data=x_index)
"""
## Calibration
Once the index is built, we can calibrate a distance threshold using a matching
strategy and a calibration metric.
Here we are searching for the optimal F1 score while using K=1 as our
classifier. All matches at or below the calibrated threshold distance will be
labeled as a Positive match between the query example and the label associated
with the match result, while all matches above the threshold distance will be
labeled as a Negative match.
Additionally, we pass in extra metrics to compute as well. All values in the
output are computed at the calibrated threshold.
Finally, `model.calibrate()` returns a `CalibrationResults` object containing:
* `"cutpoints"`: A Python dict mapping the cutpoint name to a dict containing the
`ClassificationMetric` values associated with a particular distance threshold,
e.g., `"optimal" : {"acc": 0.90, "f1": 0.92}`.
* `"thresholds"`: A Python dict mapping `ClassificationMetric` names to a list
containing the metric's value computed at each of the distance thresholds, e.g.,
`{"f1": [0.99, 0.80], "distance": [0.0, 1.0]}`.
"""
x_train, y_train = train_ds.get_slice(begin=0, size=1000)
calibration = model.calibrate(
x_train,
y_train,
calibration_metric="f1",
matcher="match_nearest",
extra_metrics=["precision", "recall", "binary_accuracy"],
verbose=1,
)
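"""
As a sketch of how to read the returned object, the metrics at the "optimal"
cutpoint (which we reuse below for matching) can be pulled straight from the
`cutpoints` dict described above:
"""

print(calibration.cutpoints["optimal"])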
"""
## Visualization
It may be difficult to get a sense of the model quality from the metrics alone.
A complementary approach is to manually inspect a set of query results to get a
feel for the match quality.
Here we take 10 validation examples and plot them with their 5 nearest
neighbors and the distances to the query example. Looking at the results, we see
that while they are imperfect they still represent meaningfully similar images,
and that the model is able to find similar images irrespective of their pose or
image illumination.
We can also see that the model is very confident with certain images, resulting
in very small distances between the query and the neighbors. Conversely, we see
more mistakes in the class labels as the distances become larger. This is one of
the reasons why calibration is critical for matching applications.
"""
num_neighbors = 5
labels = [
"Airplane",
"Automobile",
"Bird",
"Cat",
"Deer",
"Dog",
"Frog",
"Horse",
"Ship",
"Truck",
"Unknown",
]
class_mapping = {c_id: c_lbl for c_id, c_lbl in zip(range(11), labels)}
x_display, y_display = val_ds.get_slice(begin=200, size=10)
# lookup nearest neighbors in the index
nns = model.lookup(x_display, k=num_neighbors)
# display
for idx in np.argsort(y_display):
tfsim.visualization.viz_neigbors_imgs(
x_display[idx],
y_display[idx],
nns[idx],
class_mapping=class_mapping,
fig_size=(16, 2),
)
"""
## Metrics
We can also plot the extra metrics contained in the `CalibrationResults` to get
a sense of the matching performance as the distance threshold increases.
The following plots show the Precision, Recall, and F1 Score. We can see that
the matching precision degrades as the distance increases, but that the
percentage of the queries that we accept as positive matches (recall) grows
faster up to the calibrated distance threshold.
"""
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 5))
x = calibration.thresholds["distance"]
ax1.plot(x, calibration.thresholds["precision"], label="precision")
ax1.plot(x, calibration.thresholds["recall"], label="recall")
ax1.plot(x, calibration.thresholds["f1"], label="f1 score")
ax1.legend()
ax1.set_title("Metric evolution as distance increase")
ax1.set_xlabel("Distance")
ax1.set_ylim((-0.05, 1.05))
ax2.plot(calibration.thresholds["recall"], calibration.thresholds["precision"])
ax2.set_title("Precision recall curve")
ax2.set_xlabel("Recall")
ax2.set_ylabel("Precision")
ax2.set_ylim((-0.05, 1.05))
plt.show()
"""
We can also take 100 examples for each class and plot the confusion matrix for
each example and their nearest match. We also add an "extra" 10th class to
represent the matches above the calibrated distance threshold.
We can see that most of the errors are between the animal classes with an
interesting number of confusions between Airplane and Bird. Additionally, we see
that only a few of the 100 examples for each class returned matches outside of
the calibrated distance threshold.
"""
cutpoint = "optimal"
# This yields 100 examples for each class.
# We defined this when we created the val_ds sampler.
x_confusion, y_confusion = val_ds.get_slice(0, -1)
matches = model.match(x_confusion, cutpoint=cutpoint, no_match_label=10)
cm = tfsim.visualization.confusion_matrix(
matches,
y_confusion,
labels=labels,
title="Confusion matrix for cutpoint:%s" % cutpoint,
normalize=False,
)
"""
## No Match
We can plot the examples outside of the calibrated threshold to see which images
are not matching any indexed examples.
This may provide insight into what other examples may need to be indexed or
surface anomalous examples within the class.
"""
idx_no_match = np.where(np.array(matches) == 10)
no_match_queries = x_confusion[idx_no_match]
if len(no_match_queries):
plt.imshow(no_match_queries[0])
else:
print("All queries have a match below the distance threshold.")
"""
## Visualize clusters
One of the best ways to quickly get a sense of how well the model is
doing and understand its shortcomings is to project the embeddings into a 2D
space.
This allows us to inspect clusters of images and understand which classes are
entangled.
"""
# Each class in val_ds was restricted to 100 examples.
num_examples_to_clusters = 1000
thumb_size = 96
plot_size = 800
vx, vy = val_ds.get_slice(0, num_examples_to_clusters)
# Uncomment to run the interactive projector.
# tfsim.visualization.projector(
# model.predict(vx),
# labels=vy,
# images=vx,
# class_mapping=class_mapping,
# image_size=thumb_size,
# plot_size=plot_size,
# )
| keras-io/examples/vision/metric_learning_tf_similarity.py/0 | {
"file_path": "keras-io/examples/vision/metric_learning_tf_similarity.py",
"repo_id": "keras-io",
"token_count": 4513
} | 106 |
<jupyter_start><jupyter_text>Getting Started with KerasNLP**Author:** [Jonathan Bischof](https://github.com/jbischof)**Date created:** 2022/12/15**Last modified:** 2023/07/01**Description:** An introduction to the KerasNLP API. IntroductionKerasNLP is a natural language processing library that supports users throughtheir entire development cycle. Our workflows are built from modular componentsthat have state-of-the-art preset weights and architectures when usedout-of-the-box and are easily customizable when more control is needed.This library is an extension of the core Keras API; all high-level modules are[`Layers`](/api/layers/) or [`Models`](/api/models/). If you are familiar with Keras,congratulations! You already understand most of KerasNLP.KerasNLP uses Keras 3 to work with any of TensorFlow, Pytorch and Jax. In theguide below, we will use the `jax` backend for training our models, and[tf.data](https://www.tensorflow.org/guide/data) for efficiently running ourinput preprocessing. But feel free to mix things up! This guide runs inTensorFlow or PyTorch backends with zero changes, simply update the`KERAS_BACKEND` below.This guide demonstrates our modular approach using a sentiment analysis example at sixlevels of complexity:* Inference with a pretrained classifier* Fine tuning a pretrained backbone* Fine tuning with user-controlled preprocessing* Fine tuning a custom model* Pretraining a backbone model* Build and train your own transformer from scratchThroughout our guide, we use Professor Keras, the official Keras mascot, as a visualreference for the complexity of the material:<jupyter_code>!pip install -q --upgrade keras-nlp
!pip install -q --upgrade keras # Upgrade to Keras 3.
import os
os.environ["KERAS_BACKEND"] = "jax" # or "tensorflow" or "torch"
import keras_nlp
import keras
# Use mixed precision to speed up all training in this guide.
keras.mixed_precision.set_global_policy("mixed_float16")<jupyter_output><empty_output><jupyter_text>API quickstartOur highest level API is `keras_nlp.models`. These symbols cover the complete userjourney of converting strings to tokens, tokens to dense features, and dense features totask-specific output. For each `XX` architecture (e.g., `Bert`), we offer the followingmodules:* **Tokenizer**: `keras_nlp.models.XXTokenizer` * **What it does**: Converts strings to sequences of token ids. * **Why it's important**: The raw bytes of a string are too high dimensional to be useful features so we first map them to a small number of tokens, for example `"The quick brown fox"` to `["the", "qu", "ick", "br", "own", "fox"]`. * **Inherits from**: `keras.layers.Layer`.* **Preprocessor**: `keras_nlp.models.XXPreprocessor` * **What it does**: Converts strings to a dictionary of preprocessed tensors consumed by the backbone, starting with tokenization. * **Why it's important**: Each model uses special tokens and extra tensors to understand the input such as delimiting input segments and identifying padding tokens. Padding each sequence to the same length improves computational efficiency. * **Has a**: `XXTokenizer`. * **Inherits from**: `keras.layers.Layer`.* **Backbone**: `keras_nlp.models.XXBackbone` * **What it does**: Converts preprocessed tensors to dense features. *Does not handle strings; call the preprocessor first.* * **Why it's important**: The backbone distills the input tokens into dense features that can be used in downstream tasks. It is generally pretrained on a language modeling task using massive amounts of unlabeled data. Transferring this information to a new task is a major breakthrough in modern NLP. * **Inherits from**: `keras.Model`.* **Task**: e.g., `keras_nlp.models.XXClassifier` * **What it does**: Converts strings to task-specific output (e.g., classification probabilities). * **Why it's important**: Task models combine string preprocessing and the backbone model with task-specific `Layers` to solve a problem such as sentence classification, token classification, or text generation. The additional `Layers` must be fine-tuned on labeled data. * **Has a**: `XXBackbone` and `XXPreprocessor`. * **Inherits from**: `keras.Model`.Here is the modular hierarchy for `BertClassifier` (all relationships are compositional):All modules can be used independently and have a `from_preset()` method in addition tothe standard constructor that instantiates the class with **preset** architecture andweights (see examples below). DataWe will use a running example of sentiment analysis of IMDB movie reviews. In this task,we use the text to predict whether the review was positive (`label = 1`) or negative(`label = 0`).We load the data using `keras.utils.text_dataset_from_directory`, which utilizes thepowerful `tf.data.Dataset` format for examples.<jupyter_code>!curl -O https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
!tar -xf aclImdb_v1.tar.gz
!# Remove unsupervised examples
!rm -r aclImdb/train/unsup
BATCH_SIZE = 16
imdb_train = keras.utils.text_dataset_from_directory(
"aclImdb/train",
batch_size=BATCH_SIZE,
)
imdb_test = keras.utils.text_dataset_from_directory(
"aclImdb/test",
batch_size=BATCH_SIZE,
)
# Inspect first review
# Format is (review text tensor, label tensor)
print(imdb_train.unbatch().take(1).get_single_element())<jupyter_output><empty_output><jupyter_text>Inference with a pretrained classifierThe highest level module in KerasNLP is a **task**. A **task** is a `keras.Model`consisting of a (generally pretrained) **backbone** model and task-specific layers.Here's an example using `keras_nlp.models.BertClassifier`.**Note**: Outputs are the logits per class (e.g., `[0, 0]` is 50% chance of positive). The output is[negative, positive] for binary classification.<jupyter_code>classifier = keras_nlp.models.BertClassifier.from_preset("bert_tiny_en_uncased_sst2")
# Note: batched inputs expected so must wrap string in iterable
classifier.predict(["I love modular workflows in keras-nlp!"])<jupyter_output><empty_output><jupyter_text>All **tasks** have a `from_preset` method that constructs a `keras.Model` instance withpreset preprocessing, architecture and weights. This means that we can pass raw stringsin any format accepted by a `keras.Model` and get output specific to our task.This particular **preset** is a `"bert_tiny_uncased_en"` **backbone** fine-tuned on`sst2`, another movie review sentiment analysis (this time from Rotten Tomatoes). We usethe `tiny` architecture for demo purposes, but larger models are recommended for SoTAperformance. For all the task-specific presets available for `BertClassifier`, seeour keras.io [models page](https://keras.io/api/keras_nlp/models/).Let's evaluate our classifier on the IMDB dataset. You will note we don't need tocall `keras.Model.compile` here. All **task** models like `BertClassifier` ship withcompilation defaults, meaning we can just call `keras.Model.evaluate` directly. Youcan always call compile as normal to override these defaults (e.g. to add new metrics).The output below is [loss, accuracy],<jupyter_code>classifier.evaluate(imdb_test)<jupyter_output><empty_output><jupyter_text>Our result is 78% accuracy without training anything. Not bad! Fine tuning a pretrained BERT backboneWhen labeled text specific to our task is available, fine-tuning a custom classifier canimprove performance. If we want to predict IMDB review sentiment, using IMDB data shouldperform better than Rotten Tomatoes data! And for many tasks, no relevant pretrained modelwill be available (e.g., categorizing customer reviews).The workflow for fine-tuning is almost identical to above, except that we request a**preset** for the **backbone**-only model rather than the entire classifier. When passeda **backbone** **preset**, a **task** `Model` will randomly initialize all task-specificlayers in preparation for training. For all the **backbone** presets available for`BertClassifier`, see our keras.io [models page](https://keras.io/api/keras_nlp/models/).To train your classifier, use `keras.Model.fit` as with any other`keras.Model`. As with our inference example, we can rely on the compilationdefaults for the **task** and skip `keras.Model.compile`. As preprocessing isincluded, we again pass the raw data.<jupyter_code>classifier = keras_nlp.models.BertClassifier.from_preset(
"bert_tiny_en_uncased",
num_classes=2,
)
classifier.fit(
imdb_train,
validation_data=imdb_test,
epochs=1,
)<jupyter_output><empty_output><jupyter_text>Here we see a significant lift in validation accuracy (0.78 -> 0.87) with a single epoch oftraining even though the IMDB dataset is much smaller than `sst2`. Fine tuning with user-controlled preprocessingFor some advanced training scenarios, users might prefer direct control overpreprocessing. For large datasets, examples can be preprocessed in advance and saved todisk or preprocessed by a separate worker pool using `tf.data.experimental.service`. Inother cases, custom preprocessing is needed to handle the inputs.Pass `preprocessor=None` to the constructor of a **task** `Model` to skip automaticpreprocessing or pass a custom `BertPreprocessor` instead. Separate preprocessing from the same presetEach model architecture has a parallel **preprocessor** `Layer` with its own`from_preset` constructor. Using the same **preset** for this `Layer` will return thematching **preprocessor** as the **task**.In this workflow we train the model over three epochs using `tf.data.Dataset.cache()`,which computes the preprocessing once and caches the result before fitting begins.**Note:** we can use `tf.data` for preprocessing while running on theJax or PyTorch backend. The input dataset will automatically be converted tobackend native tensor types during fit. In fact, given the efficiency of `tf.data`for running preprocessing, this is good practice on all backends.<jupyter_code>import tensorflow as tf
preprocessor = keras_nlp.models.BertPreprocessor.from_preset(
"bert_tiny_en_uncased",
sequence_length=512,
)
# Apply the preprocessor to every sample of train and test data using `map()`.
# `tf.data.AUTOTUNE` and `prefetch()` are options to tune performance, see
# https://www.tensorflow.org/guide/data_performance for details.
# Note: only call `cache()` if your training data fits in CPU memory!
imdb_train_cached = (
imdb_train.map(preprocessor, tf.data.AUTOTUNE).cache().prefetch(tf.data.AUTOTUNE)
)
imdb_test_cached = (
imdb_test.map(preprocessor, tf.data.AUTOTUNE).cache().prefetch(tf.data.AUTOTUNE)
)
classifier = keras_nlp.models.BertClassifier.from_preset(
"bert_tiny_en_uncased", preprocessor=None, num_classes=2
)
classifier.fit(
imdb_train_cached,
validation_data=imdb_test_cached,
epochs=3,
)<jupyter_output><empty_output><jupyter_text>After three epochs, our validation accuracy has only increased to 0.88. This is both afunction of the small size of our dataset and our model. To exceed 90% accuracy, trylarger **presets** such as `"bert_base_en_uncased"`. For all the **backbone** presetsavailable for `BertClassifier`, see our keras.io [models page](https://keras.io/api/keras_nlp/models/). Custom preprocessingIn cases where custom preprocessing is required, we offer direct access to the`Tokenizer` class that maps raw strings to tokens. It also has a `from_preset()`constructor to get the vocabulary matching pretraining.**Note:** `BertTokenizer` does not pad sequences by default, so the output isragged (each sequence has varying length). The `MultiSegmentPacker` belowhandles padding these ragged sequences to dense tensor types (e.g. `tf.Tensor`or `torch.Tensor`).<jupyter_code>tokenizer = keras_nlp.models.BertTokenizer.from_preset("bert_tiny_en_uncased")
tokenizer(["I love modular workflows!", "Libraries over frameworks!"])
# Write your own packer or use one of our `Layers`
packer = keras_nlp.layers.MultiSegmentPacker(
start_value=tokenizer.cls_token_id,
end_value=tokenizer.sep_token_id,
# Note: This cannot be longer than the preset's `sequence_length`, and there
# is no check for a custom preprocessor!
sequence_length=64,
)
# This function takes a text sample `x` and its
# corresponding label `y` as input and converts the
# text into a format suitable for input into a BERT model.
def preprocessor(x, y):
token_ids, segment_ids = packer(tokenizer(x))
x = {
"token_ids": token_ids,
"segment_ids": segment_ids,
"padding_mask": token_ids != 0,
}
return x, y
imdb_train_preprocessed = imdb_train.map(preprocessor, tf.data.AUTOTUNE).prefetch(
tf.data.AUTOTUNE
)
imdb_test_preprocessed = imdb_test.map(preprocessor, tf.data.AUTOTUNE).prefetch(
tf.data.AUTOTUNE
)
# Preprocessed example
print(imdb_train_preprocessed.unbatch().take(1).get_single_element())<jupyter_output><empty_output><jupyter_text>Fine tuning with a custom modelFor more advanced applications, an appropriate **task** `Model` may not be available. Inthis case, we provide direct access to the **backbone** `Model`, which has its own`from_preset` constructor and can be composed with custom `Layer`s. Detailed examples canbe found at our [transfer learning guide](https://keras.io/guides/transfer_learning/).A **backbone** `Model` does not include automatic preprocessing but can be paired with amatching **preprocessor** using the same **preset** as shown in the previous workflow.In this workflow, we experiment with freezing our backbone model and adding two trainabletransformer layers to adapt to the new input.**Note**: We can ignore the warning about gradients for the `pooled_dense` layer becausewe are using BERT's sequence output.<jupyter_code>preprocessor = keras_nlp.models.BertPreprocessor.from_preset("bert_tiny_en_uncased")
backbone = keras_nlp.models.BertBackbone.from_preset("bert_tiny_en_uncased")
imdb_train_preprocessed = (
imdb_train.map(preprocessor, tf.data.AUTOTUNE).cache().prefetch(tf.data.AUTOTUNE)
)
imdb_test_preprocessed = (
imdb_test.map(preprocessor, tf.data.AUTOTUNE).cache().prefetch(tf.data.AUTOTUNE)
)
backbone.trainable = False
inputs = backbone.input
sequence = backbone(inputs)["sequence_output"]
for _ in range(2):
sequence = keras_nlp.layers.TransformerEncoder(
num_heads=2,
intermediate_dim=512,
dropout=0.1,
)(sequence)
# Use [CLS] token output to classify
outputs = keras.layers.Dense(2)(sequence[:, backbone.cls_token_index, :])
model = keras.Model(inputs, outputs)
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.AdamW(5e-5),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
jit_compile=True,
)
model.summary()
model.fit(
imdb_train_preprocessed,
validation_data=imdb_test_preprocessed,
epochs=3,
)<jupyter_output><empty_output><jupyter_text>This model achieves reasonable accuracy despite having only 10% of the trainable parametersof our `BertClassifier` model. Each training step takes about 1/3 of the time---evenaccounting for cached preprocessing. Pretraining a backbone modelDo you have access to large unlabeled datasets in your domain? Are they around thesame size as used to train popular backbones such as BERT, RoBERTa, or GPT2 (XX+ GiB)? Ifso, you might benefit from domain-specific pretraining of your own backbone models.NLP models are generally pretrained on a language modeling task, predicting masked wordsgiven the visible words in an input sentence. For example, given the input`"The fox [MASK] over the [MASK] dog"`, the model might be asked to predict `["jumped", "lazy"]`.The lower layers of this model are then packaged as a **backbone** to be combined withlayers relating to a new task.The KerasNLP library offers SoTA **backbones** and **tokenizers** to be trained fromscratch without presets.In this workflow, we pretrain a BERT **backbone** using our IMDB review text. We skip the"next sentence prediction" (NSP) loss because it adds significant complexity to the dataprocessing and was dropped by later models like RoBERTa. See our e2e[Transformer pretraining](https://keras.io/guides/keras_nlp/transformer_pretraining/pretraining)for step-by-step details on how to replicate the original paper. Preprocessing<jupyter_code># All BERT `en` models have the same vocabulary, so reuse preprocessor from
# "bert_tiny_en_uncased"
preprocessor = keras_nlp.models.BertPreprocessor.from_preset(
"bert_tiny_en_uncased",
sequence_length=256,
)
packer = preprocessor.packer
tokenizer = preprocessor.tokenizer
# keras.Layer to replace some input tokens with the "[MASK]" token
masker = keras_nlp.layers.MaskedLMMaskGenerator(
vocabulary_size=tokenizer.vocabulary_size(),
mask_selection_rate=0.25,
mask_selection_length=64,
mask_token_id=tokenizer.token_to_id("[MASK]"),
unselectable_token_ids=[
tokenizer.token_to_id(x) for x in ["[CLS]", "[PAD]", "[SEP]"]
],
)
def preprocess(inputs, label):
inputs = preprocessor(inputs)
masked_inputs = masker(inputs["token_ids"])
# Split the masking layer outputs into a (features, labels, and weights)
# tuple that we can use with keras.Model.fit().
features = {
"token_ids": masked_inputs["token_ids"],
"segment_ids": inputs["segment_ids"],
"padding_mask": inputs["padding_mask"],
"mask_positions": masked_inputs["mask_positions"],
}
labels = masked_inputs["mask_ids"]
weights = masked_inputs["mask_weights"]
return features, labels, weights
pretrain_ds = imdb_train.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE).prefetch(
tf.data.AUTOTUNE
)
pretrain_val_ds = imdb_test.map(
preprocess, num_parallel_calls=tf.data.AUTOTUNE
).prefetch(tf.data.AUTOTUNE)
# Tokens with ID 103 are "masked"
print(pretrain_ds.unbatch().take(1).get_single_element())<jupyter_output><empty_output><jupyter_text>Pretraining model<jupyter_code># BERT backbone
backbone = keras_nlp.models.BertBackbone(
vocabulary_size=tokenizer.vocabulary_size(),
num_layers=2,
num_heads=2,
hidden_dim=128,
intermediate_dim=512,
)
# Language modeling head
mlm_head = keras_nlp.layers.MaskedLMHead(
token_embedding=backbone.token_embedding,
)
inputs = {
"token_ids": keras.Input(shape=(None,), dtype=tf.int32, name="token_ids"),
"segment_ids": keras.Input(shape=(None,), dtype=tf.int32, name="segment_ids"),
"padding_mask": keras.Input(shape=(None,), dtype=tf.int32, name="padding_mask"),
"mask_positions": keras.Input(shape=(None,), dtype=tf.int32, name="mask_positions"),
}
# Encoded token sequence
sequence = backbone(inputs)["sequence_output"]
# Predict an output word for each masked input token.
# We use the input token embedding to project from our encoded vectors to
# vocabulary logits, which has been shown to improve training efficiency.
outputs = mlm_head(sequence, mask_positions=inputs["mask_positions"])
# Define and compile our pretraining model.
pretraining_model = keras.Model(inputs, outputs)
pretraining_model.summary()
pretraining_model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.AdamW(learning_rate=5e-4),
weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
jit_compile=True,
)
# Pretrain on IMDB dataset
pretraining_model.fit(
pretrain_ds,
validation_data=pretrain_val_ds,
epochs=3, # Increase to 6 for higher accuracy
)<jupyter_output><empty_output><jupyter_text>After pretraining save your `backbone` submodel to use in a new task! Build and train your own transformer from scratchWant to implement a novel transformer architecture? The KerasNLP library offers all thelow-level modules used to build SoTA architectures in our `models` API. This includes the`keras_nlp.tokenizers` API which allows you to train your own subword tokenizer using`WordPieceTokenizer`, `BytePairTokenizer`, or `SentencePieceTokenizer`.In this workflow, we train a custom tokenizer on the IMDB data and design a backbone withcustom transformer architecture. For simplicity, we then train directly on theclassification task. Interested in more details? We wrote an entire guide to pretrainingand finetuning a custom transformer on[keras.io](https://keras.io/guides/keras_nlp/transformer_pretraining/), Train custom vocabulary from IMDB data<jupyter_code>vocab = keras_nlp.tokenizers.compute_word_piece_vocabulary(
imdb_train.map(lambda x, y: x),
vocabulary_size=20_000,
lowercase=True,
strip_accents=True,
reserved_tokens=["[PAD]", "[START]", "[END]", "[MASK]", "[UNK]"],
)
tokenizer = keras_nlp.tokenizers.WordPieceTokenizer(
vocabulary=vocab,
lowercase=True,
strip_accents=True,
oov_token="[UNK]",
)<jupyter_output><empty_output><jupyter_text>Preprocess data with a custom tokenizer<jupyter_code>packer = keras_nlp.layers.StartEndPacker(
start_value=tokenizer.token_to_id("[START]"),
end_value=tokenizer.token_to_id("[END]"),
pad_value=tokenizer.token_to_id("[PAD]"),
sequence_length=512,
)
def preprocess(x, y):
token_ids = packer(tokenizer(x))
return token_ids, y
imdb_preproc_train_ds = imdb_train.map(
preprocess, num_parallel_calls=tf.data.AUTOTUNE
).prefetch(tf.data.AUTOTUNE)
imdb_preproc_val_ds = imdb_test.map(
preprocess, num_parallel_calls=tf.data.AUTOTUNE
).prefetch(tf.data.AUTOTUNE)
print(imdb_preproc_train_ds.unbatch().take(1).get_single_element())<jupyter_output><empty_output><jupyter_text>Design a tiny transformer<jupyter_code>token_id_input = keras.Input(
shape=(None,),
dtype="int32",
name="token_ids",
)
outputs = keras_nlp.layers.TokenAndPositionEmbedding(
vocabulary_size=len(vocab),
sequence_length=packer.sequence_length,
embedding_dim=64,
)(token_id_input)
outputs = keras_nlp.layers.TransformerEncoder(
num_heads=2,
intermediate_dim=128,
dropout=0.1,
)(outputs)
# Use "[START]" token to classify
outputs = keras.layers.Dense(2)(outputs[:, 0, :])
model = keras.Model(
inputs=token_id_input,
outputs=outputs,
)
model.summary()<jupyter_output><empty_output><jupyter_text>Train the transformer directly on the classification objective<jupyter_code>model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.AdamW(5e-5),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
jit_compile=True,
)
model.fit(
imdb_preproc_train_ds,
validation_data=imdb_preproc_val_ds,
epochs=3,
)<jupyter_output><empty_output> | keras-io/guides/ipynb/keras_nlp/getting_started.ipynb/0 | {
"file_path": "keras-io/guides/ipynb/keras_nlp/getting_started.ipynb",
"repo_id": "keras-io",
"token_count": 7203
} | 107 |
<jupyter_start><jupyter_text>Understanding masking & padding**Authors:** Scott Zhu, Francois Chollet**Date created:** 2019/07/16**Last modified:** 2023/07/10**Description:** Complete guide to using mask-aware sequence layers in Keras. Setup<jupyter_code>import numpy as np
import tensorflow as tf
import keras
from keras import layers<jupyter_output><empty_output><jupyter_text>Introduction**Masking** is a way to tell sequence-processing layers that certain timestepsin an input are missing, and thus should be skipped when processing the data.**Padding** is a special form of masking where the masked steps are at the start orthe end of a sequence. Padding comes from the need to encode sequence data intocontiguous batches: in order to make all sequences in a batch fit a given standardlength, it is necessary to pad or truncate some sequences.Let's take a close look. Padding sequence dataWhen processing sequence data, it is very common for individual samples to havedifferent lengths. Consider the following example (text tokenized as words):```[ ["Hello", "world", "!"], ["How", "are", "you", "doing", "today"], ["The", "weather", "will", "be", "nice", "tomorrow"],]```After vocabulary lookup, the data might be vectorized as integers, e.g.:```[ [71, 1331, 4231] [73, 8, 3215, 55, 927], [83, 91, 1, 645, 1253, 927],]```The data is a nested list where individual samples have length 3, 5, and 6,respectively. Since the input data for a deep learning model must be a single tensor(of shape e.g. `(batch_size, 6, vocab_size)` in this case), samples that are shorterthan the longest item need to be padded with some placeholder value (alternatively,one might also truncate long samples before padding short samples).Keras provides a utility function to truncate and pad Python lists to a common length:`tf.keras.utils.pad_sequences`.<jupyter_code>raw_inputs = [
[711, 632, 71],
[73, 8, 3215, 55, 927],
[83, 91, 1, 645, 1253, 927],
]
# By default, this will pad using 0s; it is configurable via the
# "value" parameter.
# Note that you could use "pre" padding (at the beginning) or
# "post" padding (at the end).
# We recommend using "post" padding when working with RNN layers
# (in order to be able to use the
# CuDNN implementation of the layers).
padded_inputs = tf.keras.utils.pad_sequences(raw_inputs, padding="post")
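# As mentioned above, long samples can alternatively be truncated before
# padding. A sketch using the `maxlen` and `truncating` arguments of
# `pad_sequences`; the target length of 4 is purely illustrative.
truncated_inputs = tf.keras.utils.pad_sequences(
    raw_inputs, padding="post", maxlen=4, truncating="post"
)
print(truncated_inputs)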
print(padded_inputs)<jupyter_output><empty_output><jupyter_text>MaskingNow that all samples have a uniform length, the model must be informed that some partof the data is actually padding and should be ignored. That mechanism is **masking**.There are three ways to introduce input masks in Keras models:- Add a `keras.layers.Masking` layer.- Configure a `keras.layers.Embedding` layer with `mask_zero=True`.- Pass a `mask` argument manually when calling layers that support this argument (e.g.RNN layers). Mask-generating layers: `Embedding` and `Masking`Under the hood, these layers will create a mask tensor (2D tensor with shape `(batch,sequence_length)`), and attach it to the tensor output returned by the `Masking` or`Embedding` layer.<jupyter_code>embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
masked_output = embedding(padded_inputs)
print(masked_output._keras_mask)
masking_layer = layers.Masking()
# Simulate the embedding lookup by expanding the 2D input to 3D,
# with embedding dimension of 10.
unmasked_embedding = tf.cast(
tf.tile(tf.expand_dims(padded_inputs, axis=-1), [1, 1, 10]), tf.float32
)
masked_embedding = masking_layer(unmasked_embedding)
print(masked_embedding._keras_mask)<jupyter_output><empty_output><jupyter_text>As you can see from the printed result, the mask is a 2D boolean tensor with shape`(batch_size, sequence_length)`, where each individual `False` entry indicates thatthe corresponding timestep should be ignored during processing. Mask propagation in the Functional API and Sequential APIWhen using the Functional API or the Sequential API, a mask generated by an `Embedding`or `Masking` layer will be propagated through the network for any layer that iscapable of using them (for example, RNN layers). Keras will automatically fetch themask corresponding to an input and pass it to any layer that knows how to use it.For instance, in the following Sequential model, the `LSTM` layer will automaticallyreceive a mask, which means it will ignore padded values:<jupyter_code>model = keras.Sequential(
[
layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True),
layers.LSTM(32),
]
)<jupyter_output><empty_output><jupyter_text>This is also the case for the following Functional API model:<jupyter_code>inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
outputs = layers.LSTM(32)(x)
model = keras.Model(inputs, outputs)<jupyter_output><empty_output><jupyter_text>Passing mask tensors directly to layers Layers that can handle masks (such as the `LSTM` layer) have a `mask` argument in their`__call__` method.Meanwhile, layers that produce a mask (e.g. `Embedding`) expose a `compute_mask(input,previous_mask)` method which you can call.Thus, you can pass the output of the `compute_mask()` method of a mask-producing layerto the `__call__` method of a mask-consuming layer, like this:<jupyter_code>class MyLayer(layers.Layer):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.embedding = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)
self.lstm = layers.LSTM(32)
def call(self, inputs):
x = self.embedding(inputs)
# Note that you could also prepare a `mask` tensor manually.
# It only needs to be a boolean tensor
# with the right shape, i.e. (batch_size, timesteps).
mask = self.embedding.compute_mask(inputs)
output = self.lstm(x, mask=mask) # The layer will ignore the masked values
return output
layer = MyLayer()
x = np.random.random((32, 10)) * 100
x = x.astype("int32")
layer(x)<jupyter_output><empty_output><jupyter_text>Supporting masking in your custom layers Sometimes, you may need to write layers that generate a mask (like `Embedding`), orlayers that need to modify the current mask.For instance, any layer that produces a tensor with a different time dimension than itsinput, such as a `Concatenate` layer that concatenates on the time dimension, willneed to modify the current mask so that downstream layers will be able to properlytake masked timesteps into account.To do this, your layer should implement the `layer.compute_mask()` method, whichproduces a new mask given the input and the current mask.Here is an example of a `TemporalSplit` layer that needs to modify the current mask.<jupyter_code>class TemporalSplit(keras.layers.Layer):
"""Split the input tensor into 2 tensors along the time dimension."""
def call(self, inputs):
# Expect the input to be 3D and mask to be 2D, split the input tensor into 2
# subtensors along the time axis (axis 1).
return tf.split(inputs, 2, axis=1)
def compute_mask(self, inputs, mask=None):
# Also split the mask into 2 if it presents.
if mask is None:
return None
return tf.split(mask, 2, axis=1)
first_half, second_half = TemporalSplit()(masked_embedding)
print(first_half._keras_mask)
print(second_half._keras_mask)<jupyter_output><empty_output><jupyter_text>Here is another example of a `CustomEmbedding` layer that is capable of generating amask from input values:<jupyter_code>class CustomEmbedding(keras.layers.Layer):
def __init__(self, input_dim, output_dim, mask_zero=False, **kwargs):
super().__init__(**kwargs)
self.input_dim = input_dim
self.output_dim = output_dim
self.mask_zero = mask_zero
def build(self, input_shape):
self.embeddings = self.add_weight(
shape=(self.input_dim, self.output_dim),
initializer="random_normal",
dtype="float32",
)
def call(self, inputs):
return tf.nn.embedding_lookup(self.embeddings, inputs)
def compute_mask(self, inputs, mask=None):
if not self.mask_zero:
return None
return tf.not_equal(inputs, 0)
layer = CustomEmbedding(10, 32, mask_zero=True)
x = np.random.random((3, 10)) * 9
x = x.astype("int32")
y = layer(x)
mask = layer.compute_mask(x)
print(mask)<jupyter_output><empty_output><jupyter_text>Note: For more details about format limitations related to masking, see the[serialization guide](/guides/serialization_and_saving). Opting-in to mask propagation on compatible layersMost layers don't modify the time dimension, so don't need to modify the current mask.However, they may still want to be able to **propagate** the current mask, unchanged,to the next layer. **This is an opt-in behavior.** By default, a custom layer willdestroy the current mask (since the framework has no way to tell whether propagatingthe mask is safe to do).If you have a custom layer that does not modify the time dimension, and if you want itto be able to propagate the current input mask, you should set `self.supports_masking= True` in the layer constructor. In this case, the default behavior of`compute_mask()` is to just pass the current mask through.Here's an example of a layer that is whitelisted for mask propagation:<jupyter_code>@keras.saving.register_keras_serializable()
class MyActivation(keras.layers.Layer):
def __init__(self, **kwargs):
super().__init__(**kwargs)
# Signal that the layer is safe for mask propagation
self.supports_masking = True
def call(self, inputs):
return tf.nn.relu(inputs)<jupyter_output><empty_output><jupyter_text>You can now use this custom layer in-between a mask-generating layer (like `Embedding`)and a mask-consuming layer (like `LSTM`), and it will pass the mask along so that itreaches the mask-consuming layer.<jupyter_code>inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=5000, output_dim=16, mask_zero=True)(inputs)
x = MyActivation()(x) # Will pass the mask along
print("Mask found:", x._keras_mask)
outputs = layers.LSTM(32)(x) # Will receive the mask
model = keras.Model(inputs, outputs)<jupyter_output><empty_output><jupyter_text>Writing layers that need mask informationSome layers are mask *consumers*: they accept a `mask` argument in `call` and use it todetermine whether to skip certain time steps.To write such a layer, you can simply add a `mask=None` argument in your `call`signature. The mask associated with the inputs will be passed to your layer wheneverit is available.Here's a simple example below: a layer that computes a softmax over the time dimension(axis 1) of an input sequence, while discarding masked timesteps.<jupyter_code>@keras.saving.register_keras_serializable()
class TemporalSoftmax(keras.layers.Layer):
def call(self, inputs, mask=None):
broadcast_float_mask = tf.expand_dims(tf.cast(mask, "float32"), -1)
inputs_exp = tf.exp(inputs) * broadcast_float_mask
inputs_sum = tf.reduce_sum(
inputs_exp * broadcast_float_mask, axis=-1, keepdims=True
)
return inputs_exp / inputs_sum
inputs = keras.Input(shape=(None,), dtype="int32")
x = layers.Embedding(input_dim=10, output_dim=32, mask_zero=True)(inputs)
x = layers.Dense(1)(x)
outputs = TemporalSoftmax()(x)
model = keras.Model(inputs, outputs)
y = model(np.random.randint(0, 10, size=(32, 100)))<jupyter_output><empty_output>
"file_path": "keras-io/guides/ipynb/understanding_masking_and_padding.ipynb",
"repo_id": "keras-io",
"token_count": 3805
} | 108 |
"""
Title: Getting Started with KerasNLP
Author: [Jonathan Bischof](https://github.com/jbischof)
Date created: 2022/12/15
Last modified: 2023/07/01
Description: An introduction to the KerasNLP API.
Accelerator: GPU
"""
"""
## Introduction
KerasNLP is a natural language processing library that supports users through
their entire development cycle. Our workflows are built from modular components
that have state-of-the-art preset weights and architectures when used
out-of-the-box and are easily customizable when more control is needed.
This library is an extension of the core Keras API; all high-level modules are
[`Layers`](/api/layers/) or [`Models`](/api/models/). If you are familiar with Keras,
congratulations! You already understand most of KerasNLP.
KerasNLP uses Keras 3 to work with any of TensorFlow, PyTorch, and JAX. In the
guide below, we will use the `jax` backend for training our models, and
[tf.data](https://www.tensorflow.org/guide/data) for efficiently running our
input preprocessing. But feel free to mix things up! This guide runs in
TensorFlow or PyTorch backends with zero changes, simply update the
`KERAS_BACKEND` below.
This guide demonstrates our modular approach using a sentiment analysis example at six
levels of complexity:
* Inference with a pretrained classifier
* Fine tuning a pretrained backbone
* Fine tuning with user-controlled preprocessing
* Fine tuning a custom model
* Pretraining a backbone model
* Build and train your own transformer from scratch
Throughout our guide, we use Professor Keras, the official Keras mascot, as a visual
reference for the complexity of the material:
<img src="https://storage.googleapis.com/keras-nlp/getting_started_guide/prof_keras_evolution.png" alt="drawing" height="250"/>
"""
"""shell
pip install -q --upgrade keras-nlp
pip install -q --upgrade keras # Upgrade to Keras 3.
"""
import os
os.environ["KERAS_BACKEND"] = "jax" # or "tensorflow" or "torch"
import keras_nlp
import keras
# Use mixed precision to speed up all training in this guide.
keras.mixed_precision.set_global_policy("mixed_float16")
"""
## API quickstart
Our highest level API is `keras_nlp.models`. These symbols cover the complete user
journey of converting strings to tokens, tokens to dense features, and dense features to
task-specific output. For each `XX` architecture (e.g., `Bert`), we offer the following
modules:
* **Tokenizer**: `keras_nlp.models.XXTokenizer`
* **What it does**: Converts strings to sequences of token ids.
* **Why it's important**: The raw bytes of a string are too high dimensional to be useful
features so we first map them to a small number of tokens, for example `"The quick brown
fox"` to `["the", "qu", "##ick", "br", "##own", "fox"]`.
* **Inherits from**: `keras.layers.Layer`.
* **Preprocessor**: `keras_nlp.models.XXPreprocessor`
* **What it does**: Converts strings to a dictionary of preprocessed tensors consumed by
the backbone, starting with tokenization.
* **Why it's important**: Each model uses special tokens and extra tensors to understand
the input such as delimiting input segments and identifying padding tokens. Padding each
sequence to the same length improves computational efficiency.
* **Has a**: `XXTokenizer`.
* **Inherits from**: `keras.layers.Layer`.
* **Backbone**: `keras_nlp.models.XXBackbone`
* **What it does**: Converts preprocessed tensors to dense features. *Does not handle
strings; call the preprocessor first.*
* **Why it's important**: The backbone distills the input tokens into dense features that
can be used in downstream tasks. It is generally pretrained on a language modeling task
using massive amounts of unlabeled data. Transferring this information to a new task is a
major breakthrough in modern NLP.
* **Inherits from**: `keras.Model`.
* **Task**: e.g., `keras_nlp.models.XXClassifier`
* **What it does**: Converts strings to task-specific output (e.g., classification
probabilities).
* **Why it's important**: Task models combine string preprocessing and the backbone model
with task-specific `Layers` to solve a problem such as sentence classification, token
classification, or text generation. The additional `Layers` must be fine-tuned on labeled
data.
* **Has a**: `XXBackbone` and `XXPreprocessor`.
* **Inherits from**: `keras.Model`.
Here is the modular hierarchy for `BertClassifier` (all relationships are compositional):
<img src="https://storage.googleapis.com/keras-nlp/getting_started_guide/class_diagram.png" alt="drawing" height="300"/>
All modules can be used independently and have a `from_preset()` method in addition to
the standard constructor that instantiates the class with **preset** architecture and
weights (see examples below).
"""
"""
## Data
We will use a running example of sentiment analysis of IMDB movie reviews. In this task,
we use the text to predict whether the review was positive (`label = 1`) or negative
(`label = 0`).
We load the data using `keras.utils.text_dataset_from_directory`, which utilizes the
powerful `tf.data.Dataset` format for examples.
"""
"""shell
curl -O https://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz
tar -xf aclImdb_v1.tar.gz
# Remove unsupervised examples
rm -r aclImdb/train/unsup
"""
BATCH_SIZE = 16
imdb_train = keras.utils.text_dataset_from_directory(
"aclImdb/train",
batch_size=BATCH_SIZE,
)
imdb_test = keras.utils.text_dataset_from_directory(
"aclImdb/test",
batch_size=BATCH_SIZE,
)
# Inspect first review
# Format is (review text tensor, label tensor)
print(imdb_train.unbatch().take(1).get_single_element())
"""
## Inference with a pretrained classifier
<img src="https://storage.googleapis.com/keras-nlp/getting_started_guide/prof_keras_beginner.png" alt="drawing" height="250"/>
The highest level module in KerasNLP is a **task**. A **task** is a `keras.Model`
consisting of a (generally pretrained) **backbone** model and task-specific layers.
Here's an example using `keras_nlp.models.BertClassifier`.
**Note**: Outputs are the raw logits per class (e.g., `[0, 0]` means a 50% chance of
positive). For binary classification, the output order is [negative, positive].
"""
classifier = keras_nlp.models.BertClassifier.from_preset("bert_tiny_en_uncased_sst2")
# Note: batched inputs are expected, so we must wrap the string in an iterable
classifier.predict(["I love modular workflows in keras-nlp!"])
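"""
Since the outputs are logits, we can convert them to class probabilities with a
softmax. A small sketch (assumes the Keras 3 `keras.ops` namespace is available):
"""

scores = classifier.predict(["I love modular workflows in keras-nlp!"])
print(keras.ops.softmax(scores))  # probabilities for [negative, positive]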
"""
All **tasks** have a `from_preset` method that constructs a `keras.Model` instance with
preset preprocessing, architecture and weights. This means that we can pass raw strings
in any format accepted by a `keras.Model` and get output specific to our task.
This particular **preset** is a `"bert_tiny_en_uncased"` **backbone** fine-tuned on
`sst2`, another movie review sentiment analysis dataset (this time from Rotten Tomatoes). We use
the `tiny` architecture for demo purposes, but larger models are recommended for SoTA
performance. For all the task-specific presets available for `BertClassifier`, see
our keras.io [models page](https://keras.io/api/keras_nlp/models/).
Let's evaluate our classifier on the IMDB dataset. You will note we don't need to
call `keras.Model.compile` here. All **task** models like `BertClassifier` ship with
compilation defaults, meaning we can just call `keras.Model.evaluate` directly. You
can always call compile as normal to override these defaults (e.g. to add new metrics).
The output below is [loss, accuracy].
"""
classifier.evaluate(imdb_test)
"""
Our result is 78% accuracy without training anything. Not bad!
"""
"""
## Fine tuning a pretrained BERT backbone
<img src="https://storage.googleapis.com/keras-nlp/getting_started_guide/prof_keras_intermediate.png" alt="drawing" height="250"/>
When labeled text specific to our task is available, fine-tuning a custom classifier can
improve performance. If we want to predict IMDB review sentiment, using IMDB data should
perform better than Rotten Tomatoes data! And for many tasks, no relevant pretrained model
will be available (e.g., categorizing customer reviews).
The workflow for fine-tuning is almost identical to above, except that we request a
**preset** for the **backbone**-only model rather than the entire classifier. When passed
a **backbone** **preset**, a **task** `Model` will randomly initialize all task-specific
layers in preparation for training. For all the **backbone** presets available for
`BertClassifier`, see our keras.io [models page](https://keras.io/api/keras_nlp/models/).
To train your classifier, use `keras.Model.fit` as with any other
`keras.Model`. As with our inference example, we can rely on the compilation
defaults for the **task** and skip `keras.Model.compile`. As preprocessing is
included, we again pass the raw data.
"""
classifier = keras_nlp.models.BertClassifier.from_preset(
"bert_tiny_en_uncased",
num_classes=2,
)
classifier.fit(
imdb_train,
validation_data=imdb_test,
epochs=1,
)
"""
Here we see a significant lift in validation accuracy (0.78 -> 0.87) with a single epoch of
training even though the IMDB dataset is much smaller than `sst2`.
"""
"""
## Fine tuning with user-controlled preprocessing
<img src="https://storage.googleapis.com/keras-nlp/getting_started_guide/prof_keras_advanced.png" alt="drawing" height="250"/>
For some advanced training scenarios, users might prefer direct control over
preprocessing. For large datasets, examples can be preprocessed in advance and saved to
disk or preprocessed by a separate worker pool using `tf.data.experimental.service`. In
other cases, custom preprocessing is needed to handle the inputs.
Pass `preprocessor=None` to the constructor of a **task** `Model` to skip automatic
preprocessing or pass a custom `BertPreprocessor` instead.
"""
"""
### Separate preprocessing from the same preset
Each model architecture has a parallel **preprocessor** `Layer` with its own
`from_preset` constructor. Using the same **preset** for this `Layer` will return the
matching **preprocessor** as the **task**.
In this workflow we train the model over three epochs using `tf.data.Dataset.cache()`,
which computes the preprocessing once and caches the result before fitting begins.
**Note:** we can use `tf.data` for preprocessing while running on the
JAX or PyTorch backend. The input dataset will automatically be converted to
backend native tensor types during fit. In fact, given the efficiency of `tf.data`
for running preprocessing, this is good practice on all backends.
"""
import tensorflow as tf
preprocessor = keras_nlp.models.BertPreprocessor.from_preset(
"bert_tiny_en_uncased",
sequence_length=512,
)
# Apply the preprocessor to every sample of train and test data using `map()`.
# `tf.data.AUTOTUNE` and `prefetch()` are options to tune performance, see
# https://www.tensorflow.org/guide/data_performance for details.
# Note: only call `cache()` if your training data fits in CPU memory!
imdb_train_cached = (
imdb_train.map(preprocessor, tf.data.AUTOTUNE).cache().prefetch(tf.data.AUTOTUNE)
)
imdb_test_cached = (
imdb_test.map(preprocessor, tf.data.AUTOTUNE).cache().prefetch(tf.data.AUTOTUNE)
)
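# As mentioned above, for very large corpora the preprocessed examples could
# instead be written to disk once and reloaded in later runs. A sketch (the
# directory name is illustrative; `tf.data.Dataset.save`/`load` require TF >= 2.7):
# imdb_train.map(preprocessor, tf.data.AUTOTUNE).save("imdb_train_preprocessed")
# imdb_train_cached = tf.data.Dataset.load("imdb_train_preprocessed")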
classifier = keras_nlp.models.BertClassifier.from_preset(
"bert_tiny_en_uncased", preprocessor=None, num_classes=2
)
classifier.fit(
imdb_train_cached,
validation_data=imdb_test_cached,
epochs=3,
)
"""
After three epochs, our validation accuracy has only increased to 0.88. This is both a
function of the small size of our dataset and our model. To exceed 90% accuracy, try
larger **presets** such as `"bert_base_en_uncased"`. For all the **backbone** presets
available for `BertClassifier`, see our keras.io [models page](https://keras.io/api/keras_nlp/models/).
"""
"""
### Custom preprocessing
In cases where custom preprocessing is required, we offer direct access to the
`Tokenizer` class that maps raw strings to tokens. It also has a `from_preset()`
constructor to get the vocabulary matching pretraining.
**Note:** `BertTokenizer` does not pad sequences by default, so the output is
ragged (each sequence has varying length). The `MultiSegmentPacker` below
handles padding these ragged sequences to dense tensor types (e.g. `tf.Tensor`
or `torch.Tensor`).
"""
tokenizer = keras_nlp.models.BertTokenizer.from_preset("bert_tiny_en_uncased")
tokenizer(["I love modular workflows!", "Libraries over frameworks!"])
# Write your own packer or use one of our `Layers`
packer = keras_nlp.layers.MultiSegmentPacker(
start_value=tokenizer.cls_token_id,
end_value=tokenizer.sep_token_id,
# Note: This cannot be longer than the preset's `sequence_length`, and there
# is no check for a custom preprocessor!
sequence_length=64,
)
# This function takes a text sample `x` and its
# corresponding label `y` as input and converts the
# text into a format suitable for input to a BERT model.
def preprocessor(x, y):
token_ids, segment_ids = packer(tokenizer(x))
x = {
"token_ids": token_ids,
"segment_ids": segment_ids,
"padding_mask": token_ids != 0,
}
return x, y
imdb_train_preprocessed = imdb_train.map(preprocessor, tf.data.AUTOTUNE).prefetch(
tf.data.AUTOTUNE
)
imdb_test_preprocessed = imdb_test.map(preprocessor, tf.data.AUTOTUNE).prefetch(
tf.data.AUTOTUNE
)
# Preprocessed example
print(imdb_train_preprocessed.unbatch().take(1).get_single_element())
"""
## Fine tuning with a custom model
<img src="https://storage.googleapis.com/keras-nlp/getting_started_guide/prof_keras_advanced.png" alt="drawing" height="250"/>
For more advanced applications, an appropriate **task** `Model` may not be available. In
this case, we provide direct access to the **backbone** `Model`, which has its own
`from_preset` constructor and can be composed with custom `Layer`s. Detailed examples can
be found at our [transfer learning guide](https://keras.io/guides/transfer_learning/).
A **backbone** `Model` does not include automatic preprocessing but can be paired with a
matching **preprocessor** using the same **preset** as shown in the previous workflow.
In this workflow, we experiment with freezing our backbone model and adding two trainable
transformer layers to adapt to the new input.
**Note**: We can ignore the warning about gradients for the `pooled_dense` layer because
we are using BERT's sequence output.
"""
preprocessor = keras_nlp.models.BertPreprocessor.from_preset("bert_tiny_en_uncased")
backbone = keras_nlp.models.BertBackbone.from_preset("bert_tiny_en_uncased")
imdb_train_preprocessed = (
imdb_train.map(preprocessor, tf.data.AUTOTUNE).cache().prefetch(tf.data.AUTOTUNE)
)
imdb_test_preprocessed = (
imdb_test.map(preprocessor, tf.data.AUTOTUNE).cache().prefetch(tf.data.AUTOTUNE)
)
backbone.trainable = False
inputs = backbone.input
sequence = backbone(inputs)["sequence_output"]
for _ in range(2):
sequence = keras_nlp.layers.TransformerEncoder(
num_heads=2,
intermediate_dim=512,
dropout=0.1,
)(sequence)
# Use [CLS] token output to classify
outputs = keras.layers.Dense(2)(sequence[:, backbone.cls_token_index, :])
model = keras.Model(inputs, outputs)
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.AdamW(5e-5),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
jit_compile=True,
)
model.summary()
model.fit(
imdb_train_preprocessed,
validation_data=imdb_test_preprocessed,
epochs=3,
)
"""
This model achieves reasonable accuracy despite having only 10% of the trainable parameters
of our `BertClassifier` model. Each training step takes about 1/3 of the time---even
accounting for cached preprocessing.
"""
"""
## Pretraining a backbone model
<img src="https://storage.googleapis.com/keras-nlp/getting_started_guide/prof_keras_expert.png" alt="drawing" height="250"/>
Do you have access to large unlabeled datasets in your domain? Are they around the
same size as those used to train popular backbones such as BERT, RoBERTa, or GPT2 (XX+ GiB)? If
so, you might benefit from domain-specific pretraining of your own backbone models.
NLP models are generally pretrained on a language modeling task, predicting masked words
given the visible words in an input sentence. For example, given the input
`"The fox [MASK] over the [MASK] dog"`, the model might be asked to predict `["jumped", "lazy"]`.
The lower layers of this model are then packaged as a **backbone** to be combined with
layers relating to a new task.
The KerasNLP library offers SoTA **backbones** and **tokenizers** to be trained from
scratch without presets.
In this workflow, we pretrain a BERT **backbone** using our IMDB review text. We skip the
"next sentence prediction" (NSP) loss because it adds significant complexity to the data
processing and was dropped by later models like RoBERTa. See our end-to-end
[Transformer pretraining](https://keras.io/guides/keras_nlp/transformer_pretraining/#pretraining)
for step-by-step details on how to replicate the original paper.
"""
"""
### Preprocessing
"""
# All BERT `en` models have the same vocabulary, so reuse preprocessor from
# "bert_tiny_en_uncased"
preprocessor = keras_nlp.models.BertPreprocessor.from_preset(
"bert_tiny_en_uncased",
sequence_length=256,
)
packer = preprocessor.packer
tokenizer = preprocessor.tokenizer
# keras.Layer to replace some input tokens with the "[MASK]" token
masker = keras_nlp.layers.MaskedLMMaskGenerator(
vocabulary_size=tokenizer.vocabulary_size(),
mask_selection_rate=0.25,
mask_selection_length=64,
mask_token_id=tokenizer.token_to_id("[MASK]"),
unselectable_token_ids=[
tokenizer.token_to_id(x) for x in ["[CLS]", "[PAD]", "[SEP]"]
],
)
def preprocess(inputs, label):
inputs = preprocessor(inputs)
masked_inputs = masker(inputs["token_ids"])
# Split the masking layer outputs into a (features, labels, weights)
# tuple that we can use with keras.Model.fit().
features = {
"token_ids": masked_inputs["token_ids"],
"segment_ids": inputs["segment_ids"],
"padding_mask": inputs["padding_mask"],
"mask_positions": masked_inputs["mask_positions"],
}
labels = masked_inputs["mask_ids"]
weights = masked_inputs["mask_weights"]
return features, labels, weights
pretrain_ds = imdb_train.map(preprocess, num_parallel_calls=tf.data.AUTOTUNE).prefetch(
tf.data.AUTOTUNE
)
pretrain_val_ds = imdb_test.map(
preprocess, num_parallel_calls=tf.data.AUTOTUNE
).prefetch(tf.data.AUTOTUNE)
# Tokens with ID 103 are "masked"
print(pretrain_ds.unbatch().take(1).get_single_element())
"""
### Pretraining model
"""
# BERT backbone
backbone = keras_nlp.models.BertBackbone(
vocabulary_size=tokenizer.vocabulary_size(),
num_layers=2,
num_heads=2,
hidden_dim=128,
intermediate_dim=512,
)
# Language modeling head
mlm_head = keras_nlp.layers.MaskedLMHead(
token_embedding=backbone.token_embedding,
)
inputs = {
"token_ids": keras.Input(shape=(None,), dtype=tf.int32, name="token_ids"),
"segment_ids": keras.Input(shape=(None,), dtype=tf.int32, name="segment_ids"),
"padding_mask": keras.Input(shape=(None,), dtype=tf.int32, name="padding_mask"),
"mask_positions": keras.Input(shape=(None,), dtype=tf.int32, name="mask_positions"),
}
# Encoded token sequence
sequence = backbone(inputs)["sequence_output"]
# Predict an output word for each masked input token.
# We use the input token embedding to project from our encoded vectors to
# vocabulary logits, which has been shown to improve training efficiency.
outputs = mlm_head(sequence, mask_positions=inputs["mask_positions"])
# Define and compile our pretraining model.
pretraining_model = keras.Model(inputs, outputs)
pretraining_model.summary()
pretraining_model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.AdamW(learning_rate=5e-4),
weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
jit_compile=True,
)
# Pretrain on IMDB dataset
pretraining_model.fit(
pretrain_ds,
validation_data=pretrain_val_ds,
epochs=3, # Increase to 6 for higher accuracy
)
"""
After pretraining, save your `backbone` submodel to use in a new task!
"""
"""
## Build and train your own transformer from scratch
<img src="https://storage.googleapis.com/keras-nlp/getting_started_guide/prof_keras_expert.png" alt="drawing" height="250"/>
Want to implement a novel transformer architecture? The KerasNLP library offers all the
low-level modules used to build SoTA architectures in our `models` API. This includes the
`keras_nlp.tokenizers` API which allows you to train your own subword tokenizer using
`WordPieceTokenizer`, `BytePairTokenizer`, or `SentencePieceTokenizer`.
In this workflow, we train a custom tokenizer on the IMDB data and design a backbone with
custom transformer architecture. For simplicity, we then train directly on the
classification task. Interested in more details? We wrote an entire guide to pretraining
and finetuning a custom transformer on
[keras.io](https://keras.io/guides/keras_nlp/transformer_pretraining/).
"""
"""
### Train custom vocabulary from IMDB data
"""
vocab = keras_nlp.tokenizers.compute_word_piece_vocabulary(
imdb_train.map(lambda x, y: x),
vocabulary_size=20_000,
lowercase=True,
strip_accents=True,
reserved_tokens=["[PAD]", "[START]", "[END]", "[MASK]", "[UNK]"],
)
tokenizer = keras_nlp.tokenizers.WordPieceTokenizer(
vocabulary=vocab,
lowercase=True,
strip_accents=True,
oov_token="[UNK]",
)
"""
### Preprocess data with a custom tokenizer
"""
packer = keras_nlp.layers.StartEndPacker(
start_value=tokenizer.token_to_id("[START]"),
end_value=tokenizer.token_to_id("[END]"),
pad_value=tokenizer.token_to_id("[PAD]"),
sequence_length=512,
)
def preprocess(x, y):
token_ids = packer(tokenizer(x))
return token_ids, y
imdb_preproc_train_ds = imdb_train.map(
preprocess, num_parallel_calls=tf.data.AUTOTUNE
).prefetch(tf.data.AUTOTUNE)
imdb_preproc_val_ds = imdb_test.map(
preprocess, num_parallel_calls=tf.data.AUTOTUNE
).prefetch(tf.data.AUTOTUNE)
print(imdb_preproc_train_ds.unbatch().take(1).get_single_element())
"""
### Design a tiny transformer
"""
token_id_input = keras.Input(
shape=(None,),
dtype="int32",
name="token_ids",
)
outputs = keras_nlp.layers.TokenAndPositionEmbedding(
vocabulary_size=len(vocab),
sequence_length=packer.sequence_length,
embedding_dim=64,
)(token_id_input)
outputs = keras_nlp.layers.TransformerEncoder(
num_heads=2,
intermediate_dim=128,
dropout=0.1,
)(outputs)
# Use "[START]" token to classify
outputs = keras.layers.Dense(2)(outputs[:, 0, :])
model = keras.Model(
inputs=token_id_input,
outputs=outputs,
)
model.summary()
"""
### Train the transformer directly on the classification objective
"""
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.AdamW(5e-5),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
jit_compile=True,
)
model.fit(
imdb_preproc_train_ds,
validation_data=imdb_preproc_val_ds,
epochs=3,
)
"""
Excitingly, our custom classifier performs similarly to fine-tuning
`"bert_tiny_en_uncased"`! To see the advantages of pretraining and exceed 90% accuracy, we
would need to use larger **presets** such as `"bert_base_en_uncased"`.
"""
| keras-io/guides/keras_nlp/getting_started.py/0 | {
"file_path": "keras-io/guides/keras_nlp/getting_started.py",
"repo_id": "keras-io",
"token_count": 7649
} | 109 |
# Pretraining a Transformer from scratch with KerasNLP
**Author:** [Matthew Watson](https://github.com/mattdangerw/)<br>
**Date created:** 2022/04/18<br>
**Last modified:** 2023/07/15<br>
**Description:** Use KerasNLP to train a Transformer model from scratch.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/guides/ipynb/keras_nlp/transformer_pretraining.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/guides/keras_nlp/transformer_pretraining.py)
KerasNLP aims to make it easy to build state-of-the-art text processing models. In this
guide, we will show how library components simplify pretraining and fine-tuning a
Transformer model from scratch.
This guide is broken into three parts:
1. *Setup*, task definition, and establishing a baseline.
2. *Pretraining* a Transformer model.
3. *Fine-tuning* the Transformer model on our classification task.
---
## Setup
The following guide uses Keras 3 to work in any of `tensorflow`, `jax` or
`torch`. We select the `jax` backend below, which will give us a particularly
fast train step, but feel free to mix it up.
```python
!pip install -q --upgrade keras-nlp
!pip install -q --upgrade keras # Upgrade to Keras 3.
```
```python
import os
os.environ["KERAS_BACKEND"] = "jax" # or "tensorflow" or "torch"
import keras_nlp
import tensorflow as tf
import keras
```
Next up, we can download two datasets.
- [SST-2](https://paperswithcode.com/sota/sentiment-analysis-on-sst-2-binary): a text
classification dataset and our "end goal". This dataset is often used to benchmark
language models.
- [WikiText-103](https://paperswithcode.com/dataset/wikitext-103): A medium-sized
collection of featured articles from English Wikipedia, which we will use for
pretraining.
Finally, we will download a WordPiece vocabulary to do sub-word tokenization later in
this guide.
```python
# Download pretraining data.
keras.utils.get_file(
origin="https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-raw-v1.zip",
extract=True,
)
wiki_dir = os.path.expanduser("~/.keras/datasets/wikitext-103-raw/")
# Download finetuning data.
keras.utils.get_file(
origin="https://dl.fbaipublicfiles.com/glue/data/SST-2.zip",
extract=True,
)
sst_dir = os.path.expanduser("~/.keras/datasets/SST-2/")
# Download vocabulary data.
vocab_file = keras.utils.get_file(
origin="https://storage.googleapis.com/tensorflow/keras-nlp/examples/bert/bert_vocab_uncased.txt",
)
```
Next, we define some hyperparameters we will use during training.
```python
# Preprocessing params.
PRETRAINING_BATCH_SIZE = 128
FINETUNING_BATCH_SIZE = 32
SEQ_LENGTH = 128
MASK_RATE = 0.25
PREDICTIONS_PER_SEQ = 32
# Model params.
NUM_LAYERS = 3
MODEL_DIM = 256
INTERMEDIATE_DIM = 512
NUM_HEADS = 4
DROPOUT = 0.1
NORM_EPSILON = 1e-5
# Training params.
PRETRAINING_LEARNING_RATE = 5e-4
PRETRAINING_EPOCHS = 8
FINETUNING_LEARNING_RATE = 5e-5
FINETUNING_EPOCHS = 3
```
### Load data
We load our data with [tf.data](https://www.tensorflow.org/guide/data), which will allow
us to define input pipelines for tokenizing and preprocessing text.
```python
# Load SST-2.
sst_train_ds = tf.data.experimental.CsvDataset(
sst_dir + "train.tsv", [tf.string, tf.int32], header=True, field_delim="\t"
).batch(FINETUNING_BATCH_SIZE)
sst_val_ds = tf.data.experimental.CsvDataset(
sst_dir + "dev.tsv", [tf.string, tf.int32], header=True, field_delim="\t"
).batch(FINETUNING_BATCH_SIZE)
# Load wikitext-103 and filter out short lines.
wiki_train_ds = (
tf.data.TextLineDataset(wiki_dir + "wiki.train.raw")
.filter(lambda x: tf.strings.length(x) > 100)
.batch(PRETRAINING_BATCH_SIZE)
)
wiki_val_ds = (
tf.data.TextLineDataset(wiki_dir + "wiki.valid.raw")
.filter(lambda x: tf.strings.length(x) > 100)
.batch(PRETRAINING_BATCH_SIZE)
)
# Take a peek at the SST-2 dataset.
print(sst_train_ds.unbatch().batch(4).take(1).get_single_element())
```
<div class="k-default-codeblock">
```
(<tf.Tensor: shape=(4,), dtype=string, numpy=
array([b'hide new secretions from the parental units ',
b'contains no wit , only labored gags ',
b'that loves its characters and communicates something rather beautiful about human nature ',
b'remains utterly satisfied to remain the same throughout '],
dtype=object)>, <tf.Tensor: shape=(4,), dtype=int32, numpy=array([0, 0, 1, 0], dtype=int32)>)
```
</div>
You can see that our `SST-2` dataset contains relatively short snippets of movie review
text. Our goal is to predict the sentiment of the snippet. A label of 1 indicates
positive sentiment, and a label of 0 negative sentiment.
### Establish a baseline
As a first step, we will establish a baseline of good performance. We don't actually need
KerasNLP for this; we can just use core Keras layers.
We will train a simple bag-of-words model, where we learn a positive or negative weight
for each word in our vocabulary. A sample's score is simply the sum of the weights of all
words that are present in the sample.
```python
# This layer will turn our input sentence into a list of 1s and 0s the same size as
# our vocabulary, indicating whether a word is present or absent.
multi_hot_layer = keras.layers.TextVectorization(
max_tokens=4000, output_mode="multi_hot"
)
multi_hot_layer.adapt(sst_train_ds.map(lambda x, y: x))
multi_hot_ds = sst_train_ds.map(lambda x, y: (multi_hot_layer(x), y))
multi_hot_val_ds = sst_val_ds.map(lambda x, y: (multi_hot_layer(x), y))
# We then learn a logistic regression over that layer, and that's our entire
# baseline model!
inputs = keras.Input(shape=(4000,), dtype="int32")
outputs = keras.layers.Dense(1, activation="sigmoid")(inputs)
baseline_model = keras.Model(inputs, outputs)
baseline_model.compile(loss="binary_crossentropy", metrics=["accuracy"])
baseline_model.fit(multi_hot_ds, validation_data=multi_hot_val_ds, epochs=5)
```
<div class="k-default-codeblock">
```
Epoch 1/5
2105/2105 ━━━━━━━━━━━━━━━━━━━━ 2s 698us/step - accuracy: 0.6421 - loss: 0.6469 - val_accuracy: 0.7567 - val_loss: 0.5391
Epoch 2/5
2105/2105 ━━━━━━━━━━━━━━━━━━━━ 1s 493us/step - accuracy: 0.7524 - loss: 0.5392 - val_accuracy: 0.7868 - val_loss: 0.4891
Epoch 3/5
2105/2105 ━━━━━━━━━━━━━━━━━━━━ 1s 513us/step - accuracy: 0.7832 - loss: 0.4871 - val_accuracy: 0.7991 - val_loss: 0.4671
Epoch 4/5
2105/2105 ━━━━━━━━━━━━━━━━━━━━ 1s 475us/step - accuracy: 0.7991 - loss: 0.4543 - val_accuracy: 0.8069 - val_loss: 0.4569
Epoch 5/5
2105/2105 ━━━━━━━━━━━━━━━━━━━━ 1s 476us/step - accuracy: 0.8100 - loss: 0.4313 - val_accuracy: 0.8036 - val_loss: 0.4530
<keras.src.callbacks.history.History at 0x7f13902967a0>
```
</div>
A bag-of-words approach can be fast and surprisingly powerful, especially when input
examples contain a large number of words. With shorter sequences, it can hit a
performance ceiling.
To do better, we would like to build a model that can evaluate words *in context*. Instead
of evaluating each word in a void, we need to use the information contained in the
*entire ordered sequence* of our input.
This runs us into a problem. `SST-2` is a very small dataset, and there's simply not enough
example text to attempt to build a larger, more parameterized model that can learn on a
sequence. We would quickly start to overfit and memorize our training set, without any
increase in our ability to generalize to unseen examples.
Enter **pretraining**, which will allow us to learn on a larger corpus, and transfer our
knowledge to the `SST-2` task. And enter **KerasNLP**, which will allow us to pretrain a
particularly powerful model, the Transformer, with ease.
---
## Pretraining
To beat our baseline, we will leverage the `WikiText103` dataset, an unlabeled
collection of Wikipedia articles that is much bigger than `SST-2`.
We are going to train a *transformer*, a highly expressive model which will learn
to embed each word in our input as a low-dimensional vector. Our Wikipedia dataset has no
labels, so we will use an unsupervised training objective called the *Masked Language
Modeling* (MaskedLM) objective.
Essentially, we will be playing a big game of "guess the missing word". For each input
sample we will obscure 25% of our input data, and train our model to predict the parts we
covered up.
### Preprocess data for the MaskedLM task
Our text preprocessing for the MaskedLM task will occur in two stages.
1. Tokenize input text into integer sequences of token ids.
2. Mask certain positions in our input to predict on.
To tokenize, we can use a `keras_nlp.tokenizers.Tokenizer` -- the KerasNLP building block
for transforming text into sequences of integer token ids.
In particular, we will use `keras_nlp.tokenizers.WordPieceTokenizer` which does
*sub-word* tokenization. Sub-word tokenization is popular when training models on large
text corpora. Essentially, it allows our model to learn from uncommon words, while not
requiring a massive vocabulary of every word in our training set.
The second thing we need to do is mask our input for the MaskedLM task. To do this, we can use
`keras_nlp.layers.MaskedLMMaskGenerator`, which will randomly select a set of tokens in each
input and mask them out.
The tokenizer and the masking layer can both be used inside a call to
[tf.data.Dataset.map](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#map).
We can use `tf.data` to efficiently pre-compute each batch on the CPU, while our GPU or TPU
works on training with the batch that came before. Because our masking layer will
choose new words to mask each time, each epoch over our dataset will give us a totally
new set of labels to train on.
```python
# Setting sequence_length will trim or pad the token outputs to shape
# (batch_size, SEQ_LENGTH).
tokenizer = keras_nlp.tokenizers.WordPieceTokenizer(
vocabulary=vocab_file,
sequence_length=SEQ_LENGTH,
lowercase=True,
strip_accents=True,
)
# Setting mask_selection_length will trim or pad the mask outputs to shape
# (batch_size, PREDICTIONS_PER_SEQ).
masker = keras_nlp.layers.MaskedLMMaskGenerator(
vocabulary_size=tokenizer.vocabulary_size(),
mask_selection_rate=MASK_RATE,
mask_selection_length=PREDICTIONS_PER_SEQ,
mask_token_id=tokenizer.token_to_id("[MASK]"),
)
def preprocess(inputs):
inputs = tokenizer(inputs)
outputs = masker(inputs)
    # Split the masking layer outputs into a (features, labels, weights)
    # tuple that we can use with keras.Model.fit().
features = {
"token_ids": outputs["token_ids"],
"mask_positions": outputs["mask_positions"],
}
labels = outputs["mask_ids"]
weights = outputs["mask_weights"]
return features, labels, weights
# We use prefetch() to pre-compute preprocessed batches on the fly on the CPU.
pretrain_ds = wiki_train_ds.map(
preprocess, num_parallel_calls=tf.data.AUTOTUNE
).prefetch(tf.data.AUTOTUNE)
pretrain_val_ds = wiki_val_ds.map(
preprocess, num_parallel_calls=tf.data.AUTOTUNE
).prefetch(tf.data.AUTOTUNE)
# Preview a single input example.
# The masks will change each time you run the cell.
print(pretrain_val_ds.take(1).get_single_element())
```
<div class="k-default-codeblock">
```
({'token_ids': <tf.Tensor: shape=(128, 128), dtype=int32, numpy=
array([[7570, 7849, 2271, ..., 9673, 103, 7570],
[7570, 7849, 103, ..., 1007, 1012, 2023],
[1996, 2034, 3940, ..., 0, 0, 0],
...,
[2076, 1996, 2307, ..., 0, 0, 0],
[3216, 103, 2083, ..., 0, 0, 0],
[ 103, 2007, 1045, ..., 0, 0, 0]], dtype=int32)>, 'mask_positions': <tf.Tensor: shape=(128, 32), dtype=int64, numpy=
array([[ 5, 6, 7, ..., 118, 120, 126],
[ 2, 3, 14, ..., 105, 106, 113],
[ 4, 9, 10, ..., 0, 0, 0],
...,
[ 4, 11, 19, ..., 117, 118, 0],
[ 1, 14, 17, ..., 0, 0, 0],
[ 0, 3, 6, ..., 0, 0, 0]])>}, <tf.Tensor: shape=(128, 32), dtype=int32, numpy=
array([[ 1010, 2124, 2004, ..., 2095, 11300, 1012],
[ 2271, 13091, 2303, ..., 2029, 2027, 1010],
[23976, 2007, 1037, ..., 0, 0, 0],
...,
[ 1010, 1996, 1010, ..., 1999, 7511, 0],
[ 2225, 1998, 10722, ..., 0, 0, 0],
[ 9794, 1030, 2322, ..., 0, 0, 0]], dtype=int32)>, <tf.Tensor: shape=(128, 32), dtype=float32, numpy=
array([[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 1., 1., 1.],
[1., 1., 1., ..., 0., 0., 0.],
...,
[1., 1., 1., ..., 1., 1., 0.],
[1., 1., 1., ..., 0., 0., 0.],
[1., 1., 1., ..., 0., 0., 0.]], dtype=float32)>)
```
</div>
The above block sorts our dataset into a `(features, labels, weights)` tuple, which can be
passed directly to `keras.Model.fit()`.
We have two features:
1. `"token_ids"`, where some tokens have been replaced with our mask token id.
2. `"mask_positions"`, which keeps track of which tokens we masked out.
Our labels are simply the ids we masked out.
Because not all sequences will have the same number of masks, we also keep a
`sample_weight` tensor, which removes padded labels from our loss function by giving them
zero weight.
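To make the role of the weights concrete, here is a toy example (not from the
original guide): a label given zero weight contributes nothing to the loss.

```python
import numpy as np

loss_fn = keras.losses.SparseCategoricalCrossentropy()
y_true = np.array([2, 0])  # pretend the second label is padding
y_pred = np.array([[0.1, 0.2, 0.7], [0.4, 0.3, 0.3]])
weights = np.array([1.0, 0.0])  # zero weight masks out the padded label
print(loss_fn(y_true, y_pred, sample_weight=weights))  # only position 0 contributes
```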
### Create the Transformer encoder
KerasNLP provides all the building blocks to quickly build a Transformer encoder.
We use `keras_nlp.layers.TokenAndPositionEmbedding` to first embed our input token ids.
This layer simultaneously learns two embeddings -- one for words in a sentence and another
for integer positions in a sentence. The output embedding is simply the sum of the two.
Then we can add a series of `keras_nlp.layers.TransformerEncoder` layers. These are the
bread and butter of the Transformer model, using an attention mechanism to attend to
different parts of the input sentence, followed by a multi-layer perceptron block.
The output of this model will be an encoded vector per input token id. Unlike the
bag-of-words model we used as a baseline, this model will embed each token accounting for
the context in which it appeared.
```python
inputs = keras.Input(shape=(SEQ_LENGTH,), dtype="int32")
# Embed our tokens with a positional embedding.
embedding_layer = keras_nlp.layers.TokenAndPositionEmbedding(
vocabulary_size=tokenizer.vocabulary_size(),
sequence_length=SEQ_LENGTH,
embedding_dim=MODEL_DIM,
)
outputs = embedding_layer(inputs)
# Apply layer normalization and dropout to the embedding.
outputs = keras.layers.LayerNormalization(epsilon=NORM_EPSILON)(outputs)
outputs = keras.layers.Dropout(rate=DROPOUT)(outputs)
# Add a number of encoder blocks
for i in range(NUM_LAYERS):
outputs = keras_nlp.layers.TransformerEncoder(
intermediate_dim=INTERMEDIATE_DIM,
num_heads=NUM_HEADS,
dropout=DROPOUT,
layer_norm_epsilon=NORM_EPSILON,
)(outputs)
encoder_model = keras.Model(inputs, outputs)
encoder_model.summary()
```
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold">Model: "functional_3"</span>
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace">┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━┓
┃<span style="font-weight: bold"> Layer (type) </span>┃<span style="font-weight: bold"> Output Shape </span>┃<span style="font-weight: bold"> Param # </span>┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━┩
│ input_layer_1 (<span style="color: #0087ff; text-decoration-color: #0087ff">InputLayer</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ token_and_position_embedding │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>, <span style="color: #00af00; text-decoration-color: #00af00">256</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">7,846,400</span> │
│ (<span style="color: #0087ff; text-decoration-color: #0087ff">TokenAndPositionEmbedding</span>) │ │ │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ layer_normalization │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>, <span style="color: #00af00; text-decoration-color: #00af00">256</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">512</span> │
│ (<span style="color: #0087ff; text-decoration-color: #0087ff">LayerNormalization</span>) │ │ │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ dropout (<span style="color: #0087ff; text-decoration-color: #0087ff">Dropout</span>) │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>, <span style="color: #00af00; text-decoration-color: #00af00">256</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">0</span> │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ transformer_encoder │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>, <span style="color: #00af00; text-decoration-color: #00af00">256</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">527,104</span> │
│ (<span style="color: #0087ff; text-decoration-color: #0087ff">TransformerEncoder</span>) │ │ │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ transformer_encoder_1 │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>, <span style="color: #00af00; text-decoration-color: #00af00">256</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">527,104</span> │
│ (<span style="color: #0087ff; text-decoration-color: #0087ff">TransformerEncoder</span>) │ │ │
├─────────────────────────────────┼───────────────────────────┼────────────┤
│ transformer_encoder_2 │ (<span style="color: #00d7ff; text-decoration-color: #00d7ff">None</span>, <span style="color: #00af00; text-decoration-color: #00af00">128</span>, <span style="color: #00af00; text-decoration-color: #00af00">256</span>) │ <span style="color: #00af00; text-decoration-color: #00af00">527,104</span> │
│ (<span style="color: #0087ff; text-decoration-color: #0087ff">TransformerEncoder</span>) │ │ │
└─────────────────────────────────┴───────────────────────────┴────────────┘
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Total params: </span><span style="color: #00af00; text-decoration-color: #00af00">9,428,224</span> (287.73 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">9,428,224</span> (287.73 MB)
</pre>
<pre style="white-space:pre;overflow-x:auto;line-height:normal;font-family:Menlo,'DejaVu Sans Mono',consolas,'Courier New',monospace"><span style="font-weight: bold"> Non-trainable params: </span><span style="color: #00af00; text-decoration-color: #00af00">0</span> (0.00 B)
</pre>
### Pretrain the Transformer
You can think of the `encoder_model` as its own modular unit; it is the piece of our
model that we are really interested in for our downstream task. However, we still need to
set up the encoder to train on the MaskedLM task; to do that we attach a
`keras_nlp.layers.MaskedLMHead`.
This layer will take as one input the token encodings, and as another the positions we
masked out in the original input. It will gather the token encodings we masked, and
transform them back into predictions over our entire vocabulary.
With that, we are ready to compile and run pretraining. If you are running this in a
Colab, note that this will take about an hour. Training Transformers is famously
compute-intensive, so even this relatively small Transformer will take some time.
```python
# Create the pretraining model by attaching a masked language model head.
inputs = {
"token_ids": keras.Input(shape=(SEQ_LENGTH,), dtype="int32", name="token_ids"),
"mask_positions": keras.Input(
shape=(PREDICTIONS_PER_SEQ,), dtype="int32", name="mask_positions"
),
}
# Encode the tokens.
encoded_tokens = encoder_model(inputs["token_ids"])
# Predict an output word for each masked input token.
# We use the input token embedding to project from our encoded vectors to
# vocabulary logits, which has been shown to improve training efficiency.
outputs = keras_nlp.layers.MaskedLMHead(
token_embedding=embedding_layer.token_embedding,
activation="softmax",
)(encoded_tokens, mask_positions=inputs["mask_positions"])
# Define and compile our pretraining model.
pretraining_model = keras.Model(inputs, outputs)
pretraining_model.compile(
loss="sparse_categorical_crossentropy",
optimizer=keras.optimizers.AdamW(PRETRAINING_LEARNING_RATE),
weighted_metrics=["sparse_categorical_accuracy"],
jit_compile=True,
)
# Pretrain the model on our wiki text dataset.
pretraining_model.fit(
pretrain_ds,
validation_data=pretrain_val_ds,
epochs=PRETRAINING_EPOCHS,
)
# Save this base model for further finetuning.
encoder_model.save("encoder_model.keras")
```
<div class="k-default-codeblock">
```
Epoch 1/8
5857/5857 ━━━━━━━━━━━━━━━━━━━━ 242s 41ms/step - loss: 5.4679 - sparse_categorical_accuracy: 0.1353 - val_loss: 3.4570 - val_sparse_categorical_accuracy: 0.3522
Epoch 2/8
5857/5857 ━━━━━━━━━━━━━━━━━━━━ 234s 40ms/step - loss: 3.6031 - sparse_categorical_accuracy: 0.3396 - val_loss: 3.0514 - val_sparse_categorical_accuracy: 0.4032
Epoch 3/8
5857/5857 ━━━━━━━━━━━━━━━━━━━━ 232s 40ms/step - loss: 3.2609 - sparse_categorical_accuracy: 0.3802 - val_loss: 2.8858 - val_sparse_categorical_accuracy: 0.4240
Epoch 4/8
5857/5857 ━━━━━━━━━━━━━━━━━━━━ 233s 40ms/step - loss: 3.1099 - sparse_categorical_accuracy: 0.3978 - val_loss: 2.7897 - val_sparse_categorical_accuracy: 0.4375
Epoch 5/8
5857/5857 ━━━━━━━━━━━━━━━━━━━━ 235s 40ms/step - loss: 3.0145 - sparse_categorical_accuracy: 0.4090 - val_loss: 2.7504 - val_sparse_categorical_accuracy: 0.4419
Epoch 6/8
5857/5857 ━━━━━━━━━━━━━━━━━━━━ 252s 43ms/step - loss: 2.9530 - sparse_categorical_accuracy: 0.4157 - val_loss: 2.6925 - val_sparse_categorical_accuracy: 0.4474
Epoch 7/8
5857/5857 ━━━━━━━━━━━━━━━━━━━━ 232s 40ms/step - loss: 2.9088 - sparse_categorical_accuracy: 0.4210 - val_loss: 2.6554 - val_sparse_categorical_accuracy: 0.4513
Epoch 8/8
5857/5857 ━━━━━━━━━━━━━━━━━━━━ 236s 40ms/step - loss: 2.8721 - sparse_categorical_accuracy: 0.4250 - val_loss: 2.6389 - val_sparse_categorical_accuracy: 0.4548
```
</div>
---
## Fine-tuning
After pretraining, we can now fine-tune our model on the `SST-2` dataset. We can
leverage the ability of the encoder we built to predict words in context to boost
our performance on the downstream task.
### Preprocess data for classification
Preprocessing for fine-tuning is much simpler than for our pretraining MaskedLM task. We just
tokenize our input sentences and we are ready for training!
```python
def preprocess(sentences, labels):
return tokenizer(sentences), labels
# We use prefetch() to pre-compute preprocessed batches on the fly on our CPU.
finetune_ds = sst_train_ds.map(
preprocess, num_parallel_calls=tf.data.AUTOTUNE
).prefetch(tf.data.AUTOTUNE)
finetune_val_ds = sst_val_ds.map(
preprocess, num_parallel_calls=tf.data.AUTOTUNE
).prefetch(tf.data.AUTOTUNE)
# Preview a single input example.
print(finetune_val_ds.take(1).get_single_element())
```
<div class="k-default-codeblock">
```
(<tf.Tensor: shape=(32, 128), dtype=int32, numpy=
array([[ 2009, 1005, 1055, ..., 0, 0, 0],
[ 4895, 10258, 2378, ..., 0, 0, 0],
[ 4473, 2149, 2000, ..., 0, 0, 0],
...,
[ 1045, 2018, 2000, ..., 0, 0, 0],
[ 4283, 2000, 3660, ..., 0, 0, 0],
[ 1012, 1012, 1012, ..., 0, 0, 0]], dtype=int32)>, <tf.Tensor: shape=(32,), dtype=int32, numpy=
array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 0, 0,
0, 1, 1, 0, 0, 1, 0, 0, 1, 0], dtype=int32)>)
```
</div>
### Fine-tune the Transformer
To go from our encoded token output to a classification prediction, we need to attach
another "head" to our Transformer model. We can afford to be simple here. We pool
the encoded tokens together, and use a single dense layer to make a prediction.
```python
# Reload the encoder model from disk so we can restart fine-tuning from scratch.
encoder_model = keras.models.load_model("encoder_model.keras", compile=False)
# Take as input the tokenized input.
inputs = keras.Input(shape=(SEQ_LENGTH,), dtype="int32")
# Encode and pool the tokens.
encoded_tokens = encoder_model(inputs)
pooled_tokens = keras.layers.GlobalAveragePooling1D()(encoded_tokens[0])
# Predict an output label.
outputs = keras.layers.Dense(1, activation="sigmoid")(pooled_tokens)
# Define and compile our fine-tuning model.
finetuning_model = keras.Model(inputs, outputs)
finetuning_model.compile(
loss="binary_crossentropy",
optimizer=keras.optimizers.AdamW(FINETUNING_LEARNING_RATE),
metrics=["accuracy"],
)
# Finetune the model for the SST-2 task.
finetuning_model.fit(
finetune_ds,
validation_data=finetune_val_ds,
epochs=FINETUNING_EPOCHS,
)
```
<div class="k-default-codeblock">
```
Epoch 1/3
2105/2105 ━━━━━━━━━━━━━━━━━━━━ 21s 9ms/step - accuracy: 0.7500 - loss: 0.4891 - val_accuracy: 0.8036 - val_loss: 0.4099
Epoch 2/3
2105/2105 ━━━━━━━━━━━━━━━━━━━━ 16s 8ms/step - accuracy: 0.8826 - loss: 0.2779 - val_accuracy: 0.8482 - val_loss: 0.3964
Epoch 3/3
2105/2105 ━━━━━━━━━━━━━━━━━━━━ 16s 8ms/step - accuracy: 0.9176 - loss: 0.2066 - val_accuracy: 0.8549 - val_loss: 0.4142
<keras.src.callbacks.history.History at 0x7f12d85c21a0>
```
</div>
Pretraining was enough to boost our performance to roughly 85%, and this is hardly the ceiling
for Transformer models. You may have noticed during pretraining that our validation
performance was still steadily increasing. Our model is still significantly undertrained.
Training for more epochs, training a larger Transformer, and training on more unlabeled
text would all continue to boost performance significantly.
One of the key goals of KerasNLP is to provide a modular approach to NLP model building.
We have shown one approach to building a Transformer here, but KerasNLP supports an ever
growing array of components for preprocessing text and building models. We hope it makes
it easier to experiment with solutions to your natural language problems.
| keras-io/guides/md/keras_nlp/transformer_pretraining.md/0 | {
"file_path": "keras-io/guides/md/keras_nlp/transformer_pretraining.md",
"repo_id": "keras-io",
"token_count": 10763
} | 110 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/api/keras_nlp/modeling_layers/'" />
| keras-io/redirects/api/keras_nlp/layers/index.html/0 | {
"file_path": "keras-io/redirects/api/keras_nlp/layers/index.html",
"repo_id": "keras-io",
"token_count": 41
} | 111 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/api/utils/backend_utils/'" />
| keras-io/redirects/backend/index.html/0 | {
"file_path": "keras-io/redirects/backend/index.html",
"repo_id": "keras-io",
"token_count": 36
} | 112 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/api/layers/convolution_layers/'" />
| keras-io/redirects/layers/convolutional/index.html/0 | {
"file_path": "keras-io/redirects/layers/convolutional/index.html",
"repo_id": "keras-io",
"token_count": 38
} | 113 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/api/optimizers/'" />
| keras-io/redirects/optimizers/index.html/0 | {
"file_path": "keras-io/redirects/optimizers/index.html",
"repo_id": "keras-io",
"token_count": 32
} | 114 |
EXAMPLES_MASTER = {
"path": "examples/",
"title": "Code examples",
"toc": False,
"children": [
{
"path": "vision/",
"title": "Computer Vision",
"toc": True,
"children": [
# Image classification
{
"path": "image_classification_from_scratch",
"title": "Image classification from scratch",
"subcategory": "Image classification",
"highlight": True,
"keras_3": True,
},
{
"path": "mnist_convnet",
"title": "Simple MNIST convnet",
"subcategory": "Image classification",
"highlight": True,
"keras_3": True,
},
{
"path": "image_classification_efficientnet_fine_tuning",
"title": "Image classification via fine-tuning with EfficientNet",
"subcategory": "Image classification",
"highlight": True,
"keras_3": True,
},
{
"path": "image_classification_with_vision_transformer",
"title": "Image classification with Vision Transformer",
"subcategory": "Image classification",
"keras_3": True,
},
{
"path": "attention_mil_classification",
"title": "Classification using Attention-based Deep Multiple Instance Learning",
"subcategory": "Image classification",
"keras_3": True,
},
{
"path": "mlp_image_classification",
"title": "Image classification with modern MLP models",
"subcategory": "Image classification",
"keras_3": True,
},
{
"path": "mobilevit",
"title": "A mobile-friendly Transformer-based model for image classification",
"subcategory": "Image classification",
"keras_3": True,
},
{
"path": "xray_classification_with_tpus",
"title": "Pneumonia Classification on TPU",
"subcategory": "Image classification",
"keras_3": True,
},
{
"path": "cct",
"title": "Compact Convolutional Transformers",
"subcategory": "Image classification",
"keras_3": True,
},
{
"path": "convmixer",
"title": "Image classification with ConvMixer",
"subcategory": "Image classification",
"keras_3": True,
},
{
"path": "eanet",
"title": "Image classification with EANet (External Attention Transformer)",
"subcategory": "Image classification",
"keras_3": True,
},
{
"path": "involution",
"title": "Involutional neural networks",
"subcategory": "Image classification",
"keras_3": True,
},
{
"path": "perceiver_image_classification",
"title": "Image classification with Perceiver",
"subcategory": "Image classification",
"keras_3": True,
},
{
"path": "reptile",
"title": "Few-Shot learning with Reptile",
"subcategory": "Image classification",
"keras_3": True,
},
{
"path": "semisupervised_simclr",
"title": "Semi-supervised image classification using contrastive pretraining with SimCLR",
"subcategory": "Image classification",
"keras_3": True,
},
{
"path": "swin_transformers",
"title": "Image classification with Swin Transformers",
"subcategory": "Image classification",
"keras_3": True,
},
{
"path": "vit_small_ds",
"title": "Train a Vision Transformer on small datasets",
"subcategory": "Image classification",
},
{
"path": "shiftvit",
"title": "A Vision Transformer without Attention",
"subcategory": "Image classification",
},
{
"path": "image_classification_using_global_context_vision_transformer",
"title": "Image Classification using Global Context Vision Transformer",
"subcategory": "Image classification",
"keras_3": True,
},
# Image segmentation
{
"path": "oxford_pets_image_segmentation",
"title": "Image segmentation with a U-Net-like architecture",
"subcategory": "Image segmentation",
"highlight": True,
"keras_3": True,
},
{
"path": "deeplabv3_plus",
"title": "Multiclass semantic segmentation using DeepLabV3+",
"subcategory": "Image segmentation",
"keras_3": True,
},
{
"path": "basnet_segmentation",
"title": "Highly accurate boundaries segmentation using BASNet",
"subcategory": "Image segmentation",
},
{
"path": "fully_convolutional_network",
"title": "Image Segmentation using Composable Fully-Convolutional Networks",
"subcategory": "Image segmentation",
"keras_3": True,
},
# Object Detection
{
"path": "retinanet",
"title": "Object Detection with RetinaNet",
"subcategory": "Object detection",
},
{
"path": "keypoint_detection",
"title": "Keypoint Detection with Transfer Learning",
"subcategory": "Object detection",
"keras_3": True,
},
{
"path": "object_detection_using_vision_transformer",
"title": "Object detection with Vision Transformers",
"subcategory": "Object detection",
"keras_3": True,
},
# 3D
{
"path": "3D_image_classification",
"title": "3D image classification from CT scans",
"subcategory": "3D",
"keras_3": True,
},
{
"path": "depth_estimation",
"title": "Monocular depth estimation",
"subcategory": "3D",
},
{
"path": "nerf",
"title": "3D volumetric rendering with NeRF",
"subcategory": "3D",
"keras_3": True,
"highlight": True,
},
{
"path": "pointnet_segmentation",
"title": "Point cloud segmentation with PointNet",
"subcategory": "3D",
"keras_3": True,
},
{
"path": "pointnet",
"title": "Point cloud classification",
"subcategory": "3D",
"keras_3": True,
},
# OCR
{
"path": "captcha_ocr",
"title": "OCR model for reading Captchas",
"subcategory": "OCR",
"keras_3": True,
},
{
"path": "handwriting_recognition",
"title": "Handwriting recognition",
"subcategory": "OCR",
},
# Image enhancement
{
"path": "autoencoder",
"title": "Convolutional autoencoder for image denoising",
"subcategory": "Image enhancement",
"keras_3": True,
},
{
"path": "mirnet",
"title": "Low-light image enhancement using MIRNet",
"subcategory": "Image enhancement",
"keras_3": True,
},
{
"path": "super_resolution_sub_pixel",
"title": "Image Super-Resolution using an Efficient Sub-Pixel CNN",
"subcategory": "Image enhancement",
"keras_3": True,
},
{
"path": "edsr",
"title": "Enhanced Deep Residual Networks for single-image super-resolution",
"subcategory": "Image enhancement",
},
{
"path": "zero_dce",
"title": "Zero-DCE for low-light image enhancement",
"subcategory": "Image enhancement",
"keras_3": True,
},
# Data augmentation
{
"path": "cutmix",
"title": "CutMix data augmentation for image classification",
"subcategory": "Data augmentation",
"keras_3": True,
},
{
"path": "mixup",
"title": "MixUp augmentation for image classification",
"subcategory": "Data augmentation",
"keras_3": True,
},
{
"path": "randaugment",
"title": "RandAugment for Image Classification for Improved Robustness",
"subcategory": "Data augmentation",
"keras_3": True,
},
# Image & Text
{
"path": "image_captioning",
"title": "Image captioning",
"subcategory": "Image & Text",
"highlight": True,
"keras_3": True,
},
{
"path": "nl_image_search",
"title": "Natural language image search with a Dual Encoder",
"subcategory": "Image & Text",
},
# Vision models interpretability
{
"path": "visualizing_what_convnets_learn",
"title": "Visualizing what convnets learn",
"subcategory": "Vision models interpretability",
"keras_3": True,
},
{
"path": "integrated_gradients",
"title": "Model interpretability with Integrated Gradients",
"subcategory": "Vision models interpretability",
"keras_3": True,
},
{
"path": "probing_vits",
"title": "Investigating Vision Transformer representations",
"subcategory": "Vision models interpretability",
"keras_3": True,
},
{
"path": "grad_cam",
"title": "Grad-CAM class activation visualization",
"subcategory": "Vision models interpretability",
"keras_3": True,
},
# Image similarity search
{
"path": "near_dup_search",
"title": "Near-duplicate image search",
"subcategory": "Image similarity search",
},
{
"path": "semantic_image_clustering",
"title": "Semantic Image Clustering",
"subcategory": "Image similarity search",
"keras_3": True,
},
{
"path": "siamese_contrastive",
"title": "Image similarity estimation using a Siamese Network with a contrastive loss",
"subcategory": "Image similarity search",
"keras_3": True,
},
{
"path": "siamese_network",
"title": "Image similarity estimation using a Siamese Network with a triplet loss",
"subcategory": "Image similarity search",
"keras_3": True,
},
{
"path": "metric_learning",
"title": "Metric learning for image similarity search",
"subcategory": "Image similarity search",
"keras_3": True,
},
{
"path": "metric_learning_tf_similarity",
"title": "Metric learning for image similarity search using TensorFlow Similarity",
"subcategory": "Image similarity search",
},
{
"path": "nnclr",
"title": "Self-supervised contrastive learning with NNCLR",
"subcategory": "Image similarity search",
"keras_3": True,
},
# Video
{
"path": "video_classification",
"title": "Video Classification with a CNN-RNN Architecture",
"subcategory": "Video",
"keras_3": True,
},
{
"path": "conv_lstm",
"title": "Next-Frame Video Prediction with Convolutional LSTMs",
"subcategory": "Video",
"keras_3": True,
},
{
"path": "video_transformers",
"title": "Video Classification with Transformers",
"subcategory": "Video",
"keras_3": True,
},
{
"path": "vivit",
"title": "Video Vision Transformer",
"subcategory": "Video",
"keras_3": True,
},
{
"path": "bit",
"title": "Image Classification using BigTransfer (BiT)",
"subcategory": "Image classification",
"keras_3": True,
},
# Performance recipes
{
"path": "gradient_centralization",
"title": "Gradient Centralization for Better Training Performance",
"subcategory": "Performance recipes",
"keras_3": True,
},
{
"path": "token_learner",
"title": "Learning to tokenize in Vision Transformers",
"subcategory": "Performance recipes",
"keras_3": True,
},
{
"path": "knowledge_distillation",
"title": "Knowledge Distillation",
"subcategory": "Performance recipes",
"keras_3": True,
},
{
"path": "fixres",
"title": "FixRes: Fixing train-test resolution discrepancy",
"subcategory": "Performance recipes",
"keras_3": True,
},
{
"path": "cait",
"title": "Class Attention Image Transformers with LayerScale",
"subcategory": "Performance recipes",
"keras_3": True,
},
{
"path": "patch_convnet",
"title": "Augmenting convnets with aggregated attention",
"subcategory": "Performance recipes",
"keras_3": True,
},
{
"path": "learnable_resizer",
"title": "Learning to Resize",
"subcategory": "Performance recipes",
"keras_3": True,
},
],
},
{
"path": "nlp/",
"title": "Natural Language Processing",
"toc": True,
"children": [
# Text classification
{
"path": "text_classification_from_scratch",
"title": "Text classification from scratch",
"subcategory": "Text classification",
"highlight": True,
"keras_3": True,
},
{
"path": "active_learning_review_classification",
"title": "Review Classification using Active Learning",
"subcategory": "Text classification",
},
{
"path": "fnet_classification_with_keras_nlp",
"title": "Text Classification using FNet",
"subcategory": "Text classification",
"keras_3": True,
},
{
"path": "multi_label_classification",
"title": "Large-scale multi-label text classification",
"subcategory": "Text classification",
},
{
"path": "text_classification_with_transformer",
"title": "Text classification with Transformer",
"subcategory": "Text classification",
"keras_3": True,
},
{
"path": "text_classification_with_switch_transformer",
"title": "Text classification with Switch Transformer",
"subcategory": "Text classification",
"keras_3": True,
},
{
"path": "tweet-classification-using-tfdf",
"title": "Text classification using Decision Forests and pretrained embeddings",
"subcategory": "Text classification",
},
{
"path": "pretrained_word_embeddings",
"title": "Using pre-trained word embeddings",
"subcategory": "Text classification",
"keras_3": True,
},
{
"path": "bidirectional_lstm_imdb",
"title": "Bidirectional LSTM on IMDB",
"subcategory": "Text classification",
"keras_3": True,
},
{
"path": "data_parallel_training_with_keras_nlp",
"title": "Data Parallel Training with KerasNLP and tf.distribute",
"subcategory": "Text classification",
"keras_3": True,
},
# Machine translation
{
"path": "neural_machine_translation_with_keras_nlp",
"title": "English-to-Spanish translation with KerasNLP",
"subcategory": "Machine translation",
"keras_3": True,
},
{
"path": "neural_machine_translation_with_transformer",
"title": "English-to-Spanish translation with a sequence-to-sequence Transformer",
"subcategory": "Machine translation",
"highlight": True,
"keras_3": True,
},
{
"path": "lstm_seq2seq",
"title": "Character-level recurrent sequence-to-sequence model",
"subcategory": "Machine translation",
"keras_3": True,
},
        # Entailment prediction
{
"path": "multimodal_entailment",
"title": "Multimodal entailment",
"subcategory": "Entailment prediction",
},
# Named entity recognition
{
"path": "ner_transformers",
"title": "Named Entity Recognition using Transformers",
"subcategory": "Named entity recognition",
"keras_3": True,
},
# Sequence-to-sequence
{
"path": "text_extraction_with_bert",
"title": "Text Extraction with BERT",
"subcategory": "Sequence-to-sequence",
},
{
"path": "addition_rnn",
"title": "Sequence to sequence learning for performing number addition",
"subcategory": "Sequence-to-sequence",
"keras_3": True,
},
# Text similarity search
{
"path": "semantic_similarity_with_keras_nlp",
"title": "Semantic Similarity with KerasNLP",
"subcategory": "Text similarity search",
"keras_3": True,
},
{
"path": "semantic_similarity_with_bert",
"title": "Semantic Similarity with BERT",
"subcategory": "Text similarity search",
"keras_3": True,
},
{
"path": "sentence_embeddings_with_sbert",
"title": "Sentence embeddings using Siamese RoBERTa-networks",
"subcategory": "Text similarity search",
"keras_3": True,
},
# Language modeling
{
"path": "masked_language_modeling",
"title": "End-to-end Masked Language Modeling with BERT",
"subcategory": "Language modeling",
},
{
"path": "pretraining_BERT",
"title": "Pretraining BERT with Hugging Face Transformers",
"subcategory": "Language modeling",
},
# Parameter efficient fine-tuning.
{
"path": "parameter_efficient_finetuning_of_gpt2_with_lora",
"title": "Parameter-efficient fine-tuning of GPT-2 with LoRA",
"subcategory": "Parameter efficient fine-tuning",
"keras_3": True,
},
# Remainder is autogenerated
],
},
{
"path": "structured_data/",
"title": "Structured Data",
"toc": True,
"children": [
{
"path": "structured_data_classification_with_feature_space",
"title": "Structured data classification with FeatureSpace",
"subcategory": "Structured data classification",
"highlight": True,
"keras_3": True,
},
{
"path": "imbalanced_classification",
"title": "Imbalanced classification: credit card fraud detection",
"subcategory": "Structured data classification",
"highlight": True,
"keras_3": True,
},
{
"path": "structured_data_classification_from_scratch",
"title": "Structured data classification from scratch",
"subcategory": "Structured data classification",
"keras_3": True,
},
{
"path": "wide_deep_cross_networks",
"title": "Structured data learning with Wide, Deep, and Cross networks",
"subcategory": "Structured data classification",
"keras_3": True,
},
{
"path": "classification_with_grn_and_vsn",
"title": "Classification with Gated Residual and Variable Selection Networks",
"subcategory": "Structured data classification",
},
{
"path": "classification_with_tfdf",
"title": "Classification with TensorFlow Decision Forests",
"subcategory": "Structured data classification",
},
{
"path": "deep_neural_decision_forests",
"title": "Classification with Neural Decision Forests",
"subcategory": "Structured data classification",
"keras_3": True,
},
{
"path": "tabtransformer",
"title": "Structured data learning with TabTransformer",
"subcategory": "Structured data classification",
"keras_3": True,
},
# Recommendation
{
"path": "collaborative_filtering_movielens",
"title": "Collaborative Filtering for Movie Recommendations",
"subcategory": "Recommendation",
"keras_3": True,
},
{
"path": "movielens_recommendations_transformers",
"title": "A Transformer-based recommendation system",
"subcategory": "Recommendation",
"keras_3": True,
},
],
},
{
"path": "timeseries/",
"title": "Timeseries",
"toc": True,
"children": [
# Timeseries classification
{
"path": "timeseries_classification_from_scratch",
"title": "Timeseries classification from scratch",
"subcategory": "Timeseries classification",
"highlight": True,
"keras_3": True,
},
{
"path": "timeseries_classification_transformer",
"title": "Timeseries classification with a Transformer model",
"subcategory": "Timeseries classification",
"keras_3": True,
},
{
"path": "eeg_signal_classification",
"title": "Electroencephalogram Signal Classification for action identification",
"subcategory": "Timeseries classification",
"keras_3": True,
},
{
"path": "event_classification_for_payment_card_fraud_detection",
"title": "Event classification for payment card fraud detection",
"subcategory": "Timeseries classification",
"keras_3": True,
},
# Anomaly detection
{
"path": "timeseries_anomaly_detection",
"title": "Timeseries anomaly detection using an Autoencoder",
"subcategory": "Anomaly detection",
"keras_3": True,
},
# Timeseries forecasting
{
"path": "timeseries_traffic_forecasting",
"title": "Traffic forecasting using graph neural networks and LSTM",
"subcategory": "Timeseries forecasting",
"keras_3": True,
},
{
"path": "timeseries_weather_forecasting",
"title": "Timeseries forecasting for weather prediction",
"subcategory": "Timeseries forecasting",
"keras_3": True,
},
],
},
{
"path": "generative/",
"title": "Generative Deep Learning",
"toc": True,
"children": [
# Image generation
{
"path": "ddim",
"title": "Denoising Diffusion Implicit Models",
"subcategory": "Image generation",
"highlight": True,
"keras_3": True,
},
{
"path": "random_walks_with_stable_diffusion",
"title": "A walk through latent space with Stable Diffusion",
"subcategory": "Image generation",
"highlight": True,
"keras_3": True,
},
{
"path": "dreambooth",
"title": "DreamBooth",
"subcategory": "Image generation",
},
{
"path": "ddpm",
"title": "Denoising Diffusion Probabilistic Models",
"subcategory": "Image generation",
},
{
"path": "fine_tune_via_textual_inversion",
"title": "Teach StableDiffusion new concepts via Textual Inversion",
"subcategory": "Image generation",
},
{
"path": "finetune_stable_diffusion",
"title": "Fine-tuning Stable Diffusion",
"subcategory": "Image generation",
},
{
"path": "vae",
"title": "Variational AutoEncoder",
"subcategory": "Image generation",
"keras_3": True,
},
{
"path": "dcgan_overriding_train_step",
"title": "GAN overriding Model.train_step",
"subcategory": "Image generation",
"keras_3": True,
},
{
"path": "wgan_gp",
"title": "WGAN-GP overriding Model.train_step",
"subcategory": "Image generation",
"keras_3": True,
},
{
"path": "conditional_gan",
"title": "Conditional GAN",
"subcategory": "Image generation",
"keras_3": True,
},
{
"path": "cyclegan",
"title": "CycleGAN",
"subcategory": "Image generation",
},
{
"path": "gan_ada",
"title": "Data-efficient GANs with Adaptive Discriminator Augmentation",
"subcategory": "Image generation",
},
{
"path": "deep_dream",
"title": "Deep Dream",
"subcategory": "Image generation",
"keras_3": True,
},
{
"path": "gaugan",
"title": "GauGAN for conditional image generation",
"subcategory": "Image generation",
"keras_3": True,
},
{
"path": "pixelcnn",
"title": "PixelCNN",
"subcategory": "Image generation",
"keras_3": True,
},
{
"path": "stylegan",
"title": "Face image generation with StyleGAN",
"subcategory": "Image generation",
},
{
"path": "vq_vae",
"title": "Vector-Quantized Variational Autoencoders",
"subcategory": "Image generation",
},
# Style transfer
{
"path": "neural_style_transfer",
"title": "Neural style transfer",
"subcategory": "Style transfer",
"keras_3": True,
},
{
"path": "adain",
"title": "Neural Style Transfer with AdaIN",
"subcategory": "Style transfer",
},
# Text generation
{
"path": "gpt2_text_generation_with_kerasnlp",
"title": "GPT2 Text Generation with KerasNLP",
"subcategory": "Text generation",
"highlight": True,
"keras_3": True,
},
{
"path": "text_generation_gpt",
"title": "GPT text generation from scratch with KerasNLP",
"subcategory": "Text generation",
"keras_3": True,
},
{
"path": "text_generation_with_miniature_gpt",
"title": "Text generation with a miniature GPT",
"subcategory": "Text generation",
"keras_3": True,
},
{
"path": "lstm_character_level_text_generation",
"title": "Character-level text generation with LSTM",
"subcategory": "Text generation",
"keras_3": True,
},
{
"path": "text_generation_fnet",
"title": "Text Generation using FNet",
"subcategory": "Text generation",
},
# Graph generation
{
"path": "molecule_generation",
"title": "Drug Molecule Generation with VAE",
"subcategory": "Graph generation",
},
{
"path": "wgan-graphs",
"title": "WGAN-GP with R-GCN for the generation of small molecular graphs",
"subcategory": "Graph generation",
},
],
},
{
"path": "audio/",
"title": "Audio Data",
"toc": True,
"children": [
{
"path": "transformer_asr",
"title": "Automatic Speech Recognition with Transformer",
"subcategory": "Speech recognition",
"keras_3": True,
},
# Will be autogenerated
],
},
{
"path": "rl/",
"title": "Reinforcement Learning",
"toc": True,
"children": [
# Will be autogenerated
],
},
{
"path": "graph/",
"title": "Graph Data",
"toc": True,
"children": [
# Will be autogenerated
],
},
{
"path": "keras_recipes/",
"title": "Quick Keras Recipes",
"toc": True,
"children": [
{
"path": "tf_serving",
"title": "Serving TensorFlow models with TFServing",
"subcategory": "Serving",
"keras_3": True,
},
{
"path": "debugging_tips",
"title": "Keras debugging tips",
"subcategory": "Keras usage tips",
"keras_3": True,
},
{
"path": "subclassing_conv_layers",
"title": "Customizing the convolution operation of a Conv2D layer",
"subcategory": "Keras usage tips",
"keras_3": True,
},
{
"path": "trainer_pattern",
"title": "Trainer pattern",
"subcategory": "Keras usage tips",
"keras_3": True,
},
{
"path": "endpoint_layer_pattern",
"title": "Endpoint layer pattern",
"subcategory": "Keras usage tips",
"keras_3": True,
},
{
"path": "reproducibility_recipes",
"title": "Reproducibility in Keras Models",
"subcategory": "Keras usage tips",
"keras_3": True,
},
{
"path": "tensorflow_numpy_models",
"title": "Writing Keras Models With TensorFlow NumPy",
"subcategory": "Keras usage tips",
"keras_3": True,
},
{
"path": "antirectifier",
"title": "Simple custom layer example: Antirectifier",
"subcategory": "Keras usage tips",
"keras_3": True,
},
{
"path": "sample_size_estimate",
"title": "Estimating required sample size for model training",
"subcategory": "ML best practices",
"keras_3": True,
},
{
"path": "memory_efficient_embeddings",
"title": "Memory-efficient embeddings for recommendation systems",
"subcategory": "ML best practices",
"keras_3": True,
},
{
"path": "creating_tfrecords",
"title": "Creating TFRecords",
"subcategory": "ML best practices",
"keras_3": True,
},
{
"path": "packaging_keras_models_for_wide_distribution",
"title": "Packaging Keras models for wide distribution using Functional Subclassing",
"subcategory": "Keras usage tips",
"keras_3": True,
},
# Rest will be autogenerated
],
},
],
}
| keras-io/scripts/examples_master.py/0 | {
"file_path": "keras-io/scripts/examples_master.py",
"repo_id": "keras-io",
"token_count": 23680
} | 115 |
# Datasets
The `keras.datasets` module provides a few toy datasets (already-vectorized, in NumPy format)
that can be used for debugging a model or creating simple code examples.
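For example, loading the MNIST digits dataset is a single call that returns
ready-to-use NumPy arrays:

```python
from keras.datasets import mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data()
```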
If you are looking for larger & more useful ready-to-use datasets, take a look at
[TensorFlow Datasets](https://github.com/tensorflow/datasets).
## Available datasets
{{toc}}
| keras-io/templates/api/datasets/index.md/0 | {
"file_path": "keras-io/templates/api/datasets/index.md",
"repo_id": "keras-io",
"token_count": 104
} | 116 |
# KerasNLP Utils
Standalone utility methods for KerasNLP, including functions for generating
sequences of text with a model.
{{toc}}
| keras-io/templates/api/keras_nlp/utils/index.md/0 | {
"file_path": "keras-io/templates/api/keras_nlp/utils/index.md",
"repo_id": "keras-io",
"token_count": 41
} | 117 |
# Models API
There are three ways to create Keras models (each shown in the sketch below):
- The [Sequential model](/guides/sequential_model), which is very straightforward (a simple list of layers),
but is limited to single-input, single-output stacks of layers (as the name gives away).
- The [Functional API](/guides/functional_api), which is an easy-to-use, fully-featured API that supports arbitrary model architectures.
For most people and most use cases, this is what you should be using. This is the Keras "industry-strength" model.
- [Model subclassing](/guides/making_new_layers_and_models_via_subclassing), where you implement everything from scratch on your own.
Use this if you have complex research use cases that fall outside the standard workflows.
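As a quick illustration, here is the same small two-layer model expressed in
each of the three styles (a minimal sketch):

```python
import keras
from keras import layers

# Sequential: a plain stack of layers.
sequential_model = keras.Sequential([
    layers.Dense(32, activation="relu"),
    layers.Dense(1),
])

# Functional: the same model as an explicit graph of layers.
inputs = keras.Input(shape=(16,))
x = layers.Dense(32, activation="relu")(inputs)
outputs = layers.Dense(1)(x)
functional_model = keras.Model(inputs, outputs)

# Subclassing: full control over the forward pass.
class TwoLayerModel(keras.Model):
    def __init__(self):
        super().__init__()
        self.dense1 = layers.Dense(32, activation="relu")
        self.dense2 = layers.Dense(1)

    def call(self, inputs):
        return self.dense2(self.dense1(inputs))
```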
## Models API overview
{{toc}}
| keras-io/templates/api/models/index.md/0 | {
"file_path": "keras-io/templates/api/models/index.md",
"repo_id": "keras-io",
"token_count": 209
} | 118 |
# KerasNLP
These guides cover the [KerasNLP](/keras_nlp/) library.
## Available guides
{{toc}}
| keras-io/templates/guides/keras_nlp/index.md/0 | {
"file_path": "keras-io/templates/guides/keras_nlp/index.md",
"repo_id": "keras-io",
"token_count": 38
} | 119 |
CI to run on PRs and merges to master, and for continuous builds. | keras-nlp/.kokoro/README.md/0 | {
"file_path": "keras-nlp/.kokoro/README.md",
"repo_id": "keras-nlp",
"token_count": 15
} | 120 |
# KerasNLP: Modular NLP Workflows for Keras
[](https://github.com/keras-team/keras-nlp/actions?query=workflow%3ATests+branch%3Amaster)

[](https://github.com/keras-team/keras-nlp/issues)
KerasNLP is a natural language processing library that works natively
with TensorFlow, JAX, or PyTorch. Built on Keras 3, these models, layers,
metrics, and tokenizers can be trained and serialized in any framework and
re-used in another without costly migrations.
KerasNLP supports users through their entire development cycle. Our workflows
are built from modular components that have state-of-the-art preset weights when
used out-of-the-box and are easily customizable when more control is needed.
This library is an extension of the core Keras API; all high-level modules are
[`Layers`](https://keras.io/api/layers/) or
[`Models`](https://keras.io/api/models/) that receive that same level of polish
as core Keras. If you are familiar with Keras, congratulations! You already
understand most of KerasNLP.
See our [Getting Started guide](https://keras.io/guides/keras_nlp/getting_started)
to start learning our API. We welcome [contributions](CONTRIBUTING.md).
## Quick Links
### For everyone
- [Home Page](https://keras.io/keras_nlp)
- [Developer Guides](https://keras.io/guides/keras_nlp)
- [API Reference](https://keras.io/api/keras_nlp)
- [Getting Started guide](https://keras.io/guides/keras_nlp/getting_started)
### For contributors
- [Contributing Guide](CONTRIBUTING.md)
- [Roadmap](ROADMAP.md)
- [Style Guide](STYLE_GUIDE.md)
- [API Design Guide](API_DESIGN_GUIDE.md)
- [Call for Contributions](https://github.com/keras-team/keras-nlp/issues?q=is%3Aissue+is%3Aopen+label%3A%22contributions+welcome%22)
## Installation
KerasNLP supports both Keras 2 and Keras 3. We recommend Keras 3 for all new
users, as it enables using KerasNLP models and layers with JAX, TensorFlow and
PyTorch.
### Keras 2 Installation
To install the latest KerasNLP release with Keras 2, simply run:
```
pip install --upgrade keras-nlp
```
### Keras 3 Installation
There are currently two ways to install Keras 3 with KerasNLP. To install the
stable versions of KerasNLP and Keras 3, you should install Keras 3 **after**
installing KerasNLP. This is a temporary step while TensorFlow is pinned to
Keras 2, and will no longer be necessary after TensorFlow 2.16.
```
pip install --upgrade keras-nlp
pip install --upgrade "keras>=3"
```
To install the latest nightly changes for both KerasNLP and Keras, you can use
our nightly package.
```
pip install --upgrade keras-nlp-nightly
```
> [!IMPORTANT]
> Keras 3 will not function with TensorFlow 2.14 or earlier.
Read [Getting started with Keras](https://keras.io/getting_started/) for more information
on installing Keras 3 and compatibility with different frameworks.
## Quickstart
Fine-tune BERT on a small sentiment analysis task using the
[`keras_nlp.models`](https://keras.io/api/keras_nlp/models/) API:
```python
import os
os.environ["KERAS_BACKEND"] = "tensorflow" # Or "jax" or "torch"!
import keras_nlp
import tensorflow_datasets as tfds
imdb_train, imdb_test = tfds.load(
"imdb_reviews",
split=["train", "test"],
as_supervised=True,
batch_size=16,
)
# Load a BERT model.
classifier = keras_nlp.models.BertClassifier.from_preset(
"bert_base_en_uncased",
num_classes=2,
activation="softmax",
)
# Fine-tune on IMDb movie reviews.
classifier.fit(imdb_train, validation_data=imdb_test)
# Predict two new examples.
classifier.predict(["What an amazing movie!", "A total waste of my time."])
```
For more in-depth guides and examples, visit https://keras.io/keras_nlp/.
## Configuring your backend
If you have Keras 3 installed in your environment (see installation above),
you can use KerasNLP with any of JAX, TensorFlow and PyTorch. To do so, set the
`KERAS_BACKEND` environment variable. For example:
```shell
export KERAS_BACKEND=jax
```
Or in Colab, with:
```python
import os
os.environ["KERAS_BACKEND"] = "jax"
import keras_nlp
```
> [!IMPORTANT]
> Make sure to set the `KERAS_BACKEND` before importing any Keras libraries; it
> will be used to set up Keras when it is first imported.
## Compatibility
We follow [Semantic Versioning](https://semver.org/), and plan to
provide backwards compatibility guarantees both for code and saved models built
with our components. While we continue with pre-release `0.y.z` development, we
may break compatibility at any time and APIs should not be considered stable.
## Disclaimer
KerasNLP provides access to pre-trained models via the `keras_nlp.models` API.
These pre-trained models are provided on an "as is" basis, without warranties
or conditions of any kind. The following underlying models are provided by third
parties, and subject to separate licenses:
BART, DeBERTa, DistilBERT, GPT-2, OPT, RoBERTa, Whisper, and XLM-RoBERTa.
## Citing KerasNLP
If KerasNLP helps your research, we appreciate your citations.
Here is the BibTeX entry:
```bibtex
@misc{kerasnlp2022,
title={KerasNLP},
  author={Watson, Matthew and Qian, Chen and Bischof, Jonathan and Chollet,
Fran\c{c}ois and others},
year={2022},
howpublished={\url{https://github.com/keras-team/keras-nlp}},
}
```
## Acknowledgements
Thank you to all of our wonderful contributors!
<a href="https://github.com/keras-team/keras-nlp/graphs/contributors">
<img src="https://contrib.rocks/image?repo=keras-team/keras-nlp" />
</a>
| keras-nlp/README.md/0 | {
"file_path": "keras-nlp/README.md",
"repo_id": "keras-nlp",
"token_count": 1912
} | 121 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Utility functions for writing training scripts."""
import glob
import os
def list_filenames_for_arg(arg_pattern):
"""List filenames from a comma separated list of files, dirs, and globs."""
input_filenames = []
for pattern in arg_pattern.split(","):
pattern = os.path.expanduser(pattern)
if os.path.isdir(pattern):
pattern = os.path.join(pattern, "**", "*")
for filename in glob.iglob(pattern, recursive=True):
if not os.path.isdir(filename):
input_filenames.append(filename)
return input_filenames
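# A minimal usage sketch (the paths below are hypothetical):
#
#     list_filenames_for_arg("~/data/*.txt,~/more_data")
#
# expands the glob, recursively walks the directory, and returns a flat list
# of matching file paths, skipping directories themselves.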
| keras-nlp/examples/utils/scripting_utils.py/0 | {
"file_path": "keras-nlp/examples/utils/scripting_utils.py",
"repo_id": "keras-nlp",
"token_count": 386
} | 122 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.backend import ops
@keras_nlp_export("keras_nlp.layers.CachedMultiHeadAttention")
class CachedMultiHeadAttention(keras.layers.MultiHeadAttention):
"""MultiHeadAttention layer with cache support.
This layer is suitable for use in autoregressive decoding. It can be used
to cache decoder self-attention and cross-attention. The forward pass
can happen in one of three modes:
- No cache, same as regular multi-head attention.
- Static cache (`cache_update_index` is None). In this case, the
cached key/value projections will be used and the input values will
be ignored.
- Updated cache (`cache_update_index` is not None). In this case, new
key/value projections are computed using the input, and spliced into
the cache at the specified index.
Note that caching is useful only during inference and should not be used
during training.
We use the notation `B`, `T`, `S` below, where `B` is the batch dimension,
    `T` is the target sequence length, and `S` is the source sequence length.
Note that during generative decoding, `T` is usually 1 (you are
generating a target sequence of length one to predict the next token).
Call arguments:
query: Query `Tensor` of shape `(B, T, dim)`.
        value: Value `Tensor` of shape `(B, S*, dim)`. If `cache` is `None`, `S*`
            must equal `S` and match the shape of `attention_mask`. If `cache` is
            not `None`, `S*` can be any length less than `S`, and the computed
value will be spliced into `cache` at `cache_update_index`.
key: Optional key `Tensor` of shape `(B, S*, dim)`. If `cache` is
`None`, `S*` must equal `S` and match the shape of
`attention_mask`. If `cache` is not `None`, `S*` can be any length
less than `S`, and the computed value will be spliced into `cache`
at `cache_update_index`.
attention_mask: a boolean mask of shape `(B, T, S)`. `attention_mask`
prevents attention to certain positions. The boolean mask specifies
which query elements can attend to which key elements, 1 indicates
attention and 0 indicates no attention. Broadcasting can happen for
the missing batch dimensions and the head dimension.
cache: a dense float Tensor. The key/value cache, of shape
`[B, 2, S, num_heads, key_dims]`, where `S` must agree with the
`attention_mask` shape. This argument is intended for use during
generation to avoid recomputing intermediate state.
        cache_update_index: an int or int Tensor, the index at which to update
`cache` (usually the index of the current token being processed
when running generation). If `cache_update_index=None` while `cache`
is set, the cache will not be updated.
Returns:
An `(attention_output, cache)` tuple. `attention_output` is the result
of the computation, of shape `(B, T, dim)`, where `T` is for target
sequence shapes and `dim` is the query input last dimension if
`output_shape` is `None`. Otherwise, the multi-head outputs are
projected to the shape specified by `output_shape`. `cache` is the
updated cache.
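
    Example:

    ```python
    import numpy as np

    import keras_nlp

    # A minimal sketch of cached autoregressive decoding. The shapes below
    # are illustrative assumptions, not tied to any particular model.
    layer = keras_nlp.layers.CachedMultiHeadAttention(num_heads=2, key_dim=4)
    # One new target token (`T = 1`) attending over a cache of length 6.
    query = np.random.uniform(size=(1, 1, 8)).astype("float32")
    # Cache shape: `[batch_size, 2, S, num_heads, key_dim]`.
    cache = np.zeros((1, 2, 6, 2, 4), dtype="float32")
    output, cache = layer(query, query, cache=cache, cache_update_index=0)
    ```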
"""
def call(
self,
query,
value,
key=None,
attention_mask=None,
cache=None,
cache_update_index=None,
):
if (
hasattr(self, "_build_from_signature")
and hasattr(self, "_built_from_signature")
and not self._built_from_signature
):
self._build_from_signature(query=query, value=value, key=key)
if key is None:
key = value
query = self._query_dense(query)
# If cache is not `None`, we will use the cache to compute the final key
# and value tensors. If `cache_update_index` is not None, we will first
# update the cache before use. To do this, we first call the
# `_key_dense` and `_value_dense` layers, and copy the outputs into the
# cache at the specified index. `cache = None` handles the training
# case, where we don't use the cache at all.
if cache is not None:
key_cache = cache[:, 0, ...]
value_cache = cache[:, 1, ...]
if cache_update_index is None:
key = key_cache
value = value_cache
else:
key_update = self._key_dense(key)
value_update = self._value_dense(value)
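                # Splice the new key/value projections into the cache at
                # `cache_update_index` along the sequence axis.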
start = [0, cache_update_index, 0, 0]
key = ops.slice_update(key_cache, start, key_update)
value = ops.slice_update(value_cache, start, value_update)
cache = ops.stack((key, value), axis=1)
else:
if cache_update_index is not None:
raise ValueError(
"`cache_update_index` should not be set if `cache` is "
f"`None`. Received: cache={cache}, "
f"cache_update_index={cache_update_index}"
)
key = self._key_dense(key)
value = self._value_dense(value)
query = ops.multiply(
query,
1.0 / ops.sqrt(ops.cast(self._key_dim, query.dtype)),
)
attention_scores = ops.einsum(self._dot_product_equation, key, query)
attention_scores = self._masked_softmax(
attention_scores, attention_mask
)
attention_scores = self._dropout_layer(attention_scores)
attention_output = ops.einsum(
self._combine_equation, attention_scores, value
)
attention_output = self._output_dense(attention_output)
if cache is not None:
return attention_output, cache
return attention_output
| keras-nlp/keras_nlp/layers/modeling/cached_multi_head_attention.py/0 | {
"file_path": "keras-nlp/keras_nlp/layers/modeling/cached_multi_head_attention.py",
"repo_id": "keras-nlp",
"token_count": 2621
} | 123 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.backend import ops
from keras_nlp.layers.modeling.cached_multi_head_attention import (
CachedMultiHeadAttention,
)
from keras_nlp.utils.keras_utils import clone_initializer
from keras_nlp.layers.modeling.transformer_layer_utils import ( # isort:skip
compute_causal_mask,
merge_padding_and_attention_mask,
)
@keras_nlp_export("keras_nlp.layers.TransformerDecoder")
class TransformerDecoder(keras.layers.Layer):
"""Transformer decoder.
This class follows the architecture of the transformer decoder layer in the
paper [Attention is All You Need](https://arxiv.org/abs/1706.03762). Users
can instantiate multiple instances of this class to stack up a decoder.
By default, this layer will apply a causal mask to the decoder attention layer.
This layer will correctly compute an attention mask from an implicit
Keras padding mask (for example, by passing `mask_zero=True` to a
`keras.layers.Embedding` layer). See the Masking and Padding
[guide](https://keras.io/guides/understanding_masking_and_padding/)
for more details.
This layer can be called with either one or two inputs. The number of inputs
must be consistent across all calls. The options are as follows:
`layer(decoder_sequence)`: no cross-attention will be built into the
decoder block. This is useful when building a "decoder-only"
transformer such as GPT-2.
`layer(decoder_sequence, encoder_sequence)`: cross-attention will be
built into the decoder block. This is useful when building an
"encoder-decoder" transformer, such as the original transformer
model described in Attention is All You Need.
Args:
intermediate_dim: int, the hidden size of feedforward network.
num_heads: int, the number of heads in MultiHeadAttention.
        dropout: float. The dropout value, shared by
            MultiHeadAttention and feedforward network. Defaults to `0.`.
        activation: string or `keras.activations`. The
            activation function of the feedforward network.
            Defaults to `"relu"`.
layer_norm_epsilon: float. The eps value in layer
normalization components. Defaults to `1e-5`.
kernel_initializer: string or `keras.initializers` initializer.
The kernel initializer for the dense and multiheaded
attention layers. Defaults to `"glorot_uniform"`.
bias_initializer: string or `keras.initializers` initializer.
The bias initializer for the dense and multiheaded
attention layers. Defaults to `"zeros"`.
normalize_first: bool. If True, the inputs to the
attention layer(s) and the intermediate dense layer are normalized
(similar to GPT-2). If set to False, outputs of attention layer and
intermediate dense layer are normalized (similar to BERT).
Defaults to `False`.
name: string. The name of the layer. Defaults to `None`.
**kwargs: other keyword arguments.
Examples:
```python
# Create a single transformer decoder layer.
decoder = keras_nlp.layers.TransformerDecoder(
intermediate_dim=64, num_heads=8)
# Create a simple model containing the decoder.
decoder_input = keras.Input(shape=(10, 64))
encoder_input = keras.Input(shape=(10, 64))
output = decoder(decoder_input, encoder_input)
model = keras.Model(
inputs=(decoder_input, encoder_input),
outputs=output,
)
# Call decoder on the inputs.
decoder_input_data = np.random.uniform(size=(2, 10, 64))
encoder_input_data = np.random.uniform(size=(2, 10, 64))
decoder_output = model((decoder_input_data, encoder_input_data))
```
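
    A decoder-only block (no cross-attention, as in GPT-2-style models) is
    created by calling the layer with a single input; a minimal sketch:

    ```python
    decoder = keras_nlp.layers.TransformerDecoder(
        intermediate_dim=64, num_heads=8)
    decoder_input_data = np.random.uniform(size=(2, 10, 64))
    decoder_output = decoder(decoder_input_data)
    ```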
References:
- [Vaswani et al., 2017](https://arxiv.org/abs/1706.03762)
"""
def __init__(
self,
intermediate_dim,
num_heads,
dropout=0,
activation="relu",
layer_norm_epsilon=1e-05,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
normalize_first=False,
**kwargs,
):
# Work around for model saving, we need to ensure our model is built
# immediately after restoring from config.
decoder_sequence_shape = kwargs.pop("decoder_sequence_shape", None)
encoder_sequence_shape = kwargs.pop("encoder_sequence_shape", None)
super().__init__(**kwargs)
self.intermediate_dim = intermediate_dim
self.num_heads = num_heads
self.dropout = dropout
self.activation = keras.activations.get(activation)
self.layer_norm_epsilon = layer_norm_epsilon
self.kernel_initializer = keras.initializers.get(kernel_initializer)
self.bias_initializer = keras.initializers.get(bias_initializer)
self.normalize_first = normalize_first
self.supports_masking = True
self._decoder_sequence_shape = None
self._encoder_sequence_shape = None
if decoder_sequence_shape:
self.build(decoder_sequence_shape, encoder_sequence_shape)
def build(
self,
decoder_sequence_shape,
encoder_sequence_shape=None,
):
self._decoder_sequence_shape = decoder_sequence_shape
self._encoder_sequence_shape = encoder_sequence_shape
# Infer the dimension of our hidden feature size from the build shape.
hidden_dim = decoder_sequence_shape[-1]
# Attention head size is `hidden_dim` over the number of heads.
head_dim = int(hidden_dim // self.num_heads)
if head_dim == 0:
raise ValueError(
"Attention `head_dim` computed cannot be zero. "
f"The `hidden_dim` value of {hidden_dim} has to be equal to "
f"or greater than `num_heads` value of {self.num_heads}."
)
# Self attention layers.
self._self_attention_layer = CachedMultiHeadAttention(
num_heads=self.num_heads,
key_dim=head_dim,
dropout=self.dropout,
kernel_initializer=clone_initializer(self.kernel_initializer),
bias_initializer=clone_initializer(self.bias_initializer),
dtype=self.dtype_policy,
name="self_attention",
)
if hasattr(self._self_attention_layer, "_build_from_signature"):
self._self_attention_layer._build_from_signature(
query=decoder_sequence_shape,
value=decoder_sequence_shape,
)
else:
self._self_attention_layer.build(
query_shape=decoder_sequence_shape,
value_shape=decoder_sequence_shape,
)
self._self_attention_layer_norm = keras.layers.LayerNormalization(
epsilon=self.layer_norm_epsilon,
dtype=self.dtype_policy,
name="self_attention_layer_norm",
)
self._self_attention_layer_norm.build(decoder_sequence_shape)
self._self_attention_dropout = keras.layers.Dropout(
rate=self.dropout,
dtype=self.dtype_policy,
name="self_attention_dropout",
)
# Cross attention layers are optional.
self._cross_attention_layer = None
if encoder_sequence_shape:
self._cross_attention_layer = CachedMultiHeadAttention(
num_heads=self.num_heads,
key_dim=head_dim,
value_dim=head_dim,
dropout=self.dropout,
kernel_initializer=clone_initializer(self.kernel_initializer),
bias_initializer=clone_initializer(self.bias_initializer),
dtype=self.dtype_policy,
name="cross_attention",
)
if hasattr(self._cross_attention_layer, "_build_from_signature"):
self._cross_attention_layer._build_from_signature(
query=decoder_sequence_shape,
value=encoder_sequence_shape,
)
else:
self._cross_attention_layer.build(
query_shape=decoder_sequence_shape,
value_shape=encoder_sequence_shape,
)
self._cross_attention_layer_norm = keras.layers.LayerNormalization(
epsilon=self.layer_norm_epsilon,
dtype=self.dtype_policy,
name="cross_attention_layer_norm",
)
self._cross_attention_layer_norm.build(decoder_sequence_shape)
self._cross_attention_dropout = keras.layers.Dropout(
rate=self.dropout,
dtype=self.dtype_policy,
name="cross_attention_dropout",
)
# Feedforward layers.
self._feedforward_intermediate_dense = keras.layers.Dense(
self.intermediate_dim,
activation=self.activation,
kernel_initializer=clone_initializer(self.kernel_initializer),
bias_initializer=clone_initializer(self.bias_initializer),
dtype=self.dtype_policy,
name="feedforward_intermediate_dense",
)
self._feedforward_intermediate_dense.build(decoder_sequence_shape)
self._feedforward_output_dense = keras.layers.Dense(
hidden_dim,
kernel_initializer=clone_initializer(self.kernel_initializer),
bias_initializer=clone_initializer(self.bias_initializer),
dtype=self.dtype_policy,
name="feedforward_output_dense",
)
intermediate_shape = list(decoder_sequence_shape)
intermediate_shape[-1] = self.intermediate_dim
self._feedforward_output_dense.build(tuple(intermediate_shape))
self._feedforward_layer_norm = keras.layers.LayerNormalization(
epsilon=self.layer_norm_epsilon,
dtype=self.dtype_policy,
name="feedforward_layer_norm",
)
self._feedforward_layer_norm.build(decoder_sequence_shape)
self._feedforward_dropout = keras.layers.Dropout(
rate=self.dropout,
dtype=self.dtype_policy,
name="feedforward_dropout",
)
        self.built = True
def __call__(
self,
decoder_sequence,
encoder_sequence=None,
**kwargs,
):
if not self.built:
decoder_sequence_shape = decoder_sequence.shape
encoder_sequence_shape = None
if encoder_sequence is not None:
encoder_sequence_shape = encoder_sequence.shape
self.build(decoder_sequence_shape, encoder_sequence_shape)
return super().__call__(
decoder_sequence, encoder_sequence=encoder_sequence, **kwargs
)
def call(
self,
decoder_sequence,
encoder_sequence=None,
decoder_padding_mask=None,
decoder_attention_mask=None,
encoder_padding_mask=None,
encoder_attention_mask=None,
self_attention_cache=None,
self_attention_cache_update_index=None,
cross_attention_cache=None,
cross_attention_cache_update_index=None,
use_causal_mask=True,
):
"""Forward pass of the TransformerDecoder.
Args:
decoder_sequence: a Tensor. The decoder input sequence.
            encoder_sequence: a Tensor. The encoder input sequence. For
                decoder-only models (like GPT2), this should be left `None`.
                Once the layer has been called without an `encoder_sequence`,
                you cannot call it again with one.
decoder_padding_mask: a boolean Tensor, the padding mask of decoder
sequence, must be of shape
`[batch_size, decoder_sequence_length]`.
decoder_attention_mask: a boolean Tensor. Customized decoder
sequence mask, must be of shape
`[batch_size, decoder_sequence_length, decoder_sequence_length]`.
encoder_padding_mask: a boolean Tensor, the padding mask of encoder
sequence, must be of shape
`[batch_size, encoder_sequence_length]`.
encoder_attention_mask: a boolean Tensor. Customized encoder
sequence mask, must be of shape
`[batch_size, encoder_sequence_length, encoder_sequence_length]`.
self_attention_cache: a dense float Tensor. The cache of key/values
pairs in the self-attention layer. Has shape
`[batch_size, 2, max_seq_len, num_heads, key_dims]`.
self_attention_cache_update_index: an int or int Tensor, the index
at which to update the `self_attention_cache`. Usually, this is
the index of the current token being processed during decoding.
cross_attention_cache: a dense float Tensor. The cache of
key/value pairs in the cross-attention layer. Has shape
`[batch_size, 2, S, num_heads, key_dims]`.
cross_attention_cache_update_index: an int or int Tensor, the index
at which to update the `cross_attention_cache`. Usually, this is
either `0` (compute the entire `cross_attention_cache`), or
`None` (reuse a previously computed `cross_attention_cache`).
            use_causal_mask: bool, defaults to `True`. If `True`, a causal mask
                (masking out future input) is applied on the decoder sequence.
Returns:
One of three things, depending on call arguments:
            - `outputs`, if `self_attention_cache` is `None`.
- `(outputs, self_attention_cache)`, if `self_attention_cache` is
set and the layer has no cross-attention.
- `(outputs, self_attention_cache, cross_attention_cache)`, if
`self_attention_cache` and `cross_attention_cache` are set and
the layer has cross-attention.
"""
has_encoder_sequence = encoder_sequence is not None
has_cross_attention = self._cross_attention_layer is not None
if not has_cross_attention and has_encoder_sequence:
raise ValueError(
"The number of call arguments to "
"`keras_nlp.layers.TransformerDecoder` should not change. "
"Use `layer(decoder_sequence, encoder_sequence)` to "
"build a layer with cross attention, or "
"`layer(decoder_sequence)` to build a layer without. "
"This layer has been built without cross attention, but "
"you are trying to call it with encoder_sequence."
)
elif has_cross_attention and not has_encoder_sequence:
raise ValueError(
"The number of call arguments to "
"`keras_nlp.layers.TransformerDecoder` should not change. "
"Use `layer(decoder_sequence, encoder_sequence)` to "
"build a layer with cross attention, or "
"`layer(decoder_sequence)` to build a layer without. "
"This layer has been built with cross attention, but "
"you did not provide encoder_sequence."
)
has_self_attention_cache = self_attention_cache is not None
has_cross_attention_cache = cross_attention_cache is not None
if has_cross_attention and (
has_self_attention_cache != has_cross_attention_cache
):
raise ValueError(
"When calling `keras_nlp.layers.TransformerDecoder` with "
"cross-attention (with both `encoder_sequence` and "
"`decoder_sequence`), `self_attention_cache` and "
"`cross_attention_cache` should both be set or both be `None`. "
"One cannot be `None` while the other is not. Received: "
f"self_attention_cache={self_attention_cache}, "
f"cross_attention_cache={cross_attention_cache}."
)
self_attention_mask = self._compute_self_attention_mask(
decoder_sequence=decoder_sequence,
decoder_padding_mask=decoder_padding_mask,
decoder_attention_mask=decoder_attention_mask,
use_causal_mask=use_causal_mask,
self_attention_cache=self_attention_cache,
self_attention_cache_update_index=self_attention_cache_update_index,
)
x = decoder_sequence # Intermediate result.
# Self attention block.
residual = x
if self.normalize_first:
x = self._self_attention_layer_norm(x)
attention_output = self._self_attention_layer(
query=x,
value=x,
attention_mask=self_attention_mask,
cache=self_attention_cache,
cache_update_index=self_attention_cache_update_index,
)
if self_attention_cache is None:
x = attention_output
else:
x, self_attention_cache = attention_output
x = self._self_attention_dropout(x)
x = x + residual
if not self.normalize_first:
x = self._self_attention_layer_norm(x)
# Cross attention is optional.
if has_cross_attention:
# Compute cross attention mask.
cross_attention_mask = merge_padding_and_attention_mask(
encoder_sequence, encoder_padding_mask, encoder_attention_mask
)
# Cross attention block.
residual = x
if self.normalize_first:
x = self._cross_attention_layer_norm(x)
attention_output = self._cross_attention_layer(
query=x,
value=encoder_sequence,
attention_mask=cross_attention_mask,
cache=cross_attention_cache,
cache_update_index=cross_attention_cache_update_index,
)
if cross_attention_cache is None:
x = attention_output
else:
x, cross_attention_cache = attention_output
x = self._cross_attention_dropout(x)
x = x + residual
if not self.normalize_first:
x = self._cross_attention_layer_norm(x)
# Feedforward block.
residual = x
if self.normalize_first:
x = self._feedforward_layer_norm(x)
x = self._feedforward_intermediate_dense(x)
x = self._feedforward_output_dense(x)
x = self._feedforward_dropout(x)
x = x + residual
if not self.normalize_first:
x = self._feedforward_layer_norm(x)
if self_attention_cache is not None:
if has_cross_attention:
return (x, self_attention_cache, cross_attention_cache)
else:
return (x, self_attention_cache)
else:
return x
def _compute_self_attention_mask(
self,
decoder_sequence,
decoder_padding_mask,
decoder_attention_mask,
use_causal_mask,
self_attention_cache,
self_attention_cache_update_index,
):
decoder_mask = merge_padding_and_attention_mask(
decoder_sequence, decoder_padding_mask, decoder_attention_mask
)
if use_causal_mask:
batch_size = ops.shape(decoder_sequence)[0]
input_length = output_length = ops.shape(decoder_sequence)[1]
# We need to handle a rectangular causal mask when doing cached
# decoding. For generative inference, `decoder_sequence` will
# generally be length 1, and `cache` will be the full generation length.
if self_attention_cache is not None:
input_length = ops.shape(self_attention_cache)[2]
causal_mask = compute_causal_mask(
batch_size,
input_length,
output_length,
(
0
if self_attention_cache_update_index is None
else self_attention_cache_update_index
),
)
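            # Combine the padding and causal masks with an elementwise
            # minimum (equivalent to a logical AND for 0/1 masks).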
return (
ops.minimum(decoder_mask, causal_mask)
if decoder_mask is not None
else causal_mask
)
return decoder_mask
def get_config(self):
config = super().get_config()
config.update(
{
"intermediate_dim": self.intermediate_dim,
"num_heads": self.num_heads,
"dropout": self.dropout,
"activation": keras.activations.serialize(self.activation),
"layer_norm_epsilon": self.layer_norm_epsilon,
"kernel_initializer": keras.initializers.serialize(
self.kernel_initializer
),
"bias_initializer": keras.initializers.serialize(
self.bias_initializer
),
"normalize_first": self.normalize_first,
"decoder_sequence_shape": self._decoder_sequence_shape,
"encoder_sequence_shape": self._encoder_sequence_shape,
}
)
return config
def compute_output_shape(self, decoder_sequence_shape):
return decoder_sequence_shape
| keras-nlp/keras_nlp/layers/modeling/transformer_decoder.py/0 | {
"file_path": "keras-nlp/keras_nlp/layers/modeling/transformer_decoder.py",
"repo_id": "keras-nlp",
"token_count": 10018
} | 124 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.layers.preprocessing.preprocessing_layer import (
PreprocessingLayer,
)
from keras_nlp.utils.tensor_utils import convert_to_ragged_batch
@keras_nlp_export("keras_nlp.layers.StartEndPacker")
class StartEndPacker(PreprocessingLayer):
"""Adds start and end tokens to a sequence and pads to a fixed length.
This layer is useful when tokenizing inputs for tasks like translation,
where each sequence should include a start and end marker. It should
be called after tokenization. The layer will first trim inputs to fit, then
add start/end tokens, and finally pad, if necessary, to `sequence_length`.
Input data should be passed as tensors, `tf.RaggedTensor`s, or lists. For
batched input, inputs should be a list of lists or a rank two tensor. For
unbatched inputs, each element should be a list or a rank one tensor.
Args:
sequence_length: int. The desired output length.
start_value: int/str/list/tuple. The ID(s) or token(s) that are to be
placed at the start of each sequence. The dtype must match the dtype
of the input tensors to the layer. If `None`, no start value will be
added.
end_value: int/str/list/tuple. The ID(s) or token(s) that are to be
placed at the end of each input segment. The dtype must match the
dtype of the input tensors to the layer. If `None`, no end value
will be added.
pad_value: int/str. The ID or token that is to be placed into the
unused positions after the last segment in the sequence. If `None`,
0 or "" will be added depending on the dtype of the input tensor.
return_padding_mask: bool. Whether to return a boolean padding mask of
all locations that are filled in with the `pad_value`.
Call arguments:
inputs: A `tf.Tensor`, `tf.RaggedTensor`, or list of python strings.
sequence_length: Pass to override the configured `sequence_length` of
the layer.
add_start_value: Pass `False` to not append a start value for this
input.
add_end_value: Pass `False` to not append an end value for this
input.
Examples:
Unbatched input (int).
>>> inputs = [5, 6, 7]
>>> start_end_packer = keras_nlp.layers.StartEndPacker(
... sequence_length=7, start_value=1, end_value=2,
... )
>>> outputs = start_end_packer(inputs)
>>> np.array(outputs)
array([1, 5, 6, 7, 2, 0, 0], dtype=int32)
Batched input (int).
>>> inputs = [[5, 6, 7], [8, 9, 10, 11, 12, 13, 14]]
>>> start_end_packer = keras_nlp.layers.StartEndPacker(
... sequence_length=6, start_value=1, end_value=2,
... )
>>> outputs = start_end_packer(inputs)
>>> np.array(outputs)
array([[ 1, 5, 6, 7, 2, 0],
[ 1, 8, 9, 10, 11, 2]], dtype=int32)
Unbatched input (str).
>>> inputs = tf.constant(["this", "is", "fun"])
>>> start_end_packer = keras_nlp.layers.StartEndPacker(
... sequence_length=6, start_value="<s>", end_value="</s>",
... pad_value="<pad>"
... )
>>> outputs = start_end_packer(inputs)
>>> np.array(outputs).astype("U")
array(['<s>', 'this', 'is', 'fun', '</s>', '<pad>'], dtype='<U5')
Batched input (str).
>>> inputs = tf.ragged.constant([["this", "is", "fun"], ["awesome"]])
>>> start_end_packer = keras_nlp.layers.StartEndPacker(
... sequence_length=6, start_value="<s>", end_value="</s>",
... pad_value="<pad>"
... )
>>> outputs = start_end_packer(inputs)
>>> np.array(outputs).astype("U")
array([['<s>', 'this', 'is', 'fun', '</s>', '<pad>'],
['<s>', 'awesome', '</s>', '<pad>', '<pad>', '<pad>']], dtype='<U7')
Multiple start tokens.
>>> inputs = tf.ragged.constant([["this", "is", "fun"], ["awesome"]])
>>> start_end_packer = keras_nlp.layers.StartEndPacker(
... sequence_length=6, start_value=["</s>", "<s>"], end_value="</s>",
... pad_value="<pad>"
... )
>>> outputs = start_end_packer(inputs)
>>> np.array(outputs).astype("U")
array([['</s>', '<s>', 'this', 'is', 'fun', '</s>'],
['</s>', '<s>', 'awesome', '</s>', '<pad>', '<pad>']], dtype='<U7')
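    Overriding the configured `sequence_length` at call time.
    >>> inputs = [5, 6, 7]
    >>> start_end_packer = keras_nlp.layers.StartEndPacker(
    ...     sequence_length=8, start_value=1, end_value=2,
    ... )
    >>> outputs = start_end_packer(inputs, sequence_length=5)
    >>> np.array(outputs)
    array([1, 5, 6, 7, 2], dtype=int32)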
"""
def __init__(
self,
sequence_length,
start_value=None,
end_value=None,
pad_value=None,
return_padding_mask=False,
name=None,
**kwargs,
):
super().__init__(name=name, **kwargs)
self.sequence_length = sequence_length
# Maintain private copies for config purposes.
self._start_value = start_value
self._end_value = end_value
def check_special_value_type(value, value_name):
if isinstance(value, (int, str)):
return [value]
if value and not isinstance(value, (list, tuple)):
raise ValueError(
f"{value_name} should be of type int/str/list/tuple."
f"Received type: `{type(value)}`."
)
return value
start_value = check_special_value_type(start_value, "start_value")
end_value = check_special_value_type(end_value, "end_value")
self.start_value = start_value
self.end_value = end_value
self.pad_value = pad_value
self.return_padding_mask = return_padding_mask
def call(
self,
inputs,
sequence_length=None,
add_start_value=True,
add_end_value=True,
):
inputs, unbatched, _ = convert_to_ragged_batch(inputs)
x = inputs # Intermediate result.
batch_size = tf.shape(x)[0]
sequence_length = sequence_length or self.sequence_length
dtype = inputs.dtype
# Concatenate start and end tokens.
if add_start_value and self.start_value is not None:
start_value = tf.convert_to_tensor(self.start_value, dtype=dtype)
start_token_id_tensor = tf.repeat(
start_value[tf.newaxis, :], repeats=batch_size, axis=0
)
x = tf.concat([start_token_id_tensor, x], axis=-1)
if add_end_value and self.end_value is not None:
end_value = tf.convert_to_tensor(self.end_value, dtype=dtype)
end_token_id_tensor = tf.repeat(
end_value[tf.newaxis, :], repeats=batch_size, axis=0
)
# Trim to leave room for end token.
x = x[..., : sequence_length - len(self.end_value)]
x = tf.concat([x, end_token_id_tensor], axis=-1)
# Pad to desired length.
outputs = x.to_tensor(
default_value=self.pad_value,
shape=(batch_size, sequence_length),
)
outputs = tf.squeeze(outputs, axis=0) if unbatched else outputs
if self.return_padding_mask:
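            # True marks positions holding real tokens (including start/end
            # values); positions filled in by `to_tensor` below become False.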
mask = tf.ones_like(x, dtype="bool")
mask = mask.to_tensor(shape=(batch_size, sequence_length))
mask = tf.squeeze(mask, axis=0) if unbatched else mask
return outputs, mask
return outputs
def get_config(self):
config = super().get_config()
config.update(
{
"sequence_length": self.sequence_length,
"start_value": self._start_value,
"end_value": self._end_value,
"pad_value": self.pad_value,
"return_padding_mask": self.return_padding_mask,
}
)
return config
def compute_output_shape(self, inputs_shape):
inputs_shape = list(inputs_shape)
inputs_shape[-1] = self.sequence_length
return tuple(inputs_shape)
| keras-nlp/keras_nlp/layers/preprocessing/start_end_packer.py/0 | {
"file_path": "keras-nlp/keras_nlp/layers/preprocessing/start_end_packer.py",
"repo_id": "keras-nlp",
"token_count": 3639
} | 125 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.layers.modeling.position_embedding import PositionEmbedding
from keras_nlp.layers.modeling.reversible_embedding import ReversibleEmbedding
from keras_nlp.layers.modeling.transformer_encoder import TransformerEncoder
from keras_nlp.models.albert.albert_presets import backbone_presets
from keras_nlp.models.backbone import Backbone
from keras_nlp.utils.keras_utils import gelu_approximate
from keras_nlp.utils.python_utils import classproperty
def albert_kernel_initializer(stddev=0.02):
return keras.initializers.TruncatedNormal(stddev=stddev)
@keras_nlp_export("keras_nlp.models.AlbertBackbone")
class AlbertBackbone(Backbone):
"""ALBERT encoder network.
This class implements a bi-directional Transformer-based encoder as
described in
["ALBERT: A Lite BERT for Self-supervised Learning of Language Representations"](https://arxiv.org/abs/1909.11942).
ALBERT is a more efficient variant of BERT, and uses parameter reduction
techniques such as cross-layer parameter sharing and factorized embedding
parameterization. This model class includes the embedding lookups and
transformer layers, but not the masked language model or sentence order
prediction heads.
The default constructor gives a fully customizable, randomly initialized
ALBERT encoder with any number of layers, heads, and embedding dimensions.
To load preset architectures and weights, use the `from_preset` constructor.
Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind.
Args:
vocabulary_size: int. The size of the token vocabulary.
num_layers: int, must be divisible by `num_groups`. The number of
"virtual" layers, i.e., the total number of times the input sequence
will be fed through the groups in one forward pass. The input will
be routed to the correct group based on the layer index.
num_heads: int. The number of attention heads for each transformer.
The hidden size must be divisible by the number of attention heads.
embedding_dim: int. The size of the embeddings.
hidden_dim: int. The size of the transformer encoding and pooler layers.
intermediate_dim: int. The output dimension of the first Dense layer in
a two-layer feedforward network for each transformer.
num_groups: int. Number of groups, with each group having
`num_inner_repetitions` number of `TransformerEncoder` layers.
num_inner_repetitions: int. Number of `TransformerEncoder` layers per
group.
dropout: float. Dropout probability for the Transformer encoder.
max_sequence_length: int. The maximum sequence length that this encoder
            can consume. If `None`, `max_sequence_length` is inferred from the
            input sequence length. This determines the variable shape for positional
embeddings.
num_segments: int. The number of types that the 'segment_ids' input can
take.
dtype: string or `keras.mixed_precision.DTypePolicy`. The dtype to use
for model computations and weights. Note that some computations,
such as softmax and layer normalization, will always be done at
float32 precision regardless of dtype.
Examples:
```python
input_data = {
"token_ids": np.ones(shape=(1, 12), dtype="int32"),
"segment_ids": np.array([[0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0]]),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]),
}
# Randomly initialized ALBERT encoder
model = keras_nlp.models.AlbertBackbone(
vocabulary_size=30000,
num_layers=12,
num_heads=12,
num_groups=1,
num_inner_repetitions=1,
embedding_dim=128,
hidden_dim=768,
intermediate_dim=3072,
max_sequence_length=12,
)
output = model(input_data)
```
"""
def __init__(
self,
vocabulary_size,
num_layers,
num_heads,
embedding_dim,
hidden_dim,
intermediate_dim,
num_groups=1,
num_inner_repetitions=1,
dropout=0.0,
max_sequence_length=512,
num_segments=2,
dtype=None,
**kwargs,
):
if num_layers % num_groups != 0:
raise ValueError(
"`num_layers` must be divisible by `num_groups`. Received: "
f"`num_layers={num_layers}` and `num_groups={num_groups}`."
)
# === Layers ===
self.token_embedding = ReversibleEmbedding(
input_dim=vocabulary_size,
output_dim=embedding_dim,
embeddings_initializer=albert_kernel_initializer(),
dtype=dtype,
name="token_embedding",
)
self.position_embedding = PositionEmbedding(
initializer=albert_kernel_initializer(),
sequence_length=max_sequence_length,
dtype=dtype,
name="position_embedding",
)
self.segment_embedding = keras.layers.Embedding(
input_dim=num_segments,
output_dim=embedding_dim,
embeddings_initializer=albert_kernel_initializer(),
dtype=dtype,
name="segment_embedding",
)
self.embeddings_add = keras.layers.Add(
dtype=dtype,
name="embeddings_add",
)
self.embeddings_layer_norm = keras.layers.LayerNormalization(
axis=-1,
epsilon=1e-12,
dtype=dtype,
name="embeddings_layer_norm",
)
self.embeddings_dropout = keras.layers.Dropout(
dropout,
dtype=dtype,
name="embeddings_dropout",
)
self.embeddings_projection = keras.layers.Dense(
hidden_dim,
kernel_initializer=albert_kernel_initializer(),
dtype=dtype,
name="embedding_projection",
)
self.transformer_layers = []
for group_idx in range(num_groups):
inner_layers = []
for inner_idx in range(num_inner_repetitions):
layer = TransformerEncoder(
num_heads=num_heads,
intermediate_dim=intermediate_dim,
activation=gelu_approximate,
dropout=dropout,
layer_norm_epsilon=1e-12,
kernel_initializer=albert_kernel_initializer(),
dtype=dtype,
name=f"group_{group_idx}_inner_layer_{inner_idx}",
)
inner_layers.append(layer)
self.transformer_layers.append(inner_layers)
self.pooled_dense = keras.layers.Dense(
hidden_dim,
kernel_initializer=albert_kernel_initializer(),
activation="tanh",
dtype=dtype,
name="pooled_dense",
)
# === Functional Model ===
# Inputs
token_id_input = keras.Input(
shape=(None,), dtype="int32", name="token_ids"
)
segment_id_input = keras.Input(
shape=(None,), dtype="int32", name="segment_ids"
)
padding_mask_input = keras.Input(
shape=(None,), dtype="int32", name="padding_mask"
)
# Embed tokens, positions, and segment ids.
tokens = self.token_embedding(token_id_input)
positions = self.position_embedding(tokens)
segments = self.segment_embedding(segment_id_input)
# Sum, normalize and apply dropout to embeddings.
x = self.embeddings_add((tokens, positions, segments))
x = self.embeddings_layer_norm(x)
x = self.embeddings_dropout(x)
x = self.embeddings_projection(x)
# Call transformer layers with repeated groups.
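        # ALBERT shares weights across depth: each group is built once but
        # called `num_layers // num_groups` times, so the effective depth is
        # `num_layers` while the parameter count scales only with the number
        # of groups and inner repetitions.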
num_calls_per_group = num_layers // num_groups
for group in self.transformer_layers:
for _ in range(num_calls_per_group):
for transformer_layer in group:
x = transformer_layer(x, padding_mask=padding_mask_input)
# Construct the two ALBERT outputs. The pooled output is a dense layer
# on top of the [CLS] token.
sequence_output = x
cls_token_index = 0
pooled_output = self.pooled_dense(x[:, cls_token_index, :])
super().__init__(
inputs={
"token_ids": token_id_input,
"segment_ids": segment_id_input,
"padding_mask": padding_mask_input,
},
outputs={
"sequence_output": sequence_output,
"pooled_output": pooled_output,
},
**kwargs,
)
# === Config ===
self.vocabulary_size = vocabulary_size
self.num_layers = num_layers
self.num_heads = num_heads
self.num_groups = num_groups
self.num_inner_repetitions = num_inner_repetitions
self.embedding_dim = embedding_dim
self.hidden_dim = hidden_dim
self.intermediate_dim = intermediate_dim
self.dropout = dropout
self.max_sequence_length = max_sequence_length
self.num_segments = num_segments
self.cls_token_index = cls_token_index
def get_config(self):
config = super().get_config()
config.update(
{
"vocabulary_size": self.vocabulary_size,
"num_layers": self.num_layers,
"num_heads": self.num_heads,
"num_groups": self.num_groups,
"num_inner_repetitions": self.num_inner_repetitions,
"embedding_dim": self.embedding_dim,
"hidden_dim": self.hidden_dim,
"intermediate_dim": self.intermediate_dim,
"dropout": self.dropout,
"max_sequence_length": self.max_sequence_length,
"num_segments": self.num_segments,
}
)
return config
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
| keras-nlp/keras_nlp/models/albert/albert_backbone.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/albert/albert_backbone.py",
"repo_id": "keras-nlp",
"token_count": 4850
} | 126 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
from keras_nlp.backend import ops
from keras_nlp.models.bart.bart_backbone import BartBackbone
from keras_nlp.tests.test_case import TestCase
class BartBackboneTest(TestCase):
def setUp(self):
self.init_kwargs = {
"vocabulary_size": 10,
"num_layers": 2,
"num_heads": 2,
"hidden_dim": 2,
"intermediate_dim": 4,
"max_sequence_length": 5,
}
self.input_data = {
"encoder_token_ids": ops.ones((2, 3), dtype="int32"),
"encoder_padding_mask": ops.zeros((2, 3), dtype="int32"),
"decoder_token_ids": ops.ones((2, 5), dtype="int32"),
"decoder_padding_mask": ops.zeros((2, 5), dtype="int32"),
}
def test_backbone_basics(self):
self.run_backbone_test(
cls=BartBackbone,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
expected_output_shape={
"encoder_sequence_output": (2, 3, 2),
"decoder_sequence_output": (2, 5, 2),
},
)
@pytest.mark.large
def test_saved_model(self):
self.run_model_saving_test(
cls=BartBackbone,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
)
@pytest.mark.large
def test_smallest_preset(self):
self.run_preset_test(
cls=BartBackbone,
preset="bart_base_en",
input_data={
"encoder_token_ids": ops.array([[0, 133, 2119, 2]]),
"encoder_padding_mask": ops.array([[1, 1, 1, 1]]),
"decoder_token_ids": ops.array([[0, 7199, 14, 2119, 2]]),
"decoder_padding_mask": ops.array([[1, 1, 1, 1, 1]]),
},
expected_output_shape={
"encoder_sequence_output": (1, 4, 768),
"decoder_sequence_output": (1, 5, 768),
},
# The forward pass from a preset should be stable!
expected_partial_output={
"encoder_sequence_output": ops.array(
[-0.033, 0.013, -0.003, -0.012, -0.002]
),
"decoder_sequence_output": ops.array(
[2.516, 2.489, 0.695, 8.057, 1.245]
),
},
)
@pytest.mark.extra_large
def test_all_presets(self):
for preset in BartBackbone.presets:
self.run_preset_test(
cls=BartBackbone,
preset=preset,
input_data=self.input_data,
)
| keras-nlp/keras_nlp/models/bart/bart_backbone_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/bart/bart_backbone_test.py",
"repo_id": "keras-nlp",
"token_count": 1604
} | 127 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
from keras_nlp.models.bloom.bloom_preprocessor import BloomPreprocessor
from keras_nlp.models.bloom.bloom_tokenizer import BloomTokenizer
from keras_nlp.tests.test_case import TestCase
class BloomPreprocessorTest(TestCase):
def setUp(self):
self.vocab = ["<pad>", "<s>", "</s>"]
self.vocab += ["!", "air", "Ġair", "plane", "Ġat", "port"]
self.vocab = dict([(token, i) for i, token in enumerate(self.vocab)])
self.merges = ["Ġ a", "Ġ t", "Ġ i", "Ġ b", "a i", "p l", "n e"]
self.merges += ["Ġa t", "p o", "r t", "Ġt h", "ai r", "pl a", "po rt"]
self.merges += ["Ġai r", "Ġa i", "pla ne"]
self.tokenizer = BloomTokenizer(
vocabulary=self.vocab,
merges=self.merges,
)
self.init_kwargs = {
"tokenizer": self.tokenizer,
"sequence_length": 8,
}
self.input_data = ["airplane at airport"]
def test_preprocessor_basics(self):
self.run_preprocessor_test(
cls=BloomPreprocessor,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
expected_output={
"token_ids": [[1, 4, 6, 7, 5, 8, 2, 0]],
"padding_mask": [[1, 1, 1, 1, 1, 1, 1, 0]],
},
)
def test_no_start_end_token(self):
input_data = ["airplane at airport"] * 4
preprocessor = BloomPreprocessor(
tokenizer=BloomTokenizer(
vocabulary=self.vocab,
merges=self.merges,
),
sequence_length=8,
add_start_token=False,
add_end_token=False,
)
x = preprocessor(input_data)
self.assertAllEqual(x["token_ids"], [[4, 6, 7, 5, 8, 0, 0, 0]] * 4)
self.assertAllEqual(x["padding_mask"], [[1, 1, 1, 1, 1, 0, 0, 0]] * 4)
def test_sequence_length_override(self):
input_data = "airplane at airport"
preprocessor = BloomPreprocessor(**self.init_kwargs)
x = preprocessor(input_data, sequence_length=4)
self.assertAllEqual(x["token_ids"], [1, 4, 6, 2])
@pytest.mark.extra_large
def test_all_presets(self):
for preset in BloomPreprocessor.presets:
self.run_preset_test(
cls=BloomPreprocessor,
preset=preset,
input_data=self.input_data,
)
| keras-nlp/keras_nlp/models/bloom/bloom_preprocessor_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/bloom/bloom_preprocessor_test.py",
"repo_id": "keras-nlp",
"token_count": 1375
} | 128 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import tensorflow as tf
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.models.deberta_v3.deberta_v3_presets import backbone_presets
from keras_nlp.tokenizers.sentence_piece_tokenizer import SentencePieceTokenizer
from keras_nlp.utils.python_utils import classproperty
@keras_nlp_export("keras_nlp.models.DebertaV3Tokenizer")
class DebertaV3Tokenizer(SentencePieceTokenizer):
"""DeBERTa tokenizer layer based on SentencePiece.
This tokenizer class will tokenize raw strings into integer sequences and
is based on `keras_nlp.tokenizers.SentencePieceTokenizer`. Unlike the
underlying tokenizer, it will check for all special tokens needed by
DeBERTa models and provides a `from_preset()` method to automatically
download a matching vocabulary for a DeBERTa preset.
This tokenizer does not provide truncation or padding of inputs. It can be
combined with a `keras_nlp.models.DebertaV3Preprocessor` layer for input
packing.
If input is a batch of strings (rank > 0), the layer will output a
`tf.RaggedTensor` where the last dimension of the output is ragged.
If input is a scalar string (rank == 0), the layer will output a dense
`tf.Tensor` with static shape `[None]`.
Note: The mask token (`"[MASK]"`) is handled differently in this tokenizer.
If the token is not present in the provided SentencePiece vocabulary, the
token will be appended to the vocabulary. For example, if the vocabulary
size is 100, the mask token will be assigned the ID 100.
Args:
proto: Either a `string` path to a SentencePiece proto file, or a
`bytes` object with a serialized SentencePiece proto. See the
[SentencePiece repository](https://github.com/google/sentencepiece)
for more details on the format.
Examples:
```python
# Unbatched input.
tokenizer = keras_nlp.models.DebertaV3Tokenizer.from_preset(
"deberta_v3_base_en",
)
tokenizer("The quick brown fox jumped.")
# Batched inputs.
tokenizer(["the quick brown fox", "the earth is round"])
# Detokenization.
tokenizer.detokenize(tokenizer("The quick brown fox jumped."))
# Custom vocabulary.
bytes_io = io.BytesIO()
ds = tf.data.Dataset.from_tensor_slices(["The quick brown fox jumped."])
sentencepiece.SentencePieceTrainer.train(
sentence_iterator=ds.as_numpy_iterator(),
model_writer=bytes_io,
vocab_size=9,
model_type="WORD",
pad_id=0,
bos_id=1,
eos_id=2,
unk_id=3,
pad_piece="[PAD]",
bos_piece="[CLS]",
eos_piece="[SEP]",
unk_piece="[UNK]",
)
tokenizer = keras_nlp.models.DebertaV3Tokenizer(
proto=bytes_io.getvalue(),
)
tokenizer("The quick brown fox jumped.")
```
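
    The mask token is handled specially: if `"[MASK]"` is absent from the
    provided proto, it is appended to the end of the vocabulary. A minimal
    sketch, assuming the 9-token vocabulary trained above:

    ```python
    tokenizer.mask_token_id      # 9, one past the SentencePiece vocabulary.
    tokenizer.vocabulary_size()  # 10, counts the appended mask token.
    ```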
"""
def __init__(self, proto, **kwargs):
self.cls_token = "[CLS]"
self.sep_token = "[SEP]"
self.pad_token = "[PAD]"
self.mask_token = "[MASK]"
super().__init__(proto=proto, **kwargs)
def set_proto(self, proto):
super().set_proto(proto)
if proto is not None:
for token in [self.cls_token, self.pad_token, self.sep_token]:
if token not in super().get_vocabulary():
raise ValueError(
f"Cannot find token `'{token}'` in the provided "
f"`vocabulary`. Please provide `'{token}'` in your "
"`vocabulary` or use a pretrained `vocabulary` name."
)
self.cls_token_id = self.token_to_id(self.cls_token)
self.sep_token_id = self.token_to_id(self.sep_token)
self.pad_token_id = self.token_to_id(self.pad_token)
# If the mask token is not in the vocabulary, add it to the end of the
# vocabulary.
if self.mask_token in super().get_vocabulary():
self.mask_token_id = super().token_to_id(self.mask_token)
else:
self.mask_token_id = super().vocabulary_size()
else:
self.cls_token_id = None
self.sep_token_id = None
self.pad_token_id = None
self.mask_token_id = None
def vocabulary_size(self):
sentence_piece_size = super().vocabulary_size()
if sentence_piece_size == self.mask_token_id:
return sentence_piece_size + 1
return sentence_piece_size
def get_vocabulary(self):
sentence_piece_vocabulary = super().get_vocabulary()
if self.mask_token_id < super().vocabulary_size():
return sentence_piece_vocabulary
return sentence_piece_vocabulary + ["[MASK]"]
def id_to_token(self, id):
if id == self.mask_token_id:
return "[MASK]"
return super().id_to_token(id)
def token_to_id(self, token):
if token == "[MASK]":
return self.mask_token_id
return super().token_to_id(token)
def detokenize(self, ids):
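        # Strip mask tokens before detokenizing: the mask id may sit one past
        # the SentencePiece vocabulary and would be unknown to the model.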
ids = tf.ragged.boolean_mask(ids, tf.not_equal(ids, self.mask_token_id))
return super().detokenize(ids)
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
| keras-nlp/keras_nlp/models/deberta_v3/deberta_v3_tokenizer.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/deberta_v3/deberta_v3_tokenizer.py",
"repo_id": "keras-nlp",
"token_count": 2415
} | 129 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""DistilBERT model preset configurations."""
backbone_presets = {
"distil_bert_base_en_uncased": {
"metadata": {
"description": (
"6-layer DistilBERT model where all input is lowercased. "
"Trained on English Wikipedia + BooksCorpus using BERT as the "
"teacher model."
),
"params": 66362880,
"official_name": "DistilBERT",
"path": "distil_bert",
"model_card": "https://huggingface.co/distilbert-base-uncased",
},
"kaggle_handle": "kaggle://keras/distil_bert/keras/distil_bert_base_en_uncased/2",
},
"distil_bert_base_en": {
"metadata": {
"description": (
"6-layer DistilBERT model where case is maintained. "
"Trained on English Wikipedia + BooksCorpus using BERT as the "
"teacher model."
),
"params": 65190912,
"official_name": "DistilBERT",
"path": "distil_bert",
"model_card": "https://huggingface.co/distilbert-base-cased",
},
"kaggle_handle": "kaggle://keras/distil_bert/keras/distil_bert_base_en/2",
},
"distil_bert_base_multi": {
"metadata": {
"description": (
"6-layer DistilBERT model where case is maintained. Trained on Wikipedias of 104 languages"
),
"params": 134734080,
"official_name": "DistilBERT",
"path": "distil_bert",
"model_card": "https://huggingface.co/distilbert-base-multilingual-cased",
},
"kaggle_handle": "kaggle://keras/distil_bert/keras/distil_bert_base_multi/2",
},
}
| keras-nlp/keras_nlp/models/distil_bert/distil_bert_presets.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/distil_bert/distil_bert_presets.py",
"repo_id": "keras-nlp",
"token_count": 1031
} | 130 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pytest
from keras_nlp.models.f_net.f_net_backbone import FNetBackbone
from keras_nlp.models.f_net.f_net_masked_lm import FNetMaskedLM
from keras_nlp.models.f_net.f_net_masked_lm_preprocessor import (
FNetMaskedLMPreprocessor,
)
from keras_nlp.models.f_net.f_net_tokenizer import FNetTokenizer
from keras_nlp.tests.test_case import TestCase
class FNetMaskedLMTest(TestCase):
def setUp(self):
# Setup model.
self.preprocessor = FNetMaskedLMPreprocessor(
FNetTokenizer(
# Generated using create_f_net_test_proto.py
proto=os.path.join(
self.get_test_data_dir(), "f_net_test_vocab.spm"
)
),
# Simplify our testing by masking every available token.
mask_selection_rate=1.0,
mask_token_rate=1.0,
random_token_rate=0.0,
mask_selection_length=5,
sequence_length=5,
)
self.backbone = FNetBackbone(
vocabulary_size=self.preprocessor.tokenizer.vocabulary_size(),
num_layers=2,
hidden_dim=2,
intermediate_dim=4,
max_sequence_length=self.preprocessor.sequence_length,
)
self.init_kwargs = {
"preprocessor": self.preprocessor,
"backbone": self.backbone,
}
self.train_data = (
["the quick brown fox.", "the slow brown fox."], # Features.
)
self.input_data = self.preprocessor(*self.train_data)[0]
def test_masked_lm_basics(self):
self.run_task_test(
cls=FNetMaskedLM,
init_kwargs=self.init_kwargs,
train_data=self.train_data,
expected_output_shape=(2, 5, 12),
)
@pytest.mark.large
def test_saved_model(self):
self.run_model_saving_test(
cls=FNetMaskedLM,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
)
@pytest.mark.extra_large
def test_all_presets(self):
for preset in FNetMaskedLM.presets:
self.run_preset_test(
cls=FNetMaskedLM,
preset=preset,
input_data=self.input_data,
)
| keras-nlp/keras_nlp/models/f_net/f_net_masked_lm_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/f_net/f_net_masked_lm_test.py",
"repo_id": "keras-nlp",
"token_count": 1331
} | 131 |
# Copyright 2024 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.layers.preprocessing.start_end_packer import StartEndPacker
from keras_nlp.models.gemma.gemma_presets import backbone_presets
from keras_nlp.models.gemma.gemma_tokenizer import GemmaTokenizer
from keras_nlp.models.preprocessor import Preprocessor
from keras_nlp.utils.keras_utils import (
convert_inputs_to_list_of_tensor_segments,
)
from keras_nlp.utils.keras_utils import pack_x_y_sample_weight
from keras_nlp.utils.python_utils import classproperty
@keras_nlp_export("keras_nlp.models.GemmaPreprocessor")
class GemmaPreprocessor(Preprocessor):
"""Gemma preprocessing layer which tokenizes and packs inputs.
This preprocessing layer will do 2 things:
- Tokenize the inputs using the `tokenizer`.
- Construct a dictionary with keys `"token_ids"`, `"padding_mask"`, that can
be passed directly to a `keras_nlp.models.GemmaBackbone`.
This layer can be used directly with `tf.data.Dataset.map` to preprocess
string data in the `(x, y, sample_weight)` format used by
`keras.Model.fit`.
The call method of this layer accepts three arguments, `x`, `y`, and
`sample_weight`. `x` can be a python string or tensor representing a single
segment, a list of python strings representing a batch of single segments,
or a list of tensors representing multiple segments to be packed together.
`y` and `sample_weight` are both optional, can have any format, and will be
passed through unaltered.
    `GemmaPreprocessor` expects the input to have only one segment, as Gemma
    is mainly used for generation tasks. For tasks with multi-segment inputs,
    please combine the segments into a single string before passing them to
    the preprocessor layer.
Args:
tokenizer: A `keras_nlp.models.GemmaTokenizer` instance.
sequence_length: The length of the packed inputs.
add_start_token: If `True`, the preprocessor will prepend the tokenizer
start token to each input sequence.
add_end_token: If `True`, the preprocessor will append the tokenizer
end token to each input sequence.
Call arguments:
x: A string, `tf.Tensor` or list of python strings.
y: Any label data. Will be passed through unaltered.
sample_weight: Any label weight data. Will be passed through unaltered.
sequence_length: Pass to override the configured `sequence_length` of
the layer.
Examples:
Directly calling the layer on data.
```python
preprocessor = keras_nlp.models.GemmaPreprocessor.from_preset(
"gemma_2b_en"
)
# Tokenize and pack a single sentence.
preprocessor("The quick brown fox jumped.")
# Tokenize a batch of sentences.
preprocessor(["The quick brown fox jumped.", "Call me Ishmael."])
# Custom vocabulary.
bytes_io = io.BytesIO()
ds = tf.data.Dataset.from_tensor_slices(["The quick brown fox jumped."])
sentencepiece.SentencePieceTrainer.train(
sentence_iterator=ds.as_numpy_iterator(),
model_writer=bytes_io,
vocab_size=8,
model_type="WORD",
pad_id=0,
bos_id=1,
eos_id=2,
unk_id=3,
pad_piece="<pad>",
bos_piece="<bos>",
eos_piece="<eos>",
unk_piece="<unk>",
)
tokenizer = keras_nlp.models.GemmaTokenizer(
proto=bytes_io.getvalue(),
)
preprocessor = keras_nlp.models.GemmaPreprocessor(tokenizer=tokenizer)
preprocessor("The quick brown fox jumped.")
```
Apply preprocessing to a `tf.data.Dataset`.
```python
preprocessor = keras_nlp.models.GemmaPreprocessor.from_preset(
"gemma_2b_en"
)
text = tf.constant(["The quick brown fox jumped.", "Call me Ishmael."])
label = tf.constant([1, 1])
# Map labeled single sentences.
ds = tf.data.Dataset.from_tensor_slices((text, label))
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
# Map unlabeled single sentences.
ds = tf.data.Dataset.from_tensor_slices(text)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
```
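
    Overriding the configured `sequence_length` at call time:

    ```python
    preprocessor = keras_nlp.models.GemmaPreprocessor.from_preset(
        "gemma_2b_en"
    )
    preprocessor("The quick brown fox jumped.", sequence_length=8)
    ```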
"""
def __init__(
self,
tokenizer,
sequence_length=8192,
add_start_token=True,
add_end_token=True,
**kwargs,
):
super().__init__(**kwargs)
self.tokenizer = tokenizer
self.sequence_length = sequence_length
self.add_start_token = add_start_token
self.add_end_token = add_end_token
def build(self, input_shape):
# Defer packer creation to `build()` so that we can be sure tokenizer
# assets have loaded when restoring a saved model.
self.packer = StartEndPacker(
start_value=self.tokenizer.start_token_id,
end_value=self.tokenizer.end_token_id,
pad_value=self.tokenizer.pad_token_id,
sequence_length=self.sequence_length,
return_padding_mask=True,
)
self.built = True
def call(
self,
x,
y=None,
sample_weight=None,
sequence_length=None,
):
x = convert_inputs_to_list_of_tensor_segments(x)
if len(x) != 1:
raise ValueError(
"GemmaPreprocessor requires each input to contain only "
f"one segment, but received {len(x)}. If you are using Gemma "
"for a multi-segment classification task, please combine your "
"input into a single string."
)
sequence_length = sequence_length or self.sequence_length
token_ids, padding_mask = self.packer(
self.tokenizer(x[0]),
sequence_length=sequence_length,
add_start_value=self.add_start_token,
add_end_value=self.add_end_token,
)
x = {
"token_ids": token_ids,
"padding_mask": padding_mask,
}
return pack_x_y_sample_weight(x, y, sample_weight)
def get_config(self):
config = super().get_config()
config.update(
{
"sequence_length": self.sequence_length,
"add_start_token": self.add_start_token,
"add_end_token": self.add_end_token,
}
)
return config
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
@classproperty
def tokenizer_cls(cls):
return GemmaTokenizer
| keras-nlp/keras_nlp/models/gemma/gemma_preprocessor.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/gemma/gemma_preprocessor.py",
"repo_id": "keras-nlp",
"token_count": 2878
} | 132 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_nlp.backend import keras
from keras_nlp.backend import ops
# TODO: Deprecate this in favor of
# `keras.layers.LayerNormalization(rms_scaling=True)` once Keras 2 support is
# removed.
class MistralLayerNormalization(keras.layers.Layer):
"""A normalization layer for Mistral that implements RMS normalization."""
def __init__(self, epsilon=1e-6, **kwargs):
super().__init__(**kwargs)
self._epsilon = epsilon
def build(self, input_shape):
self._dim = input_shape[-1]
self._weight = self.add_weight(
name="weight",
trainable=True,
shape=(self._dim,),
initializer="ones",
)
self.built = True
def call(self, x):
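        # RMS normalization: divide by the root-mean-square of the features,
        # then scale by a learned per-feature weight. Unlike LayerNorm, there
        # is no mean subtraction and no bias term.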
x = x * ops.rsqrt(
ops.mean(ops.power(x, 2), axis=-1, keepdims=True) + self._epsilon
)
return x * self._weight
def get_config(self):
config = super().get_config()
config.update({"epsilon": self._epsilon})
return config
| keras-nlp/keras_nlp/models/mistral/mistral_layer_norm.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/mistral/mistral_layer_norm.py",
"repo_id": "keras-nlp",
"token_count": 611
} | 133 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""OPT model preset configurations."""
# Metadata for loading pretrained model weights.
backbone_presets = {
"opt_125m_en": {
"metadata": {
"description": (
"12-layer OPT model where case in maintained. Trained on "
"BookCorpus, CommonCrawl, Pile, and PushShift.io corpora."
),
"params": 125237760,
"official_name": "OPT",
"path": "opt",
"model_card": "https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/model_card.md",
},
"kaggle_handle": "kaggle://keras/opt/keras/opt_125m_en/2",
},
# We skip the 350m checkpoint because it does not match the structure of
# other checkpoints.
"opt_1.3b_en": {
"metadata": {
"description": (
"24-layer OPT model where case in maintained. Trained on "
"BookCorpus, CommonCrawl, Pile, and PushShift.io corpora."
),
"params": 1315753984,
"official_name": "OPT",
"path": "opt",
"model_card": "https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/model_card.md",
},
"kaggle_handle": "kaggle://keras/opt/keras/opt_1.3b_en/2",
},
"opt_2.7b_en": {
"metadata": {
"description": (
"32-layer OPT model where case in maintained. Trained on "
"BookCorpus, CommonCrawl, Pile, and PushShift.io corpora."
),
"params": 2700000000,
"official_name": "OPT",
"path": "opt",
"model_card": "https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/model_card.md",
},
"kaggle_handle": "kaggle://keras/opt/keras/opt_2.7b_en/2",
},
"opt_6.7b_en": {
"metadata": {
"description": (
"32-layer OPT model where case in maintained. Trained on "
"BookCorpus, CommonCrawl, Pile, and PushShift.io corpora."
),
"params": 6700000000,
"official_name": "OPT",
"path": "opt",
"model_card": "https://github.com/facebookresearch/metaseq/blob/main/projects/OPT/model_card.md",
},
"kaggle_handle": "kaggle://keras/opt/keras/opt_6.7b_en/2",
},
}
| keras-nlp/keras_nlp/models/opt/opt_presets.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/opt/opt_presets.py",
"repo_id": "keras-nlp",
"token_count": 1332
} | 134 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.backend import ops
from keras_nlp.layers.modeling.position_embedding import PositionEmbedding
from keras_nlp.layers.modeling.token_and_position_embedding import (
TokenAndPositionEmbedding,
)
from keras_nlp.models.backbone import Backbone
from keras_nlp.models.whisper.whisper_decoder import WhisperDecoder
from keras_nlp.models.whisper.whisper_encoder import WhisperEncoder
from keras_nlp.models.whisper.whisper_presets import backbone_presets
from keras_nlp.utils.python_utils import classproperty
from keras_nlp.utils.tensor_utils import assert_tf_backend
def whisper_kernel_initializer(stddev=0.02):
return keras.initializers.TruncatedNormal(stddev=stddev)
class Padder(keras.layers.Layer):
def call(self, x):
return ops.pad(x, [[0, 0], [1, 1], [0, 0]])
@keras_nlp_export("keras_nlp.models.WhisperBackbone")
class WhisperBackbone(Backbone):
"""A Whisper encoder-decoder network for speech.
This class implements a Transformer-based encoder-decoder model as
described in
["Robust Speech Recognition via Large-Scale Weak Supervision"](https://arxiv.org/abs/2212.04356).
It includes the embedding lookups and transformer layers, but not the head
for predicting the next token.
The default constructor gives a fully customizable, randomly initialized Whisper
model with any number of layers, heads, and embedding dimensions. To load
preset architectures and weights, use the `from_preset()` constructor.
Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind. The underlying model is provided by a
third party and subject to a separate license, available
[here](https://github.com/openai/whisper).
Args:
vocabulary_size: int. The size of the token vocabulary.
num_layers: int. The number of transformer encoder layers and
transformer decoder layers.
num_heads: int. The number of attention heads for each transformer.
The hidden size must be divisible by the number of attention heads.
hidden_dim: int. The size of the transformer encoding and pooler layers.
intermediate_dim: int. The output dimension of the first Dense layer in
a two-layer feedforward network for each transformer.
num_mels: int. The number of mel-frequency filters. Defaults to `80`.
dropout: float. Dropout probability for the Transformer encoder.
max_encoder_sequence_length: int. The maximum sequence length that the
audio encoder can consume. Since the second convolutional layer in
the encoder reduces the sequence length by half (stride of 2), we
use `max_encoder_sequence_length // 2` as the sequence length for the
positional embedding layer.
max_decoder_sequence_length: int. The maximum sequence length that the
text decoder can consume.
dtype: string or `keras.mixed_precision.DTypePolicy`. The dtype to use
for model computations and weights. Note that some computations,
such as softmax and layer normalization, will always be done at
float32 precision regardless of dtype.
Examples:
```python
input_data = {
"encoder_features": np.ones(shape=(1, 12, 80), dtype="int32"),
"decoder_token_ids": np.ones(shape=(1, 12), dtype="int32"),
"decoder_padding_mask": np.array(
[[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]
),
}
# Randomly initialized Whisper encoder-decoder model with a custom config.
model = keras_nlp.models.WhisperBackbone(
vocabulary_size=51864,
num_layers=4,
num_heads=4,
hidden_dim=256,
intermediate_dim=512,
max_encoder_sequence_length=128,
max_decoder_sequence_length=128,
)
model(input_data)
```
"""
def __init__(
self,
vocabulary_size,
num_layers,
num_heads,
hidden_dim,
intermediate_dim,
num_mels=80,
dropout=0.0,
max_encoder_sequence_length=3000,
max_decoder_sequence_length=448,
dtype=None,
**kwargs,
):
assert_tf_backend(self.__class__.__name__)
# === Layers ===
self.encoder_conv_layer_1 = keras.layers.Conv1D(
filters=hidden_dim,
kernel_size=3,
strides=1,
padding="same",
dtype=dtype,
name="encoder_token_embedding_conv_layer_1",
)
self.encoder_conv_layer_2 = keras.layers.Conv1D(
filters=hidden_dim,
kernel_size=3,
strides=2,
padding="valid",
dtype=dtype,
name="encoder_token_embedding_conv_layer_2",
)
self.encoder_padder = Padder(
dtype=dtype,
name="encoder_padder",
)
self.encoder_position_embedding = PositionEmbedding(
initializer=whisper_kernel_initializer(),
sequence_length=max_encoder_sequence_length // 2,
dtype=dtype,
name="encoder_position_embedding",
trainable=False,
)
self.encoder_embeddings_add = keras.layers.Add(
dtype=dtype,
name="encoder_embeddings_add",
)
self.encoder_embeddings_dropout = keras.layers.Dropout(
dropout,
dtype=dtype,
name="encoder_embeddings_dropout",
)
self.encoder_transformer_layers = []
for i in range(num_layers):
layer = WhisperEncoder(
num_heads=num_heads,
intermediate_dim=intermediate_dim,
activation=keras.activations.gelu,
layer_norm_epsilon=1e-5,
dropout=dropout,
kernel_initializer=whisper_kernel_initializer(),
normalize_first=True,
dtype=dtype,
name=f"transformer_encoder_layer_{i}",
)
self.encoder_transformer_layers.append(layer)
self.encoder_layer_norm = keras.layers.LayerNormalization(
axis=-1,
epsilon=1e-5,
dtype=dtype,
name="encoder_layer_norm",
)
self.decoder_embeddings = TokenAndPositionEmbedding(
vocabulary_size=vocabulary_size,
sequence_length=max_decoder_sequence_length,
embedding_dim=hidden_dim,
embeddings_initializer=whisper_kernel_initializer(),
dtype=dtype,
name="decoder_token_and_position_embedding",
)
self.token_embedding = self.decoder_embeddings.token_embedding
self.decoder_embeddings_dropout = keras.layers.Dropout(
dropout,
dtype=dtype,
name="decoder_embeddings_dropout",
)
self.decoder_transformer_layers = []
for i in range(num_layers):
layer = WhisperDecoder(
intermediate_dim=intermediate_dim,
num_heads=num_heads,
dropout=dropout,
activation=keras.activations.gelu,
layer_norm_epsilon=1e-5,
kernel_initializer=whisper_kernel_initializer(),
normalize_first=True,
dtype=dtype,
name=f"transformer_decoder_layer_{i}",
)
self.decoder_transformer_layers.append(layer)
self.decoder_layer_norm = keras.layers.LayerNormalization(
axis=-1,
epsilon=1e-5,
dtype=dtype,
name="decoder_layer_norm",
)
# === Functional Model ===
# Note that the encoder does not have a padding mask:
# https://github.com/openai/whisper/blob/v20230124/whisper/model.py#L132.
encoder_feature_input = keras.Input(
shape=(None, num_mels), dtype="float32", name="encoder_features"
)
decoder_token_id_input = keras.Input(
shape=(None,), dtype="int32", name="decoder_token_ids"
)
decoder_padding_mask_input = keras.Input(
shape=(None,), dtype="int32", name="decoder_padding_mask"
)
# Encoder.
# Embed the input features. This consists of two 1D convolutional
# layers.
# For the first layer, we use `padding="same"` since that corresponds to
# a padding size of 1.
embedded_features = keras.activations.gelu(
self.encoder_conv_layer_1(encoder_feature_input),
approximate=False,
)
# For the second conv. layer, we cannot use `padding="same"` since
# that corresponds to a padding size of 1.5 (since stride is 2). Hence,
# we will manually pad the input.
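        # `Padder` adds one zero frame on each side of the time axis, giving
        # the stride-2 convolution a total padding of 2 so it halves the
        # sequence length exactly.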
embedded_features = self.encoder_padder(embedded_features)
embedded_features = keras.activations.gelu(
self.encoder_conv_layer_2(embedded_features),
approximate=False,
)
# The position embedding layer for the encoder is a sinusoidal embedding
# layer: https://github.com/openai/whisper/blob/v20230124/whisper/model.py#L137.
# Hence, we set it to be non-trainable.
# TODO: We can use `keras_nlp.layers.SinePositionEncoding` layer.
positions = self.encoder_position_embedding(embedded_features)
x = self.encoder_embeddings_add((embedded_features, positions))
x = self.encoder_embeddings_dropout(x)
for transformer_layer in self.encoder_transformer_layers:
x = transformer_layer(x)
x = self.encoder_layer_norm(x)
encoder_output = x
# Decoder.
x = self.decoder_embeddings(decoder_token_id_input)
x = self.decoder_embeddings_dropout(x)
for transformer_layer in self.decoder_transformer_layers:
x = transformer_layer(
decoder_sequence=x,
encoder_sequence=encoder_output,
decoder_padding_mask=decoder_padding_mask_input,
)
x = self.decoder_layer_norm(x)
decoder_output = x
super().__init__(
inputs={
"encoder_features": encoder_feature_input,
"decoder_token_ids": decoder_token_id_input,
"decoder_padding_mask": decoder_padding_mask_input,
},
outputs={
"encoder_sequence_output": encoder_output,
"decoder_sequence_output": decoder_output,
},
**kwargs,
)
# === Config ===
self.vocabulary_size = vocabulary_size
self.num_layers = num_layers
self.num_heads = num_heads
self.hidden_dim = hidden_dim
self.intermediate_dim = intermediate_dim
self.num_mels = num_mels
self.dropout = dropout
self.max_encoder_sequence_length = max_encoder_sequence_length
self.max_decoder_sequence_length = max_decoder_sequence_length
def get_config(self):
config = super().get_config()
config.update(
{
"vocabulary_size": self.vocabulary_size,
"num_layers": self.num_layers,
"num_heads": self.num_heads,
"hidden_dim": self.hidden_dim,
"intermediate_dim": self.intermediate_dim,
"num_mels": self.num_mels,
"dropout": self.dropout,
"max_encoder_sequence_length": self.max_encoder_sequence_length,
"max_decoder_sequence_length": self.max_decoder_sequence_length,
}
)
return config
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
| keras-nlp/keras_nlp/models/whisper/whisper_backbone.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/whisper/whisper_backbone.py",
"repo_id": "keras-nlp",
"token_count": 5620
} | 135 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
import tensorflow as tf
from keras_nlp.backend import keras
from keras_nlp.tests.test_case import TestCase
from keras_nlp.tokenizers.byte_pair_tokenizer import BytePairTokenizer
VOCAB_PATH = keras.utils.get_file(
None,
"https://storage.googleapis.com/keras-nlp/models/roberta_base/vocab.json",
)
MERGE_PATH = keras.utils.get_file(
None,
"https://storage.googleapis.com/keras-nlp/models/roberta_base/merges.txt",
)
@pytest.mark.large
class BytePairTokenizerTest(TestCase):
def setUp(self):
super().setUp()
self.tokenizer = BytePairTokenizer(
vocabulary=VOCAB_PATH, merges=MERGE_PATH
)
def test_tokenize_list_input(self):
input_data = ["brown.", "black."]
call_output = self.tokenizer(input_data)
tokenize_output = self.tokenizer.tokenize(input_data)
expected = [[31876, 4], [14178, 4]]
self.assertAllEqual(call_output, expected)
self.assertAllEqual(tokenize_output, expected)
input_data = tf.convert_to_tensor(["brown.", "black."])
encoded = self.tokenizer(input_data)
self.assertAllEqual(encoded, expected)
def test_tokenize_string_output(self):
input_data = ["quick brown fox.", "slow black bear."]
tokenizer = BytePairTokenizer(
vocabulary=VOCAB_PATH, merges=MERGE_PATH, dtype="string"
)
call_output = tokenizer(input_data)
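        # "Ġ" marks a leading space in byte-level BPE vocabularies.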
expected = [
["quick", "Ġbrown", "Ġfox", "."],
["slow", "Ġblack", "Ġbear", "."],
]
self.assertAllEqual(call_output, expected)
def test_tokenize_with_special_tokens(self):
vocab = {"sp": 0, "s": 1, "p": 2}
merges = ["s p"]
tokenizer = BytePairTokenizer(
vocabulary=vocab,
merges=merges,
unsplittable_tokens=["s", "p"],
)
output = tokenizer("sp")
self.assertAllEqual(output, [1, 2])
# If not setting special tokens, "sp" is one token.
tokenizer = BytePairTokenizer(
vocabulary=vocab,
merges=merges,
)
output = tokenizer("sp")
self.assertAllEqual(output, [0])
def test_tokenize_prefix_space(self):
input_data = ["brown.", "black."]
tokenizer = BytePairTokenizer(
vocabulary=VOCAB_PATH,
merges=MERGE_PATH,
dtype="string",
add_prefix_space=True,
)
call_output = tokenizer(input_data)
expected = [["Ġbrown", "."], ["Ġblack", "."]]
self.assertAllEqual(call_output, expected)
def test_tokenize_scalar_input(self):
input_data = "brown."
encoded = self.tokenizer.tokenize(input_data)
self.assertAllEqual(encoded, [31876, 4])
def test_detokenize_scalar_input(self):
input_data = ["quick brown fox."]
encoded = self.tokenizer.tokenize(input_data)
decoded = self.tokenizer.detokenize(encoded)
self.assertAllEqual(input_data, decoded)
def test_detokenize_list_input(self):
input_data = ["quick brown fox.", "slow bear"]
encoded = self.tokenizer.tokenize(input_data)
decoded = self.tokenizer.detokenize(encoded)
self.assertAllEqual(input_data, decoded)
def test_error_id_out_of_vocabulary(self):
with self.assertRaises(ValueError):
self.tokenizer.id_to_token(self.tokenizer.vocabulary_size())
with self.assertRaises(ValueError):
self.tokenizer.id_to_token(-1)
def test_whitespace_split(self):
input_data = "\n\n\n s"
encoded = self.tokenizer(input_data)
self.assertAllEqual(encoded, [50140, 50118, 1437, 579])
input_data = " \n\n\ns"
encoded = self.tokenizer(input_data)
self.assertAllEqual(encoded, [1437, 1437, 50140, 50118, 29])
def test_special_whitespace(self):
input_data = "\xa0 \xa0 \x3000 s"
encoded = self.tokenizer(input_data)
self.assertAllEqual(encoded, [50141, 50143, 12096, 579])
def test_cjk_input(self):
input_data = "素晴らしい!芭比Q啦~"
        # Black would format a long list with one element per line, which
        # hurts readability.
expected = [36714, 20024, 21402, 37127, 27, 20024, 48945, 47918]
expected += [47780, 43251, 4394, 10172, 36484, 27969, 12410, 37127]
expected += [10965, 10674, 1864, 42393, 15722, 18164, 43251, 10809]
expected += [17772]
encoded = self.tokenizer(input_data)
self.assertAllEqual(encoded, expected)
def test_tokenize_with_tf_data(self):
data = [
"I am just a test string",
"I am also a test string",
"I am still a test string",
"me too",
"I am not a test string (joking)",
"You guys should add punctuation!",
"Period matters!",
]
ds = tf.data.Dataset.from_tensor_slices(data)
ds = ds.batch(2).map(self.tokenizer)
encoded = next(iter(ds))
expected = [
[100, 524, 95, 10, 1296, 6755],
[100, 524, 67, 10, 1296, 6755],
]
self.assertAllEqual(encoded, expected)
def test_config(self):
input_data = ["the quick brown whale."]
cloned_tokenizer = BytePairTokenizer.from_config(
self.tokenizer.get_config()
)
cloned_tokenizer.set_vocabulary_and_merges(
self.tokenizer.vocabulary, self.tokenizer.merges
)
self.assertAllEqual(
self.tokenizer(input_data),
cloned_tokenizer(input_data),
)
| keras-nlp/keras_nlp/tokenizers/byte_pair_tokenizer_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/tokenizers/byte_pair_tokenizer_test.py",
"repo_id": "keras-nlp",
"token_count": 2810
} | 136 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import tensorflow as tf
from absl import logging
from keras_nlp.backend import keras
from keras_nlp.utils.tensor_utils import is_tensor_type
def clone_initializer(initializer):
"""Clones an initializer to ensure a new seed.
As of tensorflow 2.10, we need to clone user passed initializers when
invoking them twice to avoid creating the same randomized initialization.
"""
# If we get a string or dict, just return as we cannot and should not clone.
if not isinstance(initializer, keras.initializers.Initializer):
return initializer
config = initializer.get_config()
return initializer.__class__.from_config(config)
def pack_x_y_sample_weight(x, y=None, sample_weight=None):
"""Packs user-provided data into a tuple.
This is a temporary copy of `keras.utils.pack_x_y_sample_weight` while we
    wait for a change to the upstream version to propagate to a stable
release. See https://github.com/keras-team/keras-nlp/issues/492
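
    A minimal sketch of the packing behavior:

    ```python
    pack_x_y_sample_weight(x)                    # x (or `(x,)` if x is a list/tuple)
    pack_x_y_sample_weight(x, y)                 # (x, y)
    pack_x_y_sample_weight(x, y, sample_weight)  # (x, y, sample_weight)
    ```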
"""
if y is None:
if not isinstance(x, (list, tuple)):
return x
else:
return (x,)
elif sample_weight is None:
return (x, y)
else:
return (x, y, sample_weight)
def convert_inputs_to_list_of_tensor_segments(x):
"""Converts user inputs to a list of a tensor segments.
For models and layers which accept lists of string tensors to pack together,
this method converts user inputs to a uniform format in a way that can be
considered canonical for the library.
We handle the following:
- A single string will be converted to a tensor and wrapped in a list.
- A list of strings will be converted to a tensor and wrapped in a list.
- A single tensor will be wrapped in a list.
- A list of tensors will be passed through unaltered.
All other inputs will result in an error. This effectively means that users
who would like to pack multiple segments together should convert those
segments to tensors before calling the layer. This removes any ambiguity
in the input for those cases.
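
    A minimal sketch of the mapping (`t1` and `t2` stand in for arbitrary
    string tensors):

    ```python
    convert_inputs_to_list_of_tensor_segments("hi")          # [tf.constant("hi")]
    convert_inputs_to_list_of_tensor_segments(["hi", "yo"])  # [tf.constant(["hi", "yo"])]
    convert_inputs_to_list_of_tensor_segments([t1, t2])      # [t1, t2], unchanged
    ```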
"""
# Check the input type.
is_string = isinstance(x, (str, bytes))
is_tensor = is_tensor_type(x)
is_string_list = (
isinstance(x, (list, tuple)) and x and isinstance(x[0], (str, bytes))
)
is_tensor_list = isinstance(x, (list, tuple)) and x and is_tensor_type(x[0])
if is_string or is_string_list:
# Automatically convert raw strings or string lists to tensors.
# Wrap this input as a single (possibly batched) segment.
x = [tf.convert_to_tensor(x)]
elif is_tensor:
# Automatically wrap a single tensor as a single segment.
x = [x]
elif is_tensor_list:
# Pass lists of tensors though unaltered.
x = x
else:
# Error for all other input.
raise ValueError(
f"Unsupported input for `x`. `x` should be a string, a list of "
"strings, or a list of tensors. If passing multiple segments "
"which should packed together, please convert your inputs to a "
f"list of tensors. Received `x={x}`"
)
return x
def print_msg(message, line_break=True):
"""Print the message to absl logging or stdout."""
# Copied from core Keras.
if keras.utils.is_interactive_logging_enabled():
if line_break:
sys.stdout.write(message + "\n")
else:
sys.stdout.write(message)
sys.stdout.flush()
else:
logging.info(message)
def print_row(fields, positions, print_fn, nested_level=0):
"""Print a row of a summary message."""
# Copied from core Keras.
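    # Fields print column by column; text that overflows a column's width
    # wraps onto subsequent lines, and `nested_level` draws `|` bars at the
    # edges to mark nested sublayers.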
left_to_print = [str(x) for x in fields]
while any(left_to_print):
line = ""
for col in range(len(left_to_print)):
if col > 0:
start_pos = positions[col - 1]
else:
start_pos = 0
end_pos = positions[col]
# Leave room for 2 spaces to delineate columns
# we don't need any if we are printing the last column
space = 2 if col != len(positions) - 1 else 0
cutoff = end_pos - start_pos - space
fit_into_line = left_to_print[col][:cutoff]
# For nicer formatting we line-break on seeing end of
# tuple/dict etc.
line_break_conditions = ("),", "},", "],", "',")
candidate_cutoffs = [
fit_into_line.find(x) + len(x)
for x in line_break_conditions
if fit_into_line.find(x) >= 0
]
if candidate_cutoffs:
cutoff = min(candidate_cutoffs)
fit_into_line = fit_into_line[:cutoff]
if col == 0:
line += "|" * nested_level + " "
line += fit_into_line
line += " " * space if space else ""
left_to_print[col] = left_to_print[col][cutoff:]
# Pad out to the next position
if nested_level:
line += " " * (positions[col] - len(line) - nested_level)
else:
line += " " * (positions[col] - len(line))
line += "|" * nested_level
print_fn(line)
@keras.saving.register_keras_serializable(package="keras_nlp")
def gelu_approximate(x):
return keras.activations.gelu(x, approximate=True)
| keras-nlp/keras_nlp/utils/keras_utils.py/0 | {
"file_path": "keras-nlp/keras_nlp/utils/keras_utils.py",
"repo_id": "keras-nlp",
"token_count": 2392
} | 137 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import numpy as np
import tensorflow as tf
import torch
import transformers
from absl import app
from absl import flags
from checkpoint_conversion_utils import extract_files_from_archive
from checkpoint_conversion_utils import get_md5_checksum
from tensorflow import keras
import keras_nlp
PRESET_MAP = {
"xlm_roberta_base_multi": ("xlmr.base", "xlm-roberta-base"),
"xlm_roberta_large_multi": ("xlmr.large", "xlm-roberta-large"),
}
DOWNLOAD_SCRIPT_URL = "https://dl.fbaipublicfiles.com/fairseq/models/{}.tar.gz"
EXTRACT_DIR = "./{}"
FLAGS = flags.FLAGS
flags.DEFINE_string(
"preset", None, f'Must be one of {",".join(PRESET_MAP.keys())}'
)
def download_model(size):
print("-> Download original weights.")
archive_file_path = keras.utils.get_file(
fname=None,
origin=DOWNLOAD_SCRIPT_URL.format(size),
cache_subdir=os.path.join("checkpoint_conversion", FLAGS.preset),
)
extract_files_from_archive(archive_file_path)
def convert_checkpoints(size):
print("\n-> Convert original weights to KerasNLP format.")
# XLM-RoBERTa paths.
extract_dir = EXTRACT_DIR.format(size)
checkpoint_path = os.path.join(extract_dir, "model.pt")
# Load PyTorch XLM-R checkpoint.
pt_ckpt = torch.load(checkpoint_path, map_location=torch.device("cpu"))
pt_cfg = pt_ckpt["args"]
pt_model = pt_ckpt["model"]
cfg = {
"num_layers": pt_cfg.encoder_layers,
"num_heads": pt_cfg.encoder_attention_heads,
"hidden_dim": pt_cfg.encoder_embed_dim,
"intermediate_dim": pt_cfg.encoder_ffn_embed_dim,
"dropout": pt_cfg.dropout,
"max_sequence_length": pt_cfg.max_positions,
"vocab_size": (
pt_model["decoder.sentence_encoder.embed_tokens.weight"]
.numpy()
.shape[0]
),
}
print("Config:", cfg)
keras_nlp_model = keras_nlp.models.XLMRobertaBackbone.from_preset(
FLAGS.preset, load_weights=False
)
# Embedding Layer.
keras_nlp_model.get_layer("embeddings").token_embedding.embeddings.assign(
pt_model["decoder.sentence_encoder.embed_tokens.weight"].numpy()
)
keras_nlp_model.get_layer(
"embeddings"
).position_embedding.position_embeddings.assign(
pt_model["decoder.sentence_encoder.embed_positions.weight"].numpy()[
2:, :
]
)
# Embedding LayerNorm.
keras_nlp_model.get_layer("embeddings_layer_norm").gamma.assign(
pt_model["decoder.sentence_encoder.emb_layer_norm.weight"].numpy()
)
keras_nlp_model.get_layer("embeddings_layer_norm").beta.assign(
pt_model["decoder.sentence_encoder.emb_layer_norm.bias"].numpy()
)
range_1 = (0, cfg["hidden_dim"])
range_2 = (cfg["hidden_dim"], 2 * cfg["hidden_dim"])
range_3 = (2 * cfg["hidden_dim"], 3 * cfg["hidden_dim"])
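    # fairseq packs the query, key and value projections into one `in_proj`
    # matrix stacked along the output axis; the three ranges above slice
    # Q, K and V back out in that order.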
# Transformer layers.
for i in range(keras_nlp_model.num_layers):
q_k_v_wts = (
pt_model[
f"decoder.sentence_encoder.layers.{i}.self_attn.in_proj_weight"
]
.numpy()
.T
)
q_k_v_bias = (
pt_model[
f"decoder.sentence_encoder.layers.{i}.self_attn.in_proj_bias"
]
.numpy()
.T
)
# Query
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._query_dense.kernel.assign(
q_k_v_wts[:, range_1[0] : range_1[1]].reshape(
(cfg["hidden_dim"], cfg["num_heads"], -1)
)
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._query_dense.bias.assign(
q_k_v_bias[range_1[0] : range_1[1]].reshape((cfg["num_heads"], -1))
)
# Key
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._key_dense.kernel.assign(
q_k_v_wts[:, range_2[0] : range_2[1]].reshape(
(cfg["hidden_dim"], cfg["num_heads"], -1)
)
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._key_dense.bias.assign(
q_k_v_bias[range_2[0] : range_2[1]].reshape((cfg["num_heads"], -1))
)
# Value
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._value_dense.kernel.assign(
q_k_v_wts[:, range_3[0] : range_3[1]].reshape(
(cfg["hidden_dim"], cfg["num_heads"], -1)
)
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._value_dense.bias.assign(
q_k_v_bias[range_3[0] : range_3[1]].reshape((cfg["num_heads"], -1))
)
# Attention output
attn_output_wts = (
pt_model[
f"decoder.sentence_encoder.layers.{i}.self_attn.out_proj.weight"
]
.numpy()
.T
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._output_dense.kernel.assign(
attn_output_wts.reshape((cfg["num_heads"], -1, cfg["hidden_dim"]))
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._output_dense.bias.assign(
pt_model[
f"decoder.sentence_encoder.layers.{i}.self_attn.out_proj.bias"
].numpy()
)
# Attention LayerNorm
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer_norm.gamma.assign(
pt_model[
f"decoder.sentence_encoder.layers.{i}.self_attn_layer_norm.weight"
].numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer_norm.beta.assign(
pt_model[
f"decoder.sentence_encoder.layers.{i}.self_attn_layer_norm.bias"
].numpy()
)
# Intermediate FF layer
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._feedforward_intermediate_dense.kernel.assign(
pt_model[f"decoder.sentence_encoder.layers.{i}.fc1.weight"]
.numpy()
.T
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._feedforward_intermediate_dense.bias.assign(
pt_model[f"decoder.sentence_encoder.layers.{i}.fc1.bias"].numpy()
)
# Output dense layer
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._feedforward_output_dense.kernel.assign(
pt_model[f"decoder.sentence_encoder.layers.{i}.fc2.weight"]
.numpy()
.T
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._feedforward_output_dense.bias.assign(
pt_model[f"decoder.sentence_encoder.layers.{i}.fc2.bias"].numpy()
)
# FF LayerNorm
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._feedforward_layer_norm.gamma.assign(
pt_model[
f"decoder.sentence_encoder.layers.{i}.final_layer_norm.weight"
].numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._feedforward_layer_norm.beta.assign(
pt_model[
f"decoder.sentence_encoder.layers.{i}.final_layer_norm.bias"
].numpy()
)
# Save the model.
print(f"\n-> Save KerasNLP model weights to `{FLAGS.preset}.h5`.")
keras_nlp_model.save_weights(f"{FLAGS.preset}.h5")
return keras_nlp_model
def define_preprocessor(hf_model_name, size):
print("\n-> Define the tokenizers.")
extract_dir = EXTRACT_DIR.format(size)
spm_path = os.path.join(extract_dir, "sentencepiece.bpe.model")
keras_nlp_tokenizer = keras_nlp.models.XLMRobertaTokenizer(
proto=spm_path,
)
keras_nlp_preprocessor = keras_nlp.models.XLMRobertaPreprocessor(
keras_nlp_tokenizer
)
hf_tokenizer = transformers.AutoTokenizer.from_pretrained(hf_model_name)
print("\n-> Print MD5 checksum of the vocab files.")
print(f"`{spm_path}` md5sum: ", get_md5_checksum(spm_path))
return keras_nlp_preprocessor, hf_tokenizer
def check_output(
keras_nlp_model,
keras_nlp_preprocessor,
hf_model,
hf_tokenizer,
):
print("\n-> Check the outputs.")
input_str = ["the quick brown fox ran, galloped and jumped."]
# KerasNLP
keras_nlp_inputs = keras_nlp_preprocessor(tf.constant(input_str))
keras_nlp_output = keras_nlp_model.predict(keras_nlp_inputs)
# HF
hf_inputs = hf_tokenizer(
input_str, padding="max_length", return_tensors="pt"
)
hf_output = hf_model(**hf_inputs).last_hidden_state
print("KerasNLP output:", keras_nlp_output[0, 0, :10])
print("HF output:", hf_output[0, 0, :10])
print("Difference:", np.mean(keras_nlp_output - hf_output.detach().numpy()))
# Show the MD5 checksum of the model weights.
print("Model md5sum: ", get_md5_checksum(f"./{FLAGS.preset}.h5"))
return keras_nlp_output
def main(_):
assert (
FLAGS.preset in PRESET_MAP.keys()
), f'Invalid preset {FLAGS.preset}. Must be one of {",".join(PRESET_MAP.keys())}'
size = PRESET_MAP[FLAGS.preset][0]
hf_model_name = PRESET_MAP[FLAGS.preset][1]
download_model(size)
keras_nlp_model = convert_checkpoints(size)
print("\n-> Load HF model.")
hf_model = transformers.AutoModel.from_pretrained(hf_model_name)
hf_model.eval()
keras_nlp_preprocessor, hf_tokenizer = define_preprocessor(
hf_model_name, size
)
check_output(
keras_nlp_model,
keras_nlp_preprocessor,
hf_model,
hf_tokenizer,
)
if __name__ == "__main__":
flags.mark_flag_as_required("preset")
app.run(main)
| keras-nlp/tools/checkpoint_conversion/convert_xlm_roberta_checkpoints.py/0 | {
"file_path": "keras-nlp/tools/checkpoint_conversion/convert_xlm_roberta_checkpoints.py",
"repo_id": "keras-nlp",
"token_count": 5274
} | 138 |
"""Utilities for real-time data augmentation on image data.
"""
import multiprocessing.pool
import os
import numpy as np
from .iterator import BatchFromFilesMixin, Iterator
from .utils import _list_valid_filenames_in_directory
class DirectoryIterator(BatchFromFilesMixin, Iterator):
"""Iterator capable of reading images from a directory on disk.
# Arguments
directory: string, path to the directory to read images from.
Each subdirectory in this directory will be
considered to contain images from one class,
or alternatively you could specify class subdirectories
via the `classes` argument.
image_data_generator: Instance of `ImageDataGenerator`
to use for random transformations and normalization.
target_size: tuple of integers, dimensions to resize input images to.
color_mode: One of `"rgb"`, `"rgba"`, `"grayscale"`.
Color mode to read images.
classes: Optional list of strings, names of subdirectories
containing images from each class (e.g. `["dogs", "cats"]`).
It will be computed automatically if not set.
class_mode: Mode for yielding the targets:
`"binary"`: binary targets (if there are only two classes),
`"categorical"`: categorical targets,
`"sparse"`: integer targets,
`"input"`: targets are images identical to input images (mainly
used to work with autoencoders),
`None`: no targets get yielded (only input images are yielded).
batch_size: Integer, size of a batch.
shuffle: Boolean, whether to shuffle the data between epochs.
If set to False, sorts the data in alphanumeric order.
seed: Random seed for data shuffling.
data_format: String, one of `channels_first`, `channels_last`.
save_to_dir: Optional directory where to save the pictures
being yielded, in a viewable format. This is useful
for visualizing the random transformations being
applied, for debugging purposes.
save_prefix: String prefix to use for saving sample
images (if `save_to_dir` is set).
save_format: Format to use for saving sample images
(if `save_to_dir` is set).
        follow_links: Boolean, whether to follow symbolic links to
            subdirectories.
subset: Subset of data (`"training"` or `"validation"`) if
validation_split is set in ImageDataGenerator.
interpolation: Interpolation method used to resample the image if the
target size is different from that of the loaded image.
Supported methods are "nearest", "bilinear", and "bicubic".
If PIL version 1.1.3 or newer is installed, "lanczos" is also
supported. If PIL version 3.4.0 or newer is installed, "box" and
"hamming" are also supported. By default, "nearest" is used.
keep_aspect_ratio: Boolean, whether to resize images to a target size
without aspect ratio distortion. The image is cropped in the center
with target aspect ratio before resizing.
dtype: Dtype to use for generated arrays.
"""
allowed_class_modes = {'categorical', 'binary', 'sparse', 'input', None}
def __new__(cls, *args, **kwargs):
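        # Dynamically mix tf.keras's Sequence into the class bases when
        # TensorFlow is available, so Keras treats this iterator as a
        # Sequence; silently skipped if TensorFlow is not installed.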
try:
from tensorflow.keras.utils import Sequence as TFSequence
if TFSequence not in cls.__bases__:
cls.__bases__ = cls.__bases__ + (TFSequence,)
except ImportError:
pass
return super(DirectoryIterator, cls).__new__(cls)
def __init__(self,
directory,
image_data_generator,
target_size=(256, 256),
color_mode='rgb',
classes=None,
class_mode='categorical',
batch_size=32,
shuffle=True,
seed=None,
data_format='channels_last',
save_to_dir=None,
save_prefix='',
save_format='png',
follow_links=False,
subset=None,
interpolation='nearest',
keep_aspect_ratio=False,
dtype='float32'):
super(DirectoryIterator, self).set_processing_attrs(image_data_generator,
target_size,
color_mode,
data_format,
save_to_dir,
save_prefix,
save_format,
subset,
interpolation,
keep_aspect_ratio)
self.directory = directory
self.classes = classes
if class_mode not in self.allowed_class_modes:
raise ValueError('Invalid class_mode: {}; expected one of: {}'
.format(class_mode, self.allowed_class_modes))
self.class_mode = class_mode
self.dtype = dtype
# First, count the number of samples and classes.
self.samples = 0
if not classes:
classes = []
for subdir in sorted(os.listdir(directory)):
if os.path.isdir(os.path.join(directory, subdir)):
classes.append(subdir)
self.num_classes = len(classes)
self.class_indices = dict(zip(classes, range(len(classes))))
pool = multiprocessing.pool.ThreadPool()
# Second, build an index of the images
# in the different class subfolders.
results = []
self.filenames = []
i = 0
for dirpath in (os.path.join(directory, subdir) for subdir in classes):
results.append(
pool.apply_async(_list_valid_filenames_in_directory,
(dirpath, self.white_list_formats, self.split,
self.class_indices, follow_links)))
classes_list = []
for res in results:
classes, filenames = res.get()
classes_list.append(classes)
self.filenames += filenames
self.samples = len(self.filenames)
self.classes = np.zeros((self.samples,), dtype='int32')
for classes in classes_list:
self.classes[i:i + len(classes)] = classes
i += len(classes)
print('Found %d images belonging to %d classes.' %
(self.samples, self.num_classes))
pool.close()
pool.join()
self._filepaths = [
os.path.join(self.directory, fname) for fname in self.filenames
]
super(DirectoryIterator, self).__init__(self.samples,
batch_size,
shuffle,
seed)
@property
def filepaths(self):
return self._filepaths
@property
def labels(self):
return self.classes
@property # mixin needs this property to work
def sample_weight(self):
# no sample weights will be returned
return None
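# Hedged usage sketch (assumption, not part of the original module): users
# normally obtain a DirectoryIterator indirectly through
# ImageDataGenerator.flow_from_directory; the directory path below is
# hypothetical.
#
#   from keras_preprocessing.image import ImageDataGenerator
#   datagen = ImageDataGenerator(rescale=1.0 / 255)
#   it = datagen.flow_from_directory(
#       "data/train",  # hypothetical path with one subfolder per class
#       target_size=(256, 256),
#       class_mode="categorical",
#       batch_size=32,
#   )
#   x_batch, y_batch = next(it)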
| keras-preprocessing/keras_preprocessing/image/directory_iterator.py/0 | {
"file_path": "keras-preprocessing/keras_preprocessing/image/directory_iterator.py",
"repo_id": "keras-preprocessing",
"token_count": 3650
} | 139 |
import io
import resource
from pathlib import Path
import numpy as np
import PIL
import pytest
from keras_preprocessing.image import utils
def test_validate_filename(tmpdir):
valid_extensions = ('png', 'jpg')
filename = tmpdir.ensure('test.png')
assert utils.validate_filename(str(filename), valid_extensions)
filename = tmpdir.ensure('test.PnG')
assert utils.validate_filename(str(filename), valid_extensions)
filename = tmpdir.ensure('test.some_extension')
assert not utils.validate_filename(str(filename), valid_extensions)
assert not utils.validate_filename('some_test_file.png', valid_extensions)
def test_load_img(tmpdir):
filename_rgb = str(tmpdir / 'rgb_utils.png')
filename_rgba = str(tmpdir / 'rgba_utils.png')
filename_grayscale_8bit = str(tmpdir / 'grayscale_8bit_utils.png')
filename_grayscale_16bit = str(tmpdir / 'grayscale_16bit_utils.tiff')
filename_grayscale_32bit = str(tmpdir / 'grayscale_32bit_utils.tiff')
original_rgb_array = np.array(255 * np.random.rand(100, 100, 3),
dtype=np.uint8)
original_rgb = utils.array_to_img(original_rgb_array, scale=False)
original_rgb.save(filename_rgb)
original_rgba_array = np.array(255 * np.random.rand(100, 100, 4),
dtype=np.uint8)
original_rgba = utils.array_to_img(original_rgba_array, scale=False)
original_rgba.save(filename_rgba)
original_grayscale_8bit_array = np.array(255 * np.random.rand(100, 100, 1),
dtype=np.uint8)
original_grayscale_8bit = utils.array_to_img(original_grayscale_8bit_array,
scale=False)
original_grayscale_8bit.save(filename_grayscale_8bit)
    original_grayscale_16bit_array = np.array(
        # Use the int16 value range (the previous int32 bounds relied on
        # silent wraparound during the cast to int16).
        np.random.randint(-32768, 32767, (100, 100, 1)), dtype=np.int16
    )
original_grayscale_16bit = utils.array_to_img(original_grayscale_16bit_array,
scale=False, dtype='int16')
original_grayscale_16bit.save(filename_grayscale_16bit)
original_grayscale_32bit_array = np.array(
np.random.randint(-2147483648, 2147483647, (100, 100, 1)), dtype=np.int32
)
original_grayscale_32bit = utils.array_to_img(original_grayscale_32bit_array,
scale=False, dtype='int32')
original_grayscale_32bit.save(filename_grayscale_32bit)
# Test that loaded image is exactly equal to original.
loaded_im = utils.load_img(filename_rgb)
loaded_im_array = utils.img_to_array(loaded_im)
assert loaded_im_array.shape == original_rgb_array.shape
assert np.all(loaded_im_array == original_rgb_array)
loaded_im = utils.load_img(filename_rgba, color_mode='rgba')
loaded_im_array = utils.img_to_array(loaded_im)
assert loaded_im_array.shape == original_rgba_array.shape
assert np.all(loaded_im_array == original_rgba_array)
loaded_im = utils.load_img(filename_rgb, color_mode='grayscale')
loaded_im_array = utils.img_to_array(loaded_im)
assert loaded_im_array.shape == (original_rgb_array.shape[0],
original_rgb_array.shape[1], 1)
loaded_im = utils.load_img(filename_grayscale_8bit, color_mode='grayscale')
loaded_im_array = utils.img_to_array(loaded_im)
assert loaded_im_array.shape == original_grayscale_8bit_array.shape
assert np.all(loaded_im_array == original_grayscale_8bit_array)
loaded_im = utils.load_img(filename_grayscale_16bit, color_mode='grayscale')
loaded_im_array = utils.img_to_array(loaded_im, dtype='int16')
assert loaded_im_array.shape == original_grayscale_16bit_array.shape
assert np.all(loaded_im_array == original_grayscale_16bit_array)
# test casting int16 image to float32
loaded_im_array = utils.img_to_array(loaded_im)
assert np.allclose(loaded_im_array, original_grayscale_16bit_array)
loaded_im = utils.load_img(filename_grayscale_32bit, color_mode='grayscale')
loaded_im_array = utils.img_to_array(loaded_im, dtype='int32')
assert loaded_im_array.shape == original_grayscale_32bit_array.shape
assert np.all(loaded_im_array == original_grayscale_32bit_array)
# test casting int32 image to float32
loaded_im_array = utils.img_to_array(loaded_im)
assert np.allclose(loaded_im_array, original_grayscale_32bit_array)
# Test that nothing is changed when target size is equal to original.
loaded_im = utils.load_img(filename_rgb, target_size=(100, 100))
loaded_im_array = utils.img_to_array(loaded_im)
assert loaded_im_array.shape == original_rgb_array.shape
assert np.all(loaded_im_array == original_rgb_array)
loaded_im = utils.load_img(filename_rgba, color_mode='rgba',
target_size=(100, 100))
loaded_im_array = utils.img_to_array(loaded_im)
assert loaded_im_array.shape == original_rgba_array.shape
assert np.all(loaded_im_array == original_rgba_array)
loaded_im = utils.load_img(filename_rgb, color_mode='grayscale',
target_size=(100, 100))
loaded_im_array = utils.img_to_array(loaded_im)
assert loaded_im_array.shape == (original_rgba_array.shape[0],
original_rgba_array.shape[1], 1)
loaded_im = utils.load_img(filename_grayscale_8bit, color_mode='grayscale',
target_size=(100, 100))
loaded_im_array = utils.img_to_array(loaded_im)
assert loaded_im_array.shape == original_grayscale_8bit_array.shape
assert np.all(loaded_im_array == original_grayscale_8bit_array)
loaded_im = utils.load_img(filename_grayscale_16bit, color_mode='grayscale',
target_size=(100, 100))
loaded_im_array = utils.img_to_array(loaded_im, dtype='int16')
assert loaded_im_array.shape == original_grayscale_16bit_array.shape
assert np.all(loaded_im_array == original_grayscale_16bit_array)
loaded_im = utils.load_img(filename_grayscale_32bit, color_mode='grayscale',
target_size=(100, 100))
loaded_im_array = utils.img_to_array(loaded_im, dtype='int32')
assert loaded_im_array.shape == original_grayscale_32bit_array.shape
assert np.all(loaded_im_array == original_grayscale_32bit_array)
# Test down-sampling with bilinear interpolation.
loaded_im = utils.load_img(filename_rgb, target_size=(25, 25))
loaded_im_array = utils.img_to_array(loaded_im)
assert loaded_im_array.shape == (25, 25, 3)
loaded_im = utils.load_img(filename_rgba, color_mode='rgba',
target_size=(25, 25))
loaded_im_array = utils.img_to_array(loaded_im)
assert loaded_im_array.shape == (25, 25, 4)
loaded_im = utils.load_img(filename_rgb, color_mode='grayscale',
target_size=(25, 25))
loaded_im_array = utils.img_to_array(loaded_im)
assert loaded_im_array.shape == (25, 25, 1)
loaded_im = utils.load_img(filename_grayscale_8bit, color_mode='grayscale',
target_size=(25, 25))
loaded_im_array = utils.img_to_array(loaded_im)
assert loaded_im_array.shape == (25, 25, 1)
loaded_im = utils.load_img(filename_grayscale_16bit, color_mode='grayscale',
target_size=(25, 25))
loaded_im_array = utils.img_to_array(loaded_im, dtype='int16')
assert loaded_im_array.shape == (25, 25, 1)
loaded_im = utils.load_img(filename_grayscale_32bit, color_mode='grayscale',
target_size=(25, 25))
loaded_im_array = utils.img_to_array(loaded_im, dtype='int32')
assert loaded_im_array.shape == (25, 25, 1)
# Test down-sampling with nearest neighbor interpolation.
loaded_im_nearest = utils.load_img(filename_rgb, target_size=(25, 25),
interpolation="nearest")
loaded_im_array_nearest = utils.img_to_array(loaded_im_nearest)
assert loaded_im_array_nearest.shape == (25, 25, 3)
assert np.any(loaded_im_array_nearest != loaded_im_array)
loaded_im_nearest = utils.load_img(filename_rgba, color_mode='rgba',
target_size=(25, 25),
interpolation="nearest")
loaded_im_array_nearest = utils.img_to_array(loaded_im_nearest)
assert loaded_im_array_nearest.shape == (25, 25, 4)
assert np.any(loaded_im_array_nearest != loaded_im_array)
loaded_im = utils.load_img(filename_grayscale_8bit, color_mode='grayscale',
target_size=(25, 25), interpolation="nearest")
loaded_im_array = utils.img_to_array(loaded_im)
assert loaded_im_array.shape == (25, 25, 1)
loaded_im = utils.load_img(filename_grayscale_16bit, color_mode='grayscale',
target_size=(25, 25), interpolation="nearest")
loaded_im_array = utils.img_to_array(loaded_im, dtype='int16')
assert loaded_im_array.shape == (25, 25, 1)
loaded_im = utils.load_img(filename_grayscale_32bit, color_mode='grayscale',
target_size=(25, 25), interpolation="nearest")
loaded_im_array = utils.img_to_array(loaded_im, dtype='int32')
assert loaded_im_array.shape == (25, 25, 1)
# Test different path type
with open(filename_grayscale_32bit, 'rb') as f:
        _path = io.BytesIO(f.read())  # io.BytesIO
loaded_im = utils.load_img(_path, color_mode='grayscale')
loaded_im_array = utils.img_to_array(loaded_im, dtype=np.int32)
assert np.all(loaded_im_array == original_grayscale_32bit_array)
_path = filename_grayscale_32bit # str
loaded_im = utils.load_img(_path, color_mode='grayscale')
loaded_im_array = utils.img_to_array(loaded_im, dtype=np.int32)
assert np.all(loaded_im_array == original_grayscale_32bit_array)
_path = filename_grayscale_32bit.encode() # bytes
loaded_im = utils.load_img(_path, color_mode='grayscale')
loaded_im_array = utils.img_to_array(loaded_im, dtype=np.int32)
assert np.all(loaded_im_array == original_grayscale_32bit_array)
_path = Path(tmpdir / 'grayscale_32bit_utils.tiff') # Path
loaded_im = utils.load_img(_path, color_mode='grayscale')
loaded_im_array = utils.img_to_array(loaded_im, dtype=np.int32)
assert np.all(loaded_im_array == original_grayscale_32bit_array)
    # An unsupported interpolation only matters when resizing: without a
    # target_size no resampling happens, so this call must not raise.
    loaded_im = utils.load_img(filename_rgb, interpolation="unsupported")
with pytest.raises(ValueError):
loaded_im = utils.load_img(filename_rgb, target_size=(25, 25),
interpolation="unsupported")
# Check that the aspect ratio of a square is the same
filename_red_square = str(tmpdir / 'red_square_utils.png')
A = np.zeros((50, 100, 3), dtype=np.uint8) # rectangle image 100x50
A[20:30, 45:55, 0] = 255 # red square 10x10
red_square_array = np.array(A)
red_square = utils.array_to_img(red_square_array, scale=False)
red_square.save(filename_red_square)
loaded_im = utils.load_img(filename_red_square, target_size=(25, 25),
keep_aspect_ratio=True)
loaded_im_array = utils.img_to_array(loaded_im)
assert loaded_im_array.shape == (25, 25, 3)
    red_channel_arr = loaded_im_array[:, :, 0].astype(bool)  # np.bool was removed in NumPy 1.24
square_width = np.sum(np.sum(red_channel_arr, axis=0))
square_height = np.sum(np.sum(red_channel_arr, axis=1))
aspect_ratio_result = square_width / square_height
# original square had 1:1 ratio
assert aspect_ratio_result == pytest.approx(1.0)
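# Hedged worked example (assumption): with keep_aspect_ratio=True, the
# 100x50 source above is first center-cropped to 50x50 (matching the 1:1
# target ratio) and only then resized to 25x25, which is why the 10x10 red
# square stays square instead of being stretched.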
def test_list_pictures(tmpdir):
filenames = ['test.png', 'test0.jpg', 'test-1.jpeg', '2test.bmp',
'2-test.ppm', '3.png', '1.jpeg', 'test.bmp', 'test0.ppm',
'test4.tiff', '5-test.tif', 'test.txt', 'foo.csv',
'face.gif', 'bar.txt']
subdirs = ['', 'subdir1', 'subdir2']
filenames = [tmpdir.ensure(subdir, f) for subdir in subdirs
for f in filenames]
found_images = utils.list_pictures(str(tmpdir))
assert len(found_images) == 33
found_images = utils.list_pictures(str(tmpdir), ext='png')
assert len(found_images) == 6
def test_array_to_img_and_img_to_array():
height, width = 10, 8
# Test the data format
# Test RGB 3D
x = np.random.random((3, height, width))
img = utils.array_to_img(x, data_format='channels_first')
assert img.size == (width, height)
x = utils.img_to_array(img, data_format='channels_first')
assert x.shape == (3, height, width)
# Test RGBA 3D
x = np.random.random((4, height, width))
img = utils.array_to_img(x, data_format='channels_first')
assert img.size == (width, height)
x = utils.img_to_array(img, data_format='channels_first')
assert x.shape == (4, height, width)
# Test 2D
x = np.random.random((1, height, width))
img = utils.array_to_img(x, data_format='channels_first')
assert img.size == (width, height)
x = utils.img_to_array(img, data_format='channels_first')
assert x.shape == (1, height, width)
# grayscale 32-bit signed integer
x = np.array(
np.random.randint(-2147483648, 2147483647, (1, height, width)),
dtype=np.int32
)
img = utils.array_to_img(x, data_format='channels_first')
assert img.size == (width, height)
x = utils.img_to_array(img, data_format='channels_first')
assert x.shape == (1, height, width)
# Test tf data format
# Test RGB 3D
x = np.random.random((height, width, 3))
img = utils.array_to_img(x, data_format='channels_last')
assert img.size == (width, height)
x = utils.img_to_array(img, data_format='channels_last')
assert x.shape == (height, width, 3)
# Test RGBA 3D
x = np.random.random((height, width, 4))
img = utils.array_to_img(x, data_format='channels_last')
assert img.size == (width, height)
x = utils.img_to_array(img, data_format='channels_last')
assert x.shape == (height, width, 4)
# Test 2D
x = np.random.random((height, width, 1))
img = utils.array_to_img(x, data_format='channels_last')
assert img.size == (width, height)
x = utils.img_to_array(img, data_format='channels_last')
assert x.shape == (height, width, 1)
# grayscale 16-bit signed integer
    x = np.array(
        # int16 value range (the previous int32 bounds silently wrapped).
        np.random.randint(-32768, 32767, (height, width, 1)),
        dtype=np.int16
    )
img = utils.array_to_img(x, data_format='channels_last')
assert img.size == (width, height)
x = utils.img_to_array(img, data_format='channels_last')
assert x.shape == (height, width, 1)
# grayscale 32-bit signed integer
x = np.array(
np.random.randint(-2147483648, 2147483647, (height, width, 1)),
dtype=np.int32
)
img = utils.array_to_img(x, data_format='channels_last')
assert img.size == (width, height)
x = utils.img_to_array(img, data_format='channels_last')
assert x.shape == (height, width, 1)
# Test invalid use case
with pytest.raises(ValueError):
x = np.random.random((height, width)) # not 3D
img = utils.array_to_img(x, data_format='channels_first')
with pytest.raises(ValueError):
x = np.random.random((height, width, 3))
# unknown data_format
img = utils.array_to_img(x, data_format='channels')
with pytest.raises(ValueError):
        # neither RGB, RGBA, nor grayscale
x = np.random.random((height, width, 5))
img = utils.array_to_img(x, data_format='channels_last')
with pytest.raises(ValueError):
x = np.random.random((height, width, 3))
# unknown data_format
img = utils.img_to_array(x, data_format='channels')
with pytest.raises(ValueError):
        # neither RGB, RGBA, nor grayscale
x = np.random.random((height, width, 5, 3))
img = utils.img_to_array(x, data_format='channels_last')
def write_sample_image(tmpdir):
im = utils.array_to_img(np.random.rand(1, 1, 3))
path = str(tmpdir / 'sample_image.png')
utils.save_img(path, im)
return path
def test_image_file_handlers_close(tmpdir):
path = write_sample_image(tmpdir)
max_open_files, _ = resource.getrlimit(resource.RLIMIT_NOFILE)
for i in range(max_open_files+1):
utils.load_img(path)
def test_load_img_returns_image(tmpdir):
path = write_sample_image(tmpdir)
im = utils.load_img(path)
assert isinstance(im, PIL.Image.Image)
if __name__ == '__main__':
pytest.main([__file__])
| keras-preprocessing/tests/image/utils_test.py/0 | {
"file_path": "keras-preprocessing/tests/image/utils_test.py",
"repo_id": "keras-preprocessing",
"token_count": 7322
} | 140 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/api/keras_tuner/hypermodels/'" />
| keras-tuner/docs/site/documentation/hypermodels/index.html/0 | {
"file_path": "keras-tuner/docs/site/documentation/hypermodels/index.html",
"repo_id": "keras-tuner",
"token_count": 37
} | 141 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for HyperResNet Model."""
import os
import numpy as np
import pytest
from keras_tuner.applications import resnet
from keras_tuner.backend import config
from keras_tuner.backend import keras
from keras_tuner.engine import hyperparameters as hp_module
@pytest.mark.skipif("TRAVIS" in os.environ, reason="Causes CI to stall")
@pytest.mark.skipif(
config.multi_backend(),
reason="The test is too slow.",
)
@pytest.mark.parametrize("version", ["v1", "v2", "next"])
def test_model_construction(version):
hp = hp_module.HyperParameters()
hp.Choice("version", [version])
hypermodel = resnet.HyperResNet(input_shape=(128, 128, 3), classes=10)
model = hypermodel.build(hp)
assert hp.values["version"] == version
assert model.layers
assert model.name == "ResNet"
assert model.output_shape == (None, 10)
model.train_on_batch(np.ones((1, 128, 128, 3)), np.ones((1, 10)))
out = model.predict(np.ones((1, 128, 128, 3)))
assert out.shape == (1, 10)
def test_hyperparameter_existence_and_defaults():
hp = hp_module.HyperParameters()
hypermodel = resnet.HyperResNet(input_shape=(256, 256, 3), classes=10)
hypermodel.build(hp)
assert hp.get("version") == "v2"
assert hp.get("conv3_depth") == 4
assert hp.get("conv4_depth") == 6
assert hp.get("learning_rate") == 0.01
assert hp.get("pooling") == "avg"
def test_include_top_false():
hp = hp_module.HyperParameters()
hypermodel = resnet.HyperResNet(
input_shape=(256, 256, 3), classes=10, include_top=False
)
model = hypermodel.build(hp)
# Check that model wasn't compiled.
assert not hasattr(model, "optimizer") or not model.optimizer
def test_hyperparameter_override():
hp = hp_module.HyperParameters()
hp.Choice("version", ["v1"])
hp.Fixed("conv3_depth", 10)
hypermodel = resnet.HyperResNet(input_shape=(256, 256, 3), classes=10)
hypermodel.build(hp)
assert hp.get("version") == "v1"
assert hp.get("conv3_depth") == 10
assert hp.get("conv4_depth") == 6
def test_input_tensor():
hp = hp_module.HyperParameters()
inputs = keras.Input(shape=(256, 256, 3))
hypermodel = resnet.HyperResNet(input_tensor=inputs, include_top=False)
model = hypermodel.build(hp)
assert model.inputs == [inputs]
def test_pooling_is_max():
hp = hp_module.HyperParameters()
hp.values["pooling"] = "max"
hypermodel = resnet.HyperResNet(input_shape=(32, 32, 3), classes=10)
hypermodel.build(hp)
def test_no_classes_raise_error():
with pytest.raises(ValueError, match="classes"):
resnet.HyperResNet(input_shape=(32, 32, 3))
def test_no_input_shape_tensor_raise_error():
with pytest.raises(ValueError, match="input_tensor"):
resnet.HyperResNet(classes=10)
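# Hedged usage sketch (assumption, not part of the original tests):
# HyperResNet is itself a HyperModel and can be handed straight to a tuner;
# the directory below is hypothetical.
#
#   import keras_tuner
#   hypermodel = resnet.HyperResNet(input_shape=(128, 128, 3), classes=10)
#   tuner = keras_tuner.RandomSearch(
#       hypermodel,
#       objective="val_accuracy",
#       max_trials=3,
#       directory="/tmp/hyper_resnet_demo",  # hypothetical path
#   )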
| keras-tuner/keras_tuner/applications/resnet_test.py/0 | {
"file_path": "keras-tuner/keras_tuner/applications/resnet_test.py",
"repo_id": "keras-tuner",
"token_count": 1223
} | 142 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Distribution utilities."""
import os
def has_chief_oracle():
"""Checks for distributed tuning with a chief Oracle.
`CloudOracle` manages its own distribution so should not set
"KERASTUNER_ORACLE_IP".
Returns:
Boolean, whether distributed tuning with a chief Oracle should be run.
"""
if "KERASTUNER_ORACLE_IP" in os.environ:
if "KERASTUNER_ORACLE_PORT" not in os.environ:
raise RuntimeError(
'Environment variable "KERASTUNER_ORACLE_IP" was set, '
'but "KERASTUNER_ORACLE_PORT" was not. Please specify '
"a port."
)
if "KERASTUNER_TUNER_ID" not in os.environ:
raise RuntimeError(
'Environment variable "KERASTUNER_ORACLE_IP" was set, '
'but "KERASTUNER_TUNER_ID" was not. Please specify '
"an ID for each tuner."
)
return True
return False
def is_chief_oracle():
if has_chief_oracle():
return "chief" in os.environ["KERASTUNER_TUNER_ID"]
return False
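# Hedged configuration sketch (assumption, not part of the original module):
# distributed tuning is driven entirely by environment variables, e.g.
#
#   export KERASTUNER_ORACLE_IP="10.0.0.1"
#   export KERASTUNER_ORACLE_PORT="8000"
#   export KERASTUNER_TUNER_ID="chief"   # workers use e.g. "tuner0", "tuner1"
#
# With these set, has_chief_oracle() returns True, and is_chief_oracle() is
# True only in the process whose tuner ID contains "chief".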
| keras-tuner/keras_tuner/distribute/utils.py/0 | {
"file_path": "keras-tuner/keras_tuner/distribute/utils.py",
"repo_id": "keras-tuner",
"token_count": 648
} | 143 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_tuner import protos
from keras_tuner.api_export import keras_tuner_export
from keras_tuner.engine import conditions as conditions_mod
from keras_tuner.engine.hyperparameters import hp_utils
from keras_tuner.engine.hyperparameters.hp_types import numerical
@keras_tuner_export("keras_tuner.engine.hyperparameters.Float")
class Float(numerical.Numerical):
"""Floating point value hyperparameter.
Example #1:
```py
hp.Float(
"image_rotation_factor",
min_value=0,
max_value=1)
```
All values in interval [0, 1] have equal probability of being sampled.
Example #2:
```py
hp.Float(
"image_rotation_factor",
min_value=0,
max_value=1,
step=0.2)
```
`step` is the minimum distance between samples.
The possible values are [0, 0.2, 0.4, 0.6, 0.8, 1.0].
Example #3:
```py
hp.Float(
"learning_rate",
min_value=0.001,
max_value=10,
step=10,
sampling="log")
```
When `sampling="log"`, the `step` is multiplied between samples.
The possible values are [0.001, 0.01, 0.1, 1, 10].
Args:
name: A string. the name of parameter. Must be unique for each
`HyperParameter` instance in the search space.
min_value: Float, the lower bound of the range.
max_value: Float, the upper bound of the range.
step: Optional float, the distance between two consecutive samples in
the range. If left unspecified, it is possible to sample any value
in the interval. If `sampling="linear"`, it will be the minimum
            additive step between two samples. If `sampling="log"`, it will
            be the
minimum multiplier between two samples.
sampling: String. One of "linear", "log", "reverse_log". Defaults to
"linear". When sampling value, it always start from a value in range
[0.0, 1.0). The `sampling` argument decides how the value is
projected into the range of [min_value, max_value].
"linear": min_value + value * (max_value - min_value)
"log": min_value * (max_value / min_value) ^ value
"reverse_log":
max_value - min_value * ((max_value/min_value)^(1 - value) - 1)
default: Float, the default value to return for the parameter. If
unspecified, the default value will be `min_value`.
"""
def __init__(
self,
name,
min_value,
max_value,
step=None,
sampling="linear",
default=None,
**kwargs,
):
if step is not None:
self.step = float(step)
super().__init__(
name=name,
min_value=float(min_value),
max_value=float(max_value),
step=step,
sampling=sampling,
default=default,
**kwargs,
)
def __repr__(self):
return (
f"Float(name: '{self.name}', min_value: '{self.min_value}', "
f"max_value: '{self.max_value}', step: '{self.step}', "
f"sampling: '{self.sampling}', default: '{self.default}')"
)
@property
def default(self):
return self._default if self._default is not None else self.min_value
def prob_to_value(self, prob):
if self.step is None:
return self._sample_numerical_value(prob)
return self._sample_with_step(prob)
def value_to_prob(self, value):
if self.step is None:
return self._numerical_to_prob(value)
return self._to_prob_with_step(value)
def get_config(self):
config = super().get_config()
config["min_value"] = self.min_value
config["max_value"] = self.max_value
config["step"] = self.step
config["sampling"] = self.sampling
return config
@classmethod
def from_proto(cls, proto):
conditions = [
conditions_mod.Condition.from_proto(c) for c in proto.conditions
]
return cls(
name=proto.name,
min_value=proto.min_value,
max_value=proto.max_value,
step=proto.step or None,
sampling=hp_utils.sampling_from_proto(proto.sampling),
default=proto.default,
conditions=conditions,
)
def to_proto(self):
return protos.get_proto().Float(
name=self.name,
min_value=self.min_value,
max_value=self.max_value,
step=self.step if self.step is not None else 0.0,
sampling=hp_utils.sampling_to_proto(self.sampling),
default=self.default,
conditions=[c.to_proto() for c in self.conditions],
)
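# Hedged worked example (assumption, not part of the original module):
# prob_to_value and value_to_prob are inverse maps between [0, 1) and the
# hyperparameter's range. For
#
#   hp = Float("learning_rate", min_value=0.001, max_value=10, sampling="log")
#
# prob_to_value(0.5) == 0.001 * (10 / 0.001) ** 0.5 == 0.1, and
# value_to_prob(0.1) == 0.5.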
| keras-tuner/keras_tuner/engine/hyperparameters/hp_types/float_hp.py/0 | {
"file_path": "keras-tuner/keras_tuner/engine/hyperparameters/hp_types/float_hp.py",
"repo_id": "keras-tuner",
"token_count": 2345
} | 144 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import datetime
import threading
import time
import numpy as np
import pytest
import keras_tuner
from keras_tuner.engine import oracle as oracle_module
from keras_tuner.engine import trial as trial_module
from keras_tuner.tuners import gridsearch
class OracleStub(oracle_module.Oracle):
def __init__(self, directory, **kwargs):
super().__init__(**kwargs)
self.score_trial_called = False
self._set_project_dir(directory=directory, project_name="name")
def populate_space(self, trial_id):
return {
"values": {"hp1": "populate_space"},
"status": trial_module.TrialStatus.RUNNING,
}
def score_trial(self, trial_id):
super().score_trial(trial_id)
self.score_trial_called = True
def test_hp_not_provided_for_tune_new_entry_error(tmp_path):
with pytest.raises(ValueError, match="tune_new_entries=False"):
OracleStub(
directory=tmp_path, objective="val_loss", tune_new_entries=False
)
def test_hp_not_provided_for_allow_new_entry_error(tmp_path):
with pytest.raises(ValueError, match="allow_new_entries=False"):
OracleStub(
directory=tmp_path, objective="val_loss", allow_new_entries=False
)
def test_objective_not_found_error(tmp_path):
oracle = OracleStub(directory=tmp_path, objective="val_loss")
trial = oracle.create_trial(tuner_id="a")
with pytest.raises(ValueError, match="Objective value missing"):
oracle.update_trial(trial_id=trial.trial_id, metrics={"unknown": 0.5})
def test_multi_objective_found(tmp_path):
oracle = OracleStub(directory=tmp_path, objective=["val_loss", "mse"])
trial = oracle.create_trial(tuner_id="a")
oracle.update_trial(
trial_id=trial.trial_id, metrics={"val_loss": 0.5, "mse": 0.1}
)
def test_private_populate_space_deprecated_and_call_public(tmp_path):
oracle = OracleStub(directory=tmp_path, objective="val_loss")
with pytest.deprecated_call():
assert isinstance(oracle._populate_space("100"), dict)
def test_private_score_trial_deprecated_and_call_public(tmp_path):
oracle = OracleStub(directory=tmp_path, objective="val_loss")
trial = oracle.create_trial(tuner_id="a")
oracle.update_trial(trial_id=trial.trial_id, metrics={"val_loss": 0.5})
with pytest.deprecated_call():
oracle._score_trial(trial)
assert oracle.score_trial_called
def test_import_objective_from_oracle():
# This test is for backward compatibility.
from keras_tuner.engine.oracle import Objective
assert Objective is keras_tuner.Objective
def test_duplicate(tmp_path):
class MyOracle(OracleStub):
def populate_space(self, trial_id):
values = {"hp1": 1}
if len(self.ongoing_trials) > 0:
assert self._duplicate(values)
return {
"values": values,
"status": trial_module.TrialStatus.RUNNING,
}
oracle = MyOracle(directory=tmp_path, objective="val_loss")
oracle.create_trial(tuner_id="a")
oracle.create_trial(tuner_id="b")
assert len(oracle.ongoing_trials) == 2
def test_end_trial_backward_compatible(tmp_path):
oracle = OracleStub(directory=tmp_path, objective="val_loss")
trial = oracle.create_trial(tuner_id="a")
oracle.update_trial(trial.trial_id, {"val_loss": 1.0})
oracle.end_trial(trial.trial_id, "COMPLETE")
def test_not_duplicate(tmp_path):
class MyOracle(OracleStub):
def populate_space(self, trial_id):
values = {"hp1": len(self.ongoing_trials)}
assert not self._duplicate(values)
return {
"values": values,
"status": trial_module.TrialStatus.RUNNING,
}
oracle = MyOracle(directory=tmp_path, objective="val_loss")
oracle.create_trial(tuner_id="a")
oracle.create_trial(tuner_id="b")
assert len(oracle.ongoing_trials) == 2
def test_new_hp_duplicate(tmp_path):
class MyOracle(OracleStub):
def populate_space(self, trial_id):
values = {"hp1": 1}
assert not self._duplicate(values)
if len(self.end_order) > 0:
values["hp2"] = 2
assert self._duplicate(values)
return {
"values": values,
"status": trial_module.TrialStatus.RUNNING,
}
oracle = MyOracle(directory=tmp_path, objective="val_loss")
trial = oracle.create_trial(tuner_id="a")
trial.hyperparameters.values["hp2"] = 2
oracle.update_trial(trial.trial_id, {"val_loss": 3.0})
oracle.end_trial(trial)
oracle.create_trial(tuner_id="b")
assert len(oracle.start_order) == 2
def test_default_no_retry(tmp_path):
oracle = OracleStub(directory=tmp_path, objective="val_loss")
trial_1 = oracle.create_trial(tuner_id="a")
trial_1.status = trial_module.TrialStatus.INVALID
trial_1.message = "error1"
oracle.end_trial(trial_1)
trial_2 = oracle.create_trial(tuner_id="a")
assert trial_1.trial_id != trial_2.trial_id
def test_retry_invalid_trial(tmp_path):
oracle = OracleStub(
directory=tmp_path, objective="val_loss", max_retries_per_trial=1
)
trial_1 = oracle.create_trial(tuner_id="a")
trial_1.status = trial_module.TrialStatus.INVALID
trial_1.message = "error1"
oracle.end_trial(trial_1)
# This is the retry for the trial.
trial_2 = oracle.create_trial(tuner_id="a")
assert trial_1.trial_id == trial_2.trial_id
# Retried once. This is a new trial.
trial_3 = oracle.create_trial(tuner_id="b")
assert trial_1.trial_id != trial_3.trial_id
def test_is_nan_mark_as_invalid(tmp_path):
oracle = OracleStub(
directory=tmp_path, objective="val_loss", max_retries_per_trial=1
)
trial = oracle.create_trial(tuner_id="a")
oracle.update_trial(trial.trial_id, metrics={"val_loss": float("nan")})
trial.status = trial_module.TrialStatus.COMPLETED
trial.message = "error1"
oracle.end_trial(trial)
assert (
oracle.trials[trial.trial_id].status == trial_module.TrialStatus.INVALID
)
def test_no_retry_for_failed_trial(tmp_path):
oracle = OracleStub(
directory=tmp_path, objective="val_loss", max_retries_per_trial=1
)
trial_1 = oracle.create_trial(tuner_id="a")
# Failed, so no retry.
trial_1.status = trial_module.TrialStatus.FAILED
trial_1.message = "error1"
oracle.end_trial(trial_1)
trial_2 = oracle.create_trial(tuner_id="a")
assert trial_1.trial_id != trial_2.trial_id
def test_consecutive_failures_in_limit(tmp_path):
oracle = OracleStub(
directory=tmp_path, objective="val_loss", max_retries_per_trial=2
)
# (1 run + 2 retry) * 2 trial = 6
for _ in range(6):
trial = oracle.create_trial(tuner_id="a")
trial.status = trial_module.TrialStatus.INVALID
trial.message = "error1"
oracle.end_trial(trial)
for _ in range(3):
trial = oracle.create_trial(tuner_id="a")
trial.status = trial_module.TrialStatus.COMPLETED
oracle.update_trial(trial.trial_id, metrics={"val_loss": 0.5})
oracle.end_trial(trial)
def test_too_many_consecutive_failures(tmp_path):
oracle = OracleStub(
directory=tmp_path, objective="val_loss", max_retries_per_trial=2
)
with pytest.raises(RuntimeError, match="Number of consecutive") as e:
for _ in range(3):
trial = oracle.create_trial(tuner_id="a")
# Failed, so no retry.
trial.status = trial_module.TrialStatus.FAILED
trial.message = "custom_error_info"
oracle.end_trial(trial)
assert "custom_error_info" in str(e)
def test_synchronized_functions_in_same_oracle_same_function(tmp_path):
class MyOracle(OracleStub):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.log = []
@oracle_module.synchronized
def create_trial(self, tuner_id):
# Log ID at the beginning.
self.log.append(tuner_id)
time.sleep(0.5)
# Log ID in the end.
self.log.append(tuner_id)
return super().create_trial(tuner_id)
oracle = MyOracle(directory=tmp_path)
def thread_function(i):
oracle.create_trial(tuner_id=str(i))
threads = []
for i in range(5):
thread = threading.Thread(target=thread_function, args=(i,))
threads.append(thread)
thread.start()
for thread in threads:
thread.join()
for i in range(5):
        # The two log entries for the same ID must be adjacent:
        # no other thread interrupted between start and end.
assert oracle.log[i * 2] == oracle.log[i * 2 + 1]
def test_synchronized_functions_in_same_oracle_diff_function(tmp_path):
class MyOracle(OracleStub):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.log = []
@oracle_module.synchronized
def create_trial(self, tuner_id):
self.log.append("create")
time.sleep(0.5)
self.log.append("create")
return super().create_trial(tuner_id)
@oracle_module.synchronized
def end_trial(self, trial):
self.log.append("end")
time.sleep(0.5)
self.log.append("end")
return super().end_trial(trial)
oracle = MyOracle(
directory=tmp_path,
objective="val_loss",
)
trial = oracle.create_trial(tuner_id="a")
trial.status = trial_module.TrialStatus.COMPLETED
oracle.update_trial(trial.trial_id, metrics={"val_loss": 0.5})
def thread_function_create():
oracle.create_trial(tuner_id="b")
def thread_function_end():
oracle.end_trial(trial)
thread_create = threading.Thread(target=thread_function_create)
thread_end = threading.Thread(target=thread_function_end)
thread_create.start()
thread_end.start()
thread_create.join()
thread_end.join()
for i in range(2):
        # The two log entries for the same ID must be adjacent:
        # no other thread interrupted between start and end.
assert oracle.log[i * 2] == oracle.log[i * 2 + 1]
def test_synchronized_functions_in_different_oracle_doesnt_block(tmp_path):
log = []
class MyOracle(OracleStub):
@oracle_module.synchronized
def create_trial(self, tuner_id):
# Log ID at the beginning.
log.append(tuner_id)
time.sleep(0.5)
# Log ID in the end.
log.append(tuner_id)
return super().create_trial(tuner_id)
def thread_function(i):
oracle = MyOracle(directory=tmp_path)
oracle.create_trial(tuner_id=str(i))
threads = []
for i in range(5):
thread = threading.Thread(target=thread_function, args=(i,))
threads.append(thread)
thread.start()
for thread in threads:
thread.join()
# All threads begin to sleep before anyone ends.
assert set(log[:5]) == set(log[5:])
def test_oracle_return_same_trial_if_same_tuner(tmp_path):
oracle = OracleStub(
directory=tmp_path, objective="val_loss", max_retries_per_trial=1
)
trial_1 = oracle.create_trial(tuner_id="a")
trial_2 = oracle.create_trial(tuner_id="a")
assert trial_1.trial_id == trial_2.trial_id
def test_oracle_reload_ongoing_trials_to_retry(tmp_path):
oracle = OracleStub(
directory=tmp_path, objective="val_loss", max_retries_per_trial=1
)
trial_1 = oracle.create_trial(tuner_id="a")
trial_2 = oracle.create_trial(tuner_id="b")
oracle_2 = OracleStub(
directory=tmp_path, objective="val_loss", max_retries_per_trial=1
)
oracle_2.reload()
    trial_3 = oracle_2.create_trial(tuner_id="a")
    trial_4 = oracle_2.create_trial(tuner_id="b")
assert set([trial_3.trial_id, trial_4.trial_id]) == set(
[trial_1.trial_id, trial_2.trial_id]
)
def test_get_best_trial_with_nans(tmp_path):
oracle = OracleStub(
directory=tmp_path, objective="val_loss", max_retries_per_trial=1
)
for i in range(10):
trial = oracle.create_trial(tuner_id="a")
oracle.update_trial(trial.trial_id, {"val_loss": np.random.rand()})
trial.status = trial_module.TrialStatus.COMPLETED
oracle.end_trial(trial)
best_trial = oracle.create_trial(tuner_id="a")
oracle.update_trial(best_trial.trial_id, {"val_loss": -0.1})
best_trial.status = trial_module.TrialStatus.COMPLETED
oracle.end_trial(best_trial)
trial = oracle.create_trial(tuner_id="a")
oracle.update_trial(trial.trial_id, {"val_loss": float("nan")})
trial.status = trial_module.TrialStatus.COMPLETED
oracle.end_trial(trial)
assert len(oracle.get_best_trials()) > 0
assert oracle.get_best_trials()[0].trial_id == best_trial.trial_id
def test_overwrite_false_resume(tmp_path):
oracle = OracleStub(
directory=tmp_path, objective="val_loss", max_retries_per_trial=1
)
for i in range(10):
trial = oracle.create_trial(tuner_id="a")
oracle.update_trial(trial.trial_id, {"val_loss": np.random.rand()})
trial.status = trial_module.TrialStatus.COMPLETED
oracle.end_trial(trial)
trial = oracle.create_trial(tuner_id="a")
trial_id = trial.trial_id
oracle = OracleStub(
directory=tmp_path, objective="val_loss", max_retries_per_trial=1
)
oracle.reload()
trial = oracle.create_trial(tuner_id="a")
oracle.update_trial(trial.trial_id, {"val_loss": np.random.rand()})
trial.status = trial_module.TrialStatus.COMPLETED
oracle.end_trial(trial)
assert trial.trial_id == trial_id
assert (
oracle.get_trial(trial_id).status == trial_module.TrialStatus.COMPLETED
)
def test_display_format_duration_large_d():
oracle = gridsearch.GridSearchOracle()
d = datetime.datetime(2020, 5, 17) - datetime.datetime(2020, 5, 10)
oracle.verbose = "auto"
assert oracle_module.Display(oracle).format_duration(d) == "7d 00h 00m 00s"
assert oracle.verbose == 1
| keras-tuner/keras_tuner/engine/oracle_test.py/0 | {
"file_path": "keras-tuner/keras_tuner/engine/oracle_test.py",
"repo_id": "keras-tuner",
"token_count": 6364
} | 145 |
/* Copyright 2019 The KerasTuner Authors. All Rights Reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
// Protos for distributed GRPC service
syntax = "proto3";
package keras_tuner;
import "keras_tuner/protos/keras_tuner.proto";
message GetSpaceRequest {}
message GetSpaceResponse {
keras_tuner.HyperParameters hyperparameters = 1;
}
message UpdateSpaceRequest {
keras_tuner.HyperParameters hyperparameters = 1;
}
message UpdateSpaceResponse {}
message CreateTrialRequest {
string tuner_id = 1;
}
message CreateTrialResponse {
keras_tuner.Trial trial = 1;
}
message UpdateTrialRequest {
string trial_id = 1;
map<string, double> metrics = 2;
int64 step = 3;
}
message UpdateTrialResponse {
keras_tuner.Trial trial = 1;
}
message EndTrialRequest {
keras_tuner.Trial trial = 1;
}
message EndTrialResponse {}
message GetBestTrialsRequest {
int64 num_trials = 1;
}
message GetBestTrialsResponse {
repeated keras_tuner.Trial trials = 1;
}
message GetTrialRequest {
string trial_id = 1;
}
message GetTrialResponse {
keras_tuner.Trial trial = 1;
}
service Oracle {
// Return the HyperParameter search space.
rpc GetSpace(GetSpaceRequest) returns (GetSpaceResponse) {}
// Updates the HyperParameter search space.
rpc UpdateSpace(UpdateSpaceRequest) returns (UpdateSpaceResponse) {}
// Creates a Trial.
rpc CreateTrial(CreateTrialRequest) returns (CreateTrialResponse) {}
// Updates a Trial with metrics and a step.
rpc UpdateTrial(UpdateTrialRequest) returns (UpdateTrialResponse) {}
// Ends a Trial.
rpc EndTrial(EndTrialRequest) returns (EndTrialResponse) {}
// Gets the best Trials.
rpc GetBestTrials(GetBestTrialsRequest) returns (GetBestTrialsResponse) {}
// Gets a Trial by ID.
rpc GetTrial(GetTrialRequest) returns (GetTrialResponse) {}
}
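// Hedged note (assumption, not part of the original proto): with standard
// gRPC Python codegen, this service yields an `OracleStub` class in
// `service_pb2_grpc`, so a worker might call, roughly:
//   stub = service_pb2_grpc.OracleStub(grpc.insecure_channel("ip:port"))
//   trial = stub.CreateTrial(service_pb2.CreateTrialRequest(tuner_id="tuner0"))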
| keras-tuner/keras_tuner/protos/service.proto/0 | {
"file_path": "keras-tuner/keras_tuner/protos/service.proto",
"repo_id": "keras-tuner",
"token_count": 770
} | 146 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
import numpy as np
import pytest
import keras_tuner
from keras_tuner.backend import keras
from keras_tuner.engine import hyperparameters as hp_module
from keras_tuner.engine import trial as trial_module
from keras_tuner.tuners import bayesian as bo_module
def build_model(hp):
model = keras.Sequential()
model.add(keras.layers.Flatten(input_shape=(2, 2)))
for i in range(3):
model.add(
keras.layers.Dense(
units=hp.Int(f"units_{str(i)}", 2, 4, 2), activation="relu"
)
)
model.add(keras.layers.Dense(2, activation="softmax"))
model.compile(
optimizer=keras.optimizers.Adam(
hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])
),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
return model
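# Hedged usage sketch (assumption, not part of the original tests): how
# build_model above would typically be wired into a tuner in user code; data
# shapes are hypothetical.
#
#   tuner = bo_module.BayesianOptimization(
#       build_model,
#       objective="val_accuracy",
#       max_trials=5,
#       directory="/tmp/bo_demo",  # hypothetical path
#   )
#   x = np.random.rand(16, 2, 2)
#   y = np.random.randint(0, 2, (16,))
#   tuner.search(x, y, epochs=1, validation_data=(x, y))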
def test_scipy_not_install_error(tmp_path):
scipy_module = keras_tuner.tuners.bayesian.scipy
keras_tuner.tuners.bayesian.scipy = None
with pytest.raises(ImportError, match="Please install scipy"):
keras_tuner.BayesianOptimization(
hypermodel=build_model,
directory=tmp_path,
)
keras_tuner.tuners.bayesian.scipy = scipy_module
def test_sklearn_not_install_error(tmp_path):
sklearn_module = keras_tuner.tuners.bayesian.sklearn
keras_tuner.tuners.bayesian.sklearn = None
with pytest.raises(ImportError, match="Please install scikit-learn"):
keras_tuner.BayesianOptimization(
hypermodel=build_model,
directory=tmp_path,
)
keras_tuner.tuners.bayesian.sklearn = sklearn_module
def test_bayesian_oracle(tmp_path):
hps = hp_module.HyperParameters()
hps.Choice("a", [1, 2], default=1)
hps.Int("b", 3, 10, default=3)
hps.Float("c", 0, 1, 0.1, default=0)
hps.Fixed("d", 7)
hps.Choice("e", [9, 0], default=9)
oracle = bo_module.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"),
max_trials=20,
num_initial_points=2,
hyperparameters=hps,
)
oracle._set_project_dir(tmp_path, "untitled")
for i in range(5):
trial = oracle.create_trial(str(i))
oracle.update_trial(trial.trial_id, {"score": i})
trial.status = "COMPLETED"
oracle.end_trial(trial)
def test_bayesian_oracle_with_zero_y(tmp_path):
hps = hp_module.HyperParameters()
hps.Choice("a", [1, 2], default=1)
hps.Int("b", 3, 10, default=3)
hps.Float("c", 0, 1, 0.1, default=0)
hps.Fixed("d", 7)
hps.Choice("e", [9, 0], default=9)
oracle = bo_module.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"),
max_trials=20,
num_initial_points=2,
hyperparameters=hps,
)
oracle._set_project_dir(tmp_path, "untitled")
for i in range(5):
trial = oracle.create_trial(str(i))
oracle.update_trial(trial.trial_id, {"score": 0})
trial.status = "COMPLETED"
oracle.end_trial(trial)
def test_bayesian_dynamic_space(tmp_path):
hps = hp_module.HyperParameters()
hps.Choice("a", [1, 2], default=1)
oracle = bo_module.BayesianOptimizationOracle(
objective="val_acc", max_trials=20, num_initial_points=10
)
oracle._set_project_dir(tmp_path, "untitled")
oracle.hyperparameters = hps
for i in range(10):
oracle.populate_space(str(i))
hps.Int("b", 3, 10, default=3)
assert "b" in oracle.populate_space("1_0")["values"]
hps.Float("c", 0, 1, 0.1, default=0)
assert "c" in oracle.populate_space("1_1")["values"]
hps.Fixed("d", 7)
assert "d" in oracle.populate_space("1_2")["values"]
hps.Choice("e", [9, 0], default=9)
assert "e" in oracle.populate_space("1_3")["values"]
def test_bayesian_save_reload(tmp_path):
hps = hp_module.HyperParameters()
hps.Choice("a", [1, 2], default=1)
hps.Choice("b", [3, 4], default=3)
hps.Choice("c", [5, 6], default=5)
hps.Choice("d", [7, 8], default=7)
hps.Choice("e", [9, 0], default=9)
oracle = bo_module.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"),
max_trials=20,
hyperparameters=hps,
)
oracle._set_project_dir(tmp_path, "untitled")
for _ in range(3):
trial = oracle.create_trial("tuner_id")
oracle.update_trial(trial.trial_id, {"score": 1.0})
trial.status = "COMPLETED"
oracle.end_trial(trial)
oracle.save()
oracle = bo_module.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"),
max_trials=20,
hyperparameters=hps,
)
oracle._set_project_dir(tmp_path, "untitled")
oracle.reload()
for _ in range(3):
trial = oracle.create_trial("tuner_id")
oracle.update_trial(trial.trial_id, {"score": 1.0})
trial.status = "COMPLETED"
oracle.end_trial(trial)
assert len(oracle.trials) == 6
def test_bayesian_optimization_tuner(tmp_path):
tuner = bo_module.BayesianOptimization(
build_model, objective="val_accuracy", max_trials=15, directory=tmp_path
)
assert isinstance(tuner.oracle, bo_module.BayesianOptimizationOracle)
def test_bayesian_optimization_tuner_set_alpha_beta(tmp_path):
tuner = bo_module.BayesianOptimization(
build_model,
alpha=1e-4,
beta=2.6,
objective="val_accuracy",
max_trials=15,
directory=tmp_path,
)
assert isinstance(tuner.oracle, bo_module.BayesianOptimizationOracle)
def test_save_before_result(tmp_path):
hps = hp_module.HyperParameters()
hps.Choice("a", [1, 2], default=1)
hps.Int("b", 3, 10, default=3)
hps.Float("c", 0, 1, 0.1, default=0)
hps.Fixed("d", 7)
hps.Choice("e", [9, 0], default=9)
oracle = bo_module.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"),
max_trials=10,
hyperparameters=hps,
)
oracle._set_project_dir(tmp_path, "untitled")
oracle.populate_space(str(1))
oracle.save()
def test_bayesian_oracle_maximize(tmp_path):
hps = hp_module.HyperParameters()
hps.Int("a", -100, 100)
oracle = bo_module.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", direction="max"),
max_trials=20,
hyperparameters=hps,
num_initial_points=2,
)
oracle._set_project_dir(tmp_path, "untitled")
# Make examples with high 'a' and high score.
for i in range(5):
trial = trial_module.Trial(hyperparameters=hps.copy())
trial.hyperparameters.values["a"] = 10 * i
trial.score = i
trial.status = "COMPLETED"
oracle.trials[trial.trial_id] = trial
# Make examples with low 'a' and low score
for i in range(5):
trial = trial_module.Trial(hyperparameters=hps.copy())
trial.hyperparameters.values["a"] = -10 * i
trial.score = -i
trial.status = "COMPLETED"
oracle.trials[trial.trial_id] = trial
trial = oracle.create_trial("tuner0")
assert trial.status == "RUNNING"
# Assert that the oracle suggests hps it thinks will maximize.
assert trial.hyperparameters.get("a") > 0
def test_hyperparameters_added(tmp_path):
hps = hp_module.HyperParameters()
hps.Int("a", -100, 100)
oracle = bo_module.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", direction="max"),
max_trials=20,
hyperparameters=hps,
num_initial_points=2,
)
oracle._set_project_dir(tmp_path, "untitled")
# Populate initial trials.
for i in range(10):
trial = trial_module.Trial(hyperparameters=hps.copy())
trial.hyperparameters.values["a"] = 10 * i
trial.score = i
trial.status = "COMPLETED"
oracle.trials[trial.trial_id] = trial
# A new trial discovered a new hp and synced to oracle.hyperparameters.
new_hps = hp_module.HyperParameters()
new_hps.Float("b", 3.2, 6.4, step=0.2, default=3.6)
new_hps.Boolean("c", default=True)
oracle.update_space(new_hps)
# Make a new trial, it should have b set.
trial = oracle.create_trial("tuner0")
assert trial.status == "RUNNING"
assert "b" in trial.hyperparameters.values
assert "c" in trial.hyperparameters.values
def test_step_respected(tmp_path):
hps = hp_module.HyperParameters()
hps.Float("c", 0, 10, step=3)
oracle = bo_module.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", direction="max"),
max_trials=20,
hyperparameters=hps,
num_initial_points=2,
)
oracle._set_project_dir(tmp_path, "untitled")
# Populate initial trials.
for i in range(10):
trial = trial_module.Trial(hyperparameters=hps.copy())
trial.hyperparameters.values["c"] = 3.0
trial.score = i
trial.status = "COMPLETED"
oracle.trials[trial.trial_id] = trial
trial = oracle.create_trial("tuner0")
# Check that oracle respects the `step` param.
assert trial.hyperparameters.get("c") in {0, 3, 6, 9}
def test_float_optimization(tmp_path):
class PolynomialTuner(keras_tuner.engine.base_tuner.BaseTuner):
def run_trial(self, trial):
hp = trial.hyperparameters
return -1 * hp["a"] ** 3 + hp["b"] ** 3 + hp["c"] - abs(hp["d"])
hps = hp_module.HyperParameters()
hps.Float("a", -1, 1)
hps.Float("b", -1, 1)
hps.Float("c", -1, 1)
hps.Float("d", -1, 1)
tuner = PolynomialTuner(
oracle=keras_tuner.oracles.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "max"),
hyperparameters=hps,
max_trials=50,
),
directory=tmp_path,
)
tuner.search()
atol, rtol = 1e-1, 1e-1
best_trial = tuner.oracle.get_best_trials()[0]
best_hps = best_trial.hyperparameters
assert np.isclose(best_trial.score, 3, atol=atol, rtol=rtol)
assert np.isclose(best_hps["a"], -1, atol=atol, rtol=rtol)
assert np.isclose(best_hps["b"], 1, atol=atol, rtol=rtol)
assert np.isclose(best_hps["c"], 1, atol=atol, rtol=rtol)
assert np.isclose(best_hps["d"], 0, atol=atol, rtol=rtol)
def test_distributed_optimization(tmp_path):
hps = hp_module.HyperParameters()
hps.Int("a", 0, 10)
hps.Float("b", -1, 1, step=0.1)
hps.Float("c", 1e-5, 1e-2, sampling="log")
def evaluate(hp):
# Minimum at a=4, b=1, c=1e-3 with score=-1
return abs(hp["a"] - 4) - hp["b"] + 0.1 * abs(3 + math.log(hp["c"], 10))
oracle = bo_module.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "min"),
hyperparameters=hps,
max_trials=60,
)
oracle._set_project_dir(tmp_path, "untitled")
tuners = 4
for _ in range(10):
trials = []
for i in range(tuners):
trial = oracle.create_trial(f"tuner_{str(i)}")
trials.append(trial)
for trial in trials:
oracle.update_trial(
trial.trial_id, {"score": evaluate(trial.hyperparameters)}
)
for trial in trials:
trial.status = "COMPLETED"
oracle.end_trial(trial)
atol, rtol = 1e-1, 1e-1
best_trial = oracle.get_best_trials()[0]
best_hps = best_trial.hyperparameters
# The minimum is not always found but it is always close.
assert best_trial.score < -0.8, best_hps.values
assert np.isclose(best_hps["a"], 4, atol=atol, rtol=rtol)
assert np.isclose(best_hps["b"], 1, atol=atol, rtol=rtol)
# For log-scale param, just check that the order of magnitude is correct.
log_best_c = math.log(best_hps["c"], 10)
assert log_best_c > -4 and log_best_c < -2
def test_interleaved_distributed_optimization(tmp_path):
hps = hp_module.HyperParameters()
hps.Float("a", -1, 1)
hps.Float("b", -1, 1)
hps.Float("c", -1, 1)
hps.Float("d", -1, 1)
def evaluate(hp):
        # Minimized at a=1, b=-1, c=-1, |d|=1, where the score is -4.
return -1 * hp["a"] ** 3 + hp["b"] ** 3 + hp["c"] - abs(hp["d"])
oracle = bo_module.BayesianOptimizationOracle(
objective=keras_tuner.Objective("score", "min"),
hyperparameters=hps,
max_trials=60,
num_initial_points=2,
)
oracle._set_project_dir(tmp_path, "untitled")
# Run 4 trials on 2 tuners
# Start both tuners at the same time
trial_1 = oracle.create_trial("tuner_0")
trial_2 = oracle.create_trial("tuner_1")
# tuner_0 finishes trial_1 before tuner_1 finishes
oracle.update_trial(
trial_1.trial_id, {"score": evaluate(trial_1.hyperparameters)}
)
trial_1.status = "COMPLETED"
oracle.end_trial(trial_1)
# tuner_0 request a new trial (trial_3)
trial_3 = oracle.create_trial("tuner_0")
# tuner_1 finishes trial_2
oracle.update_trial(
trial_2.trial_id, {"score": evaluate(trial_2.hyperparameters)}
)
trial_2.status = "COMPLETED"
oracle.end_trial(trial_2)
# tuner_1 requests the final new trial (trial_4)
# the Bayesian optimizer will use ongoing trial_3 to hallucinate
trial_4 = oracle.create_trial("tuner_1")
# tuner_0 finishes trial_3
oracle.update_trial(
trial_3.trial_id, {"score": evaluate(trial_3.hyperparameters)}
)
trial_3.status = "COMPLETED"
oracle.end_trial(trial_3)
# tuner_1 finishes trial_4
oracle.update_trial(
trial_4.trial_id, {"score": evaluate(trial_4.hyperparameters)}
)
trial_4.status = "COMPLETED"
oracle.end_trial(trial_4)
# Reaching this point without error is the success criterion.
assert True
def test_exhausted_search_space(tmp_path):
class MyTuner(bo_module.BayesianOptimization):
def run_trial(self, trial, *args, **kwargs):
hp = trial.hyperparameters
hp.Boolean("boolean")
hp.Boolean("boolean2")
return [np.random.rand()]
tuner = MyTuner(
alpha=1e-4,
beta=2.6,
max_trials=15,
directory=tmp_path,
)
tuner.search()
assert len(tuner.oracle.trials) == 4
def test_skip_failed_trials(tmp_path):
class MyTuner(bo_module.BayesianOptimization):
def run_trial(self, trial, *args, **kwargs):
hp = trial.hyperparameters
hp.Boolean("boolean")
hp.Boolean("boolean2")
hp.Boolean("boolean3")
hp.Boolean("boolean4")
if len(self.oracle.start_order) == 1:
raise keras_tuner.errors.FailedTrialError()
return [np.random.rand()]
tuner = MyTuner(
alpha=1e-4,
beta=2.6,
max_trials=15,
directory=tmp_path,
)
tuner.search()
assert any(
map(lambda x: x.status == "FAILED", tuner.oracle.trials.values())
)
| keras-tuner/keras_tuner/tuners/bayesian_test.py/0 | {
"file_path": "keras-tuner/keras_tuner/tuners/bayesian_test.py",
"repo_id": "keras-tuner",
"token_count": 6926
} | 147 |
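# A quick standalone check of the objective used in
# `test_distributed_optimization` above: at the commented optimum a=4, b=1,
# c=1e-3 the score is |4 - 4| - 1 + 0.1 * |3 + log10(1e-3)| = -1. This is an
# illustrative sketch, not part of the test suite.
import math

def distributed_objective(a, b, c):
    return abs(a - 4) - b + 0.1 * abs(3 + math.log(c, 10))

assert math.isclose(distributed_objective(4, 1, 1e-3), -1.0)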
import os
import re
import subprocess
from keras import backend
BACKEND_REQ = {
"tensorflow": "tensorflow",
"torch": "torch torchvision",
"jax": "jax jaxlib",
}
def setup_package():
subprocess.run("rm -rf tmp_build_dir", shell=True)
build_process = subprocess.run(
"python3 pip_build.py",
capture_output=True,
text=True,
shell=True,
)
print(build_process.stdout)
whl_matches = re.findall(
r"[^\s]*\.whl",
build_process.stdout,
)
# Check the match list before indexing; `findall(...)[-1]` would raise
# IndexError on an empty list and skip the error message below.
if not whl_matches:
print(build_process.stderr)
raise ValueError("Installing Keras package unsuccessful.")
return whl_matches[-1]
def create_virtualenv():
env_setup = [
# Create virtual environment
"python3 -m venv test_env",
]
os.environ["PATH"] = (
"/test_env/bin/" + os.pathsep + os.environ.get("PATH", "")
)
run_commands_local(env_setup)
def manage_venv_installs(whl_path):
other_backends = list(set(BACKEND_REQ.keys()) - {backend.backend()})
install_setup = [
# Installs the backend's package and common requirements
"pip install " + BACKEND_REQ[backend.backend()],
"pip install -r requirements-common.txt",
"pip install pytest",
# Ensure other backends are uninstalled
"pip uninstall -y "
+ BACKEND_REQ[other_backends[0]]
+ " "
+ BACKEND_REQ[other_backends[1]],
# Install `.whl` package
"pip install " + whl_path,
]
run_commands_venv(install_setup)
def run_keras_flow():
test_script = [
# Runs the example script
"python -m pytest integration_tests/basic_full_flow.py",
]
run_commands_venv(test_script)
def cleanup():
cleanup_script = [
# Exits the virtual environment, then deletes its files and any
# miscellaneous install logs.
"exit",
"rm -rf test_env",
"rm -rf tmp_build_dir",
"rm -f *+cpu",
]
run_commands_local(cleanup_script)
def run_commands_local(commands):
for command in commands:
print(f"Running command: {command}")
subprocess.run(command, shell=True)
def run_commands_venv(commands):
for command in commands:
print(f"Running command: {command}")
cmd_with_args = command.split(" ")
cmd_with_args[0] = "test_env/bin/" + cmd_with_args[0]
p = subprocess.Popen(cmd_with_args)
assert p.wait() == 0
def test_keras_imports():
try:
# Ensures packages from all backends are installed.
# Builds Keras core package and returns package file path.
whl_path = setup_package()
# Creates and activates a virtual environment.
create_virtualenv()
# Ensures the backend's package is installed
# and the other backends are uninstalled.
manage_venv_installs(whl_path)
# Runs test of basic flow in Keras Core.
# Tests for backend-specific imports and `model.fit()`.
run_keras_flow()
# Removes virtual environment and associated files
finally:
cleanup()
if __name__ == "__main__":
test_keras_imports()
| keras/integration_tests/import_test.py/0 | {
"file_path": "keras/integration_tests/import_test.py",
"repo_id": "keras",
"token_count": 1371
} | 148 |
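# Minimal sketch of how `manage_venv_installs` above derives its pip
# commands, assuming the active backend is "jax" (the real code queries
# `keras.backend.backend()` instead of hard-coding it).
BACKEND_REQ = {
    "tensorflow": "tensorflow",
    "torch": "torch torchvision",
    "jax": "jax jaxlib",
}
current_backend = "jax"
other_backends = sorted(set(BACKEND_REQ) - {current_backend})
install_cmd = "pip install " + BACKEND_REQ[current_backend]
uninstall_cmd = "pip uninstall -y " + " ".join(
    BACKEND_REQ[name] for name in other_backends
)
assert uninstall_cmd == "pip uninstall -y tensorflow torch torchvision"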
from keras.backend.jax import core
from keras.backend.jax import distribution_lib
from keras.backend.jax import image
from keras.backend.jax import linalg
from keras.backend.jax import math
from keras.backend.jax import nn
from keras.backend.jax import numpy
from keras.backend.jax import random
from keras.backend.jax.core import SUPPORTS_SPARSE_TENSORS
from keras.backend.jax.core import Variable
from keras.backend.jax.core import cast
from keras.backend.jax.core import compute_output_spec
from keras.backend.jax.core import cond
from keras.backend.jax.core import convert_to_numpy
from keras.backend.jax.core import convert_to_tensor
from keras.backend.jax.core import device_scope
from keras.backend.jax.core import is_tensor
from keras.backend.jax.core import scatter
from keras.backend.jax.core import shape
from keras.backend.jax.core import stop_gradient
from keras.backend.jax.core import vectorized_map
from keras.backend.jax.rnn import cudnn_ok
from keras.backend.jax.rnn import gru
from keras.backend.jax.rnn import lstm
from keras.backend.jax.rnn import rnn
| keras/keras/backend/jax/__init__.py/0 | {
"file_path": "keras/keras/backend/jax/__init__.py",
"repo_id": "keras",
"token_count": 385
} | 149 |
import tensorflow as tf
from tensorflow.experimental import numpy as tfnp
from keras.backend import config
from keras.backend import standardize_dtype
from keras.backend.common import dtypes
from keras.backend.tensorflow.core import cast
from keras.backend.tensorflow.core import convert_to_tensor
def cholesky(a):
out = tf.linalg.cholesky(a)
# tf.linalg.cholesky simply returns NaNs for non-positive definite matrices
return tf.debugging.check_numerics(out, "Cholesky")
def det(a):
return tf.linalg.det(a)
def eig(a):
return tf.linalg.eig(a)
def inv(a):
return tf.linalg.inv(a)
def lu_factor(a):
lu, p = tf.linalg.lu(a)
return lu, tf.math.invert_permutation(p)
def norm(x, ord=None, axis=None, keepdims=False):
x = convert_to_tensor(x)
x_shape = x.shape
ndim = x_shape.rank
if axis is None:
axis = tuple(range(ndim))
elif isinstance(axis, int):
axis = (axis,)
axis = axis[0] if len(axis) == 1 else axis
num_axes = 1 if isinstance(axis, int) else len(axis)
if num_axes == 1 and ord is None:
ord = "euclidean"
elif num_axes == 2 and ord is None:
ord = "fro"
if standardize_dtype(x.dtype) == "int64":
dtype = config.floatx()
else:
dtype = dtypes.result_type(x.dtype, float)
x = cast(x, dtype)
# Fast path to utilize `tf.linalg.norm`
if (num_axes == 1 and ord in ("euclidean", 1, 2, float("inf"))) or (
num_axes == 2 and ord in ("euclidean", "fro", 1, 2, float("inf"))
):
return tf.linalg.norm(x, ord=ord, axis=axis, keepdims=keepdims)
# Ref: jax.numpy.linalg.norm
if num_axes == 1 and ord not in ("fro", "nuc"):
if ord == float("-inf"):
return tf.math.reduce_min(
tf.math.abs(x), axis=axis, keepdims=keepdims
)
elif ord == 0:
return tf.math.reduce_sum(
tf.cast(tf.not_equal(x, 0), dtype=x.dtype),
axis=axis,
keepdims=keepdims,
)
else:
ord = convert_to_tensor(ord, dtype=x.dtype)
out = tf.math.reduce_sum(
tf.pow(tf.math.abs(x), ord), axis=axis, keepdims=keepdims
)
return tf.pow(out, 1.0 / ord)
elif num_axes == 2 and ord in ("nuc", float("-inf"), -2, -1):
row_axis, col_axis = axis[0], axis[1]
row_axis = row_axis + ndim if row_axis < 0 else row_axis
col_axis = col_axis + ndim if col_axis < 0 else col_axis
if ord == float("-inf"):
if not keepdims and row_axis > col_axis:
row_axis -= 1
x = tf.math.reduce_min(
tf.reduce_sum(tf.math.abs(x), axis=col_axis, keepdims=keepdims),
axis=row_axis,
keepdims=keepdims,
)
elif ord == -1:
if not keepdims and col_axis > row_axis:
col_axis -= 1
x = tf.math.reduce_min(
tf.reduce_sum(tf.math.abs(x), axis=row_axis, keepdims=keepdims),
axis=col_axis,
keepdims=keepdims,
)
else:
x = tfnp.moveaxis(x, axis, (-2, -1))
if ord == -2:
x = tf.math.reduce_min(
tf.linalg.svd(x, compute_uv=False), axis=-1
)
else:
x = tf.math.reduce_sum(
tf.linalg.svd(x, compute_uv=False), axis=-1
)
if keepdims:
x = tf.expand_dims(x, axis[0])
x = tf.expand_dims(x, axis[1])
return x
if num_axes == 1:
raise ValueError(
f"Invalid `ord` argument for vector norm. Received: ord={ord}"
)
elif num_axes == 2:
raise ValueError(
f"Invalid `ord` argument for matrix norm. Received: ord={ord}"
)
else:
raise ValueError(f"Invalid axis values. Received: axis={axis}")
def qr(x, mode="reduced"):
if mode not in {"reduced", "complete"}:
raise ValueError(
"`mode` argument value not supported. "
"Expected one of {'reduced', 'complete'}. "
f"Received: mode={mode}"
)
if mode == "reduced":
return tf.linalg.qr(x)
return tf.linalg.qr(x, full_matrices=True)
def solve(a, b):
# tensorflow.linalg.solve only supports same rank inputs
if tf.rank(b) == tf.rank(a) - 1:
b = tf.expand_dims(b, axis=-1)
return tf.squeeze(tf.linalg.solve(a, b), axis=-1)
return tf.linalg.solve(a, b)
def solve_triangular(a, b, lower=False):
if b.shape.ndims == a.shape.ndims - 1:
b = tf.expand_dims(b, axis=-1)
return tf.squeeze(
tf.linalg.triangular_solve(a, b, lower=lower), axis=-1
)
return tf.linalg.triangular_solve(a, b, lower=lower)
def svd(x, full_matrices=True, compute_uv=True):
s, u, v = tf.linalg.svd(
x, full_matrices=full_matrices, compute_uv=compute_uv
)
return u, s, tf.linalg.adjoint(v)
| keras/keras/backend/tensorflow/linalg.py/0 | {
"file_path": "keras/keras/backend/tensorflow/linalg.py",
"repo_id": "keras",
"token_count": 2627
} | 150 |
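# Hedged NumPy cross-check of the `ord=-inf` matrix-norm branch in `norm`
# above: the value is the minimum over rows of the absolute row sums,
# matching NumPy's reference implementation.
import numpy as np

x = np.array([[1.0, -2.0], [3.0, 4.0]])
manual = np.min(np.sum(np.abs(x), axis=1))
assert np.isclose(manual, np.linalg.norm(x, ord=-np.inf))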
"""Torch backend APIs.
# Note on device placement
Torch has a different device placement style compared to TF and JAX.
In short, variables/tensors are not created on GPU by default,
and the GPU cannot directly communicate with the CPU.
To bring Torch behavior in line with TF and JAX automated device placement,
we are doing the following to automate device placement if a GPU is available:
- Variables are created on GPU.
- Input data will be placed on GPU at the first `keras.layers.Layer` call.
- Tensor creation happens on GPU, e.g., `zeros()` will create a tensor on GPU.
- `convert_to_numpy` will bring the tensor to CPU before converting it to NumPy.
"""
from keras.backend.torch import core
from keras.backend.torch import image
from keras.backend.torch import linalg
from keras.backend.torch import math
from keras.backend.torch import nn
from keras.backend.torch import numpy
from keras.backend.torch import random
from keras.backend.torch.core import SUPPORTS_SPARSE_TENSORS
from keras.backend.torch.core import Variable
from keras.backend.torch.core import cast
from keras.backend.torch.core import compute_output_spec
from keras.backend.torch.core import cond
from keras.backend.torch.core import convert_to_numpy
from keras.backend.torch.core import convert_to_tensor
from keras.backend.torch.core import device_scope
from keras.backend.torch.core import is_tensor
from keras.backend.torch.core import scatter
from keras.backend.torch.core import shape
from keras.backend.torch.core import stop_gradient
from keras.backend.torch.core import to_torch_dtype
from keras.backend.torch.core import vectorized_map
from keras.backend.torch.rnn import cudnn_ok
from keras.backend.torch.rnn import gru
from keras.backend.torch.rnn import lstm
from keras.backend.torch.rnn import rnn
| keras/keras/backend/torch/__init__.py/0 | {
"file_path": "keras/keras/backend/torch/__init__.py",
"repo_id": "keras",
"token_count": 569
} | 151 |
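# Minimal sketch of the device-placement convention described in the torch
# backend docstring above: create tensors on GPU when one is available, and
# detour through CPU before converting to NumPy.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.zeros(3, device=device)  # created directly on the accelerator
x_np = x.cpu().numpy()  # NumPy conversion always goes through the CPU
assert x_np.shape == (3,)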
import torch
from keras import optimizers
from keras.optimizers.base_optimizer import BaseOptimizer
from keras.utils import torch_utils
class TorchOptimizer(BaseOptimizer):
def __new__(cls, *args, **kwargs):
# Import locally to avoid circular imports.
from keras.backend.torch.optimizers import torch_adadelta
from keras.backend.torch.optimizers import torch_adagrad
from keras.backend.torch.optimizers import torch_adam
from keras.backend.torch.optimizers import torch_adamax
from keras.backend.torch.optimizers import torch_adamw
from keras.backend.torch.optimizers import torch_lion
from keras.backend.torch.optimizers import torch_nadam
from keras.backend.torch.optimizers import torch_rmsprop
from keras.backend.torch.optimizers import torch_sgd
OPTIMIZERS = {
optimizers.Adadelta: torch_adadelta.Adadelta,
optimizers.Adagrad: torch_adagrad.Adagrad,
optimizers.Adam: torch_adam.Adam,
optimizers.Adamax: torch_adamax.Adamax,
optimizers.AdamW: torch_adamw.AdamW,
optimizers.Lion: torch_lion.Lion,
optimizers.Nadam: torch_nadam.Nadam,
optimizers.RMSprop: torch_rmsprop.RMSprop,
optimizers.SGD: torch_sgd.SGD,
}
if cls in OPTIMIZERS:
return OPTIMIZERS[cls](*args, **kwargs)
return super().__new__(cls)
@torch_utils.no_grad
def _apply_weight_decay(self, variables):
if self.weight_decay is None:
return
torch._foreach_mul_(
[v.value for v in variables if self._use_weight_decay(v)],
1 - self.weight_decay * self._get_current_learning_rate(),
)
| keras/keras/backend/torch/optimizers/torch_optimizer.py/0 | {
"file_path": "keras/keras/backend/torch/optimizers/torch_optimizer.py",
"repo_id": "keras",
"token_count": 799
} | 152 |
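# Self-contained illustration of the `__new__` dispatch pattern used by
# `TorchOptimizer` above, with stand-in classes instead of real optimizers.
class _FastAdam:
    """Stand-in for a backend-specific optimizer implementation."""

class _Optimizer:
    _IMPLS = {}

    def __new__(cls, *args, **kwargs):
        # Swap in a specialized implementation when one is registered.
        if cls in cls._IMPLS:
            return cls._IMPLS[cls](*args, **kwargs)
        return super().__new__(cls)

_Optimizer._IMPLS = {_Optimizer: _FastAdam}
assert isinstance(_Optimizer(), _FastAdam)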
import numpy as np
import pytest
from keras import callbacks
from keras import layers
from keras import metrics
from keras import models
from keras import ops
from keras import testing
class EarlyStoppingTest(testing.TestCase):
@pytest.mark.requires_trainable_backend
def test_early_stopping(self):
x_train = np.random.random((10, 5))
y_train = np.random.random((10, 1))
x_test = np.random.random((10, 5))
y_test = np.random.random((10, 1))
model = models.Sequential(
(
layers.Dense(1, activation="relu"),
layers.Dense(1, activation="relu"),
)
)
model.compile(
loss="mae",
optimizer="adam",
metrics=[
"mse",
"acc",
"accuracy",
"hinge",
metrics.F1Score(name="f1_score"),
],
)
cases = [
("max", "val_mse", "max"),
("min", "val_loss", "min"),
("auto", "val_mse", "min"),
("auto", "loss", "min"),
("auto", "acc", "max"),
("auto", "val_accuracy", "max"),
("auto", "hinge", "min"),
("auto", "f1_score", "max"),
]
for mode, monitor, expected_mode in cases:
patience = 0
cbks = [
callbacks.EarlyStopping(
patience=patience, monitor=monitor, mode=mode
)
]
model.fit(
x_train,
y_train,
batch_size=5,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=2,
verbose=0,
)
if expected_mode == "max":
monitor_op = ops.greater
else:
monitor_op = ops.less
self.assertEqual(cbks[0].monitor_op, monitor_op)
with self.assertRaises(ValueError):
cbks = [
callbacks.EarlyStopping(patience=patience, monitor="unknown")
]
model.fit(
x_train,
y_train,
batch_size=5,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=2,
verbose=0,
)
@pytest.mark.requires_trainable_backend
def test_early_stopping_patience(self):
cases = [0, 1, 2, 3]
losses = [10.0, 9.0, 8.0, 9.0, 8.9, 8.8, 8.7, 8.6, 8.5]
for patience in cases:
stopper = callbacks.EarlyStopping(monitor="loss", patience=patience)
stopper.set_model(models.Sequential())
stopper.model.compile(loss="mse", optimizer="sgd")
stopper.on_train_begin()
for epoch, loss in enumerate(losses):
stopper.on_epoch_end(epoch=epoch, logs={"loss": loss})
if stopper.model.stop_training:
break
self.assertEqual(stopper.stopped_epoch, max(patience, 1) + 2)
@pytest.mark.requires_trainable_backend
def test_early_stopping_reuse(self):
patience = 3
data = np.random.random((100, 1))
labels = np.where(data > 0.5, 1, 0)
model = models.Sequential(
(
layers.Dense(1, activation="relu"),
layers.Dense(1, activation="relu"),
)
)
model.compile(
optimizer="sgd",
loss="mae",
metrics=["mse"],
)
weights = model.get_weights()
# This should allow training to go for at least `patience` epochs
model.set_weights(weights)
stopper = callbacks.EarlyStopping(monitor="mse", patience=patience)
hist = model.fit(
data, labels, callbacks=[stopper], verbose=0, epochs=20
)
assert len(hist.epoch) >= patience
@pytest.mark.requires_trainable_backend
def test_early_stopping_with_baseline(self):
baseline = 0.6
x_train = np.random.random((10, 5))
y_train = np.random.random((10, 1))
model = models.Sequential(
(
layers.Dense(1, activation="relu"),
layers.Dense(1, activation="relu"),
)
)
model.compile(optimizer="sgd", loss="mae", metrics=["mse"])
patience = 3
stopper = callbacks.EarlyStopping(
monitor="mse", patience=patience, baseline=baseline
)
hist = model.fit(
x_train, y_train, callbacks=[stopper], verbose=0, epochs=20
)
assert len(hist.epoch) >= patience
def test_early_stopping_final_weights_when_restoring_model_weights(self):
class DummyModel:
def __init__(self):
self.stop_training = False
self.weights = -1
def get_weights(self):
return self.weights
def set_weights(self, weights):
self.weights = weights
def set_weight_to_epoch(self, epoch):
self.weights = epoch
early_stop = callbacks.EarlyStopping(
monitor="val_loss", patience=2, restore_best_weights=True
)
early_stop.set_model(DummyModel())
losses = [0.2, 0.15, 0.1, 0.11, 0.12]
# The best configuration is in the epoch 2 (loss = 0.1000).
epochs_trained = 0
early_stop.on_train_begin()
for epoch in range(len(losses)):
epochs_trained += 1
early_stop.model.set_weight_to_epoch(epoch=epoch)
early_stop.on_epoch_end(epoch, logs={"val_loss": losses[epoch]})
if early_stop.model.stop_training:
break
early_stop.on_train_end()
# The best configuration is in epoch 2 (loss = 0.1000),
# and while patience = 2, we're restoring the best weights,
# so we end up at the epoch with the best weights, i.e. epoch 2
self.assertEqual(early_stop.model.get_weights(), 2)
# Check early stopping when no model beats the baseline.
early_stop = callbacks.EarlyStopping(
monitor="val_loss",
patience=5,
baseline=0.5,
restore_best_weights=True,
)
early_stop.set_model(DummyModel())
losses = [0.9, 0.8, 0.7, 0.71, 0.72, 0.73]
# The best configuration is in the epoch 2 (loss = 0.7000).
epochs_trained = 0
early_stop.on_train_begin()
for epoch in range(len(losses)):
epochs_trained += 1
early_stop.model.set_weight_to_epoch(epoch=epoch)
early_stop.on_epoch_end(epoch, logs={"val_loss": losses[epoch]})
if early_stop.model.stop_training:
break
early_stop.on_train_end()
# No epoch improves on the baseline, so we should train for only 5
# epochs, and restore the second model.
self.assertEqual(epochs_trained, 5)
self.assertEqual(early_stop.model.get_weights(), 2)
# Check weight restoration when another callback requests a stop.
early_stop = callbacks.EarlyStopping(
monitor="val_loss",
patience=5,
baseline=0.5,
restore_best_weights=True,
)
early_stop.set_model(DummyModel())
losses = [0.9, 0.8, 0.7, 0.71, 0.72, 0.73]
# The best configuration is in the epoch 2 (loss = 0.7000).
epochs_trained = 0
early_stop.on_train_begin()
for epoch in range(len(losses)):
epochs_trained += 1
early_stop.model.set_weight_to_epoch(epoch=epoch)
early_stop.on_epoch_end(epoch, logs={"val_loss": losses[epoch]})
if epoch == 3:
early_stop.model.stop_training = True
if early_stop.model.stop_training:
break
early_stop.on_train_end()
# We should restore the second model.
self.assertEqual(epochs_trained, 4)
self.assertEqual(early_stop.model.get_weights(), 2)
@pytest.mark.requires_trainable_backend
def test_early_stopping_with_start_from_epoch(self):
x_train = np.random.random((10, 5))
y_train = np.random.random((10, 1))
model = models.Sequential(
(
layers.Dense(1, activation="relu"),
layers.Dense(1, activation="relu"),
)
)
model.compile(optimizer="sgd", loss="mae", metrics=["mse"])
start_from_epoch = 2
patience = 3
stopper = callbacks.EarlyStopping(
monitor="mse",
patience=patience,
start_from_epoch=start_from_epoch,
)
history = model.fit(
x_train, y_train, callbacks=[stopper], verbose=0, epochs=20
)
# Test 'patience' argument functions correctly when used
# in conjunction with 'start_from_epoch'.
self.assertGreaterEqual(len(history.epoch), patience + start_from_epoch)
start_from_epoch = 2
patience = 0
stopper = callbacks.EarlyStopping(
monitor="mse",
patience=patience,
start_from_epoch=start_from_epoch,
)
history = model.fit(
x_train, y_train, callbacks=[stopper], verbose=0, epochs=20
)
# Test for boundary condition when 'patience' = 0.
self.assertGreaterEqual(len(history.epoch), start_from_epoch)
| keras/keras/callbacks/early_stopping_test.py/0 | {
"file_path": "keras/keras/callbacks/early_stopping_test.py",
"repo_id": "keras",
"token_count": 4884
} | 153 |
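# Standalone sketch of the patience bookkeeping exercised in
# `test_early_stopping_patience` above (simplified: an improvement resets
# the wait counter; training stops once the counter reaches the threshold
# on a non-improving epoch).
def stopped_epoch(losses, patience):
    best, wait = float("inf"), 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best, wait = loss, 0
        else:
            wait += 1
            if wait >= max(patience, 1):
                return epoch
    return len(losses) - 1

losses = [10.0, 9.0, 8.0, 9.0, 8.9, 8.8, 8.7, 8.6, 8.5]
for patience in (0, 1, 2, 3):
    assert stopped_epoch(losses, patience) == max(patience, 1) + 2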
from keras.layers.activations.elu import ELU
from keras.layers.activations.leaky_relu import LeakyReLU
from keras.layers.activations.prelu import PReLU
from keras.layers.activations.relu import ReLU
from keras.layers.activations.softmax import Softmax
| keras/keras/layers/activations/__init__.py/0 | {
"file_path": "keras/keras/layers/activations/__init__.py",
"repo_id": "keras",
"token_count": 86
} | 154 |
import tree
from keras.api_export import keras_export
from keras.backend import KerasTensor
from keras.layers.layer import Layer
@keras_export("keras.layers.Identity")
class Identity(Layer):
"""Identity layer.
This layer should be used as a placeholder when no operation is to be
performed. The layer just returns its `inputs` argument as output.
"""
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.supports_masking = True
def call(self, inputs):
return inputs
def compute_output_shape(self, input_shape):
return input_shape
def compute_output_spec(self, inputs):
return tree.map_structure(
lambda x: KerasTensor(x.shape, dtype=x.dtype, sparse=x.sparse),
inputs,
)
| keras/keras/layers/core/identity.py/0 | {
"file_path": "keras/keras/layers/core/identity.py",
"repo_id": "keras",
"token_count": 307
} | 155 |
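# Possible usage of the `Identity` layer defined above: it is a no-op
# placeholder, so its output equals its input.
import numpy as np
from keras import layers

x = np.ones((2, 3), dtype="float32")
y = layers.Identity()(x)
assert tuple(y.shape) == (2, 3)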
import numpy as np
import pytest
from keras import backend
from keras import initializers
from keras import layers
from keras import testing
class SpectralNormalizationTest(testing.TestCase):
@pytest.mark.requires_trainable_backend
def test_basic_spectralnorm(self):
self.run_layer_test(
layers.SpectralNormalization,
init_kwargs={"layer": layers.Dense(2)},
input_data=np.random.uniform(size=(10, 3, 4)),
expected_output_shape=(10, 3, 2),
expected_num_trainable_weights=2,
expected_num_non_trainable_weights=1,
expected_num_seed_generators=0,
expected_num_losses=0,
supports_masking=False,
)
self.run_layer_test(
layers.SpectralNormalization,
init_kwargs={"layer": layers.Embedding(10, 4)},
input_data=np.random.randint(10, size=(10,)),
expected_output_shape=(10, 4),
expected_num_trainable_weights=1,
expected_num_non_trainable_weights=1,
expected_num_seed_generators=0,
expected_num_losses=0,
supports_masking=False,
run_training_check=False,
)
def test_invalid_power_iterations(self):
with self.assertRaisesRegex(
ValueError, "`power_iterations` should be greater than zero."
):
layers.SpectralNormalization(layers.Dense(2), power_iterations=0)
def test_invalid_layer(self):
layer = layers.SpectralNormalization(layers.ReLU())
inputs = np.ones(shape=(4, 2))
with self.assertRaisesRegex(
ValueError, "object has no attribute 'kernel' nor 'embeddings'"
):
layer(inputs)
def test_apply_layer(self):
if backend.config.image_data_format() == "channels_last":
images = np.ones((1, 2, 2, 1))
else:
images = np.ones((1, 1, 2, 2))
sn_wrapper = layers.SpectralNormalization(
layers.Conv2D(
1, (2, 2), kernel_initializer=initializers.Constant(value=1)
),
power_iterations=8,
)
result = sn_wrapper(images, training=False)
result_train = sn_wrapper(images, training=True)
expected_output = np.array([[[[4.0]]]], dtype=np.float32)
self.assertAllClose(result, expected_output)
# The max eigenvalue of a 2x2 matrix of ones is 2.
self.assertAllClose(result_train, expected_output / 2)
| keras/keras/layers/normalization/spectral_normalization_test.py/0 | {
"file_path": "keras/keras/layers/normalization/spectral_normalization_test.py",
"repo_id": "keras",
"token_count": 1170
} | 156 |
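# Quick NumPy check of the comment in `test_apply_layer` above: the largest
# eigenvalue of the 2x2 all-ones matrix is 2, which is the factor spectral
# normalization divides by during training.
import numpy as np

eigvals = np.linalg.eigvals(np.ones((2, 2)))
assert np.isclose(max(eigvals.real), 2.0)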
from keras import backend
from keras import ops
from keras.api_export import keras_export
from keras.layers.input_spec import InputSpec
from keras.layers.layer import Layer
from keras.utils import argument_validation
@keras_export("keras.layers.UpSampling3D")
class UpSampling3D(Layer):
"""Upsampling layer for 3D inputs.
Repeats the 1st, 2nd and 3rd dimensions
of the data by `size[0]`, `size[1]` and `size[2]` respectively.
Example:
>>> input_shape = (2, 1, 2, 1, 3)
>>> x = np.ones(input_shape)
>>> y = keras.layers.UpSampling3D(size=(2, 2, 2))(x)
>>> y.shape
(2, 2, 4, 2, 3)
Args:
size: Int, or tuple of 3 integers.
The upsampling factors for dim1, dim2 and dim3.
data_format: A string,
one of `"channels_last"` (default) or `"channels_first"`.
The ordering of the dimensions in the inputs.
`"channels_last"` corresponds to inputs with shape
`(batch_size, spatial_dim1, spatial_dim2, spatial_dim3, channels)`
while `"channels_first"` corresponds to inputs with shape
`(batch_size, channels, spatial_dim1, spatial_dim2, spatial_dim3)`.
When unspecified, uses
`image_data_format` value found in your Keras config file at
`~/.keras/keras.json` (if exists) else `"channels_last"`.
Defaults to `"channels_last"`.
Input shape:
5D tensor with shape:
- If `data_format` is `"channels_last"`:
`(batch_size, dim1, dim2, dim3, channels)`
- If `data_format` is `"channels_first"`:
`(batch_size, channels, dim1, dim2, dim3)`
Output shape:
5D tensor with shape:
- If `data_format` is `"channels_last"`:
`(batch_size, upsampled_dim1, upsampled_dim2, upsampled_dim3,
channels)`
- If `data_format` is `"channels_first"`:
`(batch_size, channels, upsampled_dim1, upsampled_dim2,
upsampled_dim3)`
"""
def __init__(self, size=(2, 2, 2), data_format=None, **kwargs):
super().__init__(**kwargs)
self.data_format = backend.standardize_data_format(data_format)
self.size = argument_validation.standardize_tuple(size, 3, "size")
self.input_spec = InputSpec(ndim=5)
def compute_output_shape(self, input_shape):
if self.data_format == "channels_first":
dim1 = (
self.size[0] * input_shape[2]
if input_shape[2] is not None
else None
)
dim2 = (
self.size[1] * input_shape[3]
if input_shape[3] is not None
else None
)
dim3 = (
self.size[2] * input_shape[4]
if input_shape[4] is not None
else None
)
return (input_shape[0], input_shape[1], dim1, dim2, dim3)
else:
dim1 = (
self.size[0] * input_shape[1]
if input_shape[1] is not None
else None
)
dim2 = (
self.size[1] * input_shape[2]
if input_shape[2] is not None
else None
)
dim3 = (
self.size[2] * input_shape[3]
if input_shape[3] is not None
else None
)
return (input_shape[0], dim1, dim2, dim3, input_shape[4])
def call(self, inputs):
return self._resize_volumes(
inputs, self.size[0], self.size[1], self.size[2], self.data_format
)
def get_config(self):
config = {"size": self.size, "data_format": self.data_format}
base_config = super().get_config()
return {**base_config, **config}
def _resize_volumes(
self, x, depth_factor, height_factor, width_factor, data_format
):
"""Resizes the volume contained in a 5D tensor.
Args:
x: Tensor or variable to resize.
depth_factor: Positive integer.
height_factor: Positive integer.
width_factor: Positive integer.
data_format: One of `"channels_first"`, `"channels_last"`.
Returns:
Resized tensor.
"""
if data_format == "channels_first":
output = ops.repeat(x, depth_factor, axis=2)
output = ops.repeat(output, height_factor, axis=3)
output = ops.repeat(output, width_factor, axis=4)
return output
elif data_format == "channels_last":
output = ops.repeat(x, depth_factor, axis=1)
output = ops.repeat(output, height_factor, axis=2)
output = ops.repeat(output, width_factor, axis=3)
return output
else:
raise ValueError(f"Invalid data_format: {data_format}")
| keras/keras/layers/reshaping/up_sampling3d.py/0 | {
"file_path": "keras/keras/layers/reshaping/up_sampling3d.py",
"repo_id": "keras",
"token_count": 2414
} | 157 |
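# NumPy cross-check of the repeat-based upsampling in `_resize_volumes`
# above for `channels_last` data; the shapes match the docstring example.
import numpy as np

x = np.ones((2, 1, 2, 1, 3))
out = np.repeat(x, 2, axis=1)
out = np.repeat(out, 2, axis=2)
out = np.repeat(out, 2, axis=3)
assert out.shape == (2, 2, 4, 2, 3)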
import numpy as np
import pytest
from keras import backend
from keras import initializers
from keras import layers
from keras import ops
from keras import testing
class TimeDistributedTest(testing.TestCase):
@pytest.mark.requires_trainable_backend
def test_basics(self):
self.run_layer_test(
layers.TimeDistributed,
init_kwargs={"layer": layers.Dense(1, use_bias=False)},
input_shape=(3, 2, 4),
expected_output_shape=(3, 2, 1),
expected_num_trainable_weights=1,
expected_num_non_trainable_weights=0,
supports_masking=True,
)
def test_build(self):
if backend.config.image_data_format() == "channels_last":
input_shape = (10, 128, 128, 3)
output_shape = (32, 10, 126, 126, 64)
else:
input_shape = (10, 3, 128, 128)
output_shape = (32, 10, 64, 126, 126)
inputs = layers.Input(shape=input_shape, batch_size=32)
conv_2d_layer = layers.Conv2D(64, (3, 3))
outputs = layers.TimeDistributed(conv_2d_layer)(inputs)
self.assertEqual(outputs.shape, output_shape)
def test_correctness(self):
sequence = np.arange(24).reshape((3, 2, 4)).astype("float32")
layer = layers.Dense(
1,
kernel_initializer=initializers.Constant(0.01),
use_bias=False,
)
layer = layers.TimeDistributed(layer=layer)
output = layer(sequence)
self.assertAllClose(
np.array(
[[[0.06], [0.22]], [[0.38], [0.53999996]], [[0.7], [0.86]]]
),
output,
)
def test_masking(self):
class MaskedDense(layers.Wrapper):
def __init__(self, units, **kwargs):
layer = layers.Dense(
units,
kernel_initializer=initializers.Constant(0.01),
use_bias=False,
)
super().__init__(layer, **kwargs)
self.supports_masking = True
def call(self, inputs, training=False, mask=None):
unmasked = self.layer.call(inputs)
if mask is None:
return unmasked
else:
return ops.transpose(
ops.transpose(unmasked) * ops.cast(mask, inputs.dtype)
)
sequence = np.arange(24).reshape((3, 2, 4)).astype("float32")
layer = layers.TimeDistributed(layer=MaskedDense(1))
mask = np.array([[False, True], [True, False], [True, True]])
output = layer(sequence, mask=mask)
self.assertAllClose(
np.array([[[0], [0.22]], [[0.38], [0]], [[0.7], [0.86]]]),
output,
)
| keras/keras/layers/rnn/time_distributed_test.py/0 | {
"file_path": "keras/keras/layers/rnn/time_distributed_test.py",
"repo_id": "keras",
"token_count": 1442
} | 158 |
import numpy as np
def get_test_data(
train_samples, test_samples, input_shape, num_classes, random_seed=None
):
"""Generates balanced, stratified synthetic test data to train a model on.
Args:
train_samples: Integer, how many training samples to generate.
test_samples: Integer, how many test samples to generate.
input_shape: Tuple of integers, shape of the inputs.
num_classes: Integer, number of classes for the data and targets.
random_seed: Integer, random seed used by Numpy to generate data.
Returns:
A tuple of Numpy arrays: `(x_train, y_train), (x_test, y_test)`.
"""
np.random.seed(random_seed)
# Total samples
total_samples = train_samples + test_samples
# Ensure that we generate a balanced dataset
samples_per_class = total_samples // num_classes
y = np.array(
[i for i in range(num_classes) for _ in range(samples_per_class)],
dtype=np.int32,
)
# Generate extra samples in a deterministic manner
extra_samples = total_samples - len(y)
y_extra = np.array(
[i % num_classes for i in range(extra_samples)], dtype=np.int64
)
y = np.concatenate([y, y_extra])
# Generate data
templates = 2 * num_classes * np.random.random((num_classes,) + input_shape)
x = np.zeros((total_samples,) + input_shape, dtype=np.float32)
for i in range(total_samples):
x[i] = templates[y[i]] + np.random.normal(
loc=0, scale=1.0, size=input_shape
)
# Shuffle the entire dataset to ensure randomness based on seed
indices = np.arange(total_samples)
np.random.shuffle(indices)
x, y = x[indices], y[indices]
# Stratified Shuffle Split
x_train, y_train, x_test, y_test = [], [], [], []
for cls in range(num_classes):
cls_indices = np.where(y == cls)[0]
np.random.shuffle(cls_indices)
train_count = int(train_samples / num_classes)
x_train.extend(x[cls_indices[:train_count]])
y_train.extend(y[cls_indices[:train_count]])
x_test.extend(x[cls_indices[train_count:]])
y_test.extend(y[cls_indices[train_count:]])
# Convert to numpy arrays
x_train, y_train = np.array(x_train), np.array(y_train)
x_test, y_test = np.array(x_test), np.array(y_test)
# Shuffle training and test sets after stratified split
train_indices = np.arange(len(x_train))
test_indices = np.arange(len(x_test))
np.random.shuffle(train_indices)
np.random.shuffle(test_indices)
x_train, y_train = x_train[train_indices], y_train[train_indices]
x_test, y_test = x_test[test_indices], y_test[test_indices]
return (x_train, y_train), (x_test, y_test)
def named_product(*args, **kwargs):
"""Utility to generate the cartesian product of parameters values and
generate a test case names for each combination.
The result of this function is to be used with the
`@parameterized.named_parameters` decorator. It is a replacement for
`@parameterized.product` which adds explicit test case names.
For example, this code:
```
class NamedExample(parameterized.TestCase):
@parameterized.named_parameters(
named_product(
[
{'testcase_name': 'negative', 'x': -1},
{'testcase_name': 'positive', 'x': 1},
{'testcase_name': 'zero', 'x': 0},
],
numeral_type=[float, int],
)
)
def test_conversion(self, x, numeral_type):
self.assertEqual(numeral_type(x), x)
```
produces six tests (note that absl will reorder them by name):
- `NamedExample::test_conversion_negative_float`
- `NamedExample::test_conversion_positive_float`
- `NamedExample::test_conversion_zero_float`
- `NamedExample::test_conversion_negative_int`
- `NamedExample::test_conversion_positive_int`
- `NamedExample::test_conversion_zero_int`
This function is also useful in the case where there is no product to
generate test case names for one argument:
```
@parameterized.named_parameters(named_product(numeral_type=[float, int]))
```
Args:
*args: Each positional parameter is a sequence of keyword arg dicts.
Every test case generated will include exactly one dict from each
positional parameter. These will then be merged to form an overall
list of arguments for the test case. Each dict must contain a
`"testcase_name"` key whose value is combined with others to
generate the test case name.
**kwargs: A mapping of parameter names and their possible values.
Possible values should given as either a list or a tuple. A string
representation of each value is used to generate the test case name.
Returns:
A list of maps for the test parameters combinations to pass to
`@parameterized.named_parameters`.
"""
def value_to_str(value):
if hasattr(value, "__name__"):
return value.__name__.lower()
return str(value).lower()
# Convert the keyword arguments in the same dict format as the args
all_test_dicts = args + tuple(
tuple({"testcase_name": value_to_str(v), key: v} for v in values)
for key, values in kwargs.items()
)
# The current list of tests, start with one empty test
tests = [{}]
for test_dicts in all_test_dicts:
new_tests = []
for test_dict in test_dicts:
for test in tests:
# Augment the testcase name by appending this dict's name
testcase_name = test.get("testcase_name", "")
testcase_name += "_" if testcase_name else ""
testcase_name += test_dict["testcase_name"]
new_test = test.copy()
# Augment the test by adding all the parameters
new_test.update(test_dict)
new_test["testcase_name"] = testcase_name
new_tests.append(new_test)
# Overwrite the list of tests with the product obtained so far
tests = new_tests
return tests
| keras/keras/testing/test_utils.py/0 | {
"file_path": "keras/keras/testing/test_utils.py",
"repo_id": "keras",
"token_count": 2560
} | 159 |
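# Tiny standalone re-implementation of the cartesian naming scheme that
# `named_product` above describes, using itertools for illustration only;
# the real helper also supports positional dict sequences.
import itertools

def tiny_named_product(**kwargs):
    tests = []
    for combo in itertools.product(*kwargs.values()):
        test = dict(zip(kwargs.keys(), combo))
        test["testcase_name"] = "_".join(str(v).lower() for v in combo)
        tests.append(test)
    return tests

cases = tiny_named_product(x=[-1, 1], numeral_type=["float", "int"])
assert cases[0]["testcase_name"] == "-1_float"
assert len(cases) == 4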
import numpy as np
import tree
from keras import backend
from keras.trainers.data_adapters import data_adapter_utils
from keras.trainers.data_adapters.data_adapter import DataAdapter
class TorchDataLoaderAdapter(DataAdapter):
"""Adapter that handles `torch.utils.data.DataLoader`."""
def __init__(self, dataloader):
import torch
if not isinstance(dataloader, torch.utils.data.DataLoader):
raise ValueError(
f"Expected argument `dataloader` to be an instance of"
f"`torch.utils.data.DataLoader`. Received: {dataloader}"
)
self._dataloader = dataloader
self._batch_size = dataloader.batch_size
self._num_batches = None
self._partial_batch_size = None
if hasattr(dataloader.dataset, "__len__"):
self._num_batches = len(dataloader)
if self._batch_size is not None:
self._partial_batch_size = (
len(dataloader.dataset) % self._batch_size
)
def get_numpy_iterator(self):
for batch in self._dataloader:
# `np.asarray` shares memory with the CPU tensor instead of copying it.
yield tuple(
tree.map_structure(lambda x: np.asarray(x.cpu()), batch)
)
def get_jax_iterator(self):
# We use numpy as an intermediary because the conversion
# torch -> numpy -> jax is faster than torch -> jax.
return data_adapter_utils.get_jax_iterator(self.get_numpy_iterator())
def get_tf_dataset(self):
from keras.utils.module_utils import tensorflow as tf
output_signature = self.peek_and_get_tensor_spec()
return tf.data.Dataset.from_generator(
self.get_numpy_iterator,
output_signature=output_signature,
)
def get_torch_dataloader(self):
return self._dataloader
def peek_and_get_tensor_spec(self):
from keras.utils.module_utils import tensorflow as tf
batch_data = next(iter(self._dataloader))
def get_tensor_spec(x):
shape = x.shape
if len(shape) < 1:
raise ValueError(
"When passing a Pytorch DataLoader to a Keras model, "
"the arrays returned by the generator "
"must be at least rank 1. Received: "
f"{x} of rank {len(x.shape)}"
)
shape = list(shape)
shape[0] = None # The batch size is not guaranteed to be static.
dtype = backend.standardize_dtype(x.dtype)
return tf.TensorSpec(shape=shape, dtype=dtype)
return tuple(tree.map_structure(get_tensor_spec, batch_data))
@property
def num_batches(self):
return self._num_batches
@property
def batch_size(self):
return self._batch_size
@property
def has_partial_batch(self):
if self._partial_batch_size:
return self._partial_batch_size > 0
else:
return None
@property
def partial_batch_size(self):
return self._partial_batch_size
| keras/keras/trainers/data_adapters/torch_data_loader_adapter.py/0 | {
"file_path": "keras/keras/trainers/data_adapters/torch_data_loader_adapter.py",
"repo_id": "keras",
"token_count": 1448
} | 160 |
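# Worked example of the partial-batch arithmetic in
# `TorchDataLoaderAdapter.__init__` above: 10 samples with batch_size=4
# (and drop_last=False, the DataLoader default) yield 3 batches, the last
# holding 10 % 4 == 2 samples.
dataset_len, batch_size = 10, 4
num_batches = -(-dataset_len // batch_size)  # ceiling division
partial_batch_size = dataset_len % batch_size
assert (num_batches, partial_batch_size) == (3, 2)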
from keras.backend.common.keras_tensor import KerasTensor
from keras.testing import test_case
from keras.utils import dtype_utils
class DtypeSizeTests(test_case.TestCase):
def test_bfloat16_dtype_size(self):
self.assertEqual(dtype_utils.dtype_size("bfloat16"), 16)
def test_float16_dtype_size(self):
self.assertEqual(dtype_utils.dtype_size("float16"), 16)
def test_float32_dtype_size(self):
self.assertEqual(dtype_utils.dtype_size("float32"), 32)
def test_int32_dtype_size(self):
self.assertEqual(dtype_utils.dtype_size("int32"), 32)
def test_float64_dtype_size(self):
self.assertEqual(dtype_utils.dtype_size("float64"), 64)
def test_int64_dtype_size(self):
self.assertEqual(dtype_utils.dtype_size("int64"), 64)
def test_uint8_dtype_size(self):
self.assertEqual(dtype_utils.dtype_size("uint8"), 8)
def test_bool_dtype_size(self):
self.assertEqual(dtype_utils.dtype_size("bool"), 1)
def test_invalid_dtype_size(self):
with self.assertRaises(ValueError):
dtype_utils.dtype_size("unknown_dtype")
class IsFloatTests(test_case.TestCase):
def test_is_float_float16(self):
self.assertTrue(dtype_utils.is_float("float16"))
def test_is_float_float32(self):
self.assertTrue(dtype_utils.is_float("float32"))
def test_is_float_float64(self):
self.assertTrue(dtype_utils.is_float("float64"))
def test_is_float_int32(self):
self.assertFalse(dtype_utils.is_float("int32"))
def test_is_float_bool(self):
self.assertFalse(dtype_utils.is_float("bool"))
def test_is_float_uint8(self):
self.assertFalse(dtype_utils.is_float("uint8"))
def test_is_float_containing_float(self):
self.assertTrue(dtype_utils.is_float("floating"))
def test_is_float_empty_string(self):
self.assertFalse(dtype_utils.is_float(""))
class CastToCommonDtype(test_case.TestCase):
def test_cast_to_common_dtype_float32_float64(self):
tensor1 = KerasTensor([1, 2, 3], dtype="float32")
tensor2 = KerasTensor([4, 5, 6], dtype="float64")
casted_tensors = dtype_utils.cast_to_common_dtype([tensor1, tensor2])
for tensor in casted_tensors:
self.assertEqual(tensor.dtype, "float64")
def test_cast_to_common_dtype_float16_float32_float64(self):
tensor1 = KerasTensor([1, 2, 3], dtype="float16")
tensor2 = KerasTensor([4, 5, 6], dtype="float32")
tensor3 = KerasTensor([7, 8, 9], dtype="float64")
casted_tensors = dtype_utils.cast_to_common_dtype(
[tensor1, tensor2, tensor3]
)
for tensor in casted_tensors:
self.assertEqual(tensor.dtype, "float64")
def test_cast_to_common_dtype_float16_int16_float32(self):
tensor1 = KerasTensor([1, 2, 3], dtype="float16")
tensor2 = KerasTensor([4, 5, 6], dtype="int16")
tensor3 = KerasTensor([7, 8, 9], dtype="float32")
casted_tensors = dtype_utils.cast_to_common_dtype(
[tensor1, tensor2, tensor3]
)
for tensor in casted_tensors:
self.assertEqual(tensor.dtype, "float32")
def test_cast_to_common_dtype_all_float32(self):
tensor1 = KerasTensor([1, 2, 3], dtype="float32")
tensor2 = KerasTensor([4, 5, 6], dtype="float32")
tensor3 = KerasTensor([7, 8, 9], dtype="float32")
casted_tensors = dtype_utils.cast_to_common_dtype(
[tensor1, tensor2, tensor3]
)
for tensor in casted_tensors:
self.assertEqual(tensor.dtype, "float32")
def test_cast_to_common_dtype_float16_bfloat16(self):
tensor1 = KerasTensor([1, 2, 3], dtype="float16")
tensor2 = KerasTensor([4, 5, 6], dtype="bfloat16")
casted_tensors = dtype_utils.cast_to_common_dtype([tensor1, tensor2])
for tensor in casted_tensors:
self.assertEqual(tensor.dtype, "float16")
def test_cast_to_common_dtype_float16_uint8(self):
tensor1 = KerasTensor([1, 2, 3], dtype="float16")
tensor2 = KerasTensor([4, 5, 6], dtype="uint8")
casted_tensors = dtype_utils.cast_to_common_dtype([tensor1, tensor2])
for tensor in casted_tensors:
self.assertEqual(tensor.dtype, "float16")
def test_cast_to_common_dtype_mixed_types(self):
tensor1 = KerasTensor([1, 2, 3], dtype="float32")
tensor2 = KerasTensor([4, 5, 6], dtype="int32")
tensor3 = KerasTensor([7, 8, 9], dtype="bool")
casted_tensors = dtype_utils.cast_to_common_dtype(
[tensor1, tensor2, tensor3]
)
for tensor in casted_tensors:
self.assertEqual(tensor.dtype, "float32")
def test_cast_to_common_dtype_no_float(self):
tensor1 = KerasTensor([1, 2, 3], dtype="int32")
tensor2 = KerasTensor([4, 5, 6], dtype="uint8")
casted_tensors = dtype_utils.cast_to_common_dtype([tensor1, tensor2])
self.assertEqual(casted_tensors[0].dtype, "int32")
self.assertEqual(casted_tensors[1].dtype, "uint8")
def test_cast_to_common_dtype_float16_bfloat16_promotion(self):
tensor1 = KerasTensor([4, 5, 6], dtype="bfloat16")
tensor2 = KerasTensor([1, 2, 3], dtype="float16")
casted_tensors = dtype_utils.cast_to_common_dtype([tensor1, tensor2])
for tensor in casted_tensors:
self.assertEqual(tensor.dtype, "float32")
# TODO: fails with AssertionError: 'float16' != 'float32'.
# The order of the tensors matters in the current logic
# of the cast_to_common_dtype function.
# def test_cast_to_common_dtype_bfloat16_float16_promotion(self):
# tensor1 = KerasTensor([1, 2, 3], dtype="float16")
# tensor2 = KerasTensor([4, 5, 6], dtype="bfloat16")
# casted_tensors = dtype_utils.cast_to_common_dtype([tensor1, tensor2])
# for tensor in casted_tensors:
# self.assertEqual(tensor.dtype, "float32")
| keras/keras/utils/dtype_utils_test.py/0 | {
"file_path": "keras/keras/utils/dtype_utils_test.py",
"repo_id": "keras",
"token_count": 2812
} | 161 |
import math
import os
import sys
import time
from keras import backend
from keras.api_export import keras_export
from keras.utils import io_utils
@keras_export("keras.utils.Progbar")
class Progbar:
"""Displays a progress bar.
Args:
target: Total number of steps expected, None if unknown.
width: Progress bar width on screen.
verbose: Verbosity mode, 0 (silent), 1 (verbose), 2 (semi-verbose)
stateful_metrics: Iterable of string names of metrics that should *not*
be averaged over time. Metrics in this list will be displayed as-is.
All others will be averaged by the progbar before display.
interval: Minimum visual progress update interval (in seconds).
unit_name: Display name for step counts (usually "step" or "sample").
"""
def __init__(
self,
target,
width=20,
verbose=1,
interval=0.05,
stateful_metrics=None,
unit_name="step",
):
self.target = target
self.width = width
self.verbose = verbose
self.interval = interval
self.unit_name = unit_name
if stateful_metrics:
self.stateful_metrics = set(stateful_metrics)
else:
self.stateful_metrics = set()
self._dynamic_display = (
(hasattr(sys.stdout, "isatty") and sys.stdout.isatty())
or "ipykernel" in sys.modules
or "posix" in sys.modules
or "PYCHARM_HOSTED" in os.environ
)
self._seen_so_far = 0
# We use a dict + list to avoid garbage collection
# issues found in OrderedDict
self._values = {}
self._values_order = []
self._start = time.time()
self._last_update = 0
self._time_at_epoch_start = self._start
self._time_after_first_step = None
self._prev_total_width = 0
def update(self, current, values=None, finalize=None):
"""Updates the progress bar.
Args:
current: Index of current step.
values: List of tuples: `(name, value_for_last_step)`. If `name` is
in `stateful_metrics`, `value_for_last_step` will be displayed
as-is. Else, an average of the metric over time will be
displayed.
finalize: Whether this is the last update for the progress bar. If
`None`, defaults to `current >= self.target`.
"""
if finalize is None:
if self.target is None:
finalize = False
else:
finalize = current >= self.target
values = values or []
for k, v in values:
if k not in self._values_order:
self._values_order.append(k)
if k not in self.stateful_metrics:
# In the case that progress bar doesn't have a target value in
# the first epoch, both on_batch_end and on_epoch_end will be
# called, which will cause 'current' and 'self._seen_so_far' to
# have the same value. Force the minimal value to 1 here,
# otherwise stateful_metric will be 0s.
value_base = max(current - self._seen_so_far, 1)
if k not in self._values:
self._values[k] = [v * value_base, value_base]
else:
self._values[k][0] += v * value_base
self._values[k][1] += value_base
else:
# Stateful metrics output a numeric value. This representation
# means "take an average from a single value" but keeps the
# numeric formatting.
self._values[k] = [v, 1]
self._seen_so_far = current
message = ""
special_char_len = 0
now = time.time()
time_per_unit = self._estimate_step_duration(current, now)
if self.verbose == 1:
if now - self._last_update < self.interval and not finalize:
return
if self._dynamic_display:
message += "\b" * self._prev_total_width
message += "\r"
else:
message += "\n"
if self.target is not None:
numdigits = int(math.log10(self.target)) + 1
bar = ("%" + str(numdigits) + "d/%d") % (current, self.target)
bar = f"\x1b[1m{bar}\x1b[0m "
special_char_len += 8
prog = float(current) / self.target
prog_width = int(self.width * prog)
if prog_width > 0:
bar += "\33[32m" + "━" * prog_width + "\x1b[0m"
special_char_len += 9
bar += "\33[37m" + "━" * (self.width - prog_width) + "\x1b[0m"
special_char_len += 9
else:
bar = "%7d/Unknown" % current
message += bar
# Add ETA if applicable
if self.target is not None and not finalize:
eta = time_per_unit * (self.target - current)
if eta > 3600:
eta_format = "%d:%02d:%02d" % (
eta // 3600,
(eta % 3600) // 60,
eta % 60,
)
elif eta > 60:
eta_format = "%d:%02d" % (eta // 60, eta % 60)
else:
eta_format = "%ds" % eta
info = f" \x1b[1m{eta_format}\x1b[0m"
else:
# Time elapsed since start, in seconds
info = f" \x1b[1m{now - self._start:.0f}s\x1b[0m"
special_char_len += 8
# Add time/step
info += self._format_time(time_per_unit, self.unit_name)
# Add metrics
for k in self._values_order:
info += f" - {k}:"
if isinstance(self._values[k], list):
avg = backend.convert_to_numpy(
backend.numpy.mean(
self._values[k][0] / max(1, self._values[k][1])
)
)
avg = float(avg)
if abs(avg) > 1e-3:
info += f" {avg:.4f}"
else:
info += f" {avg:.4e}"
else:
info += f" {self._values[k]}"
message += info
total_width = len(bar) + len(info) - special_char_len
if self._prev_total_width > total_width:
message += " " * (self._prev_total_width - total_width)
if finalize:
message += "\n"
io_utils.print_msg(message, line_break=False)
self._prev_total_width = total_width
message = ""
elif self.verbose == 2:
if finalize:
numdigits = int(math.log10(self.target)) + 1
count = ("%" + str(numdigits) + "d/%d") % (current, self.target)
info = f"{count} - {now - self._start:.0f}s"
info += " -" + self._format_time(time_per_unit, self.unit_name)
for k in self._values_order:
info += f" - {k}:"
avg = backend.convert_to_numpy(
backend.numpy.mean(
self._values[k][0] / max(1, self._values[k][1])
)
)
if avg > 1e-3:
info += f" {avg:.4f}"
else:
info += f" {avg:.4e}"
info += "\n"
message += info
io_utils.print_msg(message, line_break=False)
message = ""
self._last_update = now
def add(self, n, values=None):
self.update(self._seen_so_far + n, values)
def _format_time(self, time_per_unit, unit_name):
"""format a given duration to display to the user.
Given the duration, this function formats it in either milliseconds
or seconds and displays the unit (i.e. ms/step or s/epoch).
Args:
time_per_unit: the duration to display
unit_name: the name of the unit to display
Returns:
A string with the correctly formatted duration and units
"""
formatted = ""
if time_per_unit >= 1 or time_per_unit == 0:
formatted += f" {time_per_unit:.0f}s/{unit_name}"
elif time_per_unit >= 1e-3:
formatted += f" {time_per_unit * 1000.0:.0f}ms/{unit_name}"
else:
formatted += f" {time_per_unit * 1000000.0:.0f}us/{unit_name}"
return formatted
def _estimate_step_duration(self, current, now):
"""Estimate the duration of a single step.
Given the step number `current` and the corresponding time `now` this
function returns an estimate for how long a single step takes. If this
is called before one step has been completed (i.e. `current == 0`) then
zero is given as an estimate. The estimate ignores the duration of the
(assumed to be non-representative) first step whenever more steps are
available (i.e. `current > 1`).
Args:
current: Index of current step.
now: The current time.
Returns: Estimate of the duration of a single step.
"""
if current:
# there are a few special scenarios here:
# 1) somebody is calling the progress bar without ever supplying
# step 1
# 2) somebody is calling the progress bar and supplies step one
# multiple times, e.g. as part of a finalizing call
# in these cases, we just fall back to the simple calculation
if self._time_after_first_step is not None and current > 1:
time_per_unit = (now - self._time_after_first_step) / (
current - 1
)
else:
time_per_unit = (now - self._start) / current
if current == 1:
self._time_after_first_step = now
return time_per_unit
else:
return 0
| keras/keras/utils/progbar.py/0 | {
"file_path": "keras/keras/utils/progbar.py",
"repo_id": "keras",
"token_count": 5311
} | 162 |
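# Standalone sketch of the ETA formatting rules inside `Progbar.update`
# above: durations over an hour render as H:MM:SS, over a minute as M:SS,
# and otherwise as whole seconds.
def format_eta(eta):
    if eta > 3600:
        return "%d:%02d:%02d" % (eta // 3600, (eta % 3600) // 60, eta % 60)
    elif eta > 60:
        return "%d:%02d" % (eta // 60, eta % 60)
    return "%ds" % eta

assert format_eta(3725) == "1:02:05"
assert format_eta(65) == "1:05"
assert format_eta(9) == "9s"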
import os
import numpy as np
import pytest
import torch
from absl.testing import parameterized
from keras import backend
from keras import layers
from keras import models
from keras import saving
from keras import testing
from keras.utils.torch_utils import TorchModuleWrapper
class Classifier(models.Model):
def __init__(
self, use_batch_norm=False, num_torch_layers=1, *args, **kwargs
):
super().__init__(*args, **kwargs)
self.use_batch_norm = use_batch_norm
self.num_torch_layers = num_torch_layers
self.torch_wrappers = []
for _ in range(num_torch_layers):
modules = [torch.nn.Linear(2, 2)]
if use_batch_norm:
modules.append(torch.nn.BatchNorm1d(2))
torch_model = torch.nn.Sequential(*modules)
self.torch_wrappers.append(TorchModuleWrapper(torch_model))
self.fc = layers.Dense(1)
def call(self, x):
for wrapper in self.torch_wrappers:
x = wrapper(x)
return self.fc(x)
def get_config(self):
config = super().get_config()
config["use_batch_norm"] = self.use_batch_norm
config["num_torch_layers"] = self.num_torch_layers
return config
class ClassifierWithNoSpecialCasing(models.Model):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.fc1 = torch.nn.Linear(2, 4)
self.bn1 = torch.nn.BatchNorm1d(4)
self.fc2 = torch.nn.Linear(4, 4)
self.fc3 = layers.Dense(2)
def call(self, x):
return self.fc3(self.fc2(self.bn1(self.fc1(x))))
@pytest.mark.skipif(
backend.backend() != "torch", reason="Requires torch backend"
)
class TorchUtilsTest(testing.TestCase, parameterized.TestCase):
@parameterized.parameters(
{"use_batch_norm": False, "num_torch_layers": 1},
{"use_batch_norm": True, "num_torch_layers": 1},
)
def test_basic_usage(self, use_batch_norm, num_torch_layers):
model = Classifier(use_batch_norm, num_torch_layers)
self.assertEqual(len(model.layers), 2)
# Linear - Weights, bias, BN - beta, gamma
torch_trainable_count = 0
for i, layer in zip(range(num_torch_layers), model.torch_wrappers):
layer_trainable_count = 2
if use_batch_norm:
layer_trainable_count += 2
self.assertEqual(
len(layer.trainable_weights), layer_trainable_count
)
torch_trainable_count += layer_trainable_count
model(np.random.random((3, 2)))
self.assertEqual(len(model.layers), 2 * num_torch_layers)
self.assertEqual(
len(model.trainable_weights), torch_trainable_count + 2
)
model.compile(optimizer="sgd", loss="mse")
model.fit(np.random.random((3, 2)), np.random.random((3, 1)))
def test_module_autowrapping(self):
model = ClassifierWithNoSpecialCasing()
self.assertIsInstance(model.fc1, TorchModuleWrapper)
self.assertIsInstance(model.bn1, TorchModuleWrapper)
self.assertIsInstance(model.fc2, TorchModuleWrapper)
self.assertFalse(isinstance(model.fc3, TorchModuleWrapper))
self.assertEqual(len(model.fc1.trainable_weights), 2)
self.assertEqual(len(model.bn1.trainable_weights), 2)
self.assertEqual(len(model.fc2.trainable_weights), 2)
model(np.random.random((3, 2)))
self.assertEqual(len(model.layers), 4)
self.assertEqual(len(model.fc3.trainable_weights), 2)
self.assertEqual(len(model.trainable_weights), 8)
model.compile(optimizer="sgd", loss="mse")
model.fit(np.random.random((3, 2)), np.random.random((3, 2)))
def test_load_weights_autowrapping(self):
# Test loading weights
temp_filepath = os.path.join(self.get_temp_dir(), "mymodel.weights.h5")
model = ClassifierWithNoSpecialCasing()
model.compile(optimizer="sgd", loss="mse")
x, y = np.random.random((3, 2)), np.random.random((3, 1))
x_test, y_test = np.random.random((3, 2)), np.random.random((3, 1))
model.fit(x, y)
ref_loss = model.evaluate(x_test, y_test)
model.save_weights(temp_filepath)
new_model = ClassifierWithNoSpecialCasing()
new_model(np.random.random((3, 2)))
new_model.compile(optimizer="sgd", loss="mse")
new_model.load_weights(temp_filepath)
for ref_w, new_w in zip(model.get_weights(), new_model.get_weights()):
self.assertAllClose(ref_w, new_w, atol=1e-5)
loss = new_model.evaluate(x_test, y_test)
self.assertAllClose(ref_loss, loss, atol=1e-5)
def test_serialize_model_autowrapping(self):
# Test loading saved model
temp_filepath = os.path.join(self.get_temp_dir(), "mymodel.keras")
model = ClassifierWithNoSpecialCasing()
model.compile(optimizer="sgd", loss="mse")
x, y = np.random.random((3, 2)), np.random.random((3, 1))
x_test, y_test = np.random.random((3, 2)), np.random.random((3, 1))
model.fit(x, y)
ref_loss = model.evaluate(x_test, y_test)
model.save(temp_filepath)
new_model = saving.load_model(temp_filepath)
for ref_w, new_w in zip(model.get_weights(), new_model.get_weights()):
self.assertAllClose(ref_w, new_w, atol=1e-5)
loss = new_model.evaluate(x_test, y_test)
self.assertAllClose(ref_loss, loss, atol=1e-5)
@parameterized.parameters(
{"use_batch_norm": False, "num_torch_layers": 1},
{"use_batch_norm": True, "num_torch_layers": 1},
{"use_batch_norm": False, "num_torch_layers": 2},
{"use_batch_norm": True, "num_torch_layers": 2},
)
def test_load_weights(self, use_batch_norm, num_torch_layers):
# Test loading weights
temp_filepath = os.path.join(self.get_temp_dir(), "mymodel.weights.h5")
model = Classifier(use_batch_norm, num_torch_layers)
model.compile(optimizer="sgd", loss="mse")
x, y = np.random.random((3, 2)), np.random.random((3, 1))
x_test, y_test = np.random.random((3, 2)), np.random.random((3, 1))
model.fit(x, y)
ref_loss = model.evaluate(x_test, y_test)
model.save_weights(temp_filepath)
new_model = Classifier(use_batch_norm, num_torch_layers)
new_model(np.random.random((3, 2)))
new_model.compile(optimizer="sgd", loss="mse")
new_model.load_weights(temp_filepath)
for ref_w, new_w in zip(model.get_weights(), new_model.get_weights()):
self.assertAllClose(ref_w, new_w, atol=1e-5)
loss = new_model.evaluate(x_test, y_test)
self.assertAllClose(ref_loss, loss, atol=1e-5)
@parameterized.parameters(
{"use_batch_norm": False, "num_torch_layers": 1},
{"use_batch_norm": True, "num_torch_layers": 1},
{"use_batch_norm": False, "num_torch_layers": 2},
{"use_batch_norm": True, "num_torch_layers": 2},
)
def test_serialize_model(self, use_batch_norm, num_torch_layers):
# Test loading saved model
temp_filepath = os.path.join(self.get_temp_dir(), "mymodel.keras")
model = Classifier(use_batch_norm, num_torch_layers)
model.compile(optimizer="sgd", loss="mse")
x, y = np.random.random((3, 2)), np.random.random((3, 1))
x_test, y_test = np.random.random((3, 2)), np.random.random((3, 1))
model.fit(x, y)
ref_loss = model.evaluate(x_test, y_test)
model.save(temp_filepath)
new_model = saving.load_model(temp_filepath)
for ref_w, new_w in zip(model.get_weights(), new_model.get_weights()):
self.assertAllClose(ref_w, new_w, atol=1e-5)
loss = new_model.evaluate(x_test, y_test)
self.assertAllClose(ref_loss, loss, atol=1e-5)
def test_from_config(self):
module = torch.nn.Sequential(torch.nn.Linear(2, 4))
mw = TorchModuleWrapper(module)
config = mw.get_config()
new_mw = TorchModuleWrapper.from_config(config)
for ref_w, new_w in zip(mw.get_weights(), new_mw.get_weights()):
self.assertAllClose(ref_w, new_w, atol=1e-5)
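# A minimal sketch of the wrapper pattern exercised above (illustrative
# only, assuming the torch backend is available): `TorchModuleWrapper`
# exposes a `torch.nn.Module`'s parameters to Keras as trainable weights,
# so the wrapped module behaves like any other layer:
#
#   module = torch.nn.Linear(2, 4)
#   wrapped = TorchModuleWrapper(module)
#   y = wrapped(np.random.random((3, 2)).astype("float32"))
#   assert len(wrapped.trainable_weights) == 2  # kernel and bias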
| keras/keras/utils/torch_utils_test.py/0 | {
"file_path": "keras/keras/utils/torch_utils_test.py",
"repo_id": "keras",
"token_count": 3789
} | 163 |
# TensorFlow Bazel configuration file.
# This file tries to group and simplify build options for TensorFlow
#
# ----CONFIG OPTIONS----
#
# Other build options:
# short_logs: Only log errors during build, skip warnings.
# verbose_logs: Show all compiler warnings during build.
# monolithic: Build all TF C++ code into a single shared object.
# dynamic_kernels: Try to link all kernels dynamically (experimental).
# libc++: Link against libc++ instead of libstdc++
#
# TF version options:
# v1: Build TF V1 (without contrib)
# v2: Build TF v2
#
# Feature and Third party library support options:
# xla: Build TF with XLA
# tpu: Build TF with TPU support
# using_cuda: CUDA is available to build system.
# cuda: Build with full cuda support.
# rocm: Build with AMD GPU support (rocm).
# mkl: Enable full mkl support.
# tensorrt: Enable Tensorrt support.
# numa: Enable numa using hwloc.
# noaws: Disable AWS S3 storage support
# nogcp: Disable GCS support.
# nohdfs: Disable hadoop hdfs support.
# nonccl: Disable nccl support.
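# Example invocations combining the configs above (illustrative only; the
# target pattern is a placeholder):
#   bazel build --config=cuda --config=xla //tf_keras/...
#   bazel test --config=v2 --config=short_logs //tf_keras/...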
# Sets the default Apple platform to macOS.
build --apple_platform_type=macos
# Flags for the open source build; always set to true.
build --define open_source_build=true
test --define open_source_build=true
# Work around the use_fast_cpp_protos issue in protobuf deps.
build --define=use_fast_cpp_protos=false
test --define=use_fast_cpp_protos=false
# This config refers to building with CUDA available. It does not necessarily
# mean that we build CUDA op kernels.
build:using_cuda --define=using_cuda=true
build:using_cuda --action_env TF_NEED_CUDA=1
build:using_cuda --crosstool_top=@local_config_cuda//crosstool:toolchain
# Enable the mlir generated GPU kernels only for cuda builds.
build --define=tensorflow_enable_mlir_generated_gpu_kernels=0
# This is a more specific option, so it takes precedence over the line above for cuda builds.
build:using_cuda --define=tensorflow_enable_mlir_generated_gpu_kernels=1
# This config refers to building CUDA op kernels with nvcc.
build:cuda --config=using_cuda
build:cuda --define=using_cuda_nvcc=true
# dbg config, as a shorthand for '--config=opt -c dbg'
build:dbg --config=opt -c dbg
# for now, disable arm_neon. see: https://github.com/tensorflow/tensorflow/issues/33360
build:dbg --cxxopt -DTF_LITE_DISABLE_X86_NEON
# AWS SDK must be compiled in release mode. see: https://github.com/tensorflow/tensorflow/issues/37498
build:dbg --copt -DDEBUG_BUILD
build:tensorrt --action_env TF_NEED_TENSORRT=1
build:rocm --crosstool_top=@local_config_rocm//crosstool:toolchain
build:rocm --define=using_rocm=true --define=using_rocm_hipcc=true
build:rocm --action_env TF_NEED_ROCM=1
# Options extracted from configure script
build:numa --define=with_numa_support=true
# Options to disable default-on features
build:noaws --define=no_aws_support=true
build:nogcp --define=no_gcp_support=true
build:nohdfs --define=no_hdfs_support=true
build:nonccl --define=no_nccl_support=true
build --define=allow_oversize_protos=true
build --spawn_strategy=standalone
build -c opt
# Make Bazel print out all options from rc files.
build --announce_rc
# Other build flags.
build --define=grpc_no_ares=true
build:linux --copt=-w
build:linux --host_copt=-w
build:macos --copt=-w
build:windows --copt=/W0
# Tensorflow uses M_* math constants that only get defined by MSVC headers if
# _USE_MATH_DEFINES is defined.
build:windows --copt=/D_USE_MATH_DEFINES
build:windows --host_copt=/D_USE_MATH_DEFINES
# Default paths for TF_SYSTEM_LIBS
build:linux --define=PREFIX=/usr
build:linux --define=LIBDIR=$(PREFIX)/lib
build:linux --define=INCLUDEDIR=$(PREFIX)/include
build:linux --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include
build:macos --define=PREFIX=/usr
build:macos --define=LIBDIR=$(PREFIX)/lib
build:macos --define=INCLUDEDIR=$(PREFIX)/include
build:macos --define=PROTOBUF_INCLUDE_PATH=$(PREFIX)/include
# TF_SYSTEM_LIBS do not work on windows.
# On windows, we still link everything into a single DLL.
build:windows --config=monolithic
# On Linux, we dynamically link a small number of kernels
build:linux --config=dynamic_kernels
# Make sure to include as little of windows.h as possible
build:windows --copt=-DWIN32_LEAN_AND_MEAN
build:windows --host_copt=-DWIN32_LEAN_AND_MEAN
build:windows --copt=-DNOGDI
build:windows --host_copt=-DNOGDI
# MSVC (Windows): Standards-conformant preprocessor mode
# See https://docs.microsoft.com/en-us/cpp/preprocessor/preprocessor-experimental-overview
build:windows --copt=/experimental:preprocessor
build:windows --host_copt=/experimental:preprocessor
# Misc build options we need for windows.
build:windows --linkopt=/DEBUG
build:windows --host_linkopt=/DEBUG
build:windows --linkopt=/OPT:REF
build:windows --host_linkopt=/OPT:REF
build:windows --linkopt=/OPT:ICF
build:windows --host_linkopt=/OPT:ICF
build:windows --experimental_strict_action_env=true
# Verbose failure logs when something goes wrong
build:windows --verbose_failures
# Suppress all warning messages.
build:short_logs --output_filter=DONT_MATCH_ANYTHING
build:verbose_logs --output_filter=
build --config=short_logs
# Options to build TensorFlow 1.x or 2.x.
build:v1 --define=tf_api_version=1
build:v2 --define=tf_api_version=2
build:v1 --action_env=TF2_BEHAVIOR=0
build:v2 --action_env=TF2_BEHAVIOR=1
build --config=v2
test --config=v2
# Enable XLA
build:xla --define=with_xla_support=true
| tf-keras/.bazelrc/0 | {
"file_path": "tf-keras/.bazelrc",
"repo_id": "tf-keras",
"token_count": 2028
} | 164 |
"""Keras API __init__.py files."""
# keep sorted
KERAS_API_INIT_FILES = [
"__init__.py",
"keras/__init__.py",
"keras/__internal__/__init__.py",
"keras/__internal__/backend/__init__.py",
"keras/__internal__/layers/__init__.py",
"keras/__internal__/losses/__init__.py",
"keras/__internal__/models/__init__.py",
"keras/__internal__/optimizers/__init__.py",
"keras/__internal__/utils/__init__.py",
"keras/activations/__init__.py",
"keras/applications/__init__.py",
"keras/applications/convnext/__init__.py",
"keras/applications/densenet/__init__.py",
"keras/applications/efficientnet/__init__.py",
"keras/applications/efficientnet_v2/__init__.py",
"keras/applications/imagenet_utils/__init__.py",
"keras/applications/inception_resnet_v2/__init__.py",
"keras/applications/inception_v3/__init__.py",
"keras/applications/mobilenet/__init__.py",
"keras/applications/mobilenet_v2/__init__.py",
"keras/applications/mobilenet_v3/__init__.py",
"keras/applications/nasnet/__init__.py",
"keras/applications/regnet/__init__.py",
"keras/applications/resnet/__init__.py",
"keras/applications/resnet50/__init__.py",
"keras/applications/resnet_rs/__init__.py",
"keras/applications/resnet_v2/__init__.py",
"keras/applications/vgg16/__init__.py",
"keras/applications/vgg19/__init__.py",
"keras/applications/xception/__init__.py",
"keras/backend/__init__.py",
"keras/backend/experimental/__init__.py",
"keras/callbacks/__init__.py",
"keras/callbacks/experimental/__init__.py",
"keras/constraints/__init__.py",
"keras/datasets/__init__.py",
"keras/datasets/boston_housing/__init__.py",
"keras/datasets/cifar10/__init__.py",
"keras/datasets/cifar100/__init__.py",
"keras/datasets/fashion_mnist/__init__.py",
"keras/datasets/imdb/__init__.py",
"keras/datasets/mnist/__init__.py",
"keras/datasets/reuters/__init__.py",
"keras/dtensor/__init__.py",
"keras/dtensor/experimental/__init__.py",
"keras/dtensor/experimental/optimizers/__init__.py",
"keras/estimator/__init__.py",
"keras/experimental/__init__.py",
"keras/export/__init__.py",
# Placeholder for internal API
"keras/initializers/__init__.py",
"keras/layers/__init__.py",
"keras/layers/experimental/__init__.py",
"keras/layers/experimental/preprocessing/__init__.py",
"keras/losses/__init__.py",
"keras/metrics/__init__.py",
"keras/metrics/experimental/__init__.py",
"keras/mixed_precision/__init__.py",
"keras/models/__init__.py",
"keras/models/experimental/__init__.py",
"keras/optimizers/__init__.py",
"keras/optimizers/experimental/__init__.py",
"keras/optimizers/legacy/__init__.py",
"keras/optimizers/schedules/__init__.py",
"keras/premade/__init__.py",
"keras/preprocessing/__init__.py",
"keras/preprocessing/image/__init__.py",
"keras/preprocessing/sequence/__init__.py",
"keras/preprocessing/text/__init__.py",
"keras/regularizers/__init__.py",
"keras/saving/__init__.py",
"keras/utils/__init__.py",
"keras/utils/experimental/__init__.py",
"keras/utils/legacy/__init__.py",
"keras/wrappers/__init__.py",
"keras/wrappers/scikit_learn/__init__.py",
]
KERAS_API_INIT_FILES_V1 = [
"__init__.py",
"keras/__init__.py",
"keras/__internal__/__init__.py",
"keras/__internal__/legacy/__init__.py",
"keras/__internal__/legacy/layers/__init__.py",
"keras/__internal__/layers/__init__.py",
"keras/__internal__/legacy/layers/experimental/__init__.py",
"keras/__internal__/legacy/rnn_cell/__init__.py",
"keras/activations/__init__.py",
"keras/applications/__init__.py",
"keras/applications/convnext/__init__.py",
"keras/applications/densenet/__init__.py",
"keras/applications/efficientnet/__init__.py",
"keras/applications/efficientnet_v2/__init__.py",
"keras/applications/imagenet_utils/__init__.py",
"keras/applications/inception_resnet_v2/__init__.py",
"keras/applications/inception_v3/__init__.py",
"keras/applications/mobilenet/__init__.py",
"keras/applications/mobilenet_v2/__init__.py",
"keras/applications/mobilenet_v3/__init__.py",
"keras/applications/nasnet/__init__.py",
"keras/applications/regnet/__init__.py",
"keras/applications/resnet/__init__.py",
"keras/applications/resnet_v2/__init__.py",
"keras/applications/resnet50/__init__.py",
"keras/applications/resnet_rs/__init__.py",
"keras/applications/vgg16/__init__.py",
"keras/applications/vgg19/__init__.py",
"keras/applications/xception/__init__.py",
"keras/backend/__init__.py",
"keras/callbacks/__init__.py",
"keras/callbacks/experimental/__init__.py",
"keras/constraints/__init__.py",
"keras/datasets/__init__.py",
"keras/datasets/boston_housing/__init__.py",
"keras/datasets/cifar10/__init__.py",
"keras/datasets/cifar100/__init__.py",
"keras/datasets/fashion_mnist/__init__.py",
"keras/datasets/imdb/__init__.py",
"keras/datasets/mnist/__init__.py",
"keras/datasets/reuters/__init__.py",
"keras/estimator/__init__.py",
"keras/experimental/__init__.py",
"keras/export/__init__.py",
"keras/initializers/__init__.py",
"keras/layers/__init__.py",
"keras/layers/experimental/__init__.py",
"keras/layers/experimental/preprocessing/__init__.py",
"keras/losses/__init__.py",
"keras/metrics/__init__.py",
"keras/mixed_precision/__init__.py",
"keras/models/__init__.py",
"keras/optimizers/__init__.py",
"keras/optimizers/schedules/__init__.py",
"keras/optimizers/legacy/__init__.py",
"keras/premade/__init__.py",
"keras/preprocessing/__init__.py",
"keras/preprocessing/image/__init__.py",
"keras/preprocessing/sequence/__init__.py",
"keras/preprocessing/text/__init__.py",
"keras/regularizers/__init__.py",
"keras/saving/__init__.py",
"keras/utils/__init__.py",
"keras/utils/legacy/__init__.py",
"keras/wrappers/__init__.py",
"keras/wrappers/scikit_learn/__init__.py",
]
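# Sketch of how a consuming macro might iterate over these lists to declare
# one generated __init__.py target per entry (illustrative only; the real
# rules live in the repo's build files):
#
#   def generate_api_init_files(name, init_files = KERAS_API_INIT_FILES):
#       for init_file in init_files:
#           # declare a genrule (or similar) that emits `init_file`
#           ...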
| tf-keras/tf_keras/api/api_init_files.bzl/0 | {
"file_path": "tf-keras/tf_keras/api/api_init_files.bzl",
"repo_id": "tf-keras",
"token_count": 2888
} | 165 |
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Inception-ResNet V2 model for TF-Keras.
Reference:
- [Inception-v4, Inception-ResNet and the Impact of
Residual Connections on Learning](https://arxiv.org/abs/1602.07261)
(AAAI 2017)
"""
import tensorflow.compat.v2 as tf
import tf_keras as keras
from tf_keras import backend
from tf_keras import layers as keras_layers
from tf_keras.applications import imagenet_utils
from tf_keras.engine import training
from tf_keras.layers import VersionAwareLayers
from tf_keras.utils import data_utils
from tf_keras.utils import layer_utils
# isort: off
from tensorflow.python.util.tf_export import keras_export
BASE_WEIGHT_URL = (
"https://storage.googleapis.com/tensorflow/"
"keras-applications/inception_resnet_v2/"
)
layers = None
@keras_export(
"keras.applications.inception_resnet_v2.InceptionResNetV2",
"keras.applications.InceptionResNetV2",
)
def InceptionResNetV2(
include_top=True,
weights="imagenet",
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation="softmax",
**kwargs,
):
"""Instantiates the Inception-ResNet v2 architecture.
Reference:
- [Inception-v4, Inception-ResNet and the Impact of
Residual Connections on Learning](https://arxiv.org/abs/1602.07261)
(AAAI 2017)
This function returns a TF-Keras image classification model,
optionally loaded with weights pre-trained on ImageNet.
For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).
Note: each TF-Keras Application expects a specific kind of input
preprocessing. For InceptionResNetV2, call
`tf.keras.applications.inception_resnet_v2.preprocess_input`
on your inputs before passing them to the model.
`inception_resnet_v2.preprocess_input`
will scale input pixels between -1 and 1.
Args:
include_top: whether to include the fully-connected
layer at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional TF-Keras tensor (i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is `False` (otherwise the input shape
has to be `(299, 299, 3)` (with `'channels_last'` data format)
or `(3, 299, 299)` (with `'channels_first'` data format)).
It should have exactly 3 input channels,
and width and height should be no smaller than 75.
E.g. `(150, 150, 3)` would be one valid value.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the last convolutional block.
- `'avg'` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `'max'` means that global max pooling will be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is `True`, and
if no `weights` argument is specified.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
**kwargs: For backwards compatibility only.
Returns:
A `keras.Model` instance.
"""
global layers
if "layers" in kwargs:
layers = kwargs.pop("layers")
else:
layers = VersionAwareLayers()
if kwargs:
raise ValueError(f"Unknown argument(s): {kwargs}")
if not (weights in {"imagenet", None} or tf.io.gfile.exists(weights)):
raise ValueError(
"The `weights` argument should be either "
"`None` (random initialization), `imagenet` "
"(pre-training on ImageNet), "
"or the path to the weights file to be loaded."
)
if weights == "imagenet" and include_top and classes != 1000:
raise ValueError(
'If using `weights` as `"imagenet"` with `include_top`'
" as true, `classes` should be 1000"
)
# Determine proper input shape
input_shape = imagenet_utils.obtain_input_shape(
input_shape,
default_size=299,
min_size=75,
data_format=backend.image_data_format(),
require_flatten=include_top,
weights=weights,
)
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not backend.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
# Stem block: 35 x 35 x 192
x = conv2d_bn(img_input, 32, 3, strides=2, padding="valid")
x = conv2d_bn(x, 32, 3, padding="valid")
x = conv2d_bn(x, 64, 3)
x = layers.MaxPooling2D(3, strides=2)(x)
x = conv2d_bn(x, 80, 1, padding="valid")
x = conv2d_bn(x, 192, 3, padding="valid")
x = layers.MaxPooling2D(3, strides=2)(x)
# Mixed 5b (Inception-A block): 35 x 35 x 320
branch_0 = conv2d_bn(x, 96, 1)
branch_1 = conv2d_bn(x, 48, 1)
branch_1 = conv2d_bn(branch_1, 64, 5)
branch_2 = conv2d_bn(x, 64, 1)
branch_2 = conv2d_bn(branch_2, 96, 3)
branch_2 = conv2d_bn(branch_2, 96, 3)
branch_pool = layers.AveragePooling2D(3, strides=1, padding="same")(x)
branch_pool = conv2d_bn(branch_pool, 64, 1)
branches = [branch_0, branch_1, branch_2, branch_pool]
channel_axis = 1 if backend.image_data_format() == "channels_first" else 3
x = layers.Concatenate(axis=channel_axis, name="mixed_5b")(branches)
# 10x block35 (Inception-ResNet-A block): 35 x 35 x 320
for block_idx in range(1, 11):
x = inception_resnet_block(
x, scale=0.17, block_type="block35", block_idx=block_idx
)
# Mixed 6a (Reduction-A block): 17 x 17 x 1088
branch_0 = conv2d_bn(x, 384, 3, strides=2, padding="valid")
branch_1 = conv2d_bn(x, 256, 1)
branch_1 = conv2d_bn(branch_1, 256, 3)
branch_1 = conv2d_bn(branch_1, 384, 3, strides=2, padding="valid")
branch_pool = layers.MaxPooling2D(3, strides=2, padding="valid")(x)
branches = [branch_0, branch_1, branch_pool]
x = layers.Concatenate(axis=channel_axis, name="mixed_6a")(branches)
# 20x block17 (Inception-ResNet-B block): 17 x 17 x 1088
for block_idx in range(1, 21):
x = inception_resnet_block(
x, scale=0.1, block_type="block17", block_idx=block_idx
)
# Mixed 7a (Reduction-B block): 8 x 8 x 2080
branch_0 = conv2d_bn(x, 256, 1)
branch_0 = conv2d_bn(branch_0, 384, 3, strides=2, padding="valid")
branch_1 = conv2d_bn(x, 256, 1)
branch_1 = conv2d_bn(branch_1, 288, 3, strides=2, padding="valid")
branch_2 = conv2d_bn(x, 256, 1)
branch_2 = conv2d_bn(branch_2, 288, 3)
branch_2 = conv2d_bn(branch_2, 320, 3, strides=2, padding="valid")
branch_pool = layers.MaxPooling2D(3, strides=2, padding="valid")(x)
branches = [branch_0, branch_1, branch_2, branch_pool]
x = layers.Concatenate(axis=channel_axis, name="mixed_7a")(branches)
# 10x block8 (Inception-ResNet-C block): 8 x 8 x 2080
for block_idx in range(1, 10):
x = inception_resnet_block(
x, scale=0.2, block_type="block8", block_idx=block_idx
)
x = inception_resnet_block(
x, scale=1.0, activation=None, block_type="block8", block_idx=10
)
# Final convolution block: 8 x 8 x 1536
x = conv2d_bn(x, 1536, 1, name="conv_7b")
if include_top:
# Classification block
x = layers.GlobalAveragePooling2D(name="avg_pool")(x)
imagenet_utils.validate_activation(classifier_activation, weights)
x = layers.Dense(
classes, activation=classifier_activation, name="predictions"
)(x)
else:
if pooling == "avg":
x = layers.GlobalAveragePooling2D()(x)
elif pooling == "max":
x = layers.GlobalMaxPooling2D()(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = layer_utils.get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = training.Model(inputs, x, name="inception_resnet_v2")
# Load weights.
if weights == "imagenet":
if include_top:
fname = "inception_resnet_v2_weights_tf_dim_ordering_tf_kernels.h5"
weights_path = data_utils.get_file(
fname,
BASE_WEIGHT_URL + fname,
cache_subdir="models",
file_hash="e693bd0210a403b3192acc6073ad2e96",
)
else:
fname = (
"inception_resnet_v2_weights_"
"tf_dim_ordering_tf_kernels_notop.h5"
)
weights_path = data_utils.get_file(
fname,
BASE_WEIGHT_URL + fname,
cache_subdir="models",
file_hash="d19885ff4a710c122648d3b5c3b684e4",
)
model.load_weights(weights_path)
elif weights is not None:
model.load_weights(weights)
return model
def conv2d_bn(
x,
filters,
kernel_size,
strides=1,
padding="same",
activation="relu",
use_bias=False,
name=None,
):
"""Utility function to apply conv + BN.
Args:
x: input tensor.
filters: filters in `Conv2D`.
kernel_size: kernel size as in `Conv2D`.
strides: strides in `Conv2D`.
padding: padding mode in `Conv2D`.
activation: activation in `Conv2D`.
use_bias: whether to use a bias in `Conv2D`.
name: name of the ops; will become `name + '_ac'` for the activation
and `name + '_bn'` for the batch norm layer.
Returns:
Output tensor after applying `Conv2D` and `BatchNormalization`.
"""
x = layers.Conv2D(
filters,
kernel_size,
strides=strides,
padding=padding,
use_bias=use_bias,
name=name,
)(x)
if not use_bias:
bn_axis = 1 if backend.image_data_format() == "channels_first" else 3
bn_name = None if name is None else name + "_bn"
x = layers.BatchNormalization(axis=bn_axis, scale=False, name=bn_name)(
x
)
if activation is not None:
ac_name = None if name is None else name + "_ac"
x = layers.Activation(activation, name=ac_name)(x)
return x
@keras.utils.register_keras_serializable()
class CustomScaleLayer(keras_layers.Layer):
def __init__(self, scale, **kwargs):
super().__init__(**kwargs)
self.scale = scale
def get_config(self):
config = super().get_config()
config.update({"scale": self.scale})
return config
def call(self, inputs):
return inputs[0] + inputs[1] * self.scale
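# Registering CustomScaleLayer via `register_keras_serializable` lets the
# `x + scale * r` residual scaling round-trip through `get_config` when a
# model built from these blocks is saved and reloaded.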
def inception_resnet_block(x, scale, block_type, block_idx, activation="relu"):
"""Adds an Inception-ResNet block.
This function builds 3 types of Inception-ResNet blocks mentioned
in the paper, controlled by the `block_type` argument (which is the
block name used in the official TF-slim implementation):
- Inception-ResNet-A: `block_type='block35'`
- Inception-ResNet-B: `block_type='block17'`
- Inception-ResNet-C: `block_type='block8'`
Args:
x: input tensor.
scale: scaling factor to scale the residuals (i.e., the output of passing
`x` through an inception module) before adding them to the shortcut
branch. Let `r` be the output from the residual branch; the output of
this block will be `x + scale * r`.
block_type: `'block35'`, `'block17'` or `'block8'`, determines the network
structure in the residual branch.
block_idx: an `int` used for generating layer names. The Inception-ResNet
blocks are repeated many times in this network. We use `block_idx` to
identify each of the repetitions. For example, the first
Inception-ResNet-A block will have `block_type='block35', block_idx=0`,
and the layer names will have a common prefix `'block35_0'`.
activation: activation function to use at the end of the block (see
[activations](../activations.md)). When `activation=None`, no activation
is applied (i.e., "linear" activation: `a(x) = x`).
Returns:
Output tensor for the block.
Raises:
ValueError: if `block_type` is not one of `'block35'`,
`'block17'` or `'block8'`.
"""
if block_type == "block35":
branch_0 = conv2d_bn(x, 32, 1)
branch_1 = conv2d_bn(x, 32, 1)
branch_1 = conv2d_bn(branch_1, 32, 3)
branch_2 = conv2d_bn(x, 32, 1)
branch_2 = conv2d_bn(branch_2, 48, 3)
branch_2 = conv2d_bn(branch_2, 64, 3)
branches = [branch_0, branch_1, branch_2]
elif block_type == "block17":
branch_0 = conv2d_bn(x, 192, 1)
branch_1 = conv2d_bn(x, 128, 1)
branch_1 = conv2d_bn(branch_1, 160, [1, 7])
branch_1 = conv2d_bn(branch_1, 192, [7, 1])
branches = [branch_0, branch_1]
elif block_type == "block8":
branch_0 = conv2d_bn(x, 192, 1)
branch_1 = conv2d_bn(x, 192, 1)
branch_1 = conv2d_bn(branch_1, 224, [1, 3])
branch_1 = conv2d_bn(branch_1, 256, [3, 1])
branches = [branch_0, branch_1]
else:
raise ValueError(
"Unknown Inception-ResNet block type. "
'Expects "block35", "block17" or "block8", '
"but got: " + str(block_type)
)
block_name = block_type + "_" + str(block_idx)
channel_axis = 1 if backend.image_data_format() == "channels_first" else 3
mixed = layers.Concatenate(axis=channel_axis, name=block_name + "_mixed")(
branches
)
up = conv2d_bn(
mixed,
backend.int_shape(x)[channel_axis],
1,
activation=None,
use_bias=True,
name=block_name + "_conv",
)
x = CustomScaleLayer(scale)([x, up])
if activation is not None:
x = layers.Activation(activation, name=block_name + "_ac")(x)
return x
@keras_export("keras.applications.inception_resnet_v2.preprocess_input")
def preprocess_input(x, data_format=None):
return imagenet_utils.preprocess_input(
x, data_format=data_format, mode="tf"
)
@keras_export("keras.applications.inception_resnet_v2.decode_predictions")
def decode_predictions(preds, top=5):
return imagenet_utils.decode_predictions(preds, top=top)
preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format(
mode="",
ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_TF,
error=imagenet_utils.PREPROCESS_INPUT_ERROR_DOC,
)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__
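# Illustrative end-to-end usage sketch (not part of the original module;
# assumes network access to download the pretrained weights, and the random
# batch stands in for real images):
def _usage_example():
    import numpy as np

    model = InceptionResNetV2(weights="imagenet")
    images = np.random.uniform(0, 255, (1, 299, 299, 3)).astype("float32")
    preds = model.predict(preprocess_input(images))
    print(decode_predictions(preds, top=3))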
| tf-keras/tf_keras/applications/inception_resnet_v2.py/0 | {
"file_path": "tf-keras/tf_keras/applications/inception_resnet_v2.py",
"repo_id": "tf-keras",
"token_count": 6891
} | 166 |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for TF-Keras backend."""
import gc
import warnings
import numpy as np
import scipy.sparse
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
from tf_keras import activations
from tf_keras import backend
from tf_keras.engine import input_layer
from tf_keras.layers import activation
from tf_keras.layers.normalization import batch_normalization_v1
from tf_keras.testing_infra import test_combinations
from tf_keras.utils import losses_utils
from tf_keras.utils import tf_inspect
from tf_keras.utils import tf_utils
# isort: off
from tensorflow.python.eager import context
from tensorflow.python.eager.context import get_config
from tensorflow.python.framework import (
test_util as tf_test_utils,
)
def compare_single_input_op_to_numpy(
keras_op,
np_op,
input_shape,
dtype="float32",
negative_values=True,
keras_args=None,
keras_kwargs=None,
np_args=None,
np_kwargs=None,
):
keras_args = keras_args or []
keras_kwargs = keras_kwargs or {}
np_args = np_args or []
np_kwargs = np_kwargs or {}
inputs = 2.0 * np.random.random(input_shape)
if negative_values:
inputs -= 1.0
keras_output = keras_op(
backend.variable(inputs, dtype=dtype), *keras_args, **keras_kwargs
)
keras_output = backend.eval(keras_output)
np_output = np_op(inputs.astype(dtype), *np_args, **np_kwargs)
try:
np.testing.assert_allclose(keras_output, np_output, atol=1e-4)
except AssertionError:
raise AssertionError(
"Test for op `"
+ str(keras_op.__name__)
+ "` failed; Expected "
+ str(np_output)
+ " but got "
+ str(keras_output)
)
def compare_two_inputs_op_to_numpy(
keras_op,
np_op,
input_shape_a,
input_shape_b,
dtype="float32",
keras_args=None,
keras_kwargs=None,
np_args=None,
np_kwargs=None,
):
keras_args = keras_args or []
keras_kwargs = keras_kwargs or {}
np_args = np_args or []
np_kwargs = np_kwargs or {}
input_a = np.random.random(input_shape_a)
input_b = np.random.random(input_shape_b)
keras_output = keras_op(
backend.variable(input_a, dtype=dtype),
backend.variable(input_b, dtype=dtype),
*keras_args,
**keras_kwargs,
)
keras_output = backend.eval(keras_output)
np_output = np_op(
input_a.astype(dtype), input_b.astype(dtype), *np_args, **np_kwargs
)
try:
np.testing.assert_allclose(keras_output, np_output, atol=1e-4)
except AssertionError:
raise AssertionError(
"Test for op `"
+ str(keras_op.__name__)
+ "` failed; Expected "
+ str(np_output)
+ " but got "
+ str(keras_output)
)
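# For example (illustrative), an elementwise op can be verified in one line:
#   compare_single_input_op_to_numpy(backend.square, np.square, input_shape=(4, 7))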
class BackendResetTest(tf.test.TestCase, parameterized.TestCase):
def test_new_config(self):
# User defined jit setting
tf.config.optimizer.set_jit(False)
sess = backend.get_session()
default_config = get_config()
self.assertEqual(
sess._config.graph_options.optimizer_options.global_jit_level,
default_config.graph_options.optimizer_options.global_jit_level,
)
backend.clear_session()
# New session has the same jit setting
sess = backend.get_session()
default_config = get_config()
self.assertEqual(
sess._config.graph_options.optimizer_options.global_jit_level,
default_config.graph_options.optimizer_options.global_jit_level,
)
backend.clear_session()
# Change respected
tf.config.optimizer.set_jit(True)
sess = backend.get_session()
default_config = get_config()
self.assertEqual(
sess._config.graph_options.optimizer_options.global_jit_level,
default_config.graph_options.optimizer_options.global_jit_level,
)
backend.clear_session()
# We can't use the normal parameterized decorator because the test session
# will block graph clearing.
@parameterized.named_parameters(
("_v1", context.graph_mode),
("_v2", tf.__internal__.eager_context.eager_mode),
)
def test_new_graph(self, test_context):
with test_context():
g_old = backend.get_graph()
backend.clear_session()
g = backend.get_graph()
assert g_old is not g
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class BackendUtilsTest(tf.test.TestCase):
def test_backend(self):
self.assertEqual(backend.backend(), "tensorflow")
def test_get_reset_uids(self):
self.assertEqual(backend.get_uid("foo"), 1)
self.assertEqual(backend.get_uid("foo"), 2)
backend.reset_uids()
self.assertEqual(backend.get_uid("foo"), 1)
def test_learning_phase(self):
with self.cached_session() as sess:
with self.assertRaises(ValueError):
backend.set_learning_phase(2)
# Test running with a learning-phase-consuming layer
with backend.learning_phase_scope(0):
x = input_layer.Input((3,))
y = batch_normalization_v1.BatchNormalization()(x)
if not tf.executing_eagerly():
self.evaluate(tf.compat.v1.global_variables_initializer())
sess.run(y, feed_dict={x: np.random.random((2, 3))})
def test_get_learning_phase_eager(self):
if not tf.executing_eagerly():
self.skipTest("Check for eager only.")
# see b/251520266 for more details.
# By default the learning phase should be False
self.assertFalse(backend.learning_phase())
# Also make sure retrieving the learning phase doesn't set the default
# value
self.assertFalse(backend.global_learning_phase_is_set())
with backend.learning_phase_scope(1):
self.assertTrue(backend.learning_phase())
self.assertTrue(backend.global_learning_phase_is_set())
self.assertFalse(backend.global_learning_phase_is_set())
def test_learning_phase_name(self):
with backend.name_scope("test_scope"):
# Test that outer name scopes do not affect the learning phase's
# name.
lp = backend.symbolic_learning_phase()
self.assertEqual(lp.name, "keras_learning_phase:0")
def test_learning_phase_scope(self):
initial_learning_phase = backend.learning_phase()
with backend.learning_phase_scope(1):
self.assertEqual(backend.learning_phase(), 1)
self.assertEqual(backend.learning_phase(), initial_learning_phase)
with backend.learning_phase_scope(0):
self.assertEqual(backend.learning_phase(), 0)
self.assertEqual(backend.learning_phase(), initial_learning_phase)
with self.assertRaises(ValueError):
with backend.learning_phase_scope(None):
pass
self.assertEqual(backend.learning_phase(), initial_learning_phase)
new_learning_phase = 0
backend.set_learning_phase(new_learning_phase)
self.assertEqual(backend.learning_phase(), new_learning_phase)
with backend.learning_phase_scope(1):
self.assertEqual(backend.learning_phase(), 1)
self.assertEqual(backend.learning_phase(), new_learning_phase)
def test_learning_phase_scope_in_graph(self):
initial_learning_phase_outside_graph = backend.learning_phase()
with backend.get_graph().as_default():
initial_learning_phase_in_graph = backend.learning_phase()
self.assertEqual(
backend.learning_phase(), initial_learning_phase_outside_graph
)
with backend.learning_phase_scope(1):
self.assertEqual(backend.learning_phase(), 1)
self.assertEqual(
backend.learning_phase(), initial_learning_phase_outside_graph
)
with backend.get_graph().as_default():
self.assertIs(
backend.learning_phase(), initial_learning_phase_in_graph
)
self.assertEqual(
backend.learning_phase(), initial_learning_phase_outside_graph
)
def test_int_shape(self):
x = backend.ones(shape=(3, 4))
self.assertEqual(backend.int_shape(x), (3, 4))
if not tf.executing_eagerly():
x = backend.placeholder(shape=(None, 4))
self.assertEqual(backend.int_shape(x), (None, 4))
def test_in_train_phase(self):
y1 = backend.variable(1)
y2 = backend.variable(2)
if tf.executing_eagerly():
with backend.learning_phase_scope(0):
y_val_test = backend.in_train_phase(y1, y2).numpy()
with backend.learning_phase_scope(1):
y_val_train = backend.in_train_phase(y1, y2).numpy()
else:
y = backend.in_train_phase(y1, y2)
f = backend.function([backend.learning_phase()], [y])
y_val_test = f([0])[0]
y_val_train = f([1])[0]
self.assertAllClose(y_val_test, 2)
self.assertAllClose(y_val_train, 1)
def test_is_keras_tensor(self):
x = backend.variable(1)
self.assertEqual(backend.is_keras_tensor(x), False)
x = input_layer.Input(shape=(1,))
self.assertEqual(backend.is_keras_tensor(x), True)
x = input_layer.Input(shape=(None,), ragged=True)
self.assertEqual(backend.is_keras_tensor(x), True)
x = input_layer.Input(shape=(None, None), sparse=True)
self.assertEqual(backend.is_keras_tensor(x), True)
with self.assertRaises(ValueError):
backend.is_keras_tensor(0)
def test_stop_gradient(self):
x = backend.variable(1)
y = backend.stop_gradient(x)
if not tf.executing_eagerly():
self.assertEqual(y.op.name[:12], "StopGradient")
xs = [backend.variable(1) for _ in range(3)]
ys = backend.stop_gradient(xs)
if not tf.executing_eagerly():
for y in ys:
self.assertEqual(y.op.name[:12], "StopGradient")
def test_placeholder(self):
x = backend.placeholder(shape=(3, 4))
self.assertEqual(x.shape.as_list(), [3, 4])
x = backend.placeholder(shape=(3, 4), sparse=True)
self.assertEqual(x.shape.as_list(), [3, 4])
def test_is_placeholder(self):
x = backend.placeholder(shape=(1,))
self.assertEqual(backend.is_placeholder(x), True)
x = backend.variable(1)
self.assertEqual(backend.is_placeholder(x), False)
def test_print_tensor(self):
# Unfortunately it seems impossible to use `mock` (or any other method)
# to capture stdout when used inside a graph or graph function, thus
# we cannot test correctness.
# The message gets correctly printed in practice.
x = backend.placeholder(shape=())
y = backend.print_tensor(x, f"eager={tf.executing_eagerly()}")
f = backend.function(x, y)
f(0)
def test_cast_to_floatx(self):
x = backend.variable(1, dtype="float64")
x = backend.cast_to_floatx(x)
self.assertEqual(x.dtype.name, "float32")
x = backend.cast_to_floatx(2)
self.assertEqual(x.dtype.name, "float32")
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class BackendVariableTest(tf.test.TestCase):
def test_zeros(self):
x = backend.zeros((3, 4))
val = backend.eval(x)
self.assertAllClose(val, np.zeros((3, 4)))
def test_ones(self):
x = backend.ones((3, 4))
val = backend.eval(x)
self.assertAllClose(val, np.ones((3, 4)))
def test_eye(self):
x = backend.eye(4)
val = backend.eval(x)
self.assertAllClose(val, np.eye(4))
def test_zeros_like(self):
x = backend.zeros((3, 4))
y = backend.zeros_like(x)
val = backend.eval(y)
self.assertAllClose(val, np.zeros((3, 4)))
def test_ones_like(self):
x = backend.zeros((3, 4))
y = backend.ones_like(x)
val = backend.eval(y)
self.assertAllClose(val, np.ones((3, 4)))
def test_random_uniform_variable(self):
x = backend.random_uniform_variable((30, 20), low=1.0, high=2.0, seed=0)
val = backend.eval(x)
self.assertAllClose(val.mean(), 1.5, atol=1e-1)
self.assertAllClose(val.max(), 2.0, atol=1e-1)
self.assertAllClose(val.min(), 1.0, atol=1e-1)
def test_random_normal_variable(self):
x = backend.random_normal_variable((30, 20), 1.0, 0.5, seed=0)
val = backend.eval(x)
self.assertAllClose(val.mean(), 1.0, atol=1e-1)
self.assertAllClose(val.std(), 0.5, atol=1e-1)
def test_count_params(self):
x = backend.zeros((4, 5))
val = backend.count_params(x)
self.assertAllClose(val, 20)
def test_constant(self):
ref_val = np.random.random((3, 4)).astype("float32")
x = backend.constant(ref_val)
val = backend.eval(x)
self.assertAllClose(val, ref_val)
def test_sparse_variable(self):
val = scipy.sparse.eye(10)
x = backend.variable(val)
self.assertTrue(isinstance(x, tf.SparseTensor))
y = backend.to_dense(x)
self.assertFalse(backend.is_sparse(y))
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class BackendLinearAlgebraTest(tf.test.TestCase, parameterized.TestCase):
def test_dot(self):
x = backend.ones(shape=(2, 3))
y = backend.ones(shape=(3, 4))
xy = backend.dot(x, y)
self.assertEqual(xy.shape.as_list(), [2, 4])
x = backend.ones(shape=(32, 28, 3))
y = backend.ones(shape=(3, 4))
xy = backend.dot(x, y)
self.assertEqual(xy.shape.as_list(), [32, 28, 4])
@parameterized.parameters(
[(2, 3, 4, 5), (2, 5, 6, 7), (2, 3, 4, 6, 7), (3, 1)],
[(2, 20, 1), (2, 30, 20), (2, 1, 30), (1, 2)],
[(4, 2, 3), (4, 5, 3), (4, 2, 5), (2, 2)],
[(4, 2), (4, 2, 3), (4, 3), (1, 1)],
[(4, 2), (4, 2, 3), (4, 3), 1],
[(4, 2, 3), (4, 3), (4, 2), (2, 1)],
)
def test_batch_dot(self, x_shape, y_shape, output_shape, axes):
x_val = np.random.random(x_shape)
y_val = np.random.random(y_shape)
x = backend.variable(x_val)
y = backend.variable(y_val)
xy = backend.batch_dot(x, y, axes=axes)
self.assertEqual(tuple(xy.shape.as_list()), output_shape)
xy_val = backend.eval(xy)
ref_val = self._reference_batch_dot(x_val, y_val, axes)
self.assertAllClose(xy_val, ref_val, atol=1e-5)
def _reference_batch_dot(self, x, y, axes):
if isinstance(axes, int):
axes = [axes, axes]
elif isinstance(axes, tuple):
axes = list(axes)
if axes is None:
if y.ndim == 2:
axes = [x.ndim - 1, y.ndim - 1]
else:
axes = [x.ndim - 1, y.ndim - 2]
if axes[0] < 0:
axes[0] += x.ndim
if axes[1] < 0:
axes[1] += y.ndim
result = []
axes = [axes[0] - 1, axes[1] - 1]
for xi, yi in zip(x, y):
result.append(np.tensordot(xi, yi, axes))
result = np.array(result)
if result.ndim == 1:
result = np.expand_dims(result, -1)
return result
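# Shape example: with x of shape (2, 20, 1) and y of shape (2, 30, 20),
# backend.batch_dot(x, y, axes=(1, 2)) contracts axis 1 of x against axis 2
# of y, yielding shape (2, 1, 30), the second parameterized case above.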
def test_reduction_ops(self):
ops_to_test = [
(backend.max, np.max),
(backend.min, np.min),
(backend.sum, np.sum),
(backend.prod, np.prod),
(backend.var, np.var),
(backend.std, np.std),
(backend.mean, np.mean),
(backend.argmin, np.argmin),
(backend.argmax, np.argmax),
]
for keras_op, np_op in ops_to_test:
compare_single_input_op_to_numpy(
keras_op,
np_op,
input_shape=(4, 7, 5),
keras_kwargs={"axis": 1},
np_kwargs={"axis": 1},
)
compare_single_input_op_to_numpy(
keras_op,
np_op,
input_shape=(4, 7, 5),
keras_kwargs={"axis": -1},
np_kwargs={"axis": -1},
)
if "keepdims" in tf_inspect.getargspec(keras_op).args:
compare_single_input_op_to_numpy(
keras_op,
np_op,
input_shape=(4, 7, 5),
keras_kwargs={"axis": 1, "keepdims": True},
np_kwargs={"axis": 1, "keepdims": True},
)
def test_elementwise_ops(self):
ops_to_test = [
(backend.square, np.square),
(backend.abs, np.abs),
(backend.round, np.round),
(backend.sign, np.sign),
(backend.sin, np.sin),
(backend.cos, np.cos),
(backend.exp, np.exp),
]
for keras_op, np_op in ops_to_test:
compare_single_input_op_to_numpy(
keras_op, np_op, input_shape=(4, 7)
)
ops_to_test = [
(backend.sqrt, np.sqrt),
(backend.log, np.log),
]
for keras_op, np_op in ops_to_test:
compare_single_input_op_to_numpy(
keras_op, np_op, input_shape=(4, 7), negative_values=False
)
compare_single_input_op_to_numpy(
backend.clip,
np.clip,
input_shape=(6, 4),
keras_kwargs={"min_value": 0.1, "max_value": 2.4},
np_kwargs={"a_min": 0.1, "a_max": 1.4},
)
compare_single_input_op_to_numpy(
backend.pow,
np.power,
input_shape=(6, 4),
keras_args=[3],
np_args=[3],
)
def test_two_tensor_ops(self):
ops_to_test = [
(backend.equal, np.equal),
(backend.not_equal, np.not_equal),
(backend.greater, np.greater),
(backend.greater_equal, np.greater_equal),
(backend.less, np.less),
(backend.less_equal, np.less_equal),
(backend.maximum, np.maximum),
(backend.minimum, np.minimum),
]
for keras_op, np_op in ops_to_test:
compare_two_inputs_op_to_numpy(
keras_op, np_op, input_shape_a=(4, 7), input_shape_b=(4, 7)
)
def test_relu(self):
x = tf.convert_to_tensor([[-4, 0], [2, 7]], "float32")
# standard relu
relu_op = backend.relu(x)
self.assertAllClose(backend.eval(relu_op), [[0, 0], [2, 7]])
# alpha (leaky relu used)
relu_op = backend.relu(x, alpha=0.5)
if not tf.executing_eagerly():
self.assertTrue("LeakyRelu" in relu_op.name)
self.assertAllClose(backend.eval(relu_op), [[-2, 0], [2, 7]])
# max_value < some elements
relu_op = backend.relu(x, max_value=5.0)
self.assertAllClose(backend.eval(relu_op), [[0, 0], [2, 5]])
# nn.relu6 used
relu_op = backend.relu(x, max_value=6.0)
if not tf.executing_eagerly():
self.assertTrue("Relu6" in relu_op.name) # uses tf.nn.relu6
self.assertAllClose(backend.eval(relu_op), [[0, 0], [2, 6]])
# max value > 6
relu_op = backend.relu(x, max_value=10.0)
self.assertAllClose(backend.eval(relu_op), [[0, 0], [2, 7]])
# max value is float
relu_op = backend.relu(x, max_value=4.3)
self.assertAllClose(backend.eval(relu_op), [[0, 0], [2, 4.3]])
# max value == 0
relu_op = backend.relu(x, max_value=0.0)
self.assertAllClose(backend.eval(relu_op), [[0, 0], [0, 0]])
# alpha and max_value
relu_op = backend.relu(x, alpha=0.25, max_value=3.0)
self.assertAllClose(backend.eval(relu_op), [[-1, 0], [2, 3]])
# threshold
relu_op = backend.relu(x, threshold=3)
self.assertAllClose(backend.eval(relu_op), [[0, 0], [0, 7]])
# threshold is float
relu_op = backend.relu(x, threshold=1.5)
self.assertAllClose(backend.eval(relu_op), [[0, 0], [2, 7]])
# threshold is negative
relu_op = backend.relu(x, threshold=-5)
self.assertAllClose(backend.eval(relu_op), [[-4, 0], [2, 7]])
# threshold and max_value
relu_op = backend.relu(x, threshold=3, max_value=5.0)
self.assertAllClose(backend.eval(relu_op), [[0, 0], [0, 5]])
# threshold and alpha
relu_op = backend.relu(x, alpha=0.25, threshold=4.0)
self.assertAllClose(backend.eval(relu_op), [[-2, -1], [-0.5, 7]])
# threshold, alpha, and max_value
relu_op = backend.relu(x, alpha=0.25, threshold=4.0, max_value=5.0)
self.assertAllClose(backend.eval(relu_op), [[-2, -1], [-0.5, 5]])
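# Worked check for the combined case above: with alpha=0.25 and
# threshold=4.0, sub-threshold values map to alpha * (x - threshold), so
# x=-4 gives 0.25 * (-8) = -2 and x=2 gives 0.25 * (-2) = -0.5, while x=7
# passes through and is then capped at max_value=5.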
# Test case for GitHub issue 35430, with integer dtype
x = input_layer.Input(shape=(), name="x", dtype="int64")
_ = activation.ReLU(max_value=100.0, dtype="int64")(x)
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class BackendShapeOpsTest(tf.test.TestCase):
def test_reshape(self):
compare_single_input_op_to_numpy(
backend.reshape,
np.reshape,
input_shape=(4, 7),
keras_args=[(2, 14)],
np_args=[(2, 14)],
)
def test_concatenate(self):
a = backend.variable(np.ones((1, 2, 3)))
b = backend.variable(np.ones((1, 2, 2)))
y = backend.concatenate([a, b], axis=-1)
self.assertEqual(y.shape.as_list(), [1, 2, 5])
def test_permute_dimensions(self):
compare_single_input_op_to_numpy(
backend.permute_dimensions,
np.transpose,
input_shape=(4, 7),
keras_args=[(1, 0)],
np_args=[(1, 0)],
)
def test_resize_images(self):
height_factor = 2
width_factor = 2
data_format = "channels_last"
x = backend.variable(np.ones((1, 2, 2, 3)))
y = backend.resize_images(x, height_factor, width_factor, data_format)
self.assertEqual(y.shape.as_list(), [1, 4, 4, 3])
data_format = "channels_first"
x = backend.variable(np.ones((1, 3, 2, 2)))
y = backend.resize_images(x, height_factor, width_factor, data_format)
self.assertEqual(y.shape.as_list(), [1, 3, 4, 4])
# Use with a dynamic axis:
if not tf.executing_eagerly():
x = backend.placeholder(shape=(1, 3, None, None))
y = backend.resize_images(
x, height_factor, width_factor, data_format
)
self.assertEqual(y.shape.as_list(), [1, 3, None, None])
# Invalid use:
with self.assertRaises(ValueError):
backend.resize_images(
x, height_factor, width_factor, data_format="unknown"
)
def test_resize_volumes(self):
height_factor = 2
width_factor = 2
depth_factor = 2
data_format = "channels_last"
x = backend.variable(np.ones((1, 2, 2, 2, 3)))
y = backend.resize_volumes(
x, depth_factor, height_factor, width_factor, data_format
)
self.assertEqual(y.shape.as_list(), [1, 4, 4, 4, 3])
data_format = "channels_first"
x = backend.variable(np.ones((1, 3, 2, 2, 2)))
y = backend.resize_volumes(
x, depth_factor, height_factor, width_factor, data_format
)
self.assertEqual(y.shape.as_list(), [1, 3, 4, 4, 4])
# Invalid use:
with self.assertRaises(ValueError):
backend.resize_volumes(
x,
depth_factor,
height_factor,
width_factor,
data_format="unknown",
)
def test_repeat_elements(self):
x = backend.variable(np.ones((1, 3, 2)))
y = backend.repeat_elements(x, 3, axis=1)
self.assertEqual(y.shape.as_list(), [1, 9, 2])
# Use with a dynamic axis:
if not tf.executing_eagerly():
x = backend.placeholder(shape=(2, None, 2))
y = backend.repeat_elements(x, 3, axis=1)
self.assertEqual(y.shape.as_list(), [2, None, 2])
def test_repeat(self):
x = backend.variable(np.ones((1, 3)))
y = backend.repeat(x, 2)
self.assertEqual(y.shape.as_list(), [1, 2, 3])
def test_flatten(self):
compare_single_input_op_to_numpy(
backend.flatten,
np.reshape,
input_shape=(4, 7, 6),
np_args=[(4 * 7 * 6,)],
)
def test_batch_flatten(self):
compare_single_input_op_to_numpy(
backend.batch_flatten,
np.reshape,
input_shape=(4, 7, 6),
np_args=[(4, 7 * 6)],
)
def test_temporal_padding(self):
def ref_op(x, padding):
shape = list(x.shape)
shape[1] += padding[0] + padding[1]
y = np.zeros(tuple(shape))
y[:, padding[0] : -padding[1], :] = x
return y
compare_single_input_op_to_numpy(
backend.temporal_padding,
ref_op,
input_shape=(4, 7, 6),
keras_args=[(2, 3)],
np_args=[(2, 3)],
)
def test_spatial_2d_padding(self):
def ref_op(x, padding, data_format="channels_last"):
shape = list(x.shape)
if data_format == "channels_last":
shape[1] += padding[0][0] + padding[0][1]
shape[2] += padding[1][0] + padding[1][1]
y = np.zeros(tuple(shape))
y[
:,
padding[0][0] : -padding[0][1],
padding[1][0] : -padding[1][1],
:,
] = x
else:
shape[2] += padding[0][0] + padding[0][1]
shape[3] += padding[1][0] + padding[1][1]
y = np.zeros(tuple(shape))
y[
:,
:,
padding[0][0] : -padding[0][1],
padding[1][0] : -padding[1][1],
] = x
return y
compare_single_input_op_to_numpy(
backend.spatial_2d_padding,
ref_op,
input_shape=(2, 3, 2, 3),
keras_args=[((2, 3), (1, 2))],
keras_kwargs={"data_format": "channels_last"},
np_args=[((2, 3), (1, 2))],
np_kwargs={"data_format": "channels_last"},
)
compare_single_input_op_to_numpy(
backend.spatial_2d_padding,
ref_op,
input_shape=(2, 3, 2, 3),
keras_args=[((2, 3), (1, 2))],
keras_kwargs={"data_format": "channels_first"},
np_args=[((2, 3), (1, 2))],
np_kwargs={"data_format": "channels_first"},
)
def test_spatial_3d_padding(self):
def ref_op(x, padding, data_format="channels_last"):
shape = list(x.shape)
if data_format == "channels_last":
shape[1] += padding[0][0] + padding[0][1]
shape[2] += padding[1][0] + padding[1][1]
shape[3] += padding[2][0] + padding[2][1]
y = np.zeros(tuple(shape))
y[
:,
padding[0][0] : -padding[0][1],
padding[1][0] : -padding[1][1],
padding[2][0] : -padding[2][1],
:,
] = x
else:
shape[2] += padding[0][0] + padding[0][1]
shape[3] += padding[1][0] + padding[1][1]
shape[4] += padding[2][0] + padding[2][1]
y = np.zeros(tuple(shape))
y[
:,
:,
padding[0][0] : -padding[0][1],
padding[1][0] : -padding[1][1],
padding[2][0] : -padding[2][1],
] = x
return y
compare_single_input_op_to_numpy(
backend.spatial_3d_padding,
ref_op,
input_shape=(2, 3, 2, 3, 2),
keras_args=[((2, 3), (1, 2), (2, 3))],
keras_kwargs={"data_format": "channels_last"},
np_args=[((2, 3), (1, 2), (2, 3))],
np_kwargs={"data_format": "channels_last"},
)
compare_single_input_op_to_numpy(
backend.spatial_3d_padding,
ref_op,
input_shape=(2, 3, 2, 3, 2),
keras_args=[((2, 3), (1, 2), (2, 3))],
keras_kwargs={"data_format": "channels_first"},
np_args=[((2, 3), (1, 2), (2, 3))],
np_kwargs={"data_format": "channels_first"},
)
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class BackendNNOpsTest(tf.test.TestCase, parameterized.TestCase):
def test_bias_add(self):
keras_op = backend.bias_add
np_op = np.add
compare_two_inputs_op_to_numpy(
keras_op, np_op, input_shape_a=(4, 7), input_shape_b=(7,)
)
compare_two_inputs_op_to_numpy(
keras_op, np_op, input_shape_a=(4, 3, 7), input_shape_b=(7,)
)
compare_two_inputs_op_to_numpy(
keras_op, np_op, input_shape_a=(4, 3, 5, 7), input_shape_b=(7,)
)
compare_two_inputs_op_to_numpy(
keras_op, np_op, input_shape_a=(4, 3, 5, 2, 7), input_shape_b=(7,)
)
with self.assertRaises((ValueError, tf.errors.InvalidArgumentError)):
x = backend.variable((3, 4))
b = backend.variable((3, 4))
backend.bias_add(x, b)
with self.assertRaises(ValueError):
x = backend.variable((3, 4))
b = backend.variable((4,))
backend.bias_add(x, b, data_format="unknown")
def test_bias_add_channels_first(self):
def keras_op(x, b):
return backend.bias_add(x, b, data_format="channels_first")
def np_op(x, b):
if x.ndim == 3:
b = b.reshape((1, b.shape[0], 1))
if x.ndim == 4:
b = b.reshape((1, b.shape[0], 1, 1))
return x + b
compare_two_inputs_op_to_numpy(
keras_op, np_op, input_shape_a=(4, 3, 7), input_shape_b=(3,)
)
compare_two_inputs_op_to_numpy(
keras_op, np_op, input_shape_a=(4, 3, 5, 7), input_shape_b=(3,)
)
def test_pool2d(self):
val = np.random.random((10, 3, 10, 10))
x = backend.variable(val)
y = backend.pool2d(
x,
(2, 2),
strides=(1, 1),
padding="valid",
data_format="channels_first",
pool_mode="max",
)
self.assertEqual(y.shape.as_list(), [10, 3, 9, 9])
y = backend.pool2d(
x,
(2, 2),
strides=(1, 1),
padding="valid",
data_format="channels_first",
pool_mode="avg",
)
self.assertEqual(y.shape.as_list(), [10, 3, 9, 9])
val = np.random.random((10, 10, 10, 3))
x = backend.variable(val)
y = backend.pool2d(
x,
(2, 2),
strides=(1, 1),
padding="valid",
data_format="channels_last",
)
self.assertEqual(y.shape.as_list(), [10, 9, 9, 3])
val = np.random.random((10, 10, 10, 3))
x = backend.variable(val)
y = backend.pool2d(
x,
(2, 2),
strides=(1, 1),
padding="same",
data_format="channels_last",
)
self.assertEqual(y.shape.as_list(), [10, 10, 10, 3])
val = np.random.random((10, 10, 10, 3))
x = backend.variable(val)
y = backend.pool2d(
x,
(2, 2),
strides=(2, 2),
padding="same",
data_format="channels_last",
)
self.assertEqual(y.shape.as_list(), [10, 5, 5, 3])
with self.assertRaises(ValueError):
y = backend.pool2d(
x,
(2, 2),
strides=(2, 2),
padding="other",
data_format="channels_last",
)
with self.assertRaises(ValueError):
y = backend.pool2d(x, (2, 2), strides=(2, 2), data_format="other")
with self.assertRaises(ValueError):
y = backend.pool2d(x, (2, 2, 2), strides=(2, 2))
with self.assertRaises(ValueError):
y = backend.pool2d(x, (2, 2), strides=(2, 2, 2))
with self.assertRaises(ValueError):
y = backend.pool2d(x, (2, 2), strides=(2, 2), pool_mode="other")
def test_pool3d(self):
val = np.random.random((10, 3, 10, 10, 10))
x = backend.variable(val)
y = backend.pool3d(
x,
(2, 2, 2),
strides=(1, 1, 1),
padding="valid",
data_format="channels_first",
pool_mode="max",
)
self.assertEqual(y.shape.as_list(), [10, 3, 9, 9, 9])
y = backend.pool3d(
x,
(2, 2, 2),
strides=(1, 1, 1),
padding="valid",
data_format="channels_first",
pool_mode="avg",
)
self.assertEqual(y.shape.as_list(), [10, 3, 9, 9, 9])
val = np.random.random((10, 10, 10, 10, 3))
x = backend.variable(val)
y = backend.pool3d(
x,
(2, 2, 2),
strides=(1, 1, 1),
padding="valid",
data_format="channels_last",
)
self.assertEqual(y.shape.as_list(), [10, 9, 9, 9, 3])
val = np.random.random((10, 10, 10, 10, 3))
x = backend.variable(val)
y = backend.pool3d(
x,
(2, 2, 2),
strides=(1, 1, 1),
padding="same",
data_format="channels_last",
)
self.assertEqual(y.shape.as_list(), [10, 10, 10, 10, 3])
val = np.random.random((10, 10, 10, 10, 3))
x = backend.variable(val)
y = backend.pool3d(
x,
(2, 2, 2),
strides=(2, 2, 2),
padding="same",
data_format="channels_last",
)
self.assertEqual(y.shape.as_list(), [10, 5, 5, 5, 3])
def test_conv1d(self):
val = np.random.random((10, 4, 10))
x = backend.variable(val)
kernel_val = np.random.random((3, 4, 5))
k = backend.variable(kernel_val)
y = backend.conv1d(
x, k, strides=(1,), padding="valid", data_format="channels_first"
)
self.assertEqual(y.shape.as_list(), [10, 5, 8])
val = np.random.random((10, 10, 4))
x = backend.variable(val)
y = backend.conv1d(
x, k, strides=(1,), padding="valid", data_format="channels_last"
)
self.assertEqual(y.shape.as_list(), [10, 8, 5])
val = np.random.random((10, 10, 4))
x = backend.variable(val)
y = backend.conv1d(
x, k, strides=(1,), padding="same", data_format="channels_last"
)
self.assertEqual(y.shape.as_list(), [10, 10, 5])
val = np.random.random((10, 10, 4))
x = backend.variable(val)
y = backend.conv1d(
x, k, strides=(2,), padding="same", data_format="channels_last"
)
self.assertEqual(y.shape.as_list(), [10, 5, 5])
def test_local_conv_channels_dim(self):
filters = 3
batch_size = 2
for input_shape in [(3, 5), (2, 3, 5), (2, 5, 3, 4)]:
channels_in = input_shape[0]
input_spatial_shape = input_shape[1:]
dim = len(input_spatial_shape)
inputs = np.random.normal(0, 1, (batch_size,) + input_shape)
inputs_cf = backend.variable(inputs)
for kernel_size in [1, 2]:
for stride in [1, 2]:
kernel_sizes = (kernel_size,) * dim
strides = (stride,) * dim
output_shape = tuple(
[
(i - kernel_size + stride) // stride
for i in input_spatial_shape
]
)
kernel_shape = (
np.prod(output_shape),
np.prod(kernel_sizes) * channels_in,
filters,
)
kernel = np.random.normal(
0,
1,
output_shape
+ (channels_in, np.prod(kernel_sizes), filters),
)
kernel_cf = np.reshape(kernel, kernel_shape)
kernel_cf = backend.variable(kernel_cf)
conv_cf = backend.local_conv(
inputs_cf,
kernel_cf,
kernel_sizes,
strides,
output_shape,
"channels_first",
)
inputs_cl = np.transpose(
inputs, [0, 2] + list(range(3, dim + 2)) + [1]
)
inputs_cl = backend.variable(inputs_cl)
kernel_cl = np.reshape(
np.transpose(
kernel, list(range(dim)) + [dim + 1, dim, dim + 2]
),
kernel_shape,
)
kernel_cl = backend.variable(kernel_cl)
conv_cl = backend.local_conv(
inputs_cl,
kernel_cl,
kernel_sizes,
strides,
output_shape,
"channels_last",
)
conv_cf = backend.eval(conv_cf)
conv_cl = backend.eval(conv_cl)
self.assertAllCloseAccordingToType(
conv_cf,
np.transpose(
conv_cl, [0, dim + 1] + list(range(1, dim + 1))
),
atol=1e-5,
)
@parameterized.named_parameters(
("local_conv1d", (5, 6), (3,), (1,), (3,)),
("local_conv2d", (4, 5, 6), (3, 3), (1, 1), (2, 3)),
)
def test_local_conv_1d_and_2d(
self, input_shape, kernel_sizes, strides, output_shape
):
filters = 3
batch_size = 2
inputs = np.random.normal(0, 1, (batch_size,) + input_shape)
inputs = backend.variable(inputs)
kernel = np.random.normal(
0,
1,
(
np.prod(output_shape),
np.prod(kernel_sizes) * input_shape[-1],
filters,
),
)
kernel = backend.variable(kernel)
local_conv = backend.local_conv(
inputs, kernel, kernel_sizes, strides, output_shape, "channels_last"
)
if len(output_shape) == 1:
local_conv_dim = backend.local_conv1d(
inputs, kernel, kernel_sizes, strides, "channels_last"
)
else:
local_conv_dim = backend.local_conv2d(
inputs,
kernel,
kernel_sizes,
strides,
output_shape,
"channels_last",
)
local_conv = backend.eval(local_conv)
local_conv_dim = backend.eval(local_conv_dim)
self.assertAllCloseAccordingToType(local_conv, local_conv_dim)
def test_conv2d(self):
kernel_val = np.random.random((3, 3, 4, 5))
k = backend.variable(kernel_val)
# Test channels_first
val = np.random.random((10, 4, 10, 10))
x = backend.variable(val)
y = backend.conv2d(x, k, padding="valid", data_format="channels_first")
self.assertEqual(y.shape.as_list(), [10, 5, 8, 8])
# Test channels_last
val = np.random.random((10, 10, 10, 4))
x = backend.variable(val)
y = backend.conv2d(
x, k, strides=(1, 1), padding="valid", data_format="channels_last"
)
self.assertEqual(y.shape.as_list(), [10, 8, 8, 5])
# Test same padding
val = np.random.random((10, 10, 10, 4))
x = backend.variable(val)
y = backend.conv2d(x, k, padding="same", data_format="channels_last")
self.assertEqual(y.shape.as_list(), [10, 10, 10, 5])
# Test dilation_rate
val = np.random.random((10, 10, 10, 4))
x = backend.variable(val)
y = backend.conv2d(
x,
k,
dilation_rate=(2, 2),
padding="same",
data_format="channels_last",
)
self.assertEqual(y.shape.as_list(), [10, 10, 10, 5])
# Test strides
val = np.random.random((10, 10, 10, 4))
x = backend.variable(val)
y = backend.conv2d(
x, k, strides=(2, 2), padding="same", data_format="channels_last"
)
self.assertEqual(y.shape.as_list(), [10, 5, 5, 5])
# Test invalid arguments
with self.assertRaises(ValueError):
y = backend.conv2d(
x, k, (2, 2), padding="other", data_format="channels_last"
)
with self.assertRaises(ValueError):
y = backend.conv2d(x, k, (2, 2), data_format="other")
with self.assertRaises(ValueError):
y = backend.conv2d(x, k, (2, 2, 2))
def test_conv2d_transpose(self):
input_size = (7, 8)
kernel_size = (3, 3)
input_depth = 6
filters = 6
batch_size = 2
kernel_val = np.random.random(kernel_size + (input_depth, filters))
k = backend.variable(kernel_val)
# Test channels_first
input_val = np.random.random((batch_size, input_depth) + input_size)
x = backend.variable(input_val)
y = backend.conv2d_transpose(
x,
k,
(batch_size, filters) + input_size,
padding="same",
data_format="channels_first",
)
self.assertEqual(
tuple(y.shape.as_list()), (batch_size, filters) + input_size
)
# Test channels_last
input_val = np.random.random(
(batch_size,) + input_size + (input_depth,)
)
x = backend.variable(input_val)
y = backend.conv2d_transpose(
x,
k,
(batch_size,) + input_size + (filters,),
padding="same",
data_format="channels_last",
)
self.assertEqual(
tuple(y.shape.as_list()), (batch_size,) + input_size + (filters,)
)
# Test dilation_rate
y = backend.conv2d_transpose(
x,
k,
(batch_size,) + input_size + (filters,),
padding="same",
data_format="channels_last",
dilation_rate=(2, 2),
)
self.assertEqual(
tuple(y.shape.as_list()), (batch_size,) + input_size + (filters,)
)
# Test dilation_rate error
with self.assertRaisesRegex(ValueError, "Expected the 2 dimensions"):
y = backend.conv2d_transpose(
x,
k,
(batch_size,) + input_size + (filters,),
padding="same",
data_format="channels_last",
dilation_rate=(1, 2),
)
# Test batch size of None in output_shape
y = backend.conv2d_transpose(
x,
k,
(None,) + input_size + (filters,),
padding="same",
data_format="channels_last",
)
self.assertEqual(
tuple(y.shape.as_list()), (batch_size,) + input_size + (filters,)
)
# Test invalid values
with self.assertRaises(ValueError):
y = backend.conv2d_transpose(
x, k, (2, 2, 8, 9), padding="other", data_format="channels_last"
)
with self.assertRaises(ValueError):
y = backend.conv2d_transpose(
x, k, (2, 2, 8, 9), data_format="other"
)
def test_separable_conv2d(self):
val = np.random.random((10, 4, 10, 10))
x = backend.variable(val)
depthwise_kernel_val = np.random.random((3, 3, 4, 1))
pointwise_kernel_val = np.random.random((1, 1, 4, 5))
dk = backend.variable(depthwise_kernel_val)
pk = backend.variable(pointwise_kernel_val)
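        # Test channels_first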
y = backend.separable_conv2d(
x, dk, pk, padding="valid", data_format="channels_first"
)
self.assertEqual(y.shape.as_list(), [10, 5, 8, 8])
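        # Test channels_last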
val = np.random.random((10, 10, 10, 4))
x = backend.variable(val)
y = backend.separable_conv2d(
x,
dk,
pk,
strides=(1, 1),
padding="valid",
data_format="channels_last",
)
self.assertEqual(y.shape.as_list(), [10, 8, 8, 5])
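        # Test same padding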
val = np.random.random((10, 10, 10, 4))
x = backend.variable(val)
y = backend.separable_conv2d(
x,
dk,
pk,
strides=(1, 1),
padding="same",
data_format="channels_last",
)
self.assertEqual(y.shape.as_list(), [10, 10, 10, 5])
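        # Test strides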
val = np.random.random((10, 10, 10, 4))
x = backend.variable(val)
y = backend.separable_conv2d(
x,
dk,
pk,
strides=(2, 2),
padding="same",
data_format="channels_last",
)
self.assertEqual(y.shape.as_list(), [10, 5, 5, 5])
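        # Test invalid arguments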
with self.assertRaises(ValueError):
y = backend.separable_conv2d(
x, dk, pk, (2, 2), padding="other", data_format="channels_last"
)
with self.assertRaises(ValueError):
y = backend.separable_conv2d(x, dk, pk, (2, 2), data_format="other")
with self.assertRaises(ValueError):
y = backend.separable_conv2d(x, dk, pk, (2, 2, 2))
def test_conv3d(self):
val = np.random.random((10, 4, 10, 10, 10))
x = backend.variable(val)
kernel_val = np.random.random((3, 3, 3, 4, 5))
k = backend.variable(kernel_val)
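        # Test channels_first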
y = backend.conv3d(x, k, padding="valid", data_format="channels_first")
self.assertEqual(y.shape.as_list(), [10, 5, 8, 8, 8])
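        # Test channels_last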
val = np.random.random((10, 10, 10, 10, 4))
x = backend.variable(val)
y = backend.conv3d(
x,
k,
strides=(1, 1, 1),
padding="valid",
data_format="channels_last",
)
self.assertEqual(y.shape.as_list(), [10, 8, 8, 8, 5])
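        # Test same padding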
val = np.random.random((10, 10, 10, 10, 4))
x = backend.variable(val)
y = backend.conv3d(
x, k, strides=(1, 1, 1), padding="same", data_format="channels_last"
)
self.assertEqual(y.shape.as_list(), [10, 10, 10, 10, 5])
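        # Test strides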
val = np.random.random((10, 10, 10, 10, 4))
x = backend.variable(val)
y = backend.conv3d(
x, k, strides=(2, 2, 2), padding="same", data_format="channels_last"
)
self.assertEqual(y.shape.as_list(), [10, 5, 5, 5, 5])
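        # Test invalid arguments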
with self.assertRaises(ValueError):
y = backend.conv3d(
x, k, (2, 2, 2), padding="other", data_format="channels_last"
)
with self.assertRaises(ValueError):
y = backend.conv3d(x, k, (2, 2, 2), data_format="other")
with self.assertRaises(ValueError):
y = backend.conv3d(x, k, (2, 2))
def test_rnn(self):
# implement a simple RNN
num_samples = 4
input_dim = 5
output_dim = 3
timesteps = 6
input_val = np.random.random(
(num_samples, timesteps, input_dim)
).astype(np.float32)
init_state_val = np.random.random((num_samples, output_dim)).astype(
np.float32
)
w_i_val = np.random.random((input_dim, output_dim)).astype(np.float32)
w_o_val = np.random.random((output_dim, output_dim)).astype(np.float32)
np_mask = np.random.randint(2, size=(num_samples, timesteps))
def rnn_step_fn():
w_i = backend.variable(w_i_val)
w_o = backend.variable(w_o_val)
def step_function(x, states):
assert len(states) == 1
prev_output = states[0]
output = backend.dot(x, w_i) + backend.dot(prev_output, w_o)
return output, [output]
return step_function
# test default setup
last_output_list = [[], [], [], [], [], []]
outputs_list = [[], [], [], [], [], []]
state_list = [[], [], [], [], [], []]
rnn_fn = rnn_step_fn()
inputs = backend.variable(input_val)
initial_states = [backend.variable(init_state_val)]
mask = backend.variable(np_mask)
kwargs_list = [
{"go_backwards": False, "mask": None},
{"go_backwards": False, "mask": None, "unroll": True},
{"go_backwards": True, "mask": None},
{"go_backwards": True, "mask": None, "unroll": True},
{"go_backwards": False, "mask": mask},
{"go_backwards": False, "mask": mask, "unroll": True},
]
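        # Entries are paired: each even-indexed configuration is repeated
        # at the next odd index with unroll=True.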
for i, kwargs in enumerate(kwargs_list):
last_output, outputs, new_states = backend.rnn(
rnn_fn, inputs, initial_states, **kwargs
)
# check static shape inference
self.assertEqual(
last_output.shape.as_list(), [num_samples, output_dim]
)
self.assertEqual(
outputs.shape.as_list(), [num_samples, timesteps, output_dim]
)
for state in new_states:
self.assertEqual(
state.shape.as_list(), [num_samples, output_dim]
)
last_output_list[i].append(backend.eval(last_output))
outputs_list[i].append(backend.eval(outputs))
self.assertLen(new_states, 1)
state_list[i].append(backend.eval(new_states[0]))
def assert_list_pairwise(z_list, atol=1e-05):
for z1, z2 in zip(z_list[1:], z_list[:-1]):
self.assertAllClose(z1, z2, atol=atol)
assert_list_pairwise(last_output_list[0], atol=1e-04)
assert_list_pairwise(outputs_list[0], atol=1e-04)
assert_list_pairwise(state_list[0], atol=1e-04)
assert_list_pairwise(last_output_list[2], atol=1e-04)
assert_list_pairwise(outputs_list[2], atol=1e-04)
assert_list_pairwise(state_list[2], atol=1e-04)
for l, u_l in zip(last_output_list[0], last_output_list[1]):
self.assertAllClose(l, u_l, atol=1e-04)
for o, u_o in zip(outputs_list[0], outputs_list[1]):
self.assertAllClose(o, u_o, atol=1e-04)
for s, u_s in zip(state_list[0], state_list[1]):
self.assertAllClose(s, u_s, atol=1e-04)
for b_l, b_u_l in zip(last_output_list[2], last_output_list[3]):
self.assertAllClose(b_l, b_u_l, atol=1e-04)
for b_o, b_u_o in zip(outputs_list[2], outputs_list[3]):
self.assertAllClose(b_o, b_u_o, atol=1e-04)
for b_s, b_u_s in zip(state_list[2], state_list[3]):
self.assertAllClose(b_s, b_u_s, atol=1e-04)
def test_rnn_additional_states(self):
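        """Like test_rnn, but the step function carries a second state
        of width 2 * output_dim."""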
# implement a simple RNN
num_samples = 4
input_dim = 5
output_dim = 3
timesteps = 6
input_val = np.random.random(
(num_samples, timesteps, input_dim)
).astype(np.float32)
init_state_val = np.random.random((num_samples, output_dim)).astype(
np.float32
)
w_i_val = np.random.random((input_dim, output_dim)).astype(np.float32)
w_o_val = np.random.random((output_dim, output_dim)).astype(np.float32)
np_mask = np.random.randint(2, size=(num_samples, timesteps))
def rnn_step_fn():
w_i = backend.variable(w_i_val)
w_o = backend.variable(w_o_val)
def step_function(x, states):
assert len(states) == 2
prev_output = states[0]
output = backend.dot(x, w_i) + backend.dot(prev_output, w_o)
return output, [
output,
backend.concatenate([output, output], axis=-1),
]
return step_function
# test default setup
last_output_list = [[], [], [], [], [], []]
outputs_list = [[], [], [], [], [], []]
state_list = [[], [], [], [], [], []]
additional_state_list = [[], [], [], [], [], []]
rnn_fn = rnn_step_fn()
inputs = backend.variable(input_val)
initial_states = [
backend.variable(init_state_val),
tf.convert_to_tensor(
np.concatenate([init_state_val, init_state_val], axis=-1)
),
]
mask = backend.variable(np_mask)
kwargs_list = [
{"go_backwards": False, "mask": None},
{"go_backwards": False, "mask": None, "unroll": True},
{"go_backwards": True, "mask": None},
{"go_backwards": True, "mask": None, "unroll": True},
{"go_backwards": False, "mask": mask},
{"go_backwards": False, "mask": mask, "unroll": True},
]
for i, kwargs in enumerate(kwargs_list):
last_output, outputs, new_states = backend.rnn(
rnn_fn, inputs, initial_states, **kwargs
)
# check static shape inference
self.assertEqual(
last_output.shape.as_list(), [num_samples, output_dim]
)
self.assertEqual(
outputs.shape.as_list(), [num_samples, timesteps, output_dim]
)
# for state in new_states:
# self.assertEqual(state.shape.as_list(),
# [num_samples, output_dim])
self.assertEqual(
new_states[0].shape.as_list(), [num_samples, output_dim]
)
self.assertEqual(
new_states[1].shape.as_list(), [num_samples, 2 * output_dim]
)
last_output_list[i].append(backend.eval(last_output))
outputs_list[i].append(backend.eval(outputs))
self.assertLen(new_states, 2)
state_list[i].append(backend.eval(new_states[0]))
additional_state_list[i].append(backend.eval(new_states[1]))
def assert_list_pairwise(z_list, atol=1e-05):
for z1, z2 in zip(z_list[1:], z_list[:-1]):
self.assertAllClose(z1, z2, atol=atol)
assert_list_pairwise(last_output_list[0], atol=1e-04)
assert_list_pairwise(outputs_list[0], atol=1e-04)
assert_list_pairwise(state_list[0], atol=1e-04)
assert_list_pairwise(additional_state_list[0], atol=1e-04)
assert_list_pairwise(last_output_list[2], atol=1e-04)
assert_list_pairwise(outputs_list[2], atol=1e-04)
assert_list_pairwise(state_list[2], atol=1e-04)
assert_list_pairwise(additional_state_list[2], atol=1e-04)
for l, u_l in zip(last_output_list[0], last_output_list[1]):
self.assertAllClose(l, u_l, atol=1e-04)
for o, u_o in zip(outputs_list[0], outputs_list[1]):
self.assertAllClose(o, u_o, atol=1e-04)
for s, u_s in zip(state_list[0], state_list[1]):
self.assertAllClose(s, u_s, atol=1e-04)
for s, u_s in zip(
additional_state_list[0], additional_state_list[1]
):
self.assertAllClose(s, u_s, atol=1e-04)
for b_l, b_u_l in zip(last_output_list[2], last_output_list[3]):
self.assertAllClose(b_l, b_u_l, atol=1e-04)
for b_o, b_u_o in zip(outputs_list[2], outputs_list[3]):
self.assertAllClose(b_o, b_u_o, atol=1e-04)
for b_s, b_u_s in zip(state_list[2], state_list[3]):
self.assertAllClose(b_s, b_u_s, atol=1e-04)
for s, u_s in zip(
additional_state_list[2], additional_state_list[3]
):
self.assertAllClose(s, u_s, atol=1e-04)
def test_rnn_output_and_state_masking_independent(self):
num_samples = 2
num_timesteps = 4
state_and_io_size = 2
mask_last_num_timesteps = 2 # for second sample only
        # A step function that simply echoes its inputs, while incrementing
        # each state by 1 per timestep.
def step_function(inputs, states):
return inputs, [s + 1 for s in states]
inputs_vals = np.random.random(
(num_samples, num_timesteps, state_and_io_size)
)
initial_state_vals = np.random.random((num_samples, state_and_io_size))
# masking of two last timesteps for second sample only
mask_vals = np.ones((num_samples, num_timesteps))
mask_vals[1, -mask_last_num_timesteps:] = 0
# outputs expected to be same as inputs for the first sample
expected_outputs = inputs_vals.copy()
# but for the second sample all outputs in masked region should be the
# same as last output before masked region
expected_outputs[1, -mask_last_num_timesteps:] = expected_outputs[
1, -(mask_last_num_timesteps + 1)
]
expected_last_state = initial_state_vals.copy()
# first state should be incremented for every timestep (no masking)
expected_last_state[0] += num_timesteps
# second state should not be incremented for last two timesteps
expected_last_state[1] += num_timesteps - mask_last_num_timesteps
        # Verify the same expected outputs for unroll=True and unroll=False.
inputs = backend.variable(inputs_vals)
initial_states = [backend.variable(initial_state_vals)]
mask = backend.variable(mask_vals)
for unroll in [True, False]:
_, outputs, last_states = backend.rnn(
step_function,
inputs,
initial_states,
mask=mask,
unroll=unroll,
input_length=num_timesteps if unroll else None,
)
self.assertAllClose(backend.eval(outputs), expected_outputs)
self.assertAllClose(
backend.eval(last_states[0]), expected_last_state
)
def test_rnn_output_num_dim_larger_than_2_masking(self):
num_samples = 3
num_timesteps = 4
num_features = 5
def step_function(inputs, states):
outputs = backend.tile(backend.expand_dims(inputs), [1, 1, 2])
return outputs, [backend.identity(s) for s in states]
        # Note: the step function cannot simply return `states` unchanged,
        # because states may contain ResourceVariables, which raise
        # "NotImplementedError: ResourceVariable does not implement
        # set_shape()" in tensorflow/python/ops/resource_variable_ops.py.
inputs_vals = np.random.random(
(num_samples, num_timesteps, num_features)
)
initial_state_vals = np.random.random((num_samples, 6))
mask_vals = np.ones((num_samples, num_timesteps))
mask_vals[-1, -1] = 0 # final timestep masked for last sample
expected_outputs = np.repeat(inputs_vals[..., None], repeats=2, axis=-1)
# for the last sample, the final timestep (in masked region) should be
# the same as the second to final output (before masked region)
expected_outputs[-1, -1] = expected_outputs[-1, -2]
inputs = backend.variable(inputs_vals)
initial_states = [backend.variable(initial_state_vals)]
mask = backend.variable(mask_vals)
for unroll in [True, False]:
_, outputs, _ = backend.rnn(
step_function,
inputs,
initial_states,
mask=mask,
unroll=unroll,
input_length=num_timesteps if unroll else None,
)
self.assertAllClose(backend.eval(outputs), expected_outputs)
def test_rnn_state_num_dim_larger_than_2_masking(self):
num_samples = 3
num_timesteps = 4
def step_function(inputs, states):
return inputs, [s + 1 for s in states]
inputs_vals = np.random.random((num_samples, num_timesteps, 5))
initial_state_vals = np.random.random((num_samples, 6, 7))
mask_vals = np.ones((num_samples, num_timesteps))
mask_vals[0, -2:] = 0 # final two timesteps masked for first sample
expected_last_state = initial_state_vals.copy()
expected_last_state[0] += num_timesteps - 2
expected_last_state[1:] += num_timesteps
inputs = backend.variable(inputs_vals)
initial_states = [backend.variable(initial_state_vals)]
mask = backend.variable(mask_vals)
for unroll in [True, False]:
_, _, last_states = backend.rnn(
step_function,
inputs,
initial_states,
mask=mask,
unroll=unroll,
input_length=num_timesteps if unroll else None,
)
self.assertAllClose(
backend.eval(last_states[0]), expected_last_state
)
def test_rnn_function_jit_compile_no_unroll_input_length_none(self):
num_samples = 3
num_timesteps = 4
def step_function(inputs, states):
return inputs, [s + 1 for s in states]
inputs_vals = np.random.random((num_samples, num_timesteps, 5))
initial_state_vals = np.random.random((num_samples, 6, 7))
mask_vals = np.ones((num_samples, num_timesteps))
mask_vals[0, -2:] = 0 # final two timesteps masked for first sample
expected_last_state = initial_state_vals.copy()
expected_last_state[0] += num_timesteps - 2
expected_last_state[1:] += num_timesteps
inputs = backend.variable(inputs_vals)
initial_states = [backend.variable(initial_state_vals)]
mask = backend.variable(mask_vals)
@tf.function(jit_compile=True)
def fn():
_, _, last_states = backend.rnn(
step_function,
inputs,
initial_states,
mask=mask,
unroll=False,
input_length=None,
)
return last_states
last_states = fn()
self.assertAllClose(backend.eval(last_states[0]), expected_last_state)
def test_batch_normalization(self):
g_val = np.random.random((3,))
b_val = np.random.random((3,))
gamma = backend.variable(g_val)
beta = backend.variable(b_val)
# 3D NHC case
val = np.random.random((10, 5, 3))
x = backend.variable(val)
mean, var = tf.nn.moments(x, (0, 1), None, None, False)
normed = backend.batch_normalization(
x, mean, var, beta, gamma, axis=-1, epsilon=1e-3
)
self.assertEqual(normed.shape.as_list(), [10, 5, 3])
# 4D NHWC case
val = np.random.random((10, 5, 5, 3))
x = backend.variable(val)
mean, var = tf.nn.moments(x, (0, 1, 2), None, None, False)
normed = backend.batch_normalization(
x, mean, var, beta, gamma, axis=-1, epsilon=1e-3
)
self.assertEqual(normed.shape.as_list(), [10, 5, 5, 3])
# 4D NCHW case
if not tf.executing_eagerly():
# Eager CPU kernel for NCHW does not exist.
val = np.random.random((10, 3, 5, 5))
x = backend.variable(val)
mean, var = tf.nn.moments(x, (0, 2, 3), None, None, False)
normed = backend.batch_normalization(
x, mean, var, beta, gamma, axis=1, epsilon=1e-3
)
self.assertEqual(normed.shape.as_list(), [10, 3, 5, 5])
def test_normalize_batch_in_training(self):
val = np.random.random((10, 3, 10, 10))
x = backend.variable(val)
reduction_axes = (0, 2, 3)
g_val = np.random.random((3,))
b_val = np.random.random((3,))
gamma = backend.variable(g_val)
beta = backend.variable(b_val)
normed, mean, var = backend.normalize_batch_in_training(
x, gamma, beta, reduction_axes, epsilon=1e-3
)
self.assertEqual(normed.shape.as_list(), [10, 3, 10, 10])
self.assertEqual(
mean.shape.as_list(),
[
3,
],
)
self.assertEqual(
var.shape.as_list(),
[
3,
],
)
# case: gamma=None
gamma = None
normed, mean, var = backend.normalize_batch_in_training(
x, gamma, beta, reduction_axes, epsilon=1e-3
)
self.assertEqual(normed.shape.as_list(), [10, 3, 10, 10])
self.assertEqual(
mean.shape.as_list(),
[
3,
],
)
self.assertEqual(
var.shape.as_list(),
[
3,
],
)
# case: beta=None
beta = None
normed, mean, var = backend.normalize_batch_in_training(
x, gamma, beta, reduction_axes, epsilon=1e-3
)
self.assertEqual(normed.shape.as_list(), [10, 3, 10, 10])
self.assertEqual(
mean.shape.as_list(),
[
3,
],
)
self.assertEqual(
var.shape.as_list(),
[
3,
],
)
def test_dropout(self):
inputs = tf.ones((200, 200))
outputs = backend.dropout(inputs, 0.2)
outputs_val = backend.eval(outputs)
self.assertEqual(np.min(outputs_val), 0)
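        # With rate=0.2, roughly 80% of the 200 * 200 = 40000 entries
        # should remain nonzero.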
self.assertAllClose(np.count_nonzero(outputs_val), 32000, atol=1000)
# Test noise shape
outputs = backend.dropout(inputs, 0.2, noise_shape=(200, 1))
outputs_val = backend.eval(outputs)
# Make sure the whole column gets the same dropout
self.assertEqual(np.min(outputs_val[0, :]), np.max(outputs_val[0, :]))
class BackendCrossEntropyLossesTest(tf.test.TestCase, parameterized.TestCase):
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_binary_crossentropy_with_sigmoid(self):
t = backend.constant([[0, 1, 0]])
logits = backend.constant([[8.0, 1.0, 1.0]])
p = backend.sigmoid(logits)
p = tf.identity(tf.identity(p))
result = self.evaluate(backend.binary_crossentropy(t, p))
self.assertArrayNear(result[0], [8.0, 0.313, 1.313], 1e-3)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_categorical_crossentropy_loss(self):
t = backend.constant([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
p = backend.constant(
[[0.9, 0.05, 0.05], [0.05, 0.89, 0.06], [0.05, 0.01, 0.94]]
)
result = backend.categorical_crossentropy(t, p)
self.assertArrayNear(self.evaluate(result), [0.105, 0.116, 0.062], 1e-3)
p = backend.constant(
[[0.9, 0.05, 0.05], [0.05, 0.89, 0.01], [0.05, 0.06, 0.94]]
)
result = backend.categorical_crossentropy(t, p, axis=0)
self.assertArrayNear(self.evaluate(result), [0.105, 0.116, 0.062], 1e-3)
p = backend.constant(
[[8.0, 1.0, 1.0], [0.0, 9.0, 1.0], [2.0, 3.0, 5.0]]
)
result = (backend.categorical_crossentropy(t, p, from_logits=True),)
self.assertArrayNear(self.evaluate(result)[0], [0.002, 0, 0.17], 1e-3)
p = backend.constant(
[[8.0, 0.0, 2.0], [1.0, 9.0, 3.0], [1.0, 1.0, 5.0]]
)
result = (
backend.categorical_crossentropy(t, p, from_logits=True, axis=0),
)
self.assertArrayNear(self.evaluate(result)[0], [0.002, 0, 0.17], 1e-3)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_categorical_crossentropy_loss_with_unknown_rank_tensor(self):
t = backend.placeholder()
p = backend.placeholder()
o = backend.categorical_crossentropy(t, p)
t_val = tf.convert_to_tensor(
[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
)
p_val = tf.convert_to_tensor(
[[0.9, 0.05, 0.05], [0.05, 0.89, 0.06], [0.05, 0.01, 0.94]]
)
f = backend.function([t, p], o)
result = f([t_val, p_val])
self.assertArrayNear(result, [0.105, 0.116, 0.062], 1e-3)
# With axis set
o = backend.categorical_crossentropy(t, p, axis=0)
f = backend.function([t, p], o)
result = f([t_val, p_val])
self.assertArrayNear(result, [0.105, 0.065, 0.111], 1e-3)
# from logits
p_val = tf.convert_to_tensor(
[[8.0, 1.0, 1.0], [0.0, 9.0, 1.0], [2.0, 3.0, 5.0]]
)
o = backend.categorical_crossentropy(t, p, from_logits=True)
f = backend.function([t, p], o)
result = f([t_val, p_val])
self.assertArrayNear(result, [0.002, 0, 0.17], 1e-3)
# from logits and axis set
o = backend.categorical_crossentropy(t, p, from_logits=True, axis=0)
f = backend.function([t, p], o)
result = f([t_val, p_val])
self.assertArrayNear(result, [0.002, 0.003, 0.036], 1e-3)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_categorical_crossentropy_with_softmax(self):
t = backend.constant([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
logits = backend.constant(
[[8.0, 1.0, 1.0], [0.0, 9.0, 1.0], [2.0, 3.0, 5.0]]
)
p = backend.softmax(logits)
p = tf.identity(tf.identity(p))
result = self.evaluate(backend.categorical_crossentropy(t, p))
self.assertArrayNear(result, [0.002, 0.0005, 0.17], 1e-3)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_sparse_categorical_crossentropy_loss(self):
t = backend.constant([0, 1, 2])
p = backend.constant(
[[0.9, 0.05, 0.05], [0.05, 0.89, 0.06], [0.05, 0.01, 0.94]]
)
result = backend.sparse_categorical_crossentropy(t, p)
self.assertArrayNear(self.evaluate(result), [0.105, 0.116, 0.062], 1e-3)
p = backend.constant(
[[0.9, 0.05, 0.05], [0.05, 0.89, 0.01], [0.05, 0.06, 0.94]]
)
result = backend.sparse_categorical_crossentropy(t, p, axis=0)
self.assertArrayNear(self.evaluate(result), [0.105, 0.116, 0.062], 1e-3)
p = backend.constant(
[[8.0, 1.0, 1.0], [0.0, 9.0, 1.0], [2.0, 3.0, 5.0]]
)
result = (
backend.sparse_categorical_crossentropy(t, p, from_logits=True),
)
self.assertArrayNear(self.evaluate(result)[0], [0.002, 0, 0.17], 1e-3)
p = backend.constant(
[[8.0, 0.0, 2.0], [1.0, 9.0, 3.0], [1.0, 1.0, 5.0]]
)
result = (
backend.sparse_categorical_crossentropy(
t, p, from_logits=True, axis=0
),
)
self.assertArrayNear(self.evaluate(result)[0], [0.002, 0, 0.17], 1e-3)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_sparse_categorical_crossentropy_loss_with_ignore_class(self):
tests = (([255, 1, 2, 2], 255), ([-1, 1, 2, 2], -1))
p = backend.softmax(
backend.constant(
[
[1.8, 1.2, 0.5],
[0.2, 3.8, 0.8],
[1.1, 0.4, 3.4],
[1.3, 0.7, 3.8],
]
)
)
for t, ignore_class in tests:
t = backend.constant(t)
result = backend.sparse_categorical_crossentropy(
t, p, ignore_class=ignore_class
)
self.assertArrayNear(
self.evaluate(result),
[0.0, 0.07428224, 0.13980183, 0.11967831],
1e-3,
)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_sparse_cce_loss_with_ignore_class_for_segmentation(self):
t = backend.constant(
[[[0, 2], [-1, -1]], [[0, 2], [-1, -1]], [[0, 0], [0, 0]]]
)
p = backend.constant(
[
[
[[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]],
[[0.2, 0.5, 0.3], [0.0, 1.0, 0.0]],
],
[
[[1.0, 0.0, 0.0], [0.0, 0.5, 0.5]],
[[0.2, 0.5, 0.3], [0.0, 1.0, 0.0]],
],
[
[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]],
[[0.1, 0.9, 0.0], [0.2, 0.8, 0.0]],
],
]
)
expected_result = [
[[0.0, 0.0], [0.0, 0.0]],
[[0.0, 0.693148], [0.0, 0.0]],
[[0.0, 0.0], [2.302585, 1.609438]],
]
# total_entries = 12
# valid_entries = 8
expected_mask = backend.constant(
[
[[True, True], [False, False]],
[[True, True], [False, False]],
[[True, True], [True, True]],
]
)
result = backend.sparse_categorical_crossentropy(t, p, ignore_class=-1)
mask = losses_utils.get_mask(result)
self.assertIsNotNone(
mask,
"expected sparse_categorical_crossentropy to set the "
"`_keras_mask` attribute when `ignore_class is not None`, "
"which indicates which loss values are valid.",
)
result = self.evaluate(result)
mask = self.evaluate(mask)
self.assertAllEqual(mask, expected_mask)
self.assertAllClose(result, expected_result, atol=1e-6)
@test_combinations.generate(test_combinations.combine(mode=["graph"]))
def test_sparse_categorical_crossentropy_loss_with_unknown_rank_tensor(
self,
):
# This test only runs in graph because the TF op layer is not supported
# yet for sparse ops.
t = backend.placeholder()
p = backend.placeholder()
o = backend.sparse_categorical_crossentropy(t, p)
t_val = tf.convert_to_tensor([0, 1, 2])
p_val = tf.convert_to_tensor(
[[0.9, 0.05, 0.05], [0.05, 0.89, 0.06], [0.05, 0.01, 0.94]]
)
f = backend.function([t, p], o)
result = f([t_val, p_val])
self.assertArrayNear(result, [0.105, 0.116, 0.062], 1e-3)
# With axis set
with self.assertRaisesRegex(
ValueError,
"Cannot compute sparse categorical crossentropy with `axis=0`",
):
o = backend.sparse_categorical_crossentropy(t, p, axis=0)
f = backend.function([t, p], o)
_ = f([t_val, p_val])
# from logits
p_val = tf.convert_to_tensor(
[[8.0, 1.0, 1.0], [0.0, 9.0, 1.0], [2.0, 3.0, 5.0]]
)
o = backend.sparse_categorical_crossentropy(t, p, from_logits=True)
f = backend.function([t, p], o)
result = f([t_val, p_val])
self.assertArrayNear(result, [0.002, 0, 0.17], 1e-3)
# from logits and axis set
with self.assertRaisesRegex(
ValueError,
"Cannot compute sparse categorical crossentropy with `axis=0`",
):
o = backend.sparse_categorical_crossentropy(
t, p, from_logits=True, axis=0
)
f = backend.function([t, p], o)
_ = f([t_val, p_val])
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_sparse_categorical_crossentropy_with_softmax(self):
t = backend.constant([0, 1, 2])
logits = backend.constant(
[[8.0, 1.0, 1.0], [0.0, 9.0, 1.0], [2.0, 3.0, 5.0]]
)
p = backend.softmax(logits)
p = tf.identity(tf.identity(p))
result = self.evaluate(backend.sparse_categorical_crossentropy(t, p))
self.assertArrayNear(result, [0.002, 0.0005, 0.17], 1e-3)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_binary_crossentropy_from_logits_no_warnings(self):
t = backend.constant([[0, 1, 0]])
logits = backend.constant([[8.0, 1.0, 1.0]])
with warnings.catch_warnings(record=True) as w:
self.evaluate(
backend.binary_crossentropy(t, logits, from_logits=True)
)
self.assertEmpty(w)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_binary_crossentropy_from_logits_with_sigmoid(self):
t = backend.constant([[0, 1, 0]])
logits = backend.constant([[8.0, 1.0, 1.0]])
p = activations.sigmoid(logits)
with warnings.catch_warnings(record=True) as w:
self.evaluate(backend.binary_crossentropy(t, p, from_logits=True))
self.assertLen(w, 1)
self.assertIn("received `from_logits=True`", str(w[0].message))
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_categorical_crossentropy_from_logits_with_softmax(self):
t = backend.constant([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
logits = backend.constant(
[[8.0, 1.0, 1.0], [0.0, 9.0, 1.0], [2.0, 3.0, 5.0]]
)
p = activations.softmax(logits)
with warnings.catch_warnings(record=True) as w:
self.evaluate(
backend.categorical_crossentropy(t, p, from_logits=True)
)
self.assertLen(w, 1)
self.assertIn("received `from_logits=True`", str(w[0].message))
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_sparse_categorical_crossentropy_from_logits_with_softmax(self):
t = backend.constant([0, 1, 2])
logits = backend.constant(
[[8.0, 1.0, 1.0], [0.0, 9.0, 1.0], [2.0, 3.0, 5.0]]
)
p = activations.softmax(logits)
with warnings.catch_warnings(record=True) as w:
self.evaluate(
backend.sparse_categorical_crossentropy(t, p, from_logits=True)
)
self.assertLen(w, 1)
self.assertIn("received `from_logits=True`", str(w[0].message))
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_binary_focal_crossentropy_with_sigmoid(self):
t = backend.constant([[0, 1, 0]])
logits = backend.constant([[8.0, 1.0, 1.0]])
p = backend.sigmoid(logits)
p = tf.identity(tf.identity(p))
result = self.evaluate(
backend.binary_focal_crossentropy(t, p, gamma=2.0)
)
self.assertArrayNear(result[0], [7.995, 0.022, 0.701], 1e-3)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_categorical_focal_crossentropy_with_softmax(self):
t = backend.constant([[0, 1, 0]])
logits = backend.constant([[8.0, 1.0, 1.0]])
p = backend.softmax(logits)
p = tf.identity(tf.identity(p))
result = self.evaluate(
backend.categorical_focal_crossentropy(t, p, gamma=2.0)
)
self.assertArrayNear(result, [1.747], 1e-3)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_binary_focal_crossentropy_from_logits(self):
t = backend.constant([[0, 1, 0]])
logits = backend.constant([[8.0, 1.0, 1.0]])
result = self.evaluate(
backend.binary_focal_crossentropy(
target=t,
output=logits,
gamma=2.0,
from_logits=True,
)
)
self.assertArrayNear(result[0], [7.995, 0.022, 0.701], 1e-3)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_categorical_focal_crossentropy_from_logits(self):
t = backend.constant([[0, 1, 0]])
logits = backend.constant([[8.0, 1.0, 1.0]])
result = self.evaluate(
backend.categorical_focal_crossentropy(
target=t,
output=logits,
from_logits=True,
)
)
self.assertArrayNear(result, [1.7472], 1e-3)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_binary_focal_crossentropy_no_focal_effect_with_zero_gamma(self):
t = backend.constant([[0, 1, 0]])
logits = backend.constant([[8.0, 1.0, 1.0]])
p = backend.sigmoid(logits)
p = tf.identity(tf.identity(p))
gamma = 0
focal_result = self.evaluate(
backend.binary_focal_crossentropy(
target=t,
output=p,
gamma=gamma,
)
)
non_focal_result = self.evaluate(backend.binary_crossentropy(t, p))
self.assertArrayNear(focal_result[0], non_focal_result[0], 1e-3)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_categorical_focal_crossentropy_no_focal_effect(self):
t = backend.constant([[0, 1, 0]])
logits = backend.constant([[8.0, 1.0, 1.0]])
p = backend.softmax(logits)
p = tf.identity(tf.identity(p))
focal_result = self.evaluate(
backend.categorical_focal_crossentropy(
target=t,
output=p,
gamma=0.0,
alpha=1.0,
)
)
non_focal_result = self.evaluate(backend.categorical_crossentropy(t, p))
self.assertArrayNear(focal_result, non_focal_result, 1e-3)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_binary_weighted_focal_crossentropy_with_sigmoid(self):
t = backend.constant([[0, 1, 0]])
logits = backend.constant([[8.0, 1.0, 1.0]])
p = backend.sigmoid(logits)
p = tf.identity(tf.identity(p))
result = self.evaluate(
backend.binary_focal_crossentropy(
target=t,
output=p,
apply_class_balancing=True,
)
)
self.assertArrayNear(result[0], [5.996, 0.006, 0.526], 1e-3)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_binary_weighted_focal_crossentropy_from_logits(self):
t = backend.constant([[0, 1, 0]])
logits = backend.constant([[8.0, 1.0, 1.0]])
result = self.evaluate(
backend.binary_focal_crossentropy(
target=t,
output=logits,
apply_class_balancing=True,
from_logits=True,
)
)
self.assertArrayNear(result[0], [5.996, 0.006, 0.526], 1e-3)
@tf_test_utils.with_control_flow_v2
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class TestCTC(tf.test.TestCase):
def test_ctc_decode(self):
depth = 6
seq_len_0 = 5
input_prob_matrix_0 = np.asarray(
[
[0.30999, 0.309938, 0.0679938, 0.0673362, 0.0708352, 0.173908],
[0.215136, 0.439699, 0.0370931, 0.0393967, 0.0381581, 0.230517],
[0.199959, 0.489485, 0.0233221, 0.0251417, 0.0233289, 0.238763],
[0.279611, 0.452966, 0.0204795, 0.0209126, 0.0194803, 0.20655],
[0.51286, 0.288951, 0.0243026, 0.0220788, 0.0219297, 0.129878],
                # Random entry added at time=5
[0.155251, 0.164444, 0.173517, 0.176138, 0.169979, 0.160671],
],
dtype=np.float32,
)
# len max_time_steps array of batch_size x depth matrices
inputs = [
input_prob_matrix_0[t, :][np.newaxis, :] for t in range(seq_len_0)
        ] + 2 * [  # Pad to max_time_steps = 7
np.zeros((1, depth), dtype=np.float32)
]
inputs = backend.variable(np.asarray(inputs).transpose((1, 0, 2)))
# batch_size length vector of sequence_lengths
input_length = backend.variable(np.array([seq_len_0], dtype=np.int32))
# batch_size length vector of negative log probabilities
        log_prob_truth = np.array(
            [
                -3.5821197,  # output beam 0
                -3.777835,  # output beam 1
            ],
            np.float32,
        )[np.newaxis, :]
decode_truth = [
np.array([1, 0, -1, -1, -1, -1, -1]),
np.array([0, 1, 0, -1, -1, -1, -1]),
]
beam_width = 2
top_paths = 2
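        # Decode with beam search, keeping the two most likely paths.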
decode_pred_tf, log_prob_pred_tf = backend.ctc_decode(
inputs,
input_length,
greedy=False,
beam_width=beam_width,
top_paths=top_paths,
)
self.assertEqual(len(decode_pred_tf), top_paths)
log_prob_pred = backend.eval(log_prob_pred_tf)
for i in range(top_paths):
self.assertTrue(
np.all(decode_truth[i] == backend.eval(decode_pred_tf[i]))
)
self.assertAllClose(log_prob_truth, log_prob_pred)
def test_ctc_batch_cost(self):
with self.cached_session():
label_lens = np.expand_dims(np.asarray([5, 4]), 1)
input_lens = np.expand_dims(
np.asarray([5, 5]), 1
) # number of timesteps
loss_log_probs = [3.34211, 5.42262]
# dimensions are batch x time x categories
labels = np.asarray([[0, 1, 2, 1, 0], [0, 1, 1, 0, -1]])
inputs = np.asarray(
[
[
[
0.633766,
0.221185,
0.0917319,
0.0129757,
0.0142857,
0.0260553,
],
[
0.111121,
0.588392,
0.278779,
0.0055756,
0.00569609,
0.010436,
],
[
0.0357786,
0.633813,
0.321418,
0.00249248,
0.00272882,
0.0037688,
],
[
0.0663296,
0.643849,
0.280111,
0.00283995,
0.0035545,
0.00331533,
],
[
0.458235,
0.396634,
0.123377,
0.00648837,
0.00903441,
0.00623107,
],
],
[
[
0.30176,
0.28562,
0.0831517,
0.0862751,
0.0816851,
0.161508,
],
[
0.24082,
0.397533,
0.0557226,
0.0546814,
0.0557528,
0.19549,
],
[
0.230246,
0.450868,
0.0389607,
0.038309,
0.0391602,
0.202456,
],
[
0.280884,
0.429522,
0.0326593,
0.0339046,
0.0326856,
0.190345,
],
[
0.423286,
0.315517,
0.0338439,
0.0393744,
0.0339315,
0.154046,
],
],
],
dtype=np.float32,
)
labels = backend.variable(labels, dtype="int32")
inputs = backend.variable(inputs, dtype="float32")
input_lens = backend.variable(input_lens, dtype="int32")
label_lens = backend.variable(label_lens, dtype="int32")
res = backend.eval(
backend.ctc_batch_cost(labels, inputs, input_lens, label_lens)
)
self.assertAllClose(res[:, 0], loss_log_probs, atol=1e-05)
# test when batch_size = 1, that is, one sample only
ref = [3.34211]
input_lens = np.expand_dims(np.asarray([5]), 1)
label_lens = np.expand_dims(np.asarray([5]), 1)
labels = np.asarray([[0, 1, 2, 1, 0]])
inputs = np.asarray(
[
[
[
0.633766,
0.221185,
0.0917319,
0.0129757,
0.0142857,
0.0260553,
],
[
0.111121,
0.588392,
0.278779,
0.0055756,
0.00569609,
0.010436,
],
[
0.0357786,
0.633813,
0.321418,
0.00249248,
0.00272882,
0.0037688,
],
[
0.0663296,
0.643849,
0.280111,
0.00283995,
0.0035545,
0.00331533,
],
[
0.458235,
0.396634,
0.123377,
0.00648837,
0.00903441,
0.00623107,
],
]
],
dtype=np.float32,
)
k_labels = backend.variable(labels, dtype="int32")
k_inputs = backend.variable(inputs, dtype="float32")
k_input_lens = backend.variable(input_lens, dtype="int32")
k_label_lens = backend.variable(label_lens, dtype="int32")
res = backend.eval(
backend.ctc_batch_cost(
k_labels, k_inputs, k_input_lens, k_label_lens
)
)
self.assertAllClose(res[:, 0], ref, atol=1e-05)
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class TestRandomOps(tf.test.TestCase):
def test_random_normal(self):
np.random.seed(123)
x = backend.random_normal((500, 500))
val = backend.eval(x)
self.assertAllClose(np.mean(val), 0.0, atol=0.01)
self.assertAllClose(np.std(val), 1.0, atol=0.01)
def test_random_uniform(self):
np.random.seed(123)
x = backend.random_uniform((500, 500))
val = backend.eval(x)
self.assertAllClose(np.mean(val), 0.5, atol=0.01)
self.assertAllClose(np.max(val), 1.0, atol=0.01)
self.assertAllClose(np.min(val), 0.0, atol=0.01)
def test_random_binomial(self):
np.random.seed(123)
x = backend.random_binomial((500, 500), p=0.5)
self.assertAllClose(np.mean(backend.eval(x)), 0.5, atol=0.01)
def test_truncated_normal(self):
np.random.seed(123)
        x = backend.truncated_normal((1000, 1000), mean=0.0, stddev=1.0)
y = backend.eval(x)
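        # Truncation at two standard deviations shrinks the sample std
        # to roughly 0.88.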
self.assertAllClose(np.mean(y), 0.0, atol=0.01)
self.assertAllClose(np.std(y), 0.88, atol=0.01)
self.assertAllClose(np.max(y), 2.0, atol=0.01)
self.assertAllClose(np.min(y), -2.0, atol=0.01)
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class FunctionTest(tf.test.TestCase):
def test_function_basics(self):
if tf.executing_eagerly():
self.skipTest("eager backend.function does not support updates")
x1 = backend.placeholder(shape=(), dtype="float32")
x2 = backend.placeholder(shape=(), dtype="int32")
v = backend.variable(10.0)
y1 = x1 + backend.cast(x2, "float32") + v
y2 = x1 * backend.cast(x2, "float32")
with tf.control_dependencies([y1]):
u = backend.update(v, x1)
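        # The update is gated on y1, so the outputs are computed with
        # v == 10 before v is overwritten with x1.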
f = backend.function([x1, x2], [y1, y2], updates=[u])
output_values = f([2, 3])
self.assertEqual(output_values, [15.0, 6.0])
self.assertEqual(backend.eval(v), 2.0)
def test_function_dict_outputs(self):
x_ph = backend.placeholder(shape=(), name="x")
y_ph = backend.placeholder(shape=(), name="y")
outputs = {"x*y": y_ph * x_ph, "x*x": x_ph * x_ph}
f = backend.function(inputs=[x_ph, y_ph], outputs=outputs)
x, y = 2.0, 5.0
results = f([x, y])
self.assertEqual(results["x*y"], 10.0)
self.assertEqual(results["x*x"], 4)
def test_function_dict_inputs(self):
placeholders = {
"x": backend.placeholder(shape=()),
"y": backend.placeholder(shape=()),
}
outputs = [placeholders["x"] * placeholders["y"]]
f = backend.function(inputs=placeholders, outputs=outputs)
results = f({"x": 2.0, "y": 3.0})
self.assertEqual(results[0], 6.0)
def test_function_variable_inputs(self):
placeholders = {
"x": backend.placeholder(shape=()),
"y": backend.placeholder(shape=()),
}
outputs = [placeholders["x"] * placeholders["y"]]
f = backend.function(inputs=placeholders, outputs=outputs)
results = f({"x": backend.variable(2.0), "y": 3.0})
self.assertEqual(results[0], 6.0)
def test_function_composite_variable_inputs(self):
if context.executing_eagerly():
self.skipTest(
"Only graph mode flattens composite tensor inputs into flat "
"tensors."
)
class Spec(tf.TypeSpec):
value_type = property(lambda self: CompositeVariable)
def _serialize(self):
pass
def _component_specs(self):
pass
def _to_components(self, value):
return value.variables
def _from_components(self, variable_list):
return CompositeVariable(variable_list)
class CompositeVariable(tf.__internal__.CompositeTensor):
def __init__(self, variable_list):
self.variables = variable_list
@property
def _type_spec(self):
return Spec()
def _convert_variables_to_tensors(self):
self.variables = tf.nest.map_structure(
tf_utils.convert_variables_to_tensors, self.variables
)
return self
placeholders = {
"x": backend.placeholder(shape=()),
"y": backend.placeholder(shape=()),
}
outputs = [placeholders["x"] * placeholders["y"]]
f = backend.function(inputs=placeholders, outputs=outputs)
results = f({"x": CompositeVariable([backend.variable(2.0)]), "y": 3.0})
self.assertEqual(results[0], 6.0)
def test_function_single_input_output(self):
x_ph = backend.placeholder(shape=(), name="x")
output = x_ph * x_ph
f = backend.function(x_ph, output)
result = f(2.0)
self.assertEqual(result, 4.0)
def test_tuple_updates(self):
if tf.executing_eagerly():
self.skipTest("eager backend.function does not support updates")
x_ph = backend.placeholder(ndim=2)
v = backend.variable(np.ones((4, 2)))
output = x_ph**2 + v
new_v = v + x_ph
f = backend.function(x_ph, output, updates=[(v, new_v)])
input_val = np.random.random((4, 2))
result = f(input_val)
self.assertAllClose(result, input_val**2 + 1)
self.assertAllClose(backend.get_value(v), np.ones((4, 2)) + input_val)
class BackendGraphTests(tf.test.TestCase, parameterized.TestCase):
@test_combinations.generate(test_combinations.combine(mode=["graph"]))
def test_function_placeholder_with_default(self):
with backend.get_graph().as_default():
x1 = tf.compat.v1.placeholder_with_default(
np.array(2.0, dtype="float32"), shape=()
)
x2 = tf.compat.v1.placeholder_with_default(
np.array(3, dtype="int32"), shape=()
)
y1 = x1 + backend.cast(x2, "float32")
y2 = x1 * backend.cast(x2, "float32")
f = backend.function([x1, x2], [y1, y2])
output_values = f([4, 5])
self.assertEqual(output_values, [9.0, 20.0])
output_values = f([None, None])
self.assertEqual(output_values, [5.0, 6.0])
def test_function_tf_feed_symbols(self):
# Test TF-Keras backend functions with TF tensor inputs.
with tf.Graph().as_default(), self.cached_session():
# Test feeding a resource variable to `function`.
x1 = backend.placeholder(shape=())
x2 = backend.placeholder(shape=())
lr = backend.learning_phase() # Include a placeholder_with_default.
y1 = backend.variable(10.0)
y2 = 3
f = backend.function(
inputs=[x1, x2, lr],
outputs=[x1 + 1, backend.in_train_phase(x2 + 2, x2 - 1)],
)
outs = f([y1, y2, None]) # Use default learning_phase value.
self.assertEqual(outs, [11.0, 2.0])
outs = f([y1, y2, 1]) # Set learning phase value.
self.assertEqual(outs, [11.0, 5.0])
# Test triggering a callable refresh by changing the input.
y3 = backend.constant(20.0) # Test with tensor
outs = f([y3, y2, None])
self.assertEqual(outs, [21.0, 2.0])
y4 = 4 # Test with non-symbol
outs = f([y4, y2, None])
self.assertEqual(outs, [5.0, 2.0])
# Test with a different dtype
y5 = backend.constant(10.0, dtype="float64")
outs = f([y5, y2, None])
self.assertEqual(outs, [11.0, 2.0])
def test_function_tf_fetches(self):
        # Additional operations can be passed to tf.compat.v1.Session().run()
        # via its `fetches` argument. In contrast to the `updates` argument
        # of backend.function(), these have no control dependency on
        # `outputs`, so they can run in parallel. They also do not
        # contribute to the output of backend.function().
with tf.Graph().as_default(), self.cached_session():
x = backend.variable(0.0)
y = backend.variable(0.0)
x_placeholder = backend.placeholder(shape=())
y_placeholder = backend.placeholder(shape=())
f = backend.function(
inputs=[x_placeholder, y_placeholder],
outputs=[x_placeholder + y_placeholder],
updates=[(x, x_placeholder + 1.0)],
fetches=[backend.update(y, 5.0)],
)
output = f([10.0, 20.0])
self.assertEqual(output, [30.0])
self.assertEqual(
backend.get_session().run(fetches=[x, y]), [11.0, 5.0]
)
def test_function_tf_feed_dict(self):
        # Additional substitutions can be passed to
        # `tf.compat.v1.Session().run()` via its `feed_dict` argument. Note
        # that the feed_dict is passed once in the constructor, but the
        # values in the dictionary can be modified afterwards. Through this
        # feed_dict we can provide additional substitutions besides TF-Keras
        # inputs.
with tf.Graph().as_default(), self.cached_session():
x = backend.variable(0.0)
y = backend.variable(0.0)
x_placeholder = backend.placeholder(shape=())
y_placeholder = backend.placeholder(shape=())
feed_dict = {y_placeholder: 3.0}
fetches = [backend.update(y, y_placeholder * 10.0)]
f = backend.function(
inputs=[x_placeholder],
outputs=[x_placeholder + 1.0],
updates=[(x, x_placeholder + 10.0)],
feed_dict=feed_dict,
fetches=fetches,
)
output = f([10.0])
self.assertEqual(output, [11.0])
self.assertEqual(
backend.get_session().run(fetches=[x, y]), [20.0, 30.0]
)
            # The updated value in feed_dict is picked up by the next call
            # to the backend function.
feed_dict[y_placeholder] = 4.0
output = f([20.0])
self.assertEqual(output, [21.0])
self.assertEqual(
backend.get_session().run(fetches=[x, y]), [30.0, 40.0]
)
def test_function_tf_run_options_with_run_metadata(self):
with tf.Graph().as_default(), self.cached_session():
x_placeholder = backend.placeholder(shape=())
y_placeholder = backend.placeholder(shape=())
run_options = tf.compat.v1.RunOptions(output_partition_graphs=True)
run_metadata = tf.compat.v1.RunMetadata()
# enable run_options.
f = backend.function(
inputs=[x_placeholder, y_placeholder],
outputs=[x_placeholder + y_placeholder],
options=run_options,
run_metadata=run_metadata,
)
output = f([10.0, 20.0])
self.assertEqual(output, [30.0])
self.assertNotEmpty(run_metadata.partition_graphs)
# disable run_options.
f1 = backend.function(
inputs=[x_placeholder, y_placeholder],
outputs=[x_placeholder + y_placeholder],
run_metadata=run_metadata,
)
output1 = f1([10.0, 20.0])
self.assertEqual(output1, [30.0])
self.assertEmpty(run_metadata.partition_graphs)
def test_function_fetch_callbacks(self):
class CallbackStub:
def __init__(self):
self.times_called = 0
self.callback_result = 0
def _fetch_callback(self, result):
self.times_called += 1
self.callback_result = result
with tf.Graph().as_default(), self.cached_session():
callback = CallbackStub()
x_placeholder = backend.placeholder(shape=())
y_placeholder = backend.placeholder(shape=())
callback_op = x_placeholder * y_placeholder
f = backend.function(
inputs=[x_placeholder, y_placeholder],
outputs=[x_placeholder + y_placeholder],
)
f.fetches.append(callback_op)
f.fetch_callbacks[callback_op] = callback._fetch_callback
_ = f([10.0, 20.0])
self.assertEqual(callback.times_called, 1)
self.assertEqual(callback.callback_result, 200)
def test_get_session_different_graphs(self):
with tf.Graph().as_default():
x = backend.constant(1)
session = backend.get_session()
self.assertIs(session, backend.get_session((x,)))
self.assertIs(session, backend.get_session())
with tf.Graph().as_default():
self.assertIs(session, backend.get_session((x,)))
self.assertIsNot(session, backend.get_session())
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class ControlOpsTests(tf.test.TestCase):
def test_function_switch_basics(self):
x = tf.constant(2.0)
y = tf.constant(3.0)
def xpowy():
return backend.pow(x, y)
def ypowx():
return backend.pow(y, x)
tensor = backend.switch(backend.less(x, y), xpowy, ypowx)
self.assertEqual(backend.eval(tensor), [8.0])
tensor = backend.switch(backend.greater(x, y), xpowy, ypowx)
self.assertEqual(backend.eval(tensor), [9.0])
def test_unequal_rank(self):
x = tf.convert_to_tensor(
np.array([[1, 2, 3], [4, 5, 6]]), dtype="float32"
)
y = tf.convert_to_tensor(np.array([1, 2, 3]), dtype="float32")
def true_func():
return x
def false_func():
return y
with self.assertRaisesRegex(
ValueError, "Rank of `condition` should be less than"
):
backend.switch(backend.equal(x, x), false_func, true_func)
class ContextValueCacheTest(tf.test.TestCase):
def test_cache(self):
cache = backend.ContextValueCache(list)
graph1 = tf.Graph()
graph2 = tf.Graph()
cache[graph1].append(1)
with graph1.as_default():
cache[None].append(2)
with graph2.as_default():
cache[None].append(3)
cache[graph2].append(4)
self.assertAllEqual(cache[graph1], [1, 2])
self.assertAllEqual(cache[graph2], [3, 4])
with tf.__internal__.eager_context.eager_mode():
cache[None].append(5)
cache[None].append(6)
self.assertAllEqual(cache[None], [5, 6])
self.assertLen(cache, 3)
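        # Once graph1 is garbage collected, its cache entry is dropped.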
del graph1
gc.collect()
self.assertLen(cache, 2)
def test_cache_in_parent_graph(self):
cache = backend.ContextValueCache(int)
cache.setdefault(None, backend.constant(5))
with tf.Graph().as_default() as g:
# g is not a child graph of the default test context, so the
# recursive lookup will create a new default value.
self.assertAllEqual(cache[g], 0)
@tf.function
def fn():
# The function graph is a child of the default test context, so
# __getitem__ will return the previously saved value.
return cache[tf.compat.v1.get_default_graph()]
self.assertEqual(self.evaluate(fn()), 5)
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class RandomGeneratorTest(tf.test.TestCase, parameterized.TestCase):
def test_generator_reproducibility(self):
seed = 1337
gen1 = backend.RandomGenerator(seed, rng_type="stateful")
output1 = gen1.random_normal(shape=[2, 3])
output2 = gen1.random_normal(shape=[2, 3])
self.assertNotAllClose(output1, output2)
gen2 = backend.RandomGenerator(seed, rng_type="stateful")
output3 = gen2.random_normal(shape=[2, 3])
output4 = gen2.random_normal(shape=[2, 3])
if tf.compat.v1.executing_eagerly():
# Make sure generator with same seed will produce same sequence.
self.assertAllEqual(output1, output3)
self.assertAllEqual(output2, output4)
def test_unseeded(self):
seed = None
gen1 = backend.RandomGenerator(seed, rng_type="stateful")
output1 = gen1.random_normal(shape=[2, 3])
gen2 = backend.RandomGenerator(seed, rng_type="stateful")
output2 = gen2.random_normal(shape=[2, 3])
self.assertNotAllClose(output1, output2)
def test_implementation(self):
seed = 1337
seeded = backend.RandomGenerator(seed, rng_type="stateful")
seeded._maybe_init()
unseeded = backend.RandomGenerator(None, rng_type="stateful")
unseeded._maybe_init()
if tf.compat.v1.executing_eagerly():
# Make sure we use tf.random.Generator in v2.
self.assertIsNotNone(seeded._generator)
self.assertIsNotNone(unseeded._generator)
else:
# In v1, we can't use tf.random.Generator since it is not compatible
# with graph mode.
self.assertIsNone(seeded._generator)
self.assertIsNone(unseeded._generator)
def test_unseeded_with_utils_set_random_seed(self):
keras_seed = 1337
tf_utils.set_random_seed(keras_seed)
gen1 = backend.RandomGenerator(seed=None, rng_type="stateful")
output1 = gen1.random_normal(shape=[2, 3])
output2 = gen1.random_normal(shape=[2, 3])
self.assertNotAllClose(output1, output2)
        # Make sure that even with an unseeded backend generator, setting the
        # Keras random seed makes the generator produce the same sequence.
        # This keeps all clients in sync in the multi-client setting, as long
        # as they all set the same Keras seed.
tf_utils.set_random_seed(keras_seed)
gen2 = backend.RandomGenerator(seed=None, rng_type="stateful")
output3 = gen2.random_normal(shape=[2, 3])
output4 = gen2.random_normal(shape=[2, 3])
gen3 = backend.RandomGenerator(seed=None, rng_type="stateful")
output5 = gen3.random_normal(shape=[2, 3])
output6 = gen3.random_normal(shape=[2, 3])
if tf.compat.v1.executing_eagerly():
# The generator is only used in the tf2 with eager.
self.assertAllEqual(output1, output3)
self.assertAllEqual(output2, output4)
            # Also make sure different generator instances still produce
            # different results.
self.assertNotAllEqual(output3, output5)
self.assertNotAllEqual(output4, output6)
def test_force_stateless(self):
gen = backend.RandomGenerator(seed=None, rng_type="stateless")
output1 = gen.random_normal(shape=[2, 3])
seed1 = gen._seed
output2 = gen.random_normal(shape=[2, 3])
seed2 = gen._seed
self.assertAllClose(output1, output2)
# Make sure we always use the same seed, and it is not None
self.assertEqual(seed1, seed2)
self.assertIsNotNone(seed1)
# Make sure a new seed is used when creating a new generator instance.
gen2 = backend.RandomGenerator(seed=None, rng_type="stateless")
output3 = gen2.random_normal(shape=[2, 3])
seed3 = gen2._seed
output4 = gen2.random_normal(shape=[2, 3])
seed4 = gen2._seed
self.assertAllClose(output3, output4)
self.assertEqual(seed3, seed4)
self.assertNotEqual(seed1, seed3)
def test_force_stateless_with_seed(self):
seed = 1337
gen = backend.RandomGenerator(seed=seed, rng_type="stateless")
output1 = gen.random_normal(shape=[2, 3])
seed1 = gen._seed
output2 = gen.random_normal(shape=[2, 3])
seed2 = gen._seed
self.assertAllClose(output1, output2)
# Make sure we always use the same seed, and it is not None
self.assertEqual(seed, seed1)
self.assertEqual(seed, seed2)
# Make sure RandomGenerator always generate same value with same seed.
gen2 = backend.RandomGenerator(seed=seed, rng_type="stateless")
output3 = gen2.random_normal(shape=[2, 3])
self.assertAllClose(output3, output1)
@parameterized.named_parameters(("seeded", 1337), ("unseeded", None))
def test_stateless_with_seed_delta(self, seed):
gen = backend.RandomGenerator(seed=seed, rng_type="stateless")
output1 = gen.random_normal(shape=[2, 3], nonce=hash((1, 1)))
seed1 = gen._seed
output2 = gen.random_normal(shape=[2, 3], nonce=hash((1, 1)))
seed2 = gen._seed
output3 = gen.random_normal(shape=[2, 3], nonce=hash((2, 1)))
seed3 = gen._seed
self.assertAllClose(output1, output2)
# Different seed_delta will produce different value.
self.assertNotAllClose(output1, output3)
# Make sure the internal seed is not changed at all.
self.assertEqual(seed1, seed2)
self.assertEqual(seed1, seed3)
def test_unknown_rng_type(self):
with self.assertRaisesRegex(ValueError, "Got: unknown"):
backend.RandomGenerator(seed=None, rng_type="unknown")
def test_prefer_stateless_over_global_generator(self):
try:
generator_enabled = backend.is_tf_random_generator_enabled()
if not generator_enabled:
backend.enable_tf_random_generator()
seed = 1337
gen = backend.RandomGenerator(seed=seed, rng_type="stateless")
output1 = gen.random_normal(shape=[2, 3])
output2 = gen.random_normal(shape=[2, 3])
self.assertIsNone(gen._generator)
self.assertAllClose(output1, output2)
finally:
if not generator_enabled:
# Change the global flag back.
backend.disable_tf_random_generator()
if __name__ == "__main__":
tf.test.main()
# ==== end of file: tf-keras/tf_keras/backend_test.py ====
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Benchmarks on Hierarchical RNN on MNIST digits."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v2 as tf
import tf_keras as keras
from tf_keras.benchmarks import benchmark_util
class HierarchicalRNNBenchmark(tf.test.Benchmark):
"""Benchmarks for Hierarchical RNN using `tf.test.Benchmark`."""
def __init__(self):
super().__init__()
self.num_classes = 10
self.row_hidden, self.col_hidden = 128, 128
(self.x_train, self.y_train), _ = keras.datasets.mnist.load_data()
self.x_train = self.x_train.reshape(self.x_train.shape[0], 28, 28, 1)
self.x_train = self.x_train.astype("float32") / 255
self.y_train = keras.utils.to_categorical(
self.y_train, self.num_classes
)
def _build_model(self):
"""Model from https://github.com/keras-team/tf-keras/blob/master/
examples/mnist_hierarchical_rnn.py.
"""
row, col, pixel = self.x_train.shape[1:]
inputs = keras.layers.Input(shape=(row, col, pixel))
encoded_rows = keras.layers.TimeDistributed(
keras.layers.LSTM(self.row_hidden)
)(inputs)
encoded_cols = keras.layers.LSTM(self.col_hidden)(encoded_rows)
outputs = keras.layers.Dense(self.num_classes, activation="softmax")(
encoded_cols
)
model = keras.Model(inputs, outputs)
return model
# In each benchmark test, the required arguments for the
# method `measure_performance` include:
    # x: Input data. It can be NumPy arrays or data loaded from tfds.
    # y: Target data. If `x` is a dataset or generator instance,
    # `y` should not be specified.
    # loss: Loss function for the model.
    # optimizer: Optimizer for the model.
    # Check more details in the `measure_performance()` method of
    # benchmark_util.
def benchmark_hrnn_mnist_bs_256(self):
"""Measure performance with batch_size=256."""
batch_size = 256
metrics, wall_time, extras = benchmark_util.measure_performance(
self._build_model,
x=self.x_train,
y=self.y_train,
batch_size=batch_size,
optimizer="rmsprop",
loss="categorical_crossentropy",
metrics=["accuracy"],
)
metadata = benchmark_util.get_keras_examples_metadata(
"hierarchical_rnn", batch_size
)
extras.update(metadata)
self.report_benchmark(
wall_time=wall_time, metrics=metrics, extras=extras
)
def benchmark_hrnn_mnist_bs_512(self):
"""Measure performance with batch_size=512."""
batch_size = 512
metrics, wall_time, extras = benchmark_util.measure_performance(
self._build_model,
x=self.x_train,
y=self.y_train,
batch_size=batch_size,
optimizer="rmsprop",
loss="categorical_crossentropy",
metrics=["accuracy"],
)
metadata = benchmark_util.get_keras_examples_metadata(
"hierarchical_rnn", batch_size
)
extras.update(metadata)
self.report_benchmark(
wall_time=wall_time, metrics=metrics, extras=extras
)
def benchmark_hrnn_mnist_bs_1024(self):
"""Measure performance with batch_size=1024."""
batch_size = 1024
metrics, wall_time, extras = benchmark_util.measure_performance(
self._build_model,
x=self.x_train,
y=self.y_train,
batch_size=batch_size,
optimizer="rmsprop",
loss="categorical_crossentropy",
metrics=["accuracy"],
)
metadata = benchmark_util.get_keras_examples_metadata(
"hierarchical_rnn", batch_size
)
extras.update(metadata)
self.report_benchmark(
wall_time=wall_time, metrics=metrics, extras=extras
)
def benchmark_hrnn_mnist_bs_1024_gpu_2(self):
"""Measure performance with batch_size=1024, gpu=2 and
distribution_strategy='mirrored'
"""
batch_size = 1024
metrics, wall_time, extras = benchmark_util.measure_performance(
self._build_model,
x=self.x_train,
y=self.y_train,
batch_size=batch_size,
num_gpus=2,
distribution_strategy="mirrored",
optimizer="rmsprop",
loss="categorical_crossentropy",
metrics=["accuracy"],
)
metadata = benchmark_util.get_keras_examples_metadata(
"hierarchical_rnn", batch_size
)
extras.update(metadata)
self.report_benchmark(
wall_time=wall_time, metrics=metrics, extras=extras
)
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/benchmarks/keras_examples_benchmarks/mnist_hierarchical_rnn_benchmark_test.py/0 | {
"file_path": "tf-keras/tf_keras/benchmarks/keras_examples_benchmarks/mnist_hierarchical_rnn_benchmark_test.py",
"repo_id": "tf-keras",
"token_count": 2445
} | 168 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""CIFAR10 small images classification dataset."""
import os
import numpy as np
from tf_keras import backend
from tf_keras.datasets.cifar import load_batch
from tf_keras.utils.data_utils import get_file
# isort: off
from tensorflow.python.util.tf_export import keras_export
@keras_export("keras.datasets.cifar10.load_data")
def load_data():
"""Loads the CIFAR10 dataset.
This is a dataset of 50,000 32x32 color training images and 10,000 test
images, labeled over 10 categories. See more info at the
[CIFAR homepage](https://www.cs.toronto.edu/~kriz/cifar.html).
The classes are:
| Label | Description |
|:-----:|-------------|
| 0 | airplane |
| 1 | automobile |
| 2 | bird |
| 3 | cat |
| 4 | deer |
| 5 | dog |
| 6 | frog |
| 7 | horse |
| 8 | ship |
| 9 | truck |
Returns:
Tuple of NumPy arrays: `(x_train, y_train), (x_test, y_test)`.
**x_train**: uint8 NumPy array of image data with shapes
`(50000, 32, 32, 3)`, containing the training data. Pixel values range
from 0 to 255.
**y_train**: uint8 NumPy array of labels (integers in range 0-9)
with shape `(50000, 1)` for the training data.
**x_test**: uint8 NumPy array of image data with shapes
`(10000, 32, 32, 3)`, containing the test data. Pixel values range
from 0 to 255.
**y_test**: uint8 NumPy array of labels (integers in range 0-9)
with shape `(10000, 1)` for the test data.
Example:
```python
(x_train, y_train), (x_test, y_test) = keras.datasets.cifar10.load_data()
assert x_train.shape == (50000, 32, 32, 3)
assert x_test.shape == (10000, 32, 32, 3)
assert y_train.shape == (50000, 1)
assert y_test.shape == (10000, 1)
```
"""
dirname = "cifar-10-batches-py"
origin = "https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz"
path = get_file(
dirname,
origin=origin,
untar=True,
file_hash=( # noqa: E501
"6d958be074577803d12ecdefd02955f39262c83c16fe9348329d7fe0b5c001ce"
),
)
num_train_samples = 50000
x_train = np.empty((num_train_samples, 3, 32, 32), dtype="uint8")
y_train = np.empty((num_train_samples,), dtype="uint8")
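    # Each of the five training files holds 10,000 images; file `i` fills
    # rows [(i - 1) * 10000, i * 10000) of the training arrays below.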
for i in range(1, 6):
fpath = os.path.join(path, "data_batch_" + str(i))
(
x_train[(i - 1) * 10000 : i * 10000, :, :, :],
y_train[(i - 1) * 10000 : i * 10000],
) = load_batch(fpath)
fpath = os.path.join(path, "test_batch")
x_test, y_test = load_batch(fpath)
y_train = np.reshape(y_train, (len(y_train), 1))
y_test = np.reshape(y_test, (len(y_test), 1))
if backend.image_data_format() == "channels_last":
x_train = x_train.transpose(0, 2, 3, 1)
x_test = x_test.transpose(0, 2, 3, 1)
x_test = x_test.astype(x_train.dtype)
y_test = y_test.astype(y_train.dtype)
return (x_train, y_train), (x_test, y_test)
| tf-keras/tf_keras/datasets/cifar10.py/0 | {
"file_path": "tf-keras/tf_keras/datasets/cifar10.py",
"repo_id": "tf-keras",
"token_count": 1582
} | 169 |
# Copyright 2021 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for `DatasetCreator` with `Model.fit` across usages and strategies."""
import numpy as np
import tensorflow.compat.v2 as tf
from tf_keras.distribute import dataset_creator_model_fit_test_base as test_base
from tf_keras.distribute import strategy_combinations
from tf_keras.testing_infra import test_utils
from tf_keras.utils import dataset_creator
# isort: off
from tensorflow.python.framework import (
test_util as tf_test_utils,
)
# TODO(rchao): Investigate why there cannot be single worker and multi worker
# PS strategies running in the same shard.
@test_utils.run_v2_only
@tf.__internal__.distribute.combinations.generate(
tf.__internal__.test.combinations.combine(
strategy=strategy_combinations.all_strategies
+ strategy_combinations.multi_worker_mirrored_strategies
+ strategy_combinations.parameter_server_strategies_multi_worker,
mode="eager",
)
)
class DatasetCreatorModelFitTest(test_base.DatasetCreatorModelFitTestBase):
def setUp(self):
super().setUp()
if tf_test_utils.is_xla_enabled():
self.skipTest(
"model.optimizer.iterations values is not as expected "
"with XLA: b/184384487"
)
def testModelFit(self, strategy):
model = self._model_fit(strategy)
self.assertEqual(model.optimizer.iterations, 100)
def testModelFitwithStepsPerEpochNegativeOne(self, strategy):
def dataset_fn(input_context):
del input_context
x = tf.random.uniform((10, 10))
y = tf.random.uniform((10,))
return (
tf.data.Dataset.from_tensor_slices((x, y)).shuffle(10).batch(2)
)
if strategy._should_use_with_coordinator:
with self.assertRaises(
(tf.errors.OutOfRangeError, tf.errors.CancelledError)
):
self._model_fit(
strategy,
steps_per_epoch=-1,
x=dataset_creator.DatasetCreator(dataset_fn),
validation_data=dataset_creator.DatasetCreator(dataset_fn),
)
else:
self._model_fit(
strategy,
steps_per_epoch=-1,
x=dataset_creator.DatasetCreator(dataset_fn),
validation_data=dataset_creator.DatasetCreator(dataset_fn),
)
def testModelFitWithNumpyData(self, strategy):
x = np.random.rand(100, 10)
y = np.random.rand(100, 1)
model = self._model_fit(
strategy,
x=x,
y=y,
batch_size=1,
validation_data=(x, y),
)
self.assertEqual(model.optimizer.iterations, 100)
def testModelFitWithTensorData(self, strategy):
x = tf.random.uniform((100, 10))
y = tf.random.uniform((100,))
model = self._model_fit(
strategy,
x=x,
y=y,
batch_size=1,
validation_data=(x, y),
)
self.assertEqual(model.optimizer.iterations, 100)
def testModelFitWithLookupLayer(self, strategy):
model = self._model_fit(strategy, use_lookup_layer=True)
self.assertEqual(model.optimizer.iterations, 100)
def testModelFitWithNormalizationLayer(self, strategy):
model = self._model_fit(strategy, with_normalization_layer=True)
self.assertEqual(model.optimizer.iterations, 100)
def testModelFitWithStepsPerExecution(self, strategy):
model = self._model_fit(strategy, steps_per_execution=10)
self.assertEqual(model.optimizer.iterations, 100)
def testModelFitWithNoStepsPerEpoch(self, strategy):
with self.assertRaisesRegex(
ValueError,
"When using a `tf.keras.utils.experimental.DatasetCreator`, "
"`steps_per_epoch`, `validation_steps`, `steps`, or "
"`pss_evaluation_shards` argument must be provided in "
"`Model.fit`, `Model.evaluate`, or `Model.predict`.",
):
self._model_fit(strategy, steps_per_epoch=None)
def testModelEvaluate(self, strategy):
self._model_evaluate(strategy)
self.assertGreaterEqual(self._accuracy_metric.result(), 0.0)
def testModelEvaluateWithNumpyData(self, strategy):
x = np.random.rand(100, 10)
y = np.random.rand(100, 1)
self._model_evaluate(
strategy,
x=x,
y=y,
batch_size=1,
)
self.assertGreaterEqual(self._accuracy_metric.result(), 0.0)
def testModelEvaluateWithTensorData(self, strategy):
x = tf.random.uniform((100, 10))
y = tf.random.uniform((100,))
self._model_evaluate(
strategy,
x=x,
y=y,
batch_size=1,
)
self.assertGreaterEqual(self._accuracy_metric.result(), 0.0)
def testModelEvaluateWithNormalizationLayer(self, strategy):
self._model_evaluate(strategy, with_normalization_layer=True)
self.assertGreaterEqual(self._accuracy_metric.result(), 0.0)
def testModelEvaluateWithStepsPerExecution(self, strategy):
self._model_evaluate(strategy, steps_per_execution=10)
self.assertGreaterEqual(self._accuracy_metric.result(), 0.0)
def testModelEvaluateWithNoStepsPerEpoch(self, strategy):
with self.assertRaisesRegex(
ValueError,
"When using a `tf.keras.utils.experimental.DatasetCreator`, "
"`steps_per_epoch`, `validation_steps`, `steps`, or "
"`pss_evaluation_shards` argument must be provided in "
"`Model.fit`, `Model.evaluate`, or `Model.predict`.",
):
self._model_evaluate(strategy, steps=None)
def testModelPredict(self, strategy):
_, predictions = self._model_predict(strategy, steps=3)
        # Check the first (0th index), fourth (3rd index) and the last
        # predictions because the first, fourth and the last inputs are the
        # same in `model.predict`, so their predictions should match.
self.assertTrue(
all(predictions[0] == predictions[i] for i in [0, 3, 5])
)
self.assertFalse(
all(predictions[0] == predictions[i] for i in [0, 1, 2, 4])
)
def testModelPredictWithNumpyData(self, strategy):
x = np.array([[1.0], [2.0], [3.0], [1.0], [5.0], [1.0]])
_, predictions = self._model_predict(strategy, test_data=x)
self.assertTrue(
all(predictions[0] == predictions[i] for i in [0, 3, 5])
)
self.assertFalse(
all(predictions[0] == predictions[i] for i in [0, 1, 2, 4])
)
def testModelPredictWithTensorData(self, strategy):
x = tf.constant([[1.0], [2.0], [3.0], [1.0], [5.0], [1.0]])
_, predictions = self._model_predict(strategy, test_data=x)
self.assertTrue(
all(predictions[0] == predictions[i] for i in [0, 3, 5])
)
self.assertFalse(
all(predictions[0] == predictions[i] for i in [0, 1, 2, 4])
)
def testModelPredictWithNormalizationLayer(self, strategy):
_, predictions = self._model_predict(
strategy, with_normalization_layer=True, steps=3
)
        # Check the first (0th index), fourth (3rd index) and the last
        # predictions because the first, fourth and the last inputs are the
        # same in `model.predict`, so their predictions should match.
self.assertTrue(
all(predictions[0] == predictions[i] for i in [0, 3, 5])
)
self.assertFalse(
all(predictions[0] == predictions[i] for i in [0, 1, 2, 4])
)
def testModelPredictWithStepsPerExecution(self, strategy):
_, predictions = self._model_predict(
strategy, steps_per_execution=3, steps=3
)
        # Check the first (0th index), fourth (3rd index) and the last
        # predictions because the first, fourth and the last inputs are the
        # same in `model.predict`, so their predictions should match.
self.assertTrue(
all(predictions[0] == predictions[i] for i in [0, 3, 5])
)
self.assertFalse(
all(predictions[0] == predictions[i] for i in [0, 1, 2, 4])
)
def testModelFitAndPredict(self, strategy):
def fit_dataset_fn(input_context):
del input_context
x = tf.random.uniform((10, 1))
y = tf.random.uniform((10,))
return (
tf.data.Dataset.from_tensor_slices((x, y))
.shuffle(10)
.repeat()
.batch(2)
)
x = dataset_creator.DatasetCreator(fit_dataset_fn)
validation_data = dataset_creator.DatasetCreator(fit_dataset_fn)
model = self._model_fit(strategy, x=x, validation_data=validation_data)
_, predictions = self._model_predict(strategy, model, steps=3)
        # Check the first (0th index), fourth (3rd index) and the last
        # predictions because the first, fourth and the last inputs are the
        # same in `model.predict`, so their predictions should match.
self.assertTrue(
all(predictions[0] == predictions[i] for i in [0, 3, 5])
)
self.assertFalse(
all(predictions[0] == predictions[i] for i in [0, 1, 2, 4])
)
def testModelPredictWithDatasetCreator(self, strategy):
if isinstance(strategy, tf.distribute.MultiWorkerMirroredStrategy):
self.skipTest("b/189223991")
def _dataset_fn(input_context):
del input_context
x = tf.constant([[1.0], [2.0], [3.0], [1.0], [5.0], [1.0]])
return tf.data.Dataset.from_tensor_slices(x).repeat().batch(2)
_, predictions = self._model_predict(
strategy,
steps=3,
test_data=dataset_creator.DatasetCreator(_dataset_fn),
)
        # Check the first (0th index), fourth (3rd index) and the last
        # predictions because the first, fourth and the last inputs are the
        # same in `model.predict`, so their predictions should match.
self.assertTrue(
all(predictions[0] == predictions[i] for i in [0, 3, 5])
)
self.assertFalse(
all(predictions[0] == predictions[i] for i in [0, 1, 2, 4])
)
def testModelTrainTFFunction(self, strategy):
model = self._model_fit(strategy)
self.assertIsInstance(
model.train_tf_function, tf.__internal__.function.Function
)
if __name__ == "__main__":
tf.__internal__.distribute.multi_process_runner.test_main()
| tf-keras/tf_keras/distribute/dataset_creator_model_fit_test.py/0 | {
"file_path": "tf-keras/tf_keras/distribute/dataset_creator_model_fit_test.py",
"repo_id": "tf-keras",
"token_count": 5141
} | 170 |
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for keras premade models using tf.distribute.Strategy."""
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
from tf_keras.engine import sequential
from tf_keras.layers import core
from tf_keras.optimizers.legacy import adagrad
from tf_keras.optimizers.legacy import gradient_descent
from tf_keras.premade_models import linear
from tf_keras.premade_models import wide_deep
from tf_keras.utils import dataset_creator
def strategy_combinations_eager_data_fn():
return tf.__internal__.test.combinations.combine(
distribution=[
tf.__internal__.distribute.combinations.default_strategy,
tf.__internal__.distribute.combinations.one_device_strategy,
tf.__internal__.distribute.combinations.one_device_strategy_gpu,
tf.__internal__.distribute.combinations.mirrored_strategy_with_gpu_and_cpu, # noqa: E501
tf.__internal__.distribute.combinations.mirrored_strategy_with_two_gpus, # noqa: E501
tf.__internal__.distribute.combinations.mirrored_strategy_with_two_gpus_no_merge_call, # noqa: E501
tf.__internal__.distribute.combinations.multi_worker_mirrored_2x1_cpu, # noqa: E501
tf.__internal__.distribute.combinations.multi_worker_mirrored_2x1_gpu, # noqa: E501
tf.__internal__.distribute.combinations.multi_worker_mirrored_2x2_gpu, # noqa: E501
tf.__internal__.distribute.combinations.parameter_server_strategy_1worker_2ps_cpu, # noqa: E501
tf.__internal__.distribute.combinations.parameter_server_strategy_1worker_2ps_1gpu, # noqa: E501
# NOTE: TPUStrategy not tested because the models in this test are
# sparse and do not work with TPUs.
],
use_dataset_creator=[True, False],
mode=["eager"],
data_fn=["numpy", "dataset"],
)
INPUT_SIZE = 64
BATCH_SIZE = 10
def get_numpy():
inputs = np.random.uniform(low=-5.0, high=5.0, size=(INPUT_SIZE, 2)).astype(
np.float32
)
output = 0.3 * inputs[:, 0] + 0.2 * inputs[:, 1]
return inputs, output
def get_dataset(input_context=None, batch_size=None):
inputs, output = get_numpy()
dataset = tf.data.Dataset.from_tensor_slices((inputs, output))
if input_context:
dataset = dataset.shard(
input_context.num_input_pipelines, input_context.input_pipeline_id
)
if batch_size is None:
batch_size = BATCH_SIZE
dataset = dataset.batch(batch_size).repeat(200)
return dataset
# A `dataset_fn` is required for `Model.fit` to work across all strategies.
def dataset_fn(input_context):
batch_size = input_context.get_per_replica_batch_size(
global_batch_size=BATCH_SIZE
)
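    # For example (sketch): with BATCH_SIZE == 10 and two in-sync replicas,
    # `get_per_replica_batch_size` yields a per-replica batch size of 5.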
return get_dataset(input_context, batch_size)
class KerasPremadeModelsTest(tf.test.TestCase, parameterized.TestCase):
@tf.__internal__.distribute.combinations.generate(
strategy_combinations_eager_data_fn()
)
def test_linear_model(self, distribution, use_dataset_creator, data_fn):
if (not use_dataset_creator) and isinstance(
distribution, tf.distribute.experimental.ParameterServerStrategy
):
self.skipTest(
"Parameter Server strategy requires dataset creator to be used "
"in model.fit."
)
if (
not tf.__internal__.tf2.enabled()
and use_dataset_creator
and isinstance(
distribution, tf.distribute.experimental.ParameterServerStrategy
)
):
self.skipTest(
"Parameter Server strategy with dataset creator needs to be "
"run when eager execution is enabled."
)
with distribution.scope():
model = linear.LinearModel()
opt = gradient_descent.SGD(learning_rate=0.1)
model.compile(opt, "mse")
if use_dataset_creator:
x = dataset_creator.DatasetCreator(dataset_fn)
hist = model.fit(x, epochs=3, steps_per_epoch=INPUT_SIZE)
else:
if data_fn == "numpy":
inputs, output = get_numpy()
hist = model.fit(inputs, output, epochs=3)
else:
hist = model.fit(get_dataset(), epochs=3)
self.assertLess(hist.history["loss"][2], 0.2)
@tf.__internal__.distribute.combinations.generate(
strategy_combinations_eager_data_fn()
)
def test_wide_deep_model(self, distribution, use_dataset_creator, data_fn):
if (not use_dataset_creator) and isinstance(
distribution, tf.distribute.experimental.ParameterServerStrategy
):
self.skipTest(
"Parameter Server strategy requires dataset creator to be used "
"in model.fit."
)
if (
not tf.__internal__.tf2.enabled()
and use_dataset_creator
and isinstance(
distribution, tf.distribute.experimental.ParameterServerStrategy
)
):
self.skipTest(
"Parameter Server strategy with dataset creator needs to be "
"run when eager execution is enabled."
)
with distribution.scope():
linear_model = linear.LinearModel(units=1)
dnn_model = sequential.Sequential([core.Dense(units=1)])
wide_deep_model = wide_deep.WideDeepModel(linear_model, dnn_model)
linear_opt = gradient_descent.SGD(learning_rate=0.05)
dnn_opt = adagrad.Adagrad(learning_rate=0.1)
wide_deep_model.compile(optimizer=[linear_opt, dnn_opt], loss="mse")
if use_dataset_creator:
x = dataset_creator.DatasetCreator(dataset_fn)
hist = wide_deep_model.fit(
x, epochs=3, steps_per_epoch=INPUT_SIZE
)
else:
if data_fn == "numpy":
inputs, output = get_numpy()
hist = wide_deep_model.fit(inputs, output, epochs=3)
else:
hist = wide_deep_model.fit(get_dataset(), epochs=3)
self.assertLess(hist.history["loss"][2], 0.2)
if __name__ == "__main__":
tf.__internal__.distribute.multi_process_runner.test_main()
| tf-keras/tf_keras/distribute/keras_premade_models_test.py/0 | {
"file_path": "tf-keras/tf_keras/distribute/keras_premade_models_test.py",
"repo_id": "tf-keras",
"token_count": 3122
} | 171 |
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for evaluation using TF-Keras model and ParameterServerStrategy."""
import threading
import time
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
from tensorflow.python.platform import tf_logging as logging
import tf_keras as keras
from tf_keras.metrics import base_metric
from tf_keras.testing_infra import test_utils
from tf_keras.utils import dataset_creator
from tf_keras.utils import tf_utils
# isort: off
from tensorflow.python.distribute import (
multi_worker_test_base,
)
from tensorflow.python.distribute.cluster_resolver import (
SimpleClusterResolver,
)
def _aggregate_results(coordinator_metrics, results):
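    # Sums the per-shard metric weights returned by the workers into the
    # coordinator's metric instances (the "loss" metric is skipped). Each
    # entry of `results` is expected to be a dict of the form (sketch):
    #     {"auc": [<list of weight tensors>], ...}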
for result in results:
for metric in coordinator_metrics:
if metric.name == "loss":
continue
assert metric.name in result.keys()
metric_result = result[metric.name]
assert len(metric_result) == len(metric.weights)
for weight, val in zip(metric.weights, metric_result):
weight.assign_add(val)
return coordinator_metrics
def make_binary_dataset_fn(num_examples, num_data_shards, batch_size):
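    # Builds labels so that, under mod sharding, shard `i` contains exactly
    # `i` positive (y == 1) examples, e.g. (sketch): shard 3 holds three ones
    # spread across its batches. This makes the exact accuracy of the
    # all-positive dummy model in the tests below easy to compute.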
def dataset_fn(input_context=None):
del input_context
x = np.arange(num_examples)
def make_batch_with_n_true(n):
return np.concatenate((np.ones(n), np.zeros(batch_size - n)))
y = np.zeros(num_examples)
batch_idxs = np.arange(num_examples // batch_size)
for shard_idx in range(num_data_shards):
num_correct = shard_idx
# Dataset.shard uses mod sharding, so each shard consists of the
# batches whose index mod (num_data_shards) = shard_idx
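            # Worked example (sketch): with num_data_shards == 4, shard 1
            # consists of batches 1, 5, 9, ... since each of those batch
            # indices is congruent to 1 modulo 4.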
batch_idxs_for_shard = np.where(
np.mod(batch_idxs, num_data_shards) == shard_idx
)[0]
for batch_idx in batch_idxs_for_shard:
# Select the individual data elements for this batch
batch_range = range(
batch_idx * batch_size, (batch_idx + 1) * batch_size
)
num_for_batch = min(num_correct, batch_size)
y[batch_range] = make_batch_with_n_true(num_for_batch)
num_correct -= num_for_batch
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.batch(batch_size)
return dataset
return dataset_fn
def make_multiclass_dataset_fn(
num_examples, num_data_shards, batch_size, n_classes
):
def dataset_fn(input_context=None):
del input_context
x = np.arange(num_examples)
y = np.mod(np.arange(num_examples), n_classes)
y[y == 0] = 1
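        # Label sketch: y == x mod n_classes, except that class 0 is remapped
        # to 1. A model that predicts one_hot(x mod n_classes) is therefore
        # wrong on exactly the x % n_classes == 0 examples, for an expected
        # unweighted accuracy of (n_classes - 1) / n_classes.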
y = tf.convert_to_tensor(y, dtype=tf.int64)
weights = np.random.uniform(size=num_examples)
dataset = tf.data.Dataset.from_tensor_slices((x, y, weights)).batch(
batch_size
)
return dataset
return dataset_fn
@test_utils.run_v2_only
class ExactEvaluationTest(tf.test.TestCase, parameterized.TestCase):
def setUp(self):
super(ExactEvaluationTest, self).setUp()
self._cluster = multi_worker_test_base.create_multi_process_cluster(
num_workers=5, num_ps=1, rpc_layer="grpc"
)
self._cluster_def = (
self._cluster.cluster_resolver.cluster_spec().as_dict()
)
cluster_resolver = SimpleClusterResolver(
tf.train.ClusterSpec(self._cluster_def), rpc_layer="grpc"
)
self.strategy = tf.distribute.experimental.ParameterServerStrategy(
cluster_resolver
)
self.cluster_coord = (
tf.distribute.experimental.coordinator.ClusterCoordinator(
self.strategy
)
)
def tearDown(self):
super(ExactEvaluationTest, self).tearDown()
self._cluster.stop()
self._cluster = None
def testDistributedMetrics(self):
coordinator_metrics = [
keras.metrics.AUC(),
keras.metrics.MeanAbsoluteError(),
]
def dataset_fn():
y_true = np.concatenate((np.zeros(512), np.ones(512)))
y_pred = np.concatenate(
(np.linspace(0, 1, 512), np.linspace(0, 1, 512))
)
return tf.data.Dataset.from_tensor_slices((y_true, y_pred)).batch(1)
@tf.function
def eval_shard_fn(total_shard, shard_id, worker_dataset):
with tf_utils.with_metric_local_vars_scope():
worker_metrics = []
for coord_metric in coordinator_metrics:
worker_metrics.append(
base_metric.clone_metric(coord_metric)
)
dataset_shard = worker_dataset.shard(total_shard, shard_id)
for value in dataset_shard:
for worker_metric in worker_metrics:
worker_metric.update_state(*value)
return {
metric.name: metric.weights for metric in worker_metrics
}
per_worker_dataset = self.cluster_coord.create_per_worker_dataset(
dataset_fn()
)
# Trigger dataset creation on workers without creating an iterator
built_dataset = per_worker_dataset.build()
        # Needs to be a tf.constant so it doesn't get re-traced each time.
        # Needs to be int64 because that's what Dataset.shard expects.
total_shards = tf.constant(100, dtype=tf.int64)
result_remote_values = []
logging.info("Scheduling eval closures")
for i in tf.range(total_shards):
result_remote_values.append(
self.cluster_coord.schedule(
eval_shard_fn,
args=(total_shards, i, built_dataset),
)
)
logging.info("Killing 2 workers")
self._cluster.kill_task("worker", 0)
self._cluster.kill_task("worker", 1)
time.sleep(1)
self._cluster.start_task("worker", 0)
self._cluster.start_task("worker", 1)
self.cluster_coord.join()
results = [r.fetch() for r in result_remote_values]
coordinator_metrics = _aggregate_results(coordinator_metrics, results)
expected_results = {"auc": 0.5, "mean_absolute_error": 0.5}
for metric in coordinator_metrics:
self.assertAlmostEqual(
metric.result().numpy(), expected_results[metric.name], places=5
)
def testModelAddMetricErrors(self):
class MyModel(keras.Model):
def call(self, x):
self.add_metric(
tf.cast(x >= 0, tf.float32),
aggregation="sum",
name="num_positive",
)
return tf.cast(tf.add(x, 1), tf.float32)
dataset = tf.data.Dataset.zip(
(tf.data.Dataset.range(-5, 5), tf.data.Dataset.range(-4, 6))
).batch(1)
with self.strategy.scope():
model = MyModel()
model.compile(
metrics=[keras.metrics.Accuracy()],
loss="binary_crossentropy",
pss_evaluation_shards="auto",
)
# run a single train step to compile metrics
model.fit(dataset, steps_per_epoch=1)
with self.assertRaises(ValueError):
model.evaluate(dataset, return_dict=True)
def testModelInfiniteDatasetErrors(self):
dataset = tf.data.Dataset.range(10).repeat()
with self.strategy.scope():
model = keras.Model()
model.compile(pss_evaluation_shards="auto")
with self.assertRaisesRegex(
ValueError,
"When performing exact evaluation, the dataset must "
"be finite. Make sure not to call `repeat\(\)` on your "
"dataset.",
):
model.evaluate(dataset)
def testTrainingWithVariablesCreatedInFunction(self):
        # When metrics are specified via string, they are instantiated in a
        # tf.function during the first pass of the model, when update_state
        # is called. This use case should not be affected by exact-visitation
        # guarantee support.
class MyModel(keras.Model):
@tf.function
def worker_fn(self, y_true, y_pred):
self.compiled_metrics.update_state(y_true, y_pred)
with self.strategy.scope():
model = MyModel()
model.compile(metrics=["accuracy"])
y_true_0 = tf.convert_to_tensor([[0.0], [0.0], [0.0]])
y_pred_0 = tf.convert_to_tensor([[0.0], [0.0], [1.0]])
self.cluster_coord.schedule(model.worker_fn, args=(y_true_0, y_pred_0))
y_true_1 = tf.convert_to_tensor([[0.0], [0.0], [0.0]])
y_pred_1 = tf.convert_to_tensor([[0.0], [1.0], [1.0]])
self.cluster_coord.schedule(model.worker_fn, args=(y_true_1, y_pred_1))
self.cluster_coord.join()
for metric in model.compiled_metrics.metrics:
self.assertAlmostEqual(metric.result().numpy(), 0.5)
@tf.__internal__.distribute.combinations.generate(
tf.__internal__.test.combinations.combine(
input_type=["dataset", "dataset_creator", "distributed_dataset"],
eval_in_model_fit=[True, False],
use_auto=[True, False],
custom_metric=[True, False],
)
)
def testDistributedModelEvaluation(
self, input_type, eval_in_model_fit, use_auto, custom_metric
):
# Define dataset by batch size, number of shards, and batches per shard
batch_size = 16
num_data_shards = 32
batches_per_shard = 4
num_examples = batch_size * num_data_shards * batches_per_shard
# Input dataset x: just the sequence of numbers up to the dataset size
# Input dataset y: defined such that each shard has index equal to the
# number of y_i's == True in that shard
expected_acc = sum(range(num_data_shards)) / num_examples
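        # Worked derivation (sketch): the total number of positive labels is
        # 0 + 1 + ... + (num_data_shards - 1), and the dummy model below
        # always predicts 1, so its accuracy is that sum over num_examples.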
# The predictions y_pred from this dummy model are fixed to True. This
# way we can control the expected accuracy by just modifying y.
class BinaryModel(keras.Model):
def __call__(self, x, training=False):
return tf.cast(x >= 0, tf.float32)
class CustomAccuracy(keras.metrics.Metric):
def __init__(self, name="custom_acc", dtype=None):
super().__init__(name, dtype)
self.total = self.add_weight("total", initializer="zeros")
self.count = self.add_weight("count", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
y_true = tf.cast(y_true, tf.float32)
y_pred = tf.cast(y_pred, tf.float32)
matches = tf.cast(tf.equal(y_true, y_pred), tf.float32)
count = tf.reduce_sum(matches)
self.count.assign_add(count)
total = tf.cast(tf.size(y_true), tf.float32)
self.total.assign_add(total)
def result(self):
return self.count / self.total
def reset_state(self):
self.total.assign(0)
self.count.assign(0)
def build_metric():
metric = (
CustomAccuracy() if custom_metric else keras.metrics.Accuracy()
)
return metric
dataset_fn = make_binary_dataset_fn(
num_examples, num_data_shards, batch_size
)
loss = "mae"
logging.info("Local evaluation (exact)")
model = BinaryModel()
model.compile(metrics=[build_metric()], loss=loss)
ground_truth_evaluation = model.evaluate(dataset_fn())
logging.info(
"Result local evaluation (exact): %s", ground_truth_evaluation
)
self.assertAlmostEqual(ground_truth_evaluation[1], expected_acc)
# Since outputs are always 0 or 1, MAE loss should == 1 - accuracy
self.assertAlmostEqual(ground_truth_evaluation[0], 1 - expected_acc)
logging.info("Distributed evaluation (exact)")
if use_auto:
num_shards = "auto"
else:
num_shards = 5 * self.strategy._extended._num_workers
with self.strategy.scope():
model = BinaryModel()
model.compile(
metrics=[build_metric()],
loss=loss,
pss_evaluation_shards=num_shards,
)
if input_type == "dataset":
train_dataset = dataset_fn()
val_dataset = dataset_fn()
elif input_type == "dataset_creator":
train_dataset = dataset_creator.DatasetCreator(dataset_fn)
val_dataset = dataset_creator.DatasetCreator(dataset_fn)
elif input_type == "distributed_dataset":
train_dataset = self.strategy.experimental_distribute_dataset(
dataset_fn()
)
val_dataset = self.strategy.experimental_distribute_dataset(
dataset_fn()
)
metric_name = "custom_acc" if custom_metric else "accuracy"
expected_results = {metric_name: expected_acc, "loss": 1 - expected_acc}
def kill_and_revive_in_thread(wait_secs=0.1):
def _kill_and_revive_fn():
time.sleep(wait_secs)
logging.info("Killing 2 workers")
self._cluster.kill_task("worker", 0)
self._cluster.kill_task("worker", 1)
time.sleep(1)
self._cluster.start_task("worker", 0)
self._cluster.start_task("worker", 1)
restart_thread = threading.Thread(target=_kill_and_revive_fn)
restart_thread.start()
return restart_thread
eval_results = {}
if eval_in_model_fit:
kill_and_revive_in_thread()
history = model.fit(
train_dataset,
steps_per_epoch=1,
validation_data=val_dataset,
)
logging.info(
"History: params (%r), history (%r)",
history.params,
history.history,
)
eval_results = {
metric.split("val_")[1]: val[-1]
for metric, val in history.history.items()
if metric.startswith("val_")
}
else:
# run a single train step to compile metrics
model.fit(train_dataset, steps_per_epoch=1)
kill_and_revive_in_thread()
eval_results = model.evaluate(val_dataset, return_dict=True)
eval_results = {
metric: val.numpy() for metric, val in eval_results.items()
}
for metric, val in eval_results.items():
self.assertIn(metric, expected_results)
self.assertAlmostEqual(val, expected_results[metric], places=5)
def testDistributedMulticlassWeightedEvaluation(self):
n_classes = 5
# Define dataset by batch size, number of shards, and batches per shard
batch_size = n_classes * 2
num_data_shards = 32
batches_per_shard = 4
num_examples = batch_size * num_data_shards * batches_per_shard
expected_acc = 4 / 5
class MulticlassModel(keras.Model):
def __call__(self, x, training=False):
# e.g. x = 6 -> y_pred = [0, 1, 0, 0, 0]
return tf.squeeze(
tf.one_hot(
indices=[tf.math.floormod(x, n_classes)],
depth=n_classes,
)
)
dataset_fn = make_multiclass_dataset_fn(
num_examples, num_data_shards, batch_size, n_classes
)
model = MulticlassModel()
model.compile(
metrics=[
keras.metrics.SparseCategoricalAccuracy(),
keras.metrics.SparseCategoricalCrossentropy(),
],
weighted_metrics=[keras.metrics.SparseCategoricalCrossentropy()],
loss="sparse_categorical_crossentropy",
)
eval_dataset = dataset_fn()
ground_truth_evaluation = model.evaluate(eval_dataset, return_dict=True)
self.assertAlmostEqual(
ground_truth_evaluation["sparse_categorical_accuracy"], expected_acc
)
with self.strategy.scope():
model = MulticlassModel()
model.compile(
metrics=[
keras.metrics.SparseCategoricalAccuracy(),
keras.metrics.SparseCategoricalCrossentropy(),
],
weighted_metrics=[
keras.metrics.SparseCategoricalCrossentropy()
],
loss="sparse_categorical_crossentropy",
pss_evaluation_shards=num_data_shards,
)
# run a single train step to compile metrics
train_dataset = dataset_fn()
model.fit(train_dataset, steps_per_epoch=1)
eval_results = model.evaluate(eval_dataset, return_dict=True)
eval_results = {
metric: val.numpy() for metric, val in eval_results.items()
}
for metric, val in eval_results.items():
self.assertIn(metric, ground_truth_evaluation)
self.assertAlmostEqual(
val, ground_truth_evaluation[metric], places=4
)
if __name__ == "__main__":
tf.__internal__.distribute.multi_process_runner.test_main()
| tf-keras/tf_keras/distribute/parameter_server_exact_evaluation_test.py/0 | {
"file_path": "tf-keras/tf_keras/distribute/parameter_server_exact_evaluation_test.py",
"repo_id": "tf-keras",
"token_count": 8843
} | 172 |
# Copyright 2022 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Library for map layout and corresponding tf.Variable."""
import collections
import contextlib
import re
import threading
import tensorflow.compat.v2 as tf
from tf_keras.dtensor import dtensor_api as dtensor
from tf_keras.dtensor import lazy_variable
from tf_keras.dtensor import utils
from tf_keras.engine import base_layer
# isort: off
from tensorflow.python.util.tf_export import keras_export
# We will skip the path for certain attributes when mapping the layout, e.g.
# model._self_tracked_trackables, or layer._trainable_weights/
# _non_trainable_weights, etc. Those attributes usually serve as a cache,
# and the actual variable lives somewhere else.
_KERAS_ATTRIBUTES_TO_SKIP = [
"_self_tracked_trackables",
"_trainable_weights",
"_non_trainable_weights",
"_captured_weight_regularizer",
]
_LAYOUT_MAP = threading.local()
def get_current_layout_map():
return getattr(_LAYOUT_MAP, "layout_map", None)
@keras_export("keras.dtensor.experimental.LayoutMap", v1=[])
class LayoutMap(collections.abc.MutableMapping):
"""A dict-like object that maps string to `Layout` instances.
`LayoutMap` uses a string as key and a `Layout` as value. There is a
behavior difference between a normal Python dict and this class. The string
key will be treated as a regex when retrieving the value. See the docstring
of `get` for more details.
    See below for a usage example. You can define the naming scheme
    of the `Layout`, and then retrieve the corresponding `Layout` instance.
To use the `LayoutMap` with a `Model`, please see the docstring of
`tf.keras.dtensor.experimental.layout_map_scope`.
```python
map = LayoutMap(mesh=None)
map['.*dense.*kernel'] = layout_2d
map['.*dense.*bias'] = layout_1d
map['.*conv2d.*kernel'] = layout_4d
map['.*conv2d.*bias'] = layout_1d
layout_1 = map['dense_1.kernel'] # layout_1 == layout_2d
layout_2 = map['dense_1.bias'] # layout_2 == layout_1d
layout_3 = map['dense_2.kernel'] # layout_3 == layout_2d
layout_4 = map['dense_2.bias'] # layout_4 == layout_1d
layout_5 = map['my_model/conv2d_123/kernel'] # layout_5 == layout_4d
layout_6 = map['my_model/conv2d_123/bias'] # layout_6 == layout_1d
```
Args:
mesh: An optional `Mesh` that can be used to create all replicated
layout as default when there isn't a layout found based on the input
string query.
"""
def __init__(self, mesh=None):
self._layout_map = collections.OrderedDict()
self._default_mesh = mesh
def __getitem__(self, key):
"""Retrieve the corresponding layout by the string key.
        When there isn't an exact match, all the existing keys in the layout
        map will be treated as regexes and matched against the input key
        again. The first match will be returned, based on the key insertion
        order. Returns None if no match is found.
Args:
key: the string key as the query for the layout.
Returns:
Corresponding layout based on the query.
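        Example (an illustrative sketch; `layout_2d` stands in for any
        `dtensor.Layout` instance):
        ```python
        layout_map = LayoutMap(mesh=None)
        layout_map['.*dense.*kernel'] = layout_2d
        layout_map['dense_1.kernel']  # regex match, returns layout_2d
        layout_map['conv2d_1.bias']   # no match, returns None
        ```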
"""
if key in self._layout_map:
return self._layout_map[key]
for k in self._layout_map:
if re.match(k, key):
return self._layout_map[k]
return None
def __setitem__(self, key, layout):
if key in self._layout_map:
raise ValueError(
f"{key} already exist in the LayoutMap with "
f"value {self._layout_map[key]}. Please make sure to "
"not use duplicated keys."
)
if not isinstance(layout, dtensor.Layout):
raise ValueError(
f"{layout} should be a dtensor.Layout type, got {type(layout)}"
)
self._layout_map[key] = layout
def __delitem__(self, key):
# let the dict to handle the key missing error
return self._layout_map.pop(key)
def __len__(self):
return len(self._layout_map)
def __iter__(self):
return iter(self._layout_map)
def get_default_mesh(self):
"""Return the default `Mesh` set at instance creation.
The `Mesh` can be used to create default replicated `Layout` when there
isn't a match of the input string query.
"""
return self._default_mesh
def scope(self):
"""Apply layout to all `tf.Variable` instances created under the scope.
All `tf.Variable` instances created under this scope
        will be lazily initialized first. Once they are attached as model
        or layer attributes, and there is a stable layout mapping for them,
        the variables will be reinitialized into a
        `tf.experimental.dtensor.DVariable` with the corresponding layout.
Note that the layout mapping will use object/attribute names as the
keys to map the variable to the layout.
For subclassed models, the full object/attribute name is used as the
key. For Functional/Sequential models, we use `layer.name` as
the key for the layer, followed by the attribute name. TF-Keras ensures
name uniqueness among the layers within a Functional/Sequential model.
See the following examples that show variable object names
for different TF-Keras model types:
```python
layout_map = layout_map_lib.LayoutMap(mesh=self.mesh)
layout_map['d1.kernel'] = layout_1
layout_map['d1.bias'] = layout_2
layout_map['d2.kernel'] = layout_3
layout_map['d2.bias'] = layout_4
## Subclassed model
class SubclassModel(tf.keras.Model):
def __init__(self, name=None):
super().__init__(name=name)
self.d1 = tf.keras.layers.Dense(1000)
self.d2 = tf.keras.layers.Dense(1000)
def call(self, inputs):
x = self.d1(inputs)
return self.d2(x)
with layout_map.scope():
model = SubclassModel()
inputs = tf.zeros((10, 10))
results = model(inputs)
model.d1.kernel.layout == layout_1
model.d1.bias.layout == layout_2
model.d2.kernel.layout == layout_3
model.d2.bias.layout == layout_4
## Functional model
with layout_map.scope():
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
d1.kernel.layout == layout_1
d1.bias.layout == layout_2
        d2.kernel.layout == layout_3
        d2.bias.layout == layout_4
## Sequential model
with layout_map.scope():
model = tf.keras.Sequential([
tf.keras.layers.Dense(20, name='d1', input_shape=(10,)),
tf.keras.layers.Dense(30, name='d2')
])
d1 = model.layers[0]
d2 = model.layers[1]
d1.kernel.layout == layout_1
d1.bias.layout == layout_2
        d2.kernel.layout == layout_3
        d2.bias.layout == layout_4
```
Returns:
A context that will lazily initialize all `tf.Variable` objects
within the model, with their attributed layouts.
"""
return layout_map_scope(self)
LayoutMap.get.__doc__ = LayoutMap.__getitem__.__doc__
@contextlib.contextmanager
def layout_map_scope(layout_map):
"""Apply the layout to all the tf.Variables created under the scope.
    Create a scope in which every tf.Variable created will first be lazily
    initialized, and then reinitialized later with the proper layout once the
    object path in the model is stable/finalized.
Note that the layout mapping will use the object/attribute names as the key
to map the variable against the layout.
For subclassed models, the full object/attribute name is used as the key.
For Functional/Sequential models, since the layers within the model do not
get assigned to a meaningful attribute, we use `layer.name` as the key for
the layer, followed by the attribute name. TF-Keras ensures name uniqueness
among the layers in all Functional/Sequential models.
See the following examples that show the variable object names
for different TF-Keras model types:
```python
layout_map = layout_map_lib.LayoutMap(mesh=self.mesh)
layout_map['d1.kernel'] = layout_1
layout_map['d1.bias'] = layout_2
layout_map['d2.kernel'] = layout_3
layout_map['d2.bias'] = layout_4
## Subclassed model
class SubclassModel(tf.keras.Model):
def __init__(self, name=None):
super().__init__(name=name)
self.d1 = tf.keras.layers.Dense(1000)
self.d2 = tf.keras.layers.Dense(1000)
def call(self, inputs):
x = self.d1(inputs)
return self.d2(x)
with layout_map_scope(layout_map):
model = SubclassModel()
# Triggering the creation of weights within or outside of the scope works
inputs = tf.zeros((10, 10))
results = model(inputs)
model.d1.kernel.layout == layout_1
model.d1.bias.layout == layout_2
model.d2.kernel.layout == layout_3
model.d2.bias.layout == layout_4
## Functional model
with layout_map_scope(layout_map):
inputs = tf.keras.Input((10,), batch_size=10)
x = tf.keras.layers.Dense(20, name='d1')(inputs)
output = tf.keras.layers.Dense(30, name='d2')(x)
model = tf.keras.Model(inputs, output)
d1 = model.layers[1]
d2 = model.layers[2]
d1.kernel.layout == layout_1
d1.bias.layout == layout_2
    d2.kernel.layout == layout_3
    d2.bias.layout == layout_4
## Sequential model
with layout_map_scope(layout_map):
model = tf.keras.Sequential([
tf.keras.layers.Dense(20, name='d1', input_shape=(10,)),
tf.keras.layers.Dense(30, name='d2')
])
d1 = model.layers[0]
d2 = model.layers[1]
d1.kernel.layout == layout_1
d1.bias.layout == layout_2
    d2.kernel.layout == layout_3
    d2.bias.layout == layout_4
```
Args:
layout_map: a LayoutMap which contains the variable_object_path (string)
-> Layout. When a layout is not found for the variable, a default all
replicated layout will be created for the variable.
Yields:
A context that will lazily initialize all `tf.Variable` objects
within the model, with their attributed layouts.
"""
previous_layout_map = get_current_layout_map()
global _LAYOUT_MAP
_LAYOUT_MAP.layout_map = layout_map
with lazy_variable.lazy_init_scope():
try:
yield
finally:
_LAYOUT_MAP.layout_map = previous_layout_map
def _map_subclass_model_variable(model, layout_map):
"""Map/Replace LazyInitVariable for subclass model."""
lazy_init_variable_to_tf_variable_map = {}
    # Note that model._flatten is a method from tf.Module, and it may return
    # duplicated items (since some items are reachable via different paths).
for path, variable in model._flatten(
predicate=_is_lazy_init_variable,
with_path=True,
):
        # Note that path is a tuple that contains strings and ints, e.g.:
# ('d1', '_trainable_weights', 0) maps to model.d1._trainable_weights[0]
        if any(a in path for a in _KERAS_ATTRIBUTES_TO_SKIP):
continue
# Convert all the ints to string and join with .
object_path = ".".join([str(item) for item in path])
new_variable = _create_dvariable(layout_map, object_path, variable)
_set_object_by_path(model, path, new_variable)
lazy_init_variable_to_tf_variable_map[id(variable)] = new_variable
for layer in model._flatten(
predicate=lambda o: isinstance(o, base_layer.Layer)
):
_config_dvariable_regularization(
layer, lazy_init_variable_to_tf_variable_map
)
    # After we have replaced all the variables, make sure all the cached
    # attributes hold the new variable rather than the old LazyInitVariable.
for path, variable in model._flatten(
predicate=_is_lazy_init_variable,
with_path=True,
):
tf_variable = lazy_init_variable_to_tf_variable_map[id(variable)]
_set_object_by_path(model, path, tf_variable)
_init_state_variable_for_rng(model, layout_map)
_update_trackable_reference(model, lazy_init_variable_to_tf_variable_map)
return model
def _map_functional_model_variable(model, layout_map):
"""Map/Replace LazyInitVariable for functional/sequential model."""
lazy_init_variable_to_tf_variable_map = {}
for layer in model.layers:
        # Note that layer names are unique within a functional/sequential
        # model; when a layer name is not provided, TF-Keras auto-generates
        # one based on the class name.
layer_name = layer.name
for path, variable in layer._flatten(
predicate=_is_lazy_init_variable,
with_path=True,
):
            # Note that path is a tuple that contains strings and ints, e.g.:
# ('d1', '_trainable_weights', 0) maps to
# model.d1._trainable_weights[0]
            if any(a in path for a in _KERAS_ATTRIBUTES_TO_SKIP):
continue
# Convert all the ints to string and join with .
object_path = ".".join([str(item) for item in path])
# Also attach the layer name
object_path = layer_name + "." + object_path
new_variable = _create_dvariable(layout_map, object_path, variable)
_set_object_by_path(layer, path, new_variable)
lazy_init_variable_to_tf_variable_map[id(variable)] = new_variable
_config_dvariable_regularization(
layer, lazy_init_variable_to_tf_variable_map
)
        # After we have replaced all the variables, make sure all the cached
        # attributes hold the new variable rather than the old
        # LazyInitVariable.
for path, variable in layer._flatten(
predicate=_is_lazy_init_variable,
with_path=True,
):
tf_variable = lazy_init_variable_to_tf_variable_map[id(variable)]
_set_object_by_path(layer, path, tf_variable)
_init_state_variable_for_rng(model, layout_map)
_update_trackable_reference(model, lazy_init_variable_to_tf_variable_map)
return model
def _init_state_variable_for_rng(model, layout_map):
"""Init the state variable in tf.ranodm.Generator.
Since the BaseRandomLayer in keras explicitly untrack the
tf.random.Generator, the variable in it will stay as LazyInitVariable, which
cause runtime error if we don't replace them with proper DVariable. Since
user usually are not aware the existence of those variable, we will just
give them replicated layout since they are tiny.
Args:
model: the model whose layers will be checked to find the
BaseRandomLayers.
layout_map: used to get the default mesh information to create DVariable.
"""
for l in model._flatten(
predicate=lambda o: isinstance(o, base_layer.BaseRandomLayer)
):
keras_generator = l._random_generator
if keras_generator._built and keras_generator._generator is None:
raise ValueError(
"Keras is expected to use tf.random.Generator when using "
"DTensor API. Please call "
"`tf.keras.backend.experimental.enable_tf_random_generator` at "
"the beginning of your program."
)
if hasattr(keras_generator, "_generator") and _is_lazy_init_variable(
keras_generator._generator._state_var
):
# Replace it with DVariable
keras_generator._generator._state_var = _create_dvariable(
layout_map, "", keras_generator._generator._state_var
)
else:
            # When the keras_generator is not built yet, call the init
            # function under the DTensor device so all its variables are
            # created with the default replicated layout.
with dtensor.default_mesh(layout_map.get_default_mesh()):
keras_generator._maybe_init()
def _config_dvariable_regularization(
layer, lazy_init_variable_to_tf_variable_map
):
"""Update the weights regularizer for newly created `DVariable`.
The weight regularization usually happens when `layer.add_weight()` is
called, at which point the library will first create a `LazyInitVariable`,
    and then replace it with a `DVariable`. We defer the creation of those
    losses until the DVariable is created.
See `layer._captured_weight_regularizer` for more details.
Args:
layer: the layer instance for DVariable regularization config.
lazy_init_variable_to_tf_variable_map: the dict between LazyInitVariable
ID and newly created DVariable.
"""
    for name, variable, regularizer in layer._captured_weight_regularizer:
if not _is_lazy_init_variable(variable):
raise ValueError(
"Expect the regularization loss are created from "
f"LazyInitVariable, got {variable}"
)
d_variable = lazy_init_variable_to_tf_variable_map[id(variable)]
        layer._handle_weight_regularization(name, d_variable, regularizer)
# After that, we should cleanup `layer._captured_weight_regularizer`
layer._captured_weight_regularizer = []
def _create_dvariable(layout_map, object_path, variable):
"""Create a new variable instead of using the LazyInitVariable.
    We choose to do this because, even though a LazyInitVariable might behave
    like a normal tf.Variable/DVariable, relying on it is not future proof
    against any new changes to the variable class. A LazyInitVariable would
    also fail instance type checks in Python, which could affect user code
    that filters for variables based on their type.
Args:
layout_map: a LayoutMap which contains the variable_object_path (string)
-> Layout.
object_path: string, the object attribute path for the variable.
variable: LazyInitVariable which will be replaced by the newly created
tf.Variable.
Returns:
A new tf.Variable with correct layout information.
"""
    # TODO(b/228209108): Revisit this in the future and see if we can just
    # reuse the LazyInitVariable rather than creating a new tf.Variable
    # instance.
layout = layout_map[object_path]
if layout is None:
variable_rank = variable.shape.rank
layout = dtensor.Layout.replicated(
mesh=layout_map.get_default_mesh(), rank=variable_rank
)
init_val = variable._initial_value
if callable(init_val):
with lazy_variable.disable_init_variable_creator():
init_val = utils.call_with_layout(init_val, layout)
else:
        # The init value is probably already created as a tensor; we just
        # copy it to the mesh and give it a proper layout.
init_val = dtensor.copy_to_mesh(init_val, layout)
    # Use the original variable name for the new DVariable creation. TF adds
    # a ":0" suffix to it, which we strip here.
variable_name = variable.name
if variable_name.endswith(":0"):
variable_name = variable_name[:-2]
new_variable = dtensor.DVariable(
init_val, trainable=variable.trainable, name=variable_name
)
return new_variable
def _set_object_by_path(object_to_set, path, value):
"""Set the attribute of instance to the object.
Args:
object_to_set: the instance whose attribute should be set.
path: the tuple/list of string and ints, representing the attribute names.
Int means that the attribute to set is a item a list.
value: the value of the attribute.
"""
for i, attr_name in enumerate(path):
if i == len(path) - 1:
# We found the actual attribute to set
if isinstance(attr_name, int):
# This means we are trying to set an element in the array, make
# sure the instance is array like object.
object_to_set[attr_name] = value
else:
setattr(object_to_set, attr_name, value)
else:
if isinstance(attr_name, int):
object_to_set = object_to_set[attr_name]
else:
object_to_set = getattr(object_to_set, attr_name)
# TODO(b/228209108): Revisit this after we can reinit LazyInitVariable.
def _update_trackable_reference(model, lazy_init_variable_to_tf_variable_map):
"""Update the trackable object references for the model.
    Note that this method is only needed because of a corner case for model
    checkpointing, where the checkpoint dependencies could accidentally
    capture a LazyInitVariable that is not visible in the model attribute
    graph itself.
Args:
      model: the keras model instance whose checkpoint dependencies will be
        examined.
lazy_init_variable_to_tf_variable_map: the dict between LazyInitVariable
ID and newly created DVariable.
"""
# See b/234621758 for more details.
object_graph = tf.__internal__.tracking.ObjectGraphView(model)
trackables, _ = object_graph.breadth_first_traversal()
for trackable in trackables:
for ref_name, ref in trackable._trackable_children().items():
if _is_lazy_init_variable(ref):
# Replacing the LazyVariable with DVariable.
trackable._track_trackable(
lazy_init_variable_to_tf_variable_map[id(ref)],
ref_name,
overwrite=True,
)
def _is_lazy_init_variable(obj):
return isinstance(obj, lazy_variable.LazyInitVariable)
| tf-keras/tf_keras/dtensor/layout_map.py/0 | {
"file_path": "tf-keras/tf_keras/dtensor/layout_map.py",
"repo_id": "tf-keras",
"token_count": 8889
} | 173 |
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the 'License');
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an 'AS IS' BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import numpy as np
import tensorflow.compat.v2 as tf
import tf_keras as keras
from tf_keras import backend
from tf_keras.engine import base_layer_utils
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
class BaseLayerUtilsTest(test_combinations.TestCase):
def test_is_subclassed(self):
self.assertFalse(base_layer_utils.is_subclassed(keras.layers.Dense(3)))
subclass = test_utils.get_small_subclass_mlp(3, 2)
self.assertTrue(base_layer_utils.is_subclassed(subclass))
@test_combinations.generate(
    test_combinations.combine(mode=["graph", "eager"])
)
class TrackableWeightHandlerTest(test_combinations.TestCase):
def get_table_handler(self):
        # Note: There is some repetition in these tests' setup. However,
        # TensorFlow does not play nicely with a separate setUp() call
        # (causing errors related to graph building), so we use an explicitly
        # called setup method instead of a setUp() override.
table = tf.lookup.experimental.MutableHashTable(
key_dtype=tf.string, value_dtype=tf.int32, default_value=0
)
return base_layer_utils.TrackableWeightHandler(table)
def test_get_num_tensors(self):
table_handler = self.get_table_handler()
self.assertEqual(2, table_handler.num_tensors)
def test_get_and_set_weights(self):
table_handler = self.get_table_handler()
table_data = {b"a": 1, b"b": 2, b"c": 3}
table_handler.set_weights(
[list(table_data.keys()), list(table_data.values())]
)
weights = backend.batch_get_value(table_handler.get_tensors())
weight_data = {key: value for key, value in zip(weights[0], weights[1])}
self.assertDictEqual(table_data, weight_data)
def test_get_and_set_weights_does_not_add_ops(self):
table_handler = self.get_table_handler()
table_data = {b"a": 1, b"b": 2, b"c": 3}
table_handler.set_weights(
[list(table_data.keys()), list(table_data.values())]
)
_ = backend.batch_get_value(table_handler.get_tensors())
backend.get_session().graph.finalize()
table_handler.set_weights(
[list(table_data.keys()), list(table_data.values())]
)
_ = backend.batch_get_value(table_handler.get_tensors())
@test_combinations.generate(test_combinations.combine(mode=["eager"]))
class OpLayerTest(test_combinations.TestCase):
def test_tensor_op_layer(self):
int_values = keras.Input(shape=(2,), dtype=tf.int32)
float_values = tf.cast(int_values, tf.float32)
model = keras.Model(int_values, float_values)
model.compile(loss="mse")
input_data = np.array([[1, 2], [3, 4]], dtype=np.int32)
expected = [[1.0, 2.0], [3.0, 4.0]]
output = model.predict(input_data)
self.assertAllClose(expected, output)
def test_ragged_op_layer_keras_tensors(self):
int_values = keras.Input(shape=(None,), dtype=tf.int32, ragged=True)
float_values = tf.cast(int_values, tf.float32)
model = keras.Model(int_values, float_values)
model.compile(loss="mse")
input_data = tf.ragged.constant([[1, 2], [3, 4]], dtype=np.int32)
expected = [[1.0, 2.0], [3.0, 4.0]]
output = model.predict(input_data)
self.assertIsInstance(output, tf.RaggedTensor)
self.assertAllClose(expected, output)
def test_sparse_op_layer_keras_tensors(self):
int_values = keras.Input(shape=(None,), dtype=tf.int32, sparse=True)
float_values = tf.cast(int_values, tf.float32)
_ = keras.Model(int_values, float_values)
model = keras.Model(int_values, float_values)
model.compile(loss="mse")
input_data = tf.sparse.from_dense(
np.array([[1, 2], [3, 4]], dtype=np.int32)
)
expected = [[1.0, 2.0], [3.0, 4.0]]
output = model.predict(input_data)
self.assertIsInstance(output, tf.SparseTensor)
self.assertAllClose(expected, tf.sparse.to_dense(output))
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/engine/base_layer_utils_test.py/0 | {
"file_path": "tf-keras/tf_keras/engine/base_layer_utils_test.py",
"repo_id": "tf-keras",
"token_count": 1974
} | 174 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Input layer code (`Input` and `InputLayer`)."""
import tensorflow.compat.v2 as tf
from tf_keras import backend
from tf_keras.distribute import distributed_training_utils
from tf_keras.engine import base_layer
from tf_keras.engine import keras_tensor
from tf_keras.engine import node as node_module
from tf_keras.saving import serialization_lib
from tf_keras.saving.legacy.saved_model import layer_serialization
from tf_keras.utils import tf_utils
from tf_keras.utils import traceback_utils
# isort: off
from tensorflow.python.util.tf_export import keras_export
def _assert_other_arg_none(arg_name, arg):
if arg is not None:
raise ValueError(
"When `type_spec` is not None, all other args "
"except `name` must be None, "
"but %s is not None." % arg_name
)
@keras_export("keras.layers.InputLayer")
class InputLayer(base_layer.Layer):
"""Layer to be used as an entry point into a Network (a graph of layers).
It can either wrap an existing tensor (pass an `input_tensor` argument)
or create a placeholder tensor (pass arguments `input_shape`, and
optionally, `dtype`).
    It is generally recommended to use the TF-Keras Functional model via
    `Input` (which creates an `InputLayer`) without directly using
    `InputLayer`.
When using `InputLayer` with the TF-Keras Sequential model, it can be
skipped by moving the `input_shape` parameter to the first layer after the
`InputLayer`.
This class can create placeholders for `tf.Tensors`, `tf.SparseTensors`, and
`tf.RaggedTensors` by choosing `sparse=True` or `ragged=True`. Note that
`sparse` and `ragged` can't be configured to `True` at the same time.
Usage:
```python
# With explicit InputLayer.
model = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=(4,)),
tf.keras.layers.Dense(8)])
model.compile(tf.keras.optimizers.RMSprop(0.001), loss='mse')
model.fit(np.zeros((10, 4)),
np.ones((10, 8)))
    # Without InputLayer; let the first layer have the input_shape.
    # TF-Keras will add an input for the model behind the scenes.
model = tf.keras.Sequential([
tf.keras.layers.Dense(8, input_shape=(4,))])
model.compile(tf.keras.optimizers.RMSprop(0.001), loss='mse')
model.fit(np.zeros((10, 4)),
np.ones((10, 8)))
```
Args:
input_shape: Shape tuple (not including the batch axis), or
`TensorShape` instance (not including the batch axis).
batch_size: Optional input batch size (integer or `None`).
dtype: Optional datatype of the input. When not provided, the Keras
default `float` type will be used.
input_tensor: Optional tensor to use as layer input. If set, the layer
will use the `tf.TypeSpec` of this tensor rather
than creating a new placeholder tensor.
sparse: Boolean, whether the placeholder created is meant to be sparse.
Defaults to `False`.
ragged: Boolean, whether the placeholder created is meant to be ragged.
In this case, values of `None` in the `shape` argument represent
ragged dimensions. For more information about `tf.RaggedTensor`, see
[this guide](https://www.tensorflow.org/guide/ragged_tensor).
Defaults to `False`.
type_spec: A `tf.TypeSpec` object to create Input from. This
`tf.TypeSpec` represents the entire batch. When provided, all other
args except name must be `None`.
name: Optional name of the layer (string).
"""
@traceback_utils.filter_traceback
def __init__(
self,
input_shape=None,
batch_size=None,
dtype=None,
input_tensor=None,
sparse=None,
name=None,
ragged=None,
type_spec=None,
**kwargs,
):
self._init_input_shape = input_shape
self._init_batch_size = batch_size
self._init_dtype = dtype
self._init_sparse = sparse
self._init_ragged = ragged
self._init_type_spec = type_spec
strategy = tf.distribute.get_strategy()
if (
strategy
and batch_size is not None
and distributed_training_utils.global_batch_size_supported(strategy)
):
if batch_size % strategy.num_replicas_in_sync != 0:
raise ValueError(
"The `batch_size` argument ({}) must be divisible by "
"the number of replicas ({})".format(
batch_size, strategy.num_replicas_in_sync
)
)
batch_size = batch_size // strategy.num_replicas_in_sync
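            # Added note: `batch_size` is interpreted here as the global
            # batch size, so e.g. batch_size=64 with 8 replicas yields a
            # per-replica placeholder batch of 8.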
if "batch_input_shape" in kwargs:
batch_input_shape = kwargs.pop("batch_input_shape")
if input_shape and batch_input_shape:
raise ValueError(
"Only provide the input_shape OR "
"batch_input_shape argument to "
"InputLayer, not both at the same time."
)
# Set the input shape and batch size from the batch_input_shape.
# Note that batch_input_shape can be None (unknown rank) or []
# (scalar), in which case the batch size must be None.
if batch_input_shape:
batch_size = batch_input_shape[0]
input_shape = batch_input_shape[1:]
if kwargs:
raise ValueError(
f"Unrecognized keyword arguments: {list(kwargs.keys())}"
)
if sparse and ragged:
raise ValueError(
"Cannot set both sparse and ragged to True in a TF-Keras input."
)
if not name:
prefix = "input"
name = prefix + "_" + str(backend.get_uid(prefix))
if not dtype:
if input_tensor is None:
dtype = backend.floatx()
else:
dtype = backend.dtype(input_tensor)
elif input_tensor is not None and input_tensor.dtype != dtype:
raise ValueError(
"`input_tensor.dtype` differs from `dtype`. Received: "
f"input_tensor.dtype={input_tensor.dtype} "
f"but expected dtype={dtype}"
)
super().__init__(dtype=dtype, name=name)
self.built = True
self.sparse = True if sparse else False
self.ragged = True if ragged else False
self.batch_size = batch_size
self.supports_masking = True
if isinstance(input_shape, tf.TensorShape):
input_shape = tuple(input_shape.as_list())
elif isinstance(input_shape, int):
input_shape = (input_shape,)
if type_spec is not None:
args_that_must_be_none = [
("(input_)shape", self._init_input_shape),
("batch_size", self._init_batch_size),
("dtype", self._init_dtype),
("input_tensor", input_tensor),
("sparse", self._init_sparse),
("ragged", self._init_ragged),
]
for arg_name, arg in args_that_must_be_none:
_assert_other_arg_none(arg_name, arg)
if not tf.compat.v1.executing_eagerly_outside_functions():
raise ValueError(
"Creating TF-Keras inputs from a type_spec is only "
"supported when eager execution is enabled."
)
# Needed for type_spec deserialization since TypeSpec objects
# are not Keras-native (not automatically deserialized).
if isinstance(type_spec, dict):
type_spec = serialization_lib.deserialize_keras_object(
type_spec
)
input_tensor = keras_tensor.keras_tensor_from_type_spec(type_spec)
if isinstance(input_tensor, keras_tensor.SparseKerasTensor):
self.sparse = True
if isinstance(input_tensor, keras_tensor.RaggedKerasTensor):
self.ragged = True
self.is_placeholder = True
try:
self._batch_input_shape = tuple(input_tensor.shape.as_list())
except ValueError:
# If the shape cannot be represented as a tuple (e.g. unknown
# rank)
self._batch_input_shape = None
elif input_tensor is None:
if input_shape is not None:
batch_input_shape = (batch_size,) + tuple(input_shape)
else:
batch_input_shape = None
graph = backend.get_graph()
with graph.as_default():
input_tensor = backend.placeholder(
shape=batch_input_shape,
dtype=dtype,
name=self.name,
sparse=sparse,
ragged=ragged,
)
self.is_placeholder = True
self._batch_input_shape = batch_input_shape
else:
if tf.compat.v1.executing_eagerly_outside_functions():
if not isinstance(input_tensor, keras_tensor.KerasTensor):
input_tensor = keras_tensor.keras_tensor_from_tensor(
input_tensor
)
else:
if not tf_utils.is_symbolic_tensor(input_tensor):
raise ValueError(
"You should not pass an EagerTensor to `Input`. "
"For example, instead of creating an "
"`InputLayer`, you should instantiate your model "
"and directly call it on your input."
)
self.is_placeholder = False
try:
self._batch_input_shape = tuple(input_tensor.shape.as_list())
except ValueError:
# If the shape cannot be represented as a tuple (e.g. unknown
# rank)
self._batch_input_shape = None
# Create an input node.
input_tensor._keras_mask = None
node_module.Node(layer=self, outputs=input_tensor)
# Store type spec
if isinstance(input_tensor, keras_tensor.KerasTensor) or (
tf_utils.is_extension_type(input_tensor)
):
self._type_spec = input_tensor._type_spec
else:
self._type_spec = tf.TensorSpec(
shape=input_tensor.shape,
dtype=input_tensor.dtype,
name=self.name,
)
def get_config(self):
if self._init_type_spec is not None:
config = {"name": self.name, "type_spec": self._init_type_spec}
else:
config = {
"batch_input_shape": self._batch_input_shape,
"dtype": self.dtype,
"sparse": self.sparse,
"ragged": self.ragged,
"name": self.name,
}
return config
@property
def _trackable_saved_model_saver(self):
return layer_serialization.InputLayerSavedModelSaver(self)
@keras_export("keras.Input", "keras.layers.Input")
@traceback_utils.filter_traceback
def Input(
shape=None,
batch_size=None,
name=None,
dtype=None,
sparse=None,
tensor=None,
ragged=None,
type_spec=None,
**kwargs,
):
"""`Input()` is used to instantiate a TF-Keras tensor.
A TF-Keras tensor is a symbolic tensor-like object, which we augment with
certain attributes that allow us to build a TF-Keras model just by knowing
the inputs and outputs of the model.
For instance, if `a`, `b` and `c` are TF-Keras tensors,
it becomes possible to do:
`model = Model(input=[a, b], output=c)`
Args:
shape: A shape tuple (integers), not including the batch size.
For instance, `shape=(32,)` indicates that the expected input
will be batches of 32-dimensional vectors. Elements of this tuple
can be None; 'None' elements represent dimensions where the shape is
not known.
batch_size: optional static batch size (integer).
name: An optional name string for the layer.
Should be unique in a model (do not reuse the same name twice).
It will be autogenerated if it isn't provided.
dtype: The data type expected by the input, as a string
(`float32`, `float64`, `int32`...)
sparse: A boolean specifying whether the placeholder to be created is
sparse. Only one of 'ragged' and 'sparse' can be True. Note that,
if `sparse` is False, sparse tensors can still be passed into the
input - they will be densified with a default value of 0.
tensor: Optional existing tensor to wrap into the `Input` layer.
If set, the layer will use the `tf.TypeSpec` of this tensor rather
than creating a new placeholder tensor.
ragged: A boolean specifying whether the placeholder to be created is
ragged. Only one of 'ragged' and 'sparse' can be True. In this case,
values of 'None' in the 'shape' argument represent ragged
dimensions. For more information about RaggedTensors, see
[this guide](https://www.tensorflow.org/guide/ragged_tensor).
type_spec: A `tf.TypeSpec` object to create the input placeholder from.
When provided, all other args except name must be None.
**kwargs: deprecated arguments support. Supports `batch_shape` and
`batch_input_shape`.
Returns:
A `tensor`.
Example:
```python
# this is a logistic regression in Keras
x = Input(shape=(32,))
y = Dense(16, activation='softmax')(x)
model = Model(x, y)
```
Note that even if eager execution is enabled,
`Input` produces a symbolic tensor-like object (i.e. a placeholder).
This symbolic tensor-like object can be used with lower-level
TensorFlow ops that take tensors as inputs, as such:
```python
x = Input(shape=(32,))
y = tf.square(x) # This op will be treated like a layer
model = Model(x, y)
```
    (This behavior does not work for higher-order TensorFlow APIs such as
    control flow, or when the tensor is watched directly by a
    `tf.GradientTape`.)
However, the resulting model will not track any variables that were
used as inputs to TensorFlow ops. All variable usages must happen within
TF-Keras layers to make sure they will be tracked by the model's weights.
The TF-Keras Input can also create a placeholder from an arbitrary
`tf.TypeSpec`, e.g:
```python
x = Input(type_spec=tf.RaggedTensorSpec(shape=[None, None],
dtype=tf.float32, ragged_rank=1))
y = x.values
model = Model(x, y)
```
When passing an arbitrary `tf.TypeSpec`, it must represent the signature of
an entire batch instead of just one example.
Raises:
ValueError: If both `sparse` and `ragged` are provided.
ValueError: If both `shape` and (`batch_input_shape` or `batch_shape`) are
provided.
ValueError: If `shape`, `tensor` and `type_spec` are None.
ValueError: If arguments besides `type_spec` are non-None while
`type_spec` is passed.
ValueError: if any unrecognized parameters are provided.
"""
if sparse and ragged:
raise ValueError(
"Cannot set both `sparse` and `ragged` to `True` in a "
"Keras `Input`."
)
has_spec_name = (
name is None and type_spec is not None and hasattr(type_spec, "name")
)
if has_spec_name:
name = type_spec.name
input_layer_config = {
"name": name,
"dtype": dtype,
"sparse": sparse,
"ragged": ragged,
"input_tensor": tensor,
"type_spec": type_spec,
}
batch_input_shape = kwargs.pop(
"batch_input_shape", kwargs.pop("batch_shape", None)
)
if shape is not None and batch_input_shape is not None:
raise ValueError(
"Only provide the `shape` OR `batch_input_shape` argument "
"to Input, not both at the same time."
)
if (
batch_input_shape is None
and shape is None
and tensor is None
and type_spec is None
):
raise ValueError(
"Please provide to Input a `shape` "
"or a `tensor` or a `type_spec` argument. Note that "
"`shape` does not include the batch "
"dimension."
)
if kwargs:
raise ValueError(
f"Unrecognized keyword arguments: {list(kwargs.keys())}"
)
if batch_input_shape:
shape = batch_input_shape[1:]
input_layer_config.update({"batch_input_shape": batch_input_shape})
else:
input_layer_config.update(
{"batch_size": batch_size, "input_shape": shape}
)
input_layer = InputLayer(**input_layer_config)
# Return tensor including `_keras_history`.
# Note that in this case train_output and test_output are the same pointer.
outputs = input_layer._inbound_nodes[0].outputs
if isinstance(outputs, list) and len(outputs) == 1:
output = outputs[0]
else:
output = outputs
if has_spec_name and hasattr(output, "_name"):
output._name = input_layer.name
return output
| tf-keras/tf_keras/engine/input_layer.py/0 | {
"file_path": "tf-keras/tf_keras/engine/input_layer.py",
"repo_id": "tf-keras",
"token_count": 8047
} | 175 |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for training routines."""
import io
import sys
import numpy as np
import tensorflow.compat.v2 as tf
import tf_keras as keras
from tf_keras import callbacks
from tf_keras import metrics as metrics_module
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
from tf_keras.utils import io_utils
# isort: off
from tensorflow.python.platform import tf_logging as logging
class BatchCounterCallback(callbacks.Callback):
def __init__(self):
self.batch_begin_count = 0
self.batch_end_count = 0
def on_batch_begin(self, *args, **kwargs):
self.batch_begin_count += 1
def on_batch_end(self, *args, **kwargs):
self.batch_end_count += 1
class TestTrainingWithDataset(test_combinations.TestCase):
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes
def test_calling_model_on_same_dataset(self):
model = test_utils.get_small_mlp(1, 4, input_dim=3)
optimizer = "rmsprop"
loss = "mse"
metrics = ["mae"]
model.compile(
optimizer,
loss,
metrics=metrics,
run_eagerly=test_utils.should_run_eagerly(),
)
inputs = np.zeros((10, 3), np.float32)
targets = np.zeros((10, 4), np.float32)
dataset = tf.data.Dataset.from_tensor_slices((inputs, targets))
dataset = dataset.repeat(100)
dataset = dataset.batch(10)
# Call fit with validation data
model.fit(
dataset,
epochs=1,
steps_per_epoch=2,
verbose=0,
validation_data=dataset,
validation_steps=2,
)
model.fit(
dataset,
epochs=1,
steps_per_epoch=2,
verbose=0,
validation_data=dataset,
validation_steps=2,
)
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes
def test_training_and_eval_methods_on_dataset(self):
model = test_utils.get_small_mlp(1, 4, input_dim=3)
optimizer = "rmsprop"
loss = "mse"
metrics = ["mae", metrics_module.CategoricalAccuracy()]
model.compile(
optimizer,
loss,
metrics=metrics,
run_eagerly=test_utils.should_run_eagerly(),
)
inputs = np.zeros((10, 3), np.float32)
targets = np.zeros((10, 4), np.float32)
dataset = tf.data.Dataset.from_tensor_slices((inputs, targets))
dataset = dataset.repeat() # Infinite dataset.
dataset = dataset.batch(10)
model.fit(dataset, epochs=1, steps_per_epoch=2, verbose=1)
model.evaluate(dataset, steps=2, verbose=1)
model.predict(dataset, steps=2)
# Test with validation data
model.fit(
dataset,
epochs=1,
steps_per_epoch=2,
verbose=0,
validation_data=dataset,
validation_steps=2,
)
# Test with validation split
with self.assertRaises(ValueError):
model.fit(
dataset,
epochs=1,
steps_per_epoch=2,
verbose=0,
validation_split=0.5,
validation_steps=2,
)
# Test with sample weight.
sample_weight = np.random.random((10,))
with self.assertRaisesRegex(
ValueError, r"`sample_weight` argument is not supported .+dataset"
):
model.fit(
dataset,
epochs=1,
steps_per_epoch=2,
verbose=0,
sample_weight=sample_weight,
)
with self.assertRaisesRegex(
ValueError,
"(you should not specify a target)|"
"(`y` argument is not supported when using dataset as input.)",
):
model.fit(dataset, dataset, epochs=1, steps_per_epoch=2, verbose=0)
# With an infinite dataset, `steps_per_epoch`/`steps` argument is
# required.
with self.assertRaises(ValueError):
model.fit(dataset, epochs=1, verbose=0)
with self.assertRaises(ValueError):
model.evaluate(dataset, verbose=0)
with self.assertRaises(ValueError):
model.predict(dataset, verbose=0)
@test_combinations.run_with_all_model_types(exclude_models="sequential")
@test_combinations.run_all_keras_modes
def test_training_and_eval_methods_on_multi_input_output_dataset(self):
input_a = keras.layers.Input(shape=(3,), name="input_1")
input_b = keras.layers.Input(shape=(3,), name="input_2")
dense = keras.layers.Dense(4, name="dense")
dropout = keras.layers.Dropout(0.5, name="dropout")
branch_a = [input_a, dense]
branch_b = [input_b, dense, dropout]
model = test_utils.get_multi_io_model(branch_a, branch_b)
model.compile(
optimizer="rmsprop",
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
input_a_np = np.random.random((10, 3)).astype(dtype=np.float32)
input_b_np = np.random.random((10, 3)).astype(dtype=np.float32)
output_d_np = np.random.random((10, 4)).astype(dtype=np.float32)
output_e_np = np.random.random((10, 4)).astype(dtype=np.float32)
# Test with tuples
dataset_tuple = tf.data.Dataset.from_tensor_slices(
((input_a_np, input_b_np), (output_d_np, output_e_np))
)
dataset_tuple = dataset_tuple.repeat(100)
dataset_tuple = dataset_tuple.batch(10)
model.fit(dataset_tuple, epochs=1, steps_per_epoch=2, verbose=1)
model.evaluate(dataset_tuple, steps=2, verbose=1)
# Test with dict
input_dict = {"input_1": input_a_np, "input_2": input_b_np}
if test_utils.get_model_type() == "subclass":
output_dict = {"output_1": output_d_np, "output_2": output_e_np}
else:
output_dict = {"dense": output_d_np, "dropout": output_e_np}
dataset_dict = tf.data.Dataset.from_tensor_slices(
(input_dict, output_dict)
)
dataset_dict = dataset_dict.repeat(100)
dataset_dict = dataset_dict.batch(10)
model.fit(dataset_dict, epochs=1, steps_per_epoch=2, verbose=1)
model.evaluate(dataset_dict, steps=2, verbose=1)
predict_dataset_dict = tf.data.Dataset.from_tensor_slices(input_dict)
predict_dataset_dict = predict_dataset_dict.repeat(100)
predict_dataset_dict = predict_dataset_dict.batch(10)
model.predict(predict_dataset_dict, steps=1)
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes
def test_dataset_with_sample_weights(self):
model = test_utils.get_small_mlp(1, 4, input_dim=3)
optimizer = "rmsprop"
loss = "mse"
metrics = ["mae", metrics_module.CategoricalAccuracy()]
model.compile(
optimizer,
loss,
metrics=metrics,
run_eagerly=test_utils.should_run_eagerly(),
)
inputs = np.zeros((10, 3), np.float32)
targets = np.zeros((10, 4), np.float32)
sample_weights = np.ones((10), np.float32)
dataset = tf.data.Dataset.from_tensor_slices(
(inputs, targets, sample_weights)
)
dataset = dataset.repeat(100)
dataset = dataset.batch(10)
model.fit(dataset, epochs=1, steps_per_epoch=2, verbose=1)
model.evaluate(dataset, steps=2, verbose=1)
model.predict(dataset, steps=2)
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes
def test_dataset_with_sample_weights_correctness(self):
x = keras.layers.Input(shape=(1,), name="input")
y = keras.layers.Dense(
1, kernel_initializer="ones", bias_initializer="zeros", name="dense"
)(x)
model = keras.Model(x, y)
optimizer = "rmsprop"
loss = "mse"
model.compile(optimizer, loss)
inputs = np.array([[0], [1], [2], [3]], np.float32)
targets = np.array([[2], [4], [6], [8]], np.float32)
sample_weights = np.array([0.25, 0.5, 0.75, 1], np.float32)
ds = tf.data.Dataset.from_tensor_slices(
(inputs, targets, sample_weights)
).batch(2)
result = model.evaluate(ds, verbose=1)
        # The per-sample loss is multiplied by the corresponding sample
        # weight. The average of these weighted losses is the return value
        # of the `evaluate` call. For example, in the test above the average
        # weighted loss is computed as:
        #   (2-0)^2 * 0.25 + (4-1)^2 * 0.5 + (6-2)^2 * 0.75 + (8-3)^2 * 1
        #   = 42.5, and 42.5 / 4 = 10.625
self.assertEqual(result, 10.625)
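
    def test_dataset_sample_weights_numpy_reference(self):
        # Added sketch (not part of the original test file): reproduces the
        # 10.625 value above with plain NumPy, independently of Keras. A
        # Dense(1) layer with a ones kernel and zero bias predicts
        # [0, 1, 2, 3] for the inputs used above.
        preds = np.array([0.0, 1.0, 2.0, 3.0])
        targets = np.array([2.0, 4.0, 6.0, 8.0])
        weights = np.array([0.25, 0.5, 0.75, 1.0])
        weighted_losses = (targets - preds) ** 2 * weights
        self.assertEqual(weighted_losses.sum() / 4, 10.625)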
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes
def test_dataset_with_sparse_labels(self):
model = test_utils.get_small_mlp(1, 4, input_dim=3)
optimizer = "rmsprop"
model.compile(
optimizer,
loss="sparse_categorical_crossentropy",
run_eagerly=test_utils.should_run_eagerly(),
)
inputs = np.zeros((10, 3), dtype=np.float32)
targets = np.random.randint(0, 4, size=10, dtype=np.int32)
dataset = tf.data.Dataset.from_tensor_slices((inputs, targets))
dataset = dataset.repeat(100)
dataset = dataset.batch(10)
model.fit(dataset, epochs=1, steps_per_epoch=2, verbose=1)
@test_combinations.run_all_keras_modes
def test_dataset_fit_correctness(self):
class SumLayer(keras.layers.Layer):
def build(self, _):
self.w = self.add_weight("w", ())
def call(self, inputs):
return (
keras.backend.sum(inputs, axis=1, keepdims=True)
+ self.w * 0
)
model = keras.Sequential([SumLayer(input_shape=(2,))])
model.compile(
"rmsprop", loss="mae", run_eagerly=test_utils.should_run_eagerly()
)
inputs = np.zeros((40, 2), dtype=np.float32)
inputs[10:20, :] = 2
inputs[20:30, :] = 1
inputs[30:, :] = 4
targets = np.zeros((40, 1), dtype=np.float32)
# Test correctness with `steps_per_epoch`.
train_dataset = tf.data.Dataset.from_tensor_slices(
(inputs, targets)
).batch(10)
val_dataset = tf.data.Dataset.from_tensor_slices(
(inputs, targets)
).batch(10)
history = model.fit(
train_dataset,
epochs=2,
steps_per_epoch=2,
verbose=1,
validation_data=val_dataset,
validation_steps=2,
)
self.assertAllClose(
history.history["loss"],
[inputs[:20].sum() / 20, inputs[20:].sum() / 20],
)
# The validation dataset will be reset at the end of each validation
# run.
self.assertAllClose(
history.history["val_loss"],
[inputs[:20].sum() / 20, inputs[:20].sum() / 20],
)
# Test correctness with dataset reset.
train_dataset = tf.data.Dataset.from_tensor_slices(
(inputs, targets)
).batch(10)
val_dataset = tf.data.Dataset.from_tensor_slices(
(inputs, targets)
).batch(10)
history = model.fit(
train_dataset, epochs=2, verbose=1, validation_data=val_dataset
)
self.assertAllClose(
history.history["loss"], [inputs.sum() / 40, inputs.sum() / 40]
)
self.assertAllClose(
history.history["val_loss"], [inputs.sum() / 40, inputs.sum() / 40]
)
def test_dataset_input_shape_validation(self):
with tf.compat.v1.get_default_graph().as_default(), self.cached_session(): # noqa: E501
model = test_utils.get_small_functional_mlp(1, 4, input_dim=3)
model.compile(optimizer="rmsprop", loss="mse")
# User forgets to batch the dataset
inputs = np.zeros((10, 3))
targets = np.zeros((10, 4))
dataset = tf.data.Dataset.from_tensor_slices((inputs, targets))
dataset = dataset.repeat(100)
with self.assertRaisesRegex(
ValueError,
r"expected (.*?) to have shape \(3,\) "
r"but got array with shape \(1,\)",
):
model.train_on_batch(dataset)
# Wrong input shape
inputs = np.zeros((10, 5))
targets = np.zeros((10, 4))
dataset = tf.data.Dataset.from_tensor_slices((inputs, targets))
dataset = dataset.repeat(100)
dataset = dataset.batch(10)
with self.assertRaisesRegex(
ValueError, r"expected (.*?) to have shape \(3,\)"
):
model.train_on_batch(dataset)
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes
def test_finite_dataset_known_cardinality_no_steps_arg(self):
model = test_utils.get_small_mlp(1, 4, input_dim=3)
model.compile(
"rmsprop", "mse", run_eagerly=test_utils.should_run_eagerly()
)
inputs = np.zeros((100, 3), dtype=np.float32)
targets = np.random.randint(0, 4, size=100, dtype=np.int32)
dataset = tf.data.Dataset.from_tensor_slices((inputs, targets))
dataset = dataset.batch(10)
batch_counter = BatchCounterCallback()
history = model.fit(
dataset, epochs=2, verbose=1, callbacks=[batch_counter]
)
self.assertLen(history.history["loss"], 2)
self.assertEqual(batch_counter.batch_end_count, 20)
model.evaluate(dataset)
out = model.predict(dataset)
self.assertEqual(out.shape[0], 100)
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes
def test_finite_dataset_unknown_cardinality_no_steps_arg(self):
model = test_utils.get_small_mlp(1, 4, input_dim=3)
model.compile(
"rmsprop", "mse", run_eagerly=test_utils.should_run_eagerly()
)
inputs = np.zeros((100, 3), dtype=np.float32)
targets = np.random.randint(0, 4, size=100, dtype=np.int32)
dataset = tf.data.Dataset.from_tensor_slices((inputs, targets))
dataset = dataset.filter(lambda x, y: True).batch(10)
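        # Added note: the no-op filter() hides the dataset length from
        # static analysis, which is what makes the cardinality UNKNOWN
        # below; the 10-batch epoch length must be discovered at runtime.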
self.assertEqual(
keras.backend.get_value(tf.data.experimental.cardinality(dataset)),
tf.data.experimental.UNKNOWN_CARDINALITY,
)
batch_counter = BatchCounterCallback()
history = model.fit(
dataset, epochs=2, verbose=1, callbacks=[batch_counter]
)
self.assertLen(history.history["loss"], 2)
self.assertEqual(batch_counter.batch_end_count, 20)
model.evaluate(dataset)
out = model.predict(dataset)
self.assertEqual(out.shape[0], 100)
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes(always_skip_v1=True)
def test_finite_dataset_unknown_cardinality_no_step_with_train_and_val(
self,
):
class CaptureStdout:
def __enter__(self):
self._stdout = sys.stdout
string_io = io.StringIO()
sys.stdout = string_io
self._stringio = string_io
return self
def __exit__(self, *args):
self.output = self._stringio.getvalue()
sys.stdout = self._stdout
model = test_utils.get_small_mlp(1, 4, input_dim=3)
model.compile(
"rmsprop", "mse", run_eagerly=test_utils.should_run_eagerly()
)
inputs = np.zeros((100, 3), dtype=np.float32)
targets = np.random.randint(0, 4, size=100, dtype=np.int32)
dataset = tf.data.Dataset.from_tensor_slices((inputs, targets))
dataset = dataset.filter(lambda x, y: True).batch(10)
self.assertEqual(
keras.backend.get_value(tf.data.experimental.cardinality(dataset)),
tf.data.experimental.UNKNOWN_CARDINALITY,
)
batch_counter = BatchCounterCallback()
io_utils.enable_interactive_logging()
with CaptureStdout() as capture:
history = model.fit(
dataset,
epochs=2,
callbacks=[batch_counter],
validation_data=dataset.take(3),
)
lines = capture.output.splitlines()
self.assertIn("10/10", lines[-1])
self.assertLen(history.history["loss"], 2)
self.assertEqual(batch_counter.batch_begin_count, 21)
self.assertEqual(batch_counter.batch_end_count, 20)
model.evaluate(dataset)
out = model.predict(dataset)
self.assertEqual(out.shape[0], 100)
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes
def test_finite_dataset_unknown_cardinality_out_of_data(self):
model = test_utils.get_small_mlp(1, 4, input_dim=3)
model.compile(
"rmsprop", "mse", run_eagerly=test_utils.should_run_eagerly()
)
inputs = np.zeros((100, 3), dtype=np.float32)
targets = np.random.randint(0, 4, size=100, dtype=np.int32)
dataset = tf.data.Dataset.from_tensor_slices((inputs, targets))
dataset = dataset.filter(lambda x, y: True).batch(10)
self.assertEqual(
keras.backend.get_value(tf.data.experimental.cardinality(dataset)),
tf.data.experimental.UNKNOWN_CARDINALITY,
)
batch_counter = BatchCounterCallback()
with tf.compat.v1.test.mock.patch.object(
logging, "warning"
) as mock_log:
# steps_per_epoch (200) is greater than the dataset size (100). As
# this is unexpected, training will stop and not make it to the
# second epoch.
history = model.fit(
dataset,
epochs=2,
verbose=1,
callbacks=[batch_counter],
steps_per_epoch=200,
)
self.assertIn(
"ran out of data; interrupting training.",
str(mock_log.call_args),
)
self.assertIn(
"can generate at least "
"`steps_per_epoch * epochs` batches (in this case, "
"400 batches). You may need to use the repeat() function when "
"building your dataset.",
str(mock_log.call_args),
)
self.assertLen(history.history["loss"], 1)
self.assertEqual(batch_counter.batch_end_count, 10)
model.evaluate(dataset)
out = model.predict(dataset)
self.assertEqual(out.shape[0], 100)
@test_combinations.run_all_keras_modes
def test_with_external_loss(self):
inp = keras.Input(shape=(4,), name="inp1")
out = keras.layers.Dense(2)(inp)
model = keras.Model(inp, out)
model.add_loss(tf.reduce_mean(out))
model.compile("rmsprop")
x = np.ones((10, 4))
# dataset contains only features, no labels.
dataset = tf.data.Dataset.from_tensor_slices(x).repeat(10).batch(10)
model.fit(dataset)
@test_combinations.run_all_keras_modes(always_skip_v1=True)
def test_train_eval_with_steps(self):
# See b/142880049 for more details.
inp = keras.Input(shape=(4,), name="inp1")
out = keras.layers.Dense(2)(inp)
model = keras.Model(inp, out)
model.compile(
"rmsprop", loss="mse", run_eagerly=test_utils.should_run_eagerly()
)
inputs = np.zeros((100, 4), dtype=np.float32)
targets = np.random.randint(0, 2, size=100, dtype=np.int32)
training_ds = (
tf.data.Dataset.from_tensor_slices((inputs, targets))
.repeat()
.batch(10)
)
# Create eval dataset with generator, so that dataset won't contain the
# overall size metadata. Without eval_steps, we expect to run through
# all the data in this dataset every epoch.
def gen():
for _ in range(100):
yield (
np.zeros(4, dtype=np.float32),
np.random.randint(0, 2, size=1, dtype=np.int32),
)
eval_ds = tf.data.Dataset.from_generator(
generator=gen,
output_types=("float64", "int32"),
output_shapes=([4], [1]),
).batch(100)
batch_counter = BatchCounterCallback()
model.fit(
training_ds,
steps_per_epoch=10,
epochs=10,
validation_data=eval_ds,
callbacks=[batch_counter],
)
        # Expect 10 training batches per epoch (100 total over 10 epochs).
self.assertEqual(batch_counter.batch_end_count, 100)
class TestMetricsWithDatasets(test_combinations.TestCase):
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes
def test_metrics_correctness_with_dataset(self):
layers = [
keras.layers.Dense(
8, activation="relu", input_dim=4, kernel_initializer="ones"
),
keras.layers.Dense(
1, activation="sigmoid", kernel_initializer="ones"
),
]
model = test_utils.get_model_from_layers(layers, (4,))
model.compile(
loss="binary_crossentropy",
metrics=["accuracy", metrics_module.BinaryAccuracy()],
optimizer="rmsprop",
run_eagerly=test_utils.should_run_eagerly(),
)
np.random.seed(123)
x = np.random.randint(10, size=(100, 4)).astype(np.float32)
y = np.random.randint(2, size=(100, 1)).astype(np.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.batch(10)
outs = model.evaluate(dataset, steps=10)
self.assertEqual(np.around(outs[1], decimals=1), 0.5)
self.assertEqual(np.around(outs[2], decimals=1), 0.5)
y = np.zeros((100, 1), dtype=np.float32)
dataset = tf.data.Dataset.from_tensor_slices((x, y))
dataset = dataset.repeat(100)
dataset = dataset.batch(10)
outs = model.evaluate(dataset, steps=10)
self.assertEqual(outs[1], 0.0)
self.assertEqual(outs[2], 0.0)
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/engine/training_dataset_test.py/0 | {
"file_path": "tf-keras/tf_keras/engine/training_dataset_test.py",
"repo_id": "tf-keras",
"token_count": 11516
} | 176 |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for TF-Keras initializers."""
import warnings
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
from tf_keras import backend
from tf_keras import initializers
from tf_keras import models
from tf_keras.engine import input_layer
from tf_keras.layers import core
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
RANDOM_INITIALIZERS = [
initializers.RandomUniformV2,
initializers.RandomNormalV2,
initializers.OrthogonalV2,
# TODO(scottzhu): Enable this after the forward compat period expires for
# TruncatedNormalV2
# initializers.TruncatedNormalV2,
initializers.VarianceScalingV2,
initializers.LecunUniformV2,
initializers.LecunNormalV2,
initializers.GlorotUniformV2,
initializers.GlorotNormalV2,
initializers.HeNormalV2,
initializers.HeUniformV2,
]
def _compute_fans(shape):
"""Computes the number of input and output units for a weight shape.
Args:
shape: Integer shape tuple or TF tensor shape.
Returns:
A tuple of integer scalars (fan_in, fan_out).
"""
if len(shape) < 1: # Just to avoid errors for constants.
fan_in = fan_out = 1
elif len(shape) == 1:
fan_in = fan_out = shape[0]
elif len(shape) == 2:
fan_in = shape[0]
fan_out = shape[1]
else:
# Assuming convolution kernels (2D, 3D, or more).
# kernel shape: (..., input_depth, depth)
receptive_field_size = 1
for dim in shape[:-2]:
receptive_field_size *= dim
fan_in = shape[-2] * receptive_field_size
fan_out = shape[-1] * receptive_field_size
return int(fan_in), int(fan_out)
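

def _example_compute_fans():
    """Added sketch (not part of the original test file).

    Illustrates `_compute_fans` above: for a 3x3 convolution kernel with 64
    input and 128 output channels, the receptive field size is 3 * 3 = 9,
    so fan_in = 9 * 64 = 576 and fan_out = 9 * 128 = 1152.
    """
    assert _compute_fans((3, 3, 64, 128)) == (576, 1152)
    assert _compute_fans((100, 10)) == (100, 10)  # Dense kernel: (in, out)
    assert _compute_fans((7,)) == (7, 7)  # 1-D shapes reuse the same value
    assert _compute_fans(()) == (1, 1)  # scalars fall back to (1, 1)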
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class KerasInitializersTest(tf.test.TestCase, parameterized.TestCase):
def _runner(
self,
init,
shape,
):
# The global seed is set so that we can get the same random streams
# between eager and graph mode when stateful op is used.
tf.random.set_seed(1337)
variable = backend.variable(init(shape))
output = backend.get_value(variable)
# Test serialization (assumes deterministic behavior).
config = init.get_config()
reconstructed_init = init.__class__.from_config(config)
tf.random.set_seed(1337)
variable = backend.variable(reconstructed_init(shape))
output_2 = backend.get_value(variable)
self.assertAllClose(output, output_2, atol=1e-4)
def test_uniform(self):
tensor_shape = (3, 2, 3)
with self.cached_session():
self._runner(
initializers.RandomUniformV2(minval=-1, maxval=1, seed=124),
tensor_shape,
)
def test_normal(self):
tensor_shape = (8, 12, 99)
with self.cached_session():
self._runner(
initializers.RandomNormalV2(mean=0, stddev=1, seed=153),
tensor_shape,
)
def test_truncated_normal(self):
tensor_shape = (12, 99, 7)
with self.cached_session():
self._runner(
initializers.TruncatedNormalV2(mean=0, stddev=1, seed=126),
tensor_shape,
)
def test_constant(self):
tensor_shape = (5, 6, 4)
with self.cached_session():
self._runner(initializers.ConstantV2(2.0), tensor_shape)
def test_lecun_uniform(self):
tensor_shape = (5, 6, 4, 2)
with self.cached_session():
self._runner(initializers.LecunUniformV2(seed=123), tensor_shape)
def test_glorot_uniform(self):
tensor_shape = (5, 6, 4, 2)
with self.cached_session():
self._runner(initializers.GlorotUniformV2(seed=123), tensor_shape)
def test_he_uniform(self):
tensor_shape = (5, 6, 4, 2)
with self.cached_session():
self._runner(initializers.HeUniformV2(seed=123), tensor_shape)
def test_lecun_normal(self):
tensor_shape = (5, 6, 4, 2)
with self.cached_session():
self._runner(initializers.LecunNormalV2(seed=123), tensor_shape)
def test_glorot_normal(self):
tensor_shape = (5, 6, 4, 2)
with self.cached_session():
self._runner(initializers.GlorotNormalV2(seed=123), tensor_shape)
def test_he_normal(self):
tensor_shape = (5, 6, 4, 2)
with self.cached_session():
self._runner(initializers.HeNormalV2(seed=123), tensor_shape)
def test_orthogonal(self):
tensor_shape = (20, 20)
with self.cached_session():
self._runner(initializers.OrthogonalV2(seed=123), tensor_shape)
def test_identity(self):
with self.cached_session():
tensor_shape = (3, 4, 5)
with self.assertRaises(ValueError):
self._runner(initializers.IdentityV2(), tensor_shape)
tensor_shape = (3, 3)
self._runner(initializers.IdentityV2(), tensor_shape)
def test_zero(self):
tensor_shape = (4, 5)
with self.cached_session():
self._runner(initializers.ZerosV2(), tensor_shape)
def test_one(self):
tensor_shape = (4, 5)
with self.cached_session():
self._runner(initializers.OnesV2(), tensor_shape)
def test_default_random_uniform(self):
ru = initializers.get("uniform")
self.assertEqual(ru.minval, -0.05)
self.assertEqual(ru.maxval, 0.05)
def test_default_random_normal(self):
rn = initializers.get("normal")
self.assertEqual(rn.mean, 0.0)
self.assertEqual(rn.stddev, 0.05)
def test_default_truncated_normal(self):
tn = initializers.get("truncated_normal")
self.assertEqual(tn.mean, 0.0)
self.assertEqual(tn.stddev, 0.05)
def test_custom_initializer_saving(self):
def my_initializer(shape, dtype=None):
return tf.ones(shape, dtype=dtype)
inputs = input_layer.Input((10,))
outputs = core.Dense(1, kernel_initializer=my_initializer)(inputs)
model = models.Model(inputs, outputs)
model2 = model.from_config(
model.get_config(),
custom_objects={"my_initializer": my_initializer},
)
self.assertEqual(model2.layers[1].kernel_initializer, my_initializer)
@test_utils.run_v2_only
def test_load_external_variance_scaling_v2(self):
external_serialized_json = {
"class_name": "VarianceScaling",
"config": {
"distribution": "normal",
"mode": "fan_avg",
"scale": 1.0,
"seed": None,
},
}
initializer = initializers.deserialize(external_serialized_json)
self.assertEqual(initializer.distribution, "truncated_normal")
@parameterized.named_parameters(
("Zeros", initializers.ZerosV2, {}),
("Ones", initializers.OnesV2, {}),
("Constant", initializers.ConstantV2, {}),
("RandomUniform", initializers.RandomUniformV2, {}),
("RandomUniform_seeded", initializers.RandomUniformV2, {"seed": 123}),
("RandomNormal", initializers.RandomNormalV2, {}),
("RandomNormal_seeded", initializers.RandomNormalV2, {"seed": 123}),
# TODO(scottzhu): Enable these tests after the forward compat period
# expires for TruncatedNormalV2.
# ("TruncatedNormal", initializers.TruncatedNormalV2, {}),
# (
# "TruncatedNormal_seeded",
# initializers.TruncatedNormalV2,
# {"seed": 123},
# ),
("LecunUniform", initializers.LecunUniformV2, {}),
("LecunUniform_seeded", initializers.LecunUniformV2, {"seed": 123}),
("GlorotUniform", initializers.GlorotUniformV2, {}),
("GlorotUniform_seeded", initializers.GlorotUniformV2, {"seed": 123}),
("HeUniform", initializers.HeUniformV2, {}),
("HeUniform_seeded", initializers.HeUniformV2, {"seed": 123}),
)
def test_partition(self, initializer_cls, kwargs):
with self.cached_session():
initializer = initializer_cls(**kwargs)
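            # Added note: a (4, 2) variable split into two (2, 2) shards;
            # `partition_offset` selects which shard of the full variable
            # this call materializes.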
result = initializer(
shape=(4, 2), partition_shape=(2, 2), partition_offset=(0, 0)
)
self.assertEqual(result.shape, (2, 2))
if hasattr(initializer, "seed"):
                # Make sure the results are different when the
                # partition_shape is the same but the partition_offset is
                # different, for random-related initializers.
result_2 = initializer(
shape=(4, 2),
partition_shape=(2, 2),
partition_offset=(1, 0),
)
self.assertNotAllClose(result, result_2)
                # Make sure the initializer produces the same result when
                # provided the same partition offset.
result_3 = initializer(
shape=(4, 2),
partition_shape=(2, 2),
partition_offset=(1, 0),
)
self.assertAllClose(result_2, result_3)
@parameterized.named_parameters(
("Orthogonal", initializers.OrthogonalV2),
("Identity", initializers.IdentityV2),
)
def test_partition_unsupported(self, initializer_cls):
with self.assertRaisesRegex(
ValueError,
"initializer doesn't support partition-related arguments",
):
initializer_cls()(
shape=(4, 2), partition_shape=(2, 2), partition_offset=(0, 0)
)
@parameterized.parameters(RANDOM_INITIALIZERS)
def test_stateless(self, initializer_cl):
with self.cached_session():
initializer = initializer_cl()
output1 = initializer(shape=[2, 3])
output2 = initializer(shape=[2, 3])
initializer2 = initializer_cl()
output3 = initializer2(shape=[2, 3])
output4 = initializer2(shape=[2, 3])
self.assertAllClose(output1, output2)
self.assertAllClose(output3, output4)
self.assertNotAllClose(output1, output3)
with warnings.catch_warnings(record=True) as w:
initializer(shape=[2, 3])
self.assertLen(w, 1)
self.assertIn("being called multiple times", str(w[0].message))
@parameterized.parameters(RANDOM_INITIALIZERS)
def test_seed_stateless(self, initializer_cl):
with self.cached_session():
seed = 1337
initializer = initializer_cl(seed=seed)
output1 = initializer(shape=[2, 3])
output2 = initializer(shape=[2, 3])
initializer2 = initializer_cl(seed=seed)
output3 = initializer2(shape=[2, 3])
output4 = initializer2(shape=[2, 3])
self.assertAllClose(output1, output2)
self.assertAllClose(output3, output4)
self.assertAllClose(output1, output3)
            # We don't raise a warning for seeded initializers.
with warnings.catch_warnings(record=True) as w:
initializer(shape=[2, 3])
self.assertEmpty(w)
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/initializers/initializers_test.py/0 | {
"file_path": "tf-keras/tf_keras/initializers/initializers_test.py",
"repo_id": "tf-keras",
"token_count": 5474
} | 177 |
# Description:
# Contains a collection of diverse TF-Keras models to be used for integration tests.
# Placeholder: load unaliased py_library
package(
# copybara:uncomment default_applicable_licenses = ["//tf_keras:license"],
default_visibility = [
"//tf_keras:friends",
],
licenses = ["notice"],
)
py_library(
name = "models",
srcs = [
"__init__.py",
"bert.py",
"ctc_speech_rnn.py",
"dcgan.py",
"edge_case_model.py",
"efficientnet_v2.py",
"input_spec.py",
"low_level_model.py",
"mini_unet.py",
"mini_xception.py",
"retinanet.py",
"structured_data_classification.py",
"text_classification.py",
"timeseries_forecasting.py",
"translation.py",
"vae.py",
],
srcs_version = "PY3",
deps = ["//:expect_tensorflow_installed"],
)
| tf-keras/tf_keras/integration_test/models/BUILD/0 | {
"file_path": "tf-keras/tf_keras/integration_test/models/BUILD",
"repo_id": "tf-keras",
"token_count": 430
} | 178 |
"""Variable autoencoder.
Adapted from https://keras.io/examples/generative/vae/
"""
import tensorflow as tf
from tensorflow import keras
from tf_keras.integration_test.models.input_spec import InputSpec
from tf_keras.saving import serialization_lib
IMG_SIZE = (28, 28)
LATENT_DIM = 64
def get_input_preprocessor():
return None
class Sampling(keras.layers.Layer):
def call(self, inputs):
z_mean, z_log_var = inputs
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
return z_mean + tf.exp(0.5 * z_log_var) * epsilon
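

def _example_reparameterization():
    """Added sketch (not part of the original file).

    The `Sampling` layer above implements the reparameterization trick:
    instead of drawing z ~ N(mean, sigma^2) directly (which is not
    differentiable w.r.t. mean and sigma), it draws eps ~ N(0, 1) and
    computes z = mean + sigma * eps with sigma = exp(0.5 * log_var), which
    keeps gradients flowing to `z_mean` and `z_log_var`.
    """
    z_mean = tf.zeros((4, LATENT_DIM)) + 2.0
    z_log_var = tf.zeros((4, LATENT_DIM))  # log-variance 0 -> unit variance
    z = Sampling()([z_mean, z_log_var])
    assert z.shape == (4, LATENT_DIM)  # one draw from N(2, 1) per entry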
class VAE(keras.Model):
def __init__(self, encoder, decoder, **kwargs):
super(VAE, self).__init__(**kwargs)
self.encoder = encoder
self.decoder = decoder
self.total_loss_tracker = keras.metrics.Mean(name="total_loss")
self.reconstruction_loss_tracker = keras.metrics.Mean(
name="reconstruction_loss"
)
self.kl_loss_tracker = keras.metrics.Mean(name="kl_loss")
@property
def metrics(self):
return [
self.total_loss_tracker,
self.reconstruction_loss_tracker,
self.kl_loss_tracker,
]
def train_step(self, data):
with tf.GradientTape() as tape:
z_mean, z_log_var, z = self.encoder(data)
reconstruction = self.decoder(z)
reconstruction_loss = tf.reduce_mean(
tf.reduce_sum(
keras.losses.binary_crossentropy(data, reconstruction),
axis=(1, 2),
)
)
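            # Added note: closed-form KL divergence between the posterior
            # N(z_mean, exp(z_log_var)) and the standard normal prior:
            # KL = -0.5 * sum(1 + log(var) - mean^2 - var) per latent dim.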
kl_loss = -0.5 * (
1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var)
)
kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
total_loss = reconstruction_loss + kl_loss
grads = tape.gradient(total_loss, self.trainable_weights)
self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
self.total_loss_tracker.update_state(total_loss)
self.reconstruction_loss_tracker.update_state(reconstruction_loss)
self.kl_loss_tracker.update_state(kl_loss)
return {
"loss": self.total_loss_tracker.result(),
"reconstruction_loss": self.reconstruction_loss_tracker.result(),
"kl_loss": self.kl_loss_tracker.result(),
}
def get_config(self):
base_config = super().get_config()
return {
"encoder": self.encoder,
"decoder": self.decoder,
**base_config,
}
@classmethod
def from_config(cls, config):
encoder = serialization_lib.deserialize_keras_object(
config.pop("encoder")
)
decoder = serialization_lib.deserialize_keras_object(
config.pop("decoder")
)
return cls(encoder, decoder, **config)
def get_data_spec(batch_size):
return InputSpec((batch_size,) + IMG_SIZE + (1,))
def get_model(
build=False, compile=False, jit_compile=False, include_preprocessing=True
):
encoder_inputs = keras.Input(shape=IMG_SIZE + (1,))
x = keras.layers.Conv2D(
32, 3, activation="relu", strides=2, padding="same"
)(encoder_inputs)
x = keras.layers.Conv2D(
64, 3, activation="relu", strides=2, padding="same"
)(x)
x = keras.layers.Flatten()(x)
x = keras.layers.Dense(16, activation="relu")(x)
z_mean = keras.layers.Dense(LATENT_DIM, name="z_mean")(x)
z_log_var = keras.layers.Dense(LATENT_DIM, name="z_log_var")(x)
z = Sampling()([z_mean, z_log_var])
encoder = keras.Model(
encoder_inputs, [z_mean, z_log_var, z], name="encoder"
)
latent_inputs = keras.Input(shape=(LATENT_DIM,))
x = keras.layers.Dense(7 * 7 * 64, activation="relu")(latent_inputs)
x = keras.layers.Reshape((7, 7, 64))(x)
x = keras.layers.Conv2DTranspose(
64, 3, activation="relu", strides=2, padding="same"
)(x)
x = keras.layers.Conv2DTranspose(
32, 3, activation="relu", strides=2, padding="same"
)(x)
decoder_outputs = keras.layers.Conv2DTranspose(
1, 3, activation="sigmoid", padding="same"
)(x)
decoder = keras.Model(latent_inputs, decoder_outputs, name="decoder")
vae = VAE(encoder, decoder)
if compile:
vae.compile(optimizer=keras.optimizers.Adam(), jit_compile=jit_compile)
return vae
def get_custom_objects():
return {"VAE": VAE, "Sampling": Sampling}
| tf-keras/tf_keras/integration_test/models/vae.py/0 | {
"file_path": "tf-keras/tf_keras/integration_test/models/vae.py",
"repo_id": "tf-keras",
"token_count": 2170
} | 179 |
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
import tensorflow.compat.v2 as tf
import tf_keras as keras
class VectorizedMapTest(tf.test.TestCase):
def test_vectorized_map(self):
batch_size = 10
num_features = 32
layer = keras.layers.Dense(1)
def model_fn(arg):
with tf.GradientTape() as g:
inp, label = arg
inp = tf.expand_dims(inp, 0)
label = tf.expand_dims(label, 0)
prediction = layer(inp)
loss = tf.nn.l2_loss(label - prediction)
return g.gradient(loss, (layer.kernel, layer.bias))
inputs = tf.random.uniform([batch_size, num_features])
labels = tf.random.uniform([batch_size, 1])
per_example_gradients = tf.vectorized_map(model_fn, (inputs, labels))
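        # Added note: tf.vectorized_map runs model_fn once per example
        # along the leading axis, so each returned gradient gains a leading
        # batch dimension: kernel grads (10, 32, 1), bias grads (10, 1).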
self.assertEqual(
per_example_gradients[0].shape, (batch_size, num_features, 1)
)
self.assertEqual(per_example_gradients[1].shape, (batch_size, 1))
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/integration_test/vectorized_map_test.py/0 | {
"file_path": "tf-keras/tf_keras/integration_test/vectorized_map_test.py",
"repo_id": "tf-keras",
"token_count": 657
} | 180 |
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for the MultiHeadAttention layer."""
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
import tf_keras as keras
from tf_keras.saving import object_registration
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
# This decorator runs the test in V1, V2-Eager, and V2-Functional mode. It
# guarantees forward compatibility of this code for the V2 switchover.
@test_combinations.run_all_keras_modes
class MultiHeadAttentionTest(test_combinations.TestCase):
@parameterized.named_parameters(
("key_value_same_proj", None, None, [40, 80]),
("key_value_different_proj", 32, 60, [40, 60]),
)
def test_non_masked_attention(self, value_dim, output_shape, output_dims):
"""Test that the attention layer can be created without a mask
tensor."""
test_layer = keras.layers.MultiHeadAttention(
num_heads=12,
key_dim=64,
value_dim=value_dim,
output_shape=output_shape,
)
# Create a 3-dimensional input (the first dimension is implicit).
query = keras.Input(shape=(40, 80))
value = keras.Input(shape=(20, 80))
output = test_layer(query=query, value=value)
self.assertEqual(output.shape.as_list(), [None] + output_dims)
def test_non_masked_self_attention(self):
"""Test with one input (self-attenntion) and no mask tensor."""
test_layer = keras.layers.MultiHeadAttention(num_heads=12, key_dim=64)
# Create a 3-dimensional input (the first dimension is implicit).
query = keras.Input(shape=(40, 80))
output = test_layer(query, query)
self.assertEqual(output.shape.as_list(), [None, 40, 80])
def test_attention_scores(self):
"""Test attention outputs with coefficients."""
test_layer = keras.layers.MultiHeadAttention(num_heads=12, key_dim=64)
# Create a 3-dimensional input (the first dimension is implicit).
query = keras.Input(shape=(40, 80))
output, coef = test_layer(query, query, return_attention_scores=True)
self.assertEqual(output.shape.as_list(), [None, 40, 80])
self.assertEqual(coef.shape.as_list(), [None, 12, 40, 40])
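        # Added note: attention scores have shape
        # (batch, num_heads, T_query, T_key); here T_key == T_query == 40
        # because this is self-attention.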
def test_attention_scores_with_values(self):
"""Test attention outputs with coefficients."""
test_layer = keras.layers.MultiHeadAttention(num_heads=12, key_dim=64)
# Create a 3-dimensional input (the first dimension is implicit).
query = keras.Input(shape=(40, 80))
value = keras.Input(shape=(60, 80))
output, coef = test_layer(query, value, return_attention_scores=True)
self.assertEqual(output.shape.as_list(), [None, 40, 80])
self.assertEqual(coef.shape.as_list(), [None, 12, 40, 60])
@parameterized.named_parameters(("with_bias", True), ("no_bias", False))
def test_masked_attention(self, use_bias):
"""Test with a mask tensor."""
test_layer = keras.layers.MultiHeadAttention(
num_heads=2, key_dim=2, use_bias=use_bias
)
# Create a 3-dimensional input (the first dimension is implicit).
batch_size = 3
query = keras.Input(shape=(4, 8))
value = keras.Input(shape=(2, 8))
mask_tensor = keras.Input(shape=(4, 2))
output = test_layer(
query=query, value=value, attention_mask=mask_tensor
)
# Create a model containing the test layer.
model = keras.Model([query, value, mask_tensor], output)
# Generate data for the input (non-mask) tensors.
from_data = 10 * np.random.random_sample((batch_size, 4, 8))
to_data = 10 * np.random.random_sample((batch_size, 2, 8))
# Invoke the data with a random set of mask data. This should mask at
# least one element.
mask_data = np.random.randint(2, size=(batch_size, 4, 2))
masked_output_data = model.predict([from_data, to_data, mask_data])
# Invoke the same data, but with a null mask (where no elements are
# masked).
null_mask_data = np.ones((batch_size, 4, 2))
unmasked_output_data = model.predict(
[from_data, to_data, null_mask_data]
)
# Because one data is masked and one is not, the outputs should not be
# the same.
self.assertNotAllClose(masked_output_data, unmasked_output_data)
# Tests the layer with three inputs: Q, K, V.
key = keras.Input(shape=(2, 8))
output = test_layer(
query, value=value, key=key, attention_mask=mask_tensor
)
model = keras.Model([query, value, key, mask_tensor], output)
masked_output_data = model.predict(
[from_data, to_data, to_data, mask_data]
)
unmasked_output_data = model.predict(
[from_data, to_data, to_data, null_mask_data]
)
# Because one input is masked and the other is not, the outputs should
# not be the same.
self.assertNotAllClose(masked_output_data, unmasked_output_data)
if use_bias:
self.assertLen(test_layer._query_dense.trainable_variables, 2)
self.assertLen(test_layer._output_dense.trainable_variables, 2)
else:
self.assertLen(test_layer._query_dense.trainable_variables, 1)
self.assertLen(test_layer._output_dense.trainable_variables, 1)
def test_initializer(self):
"""Test with a specified initializer."""
test_layer = keras.layers.MultiHeadAttention(
num_heads=12,
key_dim=64,
kernel_initializer=keras.initializers.TruncatedNormal(stddev=0.02),
)
# Create a 3-dimensional input (the first dimension is implicit).
query = keras.Input(shape=(40, 80))
output = test_layer(query, query)
self.assertEqual(output.shape.as_list(), [None, 40, 80])
# Make sure the sub layers have different kernel init value, and not
# reusing the initializers.
self.assertNotAllClose(
keras.backend.eval(test_layer._query_dense.kernel),
keras.backend.eval(test_layer._key_dense.kernel),
)
self.assertNotAllClose(
keras.backend.eval(test_layer._query_dense.kernel),
keras.backend.eval(test_layer._value_dense.kernel),
)
self.assertNotAllClose(
keras.backend.eval(test_layer._query_dense.kernel),
keras.backend.eval(test_layer._output_dense.kernel),
)
@parameterized.named_parameters(
("bfloat16", tf.bfloat16),
("float16", tf.float16),
("float32", tf.float32),
("float64", tf.float64),
)
def test_sublayer_dtypes(self, dtype):
test_layer = keras.layers.MultiHeadAttention(
num_heads=12, key_dim=64, dtype=dtype
)
query = keras.Input(shape=(40, 80), dtype=dtype)
# Build the layer
test_layer(query=query, value=query)
self.assertEqual(test_layer._query_dense.dtype, dtype)
self.assertEqual(test_layer._key_dense.dtype, dtype)
self.assertEqual(test_layer._value_dense.dtype, dtype)
self.assertEqual(test_layer._output_dense.dtype, dtype)
def test_masked_attention_with_scores(self):
"""Test with a mask tensor."""
test_layer = keras.layers.MultiHeadAttention(num_heads=2, key_dim=2)
# Create a 3-dimensional input (the first dimension is implicit).
batch_size = 3
query = keras.Input(shape=(4, 8))
value = keras.Input(shape=(2, 8))
mask_tensor = keras.Input(shape=(4, 2))
output = test_layer(
query=query, value=value, attention_mask=mask_tensor
)
# Create a model containing the test layer.
model = keras.Model([query, value, mask_tensor], output)
# Generate data for the input (non-mask) tensors.
from_data = 10 * np.random.random_sample((batch_size, 4, 8))
to_data = 10 * np.random.random_sample((batch_size, 2, 8))
# Invoke the model with random mask data. This should mask at
# least one element.
mask_data = np.random.randint(2, size=(batch_size, 4, 2))
masked_output_data = model.predict([from_data, to_data, mask_data])
# Invoke the same data, but with a null mask (where no elements are
# masked).
null_mask_data = np.ones((batch_size, 4, 2))
unmasked_output_data = model.predict(
[from_data, to_data, null_mask_data]
)
# Because one input is masked and the other is not, the outputs should
# not be the same.
self.assertNotAllClose(masked_output_data, unmasked_output_data)
# Create a model containing attention scores.
output, scores = test_layer(
query=query,
value=value,
attention_mask=mask_tensor,
return_attention_scores=True,
)
model = keras.Model([query, value, mask_tensor], [output, scores])
masked_output_data_score, masked_score = model.predict(
[from_data, to_data, mask_data]
)
unmasked_output_data_score, unmasked_score = model.predict(
[from_data, to_data, null_mask_data]
)
self.assertNotAllClose(
masked_output_data_score, unmasked_output_data_score
)
self.assertAllClose(masked_output_data, masked_output_data_score)
self.assertAllClose(unmasked_output_data, unmasked_output_data_score)
self.assertNotAllClose(masked_score, unmasked_score)
@parameterized.named_parameters(
("4d_inputs_1freebatch_mask2", [3, 4], [3, 2], [4, 2], (2,)),
("4d_inputs_1freebatch_mask3", [3, 4], [3, 2], [3, 4, 2], (2,)),
("4d_inputs_1freebatch_mask4", [3, 4], [3, 2], [3, 2, 4, 2], (2,)),
("4D_inputs_2D_attention", [3, 4], [3, 2], [3, 4, 3, 2], (1, 2)),
("5D_inputs_2D_attention", [5, 3, 4], [5, 3, 2], [3, 4, 3, 2], (2, 3)),
(
"5D_inputs_2D_attention_fullmask",
[5, 3, 4],
[5, 3, 2],
[5, 3, 4, 3, 2],
(2, 3),
),
)
def test_high_dim_attention(
self, q_dims, v_dims, mask_dims, attention_axes
):
"""Test with a mask tensor."""
test_layer = keras.layers.MultiHeadAttention(
num_heads=2, key_dim=2, attention_axes=attention_axes
)
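# `attention_axes` picks the axes (besides the batch and feature axes)
# over which attention is computed; the remaining axes act as extra
# batch dimensions.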
batch_size, hidden_size = 3, 8
# Generate data for the input (non-mask) tensors.
query_shape = [batch_size] + q_dims + [hidden_size]
value_shape = [batch_size] + v_dims + [hidden_size]
mask_shape = [batch_size] + mask_dims
query = 10 * np.random.random_sample(query_shape)
value = 10 * np.random.random_sample(value_shape)
# Generate a random set of mask data. This should mask at
# least one element.
mask_data = np.random.randint(2, size=mask_shape).astype("bool")
# Invoke the same data, but with a null mask (where no elements are
# masked).
null_mask_data = np.ones(mask_shape)
# Because one input is masked and the other is not, the outputs should
# not be the same.
query_tensor = keras.Input(query_shape[1:], name="query")
value_tensor = keras.Input(value_shape[1:], name="value")
mask_tensor = keras.Input(mask_shape[1:], name="mask")
output = test_layer(
query=query_tensor, value=value_tensor, attention_mask=mask_tensor
)
model = keras.Model([query_tensor, value_tensor, mask_tensor], output)
self.assertNotAllClose(
model.predict([query, value, mask_data]),
model.predict([query, value, null_mask_data]),
)
def test_dropout(self):
test_layer = keras.layers.MultiHeadAttention(
num_heads=2, key_dim=2, dropout=0.5
)
# Generate data for the input (non-mask) tensors.
from_data = keras.backend.ones(shape=(32, 4, 8))
to_data = keras.backend.ones(shape=(32, 2, 8))
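# Positional call: the arguments after (query, value) map to key,
# attention_mask, return_attention_scores, and training; only the
# `training` flag differs between the two calls below.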
train_out = test_layer(from_data, to_data, None, None, None, True)
test_out = test_layer(from_data, to_data, None, None, None, False)
# Output should be close when not in training mode,
# and should not be close when enabling dropout in training mode.
self.assertNotAllClose(
keras.backend.eval(train_out), keras.backend.eval(test_out)
)
@test_combinations.generate(
test_combinations.combine(
ragged_query=[True, False],
ragged_value=[True, False],
ragged_key=[True, False],
)
)
def test_ragged_tensor(self, ragged_query, ragged_value, ragged_key):
if ragged_query:
query = tf.ragged.constant(
[
[[3.0, 1.0], [4.0, 1.0]],
[[5.0, 9.0], [2.0, 6.0], [3.0, 1.0]],
[[1.0, 2.0]],
],
inner_shape=(2,),
)
else:
query = keras.backend.ones(shape=(3, 2, 2))
if ragged_value:
value = tf.ragged.constant(
[[[3.0, 1.0], [4.0, 1.0]], [[5.0, 9.0]], [[1.0, 2.0]]],
inner_shape=(2,),
)
else:
value = keras.backend.ones(shape=(3, 4, 2))
if ragged_key:
key = tf.ragged.constant(
[
[[3.0, 1.0], [4.0, 1.0]],
[[5.0, 9.0], [2.0, 6.0], [3.0, 1.0], [1.0, 5.0]],
[[1.0, 2.0]],
],
inner_shape=(2,),
)
else:
key = keras.backend.ones(shape=(3, 4, 2))
test_layer = keras.layers.MultiHeadAttention(num_heads=5, key_dim=2)
results = test_layer(query, value, key)
self.assertAllEqual(results.shape.as_list(), query.shape.as_list())
def test_ragged_tensor_with_causal_mask_no_error(self):
ragged_tensor = tf.ragged.constant(
[
[[3.0, 1.0], [4.0, 1.0]],
[[5.0, 9.0], [2.0, 6.0], [3.0, 1.0]],
[[1.0, 2.0]],
],
inner_shape=(2,),
)
test_layer = keras.layers.MultiHeadAttention(num_heads=5, key_dim=2)
results = test_layer(
ragged_tensor, ragged_tensor, ragged_tensor, use_causal_mask=True
)
self.assertAllEqual(
results.shape.as_list(), ragged_tensor.shape.as_list()
)
def test_query_mask_propagation(self):
"""Test automatic propagation of the query's mask."""
test_layer = keras.layers.MultiHeadAttention(num_heads=2, key_dim=2)
self.assertTrue(test_layer.supports_masking)
query = tf.constant([[1, 2, 3, 0, 0], [3, 3, 1, 1, 2], [1, 0, 0, 0, 0]])
masked_query = keras.layers.Embedding(4, 8, mask_zero=True)(query)
value = tf.random.normal((3, 3, 8))
output = test_layer(query=masked_query, value=value)
self.assertTrue(hasattr(output, "_keras_mask"))
self.assertAllEqual(masked_query._keras_mask, output._keras_mask)
@parameterized.named_parameters(("causal", True), ("not_causal", False))
@test_utils.run_v2_only
def test_value_mask(self, use_causal_mask):
"""Test that the value and causal masks are taken into account."""
test_layer = keras.layers.MultiHeadAttention(num_heads=2, key_dim=2)
query = tf.constant([[1, 2, 3, 0, 0], [3, 3, 1, 1, 2], [1, 0, 0, 0, 0]])
masked_query = keras.layers.Embedding(4, 8, mask_zero=True)(query)
value = tf.constant([[5, 4, 0], [3, 0, 0], [2, 1, 1]])
masked_value = keras.layers.Embedding(6, 8, mask_zero=True)(value)
output = test_layer(
query=masked_query,
value=masked_value,
use_causal_mask=use_causal_mask,
)
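# Manually rebuild the combined padding mask: entry [i, q, v] is True
# only when both query position q and value position v are unmasked in
# batch element i.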
mask = tf.constant(
[[[True, True, False]] * 3 + [[False, False, False]] * 2]
+ [[[True, False, False]] * 5]
+ [[[True, True, True]] + [[False, False, False]] * 4]
)
if use_causal_mask:
mask = mask & tf.constant(
[
[[True, False, False], [True, True, False]]
+ [[True, True, True]] * 3
]
)
del masked_query._keras_mask
del masked_value._keras_mask
output_with_manual_mask = test_layer(
query=masked_query, value=masked_value, attention_mask=mask
)
self.assertAllClose(output, output_with_manual_mask)
def test_masks_are_cast_to_bool(self):
"""Test that the implicit and explicit masks are cast to bool."""
test_layer = keras.layers.MultiHeadAttention(num_heads=2, key_dim=2)
query = np.array([[1, 2, 3, 0, 0], [3, 3, 1, 1, 2], [1, 0, 0, 0, 0]])
masked_query = keras.layers.Embedding(4, 8, mask_zero=True)(query)
masked_query._keras_mask = tf.cast(masked_query._keras_mask, tf.float32)
value = np.array([[5, 4, 0], [3, 0, 0], [2, 1, 1]])
masked_value = keras.layers.Embedding(6, 8, mask_zero=True)(value)
masked_value._keras_mask = tf.cast(masked_value._keras_mask, tf.float32)
float_mask = tf.constant([[[1.0]]])
# if all works well, the following should not raise any exception:
_ = test_layer(
query=masked_query,
value=masked_value,
use_causal_mask=True,
attention_mask=float_mask,
)
@parameterized.named_parameters(
("without_key_same_proj", [40, 80], [20, 80], None, None),
("with_key_same_proj", [40, 80], [20, 80], [20, 30], None),
("wihtout_key_different_proj", [40, 80], [20, 80], None, [30, 40]),
("with_key_different_proj", [40, 80], [20, 80], [20, 30], [15, 50]),
(
"high_dim_same_proj",
[40, 20, 30, 80],
[10, 10, 50, 80],
[10, 10, 50, 20],
None,
),
(
"high_dim_different_proj",
[40, 20, 30, 80],
[10, 10, 50, 80],
[10, 10, 50, 20],
[30, 20],
),
)
def test_compute_output_shape(
self, query_dims, value_dims, key_dims, output_shape
):
"""Test computed shape is equal to the layer output's shape."""
test_layer = keras.layers.MultiHeadAttention(
num_heads=2,
key_dim=2,
value_dim=2,
output_shape=output_shape,
)
batch_size = None
query_shape = [batch_size] + query_dims
value_shape = [batch_size] + value_dims
if key_dims:
key_shape = [batch_size] + key_dims
else:
key_shape = None
query = keras.Input(query_shape[1:])
value = keras.Input(value_shape[1:])
if key_shape:
key = keras.Input(key_shape[1:])
else:
key = None
output = test_layer(query=query, value=value, key=key)
comp_output_shape = test_layer.compute_output_shape(
query_shape, value_shape, key_shape
)
self.assertListEqual(
output.shape.as_list(), comp_output_shape.as_list()
)
@parameterized.named_parameters(
("query_value_dim_mismatch", (None, 40, 80), (None, 20, 70), None),
(
"key_value_dim_mismatch",
(None, 40, 80),
(None, 20, 80),
(None, 10, 70),
),
(
"key_value_dim_mismatch_high_dim",
(None, 40, 20, 30, 80),
(None, 10, 10, 50, 80),
(None, 10, 15, 50, 20),
),
)
def test_compute_output_shape_raises_error(
self, query_shape, value_shape, key_shape
):
"""Test dimension mismatches"""
test_layer = keras.layers.MultiHeadAttention(
num_heads=4,
key_dim=2,
value_dim=2,
)
with self.assertRaisesRegex(ValueError, r"must be equal"):
test_layer.compute_output_shape(query_shape, value_shape, key_shape)
class SubclassAttention(keras.layers.MultiHeadAttention):
def _build_attention(self, qkv_rank):
pass
def _compute_attention(
self,
query_tensor,
key_tensor,
value_tensor,
attention_mask=None,
training=None,
):
return value_tensor, None
@test_combinations.run_all_keras_modes
class AttentionSubclassTest(test_combinations.TestCase):
def test_initializer(self):
"""Test with a specified initializer."""
test_layer = SubclassAttention(num_heads=12, key_dim=64)
# Create a 3-dimensional input (the first dimension is implicit).
query = keras.Input(shape=(40, 80))
output = test_layer(query, query)
self.assertEqual(output.shape.as_list(), [None, 40, 80])
@object_registration.register_keras_serializable()
class TestModel(keras.Model):
def __init__(self):
super().__init__()
self.attention = keras.layers.MultiHeadAttention(
num_heads=3,
key_dim=4,
value_dim=4,
use_bias=True,
dropout=0.0,
output_shape=[12],
)
@classmethod
def from_config(cls, config):
return cls(**config)
def get_config(self):
return {}
def call(self, x, training=False):
return self.attention(x, x, training=training)
@test_combinations.run_all_keras_modes(always_skip_v1=True)
class KerasModelSavingTest(test_combinations.TestCase):
@parameterized.parameters("tf", "keras_v3")
def test_keras_saving_subclass(self, save_format):
model = TestModel()
query = keras.Input(shape=(40, 80))
_ = model(query)
model_path = self.get_temp_dir() + "/tmp_model"
if save_format == "keras_v3":
if not tf.__internal__.tf2.enabled():
self.skipTest(
"TF2 must be enabled to use the new `.keras` saving."
)
model_path += ".keras"
keras.models.save_model(model, model_path, save_format=save_format)
reloaded_model = keras.models.load_model(model_path)
self.assertEqual(
len(model.trainable_variables),
len(reloaded_model.trainable_variables),
)
for src_v, loaded_v in zip(
model.trainable_variables, reloaded_model.trainable_variables
):
self.assertAllEqual(src_v, loaded_v)
@parameterized.parameters("h5", "tf", "keras_v3")
def test_keras_saving_functional(self, save_format):
query = keras.Input(shape=(40, 80))
output = keras.layers.MultiHeadAttention(
num_heads=3, key_dim=4, value_dim=4, use_bias=True, dropout=0.0
)(query, query)
model = keras.Model(inputs=query, outputs=output)
model_path = self.get_temp_dir() + "/tmp_model"
if save_format == "keras_v3":
if not tf.__internal__.tf2.enabled():
self.skipTest(
"TF2 must be enabled to use the new `.keras` saving."
)
model_path += ".keras"
keras.models.save_model(model, model_path, save_format=save_format)
reloaded_model = keras.models.load_model(model_path)
self.assertEqual(
len(model.trainable_variables),
len(reloaded_model.trainable_variables),
)
for src_v, loaded_v in zip(
model.trainable_variables, reloaded_model.trainable_variables
):
self.assertAllEqual(src_v, loaded_v)
def test_create_without_build(self):
not_initialized_layer = keras.layers.MultiHeadAttention(
num_heads=3, key_dim=4, value_dim=4
)
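# Rebuilding from config should succeed even though the layer was never
# built or called.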
keras.layers.MultiHeadAttention.from_config(
not_initialized_layer.get_config()
)
if __name__ == "__main__":
tf.test.main()
# --- End of file: tf-keras/tf_keras/layers/attention/multi_head_attention_test.py ---
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for depthwise convolutional layers."""
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
import tf_keras as keras
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
@test_combinations.run_all_keras_modes
class DepthwiseConv1DTest(test_combinations.TestCase):
def _run_test(self, kwargs, expected_output_shape=None):
num_samples = 2
stack_size = 3
num_row = 7
with self.cached_session():
test_utils.layer_test(
keras.layers.DepthwiseConv1D,
kwargs=kwargs,
input_shape=(num_samples, num_row, stack_size),
expected_output_shape=expected_output_shape,
)
@parameterized.named_parameters(
("padding_valid", {"padding": "valid"}),
("padding_same", {"padding": "same"}),
("strides", {"strides": 2}),
# Only runs on GPU with CUDA, channels_first is not supported on CPU.
# TODO(b/62340061): Support channels_first on CPU.
("data_format", {"data_format": "channels_first"}),
("depth_multiplier_1", {"depth_multiplier": 1}),
("depth_multiplier_2", {"depth_multiplier": 2}),
("dilation_rate", {"dilation_rate": 2}, (None, 3, 3)),
)
def test_depthwise_conv1d(self, kwargs, expected_output_shape=None):
kwargs["kernel_size"] = 3
if "data_format" not in kwargs or tf.test.is_gpu_available(
cuda_only=True
):
self._run_test(kwargs, expected_output_shape)
def test_depthwise_conv1d_full(self):
kwargs = {
"kernel_size": 3,
"padding": "valid",
"data_format": "channels_last",
"dilation_rate": 1,
"activation": None,
"depthwise_regularizer": "l2",
"bias_regularizer": "l2",
"activity_regularizer": "l2",
"depthwise_constraint": "unit_norm",
"use_bias": True,
"strides": 2,
"depth_multiplier": 1,
}
self._run_test(kwargs)
def test_depthwise_conv1d_invalid_strides_and_dilation_rate(self):
kwargs = {"strides": 2, "dilation_rate": 2}
with self.assertRaisesRegex(
ValueError, r"""`strides > 1` not supported in conjunction"""
):
keras.layers.DepthwiseConv1D(kernel_size=2, **kwargs)
@test_combinations.run_all_keras_modes
class DepthwiseConv2DTest(test_combinations.TestCase):
def _run_test(self, kwargs, expected_output_shape=None):
num_samples = 2
stack_size = 3
num_row = 7
num_col = 6
with self.cached_session():
test_utils.layer_test(
keras.layers.DepthwiseConv2D,
kwargs=kwargs,
input_shape=(num_samples, num_row, num_col, stack_size),
expected_output_shape=expected_output_shape,
)
@parameterized.named_parameters(
("padding_valid", {"padding": "valid"}),
("padding_same", {"padding": "same"}),
("strides", {"strides": (2, 2)}),
# Only runs on GPU with CUDA, channels_first is not supported on CPU.
# TODO(b/62340061): Support channels_first on CPU.
("data_format", {"data_format": "channels_first"}),
("depth_multiplier_1", {"depth_multiplier": 1}),
("depth_multiplier_2", {"depth_multiplier": 2}),
("dilation_rate", {"dilation_rate": (2, 2)}, (None, 3, 2, 3)),
)
def test_depthwise_conv2d(self, kwargs, expected_output_shape=None):
kwargs["kernel_size"] = (3, 3)
if "data_format" not in kwargs or tf.test.is_gpu_available(
cuda_only=True
):
self._run_test(kwargs, expected_output_shape)
def test_depthwise_conv2d_full(self):
kwargs = {
"kernel_size": 3,
"padding": "valid",
"data_format": "channels_last",
"dilation_rate": (1, 1),
"activation": None,
"depthwise_regularizer": "l2",
"bias_regularizer": "l2",
"activity_regularizer": "l2",
"depthwise_constraint": "unit_norm",
"use_bias": True,
"strides": (2, 2),
"depth_multiplier": 1,
}
self._run_test(kwargs)
def test_depthwise_conv2d_invalid_strides_and_dilation_rate(self):
kwargs = {"strides": [2, 1], "dilation_rate": [2, 1]}
with self.assertRaisesRegex(
ValueError, r"""`strides > 1` not supported in conjunction"""
):
keras.layers.DepthwiseConv2D(kernel_size=2, **kwargs)
if __name__ == "__main__":
tf.test.main()
# --- End of file: tf-keras/tf_keras/layers/convolutional/depthwise_conv_test.py ---
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the TFOpLambda layer."""
import tensorflow.compat.v2 as tf
from tf_keras import backend
from tf_keras.engine import keras_tensor
from tf_keras.engine.base_layer import Layer
# isort: off
from tensorflow.python.platform import tf_logging
from tensorflow.python.util.tf_export import (
get_canonical_name_for_symbol,
)
from tensorflow.python.util.tf_export import (
get_symbol_from_name,
)
class ClassMethod(Layer):
"""Wraps a TF API Class's class method in a `Layer` object.
It is inserted by the Functional API construction whenever users call
a supported TF Class's class method on KerasTensors.
This is useful in the case where users do something like:
x = keras.Input(...)
y = keras.Input(...)
out = tf.RaggedTensor.from_row_splits(x, y)
"""
@tf.__internal__.tracking.no_automatic_dependency_tracking
def __init__(self, cls_ref, method_name, **kwargs):
self.cls_ref = cls_ref
self.method_name = method_name
self.cls_symbol = get_canonical_name_for_symbol(
self.cls_ref, add_prefix_to_v1_names=True
) or get_canonical_name_for_symbol(
self.cls_ref, api_name="keras", add_prefix_to_v1_names=True
)
if "name" not in kwargs:
kwargs["name"] = backend.unique_object_name(
"tf." + self.cls_symbol + "." + self.method_name,
zero_based=True,
avoid_observed_names=True,
)
kwargs["autocast"] = False
# Do not individually trace op layers in the SavedModel.
self._must_restore_from_config = True
super().__init__(**kwargs)
# Preserve all argument data structures when saving/loading a config
# (e.g., don't unnest lists that contain one element)
self._preserve_input_structure_in_config = True
self._call_spec.expects_training_arg = False
self._call_spec.expects_mask_arg = False
def call(self, args, kwargs):
return getattr(self.cls_ref, self.method_name)(*args, **kwargs)
def get_config(self):
if not self.cls_symbol:
raise ValueError(
"This TF-Keras class method conversion tried to convert "
f"a method belonging to class {self.cls_symbol}, a class "
"that is not publicly exposed in the TensorFlow API. "
"To ensure cross-version compatibility of TF-Keras models "
"that use op layers, only op layers produced from "
"public TensorFlow API symbols can be serialized."
)
config = {
"cls_symbol": self.cls_symbol,
"method_name": self.method_name,
}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
@classmethod
def from_config(cls, config, custom_objects=None):
config = config.copy()
symbol_name = config.pop("cls_symbol")
cls_ref = get_symbol_from_name(symbol_name)
if not cls_ref:
raise ValueError(
f"TensorFlow symbol `{symbol_name}` could not be found."
)
config["cls_ref"] = cls_ref
return cls(**config)
class KerasOpDispatcher(tf.__internal__.dispatch.GlobalOpDispatcher):
"""A global dispatcher that allows building a functional model with TF
Ops."""
def handle(self, op, args, kwargs):
"""Handle the specified operation with the specified arguments."""
if any(
isinstance(x, keras_tensor.KerasTensor)
for x in tf.nest.flatten([args, kwargs])
):
return TFOpLambda(op)(*args, **kwargs)
else:
return self.NOT_SUPPORTED
KerasOpDispatcher().register()
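# With the global dispatcher registered, calling a TF op on a KerasTensor
# during Functional model construction inserts a TFOpLambda layer.
# Minimal sketch (assuming an in-scope `keras` import):
#   x = keras.Input(shape=(3,))
#   y = tf.abs(x)  # dispatched as TFOpLambda(tf.abs)(x)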
class InstanceProperty(Layer):
"""Wraps an instance property access (e.g.
`x.foo`) in a TF-Keras Layer.
This layer takes an attribute name `attr_name` in the constructor and,
when called on input tensor `obj` returns `obj.attr_name`.
KerasTensors specialized for specific extension types use it to
represent instance property accesses on the represented object in the
case where the property needs to be dynamically accessed as opposed to
being statically computed from the typespec, e.g.
x = keras.Input(..., ragged=True)
out = x.flat_values
"""
@tf.__internal__.tracking.no_automatic_dependency_tracking
def __init__(self, attr_name, **kwargs):
self.attr_name = attr_name
if "name" not in kwargs:
kwargs["name"] = backend.unique_object_name(
"input." + self.attr_name,
zero_based=True,
avoid_observed_names=True,
)
kwargs["autocast"] = False
# Do not individually trace op layers in the SavedModel.
self._must_restore_from_config = True
super().__init__(**kwargs)
# Preserve all argument data structures when saving/loading a config
# (e.g., don't unnest lists that contain one element)
self._preserve_input_structure_in_config = True
def call(self, obj):
return getattr(obj, self.attr_name)
def get_config(self):
config = {"attr_name": self.attr_name}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
@classmethod
def from_config(cls, config, custom_objects=None):
return cls(**config)
class InstanceMethod(InstanceProperty):
"""Wraps an instance method access (e.g. `x.foo(arg)` in a TF-Keras Layer.
This layer takes an attribute name `attr_name` in the constructor and,
when called on input tensor `obj` with additional arguments `args` and
`kwargs` returns `obj.attr_name(*args, **kwargs)`.
KerasTensors specialized for specific extension types use it to
represent dynamic instance method calls on the represented object, e.g.
x = keras.Input(..., ragged=True)
new_values = keras.Input(...)
out = x.with_values(new_values)
"""
def call(self, obj, args, kwargs):
method = getattr(obj, self.attr_name)
return method(*args, **kwargs)
class TFOpLambda(Layer):
"""Wraps TF API symbols in a `Layer` object.
It is inserted by the Functional API construction whenever users call
a supported TF symbol on KerasTensors.
Like Lambda layers, this layer tries to raise warnings when it detects
that users explicitly use variables in the call, to let them know that
the layer will not capture those variables.
This is useful in the case where users do something like:
x = keras.Input(...)
y = tf.Variable(...)
out = x * y
"""
@tf.__internal__.tracking.no_automatic_dependency_tracking
def __init__(self, function, **kwargs):
self.function = function
self.symbol = get_canonical_name_for_symbol(
self.function, add_prefix_to_v1_names=True
) or get_canonical_name_for_symbol(
self.function, api_name="keras", add_prefix_to_v1_names=True
)
if "name" not in kwargs:
# Generate a name.
# TFOpLambda layers avoid already-observed names,
# because users cannot easily control the generated names.
# Without this avoidance, users would be more likely to run
# into unavoidable duplicate layer name collisions.
# (For standard layers users could just set `name` when creating the
# layer to work around a collision, but they can't do that for
# auto-generated layers)
if self.symbol:
name = "tf." + self.symbol
else:
name = self.function.__name__
kwargs["name"] = backend.unique_object_name(
name, zero_based=True, avoid_observed_names=True
)
kwargs["autocast"] = False
# Decorate the function to produce this layer's call method
def _call_wrapper(*args, **kwargs):
return self._call_wrapper(*args, **kwargs)
self.call = tf.__internal__.decorator.make_decorator(
function, _call_wrapper
)
# Do not individually trace op layers in the SavedModel.
self._must_restore_from_config = True
super().__init__(**kwargs)
# Preserve all argument data structures when saving/loading a config
# (e.g., don't unnest lists that contain one element)
self._preserve_input_structure_in_config = True
# Warning on every invocation will be quite irksome in Eager mode.
self._already_warned = False
self._call_spec.expects_training_arg = False
self._call_spec.expects_mask_arg = False
def _call_wrapper(self, *args, **kwargs):
created_variables = []
def _variable_creator(next_creator, **creator_kwargs):
var = next_creator(**creator_kwargs)
created_variables.append(var)
return var
with tf.GradientTape(
watch_accessed_variables=True
) as tape, tf.variable_creator_scope(_variable_creator):
# We explicitly drop `name` arguments here,
# to guard against the case where an op explicitly has a
# `name` passed (which is susceptible to producing
# multiple ops w/ the same name when the layer is reused)
kwargs.pop("name", None)
result = self.function(*args, **kwargs)
self._check_variables(created_variables, tape.watched_variables())
return result
def _check_variables(self, created_variables, accessed_variables):
if not created_variables and not accessed_variables:
# In the common case that a Lambda layer does not touch a Variable,
# we don't want to incur the runtime cost of assembling any state
# used for checking only to immediately discard it.
return
tracked_weights = set(v.ref() for v in self.weights)
untracked_new_vars = [
v for v in created_variables if v.ref() not in tracked_weights
]
if untracked_new_vars:
variable_str = "\n".join(f" {i}" for i in untracked_new_vars)
raise ValueError(
"The following Variables were created within a Lambda layer "
f"({self.name}) but are not tracked by said layer: "
f"{variable_str}\n"
"The layer cannot safely ensure proper Variable reuse "
"across multiple calls, and consequently this behavior "
"is disallowed for safety reasons. Lambda layers are "
"not well suited for stateful computation; instead, "
"writing a subclassed Layer is the recommend "
"way to define layers with Variables."
)
untracked_used_vars = [
v for v in accessed_variables if v.ref() not in tracked_weights
]
if untracked_used_vars and not self._already_warned:
variable_str = "\n".join(f" {i}" for i in untracked_used_vars)
self._warn(
"The following Variables were used in a Lambda layer's call "
f"({self.name}), but are not present in its tracked objects: "
f"{variable_str}. This is a strong indication that the Lambda "
"layer should be rewritten as a subclassed Layer."
)
self._already_warned = True
def _warn(self, msg):
# This method will be overridden in a unit test to raise an error,
# because self.assertWarns is not universally implemented.
return tf_logging.warning(msg)
def get_config(self):
if not self.symbol:
raise ValueError(
f"This TF-Keras op layer was generated from {self.function}, a "
"method that is not publicly exposed in the TensorFlow API. "
"This may have happened if the method was explicitly "
"decorated to add dispatching support, and it was used "
"during Functional model construction. "
"To ensure cross-version compatibility of TF-Keras models "
"that use op layers, only op layers produced from "
"public TensorFlow API symbols can be serialized."
)
config = {"function": self.symbol}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
@classmethod
def from_config(cls, config, custom_objects=None):
config = config.copy()
symbol_name = config["function"]
function = get_symbol_from_name(symbol_name)
if not function:
raise ValueError(f"TF symbol `{symbol_name}` could not be found.")
config["function"] = function
return cls(**config)
def _delegate_property(keras_tensor_cls, property_name):
"""Register property on a KerasTensor class.
Calling this multiple times with the same arguments should be a no-op.
This method exposes a property on the KerasTensor class that will use an
`InstanceProperty` layer to access the property on the represented
intermediate values in the model.
Args:
keras_tensor_cls: The KerasTensor subclass that should expose the
property.
property_name: The name of the property to expose and delegate to the
represented (Composite)Tensor.
"""
# We use a lambda because we can't create a TF-Keras layer at import time
# due to dynamic layer class versioning.
property_access = property(
lambda self: InstanceProperty(property_name)(self)
)
setattr(keras_tensor_cls, property_name, property_access)
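# For example, after _delegate_property(RaggedKerasTensor, "values"),
# accessing `kt.values` on a RaggedKerasTensor inserts an
# InstanceProperty("values") layer into the functional graph.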
def _delegate_method(keras_tensor_cls, method_name):
"""Register method on a KerasTensor class.
Calling this function multiple times with the same arguments should be a no-op.
This method exposes an instance method on the KerasTensor class that will
use an `InstanceMethod` layer to run the desired method on the represented
intermediate values in the model.
Args:
keras_tensor_cls: The KerasTensor subclass that should expose the
property.
method_name: The name of the method to expose and delegate to the
represented (Composite)Tensor.
"""
def delegate(self, *args, **kwargs):
return InstanceMethod(method_name)(self, args, kwargs)
setattr(keras_tensor_cls, method_name, delegate)
# We do not support the `uniform_row_length` property because it
# returns either `None` or an int tensor, and code that relies on it tends
# to check `is None` directly. Delegating it here would always return a
# `KerasTensor`, regardless of what can be statically inferred. This would
# never equal `None`, breaking code that expects it to be partially-static
# in unpredictable ways.
for ragged_property in [
"values",
"flat_values",
"row_splits",
"nested_row_splits",
]:
_delegate_property(keras_tensor.RaggedKerasTensor, ragged_property)
for ragged_method_name in [
"value_rowids",
"nested_value_rowids",
"nrows",
"row_starts",
"row_limits",
"row_lengths",
"nested_row_lengths",
"bounding_shape",
"with_values",
"with_flat_values",
"with_row_splits_dtype",
"merge_dims",
"to_tensor",
"to_sparse",
]:
_delegate_method(keras_tensor.RaggedKerasTensor, ragged_method_name)
for sparse_property in [
"indices",
"values",
"dense_shape",
]:
_delegate_property(keras_tensor.SparseKerasTensor, sparse_property)
for sparse_method in [
"with_values",
]:
_delegate_method(keras_tensor.SparseKerasTensor, sparse_method)
class TFClassMethodDispatcher(tf.__internal__.dispatch.OpDispatcher):
"""A class method dispatcher that allows building a functional model with TF
class methods."""
def __init__(self, cls, method_name):
self.cls = cls
self.method_name = method_name
def handle(self, args, kwargs):
"""Handle the specified operation with the specified arguments."""
if any(
isinstance(x, keras_tensor.KerasTensor)
for x in tf.nest.flatten([args, kwargs])
):
return ClassMethod(self.cls, self.method_name)(args[1:], kwargs)
else:
return self.NOT_SUPPORTED
for ragged_class_method in [
"from_value_rowids",
"from_row_splits",
"from_row_lengths",
"from_row_starts",
"from_row_limits",
"from_uniform_row_length",
"from_nested_value_rowids",
"from_nested_row_splits",
"from_nested_row_lengths",
"from_tensor",
"from_sparse",
]:
TFClassMethodDispatcher(tf.RaggedTensor, ragged_class_method).register(
getattr(tf.RaggedTensor, ragged_class_method)
)
class SlicingOpLambda(TFOpLambda):
"""Wraps TF API symbols in a `Layer` object.
It is inserted by the Functional API construction whenever users call
a supported TF symbol on KerasTensors.
Like Lambda layers, this layer tries to raise warnings when it detects
that users explicitly use variables in the call, to let them know that
the layer will not capture those variables.
This is useful in the case where users do something like:
x = keras.Input(...)
y = tf.Variable(...)
out = x * y
"""
@tf.__internal__.tracking.no_automatic_dependency_tracking
def __init__(self, function, **kwargs):
super().__init__(function, **kwargs)
original_call = self.call
# Decorate the function to produce this layer's call method
def _call_wrapper(*args, **kwargs):
# Turn any slice dicts in the args back into `slice` objects.
# This conversion cannot use nest.flatten/map_structure,
# because dicts are flattened by nest while slices aren't.
# So, map_structure would only see the individual elements in the
# dict.
# This can't use map_structure_up_to either because the
# 'shallowness' of the shallow tree would have to vary depending on
# if only one dim or multiple are being sliced.
new_args = []
for arg in args:
arg = _dict_to_slice(arg)
if isinstance(arg, (list, tuple)):
new_arg = []
for sub_arg in arg:
new_arg.append(_dict_to_slice(sub_arg))
arg = new_arg
new_args.append(arg)
# Handle the kwargs too.
new_kwargs = {}
for key, value in kwargs.items():
value = _dict_to_slice(value)
if isinstance(value, (list, tuple)):
new_value = []
for v in value:
new_value.append(_dict_to_slice(v))
value = new_value
new_kwargs[key] = value
return original_call(*new_args, **new_kwargs)
self.call = tf.__internal__.decorator.make_decorator(
original_call, _call_wrapper
)
def _slice_to_dict(x):
if isinstance(x, slice):
return {"start": x.start, "stop": x.stop, "step": x.step}
return x
def _dict_to_slice(x):
if isinstance(x, dict):
return slice(x["start"], x["stop"], x["step"])
return x
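# Round trip: _slice_to_dict(slice(1, 10, 2)) returns
# {"start": 1, "stop": 10, "step": 2} and _dict_to_slice restores the
# original slice. The dict form exists because tf.nest flattens dicts
# but treats slice objects as opaque leaves.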
class TFSlicingOpDispatcher(tf.__internal__.dispatch.OpDispatcher):
"""A global dispatcher that allows building a functional model with TF
Ops."""
def __init__(self, op):
self.op = op
def handle(self, args, kwargs):
"""Handle the specified operation with the specified arguments."""
args = tf.nest.map_structure(_slice_to_dict, args)
kwargs = tf.nest.map_structure(_slice_to_dict, kwargs)
if any(
isinstance(x, keras_tensor.KerasTensor)
for x in tf.nest.flatten([args, kwargs])
):
return SlicingOpLambda(self.op)(*args, **kwargs)
else:
return self.NOT_SUPPORTED
for slicing_op in [
tf.__operators__.getitem,
tf.compat.v1.boolean_mask,
tf.boolean_mask,
tf.__operators__.ragged_getitem,
]:
TFSlicingOpDispatcher(slicing_op).register(slicing_op)
# --- End of file: tf-keras/tf_keras/layers/core/tf_op_layer.py ---
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for locally-connected layers."""
import os
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
import tf_keras as keras
from tf_keras.layers.locally_connected import locally_connected_utils
from tf_keras.optimizers.legacy import rmsprop
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
# isort: off
from tensorflow.python.framework import (
test_util as tf_test_util,
)
from tensorflow.python.training.rmsprop import (
RMSPropOptimizer,
)
_DATA_FORMAT_PADDING_IMPLEMENTATION = [
{"data_format": "channels_first", "padding": "valid", "implementation": 1},
{"data_format": "channels_first", "padding": "same", "implementation": 1},
{"data_format": "channels_last", "padding": "valid", "implementation": 1},
{"data_format": "channels_last", "padding": "same", "implementation": 1},
{"data_format": "channels_first", "padding": "valid", "implementation": 2},
{"data_format": "channels_first", "padding": "same", "implementation": 2},
{"data_format": "channels_last", "padding": "valid", "implementation": 2},
{"data_format": "channels_last", "padding": "same", "implementation": 2},
{"data_format": "channels_first", "padding": "valid", "implementation": 3},
{"data_format": "channels_first", "padding": "same", "implementation": 3},
{"data_format": "channels_last", "padding": "valid", "implementation": 3},
{"data_format": "channels_last", "padding": "same", "implementation": 3},
]
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class LocallyConnected1DLayersTest(tf.test.TestCase, parameterized.TestCase):
@parameterized.parameters(_DATA_FORMAT_PADDING_IMPLEMENTATION)
def test_locallyconnected_1d(self, data_format, padding, implementation):
with self.cached_session():
num_samples = 2
num_steps = 8
input_dim = 5
filter_length = 3
filters = 4
for strides in [1]:
if padding == "same" and strides != 1:
continue
kwargs = {
"filters": filters,
"kernel_size": filter_length,
"padding": padding,
"strides": strides,
"data_format": data_format,
"implementation": implementation,
}
if padding == "same" and implementation == 1:
self.assertRaises(
ValueError, keras.layers.LocallyConnected1D, **kwargs
)
else:
test_utils.layer_test(
keras.layers.LocallyConnected1D,
kwargs=kwargs,
input_shape=(num_samples, num_steps, input_dim),
)
@parameterized.parameters(_DATA_FORMAT_PADDING_IMPLEMENTATION)
def test_locallyconnected_1d_regularization(
self, data_format, padding, implementation
):
num_samples = 2
num_steps = 8
input_dim = 5
filter_length = 3
filters = 4
kwargs = {
"filters": filters,
"kernel_size": filter_length,
"kernel_regularizer": "l2",
"bias_regularizer": "l2",
"activity_regularizer": "l2",
"data_format": data_format,
"implementation": implementation,
"padding": padding,
}
if padding == "same" and implementation == 1:
self.assertRaises(
ValueError, keras.layers.LocallyConnected1D, **kwargs
)
else:
with self.cached_session():
layer = keras.layers.LocallyConnected1D(**kwargs)
layer.build((num_samples, num_steps, input_dim))
self.assertLen(layer.losses, 2)
layer(
keras.backend.variable(
np.ones((num_samples, num_steps, input_dim))
)
)
self.assertLen(layer.losses, 3)
k_constraint = keras.constraints.max_norm(0.01)
b_constraint = keras.constraints.max_norm(0.01)
kwargs = {
"filters": filters,
"kernel_size": filter_length,
"kernel_constraint": k_constraint,
"bias_constraint": b_constraint,
}
with self.cached_session():
layer = keras.layers.LocallyConnected1D(**kwargs)
layer.build((num_samples, num_steps, input_dim))
self.assertEqual(layer.kernel.constraint, k_constraint)
self.assertEqual(layer.bias.constraint, b_constraint)
def test_locallyconnected1d_invalid_output_shapes(self):
kwargs = {"filters": 2, "kernel_size": 10}
with self.assertRaisesRegex(
ValueError, r"""One of the dimensions in the output is <= 0 """
):
layer = keras.layers.LocallyConnected1D(**kwargs)
layer.build((None, 5, 2))
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class LocallyConnected2DLayersTest(tf.test.TestCase, parameterized.TestCase):
@parameterized.parameters(_DATA_FORMAT_PADDING_IMPLEMENTATION)
def test_locallyconnected_2d(self, data_format, padding, implementation):
with self.cached_session():
num_samples = 8
filters = 3
stack_size = 4
num_row = 6
num_col = 10
for strides in [(1, 1), (2, 2)]:
if padding == "same" and strides != (1, 1):
continue
kwargs = {
"filters": filters,
"kernel_size": 3,
"padding": padding,
"kernel_regularizer": "l2",
"bias_regularizer": "l2",
"strides": strides,
"data_format": data_format,
"implementation": implementation,
}
if padding == "same" and implementation == 1:
self.assertRaises(
ValueError, keras.layers.LocallyConnected2D, **kwargs
)
else:
test_utils.layer_test(
keras.layers.LocallyConnected2D,
kwargs=kwargs,
input_shape=(num_samples, num_row, num_col, stack_size),
)
@parameterized.parameters(_DATA_FORMAT_PADDING_IMPLEMENTATION)
def test_locallyconnected_2d_channels_first(
self, data_format, padding, implementation
):
with self.cached_session():
num_samples = 8
filters = 3
stack_size = 4
num_row = 6
num_col = 10
kwargs = {
"filters": filters,
"kernel_size": 3,
"data_format": data_format,
"implementation": implementation,
"padding": padding,
}
if padding == "same" and implementation == 1:
self.assertRaises(
ValueError, keras.layers.LocallyConnected2D, **kwargs
)
else:
test_utils.layer_test(
keras.layers.LocallyConnected2D,
kwargs=kwargs,
input_shape=(num_samples, num_row, num_col, stack_size),
)
@parameterized.parameters(_DATA_FORMAT_PADDING_IMPLEMENTATION)
def test_locallyconnected_2d_regularization(
self, data_format, padding, implementation
):
num_samples = 2
filters = 3
stack_size = 4
num_row = 6
num_col = 7
kwargs = {
"filters": filters,
"kernel_size": 3,
"kernel_regularizer": "l2",
"bias_regularizer": "l2",
"activity_regularizer": "l2",
"implementation": implementation,
"padding": padding,
"data_format": data_format,
}
if padding == "same" and implementation == 1:
self.assertRaises(
ValueError, keras.layers.LocallyConnected2D, **kwargs
)
else:
with self.cached_session():
layer = keras.layers.LocallyConnected2D(**kwargs)
layer.build((num_samples, num_row, num_col, stack_size))
self.assertLen(layer.losses, 2)
layer(
keras.backend.variable(
np.ones((num_samples, num_row, num_col, stack_size))
)
)
self.assertLen(layer.losses, 3)
k_constraint = keras.constraints.max_norm(0.01)
b_constraint = keras.constraints.max_norm(0.01)
kwargs = {
"filters": filters,
"kernel_size": 3,
"kernel_constraint": k_constraint,
"bias_constraint": b_constraint,
}
with self.cached_session():
layer = keras.layers.LocallyConnected2D(**kwargs)
layer.build((num_samples, num_row, num_col, stack_size))
self.assertEqual(layer.kernel.constraint, k_constraint)
self.assertEqual(layer.bias.constraint, b_constraint)
def test_locallyconnected2d_invalid_output_shapes(self):
kwargs = {"filters": 2, "kernel_size": 10}
with self.assertRaisesRegex(
ValueError, r"""One of the dimensions in the output is <= 0 """
):
layer = keras.layers.LocallyConnected2D(**kwargs)
layer.build((None, 5, 5, 2))
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class LocallyConnectedImplementationModeTest(
tf.test.TestCase, parameterized.TestCase
):
@parameterized.parameters(
[
{"width": 1, "data_format": "channels_first"},
{"width": 1, "data_format": "channels_last"},
{"width": 6, "data_format": "channels_first"},
{"width": 6, "data_format": "channels_last"},
]
)
def test_locallyconnected_implementation(self, width, data_format):
with self.cached_session():
num_samples = 4
num_classes = 3
num_epochs = 2
np.random.seed(1)
tf_test_util.random_seed.set_seed(1)
# Following code generates sparse targets and converts them
# to one-hot encoded vectors
# Create sparse targets eg. [0,1,2]
sparse_targets = np.random.randint(0, num_classes, (num_samples,))
# Convert to one-hot encoding
# Final targets:
# [[ 1. 0. 0. ]
# [ 0. 1. 0. ]
# [ 0. 0. 1. ]]
targets = np.zeros((sparse_targets.size, num_classes))
targets[np.arange(sparse_targets.size), sparse_targets] = 1
height = 7
filters = 2
inputs = get_inputs(
data_format, filters, height, num_samples, width
)
kernel_x = (3,)
kernel_y = () if width == 1 else (2,)
stride_x = (1,)
stride_y = () if width == 1 else (3,)
layers = 2
kwargs = {
"layers": layers,
"filters": filters,
"kernel_size": kernel_x + kernel_y,
"strides": stride_x + stride_y,
"data_format": data_format,
"num_classes": num_classes,
}
model_1 = get_model(implementation=1, **kwargs)
model_2 = get_model(implementation=2, **kwargs)
model_3 = get_model(implementation=3, **kwargs)
# Build models.
model_1.train_on_batch(inputs, targets)
model_2.train_on_batch(inputs, targets)
model_3.train_on_batch(inputs, targets)
# Copy weights.
copy_model_weights(model_from=model_2, model_to=model_1)
copy_model_weights(model_from=model_2, model_to=model_3)
# Compare outputs at initialization.
out_1 = model_1(inputs)
out_2 = model_2(inputs)
out_3 = model_3(inputs)
self.assertAllCloseAccordingToType(
out_2, out_1, rtol=1e-5, atol=1e-5
)
self.assertAllCloseAccordingToType(
out_2, out_3, rtol=1e-5, atol=1e-5
)
self.assertAllCloseAccordingToType(
out_1, out_3, rtol=1e-5, atol=1e-5
)
# Train.
model_1.fit(
x=inputs,
y=targets,
epochs=num_epochs,
batch_size=num_samples,
shuffle=False,
)
model_2.fit(
x=inputs,
y=targets,
epochs=num_epochs,
batch_size=num_samples,
shuffle=False,
)
model_3.fit(
x=inputs,
y=targets,
epochs=num_epochs,
batch_size=num_samples,
shuffle=False,
)
# Compare outputs after a few training steps.
out_1 = model_1(inputs)
out_2 = model_2(inputs)
out_3 = model_3(inputs)
self.assertAllCloseAccordingToType(out_2, out_1, atol=2e-4)
self.assertAllCloseAccordingToType(out_2, out_3, atol=2e-4)
self.assertAllCloseAccordingToType(out_1, out_3, atol=2e-4)
@parameterized.parameters(
[
{"width": 1, "data_format": "channels_first"},
{"width": 1, "data_format": "channels_last"},
{"width": 6, "data_format": "channels_first"},
{"width": 6, "data_format": "channels_last"},
]
)
def test_locallyconnected_save(self, width, data_format):
with self.cached_session():
num_samples = 4
num_classes = 3
num_epochs = 2
np.random.seed(1)
tf_test_util.random_seed.set_seed(1)
# Following code generates sparse targets and converts them
# to one-hot encoded vectors
# Create sparse targets eg. [0,1,2]
sparse_targets = np.random.randint(0, num_classes, (num_samples,))
# Convert to one-hot encoding
# Final targets:
# [[ 1. 0. 0. ]
# [ 0. 1. 0. ]
# [ 0. 0. 1. ]]
targets = np.zeros((sparse_targets.size, num_classes))
targets[np.arange(sparse_targets.size), sparse_targets] = 1
height = 7
filters = 2
inputs = get_inputs(
data_format, filters, height, num_samples, width
)
kernel_x = (3,)
kernel_y = () if width == 1 else (2,)
stride_x = (1,)
stride_y = () if width == 1 else (3,)
layers = 2
kwargs = {
"layers": layers,
"filters": filters,
"kernel_size": kernel_x + kernel_y,
"strides": stride_x + stride_y,
"data_format": data_format,
"num_classes": num_classes,
}
model_1 = get_model_saveable(implementation=1, **kwargs)
model_2 = get_model_saveable(implementation=2, **kwargs)
model_3 = get_model_saveable(implementation=3, **kwargs)
# Train.
model_1.fit(
x=inputs,
y=targets,
epochs=num_epochs,
batch_size=num_samples,
shuffle=False,
)
model_2.fit(
x=inputs,
y=targets,
epochs=num_epochs,
batch_size=num_samples,
shuffle=False,
)
model_3.fit(
x=inputs,
y=targets,
epochs=num_epochs,
batch_size=num_samples,
shuffle=False,
)
out_1_before = model_1(inputs)
out_2_before = model_2(inputs)
out_3_before = model_3(inputs)
path_1 = os.path.join(self.get_temp_dir(), "model_1_path")
model_1.save(path_1)
model_1 = keras.models.load_model(
path_1, custom_objects={"xent": xent}
)
path_2 = os.path.join(self.get_temp_dir(), "model_2_path")
model_2.save(path_2)
model_2 = keras.models.load_model(
path_2, custom_objects={"xent": xent}
)
path_3 = os.path.join(self.get_temp_dir(), "model_3_path")
model_3.save(path_3)
model_3 = keras.models.load_model(
path_3, custom_objects={"xent": xent}
)
out_1_after = model_1(inputs)
out_2_after = model_2(inputs)
out_3_after = model_3(inputs)
self.assertAllCloseAccordingToType(
out_1_before, out_1_after, atol=2e-4
)
self.assertAllCloseAccordingToType(
out_2_before, out_2_after, atol=2e-4
)
self.assertAllCloseAccordingToType(
out_3_before, out_3_after, atol=2e-4
)
def test_make_2d(self):
input_shapes = [
(0,),
(0, 0),
(1,),
(2,),
(3,),
(1, 0),
(0, 3),
(1, 1),
(1, 2),
(3, 1),
(2, 2),
(3, 3),
(1, 0, 1),
(5, 2, 3),
(3, 5, 6, 7, 0),
(3, 2, 2, 4, 4),
(1, 2, 3, 4, 7, 2),
]
np.random.seed(1)
for input_shape in input_shapes:
inputs = np.random.normal(0, 1, input_shape)
inputs_tf = keras.backend.variable(inputs)
split_dim = np.random.randint(0, inputs.ndim + 1)
shape_2d = (
int(np.prod(inputs.shape[:split_dim])),
int(np.prod(inputs.shape[split_dim:])),
)
inputs_2d = np.reshape(inputs, shape_2d)
inputs_2d_tf = locally_connected_utils.make_2d(inputs_tf, split_dim)
inputs_2d_tf = keras.backend.get_value(inputs_2d_tf)
self.assertAllCloseAccordingToType(inputs_2d, inputs_2d_tf)
def get_inputs(data_format, filters, height, num_samples, width):
if data_format == "channels_first":
if width == 1:
input_shape = (filters, height)
else:
input_shape = (filters, height, width)
elif data_format == "channels_last":
if width == 1:
input_shape = (height, filters)
else:
input_shape = (height, width, filters)
else:
raise NotImplementedError(data_format)
inputs = np.random.normal(0, 1, (num_samples,) + input_shape).astype(
np.float32
)
return inputs
def xent(y_true, y_pred):
y_true = keras.backend.cast(keras.backend.reshape(y_true, (-1,)), tf.int32)
return tf.compat.v1.nn.sparse_softmax_cross_entropy_with_logits(
labels=y_true, logits=y_pred
)
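# `xent` is a custom sparse cross-entropy loss; the tests above pass it
# via `custom_objects` so `load_model` can resolve the symbol when
# deserializing.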
def get_model(
implementation,
filters,
kernel_size,
strides,
layers,
num_classes,
data_format,
):
model = keras.Sequential()
if len(kernel_size) == 1:
lc_layer = keras.layers.LocallyConnected1D
elif len(kernel_size) == 2:
lc_layer = keras.layers.LocallyConnected2D
else:
raise NotImplementedError(kernel_size)
for _ in range(layers):
model.add(
lc_layer(
padding="valid",
kernel_initializer=keras.initializers.random_normal(),
bias_initializer=keras.initializers.random_normal(),
filters=filters,
strides=strides,
kernel_size=kernel_size,
activation=keras.activations.relu,
data_format=data_format,
implementation=implementation,
)
)
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(num_classes))
model.compile(
optimizer=RMSPropOptimizer(0.01),
metrics=[keras.metrics.categorical_accuracy],
loss=keras.losses.CategoricalCrossentropy(from_logits=True),
)
return model
def get_model_saveable(
implementation,
filters,
kernel_size,
strides,
layers,
num_classes,
data_format,
):
model = keras.Sequential()
if len(kernel_size) == 1:
lc_layer = keras.layers.LocallyConnected1D
elif len(kernel_size) == 2:
lc_layer = keras.layers.LocallyConnected2D
else:
raise NotImplementedError(kernel_size)
for _ in range(layers):
model.add(
lc_layer(
padding="valid",
kernel_initializer=keras.initializers.random_normal(),
bias_initializer=keras.initializers.random_normal(),
filters=filters,
strides=strides,
kernel_size=kernel_size,
activation=keras.activations.relu,
data_format=data_format,
implementation=implementation,
)
)
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(num_classes))
model.compile(
optimizer=rmsprop.RMSProp(learning_rate=0.01),
metrics=[keras.metrics.categorical_accuracy],
loss=keras.losses.CategoricalCrossentropy(from_logits=True),
)
return model
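# The helpers below copy weights from implementation 2 (which stores a
# dense kernel plus a sparsity mask) into the layouts used by
# implementation 1 (per-position kernels) and implementation 3 (flat
# sparse weights).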
def copy_lc_weights_2_to_1(lc_layer_2_from, lc_layer_1_to):
lc_2_kernel, lc_2_bias = lc_layer_2_from.weights
lc_2_kernel_masked = lc_2_kernel * lc_layer_2_from.kernel_mask
data_format = lc_layer_2_from.data_format
if data_format == "channels_first":
if isinstance(lc_layer_2_from, keras.layers.LocallyConnected1D):
permutation = (3, 0, 1, 2)
elif isinstance(lc_layer_2_from, keras.layers.LocallyConnected2D):
permutation = (4, 5, 0, 1, 2, 3)
else:
raise NotImplementedError(lc_layer_2_from)
elif data_format == "channels_last":
if isinstance(lc_layer_2_from, keras.layers.LocallyConnected1D):
permutation = (2, 0, 1, 3)
elif isinstance(lc_layer_2_from, keras.layers.LocallyConnected2D):
permutation = (3, 4, 0, 1, 2, 5)
else:
raise NotImplementedError(lc_layer_2_from)
else:
raise NotImplementedError(data_format)
lc_2_kernel_masked = keras.backend.permute_dimensions(
lc_2_kernel_masked, permutation
)
lc_2_kernel_mask = tf.not_equal(lc_2_kernel_masked, 0)
lc_2_kernel_flat = tf.compat.v1.boolean_mask(
lc_2_kernel_masked, lc_2_kernel_mask
)
lc_2_kernel_reshaped = keras.backend.reshape(
lc_2_kernel_flat, lc_layer_1_to.kernel.shape
)
lc_2_kernel_reshaped = keras.backend.get_value(lc_2_kernel_reshaped)
lc_2_bias = keras.backend.get_value(lc_2_bias)
lc_layer_1_to.set_weights([lc_2_kernel_reshaped, lc_2_bias])
def copy_lc_weights_2_to_3(lc_layer_2_from, lc_layer_3_to):
lc_2_kernel, lc_2_bias = lc_layer_2_from.weights
lc_2_kernel_masked = lc_2_kernel * lc_layer_2_from.kernel_mask
lc_2_kernel_masked = locally_connected_utils.make_2d(
lc_2_kernel_masked,
split_dim=keras.backend.ndim(lc_2_kernel_masked) // 2,
)
lc_2_kernel_masked = keras.backend.transpose(lc_2_kernel_masked)
lc_2_kernel_mask = tf.not_equal(lc_2_kernel_masked, 0)
lc_2_kernel_flat = tf.compat.v1.boolean_mask(
lc_2_kernel_masked, lc_2_kernel_mask
)
lc_2_kernel_flat = keras.backend.get_value(lc_2_kernel_flat)
lc_2_bias = keras.backend.get_value(lc_2_bias)
lc_layer_3_to.set_weights([lc_2_kernel_flat, lc_2_bias])
def copy_model_weights(model_from, model_to):
for l in range(len(model_from.layers)):
layer_from = model_from.layers[l]
layer_to = model_to.layers[l]
if isinstance(
layer_from,
(keras.layers.LocallyConnected2D, keras.layers.LocallyConnected1D),
) and isinstance(
layer_to,
(keras.layers.LocallyConnected2D, keras.layers.LocallyConnected1D),
):
if layer_from.implementation == 2:
if layer_to.implementation == 1:
copy_lc_weights_2_to_1(layer_from, layer_to)
elif layer_to.implementation == 3:
copy_lc_weights_2_to_3(layer_from, layer_to)
else:
raise NotImplementedError
else:
raise NotImplementedError
elif isinstance(layer_from, keras.layers.Dense):
weights_2, bias_2 = layer_from.weights
weights_2 = keras.backend.get_value(weights_2)
bias_2 = keras.backend.get_value(bias_2)
layer_to.set_weights([weights_2, bias_2])
else:
continue
if __name__ == "__main__":
tf.test.main()
# --- End of file: tf-keras/tf_keras/layers/locally_connected/locally_connected_test.py ---
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Average pooling 2D layer."""
import tensorflow.compat.v2 as tf
from tf_keras.layers.pooling.base_pooling2d import Pooling2D
# isort: off
from tensorflow.python.util.tf_export import keras_export
@keras_export("keras.layers.AveragePooling2D", "keras.layers.AvgPool2D")
class AveragePooling2D(Pooling2D):
"""Average pooling operation for spatial data.
Downsamples the input along its spatial dimensions (height and width)
by taking the average value over an input window
(of size defined by `pool_size`) for each channel of the input.
The window is shifted by `strides` along each dimension.
The resulting output when using `"valid"` padding option has a shape
(number of rows or columns) of:
`output_shape = math.floor((input_shape - pool_size) / strides) + 1`
(when `input_shape >= pool_size`)
The resulting output shape when using the `"same"` padding option is:
`output_shape = math.floor((input_shape - 1) / strides) + 1`
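For instance, pooling a height of 3 with `pool_size=2` and `strides=1`
gives `floor((3 - 2) / 1) + 1 = 2` output rows with `"valid"` padding and
`floor((3 - 1) / 1) + 1 = 3` with `"same"`, matching the examples below.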
For example, for `strides=(1, 1)` and `padding="valid"`:
>>> x = tf.constant([[1., 2., 3.],
... [4., 5., 6.],
... [7., 8., 9.]])
>>> x = tf.reshape(x, [1, 3, 3, 1])
>>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),
... strides=(1, 1), padding='valid')
>>> avg_pool_2d(x)
<tf.Tensor: shape=(1, 2, 2, 1), dtype=float32, numpy=
array([[[[3.],
[4.]],
[[6.],
[7.]]]], dtype=float32)>
For example, for `strides=(2, 2)` and `padding="valid"`:
>>> x = tf.constant([[1., 2., 3., 4.],
... [5., 6., 7., 8.],
... [9., 10., 11., 12.]])
>>> x = tf.reshape(x, [1, 3, 4, 1])
>>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),
... strides=(2, 2), padding='valid')
>>> avg_pool_2d(x)
<tf.Tensor: shape=(1, 1, 2, 1), dtype=float32, numpy=
array([[[[3.5],
[5.5]]]], dtype=float32)>
For example, for `strides=(1, 1)` and `padding="same"`:
>>> x = tf.constant([[1., 2., 3.],
... [4., 5., 6.],
... [7., 8., 9.]])
>>> x = tf.reshape(x, [1, 3, 3, 1])
>>> avg_pool_2d = tf.keras.layers.AveragePooling2D(pool_size=(2, 2),
... strides=(1, 1), padding='same')
>>> avg_pool_2d(x)
<tf.Tensor: shape=(1, 3, 3, 1), dtype=float32, numpy=
array([[[[3.],
[4.],
[4.5]],
[[6.],
[7.],
[7.5]],
[[7.5],
[8.5],
[9.]]]], dtype=float32)>
Args:
pool_size: integer or tuple of 2 integers,
factors by which to downscale (vertical, horizontal).
`(2, 2)` will halve the input in both spatial dimensions.
If only one integer is specified, the same window length
will be used for both dimensions.
strides: Integer, tuple of 2 integers, or None.
Strides values.
If None, it will default to `pool_size`.
padding: One of `"valid"` or `"same"` (case-insensitive).
`"valid"` means no padding. `"same"` results in padding evenly to
the left/right or up/down of the input such that output has the same
height/width dimension as the input.
data_format: A string,
one of `channels_last` (default) or `channels_first`.
The ordering of the dimensions in the inputs.
`channels_last` corresponds to inputs with shape
`(batch, height, width, channels)` while `channels_first`
corresponds to inputs with shape
`(batch, channels, height, width)`.
      When unspecified, it uses the `image_data_format` value found in
      your TF-Keras config file at `~/.keras/keras.json`, falling back to
      `'channels_last'` if that file does not exist.
Input shape:
- If `data_format='channels_last'`:
4D tensor with shape `(batch_size, rows, cols, channels)`.
- If `data_format='channels_first'`:
4D tensor with shape `(batch_size, channels, rows, cols)`.
Output shape:
- If `data_format='channels_last'`:
4D tensor with shape `(batch_size, pooled_rows, pooled_cols, channels)`.
- If `data_format='channels_first'`:
4D tensor with shape `(batch_size, channels, pooled_rows, pooled_cols)`.
"""
def __init__(
self,
pool_size=(2, 2),
strides=None,
padding="valid",
data_format=None,
**kwargs
):
super().__init__(
tf.nn.avg_pool,
pool_size=pool_size,
strides=strides,
padding=padding,
data_format=data_format,
**kwargs
)
# Alias
AvgPool2D = AveragePooling2D
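# A minimal usage sketch (illustrative only, not part of the library):
#
#   import tensorflow as tf
#   x = tf.random.uniform((1, 4, 4, 3))
#   pool = tf.keras.layers.AveragePooling2D(pool_size=2)  # AvgPool2D is the same class
#   y = pool(x)  # shape (1, 2, 2, 3), since floor((4 - 2) / 2) + 1 = 2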
# ==== End of file: tf-keras/tf_keras/layers/pooling/average_pooling2d.py ====
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for global max pooling layers."""
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
import tf_keras as keras
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class GlobalMaxPoolingTest(tf.test.TestCase, parameterized.TestCase):
def test_global_max_pooling_1d(self):
test_utils.layer_test(
keras.layers.GlobalMaxPooling1D, input_shape=(3, 4, 5)
)
test_utils.layer_test(
keras.layers.GlobalMaxPooling1D,
kwargs={"data_format": "channels_first"},
input_shape=(3, 4, 5),
)
def test_global_max_pooling_2d_with_ragged(self):
ragged_data = tf.ragged.constant(
[
[[[1.0], [1.0]], [[2.0], [2.0]], [[3.0], [3.0]]],
[[[1.0], [1.0]], [[2.0], [2.0]]],
],
ragged_rank=1,
)
dense_data = ragged_data.to_tensor()
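        # `to_tensor()` zero-pads the shorter row; every real value here is
        # positive, so the padding never wins the max and the ragged and
        # dense paths should produce identical results.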
inputs = keras.Input(shape=(None, 2, 1), dtype="float32", ragged=True)
out = keras.layers.GlobalMaxPooling2D()(inputs)
model = keras.models.Model(inputs=inputs, outputs=out)
output_ragged = model.predict(ragged_data, steps=1)
inputs = keras.Input(shape=(None, 2, 1), dtype="float32")
out = keras.layers.GlobalMaxPooling2D()(inputs)
model = keras.models.Model(inputs=inputs, outputs=out)
output_dense = model.predict(dense_data, steps=1)
self.assertAllEqual(output_ragged, output_dense)
def test_global_max_pooling_2d(self):
test_utils.layer_test(
keras.layers.GlobalMaxPooling2D,
kwargs={"data_format": "channels_first"},
input_shape=(3, 4, 5, 6),
)
test_utils.layer_test(
keras.layers.GlobalMaxPooling2D,
kwargs={"data_format": "channels_last"},
input_shape=(3, 5, 6, 4),
)
def test_global_maxpooling_3d(self):
test_utils.layer_test(
keras.layers.GlobalMaxPooling3D,
kwargs={"data_format": "channels_first"},
input_shape=(3, 4, 3, 4, 3),
)
test_utils.layer_test(
keras.layers.GlobalMaxPooling3D,
kwargs={"data_format": "channels_last"},
input_shape=(3, 4, 3, 4, 3),
)
def test_global_max_pooling_1d_keepdims(self):
test_utils.layer_test(
keras.layers.GlobalMaxPooling1D,
kwargs={"keepdims": True},
input_shape=(3, 4, 5),
expected_output_shape=(None, 1, 5),
)
test_utils.layer_test(
keras.layers.GlobalMaxPooling1D,
kwargs={"data_format": "channels_first", "keepdims": True},
input_shape=(3, 4, 5),
expected_output_shape=(None, 4, 1),
)
def test_global_max_pooling_2d_keepdims(self):
test_utils.layer_test(
keras.layers.GlobalMaxPooling2D,
kwargs={"data_format": "channels_first", "keepdims": True},
input_shape=(3, 4, 5, 6),
expected_output_shape=(None, 4, 1, 1),
)
test_utils.layer_test(
keras.layers.GlobalMaxPooling2D,
kwargs={"data_format": "channels_last", "keepdims": True},
input_shape=(3, 4, 5, 6),
expected_output_shape=(None, 1, 1, 6),
)
def test_global_max_pooling_3d_keepdims(self):
test_utils.layer_test(
keras.layers.GlobalMaxPooling3D,
kwargs={"data_format": "channels_first", "keepdims": True},
input_shape=(3, 4, 3, 4, 3),
expected_output_shape=(None, 4, 1, 1, 1),
)
test_utils.layer_test(
keras.layers.GlobalMaxPooling3D,
kwargs={"data_format": "channels_last", "keepdims": True},
input_shape=(3, 4, 3, 4, 3),
expected_output_shape=(None, 1, 1, 1, 3),
)
def test_global_max_pooling_1d_invalid_input_dimension(self):
        with self.assertRaisesRegex(ValueError, "Incorrect input shape"):
layer = keras.layers.GlobalMaxPooling1D()
layer.build((None, 0, 2))
def test_global_max_pooling_3d_invalid_input_dimension(self):
        with self.assertRaisesRegex(ValueError, "Incorrect input shape"):
layer = keras.layers.GlobalMaxPooling3D(keepdims=True)
layer.build((None, 0, 16, 16, 3))
if __name__ == "__main__":
tf.test.main()
# ==== End of file: tf-keras/tf_keras/layers/pooling/global_max_pooling_test.py ====
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for TF-Keras text category_encoding preprocessing layer."""
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
import tf_keras as keras
from tf_keras import backend
from tf_keras.layers import core
from tf_keras.layers.preprocessing import category_encoding
from tf_keras.layers.preprocessing import preprocessing_test_utils
from tf_keras.testing_infra import test_combinations
@test_combinations.run_all_keras_modes(always_skip_v1=True)
class CategoryEncodingInputTest(
test_combinations.TestCase, preprocessing_test_utils.PreprocessingLayerTest
):
@parameterized.named_parameters(
("list", list),
("tuple", tuple),
("numpy", np.array),
("array_like", preprocessing_test_utils.ArrayLike),
)
def test_tensor_like_inputs(self, data_fn):
category_data = data_fn([1, 2, 3, 3, 0])
weight_data = data_fn([1, 2, 3, 1, 7])
expected_output = [7, 1, 2, 4, 0, 0]
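        # COUNT mode sums the weight of each occurrence per token id:
        # id 0 -> 7, id 1 -> 1, id 2 -> 2, id 3 -> 3 + 1 = 4, ids 4-5 unseen.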
layer = category_encoding.CategoryEncoding(
num_tokens=6, output_mode=category_encoding.COUNT
)
output_data = layer(category_data, count_weights=weight_data)
self.assertAllEqual(output_data, expected_output)
def test_compute_output_shape(self):
layer = category_encoding.CategoryEncoding(5)
output_shape = layer.compute_output_shape((None, 1))
self.assertListEqual(output_shape.as_list(), [None, 5])
output_shape = layer.compute_output_shape([None, 1])
self.assertListEqual(output_shape.as_list(), [None, 5])
def test_dense_input_sparse_output(self):
input_array = tf.constant([[1, 2, 3], [3, 3, 0]])
# The expected output should be (X for missing value):
# [[X, 1, 1, 1, X, X]
# [1, X, X, 2, X, X]]
expected_indices = [[0, 1], [0, 2], [0, 3], [1, 0], [1, 3]]
expected_values = [1, 1, 1, 1, 2]
num_tokens = 6
input_data = keras.Input(shape=(None,), dtype=tf.int32)
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens,
output_mode=category_encoding.COUNT,
sparse=True,
)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
sp_output_dataset = model.predict(input_array, steps=1)
self.assertAllEqual(expected_values, sp_output_dataset.values)
self.assertAllEqual(expected_indices, sp_output_dataset.indices)
# Assert sparse output is same as dense output.
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens,
output_mode=category_encoding.COUNT,
sparse=False,
)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array, steps=1)
self.assertAllEqual(
tf.sparse.to_dense(sp_output_dataset, default_value=0),
output_dataset,
)
def test_sparse_input(self):
input_array = np.array([[1, 2, 3, 0], [0, 3, 1, 0]], dtype=np.int64)
sparse_tensor_data = tf.sparse.from_dense(input_array)
# pyformat: disable
expected_output = [[0, 1, 1, 1, 0, 0], [0, 1, 0, 1, 0, 0]]
# pyformat: enable
num_tokens = 6
expected_output_shape = [None, num_tokens]
input_data = keras.Input(shape=(None,), dtype=tf.int64, sparse=True)
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens, output_mode=category_encoding.MULTI_HOT
)
int_data = layer(input_data)
self.assertAllEqual(expected_output_shape, int_data.shape.as_list())
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(sparse_tensor_data, steps=1)
self.assertAllEqual(expected_output, output_dataset)
def test_sparse_input_with_weights(self):
input_array = np.array([[1, 2, 3, 4], [4, 3, 1, 4]], dtype=np.int64)
weights_array = np.array([[0.1, 0.2, 0.3, 0.4], [0.2, 0.1, 0.4, 0.3]])
sparse_tensor_data = tf.sparse.from_dense(input_array)
sparse_weight_data = tf.sparse.from_dense(weights_array)
# pyformat: disable
expected_output = [[0, 0.1, 0.2, 0.3, 0.4, 0], [0, 0.4, 0, 0.1, 0.5, 0]]
# pyformat: enable
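        # Note that row 2 accumulates the weights of the repeated token 4:
        # 0.2 + 0.3 = 0.5.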
num_tokens = 6
expected_output_shape = [None, num_tokens]
input_data = keras.Input(shape=(None,), dtype=tf.int64, sparse=True)
weight_data = keras.Input(shape=(None,), dtype=tf.float32, sparse=True)
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens, output_mode=category_encoding.COUNT
)
int_data = layer(input_data, count_weights=weight_data)
self.assertAllEqual(expected_output_shape, int_data.shape.as_list())
model = keras.Model(inputs=[input_data, weight_data], outputs=int_data)
output_dataset = model.predict(
[sparse_tensor_data, sparse_weight_data], steps=1
)
self.assertAllClose(expected_output, output_dataset)
def test_sparse_input_sparse_output(self):
sp_inp = tf.SparseTensor(
indices=[[0, 0], [1, 1], [2, 0], [2, 1], [3, 1]],
values=[0, 2, 1, 1, 0],
dense_shape=[4, 2],
)
input_data = keras.Input(shape=(None,), dtype=tf.int64, sparse=True)
# The expected output should be (X for missing value):
# [[1, X, X, X]
# [X, X, 1, X]
# [X, 2, X, X]
# [1, X, X, X]]
expected_indices = [[0, 0], [1, 2], [2, 1], [3, 0]]
expected_values = [1, 1, 2, 1]
num_tokens = 6
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens,
output_mode=category_encoding.COUNT,
sparse=True,
)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
sp_output_dataset = model.predict(sp_inp, steps=1)
self.assertAllEqual(expected_values, sp_output_dataset.values)
self.assertAllEqual(expected_indices, sp_output_dataset.indices)
# Assert sparse output is same as dense output.
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens,
output_mode=category_encoding.COUNT,
sparse=False,
)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(sp_inp, steps=1)
self.assertAllEqual(
tf.sparse.to_dense(sp_output_dataset, default_value=0),
output_dataset,
)
def test_sparse_input_sparse_output_with_weights(self):
indices = [[0, 0], [1, 1], [2, 0], [2, 1], [3, 1]]
sp_inp = tf.SparseTensor(
indices=indices, values=[0, 2, 1, 1, 0], dense_shape=[4, 2]
)
input_data = keras.Input(shape=(None,), dtype=tf.int64, sparse=True)
sp_weight = tf.SparseTensor(
indices=indices,
values=[0.1, 0.2, 0.4, 0.3, 0.2],
dense_shape=[4, 2],
)
weight_data = keras.Input(shape=(None,), dtype=tf.float32, sparse=True)
# The expected output should be (X for missing value):
# [[1, X, X, X]
# [X, X, 1, X]
# [X, 2, X, X]
# [1, X, X, X]]
expected_indices = [[0, 0], [1, 2], [2, 1], [3, 0]]
expected_values = [0.1, 0.2, 0.7, 0.2]
num_tokens = 6
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens,
output_mode=category_encoding.COUNT,
sparse=True,
)
int_data = layer(input_data, count_weights=weight_data)
model = keras.Model(inputs=[input_data, weight_data], outputs=int_data)
sp_output_dataset = model.predict([sp_inp, sp_weight], steps=1)
self.assertAllClose(expected_values, sp_output_dataset.values)
self.assertAllEqual(expected_indices, sp_output_dataset.indices)
def test_ragged_input(self):
input_array = tf.ragged.constant([[1, 2, 3], [3, 1]])
# pyformat: disable
expected_output = [[0, 1, 1, 1, 0, 0], [0, 1, 0, 1, 0, 0]]
# pyformat: enable
num_tokens = 6
expected_output_shape = [None, num_tokens]
input_data = keras.Input(shape=(None,), dtype=tf.int32, ragged=True)
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens, output_mode=category_encoding.MULTI_HOT
)
int_data = layer(input_data)
self.assertAllEqual(expected_output_shape, int_data.shape.as_list())
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array, steps=1)
self.assertAllEqual(expected_output, output_dataset)
def test_ragged_input_sparse_output(self):
input_array = tf.ragged.constant([[1, 2, 3], [3, 3]])
# The expected output should be (X for missing value):
# [[X, 1, 1, 1]
# [X, X, X, 2]]
expected_indices = [[0, 1], [0, 2], [0, 3], [1, 3]]
expected_values = [1, 1, 1, 2]
num_tokens = 6
input_data = keras.Input(shape=(None,), dtype=tf.int32, ragged=True)
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens,
output_mode=category_encoding.COUNT,
sparse=True,
)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
sp_output_dataset = model.predict(input_array, steps=1)
self.assertAllEqual(expected_values, sp_output_dataset.values)
self.assertAllEqual(expected_indices, sp_output_dataset.indices)
# Assert sparse output is same as dense output.
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens,
output_mode=category_encoding.COUNT,
sparse=False,
)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array, steps=1)
self.assertAllEqual(
tf.sparse.to_dense(sp_output_dataset, default_value=0),
output_dataset,
)
def test_sparse_output_and_dense_layer(self):
input_array = tf.constant([[1, 2, 3], [3, 3, 0]])
num_tokens = 4
input_data = keras.Input(shape=(None,), dtype=tf.int32)
encoding_layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens,
output_mode=category_encoding.COUNT,
sparse=True,
)
int_data = encoding_layer(input_data)
dense_layer = keras.layers.Dense(units=1)
output_data = dense_layer(int_data)
model = keras.Model(inputs=input_data, outputs=output_data)
_ = model.predict(input_array, steps=1)
def test_dense_oov_input(self):
valid_array = tf.constant([[0, 1, 2], [0, 1, 2]])
invalid_array = tf.constant([[0, 1, 2], [2, 3, 1]])
num_tokens = 3
expected_output_shape = [None, num_tokens]
encoder_layer = category_encoding.CategoryEncoding(num_tokens)
input_data = keras.Input(shape=(3,), dtype=tf.int32)
int_data = encoder_layer(input_data)
self.assertAllEqual(expected_output_shape, int_data.shape.as_list())
model = keras.Model(inputs=input_data, outputs=int_data)
# Call predict once on valid input to compile a graph and test control
# flow.
_ = model.predict(valid_array, steps=1)
with self.assertRaisesRegex(
tf.errors.InvalidArgumentError,
".*must be in the range 0 <= values < num_tokens.*",
):
_ = model.predict(invalid_array, steps=1)
def test_dense_negative(self):
valid_array = tf.constant([[0, 1, 2], [0, 1, 2]])
invalid_array = tf.constant([[1, 2, 0], [2, 2, -1]])
num_tokens = 3
expected_output_shape = [None, num_tokens]
encoder_layer = category_encoding.CategoryEncoding(num_tokens)
input_data = keras.Input(shape=(3,), dtype=tf.int32)
int_data = encoder_layer(input_data)
self.assertAllEqual(expected_output_shape, int_data.shape.as_list())
model = keras.Model(inputs=input_data, outputs=int_data)
# Call predict once on valid input to compile a graph and test control
# flow.
_ = model.predict(valid_array, steps=1)
with self.assertRaisesRegex(
tf.errors.InvalidArgumentError,
".*must be in the range 0 <= values < num_tokens.*",
):
_ = model.predict(invalid_array, steps=1)
def test_legacy_max_tokens_arg(self):
input_array = np.array([[1, 2, 3, 1]])
expected_output = [[0, 1, 1, 1, 0, 0]]
num_tokens = 6
expected_output_shape = [None, num_tokens]
input_data = keras.Input(shape=(None,), dtype=tf.int32)
layer = category_encoding.CategoryEncoding(
max_tokens=num_tokens, output_mode=category_encoding.MULTI_HOT
)
int_data = layer(input_data)
self.assertAllEqual(expected_output_shape, int_data.shape.as_list())
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
@test_combinations.run_all_keras_modes
class CategoryEncodingOutputTest(
test_combinations.TestCase, preprocessing_test_utils.PreprocessingLayerTest
):
@parameterized.named_parameters(
("float32", tf.float32),
("float64", tf.float64),
)
def test_output_dtype(self, dtype):
inputs = keras.Input(shape=(1,), dtype=tf.int32)
layer = category_encoding.CategoryEncoding(
num_tokens=4, output_mode=category_encoding.ONE_HOT, dtype=dtype
)
outputs = layer(inputs)
self.assertAllEqual(outputs.dtype, dtype)
def test_one_hot_output(self):
input_data = np.array([[3], [2], [0], [1]])
expected_output = [
[0, 0, 0, 1],
[0, 0, 1, 0],
[1, 0, 0, 0],
[0, 1, 0, 0],
]
num_tokens = 4
expected_output_shape = [None, num_tokens]
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens, output_mode=category_encoding.ONE_HOT
)
inputs = keras.Input(shape=(1,), dtype=tf.int32)
outputs = layer(inputs)
model = keras.Model(inputs=inputs, outputs=outputs)
output_dataset = model(input_data)
self.assertAllEqual(expected_output_shape, outputs.shape.as_list())
self.assertAllEqual(expected_output, output_dataset)
def test_one_hot_output_rank_one_input(self):
input_data = np.array([3, 2, 0, 1])
expected_output = [
[0, 0, 0, 1],
[0, 0, 1, 0],
[1, 0, 0, 0],
[0, 1, 0, 0],
]
num_tokens = 4
expected_output_shape = [None, num_tokens]
# Test call on layer directly.
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens, output_mode=category_encoding.ONE_HOT
)
output_data = layer(input_data)
self.assertAllEqual(expected_output, output_data)
# Test call on model.
inputs = keras.Input(shape=(1,), dtype=tf.int32)
outputs = layer(inputs)
model = keras.Model(inputs=inputs, outputs=outputs)
output_data = model(input_data)
self.assertAllEqual(expected_output_shape, outputs.shape.as_list())
self.assertAllEqual(expected_output, output_data)
def test_one_hot_output_rank_zero_input(self):
input_data = np.array(3)
expected_output = [0, 0, 0, 1]
num_tokens = 4
expected_output_shape = [None, num_tokens]
# Test call on layer directly.
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens, output_mode=category_encoding.ONE_HOT
)
output_data = layer(input_data)
self.assertAllEqual(expected_output, output_data)
# Test call on model.
inputs = keras.Input(shape=(1,), dtype=tf.int32)
outputs = layer(inputs)
model = keras.Model(inputs=inputs, outputs=outputs)
output_data = model(input_data)
self.assertAllEqual(expected_output_shape, outputs.shape.as_list())
self.assertAllEqual(expected_output, output_data)
def test_one_hot_rank_3_output_fails(self):
layer = category_encoding.CategoryEncoding(
num_tokens=4, output_mode=category_encoding.ONE_HOT
)
with self.assertRaisesRegex(
ValueError, "maximum supported output rank"
):
_ = layer(keras.Input(shape=(4,), dtype=tf.int32))
with self.assertRaisesRegex(
ValueError, "maximum supported output rank"
):
_ = layer(np.array([[3, 2, 0, 1], [3, 2, 0, 1]]))
def test_multi_hot_output(self):
input_data = np.array([[1, 2, 3, 1], [0, 3, 1, 0]])
expected_output = [
[0, 1, 1, 1, 0, 0],
[1, 1, 0, 1, 0, 0],
]
num_tokens = 6
expected_output_shape = [None, num_tokens]
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens, output_mode=category_encoding.MULTI_HOT
)
inputs = keras.Input(shape=(None,), dtype=tf.int32)
outputs = layer(inputs)
model = keras.Model(inputs=inputs, outputs=outputs)
output_data = model.predict(input_data)
self.assertAllEqual(expected_output_shape, outputs.shape.as_list())
self.assertAllEqual(expected_output, output_data)
def test_multi_hot_output_rank_one_input(self):
input_data = np.array([3, 2, 0, 1])
expected_output = [1, 1, 1, 1, 0, 0]
num_tokens = 6
expected_output_shape = [None, num_tokens]
# Test call on layer directly.
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens, output_mode=category_encoding.MULTI_HOT
)
output_data = layer(input_data)
self.assertAllEqual(expected_output, output_data)
# Test call on model.
inputs = keras.Input(shape=(4,), dtype=tf.int32)
outputs = layer(inputs)
model = keras.Model(inputs=inputs, outputs=outputs)
output_data = model(input_data)
self.assertAllEqual(expected_output_shape, outputs.shape.as_list())
self.assertAllEqual(expected_output, output_data)
def test_multi_hot_output_rank_zero_input(self):
input_data = np.array(3)
expected_output = [0, 0, 0, 1, 0, 0]
num_tokens = 6
expected_output_shape = [None, num_tokens]
# Test call on layer directly.
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens, output_mode=category_encoding.MULTI_HOT
)
output_data = layer(input_data)
self.assertAllEqual(expected_output, output_data)
# Test call on model.
inputs = keras.Input(shape=(4,), dtype=tf.int32)
outputs = layer(inputs)
model = keras.Model(inputs=inputs, outputs=outputs)
output_data = model(input_data)
self.assertAllEqual(expected_output_shape, outputs.shape.as_list())
self.assertAllEqual(expected_output, output_data)
def test_multi_hot_rank_3_output_fails(self):
        layer = category_encoding.CategoryEncoding(
            num_tokens=4, output_mode=category_encoding.MULTI_HOT
        )
with self.assertRaisesRegex(
ValueError, "maximum supported output rank"
):
            _ = layer(keras.Input(shape=(3, 4), dtype=tf.int32))
with self.assertRaisesRegex(
ValueError, "maximum supported output rank"
):
_ = layer(np.array([[[3, 2, 0, 1], [3, 2, 0, 1]]]))
def test_count_output(self):
input_array = np.array([[1, 2, 3, 1], [0, 3, 1, 0]])
# pyformat: disable
expected_output = [[0, 2, 1, 1, 0, 0], [2, 1, 0, 1, 0, 0]]
# pyformat: enable
num_tokens = 6
expected_output_shape = [None, num_tokens]
input_data = keras.Input(shape=(None,), dtype=tf.int32)
layer = category_encoding.CategoryEncoding(
num_tokens=6, output_mode=category_encoding.COUNT
)
int_data = layer(input_data)
self.assertAllEqual(expected_output_shape, int_data.shape.as_list())
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
class CategoryEncodingModelBuildingTest(
test_combinations.TestCase, preprocessing_test_utils.PreprocessingLayerTest
):
@parameterized.named_parameters(
{
"testcase_name": "count_output",
"num_tokens": 5,
"output_mode": category_encoding.COUNT,
},
{
"testcase_name": "multi_hot_output",
"num_tokens": 5,
"output_mode": category_encoding.MULTI_HOT,
},
)
def test_end_to_end_bagged_modeling(self, output_mode, num_tokens):
input_array = np.array([[1, 2, 3, 1], [0, 3, 1, 0]])
input_data = keras.Input(shape=(None,), dtype=tf.int32)
layer = category_encoding.CategoryEncoding(
num_tokens=num_tokens, output_mode=output_mode
)
weights = []
if num_tokens is None:
layer.set_num_elements(5)
layer.set_weights(weights)
int_data = layer(input_data)
float_data = backend.cast(int_data, dtype="float32")
output_data = core.Dense(64)(float_data)
model = keras.Model(inputs=input_data, outputs=output_data)
_ = model.predict(input_array)
if __name__ == "__main__":
tf.test.main()
# ==== End of file: tf-keras/tf_keras/layers/preprocessing/category_encoding_test.py ====
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for TF-Keras text vectorization preprocessing layer."""
import gc
import itertools
import os
import random
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
import tf_keras as keras
from tf_keras.layers.preprocessing import integer_lookup
from tf_keras.layers.preprocessing import preprocessing_test_utils
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
def _get_end_to_end_test_cases():
test_cases = (
{
"testcase_name": "test_ints_soft_vocab_cap",
# Create an array where 1138 is the most frequent term, followed by
# 1729, then 725, then 42. This ensures that the vocab accumulator
# is sorting by frequency.
"vocab_data": np.array(
[
[42],
[1138],
[1138],
[1138],
[1138],
[1729],
[1729],
[1729],
[725],
[725],
],
dtype=np.int64,
),
"input_data": np.array(
[[1138], [1729], [725], [42], [42], [725], [1138], [4]],
dtype=np.int64,
),
"kwargs": {
"max_tokens": None,
"dtype": tf.int64,
},
"expected_output": [[1], [2], [3], [4], [4], [3], [1], [0]],
"input_dtype": tf.int64,
},
)
crossed_test_cases = []
# Cross above test cases with use_dataset in (True, False)
for use_dataset in (True, False):
for case in test_cases:
case = case.copy()
if use_dataset:
case["testcase_name"] = case["testcase_name"] + "_with_dataset"
case["use_dataset"] = use_dataset
crossed_test_cases.append(case)
return crossed_test_cases
@test_combinations.run_all_keras_modes(always_skip_v1=True)
class IntegerLookupLayerTest(
test_combinations.TestCase, preprocessing_test_utils.PreprocessingLayerTest
):
@parameterized.named_parameters(*_get_end_to_end_test_cases())
def test_layer_end_to_end_with_adapt(
self,
vocab_data,
input_data,
kwargs,
use_dataset,
expected_output,
input_dtype,
):
cls = integer_lookup.IntegerLookup
expected_output_dtype = tf.int64
input_shape = input_data.shape
if use_dataset:
# TF-Keras APIs expect batched datasets.
# TODO(rachelim): `model.predict` predicts the result on each
# dataset batch separately, then tries to concatenate the results
# together. When the results have different shapes on the non-concat
# axis (which can happen in the output_mode = INT case for
# IntegerLookup), the concatenation fails. In real use cases, this
# may not be an issue because users are likely to pipe the
# preprocessing layer into other keras layers instead of predicting
# it directly. A workaround for these unit tests is to have the
# dataset only contain one batch, so no concatenation needs to
# happen with the result. For consistency with numpy input, we
# should make `predict` join differently shaped results together
# sensibly, with 0 padding.
input_data = tf.data.Dataset.from_tensor_slices(input_data).batch(
input_shape[0]
)
vocab_data = tf.data.Dataset.from_tensor_slices(vocab_data).batch(
input_shape[0]
)
output_data = test_utils.layer_test(
cls,
kwargs=kwargs,
input_shape=input_shape,
input_data=input_data,
input_dtype=input_dtype,
expected_output_dtype=expected_output_dtype,
validate_training=False,
adapt_data=vocab_data,
)
self.assertAllClose(expected_output, output_data)
def test_layer_with_list_input(self):
vocab = [12, 36, 1138, 42]
data = [[12, 1138, 42], [42, 1000, 36]] # Note OOV tokens
layer = integer_lookup.IntegerLookup(vocabulary=vocab)
output = layer(data)
expected_output = np.array([[1, 3, 4], [4, 0, 2]])
self.assertEqual(output.numpy().tolist(), expected_output.tolist())
@test_combinations.run_all_keras_modes(always_skip_v1=True)
class CategoricalEncodingInputTest(
test_combinations.TestCase, preprocessing_test_utils.PreprocessingLayerTest
):
def test_sparse_int_input(self):
vocab_data = np.array([10, 11, 12, 13], dtype=np.int64)
input_array = tf.SparseTensor(
indices=[[0, 0], [1, 2]],
values=np.array([13, 32], dtype=np.int64),
dense_shape=[3, 4],
)
expected_indices = [[0, 0], [1, 2]]
expected_values = [4, 0]
expected_dense_shape = [3, 4]
input_data = keras.Input(shape=(None,), dtype=tf.int64, sparse=True)
layer = integer_lookup.IntegerLookup(max_tokens=None)
layer.set_vocabulary(vocab_data)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_data = model.predict(input_array, steps=1)
self.assertAllEqual(expected_indices, output_data.indices)
self.assertAllEqual(expected_values, output_data.values)
self.assertAllEqual(expected_dense_shape, output_data.dense_shape)
def test_ragged_int_input(self):
vocab_data = np.array([10, 11, 12, 13], dtype=np.int64)
input_array = tf.ragged.constant(
[[10, 11, 13], [13, 12, 10, 42]], dtype=np.int64
)
expected_output = [[1, 2, 4], [4, 3, 1, 0]]
input_data = keras.Input(shape=(None,), dtype=tf.int64, ragged=True)
layer = integer_lookup.IntegerLookup(max_tokens=None)
layer.set_vocabulary(vocab_data)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
@test_combinations.run_all_keras_modes(always_skip_v1=True)
class CategoricalEncodingMultiOOVTest(
test_combinations.TestCase, preprocessing_test_utils.PreprocessingLayerTest
):
def test_sparse_int_input_multi_bucket(self):
vocab_data = np.array([10, 11, 12, 13], dtype=np.int64)
input_array = tf.SparseTensor(
indices=[[0, 0], [1, 2]],
values=np.array([13, 133], dtype=np.int64),
dense_shape=[3, 4],
)
expected_indices = [[0, 0], [1, 2]]
expected_values = [6, 2]
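        # With mask_token=0 and two OOV buckets, the index layout is
        # [mask, oov_0, oov_1, 10, 11, 12, 13], so 13 -> 6 while the
        # out-of-vocabulary value 133 hashes into an OOV bucket (index 2).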
expected_dense_shape = [3, 4]
input_data = keras.Input(shape=(None,), dtype=tf.int64, sparse=True)
layer = integer_lookup.IntegerLookup(
max_tokens=None,
dtype=tf.int64,
num_oov_indices=2,
mask_token=0,
oov_token=-1,
)
layer.set_vocabulary(vocab_data)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_data = model.predict(input_array, steps=1)
self.assertAllEqual(expected_indices, output_data.indices)
self.assertAllEqual(expected_values, output_data.values)
self.assertAllEqual(expected_dense_shape, output_data.dense_shape)
def test_ragged_int_input_multi_bucket(self):
vocab_data = np.array([10, 11, 12, 13], dtype=np.int64)
input_array = tf.ragged.constant(
[[10, 11, 13], [13, 12, 10, 133]], dtype=np.int64
)
expected_output = [[2, 3, 5], [5, 4, 2, 1]]
input_data = keras.Input(shape=(None,), dtype=tf.int64, ragged=True)
layer = integer_lookup.IntegerLookup(max_tokens=None, num_oov_indices=2)
layer.set_vocabulary(vocab_data)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
@test_combinations.run_all_keras_modes(always_skip_v1=True)
class CategoricalEncodingAdaptTest(
test_combinations.TestCase, preprocessing_test_utils.PreprocessingLayerTest
):
def test_sparse_adapt(self):
vocab_data = tf.SparseTensor(
indices=[[0, 0], [0, 1], [1, 2]],
values=[203, 1729, 203],
dense_shape=[3, 4],
)
vocab_dataset = tf.data.Dataset.from_tensors(vocab_data)
layer = integer_lookup.IntegerLookup()
layer.adapt(vocab_dataset)
expected_vocabulary = [-1, 203, 1729]
self.assertAllEqual(expected_vocabulary, layer.get_vocabulary())
def test_ragged_adapt(self):
vocab_data = tf.ragged.constant([[203], [1729, 203]])
vocab_dataset = tf.data.Dataset.from_tensors(vocab_data)
layer = integer_lookup.IntegerLookup()
layer.adapt(vocab_dataset)
expected_vocabulary = [-1, 203, 1729]
self.assertAllEqual(expected_vocabulary, layer.get_vocabulary())
def test_single_int_generator_dataset(self):
def word_gen():
for _ in itertools.count(1):
yield random.randint(0, 100)
        ds = tf.data.Dataset.from_generator(
            word_gen, output_signature=tf.TensorSpec(shape=(), dtype=tf.int64)
        )
batched_ds = ds.take(2)
input_t = keras.Input(shape=(), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(
max_tokens=10, num_oov_indices=0, mask_token=None, oov_token=None
)
_ = layer(input_t)
layer.adapt(batched_ds)
@test_combinations.run_all_keras_modes(always_skip_v1=True)
class IntegerLookupOutputTest(
test_combinations.TestCase, preprocessing_test_utils.PreprocessingLayerTest
):
def test_int_output(self):
vocab_data = [42, 1138, 725, 1729]
input_array = np.array([[42, 1138, 725, 1729], [1729, 725, 42, 203]])
expected_output = [[1, 2, 3, 4], [4, 3, 1, 0]]
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup()
layer.set_vocabulary(vocab_data)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
def test_output_shape(self):
input_data = keras.Input(shape=(4,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(max_tokens=2, num_oov_indices=1)
int_data = layer(input_data)
self.assertAllEqual(int_data.shape[1:], input_data.shape[1:])
def test_int_output_with_mask(self):
vocab_data = [42, 1138, 725, 1729]
input_array = np.array([[42, 1138, 725, 1729], [1729, 725, 42, 203]])
expected_output = [[2, 3, 4, 5], [5, 4, 2, 1]]
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(max_tokens=None, mask_token=0)
layer.set_vocabulary(vocab_data)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
def test_int_output_explicit_vocab(self):
vocab_data = [42, 1138, 725, 1729]
input_array = np.array([[42, 1138, 725, 1729], [1729, 725, 42, 203]])
expected_output = [[1, 2, 3, 4], [4, 3, 1, 0]]
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(
vocabulary=vocab_data,
max_tokens=None,
)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
def test_int_output_explicit_vocab_with_special_tokens(self):
vocab_data = [0, -1, 42, 1138, 725, 1729]
input_array = np.array([[42, 1138, 725, 1729], [1729, 725, 42, 203]])
expected_output = [[2, 3, 4, 5], [5, 4, 2, 1]]
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(
vocabulary=vocab_data,
max_tokens=None,
mask_token=0,
)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
def test_int_output_no_oov(self):
vocab_data = [42, 1138, 725, 1729]
valid_input = np.array([[42, 1138, 725, 1729], [1729, 725, 42, 0]])
invalid_input = np.array([[42, 1138, 725, 203], [1729, 725, 42, 203]])
expected_output = [[1, 2, 3, 4], [4, 3, 1, 0]]
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(
vocabulary=vocab_data, mask_token=0, num_oov_indices=0
)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_data = model.predict(valid_input)
self.assertAllEqual(expected_output, output_data)
with self.assertRaisesRegex(
tf.errors.InvalidArgumentError, "found OOV values.*203"
):
_ = model.predict(invalid_input)
def test_inverse_output(self):
vocab_data = [-1, 42, 1138, 725, 1729]
input_array = np.array([[1, 2, 3, 4], [4, 3, 1, 0]])
expected_output = np.array([[42, 1138, 725, 1729], [1729, 725, 42, -1]])
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(invert=True)
layer.set_vocabulary(vocab_data)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
def test_forward_backward_explicit_vocab(self):
vocab_data = [42, 1138, 725, 1729]
input_array = np.array([[42, 1138, 725, 1729], [1729, 725, 42, 203]])
expected_output = np.array([[42, 1138, 725, 1729], [1729, 725, 42, -1]])
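        # 203 is not in the vocabulary, so the forward lookup maps it to the
        # OOV index and the inverse lookup emits the default oov_token, -1.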
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(vocabulary=vocab_data)
inverse_layer = integer_lookup.IntegerLookup(
vocabulary=vocab_data, invert=True
)
int_data = layer(input_data)
inverse_data = inverse_layer(int_data)
model = keras.Model(inputs=input_data, outputs=inverse_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
def test_forward_backward_adapted_vocab(self):
adapt_data = [42, 1138, 725, 1729]
input_array = np.array([[42, 1138, 725, 1729], [1729, 725, 42, 203]])
expected_output = np.array([[42, 1138, 725, 1729], [1729, 725, 42, -1]])
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup()
layer.adapt(adapt_data)
inverse_layer = integer_lookup.IntegerLookup(
vocabulary=layer.get_vocabulary(), invert=True
)
int_data = layer(input_data)
inverse_data = inverse_layer(int_data)
model = keras.Model(inputs=input_data, outputs=inverse_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
@test_combinations.run_all_keras_modes(always_skip_v1=True)
class IntegerLookupVocabularyTest(
test_combinations.TestCase, preprocessing_test_utils.PreprocessingLayerTest
):
def _write_to_temp_file(self, file_name, vocab_list):
vocab_path = os.path.join(self.get_temp_dir(), file_name + ".txt")
with tf.io.gfile.GFile(vocab_path, "w") as writer:
for vocab in vocab_list:
writer.write(str(vocab) + "\n")
writer.flush()
writer.close()
return vocab_path
def test_int_output_explicit_vocab(self):
vocab_data = [42, 1138, 725, 1729]
input_array = np.array([[42, 1138, 725, 1729], [1729, 725, 42, 203]])
expected_output = [[1, 2, 3, 4], [4, 3, 1, 0]]
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(vocabulary=vocab_data)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
def test_no_vocab(self):
with self.assertRaisesRegex(
RuntimeError, "you must set the layer's vocabulary"
):
layer = integer_lookup.IntegerLookup(output_mode="binary")
layer([[1]])
def test_one_hot_output(self):
vocab_data = [2, 3, 4, 5]
input_array = np.array([2, 3, 4, 5, 6])
expected_output = [
[0, 1, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 1, 0],
[0, 0, 0, 0, 1],
[1, 0, 0, 0, 0],
]
input_data = keras.Input(shape=(1,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(
vocabulary=vocab_data, output_mode="one_hot"
)
res = layer(input_data)
model = keras.Model(inputs=input_data, outputs=res)
output_data = model.predict(input_array)
self.assertAllEqual(expected_output, output_data)
def test_multi_hot_output(self):
vocab_data = [2, 3, 4, 5]
input_array = np.array([[2, 2, 3, 4], [0, 1, 5, 2]])
expected_output = [[0, 1, 1, 1, 0], [1, 1, 0, 0, 1]]
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(
vocabulary=vocab_data, output_mode="multi_hot"
)
res = layer(input_data)
model = keras.Model(inputs=input_data, outputs=res)
output_data = model.predict(input_array)
self.assertAllEqual(expected_output, output_data)
def test_count_output(self):
vocab_data = [2, 3, 4, 5]
input_array = np.array([[2, 2, 3, 4], [0, 1, 5, 6]])
expected_output = [[0, 2, 1, 1, 0], [3, 0, 0, 0, 1]]
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(
vocabulary=vocab_data, output_mode="count"
)
res = layer(input_data)
model = keras.Model(inputs=input_data, outputs=res)
output_data = model.predict(input_array)
self.assertAllEqual(expected_output, output_data)
def test_sparse_output(self):
vocab_data = [2, 3, 4, 5]
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(
vocabulary=vocab_data, output_mode="multi_hot", sparse=True
)
res = layer(input_data)
        self.assertEqual(res.__class__.__name__, "SparseKerasTensor")
def test_get_vocab_returns_int(self):
vocab_data = [42, 1138, 725, 1729]
expected_vocab = [-1, 42, 1138, 725, 1729]
layer = integer_lookup.IntegerLookup(vocabulary=vocab_data)
layer_vocab = layer.get_vocabulary()
self.assertAllEqual(expected_vocab, layer_vocab)
self.assertIsInstance(layer_vocab[0], np.int64)
def test_int_output_explicit_vocab_from_file(self):
vocab_list = [42, 1138, 725, 1729]
vocab_path = self._write_to_temp_file("vocab_file", vocab_list)
input_array = np.array([[42, 1138, 725, 1729], [1729, 725, 42, 203]])
expected_output = [[1, 2, 3, 4], [4, 3, 1, 0]]
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(vocabulary=vocab_path)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
def test_int_output_inverted_vocab_from_file(self):
vocab_list = [42, 1138, 725, 1729]
vocab_path = self._write_to_temp_file("vocab_file", vocab_list)
input_array = np.array([[1, 2, 3, 4], [4, 3, 1, 0]])
expected_output = [[42, 1138, 725, 1729], [1729, 725, 42, -1]]
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(vocabulary=vocab_path, invert=True)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
def test_int_output_inverted_vocab_from_file_with_mask(self):
vocab_list = [42, 1138, 725, 1729]
vocab_path = self._write_to_temp_file("vocab_file", vocab_list)
input_array = np.array([[2, 3, 4, 5], [5, 4, 2, 0]])
expected_output = [[42, 1138, 725, 1729], [1729, 725, 42, -10]]
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(
vocabulary=vocab_path, invert=True, mask_value=-10
)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
def test_int_output_explicit_vocab_from_file_via_setter(self):
vocab_list = [42, 1138, 725, 1729]
vocab_path = self._write_to_temp_file("vocab_file", vocab_list)
input_array = np.array([[42, 1138, 725, 1729], [1729, 725, 42, 203]])
expected_output = [[1, 2, 3, 4], [4, 3, 1, 0]]
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup()
layer.set_vocabulary(vocab_path)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(expected_output, output_dataset)
def test_non_unique_vocab_fails(self):
vocab_data = [42, 1138, 725, 1729, 1729]
with self.assertRaisesRegex(ValueError, ".*repeated term.*1729.*"):
_ = integer_lookup.IntegerLookup(vocabulary=vocab_data)
def test_non_unique_vocab_from_file_fails(self):
vocab_list = [42, 1138, 725, 1729, 42]
vocab_path = self._write_to_temp_file("repeat_vocab_file", vocab_list)
with self.assertRaisesRegex(
tf.errors.FailedPreconditionError,
".*HashTable has different value for same key.*42.*",
):
_ = integer_lookup.IntegerLookup(vocabulary=vocab_path)
def test_tensor_vocab(self):
vocab_data = [-1, 42, 1138, 725, 1729]
vocab_tensor = tf.constant(vocab_data, tf.int64)
layer = integer_lookup.IntegerLookup(vocabulary=vocab_tensor)
returned_vocab = layer.get_vocabulary()
self.assertAllEqual(vocab_data, returned_vocab)
self.assertAllEqual(layer.vocabulary_size(), 5)
fn = tf.function(lambda: layer.set_vocabulary(vocab_tensor))
with self.assertRaisesRegex(
RuntimeError, "Cannot set a tensor vocabulary"
):
fn()
@test_combinations.run_all_keras_modes(always_skip_v1=True)
class IntegerLookupErrorTest(
test_combinations.TestCase, preprocessing_test_utils.PreprocessingLayerTest
):
def test_too_long_vocab_fails_in_single_setting(self):
vocab_data = [42, 1138, 725, 1729]
layer = integer_lookup.IntegerLookup(max_tokens=4, num_oov_indices=1)
with self.assertRaisesRegex(
ValueError, "vocabulary larger than the maximum vocab.*"
):
layer.set_vocabulary(vocab_data)
def test_zero_max_tokens_fails(self):
with self.assertRaisesRegex(ValueError, ".*max_tokens.*"):
_ = integer_lookup.IntegerLookup(max_tokens=0, num_oov_indices=1)
@test_combinations.run_all_keras_modes(always_skip_v1=True)
class IntegerLookupSavingTest(
test_combinations.TestCase, preprocessing_test_utils.PreprocessingLayerTest
):
def tearDown(self):
keras.backend.clear_session()
gc.collect()
super(IntegerLookupSavingTest, self).tearDown()
def test_vocabulary_persistence_across_saving(self):
vocab_data = [42, 1138, 725, 1729]
input_array = np.array([[42, 1138, 725, 1729], [1729, 725, 42, 203]])
expected_output = [[1, 2, 3, 4], [4, 3, 1, 0]]
# Build and validate a golden model.
input_data = keras.Input(shape=(None,), dtype=tf.int64)
layer = integer_lookup.IntegerLookup(max_tokens=None, num_oov_indices=1)
layer.set_vocabulary(vocab_data)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_array)
self.assertAllEqual(output_dataset, expected_output)
with self.subTest("keras_v3"):
if not tf.__internal__.tf2.enabled():
self.skipTest(
"TF2 must be enabled to use the new `.keras` saving."
)
# Save the model to disk.
output_path = os.path.join(
self.get_temp_dir(), "tf_keras_model.keras"
)
model.save(output_path, save_format="keras_v3")
loaded_model = keras.models.load_model(
output_path,
custom_objects={"IntegerLookup": integer_lookup.IntegerLookup},
)
# Ensure that the loaded model is unique
# (so that the save/load is real)
self.assertIsNot(model, loaded_model)
# Validate correctness of the new model.
new_output_dataset = loaded_model.predict(input_array)
self.assertAllEqual(new_output_dataset, expected_output)
with self.subTest("savedmodel"):
# Save the model to disk.
output_path = os.path.join(
self.get_temp_dir(), "tf_keras_saved_model"
)
model.save(output_path, save_format="tf")
# Delete the session and graph to ensure that the loaded model is
# generated from scratch.
# TODO(b/149526183): Can't clear session when TF2 is disabled.
if tf.__internal__.tf2.enabled():
keras.backend.clear_session()
loaded_model = keras.models.load_model(
output_path,
custom_objects={"IntegerLookup": integer_lookup.IntegerLookup},
)
# Ensure that the loaded model is unique
# (so that the save/load is real)
self.assertIsNot(model, loaded_model)
# Validate correctness of the new model.
new_output_dataset = loaded_model.predict(input_array)
self.assertAllEqual(new_output_dataset, expected_output)
if __name__ == "__main__":
tf.test.main()
# ==== End of file: tf-keras/tf_keras/layers/preprocessing/integer_lookup_test.py ====
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Keras reshaping layers layers."""
from tf_keras.layers.reshaping.cropping1d import Cropping1D
from tf_keras.layers.reshaping.cropping2d import Cropping2D
from tf_keras.layers.reshaping.cropping3d import Cropping3D
from tf_keras.layers.reshaping.flatten import Flatten
from tf_keras.layers.reshaping.permute import Permute
from tf_keras.layers.reshaping.repeat_vector import RepeatVector
from tf_keras.layers.reshaping.reshape import Reshape
from tf_keras.layers.reshaping.up_sampling1d import UpSampling1D
from tf_keras.layers.reshaping.up_sampling2d import UpSampling2D
from tf_keras.layers.reshaping.up_sampling3d import UpSampling3D
from tf_keras.layers.reshaping.zero_padding1d import ZeroPadding1D
from tf_keras.layers.reshaping.zero_padding2d import ZeroPadding2D
from tf_keras.layers.reshaping.zero_padding3d import ZeroPadding3D
# ==== End of file: tf-keras/tf_keras/layers/reshaping/__init__.py ====
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for up-sampling layers."""
import numpy as np
import tensorflow.compat.v2 as tf
import tf_keras as keras
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
# isort: off
from tensorflow.python.framework import (
test_util as tf_test_utils,
)
@tf_test_utils.for_all_test_methods(
tf_test_utils.disable_xla, "align_corners=False not supported by XLA"
)
@test_combinations.run_all_keras_modes
class UpSamplingTest(test_combinations.TestCase):
def test_upsampling_1d(self):
with self.cached_session():
test_utils.layer_test(
keras.layers.UpSampling1D,
kwargs={"size": 2},
input_shape=(3, 5, 4),
)
def test_upsampling_2d(self):
num_samples = 2
stack_size = 2
input_num_row = 11
input_num_col = 12
for data_format in ["channels_first", "channels_last"]:
if data_format == "channels_first":
inputs = np.random.rand(
num_samples, stack_size, input_num_row, input_num_col
)
else:
inputs = np.random.rand(
num_samples, input_num_row, input_num_col, stack_size
)
# basic test
with self.cached_session():
test_utils.layer_test(
keras.layers.UpSampling2D,
kwargs={"size": (2, 2), "data_format": data_format},
input_shape=inputs.shape,
)
for length_row in [2]:
for length_col in [2, 3]:
layer = keras.layers.UpSampling2D(
size=(length_row, length_col),
data_format=data_format,
)
layer.build(inputs.shape)
output = layer(keras.backend.variable(inputs))
if tf.executing_eagerly():
np_output = output.numpy()
else:
np_output = keras.backend.eval(output)
if data_format == "channels_first":
assert (
np_output.shape[2] == length_row * input_num_row
)
assert (
np_output.shape[3] == length_col * input_num_col
)
else: # tf
assert (
np_output.shape[1] == length_row * input_num_row
)
assert (
np_output.shape[2] == length_col * input_num_col
)
# compare with numpy
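                    # UpSampling2D defaults to "nearest" interpolation, which
                    # is exactly row/column repetition, so np.repeat along the
                    # spatial axes reproduces the layer's output.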
if data_format == "channels_first":
expected_out = np.repeat(inputs, length_row, axis=2)
expected_out = np.repeat(
expected_out, length_col, axis=3
)
else: # tf
expected_out = np.repeat(inputs, length_row, axis=1)
expected_out = np.repeat(
expected_out, length_col, axis=2
)
np.testing.assert_allclose(np_output, expected_out)
def test_upsampling_2d_bilinear(self):
num_samples = 2
stack_size = 2
input_num_row = 11
input_num_col = 12
for data_format in ["channels_first", "channels_last"]:
if data_format == "channels_first":
inputs = np.random.rand(
num_samples, stack_size, input_num_row, input_num_col
)
else:
inputs = np.random.rand(
num_samples, input_num_row, input_num_col, stack_size
)
test_utils.layer_test(
keras.layers.UpSampling2D,
kwargs={
"size": (2, 2),
"data_format": data_format,
"interpolation": "bilinear",
},
input_shape=inputs.shape,
)
if not tf.executing_eagerly():
for length_row in [2]:
for length_col in [2, 3]:
layer = keras.layers.UpSampling2D(
size=(length_row, length_col),
data_format=data_format,
)
layer.build(inputs.shape)
outputs = layer(keras.backend.variable(inputs))
np_output = keras.backend.eval(outputs)
if data_format == "channels_first":
self.assertEqual(
np_output.shape[2], length_row * input_num_row
)
self.assertEqual(
np_output.shape[3], length_col * input_num_col
)
else:
self.assertEqual(
np_output.shape[1], length_row * input_num_row
)
self.assertEqual(
np_output.shape[2], length_col * input_num_col
)
def test_upsampling_3d(self):
num_samples = 2
stack_size = 2
input_len_dim1 = 10
input_len_dim2 = 11
input_len_dim3 = 12
for data_format in ["channels_first", "channels_last"]:
if data_format == "channels_first":
inputs = np.random.rand(
num_samples,
stack_size,
input_len_dim1,
input_len_dim2,
input_len_dim3,
)
else:
inputs = np.random.rand(
num_samples,
input_len_dim1,
input_len_dim2,
input_len_dim3,
stack_size,
)
# basic test
with self.cached_session():
test_utils.layer_test(
keras.layers.UpSampling3D,
kwargs={"size": (2, 2, 2), "data_format": data_format},
input_shape=inputs.shape,
)
for length_dim1 in [2, 3]:
for length_dim2 in [2]:
for length_dim3 in [3]:
layer = keras.layers.UpSampling3D(
size=(length_dim1, length_dim2, length_dim3),
data_format=data_format,
)
layer.build(inputs.shape)
output = layer(keras.backend.variable(inputs))
if tf.executing_eagerly():
np_output = output.numpy()
else:
np_output = keras.backend.eval(output)
if data_format == "channels_first":
assert (
np_output.shape[2]
== length_dim1 * input_len_dim1
)
assert (
np_output.shape[3]
== length_dim2 * input_len_dim2
)
assert (
np_output.shape[4]
== length_dim3 * input_len_dim3
)
else: # tf
assert (
np_output.shape[1]
== length_dim1 * input_len_dim1
)
assert (
np_output.shape[2]
== length_dim2 * input_len_dim2
)
assert (
np_output.shape[3]
== length_dim3 * input_len_dim3
)
# compare with numpy
if data_format == "channels_first":
expected_out = np.repeat(
inputs, length_dim1, axis=2
)
expected_out = np.repeat(
expected_out, length_dim2, axis=3
)
expected_out = np.repeat(
expected_out, length_dim3, axis=4
)
else: # tf
expected_out = np.repeat(
inputs, length_dim1, axis=1
)
expected_out = np.repeat(
expected_out, length_dim2, axis=2
)
expected_out = np.repeat(
expected_out, length_dim3, axis=3
)
np.testing.assert_allclose(np_output, expected_out)
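# Illustrative sketch (not part of the test class): the nearest-neighbor
# semantics asserted above boil down to np.repeat along the spatial axes.
# Shapes below are arbitrary; assumes eager execution (the TF2 default).
def _upsampling2d_reference_demo():
    x = np.random.rand(1, 4, 5, 2).astype("float32")  # (batch, rows, cols, ch)
    layer = keras.layers.UpSampling2D(size=(2, 3), data_format="channels_last")
    y = layer(x).numpy()
    # Rows repeated twice, columns three times -> output shape (1, 8, 15, 2).
    expected = np.repeat(np.repeat(x, 2, axis=1), 3, axis=2)
    np.testing.assert_allclose(y, expected)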
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/layers/reshaping/up_sampling_test.py/0 | {
"file_path": "tf-keras/tf_keras/layers/reshaping/up_sampling_test.py",
"repo_id": "tf-keras",
"token_count": 6739
} | 190 |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for Bidirectional wrapper."""
import copy
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
import tf_keras as keras
from tf_keras.engine import base_layer_utils
from tf_keras.layers import core
from tf_keras.layers.rnn.cell_wrappers import ResidualWrapper
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
# isort: off
from tensorflow.python.checkpoint import (
checkpoint as trackable_util,
)
from tensorflow.python.framework import (
test_util as tf_test_util,
)
class _RNNCellWithConstants(keras.layers.Layer):
def __init__(self, units, constant_size, **kwargs):
self.units = units
self.state_size = units
self.constant_size = constant_size
super().__init__(**kwargs)
def build(self, input_shape):
self.input_kernel = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="uniform",
name="kernel",
)
self.recurrent_kernel = self.add_weight(
shape=(self.units, self.units),
initializer="uniform",
name="recurrent_kernel",
)
self.constant_kernel = self.add_weight(
shape=(self.constant_size, self.units),
initializer="uniform",
name="constant_kernel",
)
self.built = True
def call(self, inputs, states, constants):
[prev_output] = states
[constant] = constants
h_input = keras.backend.dot(inputs, self.input_kernel)
h_state = keras.backend.dot(prev_output, self.recurrent_kernel)
h_const = keras.backend.dot(constant, self.constant_kernel)
output = h_input + h_state + h_const
return output, [output]
def get_config(self):
config = {"units": self.units, "constant_size": self.constant_size}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
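# Illustrative sketch: the cell above is meant to be driven through
# keras.layers.RNN, with the constant fed via the `constants` call argument
# (as the tests below do). Shapes here are arbitrary.
def _rnn_cell_with_constants_demo():
    x = keras.Input((5, 4))  # (timesteps, features)
    c = keras.Input((3,))  # per-sample constant, matching constant_size
    y = keras.layers.RNN(_RNNCellWithConstants(units=8, constant_size=3))(
        x, constants=c
    )
    return keras.Model([x, c], y)  # output shape: (batch, 8)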
class _ResidualLSTMCell(keras.layers.LSTMCell):
def call(self, inputs, states, training=None):
output, states = super().call(inputs, states)
return output + inputs, states
class _AddOneCell(keras.layers.AbstractRNNCell):
"""Increments inputs and state by one on each call."""
@property
def state_size(self):
return 1
@property
def output_size(self):
return 1
def call(self, inputs, state):
inputs = tf.reduce_mean(inputs, axis=1, keepdims=True)
outputs = inputs + 1.0
state = tf.nest.map_structure(lambda t: t + 1.0, state)
return outputs, state
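# Illustrative sketch: _AddOneCell averages the input features and adds one
# at every step, so an all-zeros input yields all-ones outputs. Assumes eager
# execution; shapes are arbitrary.
def _add_one_cell_demo():
    rnn = keras.layers.RNN(_AddOneCell(), return_sequences=True)
    out = rnn(tf.zeros((2, 3, 4)))
    # Each timestep output is mean(0.0) + 1.0 == 1.0, with shape (2, 3, 1).
    tf.debugging.assert_near(out, tf.ones((2, 3, 1)))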
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class BidirectionalTest(tf.test.TestCase, parameterized.TestCase):
@parameterized.parameters(["sum", "concat", "ave", "mul"])
def test_bidirectional(self, mode):
rnn = keras.layers.SimpleRNN
samples = 2
dim = 2
timesteps = 2
output_dim = 2
with self.cached_session():
x = np.random.random((samples, timesteps, dim))
target_dim = 2 * output_dim if mode == "concat" else output_dim
y = np.random.random((samples, target_dim))
# test with Sequential model
model = keras.models.Sequential()
model.add(
keras.layers.Bidirectional(
rnn(output_dim),
merge_mode=mode,
input_shape=(timesteps, dim),
)
)
model.compile(optimizer="rmsprop", loss="mse")
model.fit(x, y, epochs=1, batch_size=1)
# check whether the model variables are present in the
# trackable list of objects
checkpointed_object_ids = {
id(o) for o in trackable_util.list_objects(model)
}
for v in model.variables:
self.assertIn(id(v), checkpointed_object_ids)
# test compute output shape
ref_shape = model.layers[-1].output.shape
shape = model.layers[-1].compute_output_shape(
(None, timesteps, dim)
)
self.assertListEqual(shape.as_list(), ref_shape.as_list())
# test config
model.get_config()
model = keras.models.model_from_json(model.to_json())
model.summary()
def test_bidirectional_invalid_init(self):
x = tf.constant(np.zeros((1, 1)).astype("float32"))
with self.assertRaisesRegex(
ValueError,
"Please initialize `Bidirectional` layer with a "
"`tf.keras.layers.Layer` instance.",
):
keras.layers.Bidirectional(x)
def test_bidirectional_weight_loading(self):
rnn = keras.layers.SimpleRNN
samples = 2
dim = 2
timesteps = 2
output_dim = 2
with self.cached_session():
x = np.random.random((samples, timesteps, dim))
model = keras.models.Sequential()
model.add(
keras.layers.Bidirectional(
rnn(output_dim), input_shape=(timesteps, dim)
)
)
y_ref = model.predict(x)
weights = model.layers[-1].get_weights()
model.layers[-1].set_weights(weights)
y = model.predict(x)
self.assertAllClose(y, y_ref)
def test_bidirectional_stacked(self):
# test stacked bidirectional layers
rnn = keras.layers.SimpleRNN
samples = 2
dim = 2
timesteps = 2
output_dim = 2
mode = "sum"
with self.cached_session():
x = np.random.random((samples, timesteps, dim))
target_dim = 2 * output_dim if mode == "concat" else output_dim
y = np.random.random((samples, target_dim))
model = keras.models.Sequential()
model.add(
keras.layers.Bidirectional(
rnn(output_dim, return_sequences=True),
merge_mode=mode,
input_shape=(timesteps, dim),
)
)
model.add(
keras.layers.Bidirectional(rnn(output_dim), merge_mode=mode)
)
model.compile(loss="mse", optimizer="sgd")
model.fit(x, y, epochs=1, batch_size=1)
# test with functional API
inputs = keras.layers.Input((timesteps, dim))
output = keras.layers.Bidirectional(
rnn(output_dim), merge_mode=mode
)(inputs)
model = keras.models.Model(inputs, output)
model.compile(loss="mse", optimizer="sgd")
model.fit(x, y, epochs=1, batch_size=1)
def test_bidirectional_statefulness(self):
# Bidirectional and stateful
def run_test():
rnn = keras.layers.SimpleRNN
samples = 2
dim = 2
timesteps = 2
output_dim = 2
mode = "sum"
with self.cached_session():
x = np.random.random((samples, timesteps, dim))
target_dim = 2 * output_dim if mode == "concat" else output_dim
y = np.random.random((samples, target_dim))
inputs = keras.layers.Input(batch_shape=(1, timesteps, dim))
bidi_rnn = keras.layers.Bidirectional(
rnn(output_dim, stateful=True), merge_mode=mode
)
self.assertTrue(bidi_rnn.stateful)
output = bidi_rnn(inputs)
model = keras.models.Model(inputs, output)
y_1 = model.predict(x, batch_size=1)
model.reset_states()
y_2 = model.predict(x, batch_size=1)
self.assertAllClose(y_1, y_2)
model.compile(loss="mse", optimizer="sgd")
model.fit(x, y, epochs=1, batch_size=1)
if tf.executing_eagerly():
run_test()
else:
tf_test_util.enable_output_all_intermediates(run_test)()
@parameterized.parameters(["sum", "mul", "ave", "concat", None])
def test_Bidirectional_merged_value(self, merge_mode):
rnn = keras.layers.LSTM
samples = 2
dim = 5
timesteps = 3
units = 3
x = [np.random.rand(samples, timesteps, dim)]
with self.cached_session():
if merge_mode == "sum":
merge_func = lambda y, y_rev: y + y_rev
elif merge_mode == "mul":
merge_func = lambda y, y_rev: y * y_rev
elif merge_mode == "ave":
merge_func = lambda y, y_rev: (y + y_rev) / 2
elif merge_mode == "concat":
merge_func = lambda y, y_rev: np.concatenate(
(y, y_rev), axis=-1
)
else:
merge_func = lambda y, y_rev: [y, y_rev]
# basic case
inputs = keras.Input((timesteps, dim))
layer = keras.layers.Bidirectional(
rnn(units, return_sequences=True), merge_mode=merge_mode
)
f_merged = keras.backend.function([inputs], _to_list(layer(inputs)))
f_forward = keras.backend.function(
[inputs], [layer.forward_layer(inputs)]
)
f_backward = keras.backend.function(
[inputs],
[keras.backend.reverse(layer.backward_layer(inputs), 1)],
)
y_merged = f_merged(x)
y_expected = _to_list(merge_func(f_forward(x)[0], f_backward(x)[0]))
assert len(y_merged) == len(y_expected)
for x1, x2 in zip(y_merged, y_expected):
self.assertAllClose(x1, x2, atol=1e-5)
# test return_state
inputs = keras.Input((timesteps, dim))
layer = keras.layers.Bidirectional(
rnn(units, return_state=True), merge_mode=merge_mode
)
f_merged = keras.backend.function([inputs], layer(inputs))
f_forward = keras.backend.function(
[inputs], layer.forward_layer(inputs)
)
f_backward = keras.backend.function(
[inputs], layer.backward_layer(inputs)
)
n_states = len(layer.layer.states)
y_merged = f_merged(x)
y_forward = f_forward(x)
y_backward = f_backward(x)
y_expected = _to_list(merge_func(y_forward[0], y_backward[0]))
assert len(y_merged) == len(y_expected) + n_states * 2
for x1, x2 in zip(y_merged, y_expected):
self.assertAllClose(x1, x2, atol=1e-5)
y_merged = y_merged[-n_states * 2 :]
y_forward = y_forward[-n_states:]
y_backward = y_backward[-n_states:]
for state_birnn, state_inner in zip(
y_merged, y_forward + y_backward
):
self.assertAllClose(state_birnn, state_inner, atol=1e-5)
@parameterized.parameters([True, False])
def test_Bidirectional_with_time_major_input(self, time_major):
batch_size, time, input_dim = 2, 3, 1
inputs = tf.zeros((batch_size, time, input_dim))
        # lengths is [1, 2]: within the batch, the first element has 1 step
        # and the second element has 2 steps.
lengths = tf.range(1, 1 + batch_size)
mask = tf.sequence_mask(lengths, maxlen=time, dtype=tf.float32)
forward_cell = _AddOneCell(name="forward")
backward_cell = _AddOneCell(name="backward")
layer = keras.layers.Bidirectional(
layer=keras.layers.RNN(
forward_cell, time_major=time_major, return_sequences=True
),
backward_layer=keras.layers.RNN(
backward_cell,
time_major=time_major,
return_sequences=True,
go_backwards=True,
),
)
# Switch to time-major.
if time_major:
inputs = tf.transpose(inputs, [1, 0, 2])
mask = tf.transpose(mask, [1, 0])
keras_outputs = layer(inputs, mask=mask)
if time_major:
keras_outputs = tf.transpose(keras_outputs, [1, 0, 2])
        # Expect the first element in the batch to have 1 step and the second
        # element to have 2 steps.
expected_result = np.array(
[
[[1.0, 1.0], [0.0, 0.0], [0.0, 0.0]],
[[1.0, 1.0], [1.0, 1.0], [0.0, 0.0]],
]
)
self.assertAllClose(expected_result, keras_outputs)
def test_Bidirectional_dropout(self):
rnn = keras.layers.LSTM
samples = 2
dim = 5
timesteps = 3
units = 3
merge_mode = "sum"
x = [np.random.rand(samples, timesteps, dim)]
with self.cached_session():
inputs = keras.Input((timesteps, dim))
wrapped = keras.layers.Bidirectional(
rnn(units, dropout=0.2, recurrent_dropout=0.2),
merge_mode=merge_mode,
)
outputs = _to_list(wrapped(inputs, training=True))
inputs = keras.Input((timesteps, dim))
wrapped = keras.layers.Bidirectional(
rnn(units, dropout=0.2, return_state=True),
merge_mode=merge_mode,
)
outputs = _to_list(wrapped(inputs))
model = keras.Model(inputs, outputs)
y1 = _to_list(model.predict(x))
y2 = _to_list(model.predict(x))
for x1, x2 in zip(y1, y2):
self.assertAllClose(x1, x2, atol=1e-5)
def test_Bidirectional_state_reuse(self):
rnn = keras.layers.LSTM
samples = 2
dim = 5
timesteps = 3
units = 3
with self.cached_session():
input1 = keras.layers.Input((timesteps, dim))
layer = keras.layers.Bidirectional(
rnn(units, return_state=True, return_sequences=True)
)
state = layer(input1)[1:]
# test passing invalid initial_state: passing a tensor
input2 = keras.layers.Input((timesteps, dim))
with self.assertRaises(ValueError):
keras.layers.Bidirectional(rnn(units))(
input2, initial_state=state[0]
)
# test valid usage: passing a list
output = keras.layers.Bidirectional(rnn(units))(
input2, initial_state=state
)
model = keras.models.Model([input1, input2], output)
assert len(model.layers) == 4
assert isinstance(model.layers[-1].input, list)
inputs = [
np.random.rand(samples, timesteps, dim),
np.random.rand(samples, timesteps, dim),
]
model.predict(inputs)
def test_Bidirectional_state_reuse_with_np_input(self):
# See https://github.com/tensorflow/tensorflow/issues/28761 for more
# detail.
rnn = keras.layers.LSTM
samples = 2
dim = 5
timesteps = 3
units = 3
with self.cached_session():
input1 = np.random.rand(samples, timesteps, dim).astype(np.float32)
layer = keras.layers.Bidirectional(
rnn(units, return_state=True, return_sequences=True)
)
state = layer(input1)[1:]
input2 = np.random.rand(samples, timesteps, dim).astype(np.float32)
keras.layers.Bidirectional(rnn(units))(input2, initial_state=state)
def test_Bidirectional_trainable(self):
# test layers that need learning_phase to be set
with self.cached_session():
x = keras.layers.Input(shape=(3, 2))
layer = keras.layers.Bidirectional(keras.layers.SimpleRNN(3))
_ = layer(x)
assert len(layer.trainable_weights) == 6
layer.trainable = False
assert not layer.trainable_weights
layer.trainable = True
assert len(layer.trainable_weights) == 6
def test_Bidirectional_updates(self):
if tf.executing_eagerly():
self.skipTest("layer.updates is only available in graph mode.")
with self.cached_session():
x = keras.layers.Input(shape=(3, 2))
x_reachable_update = x * x
layer = keras.layers.Bidirectional(keras.layers.SimpleRNN(3))
_ = layer(x)
assert not layer.updates
# TODO(b/128684069): Remove when Wrapper sublayers are __call__'d.
with base_layer_utils.call_context().enter(layer, x, True, None):
layer.forward_layer.add_update(x_reachable_update)
layer.forward_layer.add_update(1)
layer.backward_layer.add_update(x_reachable_update)
layer.backward_layer.add_update(1)
assert len(layer.updates) == 4
def test_Bidirectional_losses(self):
x = keras.layers.Input(shape=(3, 2))
layer = keras.layers.Bidirectional(
keras.layers.SimpleRNN(
3,
kernel_regularizer="l1",
bias_regularizer="l1",
activity_regularizer="l1",
)
)
_ = layer(x)
assert len(layer.losses) == 6
loss = x * x
layer.forward_layer.add_loss(loss)
layer.backward_layer.add_loss(loss)
assert len(layer.losses) == 8
def test_Bidirectional_with_constants(self):
with self.cached_session():
# Test basic case.
x = keras.Input((5, 5))
c = keras.Input((3,))
cell = _RNNCellWithConstants(32, 3)
custom_objects = {"_RNNCellWithConstants": _RNNCellWithConstants}
with keras.utils.CustomObjectScope(custom_objects):
layer = keras.layers.Bidirectional(keras.layers.RNN(cell))
y = layer(x, constants=c)
model = keras.Model([x, c], y)
model.compile(optimizer="rmsprop", loss="mse")
model.train_on_batch(
[np.zeros((6, 5, 5)), np.zeros((6, 3))], np.zeros((6, 64))
)
# Test basic case serialization.
x_np = np.random.random((6, 5, 5))
c_np = np.random.random((6, 3))
y_np = model.predict([x_np, c_np])
weights = model.get_weights()
config = layer.get_config()
with keras.utils.CustomObjectScope(custom_objects):
layer = keras.layers.Bidirectional.from_config(
copy.deepcopy(config)
)
y = layer(x, constants=c)
model = keras.Model([x, c], y)
model.set_weights(weights)
y_np_2 = model.predict([x_np, c_np])
self.assertAllClose(y_np, y_np_2, atol=1e-4)
# Test flat list inputs
with keras.utils.CustomObjectScope(custom_objects):
layer = keras.layers.Bidirectional.from_config(
copy.deepcopy(config)
)
y = layer([x, c])
model = keras.Model([x, c], y)
model.set_weights(weights)
y_np_3 = model.predict([x_np, c_np])
self.assertAllClose(y_np, y_np_3, atol=1e-4)
def test_Bidirectional_with_constants_layer_passing_initial_state(self):
with self.cached_session():
# Test basic case.
x = keras.Input((5, 5))
c = keras.Input((3,))
s_for = keras.Input((32,))
s_bac = keras.Input((32,))
cell = _RNNCellWithConstants(32, 3)
custom_objects = {"_RNNCellWithConstants": _RNNCellWithConstants}
with keras.utils.CustomObjectScope(custom_objects):
layer = keras.layers.Bidirectional(keras.layers.RNN(cell))
y = layer(x, initial_state=[s_for, s_bac], constants=c)
model = keras.Model([x, s_for, s_bac, c], y)
model.compile(optimizer="rmsprop", loss="mse")
model.train_on_batch(
[
np.zeros((6, 5, 5)),
np.zeros((6, 32)),
np.zeros((6, 32)),
np.zeros((6, 3)),
],
np.zeros((6, 64)),
)
# Test basic case serialization.
x_np = np.random.random((6, 5, 5))
s_fw_np = np.random.random((6, 32))
s_bk_np = np.random.random((6, 32))
c_np = np.random.random((6, 3))
y_np = model.predict([x_np, s_fw_np, s_bk_np, c_np])
weights = model.get_weights()
config = layer.get_config()
with keras.utils.CustomObjectScope(custom_objects):
layer = keras.layers.Bidirectional.from_config(
copy.deepcopy(config)
)
y = layer(x, initial_state=[s_for, s_bac], constants=c)
model = keras.Model([x, s_for, s_bac, c], y)
model.set_weights(weights)
y_np_2 = model.predict([x_np, s_fw_np, s_bk_np, c_np])
self.assertAllClose(y_np, y_np_2, atol=1e-4)
# Verify that state is used
y_np_2_different_s = model.predict(
[x_np, s_fw_np + 10.0, s_bk_np + 10.0, c_np]
)
assert np.mean(y_np - y_np_2_different_s) != 0
# Test flat list inputs
with keras.utils.CustomObjectScope(custom_objects):
layer = keras.layers.Bidirectional.from_config(
copy.deepcopy(config)
)
y = layer([x, s_for, s_bac, c])
model = keras.Model([x, s_for, s_bac, c], y)
model.set_weights(weights)
y_np_3 = model.predict([x_np, s_fw_np, s_bk_np, c_np])
self.assertAllClose(y_np, y_np_3, atol=1e-4)
@parameterized.parameters([keras.layers.LSTM, keras.layers.GRU])
def test_Bidirectional_output_shape(self, rnn):
input_shape = [None, 2, 1]
num_state = 4 if rnn == keras.layers.LSTM else 2
wrapper = keras.layers.Bidirectional(rnn(3))
output_shape = wrapper.compute_output_shape(input_shape)
self.assertEqual(output_shape.as_list(), [None, 6])
wrapper = keras.layers.Bidirectional(rnn(3, return_state=True))
output_shape = wrapper.compute_output_shape(input_shape)
# 1 for output and the rest for forward and backward states
self.assertLen(output_shape, 1 + num_state)
self.assertEqual(output_shape[0].as_list(), [None, 6])
for shape in output_shape[1:]:
self.assertEqual(shape.as_list(), [None, 3])
wrapper = keras.layers.Bidirectional(
rnn(3, return_state=True), merge_mode=None
)
output_shape = wrapper.compute_output_shape(input_shape)
# 1 for forward output and 1 for backward output, and the rest for
# states
self.assertLen(output_shape, 2 + num_state)
for shape in output_shape:
self.assertEqual(shape.as_list(), [None, 3])
def test_Bidirectional_output_shape_return_types(self):
class TestLayer(keras.layers.SimpleRNN):
def call(self, inputs):
return tf.concat([inputs, inputs], axis=-1)
def compute_output_shape(self, input_shape):
output_shape = tf.TensorShape(input_shape).as_list()
output_shape[-1] = output_shape[-1] * 2
return tf.TensorShape(output_shape)
class TestListLayer(TestLayer):
def compute_output_shape(self, input_shape):
shape = super().compute_output_shape(input_shape)
return shape.as_list()
class TestTupleLayer(TestLayer):
def compute_output_shape(self, input_shape):
shape = super().compute_output_shape(input_shape)
return tuple(shape.as_list())
# Layers can specify output shape as list/tuple/TensorShape
test_layers = [TestLayer, TestListLayer, TestTupleLayer]
for layer in test_layers:
input_layer = keras.layers.Bidirectional(layer(1))
inputs = keras.backend.placeholder(shape=(None, 2, 4))
output = input_layer(inputs)
self.assertEqual(output.shape.as_list(), [None, 2, 16])
self.assertEqual(
input_layer.compute_output_shape([None, 2, 4]).as_list(),
[None, 2, 16],
)
@tf.test.disable_with_predicate(
pred=tf.test.is_built_with_rocm,
skip_message=(
"Skipping as ROCm MIOpen does not support padded input yet."
),
)
def test_Bidirectional_last_output_with_masking(self):
rnn = keras.layers.LSTM
samples = 2
dim = 5
timesteps = 3
units = 3
merge_mode = "concat"
x = np.random.rand(samples, timesteps, dim)
        # Clear the first record's timestep 2. The last output should be the
        # same as the state, not zeroed.
x[0, 2] = 0
with self.cached_session():
inputs = keras.Input((timesteps, dim))
masked_inputs = keras.layers.Masking()(inputs)
wrapped = keras.layers.Bidirectional(
rnn(units, return_state=True), merge_mode=merge_mode
)
outputs = _to_list(wrapped(masked_inputs, training=True))
self.assertLen(outputs, 5)
self.assertEqual(outputs[0].shape.as_list(), [None, units * 2])
model = keras.Model(inputs, outputs)
y = _to_list(model.predict(x))
self.assertLen(y, 5)
self.assertAllClose(y[0], np.concatenate([y[1], y[3]], axis=1))
@parameterized.parameters([keras.layers.LSTM, keras.layers.GRU])
@tf.test.disable_with_predicate(
pred=tf.test.is_built_with_rocm,
skip_message=(
"Skipping as ROCm MIOpen does not support padded input yet."
),
)
def test_Bidirectional_sequence_output_with_masking(self, rnn):
samples = 2
dim = 5
timesteps = 3
units = 3
merge_mode = "concat"
x = np.random.rand(samples, timesteps, dim)
        # Clear the first record's timestep 2 and expect the output at
        # timestep 2 to be all zeros as well.
x[0, 2] = 0
with self.cached_session():
inputs = keras.Input((timesteps, dim))
masked_inputs = keras.layers.Masking()(inputs)
wrapped = keras.layers.Bidirectional(
rnn(units, return_sequences=True), merge_mode=merge_mode
)
outputs = _to_list(wrapped(masked_inputs, training=True))
self.assertLen(outputs, 1)
self.assertEqual(
outputs[0].shape.as_list(), [None, timesteps, units * 2]
)
model = keras.Model(inputs, outputs)
y = _to_list(model.predict(x))
self.assertLen(y, 1)
self.assertAllClose(y[0][0, 2], np.zeros(units * 2))
@parameterized.parameters(["sum", "concat"])
def test_custom_backward_layer(self, mode):
rnn = keras.layers.SimpleRNN
samples = 2
dim = 2
timesteps = 2
output_dim = 2
x = np.random.random((samples, timesteps, dim))
target_dim = 2 * output_dim if mode == "concat" else output_dim
y = np.random.random((samples, target_dim))
forward_layer = rnn(output_dim)
backward_layer = rnn(output_dim, go_backwards=True)
# test with Sequential model
model = keras.models.Sequential()
model.add(
keras.layers.Bidirectional(
forward_layer,
merge_mode=mode,
backward_layer=backward_layer,
input_shape=(timesteps, dim),
)
)
model.compile(optimizer="rmsprop", loss="mse")
model.fit(x, y, epochs=1, batch_size=1)
# check whether the model variables are present in the
# trackable list of objects
checkpointed_object_ids = {
id(o) for o in trackable_util.list_objects(model)
}
for v in model.variables:
self.assertIn(id(v), checkpointed_object_ids)
# test compute output shape
ref_shape = model.layers[-1].output.shape
shape = model.layers[-1].compute_output_shape((None, timesteps, dim))
self.assertListEqual(shape.as_list(), ref_shape.as_list())
# test config
model.get_config()
model = keras.models.model_from_json(model.to_json())
model.summary()
def test_custom_backward_layer_error_check(self):
rnn = keras.layers.LSTM
units = 2
forward_layer = rnn(units)
backward_layer = rnn(units)
with self.assertRaisesRegex(
ValueError, "should have different `go_backwards` value."
):
keras.layers.Bidirectional(
forward_layer,
merge_mode="concat",
backward_layer=backward_layer,
)
for attr in ("stateful", "return_sequences", "return_state"):
kwargs = {attr: True}
backward_layer = rnn(units, go_backwards=True, **kwargs)
with self.assertRaisesRegex(
ValueError,
'expected to have the same value for attribute "' + attr,
):
keras.layers.Bidirectional(
forward_layer,
merge_mode="concat",
backward_layer=backward_layer,
)
def test_custom_backward_layer_serialization(self):
rnn = keras.layers.LSTM
units = 2
forward_layer = rnn(units)
backward_layer = rnn(units, go_backwards=True)
layer = keras.layers.Bidirectional(
forward_layer, merge_mode="concat", backward_layer=backward_layer
)
config = layer.get_config()
layer_from_config = keras.layers.Bidirectional.from_config(config)
new_config = layer_from_config.get_config()
self.assertDictEqual(config, new_config)
def test_rnn_layer_name(self):
rnn = keras.layers.LSTM
units = 2
layer = keras.layers.Bidirectional(rnn(units, name="rnn"))
config = layer.get_config()
self.assertEqual(config["layer"]["config"]["name"], "rnn")
layer_from_config = keras.layers.Bidirectional.from_config(config)
self.assertEqual(layer_from_config.forward_layer.name, "forward_rnn")
self.assertEqual(layer_from_config.backward_layer.name, "backward_rnn")
def test_custom_backward_rnn_layer_name(self):
rnn = keras.layers.LSTM
units = 2
forward_layer = rnn(units)
backward_layer = rnn(units, go_backwards=True)
layer = keras.layers.Bidirectional(
forward_layer, merge_mode="concat", backward_layer=backward_layer
)
config = layer.get_config()
self.assertEqual(config["layer"]["config"]["name"], "lstm")
self.assertEqual(config["backward_layer"]["config"]["name"], "lstm_1")
layer_from_config = keras.layers.Bidirectional.from_config(config)
self.assertEqual(layer_from_config.forward_layer.name, "forward_lstm")
self.assertEqual(
layer_from_config.backward_layer.name, "backward_lstm_1"
)
def test_rnn_with_customized_cell(self):
batch = 20
dim = 5
timesteps = 3
units = 5
merge_mode = "sum"
cell = _ResidualLSTMCell(units)
forward_layer = keras.layers.RNN(cell)
inputs = keras.Input((timesteps, dim))
bidirectional_rnn = keras.layers.Bidirectional(
forward_layer, merge_mode=merge_mode
)
outputs = _to_list(bidirectional_rnn(inputs))
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss="mse")
model.fit(
np.random.random((batch, timesteps, dim)),
np.random.random((batch, units)),
epochs=1,
batch_size=10,
)
def test_rnn_with_customized_cell_stacking(self):
batch = 20
dim = 5
timesteps = 3
units = 5
merge_mode = "sum"
cell = [_ResidualLSTMCell(units), _ResidualLSTMCell(units)]
forward_layer = keras.layers.RNN(cell)
inputs = keras.Input((timesteps, dim))
bidirectional_rnn = keras.layers.Bidirectional(
forward_layer, merge_mode=merge_mode
)
outputs = _to_list(bidirectional_rnn(inputs))
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss="mse")
model.fit(
np.random.random((batch, timesteps, dim)),
np.random.random((batch, units)),
epochs=1,
batch_size=10,
)
@test_utils.run_v2_only
def test_wrapped_rnn_cell(self):
# See https://github.com/tensorflow/tensorflow/issues/26581.
batch = 20
dim = 5
timesteps = 3
units = 5
merge_mode = "sum"
cell = keras.layers.LSTMCell(units)
cell = ResidualWrapper(cell)
rnn = keras.layers.RNN(cell)
inputs = keras.Input((timesteps, dim))
wrapped = keras.layers.Bidirectional(rnn, merge_mode=merge_mode)
outputs = _to_list(wrapped(inputs))
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss="mse")
model.fit(
np.random.random((batch, timesteps, dim)),
np.random.random((batch, units)),
epochs=1,
batch_size=10,
)
@parameterized.parameters(["ave", "concat", "mul"])
@tf.test.disable_with_predicate(
pred=tf.test.is_built_with_rocm,
skip_message=(
"Skipping as ROCm RNN does not support ragged tensors yet."
),
)
def test_Bidirectional_ragged_input(self, merge_mode):
np.random.seed(100)
rnn = keras.layers.LSTM
units = 3
x = tf.ragged.constant(
[
[[1, 1, 1], [1, 1, 1]],
[[1, 1, 1]],
[[1, 1, 1], [1, 1, 1], [1, 1, 1], [1, 1, 1]],
[[1, 1, 1], [1, 1, 1], [1, 1, 1]],
],
ragged_rank=1,
)
x = tf.cast(x, "float32")
with self.cached_session():
if merge_mode == "ave":
merge_func = lambda y, y_rev: (y + y_rev) / 2
elif merge_mode == "concat":
merge_func = lambda y, y_rev: tf.concat((y, y_rev), axis=-1)
elif merge_mode == "mul":
merge_func = lambda y, y_rev: (y * y_rev)
inputs = keras.Input(
shape=(None, 3), batch_size=4, dtype="float32", ragged=True
)
layer = keras.layers.Bidirectional(
rnn(units, return_sequences=True), merge_mode=merge_mode
)
f_merged = keras.backend.function([inputs], layer(inputs))
f_forward = keras.backend.function(
[inputs], layer.forward_layer(inputs)
)
# TODO(kaftan): after KerasTensor refactor TF op layers should work
# with many composite tensors, and this shouldn't need to be a
# lambda layer.
reverse_layer = core.Lambda(tf.reverse, arguments=dict(axis=[1]))
f_backward = keras.backend.function(
[inputs], reverse_layer(layer.backward_layer(inputs))
)
y_merged = f_merged(x)
y_expected = merge_func(
convert_ragged_tensor_value(f_forward(x)),
convert_ragged_tensor_value(f_backward(x)),
)
y_merged = convert_ragged_tensor_value(y_merged)
self.assertAllClose(y_merged.flat_values, y_expected.flat_values)
def test_Bidirectional_nested_state_reuse(self):
if not tf.executing_eagerly():
self.skipTest("Only test eager mode.")
x = tf.random.normal([4, 8, 16])
layer = keras.layers.Bidirectional(
keras.layers.RNN(
[keras.layers.LSTMCell(5), keras.layers.LSTMCell(5)],
return_sequences=True,
return_state=True,
)
)
y = layer(x)
self.assertAllClose(layer([x] + y[1:]), layer(x, initial_state=y[1:]))
def test_full_input_spec(self):
# See https://github.com/tensorflow/tensorflow/issues/38403
inputs = keras.layers.Input(batch_shape=(1, 1, 1))
fw_state = keras.layers.Input(batch_shape=(1, 1))
bw_state = keras.layers.Input(batch_shape=(1, 1))
states = [fw_state, bw_state]
bidirectional_rnn = keras.layers.Bidirectional(
keras.layers.SimpleRNN(1, stateful=True)
)
rnn_output = bidirectional_rnn(inputs, initial_state=states)
model = keras.Model([inputs, fw_state, bw_state], rnn_output)
output1 = model.predict(
[np.ones((1, 1, 1)), np.ones((1, 1)), np.ones((1, 1))]
)
output2 = model.predict(
[np.ones((1, 1, 1)), np.ones((1, 1)), np.ones((1, 1))]
)
model.reset_states()
output3 = model.predict(
[np.ones((1, 1, 1)), np.ones((1, 1)), np.ones((1, 1))]
)
self.assertAllClose(output1, output3)
self.assertNotAllClose(output1, output2)
def test_reset_states(self):
ref_state = np.random.rand(1, 3).astype(np.float32)
# build model
inp = keras.Input(batch_shape=[1, 2, 3])
stateful = keras.layers.SimpleRNN(units=3, stateful=True)
stateless = keras.layers.SimpleRNN(units=3, stateful=False)
bid_stateless = keras.layers.Bidirectional(stateless)
bid_stateful = keras.layers.Bidirectional(stateful)
# required to correctly initialize the state in the layers
_ = keras.Model(
inp,
[
bid_stateless(inp),
bid_stateful(inp),
],
)
with self.assertRaisesRegex(
AttributeError,
"Layer must be stateful.",
):
bid_stateless.reset_states()
with self.assertRaisesRegex(AttributeError, "Layer must be stateful."):
bid_stateless.reset_states([])
bid_stateful.reset_states()
bid_stateful.reset_states([ref_state, ref_state])
with self.assertRaisesRegex(
ValueError,
"Unrecognized value for `states`. Expected `states` "
"to be list or tuple",
):
bid_stateful.reset_states({})
def test_trainable_parameter_argument(self):
inp = keras.layers.Input([None, 3])
def test(fwd, bwd, **kwargs):
def _remove_from_dict(d, remove_key):
if isinstance(d, dict):
d.pop(remove_key, None)
for key in list(d.keys()):
_remove_from_dict(d[key], remove_key)
bid = keras.layers.Bidirectional(fwd, backward_layer=bwd, **kwargs)
model = keras.Model(inp, bid(inp))
clone = keras.models.clone_model(model)
# Comparison should exclude `build_config`
clone_config = _remove_from_dict(clone.get_config(), "build_config")
model_config = _remove_from_dict(model.get_config(), "build_config")
self.assertEqual(clone_config, model_config)
# test fetching trainable from `layer`
fwd = keras.layers.SimpleRNN(units=3)
bwd = keras.layers.SimpleRNN(units=3, go_backwards=True)
fwd.trainable = True
test(fwd, None)
fwd.trainable = False
test(fwd, None)
fwd.trainable = True
bwd.trainable = False
test(fwd, bwd)
fwd.trainable = False
bwd.trainable = True
test(fwd, bwd)
fwd.trainable = True
bwd.trainable = True
test(fwd, bwd)
fwd.trainable = False
bwd.trainable = False
test(fwd, bwd)
# test fetching trainable from `kwargs`
test(fwd, None, trainable=True)
test(fwd, None, trainable=False)
def _to_list(ls):
if isinstance(ls, list):
return ls
else:
return [ls]
def convert_ragged_tensor_value(inputs):
if isinstance(inputs, tf.compat.v1.ragged.RaggedTensorValue):
flat_values = tf.convert_to_tensor(
value=inputs.flat_values, name="flat_values"
)
return tf.RaggedTensor.from_nested_row_splits(
flat_values, inputs.nested_row_splits, validate=False
)
return inputs
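# Illustrative sketch of how merge_mode shapes the Bidirectional output for
# the cases exercised above (arbitrary sizes; assumes eager execution).
def _bidirectional_merge_mode_demo():
    x = tf.zeros((2, 5, 4))  # (batch, timesteps, features)
    concat = keras.layers.Bidirectional(
        keras.layers.LSTM(3), merge_mode="concat"
    )
    summed = keras.layers.Bidirectional(keras.layers.LSTM(3), merge_mode="sum")
    split = keras.layers.Bidirectional(keras.layers.LSTM(3), merge_mode=None)
    assert concat(x).shape == (2, 6)  # forward and backward concatenated
    assert summed(x).shape == (2, 3)  # elementwise sum
    assert all(t.shape == (2, 3) for t in split(x))  # [forward, backward] list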
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/layers/rnn/bidirectional_test.py/0 | {
"file_path": "tf-keras/tf_keras/layers/rnn/bidirectional_test.py",
"repo_id": "tf-keras",
"token_count": 21198
} | 191 |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for GRU V1 layer."""
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
from tensorflow.core.protobuf import rewriter_config_pb2
import tf_keras as keras
from tf_keras.layers.rnn import gru
from tf_keras.layers.rnn import gru_v1
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
from tf_keras.utils import np_utils
# Global config for grappler setting that is used for graph mode test.
_rewrites = rewriter_config_pb2.RewriterConfig()
_rewrites.implementation_selector = rewriter_config_pb2.RewriterConfig.ON
_rewrites.min_graph_nodes = -1
_graph_options = tf.compat.v1.GraphOptions(rewrite_options=_rewrites)
_config = tf.compat.v1.ConfigProto(graph_options=_graph_options)
@test_utils.run_all_without_tensor_float_32("RNN GRU can use TF32 on GPU")
@test_combinations.run_all_keras_modes(config=_config)
class GRUGraphRewriteTest(test_combinations.TestCase):
@tf.test.disable_with_predicate(
pred=tf.test.is_built_with_rocm,
skip_message=(
"Skipping as ROCm MIOpen does not support padded input yet."
),
)
@test_utils.run_v2_only
def test_gru_feature_parity_v1_v2(self):
input_shape = 10
rnn_state_size = 8
timestep = 4
batch = 20
(x_train, y_train), _ = test_utils.get_test_data(
train_samples=batch,
test_samples=0,
input_shape=(timestep, input_shape),
num_classes=rnn_state_size,
random_seed=87654321,
)
y_train = np_utils.to_categorical(y_train, rnn_state_size)
        # For the last two batch items of the test data, we zero out the last
        # timestep to simulate variable-length sequences and exercise masking.
x_train[-2:, -1, :] = 0.0
y_train[-2:] = 0
inputs = keras.layers.Input(
shape=[timestep, input_shape], dtype=tf.float32
)
masked_input = keras.layers.Masking()(inputs)
gru_layer = gru_v1.GRU(
rnn_state_size, recurrent_activation="sigmoid", reset_after=True
)
output = gru_layer(masked_input)
gru_model = keras.models.Model(inputs, output)
weights = gru_model.get_weights()
y_1 = gru_model.predict(x_train)
gru_model.compile("rmsprop", "mse")
gru_model.fit(x_train, y_train)
y_2 = gru_model.predict(x_train)
with test_utils.device(should_use_gpu=True):
cudnn_layer = gru.GRU(
rnn_state_size, recurrent_activation="sigmoid", reset_after=True
)
cudnn_model = keras.models.Model(inputs, cudnn_layer(masked_input))
cudnn_model.set_weights(weights)
y_3 = cudnn_model.predict(x_train)
cudnn_model.compile("rmsprop", "mse")
cudnn_model.fit(x_train, y_train)
y_4 = cudnn_model.predict(x_train)
self.assertAllClose(y_1, y_3, rtol=2e-5, atol=2e-5)
self.assertAllClose(y_2, y_4, rtol=2e-5, atol=2e-5)
@parameterized.named_parameters(
# test_name, time_major, go_backwards
("normal", False, False),
("time_major", True, False),
("go_backwards", False, True),
("both", True, True),
)
def test_time_major_and_go_backward_v1_v2(self, time_major, go_backwards):
input_shape = 10
rnn_state_size = 8
timestep = 4
batch = 100
x_train = np.random.random((batch, timestep, input_shape))
def build_model(layer_cls):
inputs = keras.layers.Input(
shape=[timestep, input_shape], dtype=tf.float32
)
layer = layer_cls(
rnn_state_size,
recurrent_activation="sigmoid",
time_major=time_major,
return_sequences=True,
go_backwards=go_backwards,
reset_after=True,
)
if time_major:
converted_input = keras.layers.Lambda(
lambda t: tf.transpose(t, [1, 0, 2])
)(inputs)
outputs = layer(converted_input)
outputs = keras.layers.Lambda(
lambda t: tf.transpose(t, [1, 0, 2])
)(outputs)
else:
outputs = layer(inputs)
return keras.models.Model(inputs, outputs)
gru_model = build_model(gru_v1.GRU)
y_ref = gru_model.predict(x_train)
weights = gru_model.get_weights()
gru_v2_model = build_model(gru.GRU)
gru_v2_model.set_weights(weights)
y = gru_v2_model.predict(x_train)
self.assertAllClose(y, y_ref)
@tf.test.disable_with_predicate(
pred=tf.test.is_built_with_rocm,
skip_message=(
"Skipping as ROCm MIOpen does not support padded input yet."
),
)
@test_utils.run_v2_only
def test_explicit_device_with_go_backward_and_mask_v1(self):
batch_size = 8
timestep = 7
masksteps = 5
units = 4
inputs = np.random.randn(batch_size, timestep, units).astype(np.float32)
mask = np.ones((batch_size, timestep)).astype(bool)
mask[:, masksteps:] = 0
gru_layer = gru_v1.GRU(units, return_sequences=True, go_backwards=True)
with test_utils.device(should_use_gpu=True):
outputs_masked = gru_layer(inputs, mask=tf.constant(mask))
outputs_trimmed = gru_layer(inputs[:, :masksteps])
self.assertAllClose(outputs_masked[:, -masksteps:], outputs_trimmed)
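# Illustrative sketch of the weight-compatibility contract checked above:
# with recurrent_activation="sigmoid" and reset_after=True, gru_v1.GRU and
# gru.GRU share a weight layout, so weights copied from one should reproduce
# the other's predictions. Sizes and tolerances mirror the parity test above.
def _gru_v1_v2_weight_transfer_demo():
    inputs = keras.Input((4, 10))
    m1 = keras.Model(
        inputs,
        gru_v1.GRU(8, recurrent_activation="sigmoid", reset_after=True)(inputs),
    )
    m2 = keras.Model(
        inputs,
        gru.GRU(8, recurrent_activation="sigmoid", reset_after=True)(inputs),
    )
    m2.set_weights(m1.get_weights())
    x = np.random.random((2, 4, 10)).astype("float32")
    np.testing.assert_allclose(
        m1.predict(x), m2.predict(x), rtol=2e-5, atol=2e-5
    )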
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/layers/rnn/gru_v1_test.py/0 | {
"file_path": "tf-keras/tf_keras/layers/rnn/gru_v1_test.py",
"repo_id": "tf-keras",
"token_count": 2927
} | 192 |
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for TF-Keras subclassed layers utilizing desired user syntax."""
import tensorflow.compat.v2 as tf
import tf_keras as keras
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
from tf_keras.utils import tf_utils
@test_combinations.run_all_keras_modes
@test_combinations.run_with_all_model_types
class SubclassedLayersTest(test_combinations.TestCase):
def test_simple_build_with_constant(self):
class BuildConstantLayer(keras.layers.Layer):
def build(self, input_shape):
self.b = tf.convert_to_tensor(2.0)
def call(self, inputs):
return self.b * inputs
layer = BuildConstantLayer()
model = test_utils.get_model_from_layers(
[layer, keras.layers.Dense(1)], input_shape=(1,)
)
x = tf.convert_to_tensor([[3.0]])
self.assertEqual(
tf_utils.is_symbolic_tensor(model(x)), not tf.executing_eagerly()
)
self.assertEqual(
tf_utils.is_symbolic_tensor(layer(x)), not tf.executing_eagerly()
)
self.assertAllClose(keras.backend.get_value(layer(x)), [[6.0]])
def test_build_with_derived_constant(self):
class BuildDerivedConstantLayer(keras.layers.Layer):
def build(self, input_shape):
a = tf.convert_to_tensor(1.0)
b = 2.0 * a
self.variable = tf.Variable(b)
self.constant = tf.convert_to_tensor(self.variable)
def call(self, inputs):
return self.variable * self.constant * inputs
layer = BuildDerivedConstantLayer()
model = test_utils.get_model_from_layers(
[layer, keras.layers.Dense(1)], input_shape=(1,)
)
x = tf.convert_to_tensor([[3.0]])
self.assertEqual(
tf_utils.is_symbolic_tensor(model(x)), not tf.executing_eagerly()
)
self.assertEqual(
tf_utils.is_symbolic_tensor(layer(x)), not tf.executing_eagerly()
)
self.assertAllClose(keras.backend.get_value(layer(x)), [[12.0]])
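# Illustrative sketch of the user-facing pattern exercised above: a
# subclassed layer may create plain tensors in build() and use them in
# call(). Names and values here are arbitrary.
class _ScaleByTwo(keras.layers.Layer):
    def build(self, input_shape):
        self.scale = tf.convert_to_tensor(2.0)

    def call(self, inputs):
        return self.scale * inputs

# In eager mode, _ScaleByTwo()(tf.constant([[3.0]])) evaluates to [[6.0]].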
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/layers/subclassed_layers_test.py/0 | {
"file_path": "tf-keras/tf_keras/layers/subclassed_layers_test.py",
"repo_id": "tf-keras",
"token_count": 1210
} | 193 |
# Copyright 2021 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =============================================================================
"""Contains a shim to allow using TF1 get_variable code in TF2."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import contextlib
import functools
import tensorflow.compat.v2 as tf
from tf_keras.engine import base_layer
from tf_keras.utils import layer_utils
from tf_keras.utils import tf_inspect
# isort: off
from tensorflow.python.ops import variable_scope as vs
from tensorflow.python.platform import tf_logging as logging
from tensorflow.python.util.tf_export import keras_export
def as_shape(shape):
"""Converts the given object to a TensorShape."""
if isinstance(shape, tf.TensorShape):
return shape
else:
return tf.TensorShape(shape)
def _is_callable_object(obj):
return hasattr(obj, "__call__") and tf_inspect.ismethod(obj.__call__)
def _has_kwargs(fn):
"""Returns whether the passed callable has **kwargs in its signature.
Args:
fn: Function, or function-like object (e.g., result of
`functools.partial`).
Returns:
`bool`: if `fn` has **kwargs in its signature.
Raises:
`TypeError`: If fn is not a Function, or function-like object.
"""
if isinstance(fn, functools.partial):
fn = fn.func
elif _is_callable_object(fn):
fn = fn.__call__
elif not callable(fn):
raise TypeError(
f"fn should be a function-like object, but is of type {type(fn)}."
)
return tf_inspect.getfullargspec(fn).varkw is not None
def fn_args(fn):
"""Get argument names for function-like object.
Args:
fn: Function, or function-like object (e.g., result of
`functools.partial`).
Returns:
`tuple` of string argument names.
Raises:
ValueError: if partial function has positionally bound arguments
"""
if isinstance(fn, functools.partial):
args = fn_args(fn.func)
args = [a for a in args[len(fn.args) :] if a not in (fn.keywords or [])]
else:
if hasattr(fn, "__call__") and tf_inspect.ismethod(fn.__call__):
fn = fn.__call__
args = tf_inspect.getfullargspec(fn).args
if _is_bound_method(fn) and args:
# If it's a bound method, it may or may not have a self/cls first
# argument; for example, self could be captured in *args.
# If it does have a positional argument, it is self/cls.
args.pop(0)
return tuple(args)
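# Illustrative sketch: fn_args skips arguments already bound by
# functools.partial (and drops self/cls for bound methods).
def _fn_args_demo():
    def f(a, b, c):
        return a + b + c

    assert fn_args(f) == ("a", "b", "c")
    assert fn_args(functools.partial(f, 1)) == ("b", "c")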
def _is_bound_method(fn):
_, fn = tf.__internal__.decorator.unwrap(fn)
return tf_inspect.ismethod(fn) and (fn.__self__ is not None)
def validate_synchronization_aggregation_trainable(
synchronization, aggregation, trainable, name
):
"""Given user-provided variable properties, sets defaults and validates."""
if aggregation is None:
aggregation = tf.compat.v1.VariableAggregation.NONE
else:
if not isinstance(
aggregation,
(tf.compat.v1.VariableAggregation, tf.VariableAggregation),
):
try:
aggregation = tf.VariableAggregation(aggregation)
except ValueError:
raise ValueError(
"Invalid variable aggregation mode: {} "
"for variable: {}".format(aggregation, name)
)
if synchronization is None:
synchronization = tf.VariableSynchronization.AUTO
else:
try:
synchronization = tf.VariableSynchronization(synchronization)
except ValueError:
raise ValueError(
"Invalid variable synchronization mode: {} "
"for variable: {}".format(synchronization, name)
)
if trainable is None:
trainable = synchronization != tf.VariableSynchronization.ON_READ
return synchronization, aggregation, trainable
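# Illustrative sketch of the defaulting behavior above: with all three left
# as None, synchronization falls back to AUTO, aggregation to NONE, and
# trainable to True (because synchronization is not ON_READ).
def _validate_defaults_demo():
    sync, agg, trainable = validate_synchronization_aggregation_trainable(
        synchronization=None, aggregation=None, trainable=None, name="w"
    )
    assert sync == tf.VariableSynchronization.AUTO
    assert agg == tf.compat.v1.VariableAggregation.NONE
    assert trainable is True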
class _EagerVariableStore(tf.Module):
"""TF2-safe VariableStore that avoids collections & tracks regularizers.
New variable names and new variables can be created; all stored
variables are initialized with the initializer passed to __init__.
    All variables get created in `tf.init_scope()` to avoid a bad
    interaction between `tf.function` `FuncGraph` internals, Keras
    Functional Models, and TPUStrategy variable initialization.
    Also, it always acts as if `reuse` is set to either `True` or
    `tf.compat.v1.AUTO_REUSE`.
Attributes:
vars: a dictionary with string names (same as passed in GetVar) as keys
and the corresponding TensorFlow Variables as values.
regularizers: a dictionary with string names as keys and the corresponding
callables that return losses as values.
layers: a dictionary with string names as keys and the corresponding
nested keras layers as values.
"""
def __init__(self):
"""Create a variable store."""
self._vars = {} # A dictionary of the stored TensorFlow variables.
        self._regularizers = {}  # A dict mapping var names to their regularizers.
self._layers = {} # A dictionary of stored keras layers.
self._store_eager_variables = True
@contextlib.contextmanager
def scope(self):
with vs.with_variable_store(self):
yield
def get_variable(
self,
name,
shape=None,
dtype=tf.float32,
initializer=None,
regularizer=None,
reuse=None,
trainable=None,
collections=None,
caching_device=None,
partitioner=None,
validate_shape=True,
use_resource=None,
custom_getter=None,
constraint=None,
synchronization=tf.VariableSynchronization.AUTO,
aggregation=tf.compat.v1.VariableAggregation.NONE,
):
"""Gets an existing variable with these parameters or create a new one.
If a variable with the given name is already stored, we return the
stored variable. Otherwise, we create a new one.
Set `reuse` to `True` when you only want to reuse existing Variables.
Set `reuse` to None (the default) or tf.compat.v1.AUTO_REUSE when you
want variables to be created if they don't exist or returned if they do.
In this shim, `reuse` of `False` will be treated as auto-reuse.
If initializer is `None` (the default), the default initializer passed
in the constructor is used. If that one is `None` too, we use a new
`glorot_uniform_initializer`. If initializer is a Tensor, we use it as a
value and derive the shape from the initializer.
If a partitioner is provided, a `PartitionedVariable` is returned.
Accessing this object as a `Tensor` returns the shards concatenated
along the partition axis.
Some useful partitioners are available. See, e.g.,
`variable_axis_size_partitioner` and `min_max_variable_partitioner`.
Args:
name: The name of the new or existing variable.
shape: Shape of the new or existing variable.
dtype: Type of the new or existing variable. Defaults to `DT_FLOAT`.
initializer: Initializer for the variable.
regularizer: A (Tensor -> Tensor or None) function; the result of
applying it on a newly created variable will be added to the
collection GraphKeys.REGULARIZATION_LOSSES and can be used for
regularization.
          reuse: a Boolean, None, or tf.AUTO_REUSE. Controls reuse or creation
            of variables. In this shim, a `reuse` value of `False` or `None`
            is treated as `tf.compat.v1.AUTO_REUSE`.
trainable: If `True` also add the variable to the graph collection
`GraphKeys.TRAINABLE_VARIABLES` (see `tf.Variable`). `trainable`
becomes `True`, unless `synchronization` is set to `ON_READ`, in
which case it becomes `False`. Defaults to `True`.
collections: List of graph collections keys to add the `Variable` to.
Defaults to `[GraphKeys.GLOBAL_VARIABLES]` (see `tf.Variable`).
caching_device: Optional device string or function describing where
the Variable should be cached for reading. `None` to use the
Variable's device. If not `None`, caches on another device.
Typical use is to cache on the device where the Ops using the
`Variable` reside, to deduplicate copying through `Switch` and other
conditional statements. Defaults to `None`.
partitioner: Optional callable that accepts a fully defined
`TensorShape` and dtype of the `Variable` to be created, and returns
a list of partitions for each axis (currently only one axis can be
partitioned).
validate_shape: If False, allows the variable to be initialized with a
value of unknown shape. If True, the default, the shape of
initial_value must be known.
          use_resource: If False, creates a regular Variable. If True, creates
            an experimental ResourceVariable instead, which has well-defined
            semantics. When eager execution is enabled this argument is always
            forced to be True. Defaults to `False` (a default that is expected
            to change to `True` later).
custom_getter: Callable that takes as a first argument the true
getter, and allows overwriting the internal get_variable method. The
signature of `custom_getter` should match that of this method, but
the most future-proof version will allow for changes:
`def custom_getter(getter, *args, **kwargs)`.
Direct access to all `get_variable` parameters is also allowed:
`def custom_getter(getter, name, *args, **kwargs)`.
A simple identity custom getter that simply creates variables with
modified names is:
```python
def custom_getter(getter, name, *args, **kwargs):
return getter(name + '_suffix', *args, **kwargs)
```
constraint: An optional projection function to be applied to the
variable after being updated by an `Optimizer` (e.g. used to
implement norm constraints or value constraints for layer weights).
The function must take as input the unprojected Tensor representing
the value of the variable and return the Tensor for the projected
value (which must have the same shape). Constraints are not safe to
use when doing asynchronous distributed training.
          synchronization: Indicates when a distributed variable will be
aggregated. Accepted values are constants defined in the class
`tf.VariableSynchronization`. By default the synchronization is set
to `AUTO` and the current `DistributionStrategy` chooses when to
synchronize.
aggregation: Indicates how a distributed variable will be aggregated.
Accepted values are constants defined in the class
`tf.VariableAggregation`.
Returns:
The created or existing `Variable` (or `PartitionedVariable`, if a
partitioner was used).
Raises:
ValueError: when creating a new variable and shape is not declared,
when reusing a variable and specifying a conflicting shape,
or when violating reuse during variable creation.
RuntimeError: when eager execution is enabled and not called from an
EagerVariableStore.
"""
if custom_getter is not None and not callable(custom_getter):
raise ValueError(
f"Passed a custom_getter which is not callable: {custom_getter}"
)
with tf.init_scope():
if tf.executing_eagerly():
# Variable creation and initialization takes place in
# `init_scope`s; as such, if an `init_scope` lifts us into the
# eager context, then we need to use `ResourceVariable`s.
use_resource = True
# Note that it's fine to reuse eager variables whose initialization was
# lifted from a function-building graph into the eager context (that's
# why the following clause is not wrapped in an `init_scope`); lifted
# variables are tracked by the graph's `VariableStore`.
if not reuse:
reuse = tf.compat.v1.AUTO_REUSE
# If a *_ref type is passed in an error would be triggered further down
# the stack. We prevent this using base_dtype to get a non-ref version
# of the type, before doing anything else. When _ref types are removed
# in favor of resources, this line can be removed.
try:
dtype = dtype.base_dtype
except AttributeError:
# .base_dtype not existing means that we will try and use the raw
# dtype which was passed in - this might be a NumPy type which is
# valid.
pass
# This is the main logic of get_variable. However, custom_getter
# may override this logic. So we save it as a callable and pass
# it to custom_getter.
# Note: the parameters of _true_getter, and their documentation, match
# *exactly* item-for-item with the docstring of this method.
def _true_getter(
name,
shape=None,
dtype=tf.float32,
initializer=None,
regularizer=None,
reuse=None,
trainable=None,
collections=None,
caching_device=None,
partitioner=None,
validate_shape=True,
use_resource=None,
constraint=None,
synchronization=tf.VariableSynchronization.AUTO,
aggregation=tf.compat.v1.VariableAggregation.NONE,
):
# Partitioned variable currently unsupported w/ the shim
if partitioner is not None:
raise ValueError(
"`partitioner` arg for `get_variable` is unsupported in "
"TF2. File a bug if you need help. "
"You passed %s" % partitioner
)
# Single variable case
if f"{name}/part_0" in self._vars:
raise ValueError(
"No partitioner was provided, but a partitioned version of "
"the variable was found: %s/part_0. Perhaps a variable of "
"the same name was already created with "
"partitioning?" % name
)
return self._get_single_variable(
name=name,
shape=shape,
dtype=dtype,
initializer=initializer,
regularizer=regularizer,
reuse=reuse,
trainable=trainable,
caching_device=caching_device,
validate_shape=validate_shape,
constraint=constraint,
synchronization=synchronization,
aggregation=aggregation,
)
(
synchronization,
aggregation,
trainable,
) = validate_synchronization_aggregation_trainable(
synchronization, aggregation, trainable, name
)
if custom_getter is not None:
# Handle backwards compatibility with getter arguments that were
# added to the API after users started writing custom getters.
custom_getter_kwargs = {
"getter": _true_getter,
"name": name,
"shape": shape,
"dtype": dtype,
"initializer": initializer,
"regularizer": regularizer,
"reuse": reuse,
"trainable": trainable,
"collections": collections,
"caching_device": caching_device,
"partitioner": partitioner,
"validate_shape": validate_shape,
"use_resource": use_resource,
"synchronization": synchronization,
"aggregation": aggregation,
}
# `fn_args` and `has_kwargs` can handle functions,
# `functools.partial`, `lambda`.
if "constraint" in fn_args(custom_getter) or _has_kwargs(
custom_getter
):
custom_getter_kwargs["constraint"] = constraint
return custom_getter(**custom_getter_kwargs)
else:
return _true_getter(
name,
shape=shape,
dtype=dtype,
initializer=initializer,
regularizer=regularizer,
reuse=reuse,
trainable=trainable,
collections=collections,
caching_device=caching_device,
partitioner=partitioner,
validate_shape=validate_shape,
use_resource=use_resource,
constraint=constraint,
synchronization=synchronization,
aggregation=aggregation,
)
def _get_single_variable(
self,
name,
shape=None,
dtype=tf.float32,
initializer=None,
regularizer=None,
partition_info=None,
reuse=None,
trainable=None,
caching_device=None,
validate_shape=True,
constraint=None,
synchronization=tf.VariableSynchronization.AUTO,
aggregation=tf.compat.v1.VariableAggregation.NONE,
):
"""Get or create a single Variable (e.g. a shard or entire variable).
See the documentation of get_variable above (ignore partitioning
components) for details.
Args:
name: see get_variable.
shape: see get_variable.
dtype: see get_variable.
initializer: see get_variable.
regularizer: see get_variable.
partition_info: _PartitionInfo object.
reuse: see get_variable.
trainable: see get_variable.
caching_device: see get_variable.
validate_shape: see get_variable.
constraint: see get_variable.
synchronization: see get_variable.
aggregation: see get_variable.
Returns:
A Variable. See documentation of get_variable above.
Raises:
ValueError: See documentation of get_variable above.
"""
# Set to true if initializer is a constant.
initializing_from_value = False
if initializer is not None and not callable(initializer):
initializing_from_value = True
if shape is not None and initializing_from_value:
raise ValueError(
"If initializer is a constant, do not specify shape."
)
dtype = tf.as_dtype(dtype)
shape = as_shape(shape)
if name in self._vars:
# Here we handle the case when returning an existing variable.
found_var = self._vars[name]
if not shape.is_compatible_with(found_var.get_shape()):
raise ValueError(
"Trying to share variable %s, but specified shape %s"
" and found shape %s."
% (name, shape, found_var.get_shape())
)
if not dtype.is_compatible_with(found_var.dtype):
dtype_str = dtype.name
found_type_str = found_var.dtype.name
raise ValueError(
"Trying to share variable %s, but specified dtype %s"
" and found dtype %s." % (name, dtype_str, found_type_str)
)
return found_var
# The code below handles only the case of creating a new variable.
if reuse is True:
raise ValueError(
"Variable %s does not exist, or was not created with "
"tf.get_variable(). Did you mean to set "
"reuse=tf.AUTO_REUSE in VarScope?" % name
)
# Create the tensor to initialize the variable with default value.
if initializer is None:
(
initializer,
initializing_from_value,
) = self._get_default_initializer(
name=name, shape=shape, dtype=dtype
)
# Enter an init scope when creating the initializer.
with tf.init_scope():
if initializing_from_value:
init_val = initializer
variable_dtype = None
else:
# Instantiate initializer if provided initializer is a type
# object.
if tf_inspect.isclass(initializer):
initializer = initializer()
if shape.is_fully_defined():
if (
"partition_info"
in tf_inspect.getargspec(initializer).args
):
init_val = functools.partial(
initializer,
shape.as_list(),
dtype=dtype,
partition_info=partition_info,
)
else:
init_val = functools.partial(
initializer, shape.as_list(), dtype=dtype
)
variable_dtype = dtype.base_dtype
else:
init_val = initializer
variable_dtype = None
# Create the variable (Always eagerly as a workaround for a strange
        # tpu / funcgraph / keras functional model interaction)
with tf.init_scope():
v = tf.Variable(
initial_value=init_val,
name=name,
trainable=trainable,
caching_device=caching_device,
dtype=variable_dtype,
validate_shape=validate_shape,
constraint=constraint,
synchronization=synchronization,
aggregation=aggregation,
)
self._vars[name] = v
logging.vlog(
1,
"Created variable %s with shape %s and init %s",
v.name,
format(shape),
initializer,
)
# Run the regularizer if requested and save the resulting loss.
if regularizer:
self.add_regularizer(v, regularizer)
return v
def get_or_create_layer(self, name, create_layer_method):
if name not in self._layers:
layer = create_layer_method()
self._layers[name] = layer
if isinstance(layer, base_layer.Layer):
self._regularizers[name] = lambda: tf.math.reduce_sum(
layer.losses
)
return self._layers[name]
def add_regularizer(self, var, regularizer):
self._regularizers[var.name] = functools.partial(regularizer, var)
# Initialize variable when no initializer provided
def _get_default_initializer(self, name, shape=None, dtype=tf.float32):
"""Provide a default initializer and a corresponding value.
Args:
name: see get_variable.
shape: see get_variable.
dtype: see get_variable.
Returns:
initializer and initializing_from_value. See get_variable above.
Raises:
ValueError: When giving unsupported dtype.
"""
del shape
# If dtype is DT_FLOAT, provide a uniform unit scaling initializer
if dtype.is_floating:
initializer = tf.compat.v1.glorot_uniform_initializer()
initializing_from_value = False
# If dtype is DT_INT/DT_UINT, provide a default value `zero`
# If dtype is DT_BOOL, provide a default value `FALSE`
elif (
dtype.is_integer
or dtype.is_unsigned
or dtype.is_bool
or dtype == tf.string
):
initializer = tf.compat.v1.zeros_initializer()
initializing_from_value = False
        # NOTE: Do we need to support handling DT_STRING and DT_COMPLEX
        # here?
else:
raise ValueError(
"An initializer for variable %s of %s is required"
% (name, dtype.base_dtype)
)
return initializer, initializing_from_value
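def _demo_default_initializer_rules():
    # A minimal sketch of `_get_default_initializer` above, assuming eager
    # execution: floating dtypes get a glorot_uniform initializer, while
    # integer, unsigned, bool, and string dtypes get zeros. Any other dtype
    # raises a ValueError. The variable store and names are illustrative.
    store = _EagerVariableStore()
    float_init, _ = store._get_default_initializer("w", dtype=tf.float32)
    int_init, _ = store._get_default_initializer("count", dtype=tf.int64)
    return float_init, int_init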
@keras_export(v1=["keras.utils.track_tf1_style_variables"])
def track_tf1_style_variables(method):
"""Wrap layer & module methods in this decorator to capture tf1-style
weights.
Decorating a `tf.keras.Layer`'s or `tf.Module`'s methods with this
decorator will cause the layer/module to track weights created/used
via `tf.compat.v1.get_variable` (and by extension `tf.compat.v1.layers`)
inside the decorated method.
In addition to tracking the weights themselves under the standard
`layer.variable`/`module.variable`/etc. properties, if the method belongs
to a `tf.keras.Layer` then any regularization losses specified via the
`get_variable` or `tf.compat.v1.layers` regularizer arguments will get
tracked by the layer under the standard `layer.losses` property.
This tracking enables using large classes of TF1-style model-forward-pass
code inside of TF-Keras layers or `tf.Modules` in TF2 with TF2 behaviors
enabled.
Example of capturing tf.compat.v1.layer-based modeling code as a Keras
layer:
```python
class WrappedDoubleDenseLayer(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
with tf.compat.v1.variable_scope("double_dense_layer"):
out = tf.compat.v1.layers.dense(
inputs, self.units, name="dense_one",
kernel_initializer=tf.compat.v1.random_normal_initializer,
kernel_regularizer="l2")
out = tf.compat.v1.layers.dense(
out, self.units, name="dense_two",
kernel_initializer=tf.compat.v1.random_normal_initializer(),
kernel_regularizer="l2")
return out
# Create a layer that can be used as a standard keras layer
layer = WrappedDoubleDenseLayer(10)
# call the layer on inputs
layer(...)
# Variables created/used within the scope will be tracked by the layer
layer.weights
layer.trainable_variables
# Regularization losses will be captured in layer.losses after a call,
# just like any other TF-Keras layer
reg_losses = layer.losses
```
Example of capturing tf.compat.v1.get_variable-based modeling code as
a TF-Keras layer:
```python
class WrappedDoubleDenseLayer(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
out = inputs
with tf.compat.v1.variable_scope("double_dense_layer"):
with tf.compat.v1.variable_scope("dense_one"):
# The weights are created with a `regularizer`,
# so the layer should track their regularization losses
kernel = tf.compat.v1.get_variable(
shape=[out.shape[-1], self.units],
regularizer=regularizers.L2(),
initializer=init_ops.ones_initializer(),
name="kernel")
bias = tf.compat.v1.get_variable(
shape=[self.units,],
initializer=init_ops.zeros_initializer(),
name="bias")
out = tf.compat.v1.math.matmul(out, kernel)
out = tf.compat.v1.nn.bias_add(out, bias)
with tf.compat.v1.variable_scope("dense_two"):
kernel = tf.compat.v1.get_variable(
shape=[out.shape[-1], self.units],
regularizer=regularizers.L2(),
initializer=init_ops.ones_initializer(),
name="kernel")
bias = tf.compat.v1.get_variable(
shape=[self.units,],
initializer=init_ops.zeros_initializer(),
name="bias")
out = tf.compat.v1.math.matmul(out, kernel)
out = tf.compat.v1.nn.bias_add(out, bias)
return out
# Create a layer that can be used as a standard keras layer
layer = WrappedDoubleDenseLayer(10)
# call the layer on inputs
layer(...)
# Variables created/used within the scope will be tracked by the layer
layer.weights
layer.trainable_variables
# Regularization losses will be captured in layer.losses after a call,
# just like any other TF-Keras layer
reg_losses = layer.losses
```
Regularization losses:
Any regularizers specified in the `get_variable` calls or
`compat.v1.layer` creations will get captured if they occur in your
decorated method and the method belongs to a
`tf.keras.Layer`/`tf.keras.Module`. Regularization losses
are accessible in `layer.losses` after a call just like in a standard
TF-Keras layer, and will be captured by any model that includes this
layer. Regularization losses attached to TF-Keras layers/models set as
attributes of your layer will also get captured in the standard TF-Keras
regularization loss tracking.
(While Modules have no `losses` property, no-arg callables to compute
the regularization losses may be tracked as dict values in a private
`module._tf1_style_var_store._regularizers` property, but only for
`tf.compat.v1.layers` and `get_variable` weights and not for any other
nested TF-Keras layers/tf.Modules)
Variable scope / variable reuse:
variable-scope based reuse in your decorated method will be respected,
and work like variable-scope based reuse in TF1.
Variable Names/Pre-trained checkpoint loading:
Variable naming from get_variable and `compat.v1.layer` layers will match
the TF1 names, so you should be able to re-use your old name-based
checkpoints. Variable naming for TF-Keras layers/models or for variables
created by `tf.Variable` may change when going to eager execution.
Training Arg if you decorate `layer.call`:
TF-Keras will pass a `training` arg to this layer if `call` contains
a `training` arg or a `**kwargs` varargs in its call signature,
similarly to how keras passes `training` to other layers in TF2 that have
similar signatures in their `call` implementations.
See more details in the docs
on `tf.keras.layers.Layer` to understand what will be passed and when.
Note: tf.compat.v1.layers are usually not called with `training=None`,
    so the training arg to the decorated method might not feed through to them
unless you pass it to their calls explicitly.
Caveats:
* TF2 will not prune unused variable updates (or unused outputs). You may
need to adjust your forward pass code to avoid computations or variable
updates that you don't intend to use.
    * Avoid nesting variable creation in tf.function inside of
methods decorated with `track_tf1_style_variables`
While the method may safely be used from inside a `tf.function`, using
a function inside of a decorated method may break the variable scoping.
* This decorator only adds implicit tracking for legacy tf1-style
get_variable / compat.v1.layers usage.
If you would like to use nested TF-Keras layers/models
inside the decorated method, you need to
assign them as attributes of your layer so that Keras/Module's standard
object-oriented weights (and loss tracking for layers) will kick in.
See the intro to modules, layers, and models
[guide](https://www.tensorflow.org/guide/intro_to_modules) for more
info. As a backup, the `compat.v1.keras.utils.get_or_create_layer`
method will ease tracking nested keras model weights and losses for
existing TF1 code, but new code should use explicit tracking.
Args:
method: The method to decorate. This should belong to a custom tf.Module,
tf.keras.layers.Layer, or tf.keras.Model.
Returns:
The decorated method.
"""
def _method_wrapper(self, *args, **kwargs):
var_store = getattr(self, "_tf1_style_var_store", None)
if not var_store:
if not isinstance(self, tf.Module):
# Raise an error if you incorrectly decorate a method
# that is not a method of a Module, Layer, or Model:
raise ValueError(
"`@tf.compat.v1.keras.utils.track_tf1_layers_and_variables`"
" must be applied to a method of a subclassed `tf.Module`, "
"`tf.keras.layers.Layer`, or `tf.keras.Model` and which "
"takes `self` as the first argument. But, the first "
"argument passed to the decorated method was {}, which "
"does not extend Module, Layer, or Model.".format(self)
)
var_store = _EagerVariableStore()
self._tf1_style_var_store = var_store
existing_regularized_variables = set(var_store._regularizers.keys())
with var_store.scope():
out = method(self, *args, **kwargs)
# If this is a layer method, add the regularization losses
# to the layer for any newly-created regularized variables
if isinstance(self, base_layer.Layer):
for (
var_name,
regularizer,
) in var_store._regularizers.items():
if var_name not in existing_regularized_variables:
self.add_loss(regularizer)
return out
return tf.__internal__.decorator.make_decorator(
target=method, decorator_func=_method_wrapper
)
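class _DemoTrackedModule(tf.Module):
    # A minimal sketch of decorating a `tf.Module` method, assuming eager
    # execution: the scalar weight created via `tf.compat.v1.get_variable`
    # inside `__call__` is captured by the module's variable store on the
    # first call and reused afterwards. The module and names are
    # illustrative only.
    @track_tf1_style_variables
    def __call__(self, x):
        w = tf.compat.v1.get_variable(
            "w", shape=[], initializer=tf.compat.v1.ones_initializer()
        )
        return x * w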
class VariableScopeLayer(base_layer.Layer):
"""Wrapper Layer to capture `compat.v1.get_variable` and `compat.v1.layers`.
This shim layer allows using large sets of TF1 model-forward-pass code as a
TF-Keras layer that works in TF2 with TF2 behaviors enabled. It will capture
both weights and regularization losses of your forward-pass code. To use it,
override this class and put your TF1 model's forward pass inside your
implementation for `forward_pass`. (Unlike standard custom TF-Keras layers,
do not override `call`.)
Below are some examples, and then more details on the functionality of this
shim layer to wrap TF1 model forward passes.
Example of capturing tf.compat.v1.layer-based modeling code as a Keras
layer:
```python
class WrappedDoubleDenseLayer(variable_scope_shim.VariableScopeLayer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
def forward_pass(self, inputs):
with variable_scope.variable_scope("double_dense_layer"):
out = tf.compat.v1.layers.dense(
inputs, self.units, name="dense_one",
kernel_initializer=tf.compat.v1.random_normal_initializer,
kernel_regularizer="l2")
out = tf.compat.v1.layers.dense(
out, self.units, name="dense_two",
kernel_initializer=tf.compat.v1.random_normal_initializer(),
kernel_regularizer="l2")
return out
# Create a layer that can be used as a standard keras layer
layer = WrappedDoubleDenseLayer(10)
# call the layer on inputs
layer(...)
# Variables created/used within the scope will be tracked by the layer
layer.weights
layer.trainable_variables
# Regularization losses will be captured in layer.losses after a call,
# just like any other TF-Keras layer
reg_losses = layer.losses
```
Example of capturing tf.compat.v1.get_variable-based modeling code as
a TF-Keras layer:
```python
class WrappedDoubleDenseLayer(variable_scope_shim.VariableScopeLayer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
def forward_pass(self, inputs):
out = inputs
with tf.compat.v1.variable_scope("double_dense_layer"):
with tf.compat.v1.variable_scope("dense_one"):
# The weights are created with a `regularizer`,
# so the layer should track their regularization losses
kernel = tf.compat.v1.get_variable(
shape=[out.shape[-1], self.units],
regularizer=regularizers.L2(),
initializer=init_ops.ones_initializer(),
name="kernel")
bias = tf.compat.v1.get_variable(
shape=[self.units,],
initializer=init_ops.zeros_initializer(),
name="bias")
out = tf.compat.v1.math.matmul(out, kernel)
out = tf.compat.v1.nn.bias_add(out, bias)
with tf.compat.v1.variable_scope("dense_two"):
kernel = tf.compat.v1.get_variable(
shape=[out.shape[-1], self.units],
regularizer=regularizers.L2(),
initializer=init_ops.ones_initializer(),
name="kernel")
bias = tf.compat.v1.get_variable(
shape=[self.units,],
initializer=init_ops.zeros_initializer(),
name="bias")
out = tf.compat.v1.math.matmul(out, kernel)
out = tf.compat.v1.nn.bias_add(out, bias)
return out
# Create a layer that can be used as a standard keras layer
layer = WrappedDoubleDenseLayer(10)
# call the layer on inputs
layer(...)
# Variables created/used within the scope will be tracked by the layer
layer.weights
layer.trainable_variables
# Regularization losses will be captured in layer.losses after a call,
# just like any other TF-Keras layer
reg_losses = layer.losses
```
Regularization losses:
Any regularizers specified in the `get_variable` calls or
`compat.v1.layer` creations will get captured by this wrapper layer.
Regularization losses are accessible in `layer.losses` after a call just
like in a standard TF-Keras layer, and will be captured by any model that
includes this layer. Regularization losses attached to Keras
layers/models set as attributes of your layer will also get captured in
the standard TF-Keras regularization loss tracking.
Variable scope / variable reuse:
variable-scope based reuse in the `forward_pass` will be respected,
and work like variable-scope based reuse in TF1.
Variable Names/Pre-trained checkpoint loading:
Variable naming from get_variable and `compat.v1.layer` layers will match
the TF1 names, so you should be able to re-use your old name-based
checkpoints. Variable naming for TF-Keras layers/models or for variables
created by `tf.Variable` may change when going to eager execution.
Training Arg in `forward_pass`:
TF-Keras will pass a `training` arg to this layer if `forward_pass`
contains a `training` arg or a `**kwargs` varargs in its call signature,
similarly to how keras passes `training` to other layers in TF2 that have
similar signatures in their `call` implementations.
See more details in the docs
on `tf.keras.layers.Layer` to understand what will be passed and when.
Note: tf.compat.v1.layers are usually not called with `training=None`,
so the training arg to `forward_pass` might not feed through to them
unless you pass it to their calls explicitly.
Call signature of the forward pass:
The semantics of the forward pass signature match the standard
TF-Keras layer `call` signature, including how TF-Keras decides when
to pass in a `training` arg., and the semantics applied to
the first positional arg in the call signature.
Caveats:
* TF2 will not prune unused variable updates (or unused outputs). You may
need to adjust your forward pass code to avoid computations or variable
updates that you don't intend to use. (E.g. by adding a flag to the
`forward_pass` call signature and branching on it).
    * Avoid nesting variable creation in tf.function inside of `forward_pass`
While the layer may safely be used from inside a `tf.function`, using
a function inside of `forward_pass` will break the variable scoping.
* If you would like to nest TF-Keras layers/models or other
`VariableScopeLayer`s directly in `forward_pass`, you need to
assign them as attributes of your layer so that Keras's standard
object-oriented weights and loss tracking will kick in.
See the intro to modules, layers, and models
[guide](https://www.tensorflow.org/guide/intro_to_modules) for more info
"""
@property
@layer_utils.cached_per_instance
def _call_full_argspec(self):
# Argspec inspection is expensive and the call spec is used often, so it
# makes sense to cache the result.
return tf_inspect.getfullargspec(self.forward_pass)
def forward_pass(self, *args, **kwargs):
"""Implement this method. It should include your model forward pass."""
raise NotImplementedError
@track_tf1_style_variables
def call(self, *args, **kwargs):
return self.forward_pass(*args, **kwargs)
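class _DemoVariableScopeLayer(VariableScopeLayer):
    # A minimal sketch of subclassing `VariableScopeLayer`, assuming a
    # single scalar weight; real uses would wrap a full TF1 forward pass.
    # The layer and variable names here are illustrative only.
    def forward_pass(self, inputs):
        scale = tf.compat.v1.get_variable(
            "scale", shape=[], initializer=tf.compat.v1.ones_initializer()
        )
        return inputs * scale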
@keras_export(v1=["keras.utils.get_or_create_layer"])
def get_or_create_layer(name, create_layer_method):
"""Use this method to track nested keras models in a shim-decorated method.
This method can be used within a `tf.keras.Layer`'s methods decorated by
    the `track_tf1_style_variables` shim, to additionally track inner keras Model
objects created within the same method. The inner model's variables and
losses will be accessible via the outer model's `variables` and `losses`
attributes.
This enables tracking of inner keras models using TF2 behaviors, with
minimal changes to existing TF1-style code.
Example:
```python
class NestedLayer(tf.keras.layers.Layer):
def __init__(self, units, *args, **kwargs):
super().__init__(*args, **kwargs)
self.units = units
def build_model(self):
inp = tf.keras.Input(shape=(5, 5))
dense_layer = tf.keras.layers.Dense(
10, name="dense", kernel_regularizer="l2",
kernel_initializer=tf.compat.v1.ones_initializer())
model = tf.keras.Model(inputs=inp, outputs=dense_layer(inp))
return model
@tf.compat.v1.keras.utils.track_tf1_style_variables
def call(self, inputs):
model = tf.compat.v1.keras.utils.get_or_create_layer(
"dense_model", self.build_model)
return model(inputs)
```
The inner model creation should be confined to its own zero-arg function,
which should be passed into this method. In TF1, this method will
immediately create and return the desired model, without any tracking.
Args:
name: A name to give the nested layer to track.
create_layer_method: a Callable that takes no args and returns the nested
layer.
Returns:
The created layer.
"""
store = vs._get_default_variable_store()
if not isinstance(store, _EagerVariableStore):
if not tf.compat.v1.executing_eagerly_outside_functions():
# tf1 case; just create and return layer
return create_layer_method()
else:
raise ValueError(
"Tried to call get_or_create_layer in eager mode from a method "
"notdecorated with "
"@tf.compat.v1.keras.utils.track_tf1_style_variables."
)
vs_name = tf.compat.v1.get_variable_scope().name
name = f"{vs_name}/{name}"
return store.get_or_create_layer(name, create_layer_method)
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""IoU metrics."""
from typing import List
from typing import Optional
from typing import Tuple
from typing import Union
import numpy as np
import tensorflow.compat.v2 as tf
from tf_keras import backend
from tf_keras.dtensor import utils as dtensor_utils
from tf_keras.metrics import base_metric
# isort: off
from tensorflow.python.util.tf_export import keras_export
class _IoUBase(base_metric.Metric):
"""Computes the confusion matrix for Intersection-Over-Union metrics.
Intersection-Over-Union is a common evaluation metric for semantic image
segmentation.
For an individual class, the IoU metric is defined as follows:
```
iou = true_positives / (true_positives + false_positives + false_negatives)
```
From IoUs of individual classes, the MeanIoU can be computed as the mean of
the individual IoUs.
To compute IoUs, the predictions are accumulated in a confusion matrix,
weighted by `sample_weight` and the metric is then calculated from it.
If `sample_weight` is `None`, weights default to 1.
Use `sample_weight` of 0 to mask values.
Args:
num_classes: The possible number of labels the prediction task can have.
This value must be provided, since a confusion matrix of size
`(num_classes, num_classes)` will be allocated.
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
ignore_class: Optional integer. The ID of a class to be ignored during
metric computation. This is useful, for example, in segmentation
problems featuring a "void" class (commonly -1 or 255) in segmentation
maps. By default (`ignore_class=None`), all classes are considered.
sparse_y_true: Whether labels are encoded using integers or
dense floating point vectors. If `False`, the `tf.argmax` function
will be used to determine each sample's most likely associated label.
sparse_y_pred: Whether predictions are encoded using integers or
dense floating point vectors. If `False`, the `tf.argmax` function
will be used to determine each sample's most likely associated label.
axis: (Optional) -1 is the dimension containing the logits.
Defaults to `-1`.
"""
def __init__(
self,
num_classes: int,
name: Optional[str] = None,
dtype: Optional[Union[str, tf.dtypes.DType]] = None,
ignore_class: Optional[int] = None,
sparse_y_true: bool = True,
sparse_y_pred: bool = True,
axis: int = -1,
):
super().__init__(name=name, dtype=dtype)
self.num_classes = num_classes
self.ignore_class = ignore_class
self.sparse_y_true = sparse_y_true
self.sparse_y_pred = sparse_y_pred
self.axis = axis
# Variable to accumulate the predictions in the confusion matrix.
self.total_cm = self.add_weight(
"total_confusion_matrix",
shape=(num_classes, num_classes),
initializer="zeros",
)
def update_state(self, y_true, y_pred, sample_weight=None):
"""Accumulates the confusion matrix statistics.
Args:
y_true: The ground truth values.
y_pred: The predicted values.
sample_weight: Optional weighting of each example. Can
be a `Tensor` whose rank is either 0, or the same rank as `y_true`,
and must be broadcastable to `y_true`. Defaults to `1`.
Returns:
Update op.
"""
if not self.sparse_y_true:
y_true = tf.argmax(y_true, axis=self.axis)
if not self.sparse_y_pred:
y_pred = tf.argmax(y_pred, axis=self.axis)
y_true = tf.cast(y_true, self._dtype)
y_pred = tf.cast(y_pred, self._dtype)
# Flatten the input if its rank > 1.
if y_pred.shape.ndims > 1:
y_pred = tf.reshape(y_pred, [-1])
if y_true.shape.ndims > 1:
y_true = tf.reshape(y_true, [-1])
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, self._dtype)
if sample_weight.shape.ndims > 1:
sample_weight = tf.reshape(sample_weight, [-1])
if self.ignore_class is not None:
ignore_class = tf.cast(self.ignore_class, y_true.dtype)
valid_mask = tf.not_equal(y_true, ignore_class)
y_true = y_true[valid_mask]
y_pred = y_pred[valid_mask]
if sample_weight is not None:
sample_weight = sample_weight[valid_mask]
# Accumulate the prediction to current confusion matrix.
current_cm = tf.math.confusion_matrix(
y_true,
y_pred,
self.num_classes,
weights=sample_weight,
dtype=self._dtype,
)
return self.total_cm.assign_add(current_cm)
def reset_state(self):
backend.set_value(
self.total_cm, np.zeros((self.num_classes, self.num_classes))
)
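def _demo_confusion_matrix_update():
    # A minimal sketch of the accumulation step in `_IoUBase.update_state`
    # above, assuming sparse integer labels: rows of the confusion matrix
    # index true classes and columns index predicted classes. The inputs
    # mirror the standalone example in the `IoU` docstring below.
    y_true = tf.constant([0, 0, 1, 1])
    y_pred = tf.constant([0, 1, 0, 1])
    cm = tf.math.confusion_matrix(y_true, y_pred, num_classes=2)
    return cm  # [[1, 1], [1, 1]]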
@keras_export("keras.metrics.IoU")
class IoU(_IoUBase):
"""Computes the Intersection-Over-Union metric for specific target classes.
General definition and computation:
Intersection-Over-Union is a common evaluation metric for semantic image
segmentation.
For an individual class, the IoU metric is defined as follows:
```
iou = true_positives / (true_positives + false_positives + false_negatives)
```
To compute IoUs, the predictions are accumulated in a confusion matrix,
weighted by `sample_weight` and the metric is then calculated from it.
If `sample_weight` is `None`, weights default to 1.
Use `sample_weight` of 0 to mask values.
    Note that this class first computes IoUs for all individual classes, then
returns the mean of IoUs for the classes that are specified by
`target_class_ids`. If `target_class_ids` has only one id value, the IoU of
that specific class is returned.
Args:
num_classes: The possible number of labels the prediction task can have.
A confusion matrix of dimension = [num_classes, num_classes] will be
allocated to accumulate predictions from which the metric is calculated.
target_class_ids: A tuple or list of target class ids for which the metric
is returned. To compute IoU for a specific class, a list (or tuple) of a
single id value should be provided.
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
ignore_class: Optional integer. The ID of a class to be ignored during
metric computation. This is useful, for example, in segmentation
problems featuring a "void" class (commonly -1 or 255) in segmentation
maps. By default (`ignore_class=None`), all classes are considered.
sparse_y_true: Whether labels are encoded using integers or
dense floating point vectors. If `False`, the `tf.argmax` function
will be used to determine each sample's most likely associated label.
sparse_y_pred: Whether predictions are encoded using integers or
dense floating point vectors. If `False`, the `tf.argmax` function
will be used to determine each sample's most likely associated label.
axis: (Optional) -1 is the dimension containing the logits.
Defaults to `-1`.
Standalone usage:
>>> # cm = [[1, 1],
>>> # [1, 1]]
>>> # sum_row = [2, 2], sum_col = [2, 2], true_positives = [1, 1]
    >>> # iou = true_positives / (sum_row + sum_col - true_positives)
>>> # iou = [0.33, 0.33]
>>> m = tf.keras.metrics.IoU(num_classes=2, target_class_ids=[0])
>>> m.update_state([0, 0, 1, 1], [0, 1, 0, 1])
>>> m.result().numpy()
0.33333334
>>> m.reset_state()
>>> m.update_state([0, 0, 1, 1], [0, 1, 0, 1],
... sample_weight=[0.3, 0.3, 0.3, 0.1])
>>> # cm = [[0.3, 0.3],
>>> # [0.3, 0.1]]
>>> # sum_row = [0.6, 0.4], sum_col = [0.6, 0.4],
>>> # true_positives = [0.3, 0.1]
>>> # iou = [0.33, 0.14]
>>> m.result().numpy()
0.33333334
Usage with `compile()` API:
```python
model.compile(
optimizer='sgd',
loss='mse',
metrics=[tf.keras.metrics.IoU(num_classes=2, target_class_ids=[0])])
```
"""
@dtensor_utils.inject_mesh
def __init__(
self,
num_classes: int,
target_class_ids: Union[List[int], Tuple[int, ...]],
name: Optional[str] = None,
dtype: Optional[Union[str, tf.dtypes.DType]] = None,
ignore_class: Optional[int] = None,
sparse_y_true: bool = True,
sparse_y_pred: bool = True,
axis: int = -1,
):
super().__init__(
name=name,
num_classes=num_classes,
ignore_class=ignore_class,
sparse_y_true=sparse_y_true,
sparse_y_pred=sparse_y_pred,
axis=axis,
dtype=dtype,
)
if max(target_class_ids) >= num_classes:
raise ValueError(
f"Target class id {max(target_class_ids)} "
"is out of range, which is "
f"[{0}, {num_classes})."
)
self.target_class_ids = list(target_class_ids)
def result(self):
"""Compute the intersection-over-union via the confusion matrix."""
sum_over_row = tf.cast(
tf.reduce_sum(self.total_cm, axis=0), dtype=self._dtype
)
sum_over_col = tf.cast(
tf.reduce_sum(self.total_cm, axis=1), dtype=self._dtype
)
true_positives = tf.cast(
tf.linalg.tensor_diag_part(self.total_cm), dtype=self._dtype
)
# sum_over_row + sum_over_col =
# 2 * true_positives + false_positives + false_negatives.
denominator = sum_over_row + sum_over_col - true_positives
# Only keep the target classes
true_positives = tf.gather(true_positives, self.target_class_ids)
denominator = tf.gather(denominator, self.target_class_ids)
# If the denominator is 0, we need to ignore the class.
num_valid_entries = tf.reduce_sum(
tf.cast(tf.not_equal(denominator, 0), dtype=self._dtype)
)
iou = tf.math.divide_no_nan(true_positives, denominator)
return tf.math.divide_no_nan(
tf.reduce_sum(iou, name="mean_iou"), num_valid_entries
)
def get_config(self):
config = {
"num_classes": self.num_classes,
"target_class_ids": self.target_class_ids,
"ignore_class": self.ignore_class,
"sparse_y_true": self.sparse_y_true,
"sparse_y_pred": self.sparse_y_pred,
"axis": self.axis,
}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
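def _demo_iou_from_confusion_matrix():
    # A minimal numpy sketch of `IoU.result` above: per-class IoU is
    # diag / (row_sum + col_sum - diag), here for the confusion matrix from
    # the docstring example; averaging over target classes gives the result.
    cm = np.array([[1.0, 1.0], [1.0, 1.0]])
    true_positives = np.diag(cm)
    denominator = cm.sum(axis=0) + cm.sum(axis=1) - true_positives
    return true_positives / denominator  # [0.333..., 0.333...]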
@keras_export("keras.metrics.BinaryIoU")
class BinaryIoU(IoU):
"""Computes the Intersection-Over-Union metric for class 0 and/or 1.
General definition and computation:
Intersection-Over-Union is a common evaluation metric for semantic image
segmentation.
For an individual class, the IoU metric is defined as follows:
```
iou = true_positives / (true_positives + false_positives + false_negatives)
```
To compute IoUs, the predictions are accumulated in a confusion matrix,
weighted by `sample_weight` and the metric is then calculated from it.
If `sample_weight` is `None`, weights default to 1.
Use `sample_weight` of 0 to mask values.
This class can be used to compute IoUs for a binary classification task
where the predictions are provided as logits. First a `threshold` is applied
to the predicted values such that those that are below the `threshold` are
converted to class 0 and those that are above the `threshold` are converted
to class 1.
IoUs for classes 0 and 1 are then computed, the mean of IoUs for the classes
that are specified by `target_class_ids` is returned.
Note: with `threshold=0`, this metric has the same behavior as `IoU`.
Args:
target_class_ids: A tuple or list of target class ids for which the metric
is returned. Options are `[0]`, `[1]`, or `[0, 1]`. With `[0]` (or
`[1]`), the IoU metric for class 0 (or class 1, respectively) is
returned. With `[0, 1]`, the mean of IoUs for the two classes is
returned.
threshold: A threshold that applies to the prediction logits to convert
them to either predicted class 0 if the logit is below `threshold` or
predicted class 1 if the logit is above `threshold`.
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Standalone usage:
>>> m = tf.keras.metrics.BinaryIoU(target_class_ids=[0, 1], threshold=0.3)
>>> m.update_state([0, 1, 0, 1], [0.1, 0.2, 0.4, 0.7])
>>> m.result().numpy()
0.33333334
>>> m.reset_state()
>>> m.update_state([0, 1, 0, 1], [0.1, 0.2, 0.4, 0.7],
... sample_weight=[0.2, 0.3, 0.4, 0.1])
>>> # cm = [[0.2, 0.4],
>>> # [0.3, 0.1]]
>>> # sum_row = [0.6, 0.4], sum_col = [0.5, 0.5],
>>> # true_positives = [0.2, 0.1]
>>> # iou = [0.222, 0.125]
>>> m.result().numpy()
0.17361112
Usage with `compile()` API:
```python
model.compile(
optimizer='sgd',
loss='mse',
metrics=[tf.keras.metrics.BinaryIoU(target_class_ids=[0], threshold=0.5)])
```
"""
@dtensor_utils.inject_mesh
def __init__(
self,
target_class_ids: Union[List[int], Tuple[int, ...]] = (0, 1),
threshold=0.5,
name=None,
dtype=None,
):
super().__init__(
num_classes=2,
target_class_ids=target_class_ids,
name=name,
dtype=dtype,
)
self.threshold = threshold
def update_state(self, y_true, y_pred, sample_weight=None):
"""Accumulates the confusion matrix statistics.
Before the confusion matrix is updated, the predicted values are
thresholded to be:
0 for values that are smaller than the `threshold`
1 for values that are larger or equal to the `threshold`
Args:
y_true: The ground truth values.
y_pred: The predicted values.
sample_weight: Optional weighting of each example. Can
be a `Tensor` whose rank is either 0, or the same rank as `y_true`,
and must be broadcastable to `y_true`. Defaults to `1`.
Returns:
Update op.
"""
y_pred = tf.cast(y_pred, self._dtype)
y_pred = tf.cast(y_pred >= self.threshold, self._dtype)
return super().update_state(y_true, y_pred, sample_weight)
def get_config(self):
return {
"target_class_ids": self.target_class_ids,
"threshold": self.threshold,
"name": self.name,
"dtype": self._dtype,
}
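def _demo_binary_iou_threshold():
    # A minimal sketch of the thresholding in `BinaryIoU.update_state`
    # above, assuming the 0.3 threshold from the docstring example: values
    # below the threshold map to class 0 and values at or above it map to
    # class 1 before the confusion matrix is updated.
    y_pred = tf.constant([0.1, 0.2, 0.4, 0.7])
    return tf.cast(y_pred >= 0.3, tf.float32)  # [0., 0., 1., 1.]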
@keras_export("keras.metrics.MeanIoU")
class MeanIoU(IoU):
"""Computes the mean Intersection-Over-Union metric.
General definition and computation:
Intersection-Over-Union is a common evaluation metric for semantic image
segmentation.
For an individual class, the IoU metric is defined as follows:
```
iou = true_positives / (true_positives + false_positives + false_negatives)
```
To compute IoUs, the predictions are accumulated in a confusion matrix,
weighted by `sample_weight` and the metric is then calculated from it.
If `sample_weight` is `None`, weights default to 1.
Use `sample_weight` of 0 to mask values.
Note that this class first computes IoUs for all individual classes, then
returns the mean of these values.
Args:
num_classes: The possible number of labels the prediction task can have.
This value must be provided, since a confusion matrix of dimension =
[num_classes, num_classes] will be allocated.
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
ignore_class: Optional integer. The ID of a class to be ignored during
metric computation. This is useful, for example, in segmentation
problems featuring a "void" class (commonly -1 or 255) in segmentation
maps. By default (`ignore_class=None`), all classes are considered.
sparse_y_true: Whether labels are encoded using integers or
dense floating point vectors. If `False`, the `tf.argmax` function
will be used to determine each sample's most likely associated label.
sparse_y_pred: Whether predictions are encoded using integers or
dense floating point vectors. If `False`, the `tf.argmax` function
will be used to determine each sample's most likely associated label.
axis: (Optional) The dimension containing the logits. Defaults to `-1`.
Standalone usage:
>>> # cm = [[1, 1],
>>> # [1, 1]]
>>> # sum_row = [2, 2], sum_col = [2, 2], true_positives = [1, 1]
    >>> # iou = true_positives / (sum_row + sum_col - true_positives)
>>> # result = (1 / (2 + 2 - 1) + 1 / (2 + 2 - 1)) / 2 = 0.33
>>> m = tf.keras.metrics.MeanIoU(num_classes=2)
>>> m.update_state([0, 0, 1, 1], [0, 1, 0, 1])
>>> m.result().numpy()
0.33333334
>>> m.reset_state()
>>> m.update_state([0, 0, 1, 1], [0, 1, 0, 1],
... sample_weight=[0.3, 0.3, 0.3, 0.1])
>>> m.result().numpy()
0.23809525
Usage with `compile()` API:
```python
model.compile(
optimizer='sgd',
loss='mse',
metrics=[tf.keras.metrics.MeanIoU(num_classes=2)])
```
"""
@dtensor_utils.inject_mesh
def __init__(
self,
num_classes: int,
name: Optional[str] = None,
dtype: Optional[Union[str, tf.dtypes.DType]] = None,
ignore_class: Optional[int] = None,
sparse_y_true: bool = True,
sparse_y_pred: bool = True,
axis: int = -1,
):
target_class_ids = list(range(num_classes))
super().__init__(
name=name,
num_classes=num_classes,
target_class_ids=target_class_ids,
axis=axis,
dtype=dtype,
ignore_class=ignore_class,
sparse_y_true=sparse_y_true,
sparse_y_pred=sparse_y_pred,
)
def get_config(self):
return {
"num_classes": self.num_classes,
"name": self.name,
"dtype": self._dtype,
"ignore_class": self.ignore_class,
"sparse_y_true": self.sparse_y_true,
"sparse_y_pred": self.sparse_y_pred,
"axis": self.axis,
}
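def _demo_mean_iou_skips_empty_classes():
    # A minimal sketch of the reduction that `MeanIoU` inherits from
    # `IoU.result`: classes with a zero denominator (never present in
    # labels or predictions) are excluded from the mean via the
    # divide_no_nan calls. The values here are illustrative only.
    true_positives = tf.constant([1.0, 0.0])
    denominator = tf.constant([3.0, 0.0])
    iou = tf.math.divide_no_nan(true_positives, denominator)
    num_valid = tf.reduce_sum(
        tf.cast(tf.not_equal(denominator, 0), tf.float32)
    )
    return tf.math.divide_no_nan(tf.reduce_sum(iou), num_valid)  # 0.333...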
@keras_export("keras.metrics.OneHotIoU")
class OneHotIoU(IoU):
"""Computes the Intersection-Over-Union metric for one-hot encoded labels.
General definition and computation:
Intersection-Over-Union is a common evaluation metric for semantic image
segmentation.
For an individual class, the IoU metric is defined as follows:
```
iou = true_positives / (true_positives + false_positives + false_negatives)
```
To compute IoUs, the predictions are accumulated in a confusion matrix,
weighted by `sample_weight` and the metric is then calculated from it.
If `sample_weight` is `None`, weights default to 1.
Use `sample_weight` of 0 to mask values.
This class can be used to compute IoU for multi-class classification tasks
where the labels are one-hot encoded (the last axis should have one
dimension per class). Note that the predictions should also have the same
shape. To compute the IoU, first the labels and predictions are converted
back into integer format by taking the argmax over the class axis. Then the
same computation steps as for the base `IoU` class apply.
    Note that if there is only one channel in the labels and predictions, this
    class
is the same as class `IoU`. In this case, use `IoU` instead.
Also, make sure that `num_classes` is equal to the number of classes in the
data, to avoid a "labels out of bound" error when the confusion matrix is
computed.
Args:
num_classes: The possible number of labels the prediction task can have.
A confusion matrix of shape `(num_classes, num_classes)` will be
allocated to accumulate predictions from which the metric is calculated.
target_class_ids: A tuple or list of target class ids for which the metric
is returned. To compute IoU for a specific class, a list (or tuple) of a
single id value should be provided.
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
ignore_class: Optional integer. The ID of a class to be ignored during
metric computation. This is useful, for example, in segmentation
problems featuring a "void" class (commonly -1 or 255) in segmentation
maps. By default (`ignore_class=None`), all classes are considered.
sparse_y_pred: Whether predictions are encoded using natural numbers or
probability distribution vectors. If `False`, the `tf.argmax` function
will be used to determine each sample's most likely associated label.
axis: (Optional) The dimension containing the logits. Defaults to `-1`.
Standalone usage:
>>> y_true = tf.constant([[0, 0, 1], [1, 0, 0], [0, 1, 0], [1, 0, 0]])
>>> y_pred = tf.constant([[0.2, 0.3, 0.5], [0.1, 0.2, 0.7], [0.5, 0.3, 0.1],
... [0.1, 0.4, 0.5]])
>>> sample_weight = [0.1, 0.2, 0.3, 0.4]
>>> m = tf.keras.metrics.OneHotIoU(num_classes=3, target_class_ids=[0, 2])
>>> m.update_state(
... y_true=y_true, y_pred=y_pred, sample_weight=sample_weight)
>>> # cm = [[0, 0, 0.2+0.4],
>>> # [0.3, 0, 0],
>>> # [0, 0, 0.1]]
>>> # sum_row = [0.3, 0, 0.7], sum_col = [0.6, 0.3, 0.1]
>>> # true_positives = [0, 0, 0.1]
    >>> # single_iou = true_positives / (sum_row + sum_col - true_positives)
>>> # mean_iou = (0 / (0.3 + 0.6 - 0) + 0.1 / (0.7 + 0.1 - 0.1)) / 2
>>> m.result().numpy()
0.071
Usage with `compile()` API:
```python
model.compile(
optimizer='sgd',
loss='mse',
        metrics=[tf.keras.metrics.OneHotIoU(num_classes=3, target_class_ids=[1])])
```
"""
@dtensor_utils.inject_mesh
def __init__(
self,
num_classes: int,
target_class_ids: Union[List[int], Tuple[int, ...]],
name=None,
dtype=None,
ignore_class: Optional[int] = None,
sparse_y_pred: bool = False,
axis: int = -1,
):
super().__init__(
num_classes=num_classes,
target_class_ids=target_class_ids,
name=name,
dtype=dtype,
ignore_class=ignore_class,
sparse_y_true=False,
sparse_y_pred=sparse_y_pred,
axis=axis,
)
def get_config(self):
return {
"num_classes": self.num_classes,
"target_class_ids": self.target_class_ids,
"name": self.name,
"dtype": self._dtype,
"ignore_class": self.ignore_class,
"sparse_y_pred": self.sparse_y_pred,
"axis": self.axis,
}
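def _demo_one_hot_decoding():
    # A minimal sketch of the decoding that `OneHotIoU` relies on: with
    # sparse_y_true=False, the base class argmaxes one-hot labels back to
    # integer class ids before building the confusion matrix.
    y_true = tf.constant([[0, 0, 1], [1, 0, 0]])
    return tf.argmax(y_true, axis=-1)  # [2, 0]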
@keras_export("keras.metrics.OneHotMeanIoU")
class OneHotMeanIoU(MeanIoU):
"""Computes mean Intersection-Over-Union metric for one-hot encoded labels.
General definition and computation:
Intersection-Over-Union is a common evaluation metric for semantic image
segmentation.
For an individual class, the IoU metric is defined as follows:
```
iou = true_positives / (true_positives + false_positives + false_negatives)
```
To compute IoUs, the predictions are accumulated in a confusion matrix,
weighted by `sample_weight` and the metric is then calculated from it.
If `sample_weight` is `None`, weights default to 1.
Use `sample_weight` of 0 to mask values.
This class can be used to compute the mean IoU for multi-class
classification tasks where the labels are one-hot encoded (the last axis
should have one dimension per class). Note that the predictions should also
have the same shape. To compute the mean IoU, first the labels and
predictions are converted back into integer format by taking the argmax over
the class axis. Then the same computation steps as for the base `MeanIoU`
class apply.
    Note that if there is only one channel in the labels and predictions, this
    class
is the same as class `MeanIoU`. In this case, use `MeanIoU` instead.
Also, make sure that `num_classes` is equal to the number of classes in the
data, to avoid a "labels out of bound" error when the confusion matrix is
computed.
Args:
num_classes: The possible number of labels the prediction task can have.
A confusion matrix of shape `(num_classes, num_classes)` will be
allocated to accumulate predictions from which the metric is calculated.
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
ignore_class: Optional integer. The ID of a class to be ignored during
metric computation. This is useful, for example, in segmentation
problems featuring a "void" class (commonly -1 or 255) in segmentation
maps. By default (`ignore_class=None`), all classes are considered.
sparse_y_pred: Whether predictions are encoded using natural numbers or
probability distribution vectors. If `False`, the `tf.argmax` function
will be used to determine each sample's most likely associated label.
axis: (Optional) The dimension containing the logits. Defaults to `-1`.
Standalone usage:
>>> y_true = tf.constant([[0, 0, 1], [1, 0, 0], [0, 1, 0], [1, 0, 0]])
>>> y_pred = tf.constant([[0.2, 0.3, 0.5], [0.1, 0.2, 0.7], [0.5, 0.3, 0.1],
... [0.1, 0.4, 0.5]])
>>> sample_weight = [0.1, 0.2, 0.3, 0.4]
>>> m = tf.keras.metrics.OneHotMeanIoU(num_classes=3)
>>> m.update_state(
... y_true=y_true, y_pred=y_pred, sample_weight=sample_weight)
>>> # cm = [[0, 0, 0.2+0.4],
>>> # [0.3, 0, 0],
>>> # [0, 0, 0.1]]
>>> # sum_row = [0.3, 0, 0.7], sum_col = [0.6, 0.3, 0.1]
>>> # true_positives = [0, 0, 0.1]
    >>> # single_iou = true_positives / (sum_row + sum_col - true_positives)
>>> # mean_iou = (0 + 0 + 0.1 / (0.7 + 0.1 - 0.1)) / 3
>>> m.result().numpy()
0.048
Usage with `compile()` API:
```python
model.compile(
optimizer='sgd',
loss='mse',
metrics=[tf.keras.metrics.OneHotMeanIoU(num_classes=3)])
```
"""
@dtensor_utils.inject_mesh
def __init__(
self,
num_classes: int,
        name: Optional[str] = None,
dtype: Optional[Union[str, tf.dtypes.DType]] = None,
ignore_class: Optional[int] = None,
sparse_y_pred: bool = False,
axis: int = -1,
):
super().__init__(
num_classes=num_classes,
axis=axis,
name=name,
dtype=dtype,
ignore_class=ignore_class,
sparse_y_true=False,
sparse_y_pred=sparse_y_pred,
)
def get_config(self):
return {
"num_classes": self.num_classes,
"name": self.name,
"dtype": self._dtype,
"ignore_class": self.ignore_class,
"sparse_y_pred": self.sparse_y_pred,
"axis": self.axis,
}
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests various Layer subclasses have correct outputs with mixed precision."""
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
from tf_keras import layers
from tf_keras import models
from tf_keras.layers import activation
from tf_keras.layers import attention
from tf_keras.layers import convolutional
from tf_keras.layers import core
from tf_keras.layers import locally_connected
from tf_keras.layers import merging
from tf_keras.layers import pooling
from tf_keras.layers import regularization
from tf_keras.layers import reshaping
from tf_keras.layers.normalization import batch_normalization
from tf_keras.layers.normalization import layer_normalization
from tf_keras.layers.preprocessing import image_preprocessing
from tf_keras.layers.preprocessing import normalization
from tf_keras.layers.rnn import bidirectional
from tf_keras.layers.rnn import conv_lstm2d
from tf_keras.layers.rnn import gru
from tf_keras.layers.rnn import gru_v1
from tf_keras.layers.rnn import lstm
from tf_keras.layers.rnn import lstm_v1
from tf_keras.layers.rnn import simple_rnn
from tf_keras.layers.rnn import time_distributed
from tf_keras.mixed_precision import policy
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
def create_mirrored_strategy():
# The test creates two virtual CPUs, and we use both of them to test with
# multiple devices.
# pylint: disable=protected-access
tf.distribute.MirroredStrategy._collective_key_base += 1
return tf.distribute.MirroredStrategy(["cpu:0", "cpu:1"])
def _create_normalization_layer_with_adapt():
layer = normalization.Normalization()
layer.adapt(np.random.normal(size=(10, 4)))
return layer
def _create_normalization_layer_without_adapt():
return normalization.Normalization(
mean=np.random.normal(size=(4,)),
variance=np.random.uniform(0.5, 2.0, size=(4,)),
)
@test_utils.run_v2_only
class LayerCorrectnessTest(test_combinations.TestCase):
def setUp(self):
super().setUp()
# Set two virtual CPUs to test MirroredStrategy with multiple devices
cpus = tf.config.list_physical_devices("CPU")
tf.config.set_logical_device_configuration(
cpus[0],
[
tf.config.LogicalDeviceConfiguration(),
tf.config.LogicalDeviceConfiguration(),
],
)
self.strategy = create_mirrored_strategy()
def _create_model_from_layer(self, layer, input_shapes):
inputs = [layers.Input(batch_input_shape=s) for s in input_shapes]
if len(inputs) == 1:
inputs = inputs[0]
y = layer(inputs)
model = models.Model(inputs, y)
model.compile("sgd", "mse")
return model
@parameterized.named_parameters(
("LeakyReLU", activation.LeakyReLU, (2, 2)),
("PReLU", activation.PReLU, (2, 2)),
("ELU", activation.ELU, (2, 2)),
("ThresholdedReLU", activation.ThresholdedReLU, (2, 2)),
("Softmax", activation.Softmax, (2, 2)),
("ReLU", activation.ReLU, (2, 2)),
("Conv1D", lambda: convolutional.Conv1D(2, 2), (2, 2, 1)),
("Conv2D", lambda: convolutional.Conv2D(2, 2), (2, 2, 2, 1)),
("Conv3D", lambda: convolutional.Conv3D(2, 2), (2, 2, 2, 2, 1)),
(
"Conv2DTranspose",
lambda: convolutional.Conv2DTranspose(2, 2),
(2, 2, 2, 2),
),
(
"SeparableConv2D",
lambda: convolutional.SeparableConv2D(2, 2),
(2, 2, 2, 1),
),
(
"DepthwiseConv2D",
lambda: convolutional.DepthwiseConv2D(2, 2),
(2, 2, 2, 1),
),
("UpSampling2D", reshaping.UpSampling2D, (2, 2, 2, 1)),
("ZeroPadding2D", reshaping.ZeroPadding2D, (2, 2, 2, 1)),
("Cropping2D", reshaping.Cropping2D, (2, 3, 3, 1)),
(
"ConvLSTM2D",
lambda: conv_lstm2d.ConvLSTM2D(4, kernel_size=(2, 2)),
(4, 4, 4, 4, 4),
),
("Dense", lambda: core.Dense(2), (2, 2)),
("Dropout", lambda: regularization.Dropout(0.5), (2, 2)),
(
"SpatialDropout2D",
lambda: regularization.SpatialDropout2D(0.5),
(2, 2, 2, 2),
),
("Activation", lambda: core.Activation("sigmoid"), (2, 2)),
("Reshape", lambda: reshaping.Reshape((1, 4, 1)), (2, 2, 2)),
("Permute", lambda: reshaping.Permute((2, 1)), (2, 2, 2)),
("Attention", attention.Attention, [(2, 2, 3), (2, 3, 3), (2, 3, 3)]),
(
"AdditiveAttention",
attention.AdditiveAttention,
[(2, 2, 3), (2, 3, 3), (2, 3, 3)],
),
(
"Embedding",
lambda: core.Embedding(4, 4),
(2, 4),
2e-3,
2e-3,
np.random.randint(4, size=(2, 4)),
),
(
"LocallyConnected1D",
lambda: locally_connected.LocallyConnected1D(2, 2),
(2, 2, 1),
),
(
"LocallyConnected2D",
lambda: locally_connected.LocallyConnected2D(2, 2),
(2, 2, 2, 1),
),
("Add", merging.Add, [(2, 2), (2, 2)]),
("Subtract", merging.Subtract, [(2, 2), (2, 2)]),
("Multiply", merging.Multiply, [(2, 2), (2, 2)]),
("Average", merging.Average, [(2, 2), (2, 2)]),
("Maximum", merging.Maximum, [(2, 2), (2, 2)]),
("Minimum", merging.Minimum, [(2, 2), (2, 2)]),
("Concatenate", merging.Concatenate, [(2, 2), (2, 2)]),
("Dot", lambda: merging.Dot(1), [(2, 2), (2, 2)]),
("GaussianNoise", lambda: regularization.GaussianNoise(0.5), (2, 2)),
(
"GaussianDropout",
lambda: regularization.GaussianDropout(0.5),
(2, 2),
),
("AlphaDropout", lambda: regularization.AlphaDropout(0.5), (2, 2)),
(
"BatchNormalization",
batch_normalization.BatchNormalization,
(2, 2),
1e-2,
1e-2,
),
("LayerNormalization", layer_normalization.LayerNormalization, (2, 2)),
(
"LayerNormalizationUnfused",
lambda: layer_normalization.LayerNormalization(axis=1),
(2, 2, 2),
),
("MaxPooling2D", pooling.MaxPooling2D, (2, 2, 2, 1)),
("AveragePooling2D", pooling.AveragePooling2D, (2, 2, 2, 1)),
("GlobalMaxPooling2D", pooling.GlobalMaxPooling2D, (2, 2, 2, 1)),
(
"GlobalAveragePooling2D",
pooling.GlobalAveragePooling2D,
(2, 2, 2, 1),
),
(
"SimpleRNN",
lambda: simple_rnn.SimpleRNN(units=4),
(4, 4, 4),
1e-2,
1e-2,
),
(
"SimpleRNN_stateful",
lambda: simple_rnn.SimpleRNN(units=4, stateful=True),
(4, 4, 4),
1e-2,
1e-2,
),
("GRU", lambda: gru_v1.GRU(units=4), (4, 4, 4)),
("LSTM", lambda: lstm_v1.LSTM(units=4), (4, 4, 4)),
("GRUV2", lambda: gru.GRU(units=4), (4, 4, 4)),
("GRUV2_stateful", lambda: gru.GRU(units=4, stateful=True), (4, 4, 4)),
("LSTMV2", lambda: lstm.LSTM(units=4), (4, 4, 4)),
(
"LSTMV2_stateful",
lambda: lstm.LSTM(units=4, stateful=True),
(4, 4, 4),
),
(
"TimeDistributed",
lambda: time_distributed.TimeDistributed(core.Dense(2)),
(2, 2, 2),
),
(
"Bidirectional",
lambda: bidirectional.Bidirectional(simple_rnn.SimpleRNN(units=4)),
(2, 2, 2),
),
("NormalizationAdapt", _create_normalization_layer_with_adapt, (4, 4)),
(
"NormalizationNoAdapt",
_create_normalization_layer_without_adapt,
(4, 4),
),
("Resizing", lambda: image_preprocessing.Resizing(3, 3), (2, 5, 5, 1)),
("Rescaling", lambda: image_preprocessing.Rescaling(2.0, 1.0), (6, 6)),
(
"CenterCrop",
lambda: image_preprocessing.CenterCrop(3, 3),
(2, 5, 5, 1),
),
)
def test_layer(
self, f32_layer_fn, input_shape, rtol=2e-3, atol=2e-3, input_data=None
):
"""Tests a layer by comparing the float32 and mixed precision weights.
A float32 layer, a mixed precision layer, and a distributed mixed
precision layer are run. The three layers are identical other than their
dtypes and distribution strategies. The outputs after predict() and
weights after fit() are asserted to be close.
Args:
f32_layer_fn: A function returning a float32 layer. The other two
layers will automatically be created from this.
input_shape: The shape of the input to the layer, including the batch
dimension. Or a list of shapes if the layer takes multiple inputs.
rtol: The relative tolerance to be asserted.
atol: The absolute tolerance to be asserted.
input_data: A Numpy array with the data of the input. If None, input
data will be randomly generated.
"""
if (
f32_layer_fn == reshaping.ZeroPadding2D
and tf.test.is_built_with_rocm()
):
return
if isinstance(input_shape[0], int):
input_shapes = [input_shape]
else:
input_shapes = input_shape
f32_layer = f32_layer_fn()
# Create the layers
assert f32_layer.dtype == f32_layer._compute_dtype == "float32"
config = f32_layer.get_config()
config["dtype"] = policy.Policy("mixed_float16")
mp_layer = f32_layer.__class__.from_config(config)
distributed_mp_layer = f32_layer.__class__.from_config(config)
# Compute per_replica_input_shapes for the distributed model
global_batch_size = input_shapes[0][0]
assert global_batch_size % self.strategy.num_replicas_in_sync == 0, (
"The number of replicas, %d, does not divide the global batch "
"size of %d"
% (self.strategy.num_replicas_in_sync, global_batch_size)
)
per_replica_batch_size = (
global_batch_size // self.strategy.num_replicas_in_sync
)
per_replica_input_shapes = [
(per_replica_batch_size,) + s[1:] for s in input_shapes
]
# Create the models
f32_model = self._create_model_from_layer(f32_layer, input_shapes)
mp_model = self._create_model_from_layer(mp_layer, input_shapes)
with self.strategy.scope():
distributed_mp_model = self._create_model_from_layer(
distributed_mp_layer, per_replica_input_shapes
)
# Set all model weights to the same values
f32_weights = f32_model.get_weights()
mp_model.set_weights(f32_weights)
distributed_mp_model.set_weights(f32_weights)
# Generate input data
if input_data is None:
# Cast inputs to float16 to avoid measuring error from having f16
# layers cast to float16.
input_data = [
np.random.normal(size=s).astype("float16") for s in input_shapes
]
if len(input_data) == 1:
input_data = input_data[0]
# Assert all models have close outputs.
f32_output = f32_model.predict(input_data)
mp_output = mp_model.predict(input_data)
self.assertAllClose(mp_output, f32_output, rtol=rtol, atol=atol)
self.assertAllClose(
distributed_mp_model.predict(input_data),
f32_output,
rtol=rtol,
atol=atol,
)
# Run fit() on models
output = np.random.normal(size=f32_model.outputs[0].shape).astype(
"float16"
)
for model in f32_model, mp_model, distributed_mp_model:
model.fit(input_data, output, batch_size=global_batch_size)
# Assert all models have close weights
f32_weights = f32_model.get_weights()
self.assertAllClose(
mp_model.get_weights(), f32_weights, rtol=rtol, atol=atol
)
self.assertAllClose(
distributed_mp_model.get_weights(),
f32_weights,
rtol=rtol,
atol=atol,
)
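def _demo_mixed_precision_clone():
    # A minimal sketch of the cloning trick used in `test_layer` above: the
    # mixed precision copies are built from the float32 layer's config with
    # the dtype swapped for a mixed_float16 policy. The Dense layer here is
    # illustrative only.
    f32_layer = layers.Dense(2)
    config = f32_layer.get_config()
    config["dtype"] = policy.Policy("mixed_float16")
    return layers.Dense.from_config(config)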
if __name__ == "__main__":
tf.test.main()
# Copyright 2022 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""AdamW optimizer implementation."""
import tensorflow.compat.v2 as tf
from tf_keras.optimizers import optimizer
from tf_keras.saving.object_registration import register_keras_serializable
# isort: off
from tensorflow.python.util.tf_export import keras_export
@register_keras_serializable()
@keras_export(
"keras.optimizers.AdamW",
"keras.optimizers.experimental.AdamW",
"keras.dtensor.experimental.optimizers.AdamW",
v1=[],
)
class AdamW(optimizer.Optimizer):
r"""Optimizer that implements the AdamW algorithm.
AdamW optimization is a stochastic gradient descent method that is based on
adaptive estimation of first-order and second-order moments with an added
method to decay weights per the techniques discussed in the paper,
    'Decoupled Weight Decay Regularization' by
    [Loshchilov & Hutter, 2019](https://arxiv.org/abs/1711.05101).
According to
[Kingma et al., 2014](http://arxiv.org/abs/1412.6980),
    the underlying Adam method is "*computationally
efficient, has little memory requirement, invariant to diagonal rescaling of
gradients, and is well suited for problems that are large in terms of
data/parameters*".
Args:
learning_rate: A `tf.Tensor`, floating point value, a schedule that is a
`tf.keras.optimizers.schedules.LearningRateSchedule`, or a callable
that takes no arguments and returns the actual value to use. The
learning rate. Defaults to 0.001.
beta_1: A float value or a constant float tensor, or a callable
that takes no arguments and returns the actual value to use. The
exponential decay rate for the 1st moment estimates.
Defaults to 0.9.
beta_2: A float value or a constant float tensor, or a callable
that takes no arguments and returns the actual value to use. The
exponential decay rate for the 2nd moment estimates.
Defaults to 0.999.
epsilon: A small constant for numerical stability. This epsilon is
"epsilon hat" in the Kingma and Ba paper (in the formula just before
Section 2.1), not the epsilon in Algorithm 1 of the paper.
Defaults to 1e-7.
amsgrad: Boolean. Whether to apply AMSGrad variant of this algorithm
from the paper "On the Convergence of Adam and beyond".
Defaults to `False`.
{{base_optimizer_keyword_args}}
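    Usage example (a minimal sketch of fitting a single variable; the exact
    trajectory depends on the chosen hyperparameters):
    >>> opt = tf.keras.optimizers.AdamW(learning_rate=0.1, weight_decay=0.004)
    >>> var = tf.Variable(10.0)
    >>> loss = lambda: (var ** 2) / 2.0
    >>> _ = opt.minimize(loss, [var])
    >>> var.numpy() < 10.0  # one step moves var toward the minimum
    True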
Reference:
- [Loshchilov et al., 2019](https://arxiv.org/abs/1711.05101)
- [Kingma et al., 2014](http://arxiv.org/abs/1412.6980) for `adam`
- [Reddi et al., 2018](
https://openreview.net/pdf?id=ryQu7f-RZ) for `amsgrad`.
Notes:
The sparse implementation of this algorithm (used when the gradient is an
IndexedSlices object, typically because of `tf.gather` or an embedding
lookup in the forward pass) does apply momentum to variable slices even if
they were not used in the forward pass (meaning they have a gradient equal
to zero). Momentum decay (beta1) is also applied to the entire momentum
accumulator. This means that the sparse behavior is equivalent to the dense
behavior (in contrast to some momentum implementations which ignore momentum
unless a variable slice was actually used).
"""
def __init__(
self,
learning_rate=0.001,
weight_decay=0.004,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-7,
amsgrad=False,
clipnorm=None,
clipvalue=None,
global_clipnorm=None,
use_ema=False,
ema_momentum=0.99,
ema_overwrite_frequency=None,
jit_compile=True,
name="AdamW",
**kwargs
):
super().__init__(
name=name,
clipnorm=clipnorm,
clipvalue=clipvalue,
global_clipnorm=global_clipnorm,
use_ema=use_ema,
ema_momentum=ema_momentum,
ema_overwrite_frequency=ema_overwrite_frequency,
jit_compile=jit_compile,
**kwargs
)
self._learning_rate = self._build_learning_rate(learning_rate)
self.weight_decay = weight_decay
self.beta_1 = beta_1
self.beta_2 = beta_2
self.epsilon = epsilon
self.amsgrad = amsgrad
if self.weight_decay is None:
raise ValueError(
"Missing value of `weight_decay` which is required and"
" must be a float value."
)
def build(self, var_list):
"""Initialize optimizer variables.
        AdamW optimizer has 3 types of variables: momentums, velocities, and
        velocity_hat (the latter is only created when `amsgrad=True`).
Args:
var_list: list of model variables to build AdamW variables on.
"""
super().build(var_list)
if hasattr(self, "_built") and self._built:
return
self._built = True
self._momentums = []
self._velocities = []
for var in var_list:
self._momentums.append(
self.add_variable_from_reference(
model_variable=var, variable_name="m"
)
)
self._velocities.append(
self.add_variable_from_reference(
model_variable=var, variable_name="v"
)
)
if self.amsgrad:
self._velocity_hats = []
for var in var_list:
self._velocity_hats.append(
self.add_variable_from_reference(
model_variable=var, variable_name="vhat"
)
)
def update_step(self, gradient, variable):
"""Update step given gradient and the associated model variable."""
lr = tf.cast(self.learning_rate, variable.dtype)
local_step = tf.cast(self.iterations + 1, variable.dtype)
beta_1_power = tf.pow(tf.cast(self.beta_1, variable.dtype), local_step)
beta_2_power = tf.pow(tf.cast(self.beta_2, variable.dtype), local_step)
var_key = self._var_key(variable)
m = self._momentums[self._index_dict[var_key]]
v = self._velocities[self._index_dict[var_key]]
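        # Fold both bias corrections into a single step size, so m and v can
        # be used below without explicit m_hat/v_hat corrections.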
alpha = lr * tf.sqrt(1 - beta_2_power) / (1 - beta_1_power)
if isinstance(gradient, tf.IndexedSlices):
# Sparse gradients.
m.assign_add(-m * (1 - self.beta_1))
m.scatter_add(
tf.IndexedSlices(
gradient.values * (1 - self.beta_1), gradient.indices
)
)
v.assign_add(-v * (1 - self.beta_2))
v.scatter_add(
tf.IndexedSlices(
tf.square(gradient.values) * (1 - self.beta_2),
gradient.indices,
)
)
if self.amsgrad:
v_hat = self._velocity_hats[self._index_dict[var_key]]
v_hat.assign(tf.maximum(v_hat, v))
v = v_hat
variable.assign_sub((m * alpha) / (tf.sqrt(v) + self.epsilon))
else:
# Dense gradients.
m.assign_add((gradient - m) * (1 - self.beta_1))
v.assign_add((tf.square(gradient) - v) * (1 - self.beta_2))
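            # AMSGrad tracks the elementwise maximum of all past second-moment
            # estimates, keeping the effective step size non-increasing.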
if self.amsgrad:
v_hat = self._velocity_hats[self._index_dict[var_key]]
v_hat.assign(tf.maximum(v_hat, v))
v = v_hat
variable.assign_sub((m * alpha) / (tf.sqrt(v) + self.epsilon))
def get_config(self):
config = super().get_config()
config.update(
{
"learning_rate": self._serialize_hyperparameter(
self._learning_rate
),
"weight_decay": self.weight_decay,
"beta_1": self.beta_1,
"beta_2": self.beta_2,
"epsilon": self.epsilon,
"amsgrad": self.amsgrad,
}
)
return config
AdamW.__doc__ = AdamW.__doc__.replace(
"{{base_optimizer_keyword_args}}", optimizer.base_optimizer_keyword_args
)
| tf-keras/tf_keras/optimizers/adamw.py/0 | {
"file_path": "tf-keras/tf_keras/optimizers/adamw.py",
"repo_id": "tf-keras",
"token_count": 3975
} | 197 |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Nadam optimizer implementation."""
import tensorflow.compat.v2 as tf
from tf_keras import backend_config
from tf_keras.optimizers.legacy import optimizer_v2
from tf_keras.optimizers.schedules import learning_rate_schedule
# isort: off
from tensorflow.python.util.tf_export import keras_export
@keras_export(
"keras.optimizers.legacy.Nadam",
v1=["keras.optimizers.Nadam", "keras.optimizers.legacy.Nadam"],
)
class Nadam(optimizer_v2.OptimizerV2):
r"""Optimizer that implements the NAdam algorithm.
Much like Adam is essentially RMSprop with momentum, Nadam is Adam with
Nesterov momentum.
Args:
learning_rate: A Tensor or a floating point value. The learning rate.
beta_1: A float value or a constant float tensor. The exponential decay
rate for the 1st moment estimates.
beta_2: A float value or a constant float tensor. The exponential decay
rate for the exponentially weighted infinity norm.
epsilon: A small constant for numerical stability.
name: Optional name for the operations created when applying gradients.
Defaults to `"Nadam"`.
**kwargs: keyword arguments. Allowed arguments are `clipvalue`,
`clipnorm`, `global_clipnorm`.
If `clipvalue` (float) is set, the gradient of each weight
is clipped to be no higher than this value.
If `clipnorm` (float) is set, the gradient of each weight
is individually clipped so that its norm is no higher than this value.
        If `global_clipnorm` (float) is set, the gradient of all weights is
        clipped so that their global norm is no higher than this value.
Usage Example:
>>> opt = tf.keras.optimizers.legacy.Nadam(learning_rate=0.2)
>>> var1 = tf.Variable(10.0)
>>> loss = lambda: (var1 ** 2) / 2.0
>>> step_count = opt.minimize(loss, [var1]).numpy()
>>> "{:.1f}".format(var1.numpy())
9.8
Reference:
- [Dozat, 2015](http://cs229.stanford.edu/proj2015/054_report.pdf).
"""
_HAS_AGGREGATE_GRAD = True
def __init__(
self,
learning_rate=0.001,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-7,
name="Nadam",
**kwargs
):
# Backwards compatibility with keras NAdam optimizer.
kwargs["decay"] = kwargs.pop("schedule_decay", 0.004)
learning_rate = kwargs.get("lr", learning_rate)
if isinstance(
learning_rate, learning_rate_schedule.LearningRateSchedule
):
raise ValueError(
"The Nadam optimizer does not support "
"tf.keras.optimizers.LearningRateSchedules as the "
"learning rate."
)
super().__init__(name, **kwargs)
self._set_hyper("learning_rate", kwargs.get("lr", learning_rate))
self._set_hyper("decay", self._initial_decay)
self._set_hyper("beta_1", beta_1)
self._set_hyper("beta_2", beta_2)
self.epsilon = epsilon or backend_config.epsilon()
self._m_cache = None
def _create_slots(self, var_list):
var_dtype = var_list[0].dtype.base_dtype
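        # The momentum cache holds the running product of the momentum
        # schedule, mu_1 * mu_2 * ... * mu_t, from Dozat's Nadam formulation.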
if self._m_cache is None:
self._m_cache = self.add_weight(
"momentum_cache",
shape=[],
dtype=var_dtype,
initializer="ones",
trainable=False,
aggregation=tf.VariableAggregation.ONLY_FIRST_REPLICA,
)
self._weights.append(self._m_cache)
# Separate for-loops to respect the ordering of slot variables from v1.
for var in var_list:
# Create slots for the first moments.
self.add_slot(var, "m")
for var in var_list:
# Create slots for the second moments.
self.add_slot(var, "v")
def _prepare_local(self, var_device, var_dtype, apply_state):
lr_t = tf.identity(self._get_hyper("learning_rate", var_dtype))
beta_1_t = tf.identity(self._get_hyper("beta_1", var_dtype))
beta_2_t = tf.identity(self._get_hyper("beta_2", var_dtype))
local_step = tf.cast(self.iterations + 1, var_dtype)
next_step = tf.cast(self.iterations + 2, var_dtype)
decay_base = tf.cast(0.96, var_dtype)
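        # Momentum schedule from Dozat (2015): mu_t = beta_1 * (1 - 0.5 *
        # 0.96^(decay * t)), evaluated at the current and the next step.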
m_t = beta_1_t * (
1.0 - 0.5 * (tf.pow(decay_base, self._initial_decay * local_step))
)
m_t_1 = beta_1_t * (
1.0 - 0.5 * (tf.pow(decay_base, self._initial_decay * next_step))
)
m_schedule_new = tf.cast(self._m_cache_read, var_dtype) * m_t
if var_dtype is self._m_cache.dtype:
m_schedule_new = tf.identity(
tf.compat.v1.assign(
self._m_cache, m_schedule_new, use_locking=self._use_locking
)
)
m_schedule_next = m_schedule_new * m_t_1
apply_state[(var_device, var_dtype)] = dict(
lr_t=lr_t,
neg_lr_t=-lr_t,
epsilon=tf.convert_to_tensor(self.epsilon, var_dtype),
beta_1_t=beta_1_t,
beta_2_t=beta_2_t,
m_t=m_t,
m_t_1=m_t_1,
one_minus_beta_1_t=1 - beta_1_t,
one_minus_beta_2_t=1 - beta_2_t,
one_minus_m_t=1.0 - m_t,
one_minus_m_schedule_new=1.0 - m_schedule_new,
one_minus_m_schedule_next=1.0 - m_schedule_next,
v_t_prime_denominator=1.0 - tf.pow(beta_2_t, local_step),
)
def _prepare(self, var_list):
# Get the value of the momentum cache before starting to apply
# gradients.
self._m_cache_read = tf.identity(self._m_cache)
return super()._prepare(var_list)
def _resource_apply_dense(self, grad, var, apply_state=None):
var_device, var_dtype = var.device, var.dtype.base_dtype
coefficients = (apply_state or {}).get(
(var_device, var_dtype)
) or self._fallback_apply_state(var_device, var_dtype)
m = self.get_slot(var, "m")
v = self.get_slot(var, "v")
g_prime = grad / coefficients["one_minus_m_schedule_new"]
m_t = (
coefficients["beta_1_t"] * m
+ coefficients["one_minus_beta_1_t"] * grad
)
m_t = tf.compat.v1.assign(m, m_t, use_locking=self._use_locking)
m_t_prime = m_t / coefficients["one_minus_m_schedule_next"]
v_t = coefficients["beta_2_t"] * v + coefficients[
"one_minus_beta_2_t"
] * tf.square(grad)
v_t = tf.compat.v1.assign(v, v_t, use_locking=self._use_locking)
v_t_prime = v_t / coefficients["v_t_prime_denominator"]
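        # Nesterov lookahead: blend the bias-corrected current gradient with
        # the bias-corrected next-step momentum estimate.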
m_t_bar = (
coefficients["one_minus_m_t"] * g_prime
+ coefficients["m_t_1"] * m_t_prime
)
var_t = var - coefficients["lr_t"] * m_t_bar / (
tf.sqrt(v_t_prime) + coefficients["epsilon"]
)
return tf.compat.v1.assign(var, var_t, use_locking=self._use_locking).op
def _resource_apply_sparse(self, grad, var, indices, apply_state=None):
var_device, var_dtype = var.device, var.dtype.base_dtype
coefficients = (apply_state or {}).get(
(var_device, var_dtype)
) or self._fallback_apply_state(var_device, var_dtype)
m = self.get_slot(var, "m")
v = self.get_slot(var, "v")
g_prime = grad / coefficients["one_minus_m_schedule_new"]
# m_t = beta1 * m + (1 - beta1) * g_t
m_scaled_g_values = grad * coefficients["one_minus_beta_1_t"]
m_t = tf.compat.v1.assign(
m, m * coefficients["beta_1_t"], use_locking=self._use_locking
)
with tf.control_dependencies([m_t]):
m_t = self._resource_scatter_add(m, indices, m_scaled_g_values)
m_t_slice = tf.gather(m_t, indices)
m_t_prime = m_t_slice / coefficients["one_minus_m_schedule_next"]
m_t_bar = (
coefficients["one_minus_m_t"] * g_prime
+ coefficients["m_t_1"] * m_t_prime
)
# v_t = beta2 * v + (1 - beta2) * (g_t * g_t)
v_scaled_g_values = (grad * grad) * coefficients["one_minus_beta_2_t"]
v_t = tf.compat.v1.assign(
v, v * coefficients["beta_2_t"], use_locking=self._use_locking
)
with tf.control_dependencies([v_t]):
v_t = self._resource_scatter_add(v, indices, v_scaled_g_values)
v_t_slice = tf.gather(v_t, indices)
v_t_prime = v_t_slice / coefficients["v_t_prime_denominator"]
v_prime_sqrt_plus_eps = tf.sqrt(v_t_prime) + coefficients["epsilon"]
var_update = self._resource_scatter_add(
var,
indices,
coefficients["neg_lr_t"] * m_t_bar / v_prime_sqrt_plus_eps,
)
return tf.group(*[var_update, m_t_bar, v_t])
def get_config(self):
config = super().get_config()
config.update(
{
"learning_rate": self._serialize_hyperparameter(
"learning_rate"
),
"decay": self._initial_decay,
"beta_1": self._serialize_hyperparameter("beta_1"),
"beta_2": self._serialize_hyperparameter("beta_2"),
"epsilon": self.epsilon,
}
)
return config
| tf-keras/tf_keras/optimizers/legacy/nadam.py/0 | {
"file_path": "tf-keras/tf_keras/optimizers/legacy/nadam.py",
"repo_id": "tf-keras",
"token_count": 4738
} | 198 |