text (stringlengths 5-261k) | id (stringlengths 16-106) | metadata (dict) | __index_level_0__ (int64 0-266)
---|---|---|---|
<jupyter_start><jupyter_text>Knowledge distillation recipes**Author:** [Sayak Paul](https://twitter.com/RisingSayak)**Date created:** 2021/08/01**Last modified:** 2021/08/01**Description:** Training better student models via knowledge distillation with function matching. IntroductionKnowledge distillation ([Hinton et al.](https://arxiv.org/abs/1503.02531)) is a techniquethat enables us to compress larger models into smaller ones. This allows us to reap thebenefits of high performing larger models, while reducing storage and memory costs andachieving higher inference speed:* Smaller models -> smaller memory footprint* Reduced complexity -> fewer floating-point operations (FLOPs)In [Knowledge distillation: A good teacher is patient and consistent](https://arxiv.org/abs/2106.05237),Beyer et al. investigate various existing setups for performing knowledge distillationand show that all of them lead to sub-optimal performance. Due to this,practitioners often settle for other alternatives (quantization, pruning, weightclustering, etc.) when developing production systems that are resource-constrained.Beyer et al. investigate how we can improve the student models that come outof the knowledge distillation process and always match the performance oftheir teacher models. In this example, we will study the recipes introduced by them, usingthe [Flowers102 dataset](https://www.robots.ox.ac.uk/~vgg/data/flowers/102/). As areference, with these recipes, the authors were able to produce a ResNet50 model thatachieves 82.8% accuracy on the ImageNet-1k dataset.In case you need a refresher on knowledge distillation and want to study how it isimplemented in Keras, you can refer to[this example](https://keras.io/examples/vision/knowledge_distillation/).You can also follow[this example](https://keras.io/examples/vision/consistency_training/)that shows an extension of knowledge distillation applied to consistency training.To follow this example, you will need TensorFlow 2.5 or higher as well as TensorFlow Addons,which can be installed using the command below:<jupyter_code>!!pip install -q tensorflow-addons<jupyter_output><empty_output><jupyter_text>Imports<jupyter_code>from tensorflow import keras
import tensorflow_addons as tfa
import tensorflow as tf
import matplotlib.pyplot as plt
import numpy as np
import tensorflow_datasets as tfds
tfds.disable_progress_bar()<jupyter_output><empty_output><jupyter_text>Hyperparameters and constants<jupyter_code>AUTO = tf.data.AUTOTUNE # Used to dynamically adjust parallelism.
BATCH_SIZE = 64
# Comes from Table 4 and "Training setup" section.
TEMPERATURE = 10 # Used to soften the logits before they go to softmax.
INIT_LR = 0.003 # Initial learning rate that will be decayed over the training period.
WEIGHT_DECAY = 0.001 # Used for regularization.
CLIP_THRESHOLD = 1.0 # Used for clipping the gradients by L2-norm.
# We will first resize the training images to a bigger size and then we will take
# random crops of a lower size.
BIGGER = 160
RESIZE = 128<jupyter_output><empty_output><jupyter_text>Load the Flowers102 dataset<jupyter_code>train_ds, validation_ds, test_ds = tfds.load(
"oxford_flowers102", split=["train", "validation", "test"], as_supervised=True
)
print(f"Number of training examples: {train_ds.cardinality()}.")
print(
f"Number of validation examples: {validation_ds.cardinality()}."
)
print(f"Number of test examples: {test_ds.cardinality()}.")<jupyter_output><empty_output><jupyter_text>Teacher modelAs is common with any distillation technique, it's important to first train awell-performing teacher model which is usually larger than the subsequent student model.The authors distill a BiT ResNet152x2 model (teacher) into a BiT ResNet50 model(student).BiT stands for Big Transfer and was introduced in[Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370).BiT variants of ResNets use Group Normalization ([Wu et al.](https://arxiv.org/abs/1803.08494))and Weight Standardization ([Qiao et al.](https://arxiv.org/abs/1903.10520v2))in place of Batch Normalization ([Ioffe et al.](https://arxiv.org/abs/1502.03167)).In order to limit the time it takes to run this example, we will be using a BiTResNet101x3 already trained on the Flowers102 dataset. You can refer to[this notebook](https://github.com/sayakpaul/FunMatch-Distillation/blob/main/train_bit.ipynb)to learn more about the training process. This model reaches 98.18% accuracy on thetest set of Flowers102.The model weights are hosted on Kaggle as a dataset.To download the weights, follow these steps:1. Create an account on Kaggle [here](https://www.kaggle.com).2. Go to the "Account" tab of your [user profile](https://www.kaggle.com/account).3. Select "Create API Token". This will trigger the download of `kaggle.json`, a filecontaining your API credentials.4. From that JSON file, copy your Kaggle username and API key.Now run the following:```pythonimport osos.environ["KAGGLE_USERNAME"] = "" TODO: enter your Kaggle user name hereos.environ["KAGGLE_KEY"] = "" TODO: enter your Kaggle key here```Once the environment variables are set, run:```shell$ kaggle datasets download -d spsayakpaul/bitresnet101x3flowers102$ unzip -qq bitresnet101x3flowers102.zip```This should generate a folder named `T-r101x3-128` which is essentially a teacher[`SavedModel`](https://www.tensorflow.org/guide/saved_model).<jupyter_code>import os
os.environ["KAGGLE_USERNAME"] = "" # TODO: enter your Kaggle user name here
os.environ["KAGGLE_KEY"] = "" # TODO: enter your Kaggle API key here
!!kaggle datasets download -d spsayakpaul/bitresnet101x3flowers102
!!unzip -qq bitresnet101x3flowers102.zip
# Since the teacher model is not going to be trained further we make
# it non-trainable.
teacher_model = keras.models.load_model(
"/home/jupyter/keras-io/examples/keras_recipes/T-r101x3-128"
)
teacher_model.trainable = False
teacher_model.summary()<jupyter_output><empty_output><jupyter_text>The "function matching" recipeTo train a high-quality student model, the authors propose the following changes to thestudent training workflow:* Use an aggressive variant of MixUp ([Zhang et al.](https://arxiv.org/abs/1710.09412)).This is done by sampling the `alpha` parameter from a uniform distribution instead of abeta distribution. MixUp is used here in order to help the student model capture thefunction underlying the teacher model. MixUp linearly interpolates between differentsamples across the data manifold. So the rationale here is if the student is trained tofit that it should be able to match the teacher model better. To incorporate moreinvariance MixUp is coupled with "Inception-style" cropping([Szegedy et al.](https://arxiv.org/abs/1409.4842)). This is where the"function matching" term makes its way in the[original paper](https://arxiv.org/abs/2106.05237).* Unlike other works ([Noisy Student Training](https://arxiv.org/abs/1911.04252) forexample), both the teacher and student models receive the same copy of an image, which ismixed up and randomly cropped. By providing the same inputs to both the models, theauthors make the teacher consistent with the student.* With MixUp, we are essentially introducing a strong form of regularization whentraining the student. As such, it should be trained for arelatively long period of time (1000 epochs at least). Since the student is trained withstrong regularization, the risk of overfitting due to a longer trainingschedule are also mitigated.In summary, one needs to be consistent and patient while training the student model. Data input pipeline<jupyter_code>def mixup(images, labels):
alpha = tf.random.uniform([], 0, 1)
mixedup_images = alpha * images + (1 - alpha) * tf.reverse(images, axis=[0])
# The labels do not matter here since they are NOT used during
# training.
return mixedup_images, labels
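# Note: standard MixUp (Zhang et al.) samples the mixing coefficient from a Beta
# distribution and also mixes the labels, while the "function matching" recipe
# samples it uniformly in [0, 1] (as above) and ignores the labels, since the
# student only learns from the teacher's predictions. A hedged sketch of the
# Beta-sampled variant, shown for comparison only (assumes one-hot float labels,
# and is not used in this example):
#
#   gamma_1 = tf.random.gamma([], alpha=0.2)
#   gamma_2 = tf.random.gamma([], alpha=0.2)
#   alpha = gamma_1 / (gamma_1 + gamma_2)  # equivalent to sampling Beta(0.2, 0.2)
#   mixedup_labels = alpha * labels + (1 - alpha) * tf.reverse(labels, axis=[0])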
def preprocess_image(image, label, train=True):
image = tf.cast(image, tf.float32) / 255.0
if train:
image = tf.image.resize(image, (BIGGER, BIGGER))
image = tf.image.random_crop(image, (RESIZE, RESIZE, 3))
image = tf.image.random_flip_left_right(image)
else:
# Central fraction amount is from here:
# https://git.io/J8Kda.
image = tf.image.central_crop(image, central_fraction=0.875)
image = tf.image.resize(image, (RESIZE, RESIZE))
return image, label
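# For reference, a closer "Inception-style" crop (as used in the paper) samples a
# random box by area and aspect ratio before resizing. A hedged sketch, not used
# in this example for brevity:
#
#   begin, size, _ = tf.image.sample_distorted_bounding_box(
#       tf.shape(image),
#       bounding_boxes=tf.zeros([0, 0, 4], tf.float32),
#       min_object_covered=0.1,
#       aspect_ratio_range=(3 / 4, 4 / 3),
#       area_range=(0.08, 1.0),
#       max_attempts=10,
#       use_image_if_no_bounding_boxes=True,
#   )
#   image = tf.slice(image, begin, size)
#   image = tf.image.resize(image, (RESIZE, RESIZE))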
def prepare_dataset(dataset, train=True, batch_size=BATCH_SIZE):
if train:
dataset = dataset.map(preprocess_image, num_parallel_calls=AUTO)
dataset = dataset.shuffle(BATCH_SIZE * 10)
else:
dataset = dataset.map(
lambda x, y: (preprocess_image(x, y, train)), num_parallel_calls=AUTO
)
dataset = dataset.batch(batch_size)
if train:
dataset = dataset.map(mixup, num_parallel_calls=AUTO)
dataset = dataset.prefetch(AUTO)
return dataset<jupyter_output><empty_output><jupyter_text>Note that for brevity, we used mild crops for the training set but in practice"Inception-style" preprocessing should be applied. You can refer to[this script](https://github.com/sayakpaul/FunMatch-Distillation/blob/main/crop_resize.py)for a closer implementation. Also, _**the ground-truth labels are not used fortraining the student.**_<jupyter_code>train_ds = prepare_dataset(train_ds, True)
validation_ds = prepare_dataset(validation_ds, False)
test_ds = prepare_dataset(test_ds, False)<jupyter_output><empty_output><jupyter_text>Visualization<jupyter_code>sample_images, _ = next(iter(train_ds))
plt.figure(figsize=(10, 10))
for n in range(25):
ax = plt.subplot(5, 5, n + 1)
plt.imshow(sample_images[n].numpy())
plt.axis("off")
plt.show()<jupyter_output><empty_output><jupyter_text>Student modelFor the purpose of this example, we will use the standard ResNet50V2([He et al.](https://arxiv.org/abs/1603.05027)).<jupyter_code>def get_resnetv2():
resnet_v2 = keras.applications.ResNet50V2(
weights=None,
input_shape=(RESIZE, RESIZE, 3),
classes=102,
classifier_activation="linear",
)
return resnet_v2
get_resnetv2().count_params()<jupyter_output><empty_output><jupyter_text>Compared to the teacher model, this model has 358 Million fewer parameters. Distillation utilityWe will reuse some code from[this example](https://keras.io/examples/vision/knowledge_distillation/)on knowledge distillation.<jupyter_code>class Distiller(tf.keras.Model):
def __init__(self, student, teacher):
super().__init__()
self.student = student
self.teacher = teacher
self.loss_tracker = keras.metrics.Mean(name="distillation_loss")
@property
def metrics(self):
metrics = super().metrics
metrics.append(self.loss_tracker)
return metrics
def compile(
self, optimizer, metrics, distillation_loss_fn, temperature=TEMPERATURE,
):
super().compile(optimizer=optimizer, metrics=metrics)
self.distillation_loss_fn = distillation_loss_fn
self.temperature = temperature
def train_step(self, data):
# Unpack data
x, _ = data
# Forward pass of teacher
teacher_predictions = self.teacher(x, training=False)
with tf.GradientTape() as tape:
# Forward pass of student
student_predictions = self.student(x, training=True)
# Compute loss
distillation_loss = self.distillation_loss_fn(
tf.nn.softmax(teacher_predictions / self.temperature, axis=1),
tf.nn.softmax(student_predictions / self.temperature, axis=1),
)
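            # Note: this is pure "function matching": the objective is
            # KL(softmax(z_teacher / T) || softmax(z_student / T)) with no
            # ground-truth label term, and (unlike Hinton et al.) it is not
            # rescaled by T ** 2 in this example.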
# Compute gradients
trainable_vars = self.student.trainable_variables
gradients = tape.gradient(distillation_loss, trainable_vars)
# Update weights
self.optimizer.apply_gradients(zip(gradients, trainable_vars))
# Report progress
self.loss_tracker.update_state(distillation_loss)
return {"distillation_loss": self.loss_tracker.result()}
def test_step(self, data):
# Unpack data
x, y = data
# Forward passes
teacher_predictions = self.teacher(x, training=False)
student_predictions = self.student(x, training=False)
# Calculate the loss
distillation_loss = self.distillation_loss_fn(
tf.nn.softmax(teacher_predictions / self.temperature, axis=1),
tf.nn.softmax(student_predictions / self.temperature, axis=1),
)
# Report progress
self.loss_tracker.update_state(distillation_loss)
self.compiled_metrics.update_state(y, student_predictions)
results = {m.name: m.result() for m in self.metrics}
return results<jupyter_output><empty_output><jupyter_text>Learning rate scheduleA warmup cosine learning rate schedule is used in the paper. This schedule is alsotypical for many pre-training methods especially for computer vision.<jupyter_code># Some code is taken from:
# https://www.kaggle.com/ashusma/training-rfcx-tensorflow-tpu-effnet-b2.
class WarmUpCosine(keras.optimizers.schedules.LearningRateSchedule):
def __init__(
self, learning_rate_base, total_steps, warmup_learning_rate, warmup_steps
):
super().__init__()
self.learning_rate_base = learning_rate_base
self.total_steps = total_steps
self.warmup_learning_rate = warmup_learning_rate
self.warmup_steps = warmup_steps
self.pi = tf.constant(np.pi)
def __call__(self, step):
if self.total_steps < self.warmup_steps:
raise ValueError("Total_steps must be larger or equal to warmup_steps.")
cos_annealed_lr = tf.cos(
self.pi
* (tf.cast(step, tf.float32) - self.warmup_steps)
/ float(self.total_steps - self.warmup_steps)
)
learning_rate = 0.5 * self.learning_rate_base * (1 + cos_annealed_lr)
if self.warmup_steps > 0:
if self.learning_rate_base < self.warmup_learning_rate:
raise ValueError(
"Learning_rate_base must be larger or equal to "
"warmup_learning_rate."
)
slope = (
self.learning_rate_base - self.warmup_learning_rate
) / self.warmup_steps
warmup_rate = slope * tf.cast(step, tf.float32) + self.warmup_learning_rate
learning_rate = tf.where(
step < self.warmup_steps, warmup_rate, learning_rate
)
return tf.where(
step > self.total_steps, 0.0, learning_rate, name="learning_rate"
)<jupyter_output><empty_output><jupyter_text>We can now plot a graph of learning rates generated using this schedule.<jupyter_code>ARTIFICIAL_EPOCHS = 1000
ARTIFICIAL_BATCH_SIZE = 512
DATASET_NUM_TRAIN_EXAMPLES = 1020
TOTAL_STEPS = int(
DATASET_NUM_TRAIN_EXAMPLES / ARTIFICIAL_BATCH_SIZE * ARTIFICIAL_EPOCHS
)
scheduled_lrs = WarmUpCosine(
learning_rate_base=INIT_LR,
total_steps=TOTAL_STEPS,
warmup_learning_rate=0.0,
warmup_steps=1500,
)
lrs = [scheduled_lrs(step) for step in range(TOTAL_STEPS)]
plt.plot(lrs)
plt.xlabel("Step", fontsize=14)
plt.ylabel("LR", fontsize=14)
plt.show()<jupyter_output><empty_output><jupyter_text>The original paper uses at least 1000 epochs and a batch size of 512 to perform"function matching". The objective of this example is to present a workflow toimplement the recipe and not to demonstrate the results when they are applied at full scale.However, these recipes will transfer to the original settings from the paper. Pleaserefer to [this repository](https://github.com/sayakpaul/FunMatch-Distillation) if you areinterested in finding out more. Training<jupyter_code>optimizer = tfa.optimizers.AdamW(
weight_decay=WEIGHT_DECAY, learning_rate=scheduled_lrs, clipnorm=CLIP_THRESHOLD
)
student_model = get_resnetv2()
distiller = Distiller(student=student_model, teacher=teacher_model)
distiller.compile(
optimizer,
metrics=[keras.metrics.SparseCategoricalAccuracy()],
distillation_loss_fn=keras.losses.KLDivergence(),
temperature=TEMPERATURE,
)
history = distiller.fit(
train_ds,
steps_per_epoch=int(np.ceil(DATASET_NUM_TRAIN_EXAMPLES / BATCH_SIZE)),
validation_data=validation_ds,
epochs=30, # This should be at least 1000.
)
student = distiller.student
student_model.compile(metrics=["accuracy"])
_, top1_accuracy = student.evaluate(test_ds)
print(f"Top-1 accuracy on the test set: {round(top1_accuracy * 100, 2)}%")<jupyter_output><empty_output><jupyter_text>ResultsWith just 30 epochs of training, the results are nowhere near expected.This is where the benefits of patience aka a longer training schedulewill come into play. Let's investigate what the model trained for 1000 epochs can do.<jupyter_code>!# Download the pre-trained weights.
!!wget https://git.io/JBO3Y -O S-r50x1-128-1000.tar.gz
!!tar xf S-r50x1-128-1000.tar.gz
pretrained_student = keras.models.load_model("S-r50x1-128-1000")
pretrained_student.summary()<jupyter_output><empty_output><jupyter_text>This model exactly follows what the authors have used in their student models. This iswhy the model summary is a bit different.<jupyter_code>_, top1_accuracy = pretrained_student.evaluate(test_ds)
print(f"Top-1 accuracy on the test set: {round(top1_accuracy * 100, 2)}%")<jupyter_output><empty_output> | keras-io/examples/keras_recipes/ipynb/better_knowledge_distillation.ipynb/0 | {
"file_path": "keras-io/examples/keras_recipes/ipynb/better_knowledge_distillation.ipynb",
"repo_id": "keras-io",
"token_count": 6101
} | 89 |
"""
Title: Packaging Keras models for wide distribution using Functional Subclassing
Author: Martin Görner
Date created: 2023-12-13
Last modified: 2023-12-13
Description: When sharing your deep learning models, package them using the Functional Subclassing pattern.
Accelerator: GPU
"""
"""
## Introduction
Keras is the ideal framework for sharing your cutting-edge deep learning models, in a
library of pre-trained (or not) models. Millions of ML engineers are fluent in the
familiar Keras API, making your models accessible to a global community, whatever their
preferred backend (Jax, PyTorch or TensorFlow).
One of the benefits of the Keras API is that it lets users programmatically inspect or
edit a model, a feature that is necessary when creating new architectures or workflows
based on a pre-trained model.
When distributing models, the Keras team recommends packaging them using the **Functional
Subclassing** pattern. Models implemented in this way combine two benefits:
* They can be instantiated in the normal pythonic way:<br/>
`model = model_collection_xyz.AmazingModel()`
* They are Keras functional models which means that they have a programmatically
accessible graph of layers, for introspection or model surgery.
This guide explains [how to use](#functional-subclassing-model) the Functional
Subclassing pattern, and showcases its benefits for [programmatic model
introspection](#model-introspection) and [model surgery](#model-surgery). It also shows
two other best practices for sharable Keras models: [configuring
models](#unconstrained-inputs) for the widest range of supported inputs, for example
images of various sizes, and [using dictionary inputs](#model-with-dictionary-inputs) for
clarity in more complex models.
"""
"""
## Setup
"""
import keras
import tensorflow as tf # only for tf.data
print("Keras version", keras.version())
print("Keras is running on", keras.config.backend())
"""
## Dataset
Let's load an MNIST dataset so that we have something to train with.
"""
# tf.data is a great API for putting together a data stream.
# It works whether you use the TensorFlow, PyTorch or Jax backend,
# as long as you use it in the data stream only and not inside of a model.
BATCH_SIZE = 256
(x_train, train_labels), (x_test, test_labels) = keras.datasets.mnist.load_data()
train_data = tf.data.Dataset.from_tensor_slices((x_train, train_labels))
train_data = train_data.map(
lambda x, y: (tf.expand_dims(x, axis=-1), y)
) # 1-channel monochrome
train_data = train_data.batch(BATCH_SIZE)
train_data = train_data.cache()
train_data = train_data.shuffle(5000, reshuffle_each_iteration=True)
train_data = train_data.repeat()
test_data = tf.data.Dataset.from_tensor_slices((x_test, test_labels))
test_data = test_data.map(
lambda x, y: (tf.expand_dims(x, axis=-1), y)
) # 1-channel monochrome
test_data = test_data.batch(10000)
test_data = test_data.cache()
STEPS_PER_EPOCH = len(train_labels) // BATCH_SIZE
EPOCHS = 5
"""
## Functional Subclassing Model
The model is wrapped in a class so that end users can instantiate it normally by calling
the constructor `MnistModel()` rather than calling a factory function.
"""
class MnistModel(keras.Model):
def __init__(self, **kwargs):
# Keras Functional model definition. This could have used Sequential as
# well. Sequential is just syntactic sugar for simple functional models.
# 1-channel monochrome input
inputs = keras.layers.Input(shape=(None, None, 1), dtype="uint8")
# pixel format conversion from uint8 to float32
y = keras.layers.Rescaling(1 / 255.0)(inputs)
# 3 convolutional layers
y = keras.layers.Conv2D(
filters=16, kernel_size=3, padding="same", activation="relu"
)(y)
y = keras.layers.Conv2D(
filters=32, kernel_size=6, padding="same", activation="relu", strides=2
)(y)
y = keras.layers.Conv2D(
filters=48, kernel_size=6, padding="same", activation="relu", strides=2
)(y)
# 2 dense layers
y = keras.layers.GlobalAveragePooling2D()(y)
y = keras.layers.Dense(48, activation="relu")(y)
y = keras.layers.Dropout(0.4)(y)
outputs = keras.layers.Dense(
10, activation="softmax", name="classification_head" # 10 classes
)(y)
# A Keras Functional model is created by calling keras.Model(inputs, outputs)
super().__init__(inputs=inputs, outputs=outputs, **kwargs)
"""
Let's instantiate and train this model.
"""
model = MnistModel()
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
history = model.fit(
train_data,
steps_per_epoch=STEPS_PER_EPOCH,
epochs=EPOCHS,
validation_data=test_data,
)
"""
## Unconstrained inputs
Notice, in the model definition above, that the input is specified with undefined
dimensions: `Input(shape=(None, None, 1))`
This allows the model to accept any image size as an input. However, this
only works if the loosely defined shape can be propagated through all the layers and
still determine the size of all weights.
* So if you have a model architecture that can handle different input sizes
with the same weights (like here), then your users will be able to instantiate it without
parameters:<br/> `model = MnistModel()`
* If on the other hand, the model must provision different weights for different input
sizes, you will have to ask your users to specify the size in the constructor:<br/>
`model = ModelXYZ(input_size=...)` (a minimal sketch of this pattern is shown below)
"""
"""
## Model introspection
Keras maintains a programmatically accessible graph of layers for every model. It can be
used for introspection and is accessed through the `model.layers` or `layer.layers`
attribute. The utility function `model.summary()` also uses this mechanism internally.
"""
model = MnistModel()
# Model summary works
model.summary()
# Recursively walking the layer graph works as well
def walk_layers(layer):
if hasattr(layer, "layers"):
for layer in layer.layers:
walk_layers(layer)
else:
print(layer.name)
print("\nWalking model layers:\n")
walk_layers(model)
"""
## Model surgery
End users might want to instantiate the model from your library but modify it before use.
Functional models have a programmatically accessible graph of layers. Edits are possible
by slicing and splicing the graph and creating a new functional model.
The alternative is to fork the model code and make the modifications but that forces
users to then maintain their fork indefinitely.
Example: instantiate the model but change the classification head to do a binary
classification, "0" or "not 0", instead of the original 10-way digits classification.
"""
model = MnistModel()
input = model.input
# cut before the classification head
y = model.get_layer("classification_head").input
# add a new classification head
output = keras.layers.Dense(
1, # single class for binary classification
activation="sigmoid",
name="binary_classification_head",
)(y)
# create a new functional model
binary_model = keras.Model(input, output)
binary_model.summary()
"""
We can now train the new model as a binary classifier.
"""
# new dataset with 0 / 1 labels (1 = digit '0', 0 = all other digits)
bin_train_data = train_data.map(
lambda x, y: (x, tf.cast(tf.math.equal(y, tf.zeros_like(y)), dtype=tf.uint8))
)
bin_test_data = test_data.map(
lambda x, y: (x, tf.cast(tf.math.equal(y, tf.zeros_like(y)), dtype=tf.uint8))
)
# appropriate loss and metric for binary classification
binary_model.compile(
optimizer="adam", loss="binary_crossentropy", metrics=["binary_accuracy"]
)
history = binary_model.fit(
bin_train_data,
steps_per_epoch=STEPS_PER_EPOCH,
epochs=EPOCHS,
validation_data=bin_test_data,
)
"""
## Model with dictionary inputs
In more complex models, with multiple inputs, structuring the inputs as a dictionary can
improve readability and usability. This is straightforward to do with a functional model:
"""
class MnistDictModel(keras.Model):
def __init__(self, **kwargs):
#
# The input is a dictionary
#
inputs = {
"image": keras.layers.Input(
shape=(None, None, 1), # 1-channel monochrome
dtype="uint8",
name="image",
)
}
# pixel format conversion from uint8 to float32
y = keras.layers.Rescaling(1 / 255.0)(inputs["image"])
# 3 conv layers
y = keras.layers.Conv2D(
filters=16, kernel_size=3, padding="same", activation="relu"
)(y)
y = keras.layers.Conv2D(
filters=32, kernel_size=6, padding="same", activation="relu", strides=2
)(y)
y = keras.layers.Conv2D(
filters=48, kernel_size=6, padding="same", activation="relu", strides=2
)(y)
# 2 dense layers
y = keras.layers.GlobalAveragePooling2D()(y)
y = keras.layers.Dense(48, activation="relu")(y)
y = keras.layers.Dropout(0.4)(y)
outputs = keras.layers.Dense(
10, activation="softmax", name="classification_head" # 10 classes
)(y)
# A Keras Functional model is created by calling keras.Model(inputs, outputs)
super().__init__(inputs=inputs, outputs=outputs, **kwargs)
"""
We can now train the model on inputs structured as a dictionary.
"""
model = MnistDictModel()
# reformat the dataset as a dictionary
dict_train_data = train_data.map(lambda x, y: ({"image": x}, y))
dict_test_data = test_data.map(lambda x, y: ({"image": x}, y))
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=["sparse_categorical_accuracy"],
)
history = model.fit(
dict_train_data,
steps_per_epoch=STEPS_PER_EPOCH,
epochs=EPOCHS,
validation_data=dict_test_data,
)
| keras-io/examples/keras_recipes/packaging_keras_models_for_wide_distribution.py/0 | {
"file_path": "keras-io/examples/keras_recipes/packaging_keras_models_for_wide_distribution.py",
"repo_id": "keras-io",
"token_count": 3507
} | 90 |
<jupyter_start><jupyter_text>Question Answering with Hugging Face Transformers**Author:** Matthew Carrigan and Merve Noyan**Date created:** 13/01/2022**Last modified:** 13/01/2022**Description:** Question answering implementation using Keras and Hugging Face Transformers. Introduction to Question AnsweringQuestion answering is a common NLP task with several variants. In some variants, the taskis multiple-choice:A list of possible answers are supplied with each question, and the model simply needs toreturn a probability distribution over the options. A more challenging variant ofquestion answering, which is more applicable to real-life tasks, is when the options arenot provided. Instead, the model is given an input document -- called context -- and aquestion about the document, and it must extract the span of text in the document thatcontains the answer. In this case, the model is not computing a probability distributionover answers, but two probability distributions over the tokens in the document text,representing the start and end of the span containing the answer. This variant is called"extractive question answering".Extractive question answering is a very challenging NLP task, and the dataset sizerequired to train such a model from scratch when the questions and answers are naturallanguage is prohibitively huge. As a result, question answering (like almost all NLPtasks) benefits enormously from starting from a strong pretrained foundation model -starting from a strong pretrained language model can reduce the dataset size required toreach a given accuracy by multiple orders of magnitude, enabling you to reach very strongperformance with surprisingly reasonable datasets.Starting with a pretrained model adds difficulties, though - where do you get the modelfrom? How do you ensure that your input data is preprocessed and tokenized the same wayas the original model? How do you modify the model to add an output head that matchesyour task of interest?In this example, we'll show you how to load a model from the Hugging Face[🤗Transformers](https://github.com/huggingface/transformers) library to tackle thischallenge. We'll also load a benchmark question answering dataset from the[🤗Datasets](https://github.com/huggingface/datasets) library - this is another open-sourcerepository containing a wide range of datasets across many modalities, from NLP to visionand beyond. Note, though, that there is no requirement that these libraries must be usedwith each other. If you want to train a model from[🤗Transformers](https://github.com/huggingface/transformers) on your own data, or you wantto load data from [🤗 Datasets](https://github.com/huggingface/datasets) and train yourown entirely unrelated models with it, that is of course possible (and highlyencouraged!) Installing the requirements<jupyter_code>!pip install git+https://github.com/huggingface/transformers.git
!pip install datasets
!pip install huggingface-hub<jupyter_output><empty_output><jupyter_text>Loading the dataset We will use the [🤗 Datasets](https://github.com/huggingface/datasets) library to downloadthe SQUAD question answering dataset using `load_dataset()`.<jupyter_code>from datasets import load_dataset
datasets = load_dataset("squad")<jupyter_output><empty_output><jupyter_text>The `datasets` object itself is a`DatasetDict`, which contains one key for the training, validation and test set. We can seethe training, validation and test sets all have a column for the context, the questionand the answers to those questions. To access an actual element, you need to select asplit first, then give an index. We can see the answers are indicated by their startposition in the text and their full text, which is a substring of the context as wementioned above. Let's take a look at what a single training example looks like.<jupyter_code>print(datasets["train"][0])<jupyter_output><empty_output><jupyter_text>Preprocessing the training data Before we can feed those texts to our model, we need to preprocess them. This is done bya 🤗 Transformers `Tokenizer` which will (as the name indicates) tokenize the inputs(including converting the tokens to their corresponding IDs in the pretrained vocabulary)and put it in a format the model expects, as well as generate the other inputs that modelrequires.To do all of this, we instantiate our tokenizer with the `AutoTokenizer.from_pretrained`method, which will ensure:- We get a tokenizer that corresponds to the model architecture we want to use.- We download the vocabulary used when pretraining this specific checkpoint.That vocabulary will be cached, so it's not downloaded again the next time we run thecell.The `from_pretrained()` method expects the name of a model. If you're unsure which model topick, don't panic! The list of models to choose from can be bewildering, but in generalthere is a simple tradeoff: Larger models are slower and consume more memory, but usuallyyield slightly better final accuracies after fine-tuning. For this example, we havechosen the (relatively) lightweight `"distilbert"`, a smaller, distilled version of thefamous BERT language model. If you absolutely must have the highest possible accuracy foran important task, though, and you have the GPU memory (and free time) to handle it, youmay prefer to use a larger model, such as `"roberta-large"`. Newer and even larger modelsthan `"roberta"` exist in [🤗 Transformers](https://github.com/huggingface/transformers),but we leave the task of finding and training them as an exercise to readers who areeither particularly masochistic or have 40GB of VRAM to throw around.<jupyter_code>from transformers import AutoTokenizer
model_checkpoint = "distilbert-base-cased"
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)<jupyter_output><empty_output><jupyter_text>Depending on the model you selected, you will see different keys in the dictionaryreturned by the cell above. They don't matter much for what we're doing here (just knowthey are required by the model we will instantiate later), but you can learn more aboutthem in [this tutorial](https://huggingface.co/transformers/preprocessing.html) if you'reinterested.One specific issue for the preprocessing in question answering is how to deal with verylong documents. We usually truncate them in other tasks, when they are longer than themodel maximum sentence length, but here, removing part of the the context might result inlosing the answer we are looking for. To deal with this, we will allow one (long) examplein our dataset to give several input features, each of length shorter than the maximumlength of the model (or the one we set as a hyper-parameter). Also, just in case theanswer lies at the point we split a long context, we allow some overlap between thefeatures we generate controlled by the hyper-parameter `doc_stride`.If we simply truncate with a fixed size (`max_length`), we will lose information. We want toavoid truncating the question, and instead only truncate the context to ensure the taskremains solvable. To do that, we'll set `truncation` to `"only_second"`, so that only thesecond sequence (the context) in each pair is truncated. To get the list of featurescapped by the maximum length, we need to set `return_overflowing_tokens` to True and passthe `doc_stride` to `stride`. To see which feature of the original context contain theanswer, we can return `"offset_mapping"`.<jupyter_code>max_length = 384 # The maximum length of a feature (question and context)
doc_stride = (
    128  # The authorized overlap between two parts of the context when splitting
)
# it is needed.<jupyter_output><empty_output><jupyter_text>In the case of impossible answers (the answer is in another feature given by an examplewith a long context), we set the cls index for both the start and end position. We couldalso simply discard those examples from the training set if the flag`allow_impossible_answers` is `False`. Since the preprocessing is already complex enoughas it is, we've kept is simple for this part.<jupyter_code>def prepare_train_features(examples):
# Tokenize our examples with truncation and padding, but keep the overflows using a
    # stride. This results in one example possibly giving several features when a context is long,
# each of those features having a context that overlaps a bit the context of the previous
# feature.
examples["question"] = [q.lstrip() for q in examples["question"]]
examples["context"] = [c.lstrip() for c in examples["context"]]
tokenized_examples = tokenizer(
examples["question"],
examples["context"],
truncation="only_second",
max_length=max_length,
stride=doc_stride,
return_overflowing_tokens=True,
return_offsets_mapping=True,
padding="max_length",
)
# Since one example might give us several features if it has a long context, we need a
# map from a feature to its corresponding example. This key gives us just that.
sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
# The offset mappings will give us a map from token to character position in the original
# context. This will help us compute the start_positions and end_positions.
offset_mapping = tokenized_examples.pop("offset_mapping")
# Let's label those examples!
tokenized_examples["start_positions"] = []
tokenized_examples["end_positions"] = []
for i, offsets in enumerate(offset_mapping):
# We will label impossible answers with the index of the CLS token.
input_ids = tokenized_examples["input_ids"][i]
cls_index = input_ids.index(tokenizer.cls_token_id)
# Grab the sequence corresponding to that example (to know what is the context and what
# is the question).
sequence_ids = tokenized_examples.sequence_ids(i)
# One example can give several spans, this is the index of the example containing this
# span of text.
sample_index = sample_mapping[i]
answers = examples["answers"][sample_index]
# If no answers are given, set the cls_index as answer.
if len(answers["answer_start"]) == 0:
tokenized_examples["start_positions"].append(cls_index)
tokenized_examples["end_positions"].append(cls_index)
else:
# Start/end character index of the answer in the text.
start_char = answers["answer_start"][0]
end_char = start_char + len(answers["text"][0])
# Start token index of the current span in the text.
token_start_index = 0
while sequence_ids[token_start_index] != 1:
token_start_index += 1
# End token index of the current span in the text.
token_end_index = len(input_ids) - 1
while sequence_ids[token_end_index] != 1:
token_end_index -= 1
# Detect if the answer is out of the span (in which case this feature is labeled with the
# CLS index).
if not (
offsets[token_start_index][0] <= start_char
and offsets[token_end_index][1] >= end_char
):
tokenized_examples["start_positions"].append(cls_index)
tokenized_examples["end_positions"].append(cls_index)
else:
# Otherwise move the token_start_index and token_end_index to the two ends of the
# answer.
# Note: we could go after the last offset if the answer is the last word (edge
# case).
while (
token_start_index < len(offsets)
and offsets[token_start_index][0] <= start_char
):
token_start_index += 1
tokenized_examples["start_positions"].append(token_start_index - 1)
while offsets[token_end_index][1] >= end_char:
token_end_index -= 1
tokenized_examples["end_positions"].append(token_end_index + 1)
return tokenized_examples<jupyter_output><empty_output><jupyter_text>To apply this function on all the sentences (or pairs of sentences) in our dataset, wejust use the `map()` method of our `Dataset` object, which will apply the function on allthe elements of.We'll use `batched=True` to encode the texts in batches together. This is to leverage thefull benefit of the fast tokenizer we loaded earlier, which will use multi-threading totreat the texts in a batch concurrently. We also use the `remove_columns` argument toremove the columns that existed before tokenization was applied - this ensures that theonly features remaining are the ones we actually want to pass to our model.<jupyter_code>tokenized_datasets = datasets.map(
prepare_train_features,
batched=True,
remove_columns=datasets["train"].column_names,
num_proc=3,
)<jupyter_output><empty_output><jupyter_text>Even better, the results are automatically cached by the 🤗 Datasets library to avoidspending time on this step the next time you run your notebook. The 🤗 Datasets library isnormally smart enough to detect when the function you pass to map has changed (and thusrequires to not use the cache data). For instance, it will properly detect if you changethe task in the first cell and rerun the notebook. 🤗 Datasets warns you when it usescached files, you can pass `load_from_cache_file=False` in the call to `map()` to not usethe cached files and force the preprocessing to be applied again.Because all our data has been padded or truncated to the same length, and it is not toolarge, we can now simply convert it to a dict of numpy arrays, ready for training.Although we will not use it here, 🤗 Datasets have a `to_tf_dataset()` helper methoddesigned to assist you when the data cannot be easily converted to arrays, such as whenit has variable sequence lengths, or is too large to fit in memory. This method wraps a`tf.data.Dataset` around the underlying 🤗 Dataset, streaming samples from the underlyingdataset and batching them on the fly, thus minimizing wasted memory and computation fromunnecessary padding. If your use-case requires it, please see the[docs](https://huggingface.co/docs/transformers/custom_datasetsfinetune-with-tensorflow)on to_tf_dataset and data collator for an example. If not, feel free to follow this exampleand simply convert to dicts!<jupyter_code>train_set = tokenized_datasets["train"].with_format("numpy")[
:
] # Load the whole dataset as a dict of numpy arrays
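# Aside: for datasets that are too large for memory or that have variable-length
# features, the `to_tf_dataset()` helper mentioned above can stream batches instead.
# A hedged sketch, not used in this example:
#
#   tf_train_set = tokenized_datasets["train"].to_tf_dataset(
#       columns=["input_ids", "attention_mask", "start_positions", "end_positions"],
#       shuffle=True,
#       batch_size=16,
#   )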
validation_set = tokenized_datasets["validation"].with_format("numpy")[:]<jupyter_output><empty_output><jupyter_text>Fine-tuning the model That was a lot of work! But now that our data is ready, everything is going to run verysmoothly. First, we download the pretrained model and fine-tune it. Since our task isquestion answering, we use the `TFAutoModelForQuestionAnswering` class. Like with thetokenizer, the `from_pretrained()` method will download and cache the model for us:<jupyter_code>from transformers import TFAutoModelForQuestionAnswering
model = TFAutoModelForQuestionAnswering.from_pretrained(model_checkpoint)<jupyter_output><empty_output><jupyter_text>The warning is telling us we are throwing away some weights and newly initializing someothers. Don't panic! This is absolutely normal. Recall that models like BERT andDistilbert are trained on a **language modeling** task, but we're loading the model asa `TFAutoModelForQuestionAnswering`, which means we want the model to perform a**question answering** task. This change requires the final output layer or "head" to beremoved and replaced with a new head suited for the new task. The `from_pretrained`method will handle all of this for us, and the warning is there simply to remind us thatsome model surgery has been performed, and that the model will not generate usefulpredictions until the newly-initialized layers have been fine-tuned on some data.Next, we can create an optimizer and specify a loss function. You can usually getslightly better performance by using learning rate decay and decoupled weight decay, butfor the purposes of this example the standard `Adam` optimizer will work fine. Note,however, that when fine-tuning a pretrained transformer model you will generally want touse a low learning rate! We find the best results are obtained with values in the range1e-5 to 1e-4, and training may completely diverge at the default Adam learning rate of 1e-3.<jupyter_code>import tensorflow as tf
from tensorflow import keras
optimizer = keras.optimizers.Adam(learning_rate=5e-5)<jupyter_output><empty_output><jupyter_text>And now we just compile and fit the model. As a convenience, all 🤗 Transformers modelscome with a default loss which matches their output head, although you're of course freeto use your own. Because the built-in loss is computed internally during the forwardpass, when using it you may find that some Keras metrics misbehave or give unexpectedoutputs. This is an area of very active development in 🤗 Transformers, though, sohopefully we'll have a good solution to that issue soon!For now, though, let's use the built-in loss without any metrics. To get the built-inloss, simply leave out the `loss` argument to `compile`.<jupyter_code># Optionally uncomment the next line for float16 training
keras.mixed_precision.set_global_policy("mixed_float16")
model.compile(optimizer=optimizer)<jupyter_output><empty_output><jupyter_text>And now we can train our model. Note that we're not passing separate labels - the labelsare keys in the input dict, to make them visible to the model during the forward pass soit can compute the built-in loss.<jupyter_code>model.fit(train_set, validation_data=validation_set, epochs=1)<jupyter_output><empty_output><jupyter_text>And we're done! Let's give it a try, using some text from the keras.io frontpage:<jupyter_code>context = """Keras is an API designed for human beings, not machines. Keras follows best
practices for reducing cognitive load: it offers consistent & simple APIs, it minimizes
the number of user actions required for common use cases, and it provides clear &
actionable error messages. It also has extensive documentation and developer guides. """
question = "What is Keras?"
inputs = tokenizer([context], [question], return_tensors="np")
outputs = model(inputs)
start_position = tf.argmax(outputs.start_logits, axis=1)
end_position = tf.argmax(outputs.end_logits, axis=1)
print(int(start_position), int(end_position[0]))<jupyter_output><empty_output><jupyter_text>Looks like our model thinks the answer is the span from tokens 1 to 12 (inclusive). Noprizes for guessing which tokens those are!<jupyter_code>answer = inputs["input_ids"][0, int(start_position) : int(end_position) + 1]
print(answer)<jupyter_output><empty_output><jupyter_text>And now we can use the `tokenizer.decode()` method to turn those token IDs back into text:<jupyter_code>print(tokenizer.decode(answer))<jupyter_output><empty_output> | keras-io/examples/nlp/ipynb/question_answering.ipynb/0 | {
"file_path": "keras-io/examples/nlp/ipynb/question_answering.ipynb",
"repo_id": "keras-io",
"token_count": 5535
} | 91 |
# Data Parallel Training with KerasNLP and tf.distribute
**Author:** Anshuman Mishra<br>
**Date created:** 2023/07/07<br>
**Last modified:** 2023/07/07<br>
**Description:** Data Parallel training with KerasNLP and tf.distribute.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/nlp/ipynb/data_parallel_training_with_keras_nlp.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/nlp/data_parallel_training_with_keras_nlp.py)
---
## Introduction
Distributed training is a technique used to train deep learning models on multiple devices
or machines simultaneously. It helps to reduce training time and allows for training larger
models with more data. KerasNLP is a library that provides tools and utilities for natural
language processing tasks, including distributed training.
In this tutorial, we will use KerasNLP to train a BERT-based masked language model (MLM)
on the wikitext-2 dataset (a 2 million word dataset of wikipedia articles). The MLM task
involves predicting the masked words in a sentence, which helps the model learn contextual
representations of words.
This guide focuses on data parallelism, in particular synchronous data parallelism, where
each accelerator (a GPU or TPU) holds a complete replica of the model, and sees a
different partial batch of the input data. Partial gradients are computed on each device,
aggregated, and used to compute a global gradient update.
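To make that concrete, here is a small, self-contained sketch of the kind of per-replica
step that `tf.distribute` performs for you under the hood when you call `fit()`. The model,
optimizer, and data below are toy placeholders, used purely for illustration:

```python
import tensorflow as tf

sketch_strategy = tf.distribute.MirroredStrategy()

with sketch_strategy.scope():
    toy_model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    toy_optimizer = tf.keras.optimizers.SGD()
    toy_loss_fn = tf.keras.losses.MeanSquaredError(reduction="none")

GLOBAL_BATCH_SIZE = 8
toy_ds = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([32, 4]), tf.random.normal([32, 1]))
).batch(GLOBAL_BATCH_SIZE)
dist_ds = sketch_strategy.experimental_distribute_dataset(toy_ds)


@tf.function
def distributed_train_step(dist_inputs):
    def step_fn(inputs):
        x, y = inputs
        with tf.GradientTape() as tape:
            # Per-replica loss, scaled by the *global* batch size.
            loss = tf.nn.compute_average_loss(
                toy_loss_fn(y, toy_model(x, training=True)),
                global_batch_size=GLOBAL_BATCH_SIZE,
            )
        grads = tape.gradient(loss, toy_model.trainable_variables)
        toy_optimizer.apply_gradients(zip(grads, toy_model.trainable_variables))
        return loss

    # Each replica computes gradients on its shard of the batch; the aggregated
    # (all-reduced) update is applied identically on every replica.
    per_replica_losses = sketch_strategy.run(step_fn, args=(dist_inputs,))
    return sketch_strategy.reduce(
        tf.distribute.ReduceOp.SUM, per_replica_losses, axis=None
    )
```

You will not need to write any of this yourself; the rest of this guide uses `fit()` and
lets the strategy handle these details.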
Specifically, this guide teaches you how to use the `tf.distribute` API to train Keras
models on multiple GPUs, with minimal changes to your code, in the following two setups:
- On multiple GPUs (typically 2 to 8) installed on a single machine (single host,
multi-device training). This is the most common setup for researchers and small-scale
industry workflows.
- On a cluster of many machines, each hosting one or multiple GPUs (multi-worker
distributed training). This is a good setup for large-scale industry workflows, e.g.
training high-resolution text summarization models on billion word datasets on 20-100 GPUs.
```python
!pip install -q --upgrade keras-nlp
!pip install -q --upgrade keras # Upgrade to Keras 3.
```
---
## Imports
```python
import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import tensorflow as tf
import keras
import keras_nlp
```
Before we start any training, let's configure our single GPU to show up as two logical
devices.
When you are training with two or more physical GPUs, this is totally unnecessary. This
is just a trick to show real distributed training on the default colab GPU runtime,
which has only one GPU available.
```python
!nvidia-smi --query-gpu=memory.total --format=csv,noheader
```
```python
physical_devices = tf.config.list_physical_devices("GPU")
tf.config.set_logical_device_configuration(
physical_devices[0],
[
tf.config.LogicalDeviceConfiguration(memory_limit=15360 // 2),
tf.config.LogicalDeviceConfiguration(memory_limit=15360 // 2),
],
)
logical_devices = tf.config.list_logical_devices("GPU")
logical_devices
EPOCHS = 3
```
<div class="k-default-codeblock">
```
24576 MiB
```
</div>
To do single-host, multi-device synchronous training with a Keras model, you would use
the `tf.distribute.MirroredStrategy` API. Here's how it works:
- Instantiate a `MirroredStrategy`, optionally configuring which specific devices you
want to use (by default the strategy will use all GPUs available).
- Use the strategy object to open a scope, and within this scope, create all the Keras
objects you need that contain variables. Typically, that means **creating & compiling the
model** inside the distribution scope.
- Train the model via `fit()` as usual.
```python
strategy = tf.distribute.MirroredStrategy()
print(f"Number of devices: {strategy.num_replicas_in_sync}")
```
<div class="k-default-codeblock">
```
INFO:tensorflow:Using MirroredStrategy with devices ('/job:localhost/replica:0/task:0/device:GPU:0', '/job:localhost/replica:0/task:0/device:GPU:1')
Number of devices: 2
```
</div>
Base batch size and learning rate
```python
base_batch_size = 32
base_learning_rate = 1e-4
```
Calculate scaled batch size and learning rate
```python
scaled_batch_size = base_batch_size * strategy.num_replicas_in_sync
scaled_learning_rate = base_learning_rate * strategy.num_replicas_in_sync
```
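For example, with the two logical devices configured above (`num_replicas_in_sync == 2`),
this gives a global batch size of 32 × 2 = 64 and a learning rate of 1e-4 × 2 = 2e-4.
Linearly scaling the learning rate with the number of replicas is a common heuristic, not
a guarantee, so it may need tuning for your own setup.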
Now, we need to download and preprocess the wikitext-2 dataset. This dataset will be
used for pretraining the BERT model. We will filter out short lines to ensure that the
data has enough context for training.
```python
keras.utils.get_file(
origin="https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-2-v1.zip",
extract=True,
)
wiki_dir = os.path.expanduser("~/.keras/datasets/wikitext-2/")
# Load wikitext-2 and filter out short lines.
wiki_train_ds = (
tf.data.TextLineDataset(
wiki_dir + "wiki.train.tokens",
)
.filter(lambda x: tf.strings.length(x) > 100)
.shuffle(buffer_size=500)
.batch(scaled_batch_size)
.cache()
.prefetch(tf.data.AUTOTUNE)
)
wiki_val_ds = (
tf.data.TextLineDataset(wiki_dir + "wiki.valid.tokens")
.filter(lambda x: tf.strings.length(x) > 100)
.shuffle(buffer_size=500)
.batch(scaled_batch_size)
.cache()
.prefetch(tf.data.AUTOTUNE)
)
wiki_test_ds = (
tf.data.TextLineDataset(wiki_dir + "wiki.test.tokens")
.filter(lambda x: tf.strings.length(x) > 100)
.shuffle(buffer_size=500)
.batch(scaled_batch_size)
.cache()
.prefetch(tf.data.AUTOTUNE)
)
```
In the above code, we download the wikitext-2 dataset and extract it. Then, we define
three datasets: wiki_train_ds, wiki_val_ds, and wiki_test_ds. These datasets are
filtered to remove short lines and are batched for efficient training.
It's a common practice to use a decayed learning rate in NLP training/tuning. We'll
use `PolynomialDecay` schedule here.
```python
total_training_steps = sum(1 for _ in wiki_train_ds.as_numpy_iterator()) * EPOCHS
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
initial_learning_rate=scaled_learning_rate,
decay_steps=total_training_steps,
end_learning_rate=0.0,
)
class PrintLR(tf.keras.callbacks.Callback):
def on_epoch_end(self, epoch, logs=None):
print(
f"\nLearning rate for epoch {epoch + 1} is {model_dist.optimizer.learning_rate.numpy()}"
)
```
Let's also make a callback to TensorBoard; this will enable visualization of different
metrics while we train the model in later part of this tutorial. We put all the callbacks
together as follows:
```python
callbacks = [
tf.keras.callbacks.TensorBoard(log_dir="./logs"),
PrintLR(),
]
print(tf.config.list_physical_devices("GPU"))
```
<div class="k-default-codeblock">
```
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
```
</div>
With the datasets prepared, we now initialize and compile our model and optimizer within
the `strategy.scope()`:
```python
with strategy.scope():
# Everything that creates variables should be under the strategy scope.
# In general this is only model construction & `compile()`.
model_dist = keras_nlp.models.BertMaskedLM.from_preset("bert_tiny_en_uncased")
    # This line just sets the pooled_dense layer as non-trainable; we do this to avoid
# warnings of this layer being unused
model_dist.get_layer("bert_backbone").get_layer("pooled_dense").trainable = False
model_dist.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.AdamW(learning_rate=scaled_learning_rate),
weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
jit_compile=False,
)
model_dist.fit(
wiki_train_ds, validation_data=wiki_val_ds, epochs=EPOCHS, callbacks=callbacks
)
```
<div class="k-default-codeblock">
```
Epoch 1/3
Learning rate for epoch 1 is 0.00019999999494757503
239/239 ━━━━━━━━━━━━━━━━━━━━ 43s 136ms/step - loss: 3.7009 - sparse_categorical_accuracy: 0.1499 - val_loss: 1.1509 - val_sparse_categorical_accuracy: 0.3485
Epoch 2/3
239/239 ━━━━━━━━━━━━━━━━━━━━ 0s 122ms/step - loss: 2.6094 - sparse_categorical_accuracy: 0.5284
Learning rate for epoch 2 is 0.00019999999494757503
239/239 ━━━━━━━━━━━━━━━━━━━━ 32s 133ms/step - loss: 2.6038 - sparse_categorical_accuracy: 0.5274 - val_loss: 0.9812 - val_sparse_categorical_accuracy: 0.4006
Epoch 3/3
239/239 ━━━━━━━━━━━━━━━━━━━━ 0s 123ms/step - loss: 2.3564 - sparse_categorical_accuracy: 0.6053
Learning rate for epoch 3 is 0.00019999999494757503
239/239 ━━━━━━━━━━━━━━━━━━━━ 32s 134ms/step - loss: 2.3514 - sparse_categorical_accuracy: 0.6040 - val_loss: 0.9213 - val_sparse_categorical_accuracy: 0.4230
```
</div>
After fitting our model under the scope, we evaluate it normally!
```python
model_dist.evaluate(wiki_test_ds)
```
<div class="k-default-codeblock">
```
29/29 ━━━━━━━━━━━━━━━━━━━━ 3s 60ms/step - loss: 1.9197 - sparse_categorical_accuracy: 0.8527
[0.9470901489257812, 0.4373602867126465]
```
</div>
For distributed training across multiple machines (as opposed to training that only leverages
multiple devices on a single machine), there are two distribution strategies you
could use: `MultiWorkerMirroredStrategy` and `ParameterServerStrategy`:
- `tf.distribute.MultiWorkerMirroredStrategy` implements a synchronous CPU/GPU
multi-worker solution to work with Keras-style model building and training loop,
using synchronous reduction of gradients across the replicas (a minimal sketch follows after this list).
- `tf.distribute.experimental.ParameterServerStrategy` implements an asynchronous CPU/GPU
multi-worker solution, where the parameters are stored on parameter servers, and
workers update the gradients to parameter servers asynchronously.
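As a minimal, hedged sketch of the first option: the host names, ports, and task index
below are placeholders, every worker runs the same program with only its `TF_CONFIG` task
index differing, and in a real job the strategy should be created at the very start of the
program. The model, datasets, and learning rate reuse the objects defined earlier in this
guide:

```python
import json
import os

import tensorflow as tf

# Hypothetical two-worker cluster; replace hosts and ports with your own.
os.environ["TF_CONFIG"] = json.dumps(
    {
        "cluster": {"worker": ["host1:12345", "host2:12345"]},
        "task": {"type": "worker", "index": 0},  # set index=1 on the second worker
    }
)

multi_worker_strategy = tf.distribute.MultiWorkerMirroredStrategy()

with multi_worker_strategy.scope():
    # Build and compile the model exactly as in the single-host example above.
    multi_worker_model = keras_nlp.models.BertMaskedLM.from_preset("bert_tiny_en_uncased")
    multi_worker_model.compile(
        loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        optimizer=tf.keras.optimizers.AdamW(learning_rate=scaled_learning_rate),
        weighted_metrics=[keras.metrics.SparseCategoricalAccuracy()],
    )

multi_worker_model.fit(wiki_train_ds, validation_data=wiki_val_ds, epochs=EPOCHS)
```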
### Further reading
1. [TensorFlow distributed training guide](https://www.tensorflow.org/guide/distributed_training)
2. [Tutorial on multi-worker training with Keras](https://www.tensorflow.org/tutorials/distribute/multi_worker_with_keras)
3. [MirroredStrategy docs](https://www.tensorflow.org/api_docs/python/tf/distribute/MirroredStrategy)
4. [MultiWorkerMirroredStrategy docs](https://www.tensorflow.org/api_docs/python/tf/distribute/experimental/MultiWorkerMirroredStrategy)
5. [Distributed training in tf.keras with Weights & Biases](https://towardsdatascience.com/distributed-training-in-tf-keras-with-w-b-ccf021f9322e)
| keras-io/examples/nlp/md/data_parallel_training_with_keras_nlp.md/0 | {
"file_path": "keras-io/examples/nlp/md/data_parallel_training_with_keras_nlp.md",
"repo_id": "keras-io",
"token_count": 3631
} | 92 |
# Semantic Similarity with KerasNLP
**Author:** [Anshuman Mishra](https://github.com/shivance/)<br>
**Date created:** 2023/02/25<br>
**Last modified:** 2023/02/25<br>
**Description:** Use pretrained models from KerasNLP for the Semantic Similarity Task.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/nlp/ipynb/semantic_similarity_with_keras_nlp.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/nlp/semantic_similarity_with_keras_nlp.py)
---
## Introduction
Semantic similarity refers to the task of determining the degree of similarity between two
sentences in terms of their meaning. We already saw in [this](https://keras.io/examples/nlp/semantic_similarity_with_bert/)
example how to use SNLI (Stanford Natural Language Inference) corpus to predict sentence
semantic similarity with the HuggingFace Transformers library. In this tutorial we will
learn how to use [KerasNLP](https://keras.io/keras_nlp/), an extension of the core Keras API,
for the same task. Furthermore, we will discover how KerasNLP effectively reduces boilerplate
code and simplifies the process of building and utilizing models. For more information on KerasNLP,
please refer to [KerasNLP's official documentation](https://keras.io/keras_nlp/).
This guide is broken down into the following parts:
1. *Setup*, task definition, and establishing a baseline.
2. *Establishing baseline* with BERT.
3. *Saving and Reloading* the model.
4. *Performing inference* with the model.
5. *Improving accuracy* with RoBERTa.
---
## Setup
The following guide uses [Keras Core](https://keras.io/keras_core/) to work in
any of `tensorflow`, `jax` or `torch`. Support for Keras Core is baked into
KerasNLP: simply change the `KERAS_BACKEND` environment variable to select the
backend you would like to use. We select the `jax` backend below, which typically
gives us a particularly fast train step.
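The backend is chosen by setting an environment variable before Keras is imported. Here is a minimal sketch mirroring the `jax` choice mentioned above (it must run before the imports below):
```python
import os

# Must be set before `keras` / `keras_nlp` are imported.
os.environ["KERAS_BACKEND"] = "jax"  # or "tensorflow", "torch"
```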
```python
!pip install -q --upgrade keras-nlp
!pip install -q --upgrade keras # Upgrade to Keras 3.
```
```python
import numpy as np
import tensorflow as tf
import keras
import keras_nlp
import tensorflow_datasets as tfds
```
<div class="k-default-codeblock">
```
```
</div>
To load the SNLI dataset, we use the tensorflow-datasets library, which
contains over 550,000 samples in total. However, to ensure that this example runs
quickly, we use only 20% of the training samples.
---
## Overview of SNLI Dataset
Every sample in the dataset contains three components: `hypothesis`, `premise`,
and `label`. The premise represents the original caption provided to the author of the pair,
while the hypothesis refers to the hypothesis caption created by the author of
the pair. The label is assigned by annotators to indicate the similarity between
the two sentences.
The dataset contains three possible similarity label values: Contradiction, Entailment,
and Neutral. Contradiction represents completely dissimilar sentences, while Entailment
denotes similar meaning sentences. Lastly, Neutral refers to sentences where no clear
similarity or dissimilarity can be established between them.
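If you want to check how these names map to the integer labels used later on, the TFDS metadata can be inspected directly. This is only a sketch and is not required for the rest of the tutorial:
```python
# Load the dataset info alongside a split and print the human-readable label names.
_, snli_info = tfds.load("snli", split="test", with_info=True)
print(snli_info.features["label"].names)
```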
```python
snli_train = tfds.load("snli", split="train[:20%]")
snli_val = tfds.load("snli", split="validation")
snli_test = tfds.load("snli", split="test")
# Here's an example of how our training samples look like, where we randomly select
# four samples:
sample = snli_test.batch(4).take(1).get_single_element()
sample
```
<div class="k-default-codeblock">
```
{'hypothesis': <tf.Tensor: shape=(4,), dtype=string, numpy=
array([b'A girl is entertaining on stage',
b'A group of people posing in front of a body of water.',
b"The group of people aren't inide of the building.",
b'The people are taking a carriage ride.'], dtype=object)>,
'label': <tf.Tensor: shape=(4,), dtype=int64, numpy=array([0, 0, 0, 0])>,
'premise': <tf.Tensor: shape=(4,), dtype=string, numpy=
array([b'A girl in a blue leotard hula hoops on a stage with balloon shapes in the background.',
b'A group of people taking pictures on a walkway in front of a large body of water.',
b'Many people standing outside of a place talking to each other in front of a building that has a sign that says "HI-POINTE."',
b'Three people are riding a carriage pulled by four horses.'],
dtype=object)>}
```
</div>
### Preprocessing
In our dataset, we have identified that some samples have missing or incorrectly labeled
data, which is denoted by a value of -1. To ensure the accuracy and reliability of our model,
we simply filter out these samples from our dataset.
```python
def filter_labels(sample):
return sample["label"] >= 0
```
Here's a utility function that splits the example into an `(x, y)` tuple that is suitable
for `model.fit()`. By default, `keras_nlp.models.BertClassifier` will tokenize and pack
together raw strings using a `"[SEP]"` token during training. Therefore, this label
splitting is all the data preparation that we need to perform.
```python
def split_labels(sample):
x = (sample["hypothesis"], sample["premise"])
y = sample["label"]
return x, y
train_ds = (
snli_train.filter(filter_labels)
.map(split_labels, num_parallel_calls=tf.data.AUTOTUNE)
.batch(16)
)
val_ds = (
snli_val.filter(filter_labels)
.map(split_labels, num_parallel_calls=tf.data.AUTOTUNE)
.batch(16)
)
test_ds = (
snli_test.filter(filter_labels)
.map(split_labels, num_parallel_calls=tf.data.AUTOTUNE)
.batch(16)
)
```
---
## Establishing baseline with BERT.
We use the BERT model from KerasNLP to establish a baseline for our semantic similarity
task. The `keras_nlp.models.BertClassifier` class attaches a classification head to the BERT
Backbone, mapping the backbone outputs to a logit output suitable for a classification task.
This significantly reduces the need for custom code.
KerasNLP models have built-in tokenization capabilities that handle tokenization by default
based on the selected model. However, users can also use custom preprocessing techniques
as per their specific needs. If we pass a tuple as input, the model will tokenize all the
strings and concatenate them with a `"[SEP]"` separator.
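As a sketch of what a custom setup could look like (not needed for this tutorial; the `sequence_length` value is an illustrative assumption), a preprocessor can be built explicitly and handed to the classifier:
```python
# Build an explicit preprocessor and pass it to the classifier.
preprocessor = keras_nlp.models.BertPreprocessor.from_preset(
    "bert_tiny_en_uncased", sequence_length=128
)
custom_classifier = keras_nlp.models.BertClassifier.from_preset(
    "bert_tiny_en_uncased", preprocessor=preprocessor, num_classes=3
)
```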
We use this model with pretrained weights; `from_preset()` also accepts a custom
preprocessor if we want to override the default one (as sketched above). For the SNLI dataset, we set `num_classes` to 3.
```python
bert_classifier = keras_nlp.models.BertClassifier.from_preset(
"bert_tiny_en_uncased", num_classes=3
)
```
Please note that the BERT Tiny model has only 4,386,307 trainable parameters.
KerasNLP task models come with compilation defaults. We can now train the model we just
instantiated by calling the `fit()` method.
```python
bert_classifier.fit(train_ds, validation_data=val_ds, epochs=1)
```
<div class="k-default-codeblock">
```
6867/6867 ━━━━━━━━━━━━━━━━━━━━ 61s 8ms/step - loss: 0.8732 - sparse_categorical_accuracy: 0.5864 - val_loss: 0.5900 - val_sparse_categorical_accuracy: 0.7602
<keras.src.callbacks.history.History at 0x7f4660171fc0>
```
</div>
Our BERT classifier achieved an accuracy of around 76% on the validation split. Now,
let's evaluate its performance on the test split.
### Evaluate the performance of the trained model on test data.
```python
bert_classifier.evaluate(test_ds)
```
<div class="k-default-codeblock">
```
614/614 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - loss: 0.5815 - sparse_categorical_accuracy: 0.7628
[0.5895748734474182, 0.7618078589439392]
```
</div>
Our baseline BERT model achieved a similar accuracy of around 76% on the test split.
Now, let's try to improve its performance by recompiling the model with a slightly
higher learning rate.
```python
bert_classifier = keras_nlp.models.BertClassifier.from_preset(
"bert_tiny_en_uncased", num_classes=3
)
bert_classifier.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.Adam(5e-5),
metrics=["accuracy"],
)
bert_classifier.fit(train_ds, validation_data=val_ds, epochs=1)
bert_classifier.evaluate(test_ds)
```
<div class="k-default-codeblock">
```
6867/6867 ━━━━━━━━━━━━━━━━━━━━ 59s 8ms/step - accuracy: 0.6007 - loss: 0.8636 - val_accuracy: 0.7648 - val_loss: 0.5800
614/614 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7700 - loss: 0.5692
[0.578984260559082, 0.7686278820037842]
```
</div>
Just tweaking the learning rate alone was not enough to boost performance, which
stayed right around 76%. Let's try again, but this time with
`keras.optimizers.AdamW`, and a learning rate schedule.
```python
class TriangularSchedule(keras.optimizers.schedules.LearningRateSchedule):
"""Linear ramp up for `warmup` steps, then linear decay to zero at `total` steps."""
def __init__(self, rate, warmup, total):
self.rate = rate
self.warmup = warmup
self.total = total
def get_config(self):
config = {"rate": self.rate, "warmup": self.warmup, "total": self.total}
return config
def __call__(self, step):
step = keras.ops.cast(step, dtype="float32")
rate = keras.ops.cast(self.rate, dtype="float32")
warmup = keras.ops.cast(self.warmup, dtype="float32")
total = keras.ops.cast(self.total, dtype="float32")
warmup_rate = rate * step / self.warmup
cooldown_rate = rate * (total - step) / (total - warmup)
triangular_rate = keras.ops.minimum(warmup_rate, cooldown_rate)
return keras.ops.maximum(triangular_rate, 0.0)
bert_classifier = keras_nlp.models.BertClassifier.from_preset(
"bert_tiny_en_uncased", num_classes=3
)
# Get the total count of training batches.
# This requires walking the dataset to filter all -1 labels.
epochs = 3
total_steps = sum(1 for _ in train_ds.as_numpy_iterator()) * epochs
warmup_steps = int(total_steps * 0.2)
bert_classifier.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.AdamW(
TriangularSchedule(1e-4, warmup_steps, total_steps)
),
metrics=["accuracy"],
)
bert_classifier.fit(train_ds, validation_data=val_ds, epochs=epochs)
```
<div class="k-default-codeblock">
```
Epoch 1/3
6867/6867 ━━━━━━━━━━━━━━━━━━━━ 59s 8ms/step - accuracy: 0.5457 - loss: 0.9317 - val_accuracy: 0.7633 - val_loss: 0.5825
Epoch 2/3
6867/6867 ━━━━━━━━━━━━━━━━━━━━ 55s 8ms/step - accuracy: 0.7291 - loss: 0.6515 - val_accuracy: 0.7809 - val_loss: 0.5399
Epoch 3/3
6867/6867 ━━━━━━━━━━━━━━━━━━━━ 55s 8ms/step - accuracy: 0.7708 - loss: 0.5695 - val_accuracy: 0.7918 - val_loss: 0.5214
<keras.src.callbacks.history.History at 0x7f45645b3370>
```
</div>
Success! With the learning rate scheduler and the `AdamW` optimizer, our validation
accuracy improved to around 79%.
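If you are curious what this schedule looks like over the whole run, a quick plot can be made. This is an optional sketch and assumes `matplotlib` is available in the environment:
```python
import matplotlib.pyplot as plt

schedule = TriangularSchedule(1e-4, warmup_steps, total_steps)
steps = range(0, total_steps, 100)
plt.plot(steps, [float(schedule(step)) for step in steps])
plt.xlabel("Training step")
plt.ylabel("Learning rate")
plt.show()
```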
Now, let's evaluate our final model on the test set and see how it performs.
```python
bert_classifier.evaluate(test_ds)
```
<div class="k-default-codeblock">
```
614/614 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - accuracy: 0.7956 - loss: 0.5128
[0.5245093703269958, 0.7890879511833191]
```
</div>
Our Tiny BERT model achieved an accuracy of approximately 79% on the test set
with the use of a learning rate scheduler. This is a significant improvement over
our previous results. Fine-tuning a pretrained BERT
model can be a powerful tool in natural language processing tasks, and even a
small model like Tiny BERT can achieve impressive results.
Let's save our model for now
and move on to learning how to perform inference with it.
---
## Save and Reload the model
```python
bert_classifier.save("bert_classifier.keras")
restored_model = keras.models.load_model("bert_classifier.keras")
restored_model.evaluate(test_ds)
```
<div class="k-default-codeblock">
```
614/614 ━━━━━━━━━━━━━━━━━━━━ 2s 3ms/step - loss: 0.5128 - sparse_categorical_accuracy: 0.7956
[0.5245093703269958, 0.7890879511833191]
```
</div>
---
## Performing inference with the model.
Let's see how to perform inference with KerasNLP models
```python
# Convert to Hypothesis-Premise pair, for forward pass through model
sample = (sample["hypothesis"], sample["premise"])
sample
```
<div class="k-default-codeblock">
```
(<tf.Tensor: shape=(4,), dtype=string, numpy=
array([b'A girl is entertaining on stage',
b'A group of people posing in front of a body of water.',
b"The group of people aren't inide of the building.",
b'The people are taking a carriage ride.'], dtype=object)>,
<tf.Tensor: shape=(4,), dtype=string, numpy=
array([b'A girl in a blue leotard hula hoops on a stage with balloon shapes in the background.',
b'A group of people taking pictures on a walkway in front of a large body of water.',
b'Many people standing outside of a place talking to each other in front of a building that has a sign that says "HI-POINTE."',
b'Three people are riding a carriage pulled by four horses.'],
dtype=object)>)
```
</div>
The default preprocessor in KerasNLP models handles input tokenization automatically,
so we don't need to perform tokenization explicitly.
```python
predictions = bert_classifier.predict(sample)
def softmax(x):
    # Normalize along the class axis so that each row sums to 1.
    return np.exp(x) / np.exp(x).sum(axis=-1, keepdims=True)
# Convert the raw logits into per-class probabilities
predictions = softmax(predictions)
```
<div class="k-default-codeblock">
```
1/1 ━━━━━━━━━━━━━━━━━━━━ 1s 711ms/step
```
</div>
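To turn these probabilities into hard class predictions, an argmax over the class axis is all we need. A small sketch:
```python
# Pick the most likely class index for each sentence pair.
predicted_classes = np.argmax(predictions, axis=1)
print(predicted_classes)
```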
---
## Improving accuracy with RoBERTa
Now that we have established a baseline, we can attempt to improve our results
by experimenting with different models. Thanks to KerasNLP, fine-tuning a RoBERTa
checkpoint on the same dataset is easy with just a few lines of code.
```python
# Initializing a RoBERTa classifier from a preset
roberta_classifier = keras_nlp.models.RobertaClassifier.from_preset(
"roberta_base_en", num_classes=3
)
roberta_classifier.fit(train_ds, validation_data=val_ds, epochs=1)
roberta_classifier.evaluate(test_ds)
```
<div class="k-default-codeblock">
```
6867/6867 ━━━━━━━━━━━━━━━━━━━━ 2049s 297ms/step - loss: 0.5509 - sparse_categorical_accuracy: 0.7740 - val_loss: 0.3292 - val_sparse_categorical_accuracy: 0.8789
614/614 ━━━━━━━━━━━━━━━━━━━━ 56s 88ms/step - loss: 0.3307 - sparse_categorical_accuracy: 0.8784
[0.33771008253097534, 0.874796450138092]
```
</div>
The RoBERTa base model has significantly more trainable parameters than the BERT
Tiny model, with almost 30 times as many at 124,645,635 parameters. As a result, it took
approximately 1.5 hours to train on a P100 GPU. However, the performance
improvement was substantial, with accuracy increasing to 88% on both the validation
and test splits. With RoBERTa, we were able to fit a maximum batch size of 16 on
our P100 GPU.
Despite using a different model, the steps to perform inference with RoBERTa are
the same as with BERT!
```python
predictions = roberta_classifier.predict(sample)
print(tf.math.argmax(predictions, axis=1).numpy())
```
<div class="k-default-codeblock">
```
1/1 ━━━━━━━━━━━━━━━━━━━━ 4s 4s/step
[0 0 0 0]
```
</div>
We hope this tutorial has been helpful in demonstrating the ease and effectiveness
of using KerasNLP and BERT for semantic similarity tasks.
Throughout this tutorial, we demonstrated how to use a pretrained BERT model to
establish a baseline and improve performance by training a larger RoBERTa model
using just a few lines of code.
The KerasNLP toolbox provides a range of modular building blocks for preprocessing
text, including pretrained state-of-the-art models and low-level Transformer Encoder
layers. We believe that this makes experimenting with natural language solutions
more accessible and efficient.
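For instance, a single low-level encoder block can be instantiated on its own and dropped into any functional model. The layer name and arguments below are a sketch based on the `keras_nlp.layers` namespace, not code from this tutorial:
```python
# A standalone Transformer encoder block applied to a random (batch, seq, features) tensor.
encoder_block = keras_nlp.layers.TransformerEncoder(intermediate_dim=64, num_heads=2)
outputs = encoder_block(keras.random.uniform((2, 10, 32)))
print(outputs.shape)
```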
| keras-io/examples/nlp/md/semantic_similarity_with_keras_nlp.md/0 | {
"file_path": "keras-io/examples/nlp/md/semantic_similarity_with_keras_nlp.md",
"repo_id": "keras-io",
"token_count": 5462
} | 93 |
<jupyter_start><jupyter_text>Actor Critic Method**Author:** [Apoorv Nandan](https://twitter.com/NandanApoorv)**Date created:** 2020/05/13**Last modified:** 2020/05/13**Description:** Implement the Actor Critic method in the CartPole environment. Introduction This script shows an implementation of the Actor Critic method on the CartPole-V0 environment. Actor Critic Method As an agent takes actions and moves through an environment, it learns to map the observed state of the environment to two possible outputs: 1. Recommended action: A probability value for each action in the action space. The part of the agent responsible for this output is called the **actor**. 2. Estimated rewards in the future: Sum of all rewards it expects to receive in the future. The part of the agent responsible for this output is the **critic**. The actor and critic learn to perform their tasks, such that the recommended actions from the actor maximize the rewards. CartPole-V0 A pole is attached to a cart placed on a frictionless track. The agent has to apply force to move the cart. It is rewarded for every time step the pole remains upright. The agent, therefore, must learn to keep the pole from falling over. References - [CartPole](http://www.derongliu.org/adp/adp-cdrom/Barto1983.pdf) - [Actor Critic Method](https://hal.inria.fr/hal-00840470/document) Setup<jupyter_code>import gym
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
# Configuration parameters for the whole setup
seed = 42
gamma = 0.99 # Discount factor for past rewards
max_steps_per_episode = 10000
env = gym.make("CartPole-v0") # Create the environment
env.seed(seed)
eps = np.finfo(np.float32).eps.item() # Smallest number such that 1.0 + eps != 1.0<jupyter_output><empty_output><jupyter_text>Implement Actor Critic network This network learns two functions: 1. Actor: This takes as input the state of our environment and returns a probability value for each action in its action space. 2. Critic: This takes as input the state of our environment and returns an estimate of total rewards in the future. In our implementation, they share the initial layer.<jupyter_code>num_inputs = 4
num_actions = 2
num_hidden = 128
inputs = layers.Input(shape=(num_inputs,))
common = layers.Dense(num_hidden, activation="relu")(inputs)
action = layers.Dense(num_actions, activation="softmax")(common)
critic = layers.Dense(1)(common)
model = keras.Model(inputs=inputs, outputs=[action, critic])<jupyter_output><empty_output><jupyter_text>Train<jupyter_code>optimizer = keras.optimizers.Adam(learning_rate=0.01)
huber_loss = keras.losses.Huber()
action_probs_history = []
critic_value_history = []
rewards_history = []
running_reward = 0
episode_count = 0
while True: # Run until solved
state = env.reset()
episode_reward = 0
with tf.GradientTape() as tape:
for timestep in range(1, max_steps_per_episode):
# env.render(); Adding this line would show the attempts
# of the agent in a pop up window.
state = tf.convert_to_tensor(state)
state = tf.expand_dims(state, 0)
# Predict action probabilities and estimated future rewards
# from environment state
action_probs, critic_value = model(state)
critic_value_history.append(critic_value[0, 0])
# Sample action from action probability distribution
action = np.random.choice(num_actions, p=np.squeeze(action_probs))
action_probs_history.append(tf.math.log(action_probs[0, action]))
# Apply the sampled action in our environment
state, reward, done, _ = env.step(action)
rewards_history.append(reward)
episode_reward += reward
if done:
break
# Update running reward to check condition for solving
running_reward = 0.05 * episode_reward + (1 - 0.05) * running_reward
# Calculate expected value from rewards
# - At each timestep what was the total reward received after that timestep
# - Rewards in the past are discounted by multiplying them with gamma
# - These are the labels for our critic
returns = []
discounted_sum = 0
for r in rewards_history[::-1]:
discounted_sum = r + gamma * discounted_sum
returns.insert(0, discounted_sum)
# Normalize
returns = np.array(returns)
returns = (returns - np.mean(returns)) / (np.std(returns) + eps)
returns = returns.tolist()
# Calculating loss values to update our network
history = zip(action_probs_history, critic_value_history, returns)
actor_losses = []
critic_losses = []
for log_prob, value, ret in history:
# At this point in history, the critic estimated that we would get a
# total reward = `value` in the future. We took an action with log probability
            # of `log_prob` and ended up receiving a total reward = `ret`.
# The actor must be updated so that it predicts an action that leads to
# high rewards (compared to critic's estimate) with high probability.
diff = ret - value
actor_losses.append(-log_prob * diff) # actor loss
# The critic must be updated so that it predicts a better estimate of
# the future rewards.
critic_losses.append(
huber_loss(tf.expand_dims(value, 0), tf.expand_dims(ret, 0))
)
# Backpropagation
loss_value = sum(actor_losses) + sum(critic_losses)
grads = tape.gradient(loss_value, model.trainable_variables)
optimizer.apply_gradients(zip(grads, model.trainable_variables))
# Clear the loss and reward history
action_probs_history.clear()
critic_value_history.clear()
rewards_history.clear()
# Log details
episode_count += 1
if episode_count % 10 == 0:
template = "running reward: {:.2f} at episode {}"
print(template.format(running_reward, episode_count))
if running_reward > 195: # Condition to consider the task solved
print("Solved at episode {}!".format(episode_count))
break<jupyter_output><empty_output> | keras-io/examples/rl/ipynb/actor_critic_cartpole.ipynb/0 | {
"file_path": "keras-io/examples/rl/ipynb/actor_critic_cartpole.ipynb",
"repo_id": "keras-io",
"token_count": 2253
} | 94 |
# Structured data learning with Wide, Deep, and Cross networks
**Author:** [Khalid Salama](https://www.linkedin.com/in/khalid-salama-24403144/)<br>
**Date created:** 2020/12/31<br>
**Last modified:** 2021/05/05<br>
**Description:** Using Wide & Deep and Deep & Cross networks for structured data classification.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/structured_data/ipynb/wide_deep_cross_networks.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/structured_data/wide_deep_cross_networks.py)
---
## Introduction
This example demonstrates how to do structured data classification using the two modeling
techniques:
1. [Wide & Deep](https://ai.googleblog.com/2016/06/wide-deep-learning-better-together-with.html) models
2. [Deep & Cross](https://arxiv.org/abs/1708.05123) models
Note that this example should be run with TensorFlow 2.5 or higher.
---
## The dataset
This example uses the [Covertype](https://archive.ics.uci.edu/ml/datasets/covertype) dataset from the UCI
Machine Learning Repository. The task is to predict forest cover type from cartographic variables.
The dataset includes 581,012 instances with 12 input features: 10 numerical features and 2
categorical features. Each instance is categorized into 1 of 7 classes.
---
## Setup
```python
import os
# Only the TensorFlow backend supports string inputs.
os.environ["KERAS_BACKEND"] = "tensorflow"
import math
import numpy as np
import pandas as pd
from tensorflow import data as tf_data
import keras
from keras import layers
```
---
## Prepare the data
First, let's load the dataset from the UCI Machine Learning Repository into a Pandas
DataFrame:
```python
data_url = (
"https://archive.ics.uci.edu/ml/machine-learning-databases/covtype/covtype.data.gz"
)
raw_data = pd.read_csv(data_url, header=None)
print(f"Dataset shape: {raw_data.shape}")
raw_data.head()
```
<div class="k-default-codeblock">
```
Dataset shape: (581012, 55)
```
</div>
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
<div class="k-default-codeblock">
```
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
```
</div>
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
<th>5</th>
<th>6</th>
<th>7</th>
<th>8</th>
<th>9</th>
<th>...</th>
<th>45</th>
<th>46</th>
<th>47</th>
<th>48</th>
<th>49</th>
<th>50</th>
<th>51</th>
<th>52</th>
<th>53</th>
<th>54</th>
</tr>
</thead>
<tbody>
<tr>
<th>0</th>
<td>2596</td>
<td>51</td>
<td>3</td>
<td>258</td>
<td>0</td>
<td>510</td>
<td>221</td>
<td>232</td>
<td>148</td>
<td>6279</td>
<td>...</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>5</td>
</tr>
<tr>
<th>1</th>
<td>2590</td>
<td>56</td>
<td>2</td>
<td>212</td>
<td>-6</td>
<td>390</td>
<td>220</td>
<td>235</td>
<td>151</td>
<td>6225</td>
<td>...</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>5</td>
</tr>
<tr>
<th>2</th>
<td>2804</td>
<td>139</td>
<td>9</td>
<td>268</td>
<td>65</td>
<td>3180</td>
<td>234</td>
<td>238</td>
<td>135</td>
<td>6121</td>
<td>...</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>2</td>
</tr>
<tr>
<th>3</th>
<td>2785</td>
<td>155</td>
<td>18</td>
<td>242</td>
<td>118</td>
<td>3090</td>
<td>238</td>
<td>238</td>
<td>122</td>
<td>6211</td>
<td>...</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>2</td>
</tr>
<tr>
<th>4</th>
<td>2595</td>
<td>45</td>
<td>2</td>
<td>153</td>
<td>-1</td>
<td>391</td>
<td>220</td>
<td>234</td>
<td>150</td>
<td>6172</td>
<td>...</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>5</td>
</tr>
</tbody>
</table>
<p>5 rows × 55 columns</p>
</div>
The two categorical features in the dataset are binary-encoded.
We will convert this dataset representation to the typical representation, where each
categorical feature is represented as a single integer value.
```python
soil_type_values = [f"soil_type_{idx+1}" for idx in range(40)]
wilderness_area_values = [f"area_type_{idx+1}" for idx in range(4)]
soil_type = raw_data.loc[:, 14:53].apply(
lambda x: soil_type_values[0::1][x.to_numpy().nonzero()[0][0]], axis=1
)
wilderness_area = raw_data.loc[:, 10:13].apply(
lambda x: wilderness_area_values[0::1][x.to_numpy().nonzero()[0][0]], axis=1
)
CSV_HEADER = [
"Elevation",
"Aspect",
"Slope",
"Horizontal_Distance_To_Hydrology",
"Vertical_Distance_To_Hydrology",
"Horizontal_Distance_To_Roadways",
"Hillshade_9am",
"Hillshade_Noon",
"Hillshade_3pm",
"Horizontal_Distance_To_Fire_Points",
"Wilderness_Area",
"Soil_Type",
"Cover_Type",
]
data = pd.concat(
[raw_data.loc[:, 0:9], wilderness_area, soil_type, raw_data.loc[:, 54]],
axis=1,
ignore_index=True,
)
data.columns = CSV_HEADER
# Convert the target label indices into a range from 0 to 6 (there are 7 labels in total).
data["Cover_Type"] = data["Cover_Type"] - 1
print(f"Dataset shape: {data.shape}")
data.head().T
```
<div class="k-default-codeblock">
```
Dataset shape: (581012, 13)
```
</div>
<div>
<style scoped>
.dataframe tbody tr th:only-of-type {
vertical-align: middle;
}
<div class="k-default-codeblock">
```
.dataframe tbody tr th {
vertical-align: top;
}
.dataframe thead th {
text-align: right;
}
```
</div>
</style>
<table border="1" class="dataframe">
<thead>
<tr style="text-align: right;">
<th></th>
<th>0</th>
<th>1</th>
<th>2</th>
<th>3</th>
<th>4</th>
</tr>
</thead>
<tbody>
<tr>
<th>Elevation</th>
<td>2596</td>
<td>2590</td>
<td>2804</td>
<td>2785</td>
<td>2595</td>
</tr>
<tr>
<th>Aspect</th>
<td>51</td>
<td>56</td>
<td>139</td>
<td>155</td>
<td>45</td>
</tr>
<tr>
<th>Slope</th>
<td>3</td>
<td>2</td>
<td>9</td>
<td>18</td>
<td>2</td>
</tr>
<tr>
<th>Horizontal_Distance_To_Hydrology</th>
<td>258</td>
<td>212</td>
<td>268</td>
<td>242</td>
<td>153</td>
</tr>
<tr>
<th>Vertical_Distance_To_Hydrology</th>
<td>0</td>
<td>-6</td>
<td>65</td>
<td>118</td>
<td>-1</td>
</tr>
<tr>
<th>Horizontal_Distance_To_Roadways</th>
<td>510</td>
<td>390</td>
<td>3180</td>
<td>3090</td>
<td>391</td>
</tr>
<tr>
<th>Hillshade_9am</th>
<td>221</td>
<td>220</td>
<td>234</td>
<td>238</td>
<td>220</td>
</tr>
<tr>
<th>Hillshade_Noon</th>
<td>232</td>
<td>235</td>
<td>238</td>
<td>238</td>
<td>234</td>
</tr>
<tr>
<th>Hillshade_3pm</th>
<td>148</td>
<td>151</td>
<td>135</td>
<td>122</td>
<td>150</td>
</tr>
<tr>
<th>Horizontal_Distance_To_Fire_Points</th>
<td>6279</td>
<td>6225</td>
<td>6121</td>
<td>6211</td>
<td>6172</td>
</tr>
<tr>
<th>Wilderness_Area</th>
<td>area_type_1</td>
<td>area_type_1</td>
<td>area_type_1</td>
<td>area_type_1</td>
<td>area_type_1</td>
</tr>
<tr>
<th>Soil_Type</th>
<td>soil_type_29</td>
<td>soil_type_29</td>
<td>soil_type_12</td>
<td>soil_type_30</td>
<td>soil_type_29</td>
</tr>
<tr>
<th>Cover_Type</th>
<td>4</td>
<td>4</td>
<td>1</td>
<td>1</td>
<td>4</td>
</tr>
</tbody>
</table>
</div>
The shape of the DataFrame shows there are 13 columns per sample
(12 for the features and 1 for the target label).
Let's split the data into training (85%) and test (15%) sets.
```python
train_splits = []
test_splits = []
for _, group_data in data.groupby("Cover_Type"):
random_selection = np.random.rand(len(group_data.index)) <= 0.85
train_splits.append(group_data[random_selection])
test_splits.append(group_data[~random_selection])
train_data = pd.concat(train_splits).sample(frac=1).reset_index(drop=True)
test_data = pd.concat(test_splits).sample(frac=1).reset_index(drop=True)
print(f"Train split size: {len(train_data.index)}")
print(f"Test split size: {len(test_data.index)}")
```
<div class="k-default-codeblock">
```
Train split size: 493323
Test split size: 87689
```
</div>
Next, store the training and test data in separate CSV files.
```python
train_data_file = "train_data.csv"
test_data_file = "test_data.csv"
train_data.to_csv(train_data_file, index=False)
test_data.to_csv(test_data_file, index=False)
```
---
## Define dataset metadata
Here, we define the metadata of the dataset that will be useful for reading and parsing
the data into input features, and encoding the input features with respect to their types.
```python
TARGET_FEATURE_NAME = "Cover_Type"
TARGET_FEATURE_LABELS = ["0", "1", "2", "3", "4", "5", "6"]
NUMERIC_FEATURE_NAMES = [
"Aspect",
"Elevation",
"Hillshade_3pm",
"Hillshade_9am",
"Hillshade_Noon",
"Horizontal_Distance_To_Fire_Points",
"Horizontal_Distance_To_Hydrology",
"Horizontal_Distance_To_Roadways",
"Slope",
"Vertical_Distance_To_Hydrology",
]
CATEGORICAL_FEATURES_WITH_VOCABULARY = {
"Soil_Type": list(data["Soil_Type"].unique()),
"Wilderness_Area": list(data["Wilderness_Area"].unique()),
}
CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURES_WITH_VOCABULARY.keys())
FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES
COLUMN_DEFAULTS = [
[0] if feature_name in NUMERIC_FEATURE_NAMES + [TARGET_FEATURE_NAME] else ["NA"]
for feature_name in CSV_HEADER
]
NUM_CLASSES = len(TARGET_FEATURE_LABELS)
```
---
## Experiment setup
Next, let's define an input function that reads and parses the file, then converts features
and labels into a [`tf.data.Dataset`](https://www.tensorflow.org/guide/datasets)
for training or evaluation.
```python
def get_dataset_from_csv(csv_file_path, batch_size, shuffle=False):
dataset = tf_data.experimental.make_csv_dataset(
csv_file_path,
batch_size=batch_size,
column_names=CSV_HEADER,
column_defaults=COLUMN_DEFAULTS,
label_name=TARGET_FEATURE_NAME,
num_epochs=1,
header=True,
shuffle=shuffle,
)
return dataset.cache()
```
Here we configure the parameters and implement the procedure for running a training and
evaluation experiment given a model.
```python
learning_rate = 0.001
dropout_rate = 0.1
batch_size = 265
num_epochs = 50
hidden_units = [32, 32]
def run_experiment(model):
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
loss=keras.losses.SparseCategoricalCrossentropy(),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
train_dataset = get_dataset_from_csv(train_data_file, batch_size, shuffle=True)
test_dataset = get_dataset_from_csv(test_data_file, batch_size)
print("Start training the model...")
history = model.fit(train_dataset, epochs=num_epochs)
print("Model training finished")
_, accuracy = model.evaluate(test_dataset, verbose=0)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
```
---
## Create model inputs
Now, define the inputs for the models as a dictionary, where the key is the feature name,
and the value is a `keras.layers.Input` tensor with the corresponding feature shape
and data type.
```python
def create_model_inputs():
inputs = {}
for feature_name in FEATURE_NAMES:
if feature_name in NUMERIC_FEATURE_NAMES:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(), dtype="float32"
)
else:
inputs[feature_name] = layers.Input(
name=feature_name, shape=(), dtype="string"
)
return inputs
```
---
## Encode features
We create two representations of our input features: sparse and dense:
1. In the **sparse** representation, the categorical features are encoded with one-hot
encoding using the `CategoryEncoding` layer. This representation can be useful for the
model to *memorize* particular feature values to make certain predictions.
2. In the **dense** representation, the categorical features are encoded with
low-dimensional embeddings using the `Embedding` layer. This representation helps
the model to *generalize* well to unseen feature combinations.
```python
def encode_inputs(inputs, use_embedding=False):
encoded_features = []
for feature_name in inputs:
if feature_name in CATEGORICAL_FEATURE_NAMES:
vocabulary = CATEGORICAL_FEATURES_WITH_VOCABULARY[feature_name]
            # Create a lookup to convert string values to integer indices.
# Since we are not using a mask token nor expecting any out of vocabulary
# (oov) token, we set mask_token to None and num_oov_indices to 0.
lookup = layers.StringLookup(
vocabulary=vocabulary,
mask_token=None,
num_oov_indices=0,
output_mode="int" if use_embedding else "binary",
)
if use_embedding:
# Convert the string input values into integer indices.
encoded_feature = lookup(inputs[feature_name])
embedding_dims = int(math.sqrt(len(vocabulary)))
# Create an embedding layer with the specified dimensions.
embedding = layers.Embedding(
input_dim=len(vocabulary), output_dim=embedding_dims
)
# Convert the index values to embedding representations.
encoded_feature = embedding(encoded_feature)
else:
# Convert the string input values into a one hot encoding.
encoded_feature = lookup(
keras.ops.expand_dims(inputs[feature_name], -1)
)
else:
# Use the numerical features as-is.
encoded_feature = keras.ops.expand_dims(inputs[feature_name], -1)
encoded_features.append(encoded_feature)
all_features = layers.concatenate(encoded_features)
return all_features
```
---
## Experiment 1: a baseline model
In the first experiment, let's create a multi-layer feed-forward network,
where the categorical features are one-hot encoded.
```python
def create_baseline_model():
inputs = create_model_inputs()
features = encode_inputs(inputs)
for units in hidden_units:
features = layers.Dense(units)(features)
features = layers.BatchNormalization()(features)
features = layers.ReLU()(features)
features = layers.Dropout(dropout_rate)(features)
outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(features)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
baseline_model = create_baseline_model()
keras.utils.plot_model(baseline_model, show_shapes=True, rankdir="LR")
```
<div class="k-default-codeblock">
```
/Users/fchollet/Library/Python/3.10/lib/python/site-packages/numpy/core/numeric.py:2468: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
return bool(asarray(a1 == a2).all())
```
</div>

Let's run it:
```python
run_experiment(baseline_model)
```
<div class="k-default-codeblock">
```
Start training the model...
Epoch 1/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 6s 3ms/step - loss: 1.0713 - sparse_categorical_accuracy: 0.5634
Epoch 2/50
179/1862 ━[37m━━━━━━━━━━━━━━━━━━━ 1s 848us/step - loss: 0.7473 - sparse_categorical_accuracy: 0.6840
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py:153: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
self.gen.throw(typ, value, traceback)
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 904us/step - loss: 0.7386 - sparse_categorical_accuracy: 0.6866
Epoch 3/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 909us/step - loss: 0.7135 - sparse_categorical_accuracy: 0.6958
Epoch 4/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 878us/step - loss: 0.6975 - sparse_categorical_accuracy: 0.7051
Epoch 5/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 941us/step - loss: 0.6876 - sparse_categorical_accuracy: 0.7089
Epoch 6/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 936us/step - loss: 0.6848 - sparse_categorical_accuracy: 0.7106
Epoch 7/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 934us/step - loss: 0.7165 - sparse_categorical_accuracy: 0.6969
Epoch 8/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 924us/step - loss: 0.6979 - sparse_categorical_accuracy: 0.7053
Epoch 9/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 967us/step - loss: 0.6913 - sparse_categorical_accuracy: 0.7088
Epoch 10/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 975us/step - loss: 0.6807 - sparse_categorical_accuracy: 0.7124
Epoch 11/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 987us/step - loss: 0.6829 - sparse_categorical_accuracy: 0.7110
Epoch 12/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 917us/step - loss: 0.6823 - sparse_categorical_accuracy: 0.7109
Epoch 13/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 879us/step - loss: 0.6658 - sparse_categorical_accuracy: 0.7175
Epoch 14/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 948us/step - loss: 0.6677 - sparse_categorical_accuracy: 0.7170
Epoch 15/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 866us/step - loss: 0.6695 - sparse_categorical_accuracy: 0.7130
Epoch 16/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 860us/step - loss: 0.6847 - sparse_categorical_accuracy: 0.7074
Epoch 17/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 853us/step - loss: 0.6660 - sparse_categorical_accuracy: 0.7174
Epoch 18/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 855us/step - loss: 0.6620 - sparse_categorical_accuracy: 0.7184
Epoch 19/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 900us/step - loss: 0.6642 - sparse_categorical_accuracy: 0.7163
Epoch 20/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 969us/step - loss: 0.6614 - sparse_categorical_accuracy: 0.7167
Epoch 21/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 988us/step - loss: 0.6560 - sparse_categorical_accuracy: 0.7199
Epoch 22/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 969us/step - loss: 0.6559 - sparse_categorical_accuracy: 0.7201
Epoch 23/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 868us/step - loss: 0.6514 - sparse_categorical_accuracy: 0.7217
Epoch 24/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 925us/step - loss: 0.6509 - sparse_categorical_accuracy: 0.7222
Epoch 25/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 879us/step - loss: 0.6464 - sparse_categorical_accuracy: 0.7233
Epoch 26/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 898us/step - loss: 0.6442 - sparse_categorical_accuracy: 0.7237
Epoch 27/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 842us/step - loss: 0.6476 - sparse_categorical_accuracy: 0.7210
Epoch 28/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 815us/step - loss: 0.6427 - sparse_categorical_accuracy: 0.7247
Epoch 29/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 837us/step - loss: 0.6414 - sparse_categorical_accuracy: 0.7244
Epoch 30/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 865us/step - loss: 0.6408 - sparse_categorical_accuracy: 0.7256
Epoch 31/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 845us/step - loss: 0.6378 - sparse_categorical_accuracy: 0.7269
Epoch 32/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 842us/step - loss: 0.6432 - sparse_categorical_accuracy: 0.7235
Epoch 33/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 905us/step - loss: 0.6482 - sparse_categorical_accuracy: 0.7226
Epoch 34/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6586 - sparse_categorical_accuracy: 0.7191
Epoch 35/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 958us/step - loss: 0.6511 - sparse_categorical_accuracy: 0.7215
Epoch 36/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 910us/step - loss: 0.6571 - sparse_categorical_accuracy: 0.7217
Epoch 37/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 897us/step - loss: 0.6451 - sparse_categorical_accuracy: 0.7253
Epoch 38/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 846us/step - loss: 0.6455 - sparse_categorical_accuracy: 0.7254
Epoch 39/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 907us/step - loss: 0.6722 - sparse_categorical_accuracy: 0.7131
Epoch 40/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1000us/step - loss: 0.6393 - sparse_categorical_accuracy: 0.7282
Epoch 41/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 872us/step - loss: 0.6804 - sparse_categorical_accuracy: 0.7078
Epoch 42/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 884us/step - loss: 0.6657 - sparse_categorical_accuracy: 0.7135
Epoch 43/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 960us/step - loss: 0.6557 - sparse_categorical_accuracy: 0.7180
Epoch 44/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 870us/step - loss: 0.6671 - sparse_categorical_accuracy: 0.7115
Epoch 45/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 871us/step - loss: 0.6730 - sparse_categorical_accuracy: 0.7069
Epoch 46/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 875us/step - loss: 0.6669 - sparse_categorical_accuracy: 0.7105
Epoch 47/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 847us/step - loss: 0.6634 - sparse_categorical_accuracy: 0.7129
Epoch 48/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 846us/step - loss: 0.6625 - sparse_categorical_accuracy: 0.7137
Epoch 49/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 824us/step - loss: 0.6596 - sparse_categorical_accuracy: 0.7146
Epoch 50/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 833us/step - loss: 0.6714 - sparse_categorical_accuracy: 0.7106
Model training finished
Test accuracy: 69.5%
```
</div>
The baseline model achieves about 70% test accuracy in this run.
---
## Experiment 2: Wide & Deep model
In the second experiment, we create a Wide & Deep model. The wide part of the model is
a linear model, while the deep part of the model is a multi-layer feed-forward network.
Use the sparse representation of the input features in the wide part of the model and the
dense representation of the input features for the deep part of the model.
Note that every input feature contributes to both parts of the model with different
representations.
```python
def create_wide_and_deep_model():
inputs = create_model_inputs()
wide = encode_inputs(inputs)
wide = layers.BatchNormalization()(wide)
deep = encode_inputs(inputs, use_embedding=True)
for units in hidden_units:
deep = layers.Dense(units)(deep)
deep = layers.BatchNormalization()(deep)
deep = layers.ReLU()(deep)
deep = layers.Dropout(dropout_rate)(deep)
merged = layers.concatenate([wide, deep])
outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(merged)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
wide_and_deep_model = create_wide_and_deep_model()
keras.utils.plot_model(wide_and_deep_model, show_shapes=True, rankdir="LR")
```
<div class="k-default-codeblock">
```
/Users/fchollet/Library/Python/3.10/lib/python/site-packages/numpy/core/numeric.py:2468: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
return bool(asarray(a1 == a2).all())
```
</div>

Let's run it:
```python
run_experiment(wide_and_deep_model)
```
<div class="k-default-codeblock">
```
Start training the model...
Epoch 1/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 5s 2ms/step - loss: 0.8979 - sparse_categorical_accuracy: 0.6386
Epoch 2/50
128/1862 ━[37m━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6317 - sparse_categorical_accuracy: 0.7302
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py:153: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
self.gen.throw(typ, value, traceback)
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6290 - sparse_categorical_accuracy: 0.7295
Epoch 3/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6130 - sparse_categorical_accuracy: 0.7350
Epoch 4/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6029 - sparse_categorical_accuracy: 0.7397
Epoch 5/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.6010 - sparse_categorical_accuracy: 0.7397
Epoch 6/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5924 - sparse_categorical_accuracy: 0.7445
Epoch 7/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5917 - sparse_categorical_accuracy: 0.7442
Epoch 8/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5945 - sparse_categorical_accuracy: 0.7438
Epoch 9/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5933 - sparse_categorical_accuracy: 0.7443
Epoch 10/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5862 - sparse_categorical_accuracy: 0.7481
Epoch 11/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5809 - sparse_categorical_accuracy: 0.7507
Epoch 12/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5777 - sparse_categorical_accuracy: 0.7519
Epoch 13/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5736 - sparse_categorical_accuracy: 0.7534
Epoch 14/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5716 - sparse_categorical_accuracy: 0.7545
Epoch 15/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5736 - sparse_categorical_accuracy: 0.7537
Epoch 16/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5712 - sparse_categorical_accuracy: 0.7559
Epoch 17/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5683 - sparse_categorical_accuracy: 0.7564
Epoch 18/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5666 - sparse_categorical_accuracy: 0.7569
Epoch 19/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5652 - sparse_categorical_accuracy: 0.7575
Epoch 20/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5634 - sparse_categorical_accuracy: 0.7583
Epoch 21/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5677 - sparse_categorical_accuracy: 0.7563
Epoch 22/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5651 - sparse_categorical_accuracy: 0.7578
Epoch 23/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5628 - sparse_categorical_accuracy: 0.7586
Epoch 24/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5619 - sparse_categorical_accuracy: 0.7593
Epoch 25/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5603 - sparse_categorical_accuracy: 0.7589
Epoch 26/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5644 - sparse_categorical_accuracy: 0.7585
Epoch 27/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5592 - sparse_categorical_accuracy: 0.7604
Epoch 28/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5571 - sparse_categorical_accuracy: 0.7616
Epoch 29/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5556 - sparse_categorical_accuracy: 0.7629
Epoch 30/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5538 - sparse_categorical_accuracy: 0.7640
Epoch 31/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5535 - sparse_categorical_accuracy: 0.7635
Epoch 32/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5521 - sparse_categorical_accuracy: 0.7645
Epoch 33/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5505 - sparse_categorical_accuracy: 0.7648
Epoch 34/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5494 - sparse_categorical_accuracy: 0.7657
Epoch 35/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5496 - sparse_categorical_accuracy: 0.7660
Epoch 36/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5488 - sparse_categorical_accuracy: 0.7673
Epoch 37/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5471 - sparse_categorical_accuracy: 0.7668
Epoch 38/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5474 - sparse_categorical_accuracy: 0.7673
Epoch 39/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5457 - sparse_categorical_accuracy: 0.7674
Epoch 40/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5452 - sparse_categorical_accuracy: 0.7689
Epoch 41/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5448 - sparse_categorical_accuracy: 0.7679
Epoch 42/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.5442 - sparse_categorical_accuracy: 0.7692
Epoch 43/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5436 - sparse_categorical_accuracy: 0.7701
Epoch 44/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5419 - sparse_categorical_accuracy: 0.7706
Epoch 45/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5432 - sparse_categorical_accuracy: 0.7691
Epoch 46/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5406 - sparse_categorical_accuracy: 0.7708
Epoch 47/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5412 - sparse_categorical_accuracy: 0.7701
Epoch 48/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5400 - sparse_categorical_accuracy: 0.7701
Epoch 49/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5416 - sparse_categorical_accuracy: 0.7699
Epoch 50/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5403 - sparse_categorical_accuracy: 0.7701
Model training finished
Test accuracy: 79.04%
```
</div>
The wide and deep model achieves ~79% test accuracy.
---
## Experiment 3: Deep & Cross model
In the third experiment, we create a Deep & Cross model. The deep part of this model
is the same as the deep part created in the previous experiment. The key idea of
the cross part is to apply explicit feature crossing in an efficient way,
where the degree of cross features grows with layer depth.
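Concretely, each cross step below computes `cross = x0 * Dense(units)(cross) + cross`: the original features `x0` gate a linear transform of the current features, with a residual connection. The tiny NumPy sketch that follows uses made-up numbers purely to illustrate that arithmetic:
```python
import numpy as np

x0 = np.array([1.0, 2.0, 3.0])  # original encoded features
x = x0.copy()                   # input to the current cross layer
W = 0.1 * np.eye(3)             # stand-in for the Dense kernel
b = np.zeros(3)                 # stand-in for the Dense bias
x_next = x0 * (x @ W + b) + x   # explicit feature crossing plus residual
print(x_next)                   # [1.1 2.4 3.9]
```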
```python
def create_deep_and_cross_model():
inputs = create_model_inputs()
x0 = encode_inputs(inputs, use_embedding=True)
cross = x0
for _ in hidden_units:
units = cross.shape[-1]
x = layers.Dense(units)(cross)
cross = x0 * x + cross
cross = layers.BatchNormalization()(cross)
deep = x0
for units in hidden_units:
deep = layers.Dense(units)(deep)
deep = layers.BatchNormalization()(deep)
deep = layers.ReLU()(deep)
deep = layers.Dropout(dropout_rate)(deep)
merged = layers.concatenate([cross, deep])
outputs = layers.Dense(units=NUM_CLASSES, activation="softmax")(merged)
model = keras.Model(inputs=inputs, outputs=outputs)
return model
deep_and_cross_model = create_deep_and_cross_model()
keras.utils.plot_model(deep_and_cross_model, show_shapes=True, rankdir="LR")
```
<div class="k-default-codeblock">
```
/Users/fchollet/Library/Python/3.10/lib/python/site-packages/numpy/core/numeric.py:2468: FutureWarning: elementwise comparison failed; returning scalar instead, but in the future will perform elementwise comparison
return bool(asarray(a1 == a2).all())
```
</div>

Let's run it:
```python
run_experiment(deep_and_cross_model)
```
<div class="k-default-codeblock">
```
Start training the model...
Epoch 1/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 5s 2ms/step - loss: 0.9221 - sparse_categorical_accuracy: 0.6235
Epoch 2/50
116/1862 ━[37m━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.6388 - sparse_categorical_accuracy: 0.7257
/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/contextlib.py:153: UserWarning: Your input ran out of data; interrupting training. Make sure that your dataset or generator can generate at least `steps_per_epoch * epochs` batches. You may need to use the `.repeat()` function when building your dataset.
self.gen.throw(typ, value, traceback)
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 2ms/step - loss: 0.6271 - sparse_categorical_accuracy: 0.7316
Epoch 3/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.6023 - sparse_categorical_accuracy: 0.7403
Epoch 4/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5896 - sparse_categorical_accuracy: 0.7453
Epoch 5/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5899 - sparse_categorical_accuracy: 0.7438
Epoch 6/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5960 - sparse_categorical_accuracy: 0.7421
Epoch 7/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5813 - sparse_categorical_accuracy: 0.7481
Epoch 8/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5748 - sparse_categorical_accuracy: 0.7500
Epoch 9/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5743 - sparse_categorical_accuracy: 0.7502
Epoch 10/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5739 - sparse_categorical_accuracy: 0.7506
Epoch 11/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5673 - sparse_categorical_accuracy: 0.7540
Epoch 12/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5649 - sparse_categorical_accuracy: 0.7561
Epoch 13/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.5651 - sparse_categorical_accuracy: 0.7548
Epoch 14/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5618 - sparse_categorical_accuracy: 0.7563
Epoch 15/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5599 - sparse_categorical_accuracy: 0.7571
Epoch 16/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5568 - sparse_categorical_accuracy: 0.7585
Epoch 17/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5556 - sparse_categorical_accuracy: 0.7592
Epoch 18/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5544 - sparse_categorical_accuracy: 0.7595
Epoch 19/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5533 - sparse_categorical_accuracy: 0.7603
Epoch 20/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5532 - sparse_categorical_accuracy: 0.7597
Epoch 21/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5531 - sparse_categorical_accuracy: 0.7602
Epoch 22/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5516 - sparse_categorical_accuracy: 0.7608
Epoch 23/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.5503 - sparse_categorical_accuracy: 0.7611
Epoch 24/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5492 - sparse_categorical_accuracy: 0.7619
Epoch 25/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5482 - sparse_categorical_accuracy: 0.7623
Epoch 26/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5464 - sparse_categorical_accuracy: 0.7635
Epoch 27/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5483 - sparse_categorical_accuracy: 0.7625
Epoch 28/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.5654 - sparse_categorical_accuracy: 0.7555
Epoch 29/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5545 - sparse_categorical_accuracy: 0.7593
Epoch 30/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5512 - sparse_categorical_accuracy: 0.7603
Epoch 31/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5493 - sparse_categorical_accuracy: 0.7616
Epoch 32/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5485 - sparse_categorical_accuracy: 0.7627
Epoch 33/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5593 - sparse_categorical_accuracy: 0.7588
Epoch 34/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5536 - sparse_categorical_accuracy: 0.7608
Epoch 35/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5537 - sparse_categorical_accuracy: 0.7612
Epoch 36/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5518 - sparse_categorical_accuracy: 0.7621
Epoch 37/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5502 - sparse_categorical_accuracy: 0.7618
Epoch 38/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5537 - sparse_categorical_accuracy: 0.7597
Epoch 39/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5526 - sparse_categorical_accuracy: 0.7609
Epoch 40/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5508 - sparse_categorical_accuracy: 0.7608
Epoch 41/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5495 - sparse_categorical_accuracy: 0.7613
Epoch 42/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 3s 1ms/step - loss: 0.5478 - sparse_categorical_accuracy: 0.7625
Epoch 43/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5471 - sparse_categorical_accuracy: 0.7629
Epoch 44/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5462 - sparse_categorical_accuracy: 0.7640
Epoch 45/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5458 - sparse_categorical_accuracy: 0.7633
Epoch 46/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5466 - sparse_categorical_accuracy: 0.7635
Epoch 47/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5492 - sparse_categorical_accuracy: 0.7633
Epoch 48/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5474 - sparse_categorical_accuracy: 0.7639
Epoch 49/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5452 - sparse_categorical_accuracy: 0.7645
Epoch 50/50
1862/1862 ━━━━━━━━━━━━━━━━━━━━ 2s 1ms/step - loss: 0.5446 - sparse_categorical_accuracy: 0.7663
Model training finished
Test accuracy: 77.98%
```
</div>
The deep and cross model achieves about 78% test accuracy in this run.
---
## Conclusion
You can use Keras Preprocessing Layers to easily handle categorical features
with different encoding mechanisms, including one-hot encoding and feature embedding.
In addition, different model architectures — like wide, deep, and cross networks
— have different advantages, with respect to different dataset properties.
You can explore using them independently or combining them to achieve the best result
for your dataset.
| keras-io/examples/structured_data/md/wide_deep_cross_networks.md/0 | {
"file_path": "keras-io/examples/structured_data/md/wide_deep_cross_networks.md",
"repo_id": "keras-io",
"token_count": 18811
} | 95 |
<jupyter_start><jupyter_text>Electroencephalogram Signal Classification for action identification**Author:** [Suvaditya Mukherjee](https://github.com/suvadityamuk)**Date created:** 2022/11/03**Last modified:** 2022/11/05**Description:** Training a Convolutional model to classify EEG signals produced by exposure to certain stimuli. Introduction The following example explores how we can make a Convolution-based Neural Network to perform classification on Electroencephalogram signals captured when subjects were exposed to different stimuli. We train a model from scratch since such signal-classification models are fairly scarce in pre-trained format. The data we use is sourced from the UC Berkeley-Biosense Lab where the data was collected from 15 subjects at the same time. Our process is as follows:- Load the [UC Berkeley-Biosense Synchronized Brainwave Dataset](https://www.kaggle.com/datasets/berkeley-biosense/synchronized-brainwave-dataset)- Visualize random samples from the data- Pre-process, collate and scale the data to finally make a `tf.data.Dataset`- Prepare class weights in order to tackle major imbalances- Create a Conv1D and Dense-based model to perform classification- Define callbacks and hyperparameters- Train the model- Plot metrics from History and perform evaluation This example needs the following external dependencies (Gdown, Scikit-learn, Pandas, Numpy, Matplotlib). You can install them via the following commands. Gdown is an external package used to download large files from Google Drive. To know more, you can refer to its [PyPi page here](https://pypi.org/project/gdown) Setup and Data Downloads First, let's install our dependencies:<jupyter_code>!pip install gdown -q
!pip install scikit-learn -q
!pip install pandas -q
!pip install numpy -q
!pip install matplotlib -q<jupyter_output><empty_output><jupyter_text>Next, let's download our dataset. The gdown package makes it easy to download the data from Google Drive:<jupyter_code>!gdown 1V5B7Bt6aJm0UHbR7cRKBEK8jx7lYPVuX
!# gdown will download eeg-data.csv onto the local drive for use. Total size of
!# eeg-data.csv is 105.7 MB
import pandas as pd
import matplotlib.pyplot as plt
import json
import numpy as np
import keras
from keras import layers
import tensorflow as tf
from sklearn import preprocessing, model_selection
import random
QUALITY_THRESHOLD = 128
BATCH_SIZE = 64
SHUFFLE_BUFFER_SIZE = BATCH_SIZE * 2<jupyter_output><empty_output><jupyter_text>Read data from `eeg-data.csv`We use the Pandas library to read the `eeg-data.csv` file and display the first 5 rowsusing the `.head()` command<jupyter_code>eeg = pd.read_csv("eeg-data.csv")<jupyter_output><empty_output><jupyter_text>We remove unlabeled samples from our dataset as they do not contribute to the model. Wealso perform a `.drop()` operation on the columns that are not required for training datapreparation<jupyter_code>unlabeled_eeg = eeg[eeg["label"] == "unlabeled"]
eeg = eeg.loc[eeg["label"] != "unlabeled"]
eeg = eeg.loc[eeg["label"] != "everyone paired"]
eeg.drop(
[
"indra_time",
"Unnamed: 0",
"browser_latency",
"reading_time",
"attention_esense",
"meditation_esense",
"updatedAt",
"createdAt",
],
axis=1,
inplace=True,
)
eeg.reset_index(drop=True, inplace=True)
eeg.head()<jupyter_output><empty_output><jupyter_text>In the data, the samples recorded are given a score from 0 to 200 based on how well-calibrated the sensor was (0 being best, 200 being worst). We filter the values based on an arbitrary cutoff limit of 128.<jupyter_code>def convert_string_data_to_values(value_string):
str_list = json.loads(value_string)
return str_list
eeg["raw_values"] = eeg["raw_values"].apply(convert_string_data_to_values)
eeg = eeg.loc[eeg["signal_quality"] < QUALITY_THRESHOLD]
eeg.head()<jupyter_output><empty_output><jupyter_text>Visualize one random sample from the data We visualize one sample from the data to understand how the stimulus-induced signal lookslike<jupyter_code>def view_eeg_plot(idx):
data = eeg.loc[idx, "raw_values"]
plt.plot(data)
plt.title(f"Sample random plot")
plt.show()
view_eeg_plot(7)<jupyter_output><empty_output><jupyter_text>Pre-process and collate data There are a total of 67 different labels present in the data, where there are numberedsub-labels. We collate them under a single label as per their numbering and replace themin the data itself. Following this process, we perform simple Label encoding to get themin an integer format.<jupyter_code>print("Before replacing labels")
print(eeg["label"].unique(), "\n")
print(len(eeg["label"].unique()), "\n")
eeg.replace(
{
"label": {
"blink1": "blink",
"blink2": "blink",
"blink3": "blink",
"blink4": "blink",
"blink5": "blink",
"math1": "math",
"math2": "math",
"math3": "math",
"math4": "math",
"math5": "math",
"math6": "math",
"math7": "math",
"math8": "math",
"math9": "math",
"math10": "math",
"math11": "math",
"math12": "math",
"thinkOfItems-ver1": "thinkOfItems",
"thinkOfItems-ver2": "thinkOfItems",
"video-ver1": "video",
"video-ver2": "video",
"thinkOfItemsInstruction-ver1": "thinkOfItemsInstruction",
"thinkOfItemsInstruction-ver2": "thinkOfItemsInstruction",
"colorRound1-1": "colorRound1",
"colorRound1-2": "colorRound1",
"colorRound1-3": "colorRound1",
"colorRound1-4": "colorRound1",
"colorRound1-5": "colorRound1",
"colorRound1-6": "colorRound1",
"colorRound2-1": "colorRound2",
"colorRound2-2": "colorRound2",
"colorRound2-3": "colorRound2",
"colorRound2-4": "colorRound2",
"colorRound2-5": "colorRound2",
"colorRound2-6": "colorRound2",
"colorRound3-1": "colorRound3",
"colorRound3-2": "colorRound3",
"colorRound3-3": "colorRound3",
"colorRound3-4": "colorRound3",
"colorRound3-5": "colorRound3",
"colorRound3-6": "colorRound3",
"colorRound4-1": "colorRound4",
"colorRound4-2": "colorRound4",
"colorRound4-3": "colorRound4",
"colorRound4-4": "colorRound4",
"colorRound4-5": "colorRound4",
"colorRound4-6": "colorRound4",
"colorRound5-1": "colorRound5",
"colorRound5-2": "colorRound5",
"colorRound5-3": "colorRound5",
"colorRound5-4": "colorRound5",
"colorRound5-5": "colorRound5",
"colorRound5-6": "colorRound5",
"colorInstruction1": "colorInstruction",
"colorInstruction2": "colorInstruction",
"readyRound1": "readyRound",
"readyRound2": "readyRound",
"readyRound3": "readyRound",
"readyRound4": "readyRound",
"readyRound5": "readyRound",
"colorRound1": "colorRound",
"colorRound2": "colorRound",
"colorRound3": "colorRound",
"colorRound4": "colorRound",
"colorRound5": "colorRound",
}
},
inplace=True,
)
print("After replacing labels")
print(eeg["label"].unique())
print(len(eeg["label"].unique()))
le = preprocessing.LabelEncoder() # Generates a look-up table
le.fit(eeg["label"])
eeg["label"] = le.transform(eeg["label"])<jupyter_output><empty_output><jupyter_text>We extract the number of unique classes present in the data<jupyter_code>num_classes = len(eeg["label"].unique())
print(num_classes)<jupyter_output><empty_output><jupyter_text>We now visualize the number of samples present in each class using a Bar plot.<jupyter_code>plt.bar(range(num_classes), eeg["label"].value_counts())
plt.title("Number of samples per class")
plt.show()<jupyter_output><empty_output><jupyter_text>Scale and split data We perform a simple Min-Max scaling to bring the value-range between 0 and 1. We do notuse Standard Scaling as the data does not follow a Gaussian distribution.<jupyter_code>scaler = preprocessing.MinMaxScaler()
series_list = [
scaler.fit_transform(np.asarray(i).reshape(-1, 1)) for i in eeg["raw_values"]
]
labels_list = [i for i in eeg["label"]]<jupyter_output><empty_output><jupyter_text>We now create a Train-test split with a 15% holdout set. Following this, we reshape thedata to create a sequence of length 512. We also convert the labels from their currentlabel-encoded form to a one-hot encoding to enable use of several different`keras.metrics` functions.<jupyter_code>x_train, x_test, y_train, y_test = model_selection.train_test_split(
series_list, labels_list, test_size=0.15, random_state=42, shuffle=True
)
print(
f"Length of x_train : {len(x_train)}\nLength of x_test : {len(x_test)}\nLength of y_train : {len(y_train)}\nLength of y_test : {len(y_test)}"
)
x_train = np.asarray(x_train).astype(np.float32).reshape(-1, 512, 1)
y_train = np.asarray(y_train).astype(np.float32).reshape(-1, 1)
y_train = keras.utils.to_categorical(y_train)
x_test = np.asarray(x_test).astype(np.float32).reshape(-1, 512, 1)
y_test = np.asarray(y_test).astype(np.float32).reshape(-1, 1)
y_test = keras.utils.to_categorical(y_test)<jupyter_output><empty_output><jupyter_text>Prepare `tf.data.Dataset` We now create a `tf.data.Dataset` from this data to prepare it for training. We alsoshuffle and batch the data for use later.<jupyter_code>train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
train_dataset = train_dataset.shuffle(SHUFFLE_BUFFER_SIZE).batch(BATCH_SIZE)
test_dataset = test_dataset.batch(BATCH_SIZE)<jupyter_output><empty_output><jupyter_text>Make Class Weights using Naive method As we can see from the plot of number of samples per class, the dataset is imbalanced.Hence, we **calculate weights for each class** to make sure that the model is trained ina fair manner without preference to any specific class due to greater number of samples.We use a naive method to calculate these weights, finding an **inverse proportion** ofeach class and using that as the weight.<jupyter_code>vals_dict = {}
for i in eeg["label"]:
if i in vals_dict.keys():
vals_dict[i] += 1
else:
vals_dict[i] = 1
total = sum(vals_dict.values())
# Formula used - Naive method where
# weight = 1 - (no. of samples present / total no. of samples)
# So more the samples, lower the weight
weight_dict = {k: (1 - (v / total)) for k, v in vals_dict.items()}
print(weight_dict)<jupyter_output><empty_output><jupyter_text>Define simple function to plot all the metrics present in a `keras.callbacks.History`object<jupyter_code>def plot_history_metrics(history: keras.callbacks.History):
total_plots = len(history.history)
cols = total_plots // 2
rows = total_plots // cols
if total_plots % cols != 0:
rows += 1
pos = range(1, total_plots + 1)
plt.figure(figsize=(15, 10))
for i, (key, value) in enumerate(history.history.items()):
plt.subplot(rows, cols, pos[i])
plt.plot(range(len(value)), value)
plt.title(str(key))
plt.show()<jupyter_output><empty_output><jupyter_text>Define function to generate Convolutional model<jupyter_code>def create_model():
input_layer = keras.Input(shape=(512, 1))
x = layers.Conv1D(
filters=32, kernel_size=3, strides=2, activation="relu", padding="same"
)(input_layer)
x = layers.BatchNormalization()(x)
x = layers.Conv1D(
filters=64, kernel_size=3, strides=2, activation="relu", padding="same"
)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv1D(
filters=128, kernel_size=5, strides=2, activation="relu", padding="same"
)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv1D(
filters=256, kernel_size=5, strides=2, activation="relu", padding="same"
)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv1D(
filters=512, kernel_size=7, strides=2, activation="relu", padding="same"
)(x)
x = layers.BatchNormalization()(x)
x = layers.Conv1D(
filters=1024,
kernel_size=7,
strides=2,
activation="relu",
padding="same",
)(x)
x = layers.BatchNormalization()(x)
x = layers.Dropout(0.2)(x)
x = layers.Flatten()(x)
x = layers.Dense(4096, activation="relu")(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(
2048, activation="relu", kernel_regularizer=keras.regularizers.L2()
)(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(
1024, activation="relu", kernel_regularizer=keras.regularizers.L2()
)(x)
x = layers.Dropout(0.2)(x)
x = layers.Dense(
128, activation="relu", kernel_regularizer=keras.regularizers.L2()
)(x)
output_layer = layers.Dense(num_classes, activation="softmax")(x)
return keras.Model(inputs=input_layer, outputs=output_layer)<jupyter_output><empty_output><jupyter_text>Get Model summary<jupyter_code>conv_model = create_model()
conv_model.summary()<jupyter_output><empty_output><jupyter_text>Define callbacks, optimizer, loss and metrics We set the number of epochs at 30 after performing extensive experimentation. It was seenthat this was the optimal number, after performing Early-Stopping analysis as well.We define a Model Checkpoint callback to make sure that we only get the best modelweights.We also define a ReduceLROnPlateau as there were several cases found duringexperimentation where the loss stagnated after a certain point. On the other hand, adirect LRScheduler was found to be too aggressive in its decay.<jupyter_code>epochs = 30
callbacks = [
keras.callbacks.ModelCheckpoint(
"best_model.keras", save_best_only=True, monitor="loss"
),
keras.callbacks.ReduceLROnPlateau(
monitor="val_top_k_categorical_accuracy",
factor=0.2,
patience=2,
min_lr=0.000001,
),
]
optimizer = keras.optimizers.Adam(amsgrad=True, learning_rate=0.001)
loss = keras.losses.CategoricalCrossentropy()<jupyter_output><empty_output><jupyter_text>Compile model and call `model.fit()` We use the `Adam` optimizer since it is commonly considered the best choice forpreliminary training, and was found to be the best optimizer.We use `CategoricalCrossentropy` as the loss as our labels are in a one-hot-encoded form.We define the `TopKCategoricalAccuracy(k=3)`, `AUC`, `Precision` and `Recall` metrics tofurther aid in understanding the model better.<jupyter_code>conv_model.compile(
optimizer=optimizer,
loss=loss,
metrics=[
keras.metrics.TopKCategoricalAccuracy(k=3),
keras.metrics.AUC(),
keras.metrics.Precision(),
keras.metrics.Recall(),
],
)
conv_model_history = conv_model.fit(
train_dataset,
epochs=epochs,
callbacks=callbacks,
validation_data=test_dataset,
class_weight=weight_dict,
)<jupyter_output><empty_output><jupyter_text>Visualize model metrics during training We use the function defined above to see model metrics during training.<jupyter_code>plot_history_metrics(conv_model_history)<jupyter_output><empty_output><jupyter_text>Evaluate model on test data<jupyter_code>loss, accuracy, auc, precision, recall = conv_model.evaluate(test_dataset)
print(f"Loss : {loss}")
print(f"Top 3 Categorical Accuracy : {accuracy}")
print(f"Area under the Curve (ROC) : {auc}")
print(f"Precision : {precision}")
print(f"Recall : {recall}")
def view_evaluated_eeg_plots(model):
start_index = random.randint(10, len(eeg))
end_index = start_index + 11
data = eeg.loc[start_index:end_index, "raw_values"]
data_array = [scaler.fit_transform(np.asarray(i).reshape(-1, 1)) for i in data]
data_array = [np.asarray(data_array).astype(np.float32).reshape(-1, 512, 1)]
original_labels = eeg.loc[start_index:end_index, "label"]
predicted_labels = np.argmax(model.predict(data_array, verbose=0), axis=1)
original_labels = [
le.inverse_transform(np.array(label).reshape(-1))[0]
for label in original_labels
]
predicted_labels = [
le.inverse_transform(np.array(label).reshape(-1))[0]
for label in predicted_labels
]
total_plots = 12
cols = total_plots // 3
rows = total_plots // cols
if total_plots % cols != 0:
rows += 1
pos = range(1, total_plots + 1)
fig = plt.figure(figsize=(20, 10))
for i, (plot_data, og_label, pred_label) in enumerate(
zip(data, original_labels, predicted_labels)
):
plt.subplot(rows, cols, pos[i])
plt.plot(plot_data)
plt.title(f"Actual Label : {og_label}\nPredicted Label : {pred_label}")
fig.subplots_adjust(hspace=0.5)
plt.show()
view_evaluated_eeg_plots(conv_model)<jupyter_output><empty_output> | keras-io/examples/timeseries/ipynb/eeg_signal_classification.ipynb/0 | {
"file_path": "keras-io/examples/timeseries/ipynb/eeg_signal_classification.ipynb",
"repo_id": "keras-io",
"token_count": 6757
} | 96 |
"""
Title: Image Classification using Global Context Vision Transformer
Author: Md Awsafur Rahman
Date created: 2023/10/30
Last modified: 2023/10/30
Description: Implementation and fine-tuning of Global Context Vision Transformer for image classification.
Accelerator: GPU
"""
"""
# Setup
"""
"""shell
pip install --upgrade keras_cv tensorflow
pip install --upgrade keras
"""
import keras
from keras_cv.layers import DropPath
from keras import ops
from keras import layers
import tensorflow as tf # only for dataloader
import tensorflow_datasets as tfds # for flower dataset
from skimage.data import chelsea
import matplotlib.pyplot as plt
import numpy as np
"""
## Introduction
In this notebook, we will utilize multi-backend Keras 3.0 to implement the
[**GCViT: Global Context Vision Transformer**](https://arxiv.org/abs/2206.09959) paper,
presented at ICML 2023 by A. Hatamizadeh et al. Then, we will fine-tune the model on the
Flower dataset for an image classification task, leveraging the official ImageNet pre-trained
weights. A highlight of this notebook is its compatibility with multiple backends:
TensorFlow, PyTorch, and JAX, showcasing the true potential of multi-backend Keras.
"""
"""
## Motivation
> **Note:** In this section we'll learn about the backstory of GCViT and try to
understand why it is proposed.
* During recent years, **Transformers** have achieved dominance in **Natural Language
Processing (NLP)** tasks, thanks to the **self-attention** mechanism which allows for
capturing both long and short-range information.
* Following this trend, **Vision Transformer (ViT)** proposed to utilize image patches as
tokens in a gigantic architecture similar to encoder of the original Transformer.
* Despite the historic dominance of **Convolutional Neural Network (CNN)** in computer
vision, **ViT-based** models have shown **SOTA or competitive performance** in various
computer vision tasks.
<img src="https://raw.githubusercontent.com/awsaf49/gcvit-tf/main/image/vit_gif.gif"
width=600>
* However, **quadratic [`O(n^2)`] computational complexity** of self-attention and **lack
of multi-scale information** makes it difficult for **ViT** to be considered as
a general-purpose architecture for Computer Vision tasks like **segmentation and object
detection** where it requires **dense prediction at the pixel level**.
* Swin Transformer has attempted to address the issues of **ViT** by proposing
**multi-resolution/hierarchical** architectures in which the self-attention is computed
in **local windows** and cross-window connections such as **window shifting** are used
for modeling the interactions across different regions. But the **limited receptive field
of local windows** can not capture long-range information, and cross-window-connection
schemes such as **window-shifting only cover a small neighborhood** in the vicinity of
each window. Also, it lacks **inductive-bias** that encourages certain translation
invariance is still preferable for general-purpose visual modeling, particularly for the
dense prediction tasks of object detection and semantic segmentation.
<img src="https://raw.githubusercontent.com/awsaf49/gcvit-tf/main/image/swin_vs_vit.JPG"
width=400> <img
src="https://raw.githubusercontent.com/awsaf49/gcvit-tf/main/image/shifted_window.JPG"
width=400>
<img src="https://raw.githubusercontent.com/awsaf49/gcvit-tf/main/image/swin_arch.JPG"
width=800>
* To address above limitations, **Global Context (GC) ViT** network is proposed.
"""
"""
## Architecture
Let's have a quick **overview** of our key components,
1. `Stem/PatchEmbed:` A stem/patchify layer processes images at the network’s beginning.
For this network, it creates **patches/tokens** and converts them into **embeddings**.
2. `Level:` It is the repetitive building block that extracts features using different
blocks.
3. `Global Token Gen./FeatureExtraction:` It generates **global tokens/patches** with
**Depthwise-CNN**, **SqueezeAndExcitation (Squeeze-Excitation)**, **CNN** and
**MaxPooling**. So basically
it's a Feature Extractor.
4. `Block:` It is the repetitive module that applies attention to the features and
projects them to a certain dimension.
1. `Local-MSA:` Local Multi head Self Attention.
2. `Global-MSA:` Global Multi head Self Attention.
3. `MLP:` Linear layer that projects a vector to another dimension.
5. `Downsample/ReduceSize:` It is very similar to **Global Token Gen.** module except it
uses **CNN** instead of **MaxPooling** to downsample with additional **Layer
Normalization** modules.
6. `Head:` It is the module responsible for the classification task.
1. `Pooling:` It converts `N x 2D` features to `N x 1D` features.
2. `Classifier:` It processes `N x 1D` features to make a decision about class.
I've annotated the architecture figure to make it easier to digest,
<img src="https://raw.githubusercontent.com/awsaf49/gcvit-tf/main/image/arch_annot.png">
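As a rough mapping (based on the code in this example) from the components above to the classes
defined below:

```python
# Stem / PatchEmbed              -> PatchEmbed
# Global Token Gen.              -> GlobalQueryGenerator (built from FeatureExtraction)
# Block (Local / Global MSA)     -> Block + WindowAttention
# Downsample / ReduceSize        -> ReduceSize
# Level                          -> Level
# Head (Pooling + Classifier)    -> GlobalAvgPool2D + Dense inside GCViT
```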
"""
"""
### Unit Blocks
> **Note:** These blocks are used to build other modules throughout the paper. Most of the
blocks are either borrowed from other work or are modified versions of older work.
1. `SqueezeAndExcitation`: **Squeeze-Excitation (SE)** aka **Bottleneck** module acts as a
kind of **channel
attention**. It consists of **AvgPooling**, **Dense/FullyConnected (FC)/Linear**,
**GELU** and **Sigmoid** modules.
<img src="https://raw.githubusercontent.com/awsaf49/gcvit-tf/main/image/se_annot.png"
width=400>
2. `Fused-MBConv:` This is similar to the one used in **EfficientNetV2**. It uses
**Depthwise-Conv**, **GELU**, **SqueezeAndExcitation** and **Conv** to extract features with
a residual
connection. Note that no new module is declared for this one; we simply apply the
corresponding modules directly.
<img src="https://raw.githubusercontent.com/awsaf49/gcvit-tf/main/image/fmb_annot.png"
width=350>
3. `ReduceSize`: It is a **CNN** based **downsample** module which uses the above-mentioned
`Fused-MBConv` module to extract features, a **strided Conv** to simultaneously reduce the
spatial dimension and increase the channel-wise dimension of the features, and finally a
**LayerNormalization** module to normalize features. In the paper/figure this module is
referred to as the **downsample** module. It is worth mentioning that **SwinTransformer**
used a `PatchMerging` module instead of `ReduceSize` to reduce the spatial dimension and
increase the channel-wise dimension, which uses a **fully-connected/dense/linear** module.
According to the **GCViT** paper, one of the purposes of using `ReduceSize` is to add
inductive bias through the **CNN** module.
<img src="https://raw.githubusercontent.com/awsaf49/gcvit-tf/main/image/down_annot.png"
width=300>
4. `MLP:` This is our very own **Multi Layer Perceptron** module. It is a
feed-forward/fully-connected/linear module which simply projects the input to an arbitrary
dimension.
"""
class SqueezeAndExcitation(layers.Layer):
"""Squeeze and excitation block.
Args:
output_dim: output features dimension, if `None` use same dim as input.
expansion: expansion ratio.
"""
def __init__(self, output_dim=None, expansion=0.25, **kwargs):
super().__init__(**kwargs)
self.expansion = expansion
self.output_dim = output_dim
def build(self, input_shape):
inp = input_shape[-1]
self.output_dim = self.output_dim or inp
self.avg_pool = layers.GlobalAvgPool2D(keepdims=True, name="avg_pool")
self.fc = [
layers.Dense(int(inp * self.expansion), use_bias=False, name="fc_0"),
layers.Activation("gelu", name="fc_1"),
layers.Dense(self.output_dim, use_bias=False, name="fc_2"),
layers.Activation("sigmoid", name="fc_3"),
]
super().build(input_shape)
def call(self, inputs, **kwargs):
x = self.avg_pool(inputs)
for layer in self.fc:
x = layer(x)
return x * inputs
class ReduceSize(layers.Layer):
"""Down-sampling block.
Args:
keepdims: if False spatial dim is reduced and channel dim is increased
"""
def __init__(self, keepdims=False, **kwargs):
super().__init__(**kwargs)
self.keepdims = keepdims
def build(self, input_shape):
embed_dim = input_shape[-1]
dim_out = embed_dim if self.keepdims else 2 * embed_dim
self.pad1 = layers.ZeroPadding2D(1, name="pad1")
self.pad2 = layers.ZeroPadding2D(1, name="pad2")
self.conv = [
layers.DepthwiseConv2D(
kernel_size=3, strides=1, padding="valid", use_bias=False, name="conv_0"
),
layers.Activation("gelu", name="conv_1"),
SqueezeAndExcitation(name="conv_2"),
layers.Conv2D(
embed_dim,
kernel_size=1,
strides=1,
padding="valid",
use_bias=False,
name="conv_3",
),
]
self.reduction = layers.Conv2D(
dim_out,
kernel_size=3,
strides=2,
padding="valid",
use_bias=False,
name="reduction",
)
self.norm1 = layers.LayerNormalization(
-1, 1e-05, name="norm1"
) # eps like PyTorch
self.norm2 = layers.LayerNormalization(-1, 1e-05, name="norm2")
def call(self, inputs, **kwargs):
x = self.norm1(inputs)
xr = self.pad1(x)
for layer in self.conv:
xr = layer(xr)
x = x + xr
x = self.pad2(x)
x = self.reduction(x)
x = self.norm2(x)
return x
class MLP(layers.Layer):
"""Multi-Layer Perceptron (MLP) block.
Args:
hidden_features: hidden features dimension.
out_features: output features dimension.
activation: activation function.
dropout: dropout rate.
"""
def __init__(
self,
hidden_features=None,
out_features=None,
activation="gelu",
dropout=0.0,
**kwargs,
):
super().__init__(**kwargs)
self.hidden_features = hidden_features
self.out_features = out_features
self.activation = activation
self.dropout = dropout
def build(self, input_shape):
self.in_features = input_shape[-1]
self.hidden_features = self.hidden_features or self.in_features
self.out_features = self.out_features or self.in_features
self.fc1 = layers.Dense(self.hidden_features, name="fc1")
self.act = layers.Activation(self.activation, name="act")
self.fc2 = layers.Dense(self.out_features, name="fc2")
self.drop1 = layers.Dropout(self.dropout, name="drop1")
self.drop2 = layers.Dropout(self.dropout, name="drop2")
def call(self, inputs, **kwargs):
x = self.fc1(inputs)
x = self.act(x)
x = self.drop1(x)
x = self.fc2(x)
x = self.drop2(x)
return x
"""
### Stem
> **Notes**: In the code, this module is referred to as **PatchEmbed**, but in the paper it
is referred to as **Stem**.
In the model, we have first used `patch_embed` module. Let's try to understand this
module. As we can see from the `call` method,
1. This module first **pads** the input
2. Then uses **convolutions** to extract patches with embeddings.
3. Finally, it uses the `ReduceSize` module to extract features with **convolution** and
downsample the feature map once more, without increasing the channel dimension.
4. One important point to notice is that, unlike **ViT** or **SwinTransformer**, **GCViT**
creates **overlapping patches**. We can notice that from the code,
`Conv2D(self.embed_dim, kernel_size=3, strides=2, name='proj')`. If we wanted
**non-overlapping** patches then we would've used the same `kernel_size` and `stride`.
5. This module reduces the spatial dimension of input by `4x`.
> Summary: image → padding → convolution →
(feature_extract + downsample)
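To make the shape bookkeeping concrete, here is an illustrative trace (assuming a `224 x 224 x 3`
input and the `embed_dim=64` of the XXTiny configuration used later in this example):

```python
# images                        -> (batch, 224, 224, 3)
# ZeroPadding2D(1)              -> (batch, 226, 226, 3)
# Conv2D(64, 3, strides=2)      -> (batch, 112, 112, 64)  # overlapping patches
# ReduceSize(keepdims=True)     -> (batch, 56, 56, 64)    # 4x total spatial reduction
```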
"""
class PatchEmbed(layers.Layer):
"""Patch embedding block.
Args:
embed_dim: feature size dimension.
"""
def __init__(self, embed_dim, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
def build(self, input_shape):
self.pad = layers.ZeroPadding2D(1, name="pad")
self.proj = layers.Conv2D(self.embed_dim, 3, 2, name="proj")
self.conv_down = ReduceSize(keepdims=True, name="conv_down")
def call(self, inputs, **kwargs):
x = self.pad(inputs)
x = self.proj(x)
x = self.conv_down(x)
return x
"""
### Global Token Gen.
> **Notes:** It is one of the two **CNN** modules that is used to impose inductive bias.
As we can see from the above cell, in the `level` we have first used `to_q_global/Global
Token Gen./FeatureExtraction`. Let's try to understand how it works,
* This module is a series of `FeatureExtraction` modules; according to the paper we need to
repeat this module `K` times, where `K = log2(H/h)`, `H = feature_map_height`,
`W = feature_map_width`, and `h x w` is the window size (a worked example follows after this list).
* `FeatureExtraction:` This layer is very similar to the `ReduceSize` module except it uses
a **MaxPooling** module to reduce the dimension, it doesn't increase the feature dimension
(channel-wise) and it doesn't use **LayerNormalization**. This module is used in the
`Global Token Gen.` module repeatedly to generate **global tokens** for
**global-context-attention**.
* One important point to notice from the figure is that the **global tokens** are shared
across the whole image, which means we use only **one global window** for **all local
tokens** in an image. This makes the computation very efficient.
* For an input feature map with shape `(B, H, W, C)`, we'll get an output shape of `(B, h, w, C)`.
If we copy these global tokens for a total of `M` local windows in an image, where
`M = (H x W)/(h x w) = num_windows`, then the output shape is `(B * M, h, w, C)`.
> Summary: This module is used to `resize` the image to fit the window.
<img
src="https://raw.githubusercontent.com/awsaf49/gcvit-tf/main/image/global_token_annot.png"
width=800>
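As an illustrative example, for the first stage of the model built later in this example the
feature map is `56 x 56` and the window size is `7`, so `K = log2(56/7) = 3` down-sampling
`FeatureExtraction` blocks are needed:

```python
# feature map: (B, 56, 56, C), window: 7 x 7
# K = log2(56 / 7) = 3  ->  keepdims = (False, False, False)
# GlobalQueryGenerator output: (B, 7, 7, C), shared by all (56 * 56) / (7 * 7) = 64 windows
```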
"""
class FeatureExtraction(layers.Layer):
"""Feature extraction block.
Args:
keepdims: bool argument for maintaining the resolution.
"""
def __init__(self, keepdims=False, **kwargs):
super().__init__(**kwargs)
self.keepdims = keepdims
def build(self, input_shape):
embed_dim = input_shape[-1]
self.pad1 = layers.ZeroPadding2D(1, name="pad1")
self.pad2 = layers.ZeroPadding2D(1, name="pad2")
self.conv = [
layers.DepthwiseConv2D(3, 1, use_bias=False, name="conv_0"),
layers.Activation("gelu", name="conv_1"),
SqueezeAndExcitation(name="conv_2"),
layers.Conv2D(embed_dim, 1, 1, use_bias=False, name="conv_3"),
]
if not self.keepdims:
self.pool = layers.MaxPool2D(3, 2, name="pool")
super().build(input_shape)
def call(self, inputs, **kwargs):
x = inputs
xr = self.pad1(x)
for layer in self.conv:
xr = layer(xr)
x = x + xr
if not self.keepdims:
x = self.pool(self.pad2(x))
return x
class GlobalQueryGenerator(layers.Layer):
"""Global query generator.
Args:
keepdims: to keep the dimension of FeatureExtraction layer.
For instance, repeating log(56/7) = 3 blocks, with input
window dimension 56 and output window dimension 7 at down-sampling
ratio 2. Please check Fig.5 of GC ViT paper for details.
"""
def __init__(self, keepdims=False, **kwargs):
super().__init__(**kwargs)
self.keepdims = keepdims
def build(self, input_shape):
self.to_q_global = [
FeatureExtraction(keepdims, name=f"to_q_global_{i}")
for i, keepdims in enumerate(self.keepdims)
]
super().build(input_shape)
def call(self, inputs, **kwargs):
x = inputs
for layer in self.to_q_global:
x = layer(x)
return x
"""
### Attention
> **Notes:** This is the core contribution of the paper.
As we can see from the `call` method,
1. `WindowAttention` module applies both **local** and **global** window attention
depending on `global_query` parameter.
2. First it converts input features into `query, key, value` for local attention and
`key, value` for global attention. For global attention, it takes global query from
`Global Token Gen.`. One thing to notice from the code is that we divide the **features
or embed_dim** among all the **heads of Transformer** to reduce the computation.
`qkv = tf.reshape(qkv, [B_, N, self.qkv_size, self.num_heads, C // self.num_heads])`
3. Before sending query, key and value for attention, the **global token** goes through an
important process. The same global tokens (one global window) get copied for all the local
windows to increase efficiency.
`q_global = tf.repeat(q_global, repeats=B_//B, axis=0)`, here `B_//B` means `num_windows`
in a image.
4. Then it simply applies `local-window-self-attention` or `global-window-attention`
depending on the `global_query` parameter. One thing to notice from the code is that we are
adding the **relative-positional-embedding** to the **attention logits** instead of the
**patch embedding**.
`attn = attn + relative_position_bias[None,]`
<img src="https://raw.githubusercontent.com/awsaf49/gcvit-tf/main/image/lvg_msa.PNG"
width=800>
5. Now, let's think for a bit and try to understand what is happening here. Let's focus
on the figure below. We can see from the left, that in the **local-attention** the
**query is local** and it's **limited to the local window** (red square border) hence we
don't have access to long-range information. But on the right, due to the **global
query**, we're now **not limited to local windows** (blue square border) and we have
access to long-range information.
<img src="https://raw.githubusercontent.com/awsaf49/gcvit-tf/main/image/lvg_arch.PNG"
width=800>
6. In **ViT** we compare (attention) image-tokens with image-tokens, in
**SwinTransformer** we compare window-tokens with window-tokens but in **GCViT** we
compare image-tokens with window-tokens. But now you may ask, how can we compare (attend)
image-tokens with window-tokens when image-tokens have larger dimensions than
window-tokens? (From the above figure, image-tokens have shape `(1, 8, 8, 3)` and
window-tokens have shape `(1, 4, 4, 3)`.) Yes, you are right: we can't directly compare
them, hence we resize image-tokens to fit window-tokens with the `Global Token
Gen./FeatureExtraction` **CNN** module. The following table should give you a clear
comparison,
| Model | Query Tokens | Key-Value Tokens | Attention Type | Attention Coverage |
|------------------|-----------------|-------------------|---------------------------|--------------------|
| ViT | image | image | self-attention | global |
| SwinTransformer | window | window | self-attention | local |
| **GCViT** | **resized-image** | **window** | **image-window attention** | **global** |
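The difference between the two paths mostly comes down to how the `qkv` projection is split; the
following rough trace (with the hypothetical value `window_size=7`, so `N = 49` tokens per window)
summarizes it:

```python
# Local attention (global_query=False): qkv_size = 3
#   q, k, v all come from the local window tokens.
# Global attention (global_query=True): qkv_size = 2
#   k, v come from the local window tokens, while q comes from the shared global
#   tokens, repeated once per window:
#   q_global (B, 7, 7, C) -> repeat B_//B times -> reshape to (B_, 49, num_heads, C//num_heads)
```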
"""
class WindowAttention(layers.Layer):
"""Local window attention.
This implementation was proposed by
[Liu et al., 2021](https://arxiv.org/abs/2103.14030) in SwinTransformer.
Args:
window_size: window size.
num_heads: number of attention head.
global_query: if the input contains global_query
qkv_bias: bool argument for query, key, value learnable bias.
qk_scale: bool argument to scaling query, key.
attention_dropout: attention dropout rate.
projection_dropout: output dropout rate.
"""
def __init__(
self,
window_size,
num_heads,
global_query,
qkv_bias=True,
qk_scale=None,
attention_dropout=0.0,
projection_dropout=0.0,
**kwargs,
):
super().__init__(**kwargs)
window_size = (window_size, window_size)
self.window_size = window_size
self.num_heads = num_heads
self.global_query = global_query
self.qkv_bias = qkv_bias
self.qk_scale = qk_scale
self.attention_dropout = attention_dropout
self.projection_dropout = projection_dropout
def build(self, input_shape):
embed_dim = input_shape[0][-1]
head_dim = embed_dim // self.num_heads
self.scale = self.qk_scale or head_dim**-0.5
self.qkv_size = 3 - int(self.global_query)
self.qkv = layers.Dense(
embed_dim * self.qkv_size, use_bias=self.qkv_bias, name="qkv"
)
self.relative_position_bias_table = self.add_weight(
name="relative_position_bias_table",
shape=[
(2 * self.window_size[0] - 1) * (2 * self.window_size[1] - 1),
self.num_heads,
],
initializer=keras.initializers.TruncatedNormal(stddev=0.02),
trainable=True,
dtype=self.dtype,
)
self.attn_drop = layers.Dropout(self.attention_dropout, name="attn_drop")
self.proj = layers.Dense(embed_dim, name="proj")
self.proj_drop = layers.Dropout(self.projection_dropout, name="proj_drop")
self.softmax = layers.Activation("softmax", name="softmax")
super().build(input_shape)
def get_relative_position_index(self):
coords_h = ops.arange(self.window_size[0])
coords_w = ops.arange(self.window_size[1])
coords = ops.stack(ops.meshgrid(coords_h, coords_w, indexing="ij"), axis=0)
coords_flatten = ops.reshape(coords, [2, -1])
relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :]
relative_coords = ops.transpose(relative_coords, axes=[1, 2, 0])
relative_coords_xx = relative_coords[:, :, 0] + self.window_size[0] - 1
relative_coords_yy = relative_coords[:, :, 1] + self.window_size[1] - 1
relative_coords_xx = relative_coords_xx * (2 * self.window_size[1] - 1)
relative_position_index = relative_coords_xx + relative_coords_yy
return relative_position_index
def call(self, inputs, **kwargs):
if self.global_query:
inputs, q_global = inputs
B = ops.shape(q_global)[0] # B, N, C
else:
inputs = inputs[0]
B_, N, C = ops.shape(inputs) # B*num_window, num_tokens, channels
qkv = self.qkv(inputs)
qkv = ops.reshape(
qkv, [B_, N, self.qkv_size, self.num_heads, C // self.num_heads]
)
qkv = ops.transpose(qkv, [2, 0, 3, 1, 4])
if self.global_query:
k, v = ops.split(
qkv, indices_or_sections=2, axis=0
            )  # for unknown shape, num=None would throw an error
q_global = ops.repeat(
q_global, repeats=B_ // B, axis=0
) # num_windows = B_//B => q_global same for all windows in a img
q = ops.reshape(q_global, [B_, N, self.num_heads, C // self.num_heads])
q = ops.transpose(q, axes=[0, 2, 1, 3])
else:
q, k, v = ops.split(qkv, indices_or_sections=3, axis=0)
q = ops.squeeze(q, axis=0)
k = ops.squeeze(k, axis=0)
v = ops.squeeze(v, axis=0)
q = q * self.scale
attn = q @ ops.transpose(k, axes=[0, 1, 3, 2])
relative_position_bias = ops.take(
self.relative_position_bias_table,
ops.reshape(self.get_relative_position_index(), [-1]),
)
relative_position_bias = ops.reshape(
relative_position_bias,
[
self.window_size[0] * self.window_size[1],
self.window_size[0] * self.window_size[1],
-1,
],
)
relative_position_bias = ops.transpose(relative_position_bias, axes=[2, 0, 1])
attn = attn + relative_position_bias[None,]
attn = self.softmax(attn)
attn = self.attn_drop(attn)
x = ops.transpose((attn @ v), axes=[0, 2, 1, 3])
x = ops.reshape(x, [B_, N, C])
x = self.proj_drop(self.proj(x))
return x
"""
### Block
> **Notes:** This module doesn't have any Convolutional module.
In the `level`, the second module that we have used is `block`. Let's try to understand how it
works. As we can see from the `call` method,
1. `Block` module takes either only feature_maps for local attention or additional global
query for global attention.
2. Before sending feature maps for attention, this module converts **batch feature maps**
to **batch windows** as we'll be applying **Window Attention**.
3. Then we send the **batch windows** for attention.
4. After attention has been applied we revert **batch windows** to **batch feature maps**.
5. Before sending the attention to applied features for output, this module applies
**Stochastic Depth** regularization in the residual connection. Also, before applying
**Stochastic Depth** it rescales the input with trainable parameters. Note that, this
**Stochastic Depth** block hasn't been shown in the figure of the paper.
<img src="https://raw.githubusercontent.com/awsaf49/gcvit-tf/main/image/block2.JPG"
width=400>
### Window
In the `block` module, we have created **windows** before and after applying attention.
Let's try to understand how we're creating windows,
* The following module converts feature maps `(B, H, W, C)` to stacked windows
`(B x H/h x W/w, h, w, C)` → `(num_windows_batch, window_size, window_size, channel)`
* This module uses `reshape` & `transpose` to create these windows out of the image instead
of iterating over them (a small sketch follows below).
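A minimal, self-contained sketch of the same reshape/transpose trick (with made-up toy sizes, not
the model's real ones) looks like this:

```python
import numpy as np

B, H, W, C, win = 1, 4, 4, 3, 2  # toy sizes
x = np.arange(B * H * W * C).reshape(B, H, W, C)
windows = (
    x.reshape(B, H // win, win, W // win, win, C)
    .transpose(0, 1, 3, 2, 4, 5)
    .reshape(-1, win, win, C)
)
print(windows.shape)  # (4, 2, 2, 3) -> (num_windows * B, win, win, C)
```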
"""
class Block(layers.Layer):
"""GCViT block.
Args:
window_size: window size.
num_heads: number of attention head.
global_query: apply global window attention
mlp_ratio: MLP ratio.
qkv_bias: bool argument for query, key, value learnable bias.
qk_scale: bool argument to scaling query, key.
drop: dropout rate.
attention_dropout: attention dropout rate.
path_drop: drop path rate.
activation: activation function.
layer_scale: layer scaling coefficient.
"""
def __init__(
self,
window_size,
num_heads,
global_query,
mlp_ratio=4.0,
qkv_bias=True,
qk_scale=None,
dropout=0.0,
attention_dropout=0.0,
path_drop=0.0,
activation="gelu",
layer_scale=None,
**kwargs,
):
super().__init__(**kwargs)
self.window_size = window_size
self.num_heads = num_heads
self.global_query = global_query
self.mlp_ratio = mlp_ratio
self.qkv_bias = qkv_bias
self.qk_scale = qk_scale
self.dropout = dropout
self.attention_dropout = attention_dropout
self.path_drop = path_drop
self.activation = activation
self.layer_scale = layer_scale
def build(self, input_shape):
B, H, W, C = input_shape[0]
self.norm1 = layers.LayerNormalization(-1, 1e-05, name="norm1")
self.attn = WindowAttention(
window_size=self.window_size,
num_heads=self.num_heads,
global_query=self.global_query,
qkv_bias=self.qkv_bias,
qk_scale=self.qk_scale,
attention_dropout=self.attention_dropout,
projection_dropout=self.dropout,
name="attn",
)
self.drop_path1 = DropPath(self.path_drop)
self.drop_path2 = DropPath(self.path_drop)
self.norm2 = layers.LayerNormalization(-1, 1e-05, name="norm2")
self.mlp = MLP(
hidden_features=int(C * self.mlp_ratio),
dropout=self.dropout,
activation=self.activation,
name="mlp",
)
if self.layer_scale is not None:
self.gamma1 = self.add_weight(
name="gamma1",
shape=[C],
initializer=keras.initializers.Constant(self.layer_scale),
trainable=True,
dtype=self.dtype,
)
self.gamma2 = self.add_weight(
name="gamma2",
shape=[C],
initializer=keras.initializers.Constant(self.layer_scale),
trainable=True,
dtype=self.dtype,
)
else:
self.gamma1 = 1.0
self.gamma2 = 1.0
self.num_windows = int(H // self.window_size) * int(W // self.window_size)
super().build(input_shape)
def call(self, inputs, **kwargs):
if self.global_query:
inputs, q_global = inputs
else:
inputs = inputs[0]
B, H, W, C = ops.shape(inputs)
x = self.norm1(inputs)
# create windows and concat them in batch axis
x = self.window_partition(x, self.window_size) # (B_, win_h, win_w, C)
# flatten patch
x = ops.reshape(x, [-1, self.window_size * self.window_size, C])
# attention
if self.global_query:
x = self.attn([x, q_global])
else:
x = self.attn([x])
# reverse window partition
x = self.window_reverse(x, self.window_size, H, W, C)
# FFN
x = inputs + self.drop_path1(x * self.gamma1)
x = x + self.drop_path2(self.gamma2 * self.mlp(self.norm2(x)))
return x
def window_partition(self, x, window_size):
"""
Args:
x: (B, H, W, C)
window_size: window size
Returns:
local window features (num_windows*B, window_size, window_size, C)
"""
B, H, W, C = ops.shape(x)
x = ops.reshape(
x,
[
-1,
H // window_size,
window_size,
W // window_size,
window_size,
C,
],
)
x = ops.transpose(x, axes=[0, 1, 3, 2, 4, 5])
windows = ops.reshape(x, [-1, window_size, window_size, C])
return windows
def window_reverse(self, windows, window_size, H, W, C):
"""
Args:
windows: local window features (num_windows*B, window_size, window_size, C)
window_size: Window size
H: Height of image
W: Width of image
C: Channel of image
Returns:
x: (B, H, W, C)
"""
x = ops.reshape(
windows,
[
-1,
H // window_size,
W // window_size,
window_size,
window_size,
C,
],
)
x = ops.transpose(x, axes=[0, 1, 3, 2, 4, 5])
x = ops.reshape(x, [-1, H, W, C])
return x
"""
### Level
> **Note:** This module has both Transformer and CNN modules.
In the model, the second module that we have used is `level`. Let's try to understand
this module. As we can see from the `call` method,
1. First it creates **global_token** with a series of `FeatureExtraction` modules. As
we saw
earlier, `FeatureExtraction` is nothing but a simple **CNN** based module.
2. Then it uses a series of `Block` modules to apply **local or global window attention**
depending on depth level.
3. Finally, it uses `ReduceSize` to reduce the dimension of **contextualized features**.
> Summary: feature_map → global_token → local/global window
attention → downsample
<img src="https://raw.githubusercontent.com/awsaf49/gcvit-tf/main/image/level.png"
width=400>
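For instance (an illustrative sketch of the code below), a level with `depth=2` alternates a local
and a global block before down-sampling:

```python
# global_query = bool(i % 2) for block i:
#   block 0 -> local window attention
#   block 1 -> global window attention (uses the shared global query)
# downsample (all levels except the last): (B, H, W, C) -> (B, H//2, W//2, 2*C)
```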
"""
class Level(layers.Layer):
"""GCViT level.
Args:
depth: number of layers in each stage.
num_heads: number of heads in each stage.
window_size: window size in each stage.
keepdims: dims to keep in FeatureExtraction.
downsample: bool argument for down-sampling.
mlp_ratio: MLP ratio.
qkv_bias: bool argument for query, key, value learnable bias.
qk_scale: bool argument to scaling query, key.
drop: dropout rate.
attention_dropout: attention dropout rate.
path_drop: drop path rate.
layer_scale: layer scaling coefficient.
"""
def __init__(
self,
depth,
num_heads,
window_size,
keepdims,
downsample=True,
mlp_ratio=4.0,
qkv_bias=True,
qk_scale=None,
dropout=0.0,
attention_dropout=0.0,
path_drop=0.0,
layer_scale=None,
**kwargs,
):
super().__init__(**kwargs)
self.depth = depth
self.num_heads = num_heads
self.window_size = window_size
self.keepdims = keepdims
self.downsample = downsample
self.mlp_ratio = mlp_ratio
self.qkv_bias = qkv_bias
self.qk_scale = qk_scale
self.dropout = dropout
self.attention_dropout = attention_dropout
self.path_drop = path_drop
self.layer_scale = layer_scale
def build(self, input_shape):
path_drop = (
[self.path_drop] * self.depth
if not isinstance(self.path_drop, list)
else self.path_drop
)
self.blocks = [
Block(
window_size=self.window_size,
num_heads=self.num_heads,
global_query=bool(i % 2),
mlp_ratio=self.mlp_ratio,
qkv_bias=self.qkv_bias,
qk_scale=self.qk_scale,
dropout=self.dropout,
attention_dropout=self.attention_dropout,
path_drop=path_drop[i],
layer_scale=self.layer_scale,
name=f"blocks_{i}",
)
for i in range(self.depth)
]
self.down = ReduceSize(keepdims=False, name="downsample")
self.q_global_gen = GlobalQueryGenerator(self.keepdims, name="q_global_gen")
super().build(input_shape)
def call(self, inputs, **kwargs):
x = inputs
q_global = self.q_global_gen(x) # shape: (B, win_size, win_size, C)
for i, blk in enumerate(self.blocks):
if i % 2:
x = blk([x, q_global]) # shape: (B, H, W, C)
else:
x = blk([x]) # shape: (B, H, W, C)
if self.downsample:
x = self.down(x) # shape: (B, H//2, W//2, 2*C)
return x
"""
### Model
Let's directly jump to the model. As we can see from the `call` method,
1. It creates patch embeddings from an image. This layer doesn't flatten these
embeddings, which means the output of this module will be
`(batch, height/4, width/4, embed_dim)` instead of
`(batch, (height x width)/16, embed_dim)`.
2. Then it applies `Dropout` module which randomly sets input units to 0.
3. It passes these embeddings to series of `Level` modules which we are calling `level`
where,
1. Global token is generated
1. Both local & global attention is applied
1. Finally downsample is applied.
4. So, the output after `n` **levels** has shape `(batch, height/(4 x 2^{n-1}),
width/(4 x 2^{n-1}), embed_dim x 2^{n-1})`. In the last level, the
paper doesn't use **downsample** and doesn't increase **channels**.
5. Output of above layer is normalized using `LayerNormalization` module.
6. In the head, 2D features are converted to 1D features with `Pooling` module. Output
shape after this module is `(batch, embed_dim x 2^{n-1})`
7. Finally, pooled features are sent to `Dense/Linear` module for classification.
> Summary: image → (patches + embedding) → dropout
→ (attention + feature extraction) → normalization →
pooling → classify
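Putting it together, an illustrative end-to-end shape trace for the XXTiny configuration used below
(`embed_dim=64`, `depths=(2, 2, 6, 2)`) on a `224 x 224 x 3` image would be:

```python
# stem / patch_embed     : (B, 224, 224, 3) -> (B, 56, 56, 64)
# level 0 (downsample)   : -> (B, 28, 28, 128)
# level 1 (downsample)   : -> (B, 14, 14, 256)
# level 2 (downsample)   : -> (B, 7, 7, 512)
# level 3 (no downsample): -> (B, 7, 7, 512)
# norm + pool            : -> (B, 512)
# head                   : -> (B, num_classes)
```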
"""
class GCViT(keras.Model):
"""GCViT model.
Args:
window_size: window size in each stage.
embed_dim: feature size dimension.
depths: number of layers in each stage.
num_heads: number of heads in each stage.
drop_rate: dropout rate.
mlp_ratio: MLP ratio.
qkv_bias: bool argument for query, key, value learnable bias.
qk_scale: bool argument to scaling query, key.
attention_dropout: attention dropout rate.
path_drop: drop path rate.
layer_scale: layer scaling coefficient.
num_classes: number of classes.
head_activation: activation function for head.
"""
def __init__(
self,
window_size,
embed_dim,
depths,
num_heads,
drop_rate=0.0,
mlp_ratio=3.0,
qkv_bias=True,
qk_scale=None,
attention_dropout=0.0,
path_drop=0.1,
layer_scale=None,
num_classes=1000,
head_activation="softmax",
**kwargs,
):
super().__init__(**kwargs)
self.window_size = window_size
self.embed_dim = embed_dim
self.depths = depths
self.num_heads = num_heads
self.drop_rate = drop_rate
self.mlp_ratio = mlp_ratio
self.qkv_bias = qkv_bias
self.qk_scale = qk_scale
self.attention_dropout = attention_dropout
self.path_drop = path_drop
self.layer_scale = layer_scale
self.num_classes = num_classes
self.head_activation = head_activation
self.patch_embed = PatchEmbed(embed_dim=embed_dim, name="patch_embed")
self.pos_drop = layers.Dropout(drop_rate, name="pos_drop")
path_drops = np.linspace(0.0, path_drop, sum(depths))
keepdims = [(0, 0, 0), (0, 0), (1,), (1,)]
self.levels = []
for i in range(len(depths)):
path_drop = path_drops[sum(depths[:i]) : sum(depths[: i + 1])].tolist()
level = Level(
depth=depths[i],
num_heads=num_heads[i],
window_size=window_size[i],
keepdims=keepdims[i],
downsample=(i < len(depths) - 1),
mlp_ratio=mlp_ratio,
qkv_bias=qkv_bias,
qk_scale=qk_scale,
dropout=drop_rate,
attention_dropout=attention_dropout,
path_drop=path_drop,
layer_scale=layer_scale,
name=f"levels_{i}",
)
self.levels.append(level)
self.norm = layers.LayerNormalization(axis=-1, epsilon=1e-05, name="norm")
self.pool = layers.GlobalAvgPool2D(name="pool")
self.head = layers.Dense(num_classes, name="head", activation=head_activation)
def build(self, input_shape):
super().build(input_shape)
self.built = True
def call(self, inputs, **kwargs):
x = self.patch_embed(inputs) # shape: (B, H, W, C)
x = self.pos_drop(x)
for level in self.levels:
x = level(x) # shape: (B, H_, W_, C_)
x = self.norm(x)
x = self.pool(x) # shape: (B, C__)
x = self.head(x)
return x
def build_graph(self, input_shape=(224, 224, 3)):
"""
ref: https://www.kaggle.com/code/ipythonx/tf-hybrid-efficientnet-swin-transformer-gradcam
"""
x = keras.Input(shape=input_shape)
return keras.Model(inputs=[x], outputs=self.call(x), name=self.name)
def summary(self, input_shape=(224, 224, 3)):
return self.build_graph(input_shape).summary()
"""
## Build Model
* Let's build a complete model with all the modules that we've explained above. We'll
build **GCViT-XXTiny** model with the configuration mentioned in the paper.
* Also we'll load the ported official **pre-trained** weights and try for some
predictions.
"""
# Model Configs
config = {
"window_size": (7, 7, 14, 7),
"embed_dim": 64,
"depths": (2, 2, 6, 2),
"num_heads": (2, 4, 8, 16),
"mlp_ratio": 3.0,
"path_drop": 0.2,
}
ckpt_link = (
"https://github.com/awsaf49/gcvit-tf/releases/download/v1.1.6/gcvitxxtiny.keras"
)
# Build Model
model = GCViT(**config)
inp = ops.array(np.random.uniform(size=(1, 224, 224, 3)))
out = model(inp)
# Load Weights
ckpt_path = keras.utils.get_file(ckpt_link.split("/")[-1], ckpt_link)
model.load_weights(ckpt_path)
# Summary
model.summary((224, 224, 3))
"""
## Sanity check for Pre-Trained Weights
"""
img = keras.applications.imagenet_utils.preprocess_input(
chelsea(), mode="torch"
) # Chelsea the cat
img = ops.image.resize(img, (224, 224))[None,] # resize & create batch
pred = model(img)
pred_dec = keras.applications.imagenet_utils.decode_predictions(pred)[0]
print("\n# Image:")
plt.figure(figsize=(6, 6))
plt.imshow(chelsea())
plt.show()
print()
print("# Prediction (Top 5):")
for i in range(5):
print("{:<12} : {:0.2f}".format(pred_dec[i][1], pred_dec[i][2]))
"""
# Fine-tune **GCViT** Model
In the following cells, we will fine-tune the **GCViT** model on the Flower dataset, which
consists of `5` classes.
"""
"""
### Configs
"""
# Model
IMAGE_SIZE = (224, 224)
# Hyper Params
BATCH_SIZE = 32
EPOCHS = 5
# Dataset
CLASSES = [
"dandelion",
"daisy",
"tulips",
"sunflowers",
"roses",
] # don't change the order
# Other constants
MEAN = 255 * np.array([0.485, 0.456, 0.406], dtype="float32") # imagenet mean
STD = 255 * np.array([0.229, 0.224, 0.225], dtype="float32") # imagenet std
AUTO = tf.data.AUTOTUNE
"""
## Data Loader
"""
def make_dataset(dataset: tf.data.Dataset, train: bool, image_size=IMAGE_SIZE):
def preprocess(image, label):
# for training, do augmentation
if train:
if tf.random.uniform(shape=[]) > 0.5:
image = tf.image.flip_left_right(image)
image = tf.image.resize(image, size=image_size, method="bicubic")
image = (image - MEAN) / STD # normalization
return image, label
if train:
dataset = dataset.shuffle(BATCH_SIZE * 10)
return dataset.map(preprocess, AUTO).batch(BATCH_SIZE).prefetch(AUTO)
"""
### Flower Dataset
"""
train_dataset, val_dataset = tfds.load(
"tf_flowers",
split=["train[:90%]", "train[90%:]"],
as_supervised=True,
try_gcs=False, # gcs_path is necessary for tpu,
)
train_dataset = make_dataset(train_dataset, True)
val_dataset = make_dataset(val_dataset, False)
"""
### Re-Build Model for Flower Dataset
"""
# Re-Build Model
model = GCViT(**config, num_classes=5)
inp = ops.array(np.random.uniform(size=(1, 224, 224, 3)))
out = model(inp)
# Load Weights
ckpt_path = keras.utils.get_file(ckpt_link.split("/")[-1], ckpt_link)
model.load_weights(ckpt_path, skip_mismatch=True)
model.compile(
loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"]
)
"""
### Training
"""
history = model.fit(
train_dataset, validation_data=val_dataset, epochs=EPOCHS, verbose=1
)
"""
## Reference
* [gcvit-tf - A Python library for GCViT with TF2.0](https://github.com/awsaf49/gcvit-tf)
* [gcvit - Official codebase for GCViT](https://github.com/NVlabs/GCVit)
"""
| keras-io/examples/vision/image_classification_using_global_context_vision_transformer.py/0 | {
"file_path": "keras-io/examples/vision/image_classification_using_global_context_vision_transformer.py",
"repo_id": "keras-io",
"token_count": 18120
} | 97 |
<jupyter_start><jupyter_text>Gradient Centralization for Better Training Performance**Author:** [Rishit Dagli](https://github.com/Rishit-dagli)**Date created:** 06/18/21**Last modified:** 07/25/23**Description:** Implement Gradient Centralization to improve training performance of DNNs. Introduction This example implements [Gradient Centralization](https://arxiv.org/abs/2004.01461), a new optimization technique for Deep Neural Networks by Yong et al., and demonstrates it on Laurence Moroney's [Horses or Humans Dataset](https://www.tensorflow.org/datasets/catalog/horses_or_humans). Gradient Centralization can both speed up the training process and improve the final generalization performance of DNNs. It operates directly on gradients by centralizing the gradient vectors to have zero mean. Gradient Centralization moreover improves the Lipschitzness of the loss function and its gradient so that the training process becomes more efficient and stable. This example requires `tensorflow_datasets`, which can be installed with this command:```pip install tensorflow-datasets``` Setup<jupyter_code>from time import time
import keras
from keras import layers
from keras.optimizers import RMSprop
from keras import ops
from tensorflow import data as tf_data
import tensorflow_datasets as tfds<jupyter_output><empty_output><jupyter_text>Prepare the dataFor this example, we will be using the [Horses or Humansdataset](https://www.tensorflow.org/datasets/catalog/horses_or_humans).<jupyter_code>num_classes = 2
input_shape = (300, 300, 3)
dataset_name = "horses_or_humans"
batch_size = 128
AUTOTUNE = tf_data.AUTOTUNE
(train_ds, test_ds), metadata = tfds.load(
name=dataset_name,
split=[tfds.Split.TRAIN, tfds.Split.TEST],
with_info=True,
as_supervised=True,
)
print(f"Image shape: {metadata.features['image'].shape}")
print(f"Training images: {metadata.splits['train'].num_examples}")
print(f"Test images: {metadata.splits['test'].num_examples}")<jupyter_output><empty_output><jupyter_text>Use Data AugmentationWe will rescale the data to `[0, 1]` and perform simple augmentations to our data.<jupyter_code>rescale = layers.Rescaling(1.0 / 255)
data_augmentation = [
layers.RandomFlip("horizontal_and_vertical"),
layers.RandomRotation(0.3),
layers.RandomZoom(0.2),
]
# Helper to apply augmentation
def apply_aug(x):
for aug in data_augmentation:
x = aug(x)
return x
def prepare(ds, shuffle=False, augment=False):
# Rescale dataset
ds = ds.map(lambda x, y: (rescale(x), y), num_parallel_calls=AUTOTUNE)
if shuffle:
ds = ds.shuffle(1024)
# Batch dataset
ds = ds.batch(batch_size)
# Use data augmentation only on the training set
if augment:
ds = ds.map(
lambda x, y: (apply_aug(x), y),
num_parallel_calls=AUTOTUNE,
)
    # Use buffered prefetching
return ds.prefetch(buffer_size=AUTOTUNE)<jupyter_output><empty_output><jupyter_text>Rescale and augment the data<jupyter_code>train_ds = prepare(train_ds, shuffle=True, augment=True)
test_ds = prepare(test_ds)<jupyter_output><empty_output><jupyter_text>Define a modelIn this section we will define a Convolutional neural network.<jupyter_code>model = keras.Sequential(
[
layers.Input(shape=input_shape),
layers.Conv2D(16, (3, 3), activation="relu"),
layers.MaxPooling2D(2, 2),
layers.Conv2D(32, (3, 3), activation="relu"),
layers.Dropout(0.5),
layers.MaxPooling2D(2, 2),
layers.Conv2D(64, (3, 3), activation="relu"),
layers.Dropout(0.5),
layers.MaxPooling2D(2, 2),
layers.Conv2D(64, (3, 3), activation="relu"),
layers.MaxPooling2D(2, 2),
layers.Conv2D(64, (3, 3), activation="relu"),
layers.MaxPooling2D(2, 2),
layers.Flatten(),
layers.Dropout(0.5),
layers.Dense(512, activation="relu"),
layers.Dense(1, activation="sigmoid"),
]
)<jupyter_output><empty_output><jupyter_text>Implement Gradient Centralization We will now subclass the `RMSProp` optimizer class, modifying the `keras.optimizers.Optimizer.get_gradients()` method, where we now implement Gradient Centralization. On a high level the idea is that, say we obtain our gradients through backpropagation for a Dense or Convolution layer; we then compute the mean of the column vectors of the weight matrix, and then remove the mean from each column vector. The experiments in [this paper](https://arxiv.org/abs/2004.01461) on various applications, including general image classification, fine-grained image classification, detection and segmentation and Person ReID demonstrate that GC can consistently improve the performance of DNN learning. Also, for simplicity we are not implementing gradient clipping functionality here, however this is quite easy to implement. At the moment we are just creating a subclass for the `RMSProp` optimizer, however you could easily reproduce this for any other optimizer or for a custom optimizer in the same way. We will be using this class in the later section when we train a model with Gradient Centralization.<jupyter_code>class GCRMSprop(RMSprop):
def get_gradients(self, loss, params):
# We here just provide a modified get_gradients() function since we are
# trying to just compute the centralized gradients.
grads = []
        gradients = super().get_gradients(loss, params)
for grad in gradients:
grad_len = len(grad.shape)
if grad_len > 1:
axis = list(range(grad_len - 1))
                grad -= ops.mean(grad, axis=axis, keepdims=True)
grads.append(grad)
return grads
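# Illustrative sketch (not part of the original recipe): what centralization does to a
# single toy gradient tensor, i.e. subtract the mean over all axes except the last one.
_toy_grad = ops.ones((3, 3, 4))
_toy_grad = _toy_grad - ops.mean(_toy_grad, axis=(0, 1), keepdims=True)  # now zero-mean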
optimizer = GCRMSprop(learning_rate=1e-4)<jupyter_output><empty_output><jupyter_text>Training utilitiesWe will also create a callback which allows us to easily measure the total training timeand the time taken for each epoch since we are interested in comparing the effect ofGradient Centralization on the model we built above.<jupyter_code>class TimeHistory(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.times = []
def on_epoch_begin(self, batch, logs={}):
self.epoch_time_start = time()
def on_epoch_end(self, batch, logs={}):
self.times.append(time() - self.epoch_time_start)<jupyter_output><empty_output><jupyter_text>Train the model without GCWe now train the model we built earlier without Gradient Centralization which we cancompare to the training performance of the model trained with Gradient Centralization.<jupyter_code>time_callback_no_gc = TimeHistory()
model.compile(
loss="binary_crossentropy",
optimizer=RMSprop(learning_rate=1e-4),
metrics=["accuracy"],
)
model.summary()<jupyter_output><empty_output><jupyter_text>We also save the history since we later want to compare our model trained with and nottrained with Gradient Centralization<jupyter_code>history_no_gc = model.fit(
train_ds, epochs=10, verbose=1, callbacks=[time_callback_no_gc]
)<jupyter_output><empty_output><jupyter_text>Train the model with GCWe will now train the same model, this time using Gradient Centralization,notice our optimizer is the one using Gradient Centralization this time.<jupyter_code>time_callback_gc = TimeHistory()
model.compile(loss="binary_crossentropy", optimizer=optimizer, metrics=["accuracy"])
model.summary()
history_gc = model.fit(train_ds, epochs=10, verbose=1, callbacks=[time_callback_gc])<jupyter_output><empty_output><jupyter_text>Comparing performance<jupyter_code>print("Not using Gradient Centralization")
print(f"Loss: {history_no_gc.history['loss'][-1]}")
print(f"Accuracy: {history_no_gc.history['accuracy'][-1]}")
print(f"Training Time: {sum(time_callback_no_gc.times)}")
print("Using Gradient Centralization")
print(f"Loss: {history_gc.history['loss'][-1]}")
print(f"Accuracy: {history_gc.history['accuracy'][-1]}")
print(f"Training Time: {sum(time_callback_gc.times)}")<jupyter_output><empty_output> | keras-io/examples/vision/ipynb/gradient_centralization.ipynb/0 | {
"file_path": "keras-io/examples/vision/ipynb/gradient_centralization.ipynb",
"repo_id": "keras-io",
"token_count": 2793
} | 98 |
<jupyter_start><jupyter_text>MixUp augmentation for image classification**Author:** [Sayak Paul](https://twitter.com/RisingSayak)**Date created:** 2021/03/06**Last modified:** 2023/07/24**Description:** Data augmentation using the mixup technique for image classification. Introduction _mixup_ is a *domain-agnostic* data augmentation technique proposed in [mixup: Beyond Empirical Risk Minimization](https://arxiv.org/abs/1710.09412) by Zhang et al. It's implemented with the following formulas: x_tilde = lambda * x_i + (1 - lambda) * x_j and y_tilde = lambda * y_i + (1 - lambda) * y_j. (Note that the lambda values are within the [0, 1] range and are sampled from the [Beta distribution](https://en.wikipedia.org/wiki/Beta_distribution).) The technique is quite systematically named: we are literally mixing up the features and their corresponding labels. Implementation-wise it's simple. Neural networks are prone to [memorizing corrupt labels](https://arxiv.org/abs/1611.03530). mixup relaxes this by combining different features with one another (the same happens for the labels too) so that a network does not get overconfident about the relationship between the features and their labels. mixup is especially useful when we are not sure about selecting a set of augmentation transforms for a given dataset (medical imaging datasets, for example). mixup can be extended to a variety of data modalities such as computer vision, natural language processing, speech, and so on. Setup<jupyter_code>import os
os.environ["KERAS_BACKEND"] = "tensorflow"
import numpy as np
import keras
import matplotlib.pyplot as plt
from keras import layers
# TF imports related to tf.data preprocessing
from tensorflow import data as tf_data
from tensorflow import image as tf_image
from tensorflow.random import gamma as tf_random_gamma<jupyter_output><empty_output><jupyter_text>Prepare the datasetIn this example, we will be using the [FashionMNIST](https://github.com/zalandoresearch/fashion-mnist) dataset. But this same recipe canbe used for other classification datasets as well.<jupyter_code>(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()
x_train = x_train.astype("float32") / 255.0
x_train = np.reshape(x_train, (-1, 28, 28, 1))
y_train = keras.ops.one_hot(y_train, 10)
x_test = x_test.astype("float32") / 255.0
x_test = np.reshape(x_test, (-1, 28, 28, 1))
y_test = keras.ops.one_hot(y_test, 10)<jupyter_output><empty_output><jupyter_text>Define hyperparameters<jupyter_code>AUTO = tf_data.AUTOTUNE
BATCH_SIZE = 64
EPOCHS = 10<jupyter_output><empty_output><jupyter_text>Convert the data into TensorFlow `Dataset` objects<jupyter_code># Put aside a few samples to create our validation set
val_samples = 2000
x_val, y_val = x_train[:val_samples], y_train[:val_samples]
new_x_train, new_y_train = x_train[val_samples:], y_train[val_samples:]
train_ds_one = (
tf_data.Dataset.from_tensor_slices((new_x_train, new_y_train))
.shuffle(BATCH_SIZE * 100)
.batch(BATCH_SIZE)
)
train_ds_two = (
tf_data.Dataset.from_tensor_slices((new_x_train, new_y_train))
.shuffle(BATCH_SIZE * 100)
.batch(BATCH_SIZE)
)
# Because we will be mixing up the images and their corresponding labels, we will be
# combining two shuffled datasets from the same training data.
train_ds = tf_data.Dataset.zip((train_ds_one, train_ds_two))
val_ds = tf_data.Dataset.from_tensor_slices((x_val, y_val)).batch(BATCH_SIZE)
test_ds = tf_data.Dataset.from_tensor_slices((x_test, y_test)).batch(BATCH_SIZE)<jupyter_output><empty_output><jupyter_text>Define the mixup technique functionTo perform the mixup routine, we create new virtual datasets using the training data fromthe same dataset, and apply a lambda value within the [0, 1] range sampled from a [Beta distribution](https://en.wikipedia.org/wiki/Beta_distribution)— such that, for example, `new_x = lambda * x1 + (1 - lambda) * x2` (where`x1` and `x2` are images) and the same equation is applied to the labels as well.<jupyter_code>def sample_beta_distribution(size, concentration_0=0.2, concentration_1=0.2):
gamma_1_sample = tf_random_gamma(shape=[size], alpha=concentration_1)
gamma_2_sample = tf_random_gamma(shape=[size], alpha=concentration_0)
return gamma_1_sample / (gamma_1_sample + gamma_2_sample)
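# Quick illustration (a sketch, not part of the original example): with small
# concentration values such as 0.2, most sampled lambdas land close to 0 or 1,
# so a mixed image stays close to one of its two source images.
_example_lambdas = sample_beta_distribution(5)
print("Example mixup lambdas:", _example_lambdas.numpy())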
def mix_up(ds_one, ds_two, alpha=0.2):
# Unpack two datasets
images_one, labels_one = ds_one
images_two, labels_two = ds_two
batch_size = keras.ops.shape(images_one)[0]
# Sample lambda and reshape it to do the mixup
l = sample_beta_distribution(batch_size, alpha, alpha)
x_l = keras.ops.reshape(l, (batch_size, 1, 1, 1))
y_l = keras.ops.reshape(l, (batch_size, 1))
# Perform mixup on both images and labels by combining a pair of images/labels
# (one from each dataset) into one image/label
images = images_one * x_l + images_two * (1 - x_l)
labels = labels_one * y_l + labels_two * (1 - y_l)
    return (images, labels)<jupyter_output><empty_output><jupyter_text>**Note** that here, we are combining two images to create a single one. Theoretically, we can combine as many as we want, but that comes at an increased computation cost. In certain cases, it may not help improve performance either. Visualize the new augmented dataset<jupyter_code># First create the new dataset using our `mix_up` utility
train_ds_mu = train_ds.map(
lambda ds_one, ds_two: mix_up(ds_one, ds_two, alpha=0.2),
num_parallel_calls=AUTO,
)
# Let's preview 9 samples from the dataset
sample_images, sample_labels = next(iter(train_ds_mu))
plt.figure(figsize=(10, 10))
for i, (image, label) in enumerate(zip(sample_images[:9], sample_labels[:9])):
ax = plt.subplot(3, 3, i + 1)
plt.imshow(image.numpy().squeeze())
print(label.numpy().tolist())
plt.axis("off")<jupyter_output><empty_output><jupyter_text>Model building<jupyter_code>def get_training_model():
model = keras.Sequential(
[
layers.Input(shape=(28, 28, 1)),
layers.Conv2D(16, (5, 5), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Conv2D(32, (5, 5), activation="relu"),
layers.MaxPooling2D(pool_size=(2, 2)),
layers.Dropout(0.2),
layers.GlobalAveragePooling2D(),
layers.Dense(128, activation="relu"),
layers.Dense(10, activation="softmax"),
]
)
return model<jupyter_output><empty_output><jupyter_text>For the sake of reproducibility, we serialize the initial random weights of our shallownetwork.<jupyter_code>initial_model = get_training_model()
initial_model.save_weights("initial_weights.weights.h5")<jupyter_output><empty_output><jupyter_text>1. Train the model with the mixed up dataset<jupyter_code>model = get_training_model()
model.load_weights("initial_weights.weights.h5")
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(train_ds_mu, validation_data=val_ds, epochs=EPOCHS)
_, test_acc = model.evaluate(test_ds)
print("Test accuracy: {:.2f}%".format(test_acc * 100))<jupyter_output><empty_output><jupyter_text>2. Train the model *without* the mixed up dataset<jupyter_code>model = get_training_model()
model.load_weights("initial_weights.weights.h5")
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
# Notice that we are NOT using the mixed up dataset here
model.fit(train_ds_one, validation_data=val_ds, epochs=EPOCHS)
_, test_acc = model.evaluate(test_ds)
print("Test accuracy: {:.2f}%".format(test_acc * 100))<jupyter_output><empty_output> | keras-io/examples/vision/ipynb/mixup.ipynb/0 | {
"file_path": "keras-io/examples/vision/ipynb/mixup.ipynb",
"repo_id": "keras-io",
"token_count": 2679
} | 99 |
<jupyter_start><jupyter_text>Video Classification with Transformers**Author:** [Sayak Paul](https://twitter.com/RisingSayak)**Date created:** 2021/06/08**Last modified:** 2023/22/07**Description:** Training a video classifier with hybrid transformers. This example is a follow-up to the[Video Classification with a CNN-RNN Architecture](https://keras.io/examples/vision/video_classification/)example. This time, we will be using a Transformer-based model([Vaswani et al.](https://arxiv.org/abs/1706.03762)) to classify videos. You can follow[this book chapter](https://livebook.manning.com/book/deep-learning-with-python-second-edition/chapter-11)in case you need an introduction to Transformers (with code). After reading thisexample, you will know how to develop hybrid Transformer-based models for videoclassification that operate on CNN feature maps.<jupyter_code>!pip install -q git+https://github.com/tensorflow/docs<jupyter_output><empty_output><jupyter_text>Data collectionAs done in the [predecessor](https://keras.io/examples/vision/video_classification/) tothis example, we will be using a subsampled version of the[UCF101 dataset](https://www.crcv.ucf.edu/data/UCF101.php),a well-known benchmark dataset. In case you want to operate on a larger subsample oreven the entire dataset, please refer to[this notebook](https://colab.research.google.com/github/sayakpaul/Action-Recognition-in-TensorFlow/blob/main/Data_Preparation_UCF101.ipynb).<jupyter_code>!wget -q https://github.com/sayakpaul/Action-Recognition-in-TensorFlow/releases/download/v1.0.0/ucf101_top5.tar.gz
!tar -xf ucf101_top5.tar.gz<jupyter_output><empty_output><jupyter_text>Setup<jupyter_code>import os
import keras
from keras import layers
from keras.applications.densenet import DenseNet121
from tensorflow_docs.vis import embed
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import imageio
import cv2<jupyter_output><empty_output><jupyter_text>Define hyperparameters<jupyter_code>MAX_SEQ_LENGTH = 20
NUM_FEATURES = 1024
IMG_SIZE = 128
EPOCHS = 5<jupyter_output><empty_output><jupyter_text>Data preparationWe will mostly be following the same data preparation steps in this example, except forthe following changes:* We reduce the image size to 128x128 instead of 224x224 to speed up computation.* Instead of using a pre-trained [InceptionV3](https://arxiv.org/abs/1512.00567) network,we use a pre-trained[DenseNet121](http://openaccess.thecvf.com/content_cvpr_2017/papers/Huang_Densely_Connected_Convolutional_CVPR_2017_paper.pdf)for feature extraction.* We directly pad shorter videos to length `MAX_SEQ_LENGTH`.First, let's load up the[DataFrames](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html).<jupyter_code>train_df = pd.read_csv("train.csv")
test_df = pd.read_csv("test.csv")
print(f"Total videos for training: {len(train_df)}")
print(f"Total videos for testing: {len(test_df)}")
center_crop_layer = layers.CenterCrop(IMG_SIZE, IMG_SIZE)
def crop_center(frame):
cropped = center_crop_layer(frame[None, ...])
cropped = keras.ops.convert_to_numpy(cropped)
cropped = keras.ops.squeeze(cropped)
return cropped
# Following method is modified from this tutorial:
# https://www.tensorflow.org/hub/tutorials/action_recognition_with_tf_hub
def load_video(path, max_frames=0, offload_to_cpu=False):
cap = cv2.VideoCapture(path)
frames = []
try:
while True:
ret, frame = cap.read()
if not ret:
break
frame = frame[:, :, [2, 1, 0]]
frame = crop_center(frame)
if offload_to_cpu and keras.backend.backend() == "torch":
frame = frame.to("cpu")
frames.append(frame)
if len(frames) == max_frames:
break
finally:
cap.release()
if offload_to_cpu and keras.backend.backend() == "torch":
return np.array([frame.to("cpu").numpy() for frame in frames])
return np.array(frames)
def build_feature_extractor():
feature_extractor = DenseNet121(
weights="imagenet",
include_top=False,
pooling="avg",
input_shape=(IMG_SIZE, IMG_SIZE, 3),
)
preprocess_input = keras.applications.densenet.preprocess_input
inputs = keras.Input((IMG_SIZE, IMG_SIZE, 3))
preprocessed = preprocess_input(inputs)
outputs = feature_extractor(preprocessed)
return keras.Model(inputs, outputs, name="feature_extractor")
feature_extractor = build_feature_extractor()
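# Quick sanity check (illustrative, not part of the original example): with average
# pooling, the DenseNet121 backbone maps each frame to a NUM_FEATURES-dimensional
# vector, so the output shape should be (None, 1024).
print("Feature extractor output shape:", feature_extractor.output_shape)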
# Label preprocessing with StringLookup.
label_processor = keras.layers.StringLookup(
num_oov_indices=0, vocabulary=np.unique(train_df["tag"]), mask_token=None
)
print(label_processor.get_vocabulary())
def prepare_all_videos(df, root_dir):
num_samples = len(df)
video_paths = df["video_name"].values.tolist()
labels = df["tag"].values
labels = label_processor(labels[..., None]).numpy()
# `frame_features` are what we will feed to our sequence model.
frame_features = np.zeros(
shape=(num_samples, MAX_SEQ_LENGTH, NUM_FEATURES), dtype="float32"
)
# For each video.
for idx, path in enumerate(video_paths):
# Gather all its frames and add a batch dimension.
frames = load_video(os.path.join(root_dir, path))
# Pad shorter videos.
if len(frames) < MAX_SEQ_LENGTH:
diff = MAX_SEQ_LENGTH - len(frames)
padding = np.zeros((diff, IMG_SIZE, IMG_SIZE, 3))
            frames = np.concatenate((frames, padding))
frames = frames[None, ...]
# Initialize placeholder to store the features of the current video.
temp_frame_features = np.zeros(
shape=(1, MAX_SEQ_LENGTH, NUM_FEATURES), dtype="float32"
)
# Extract features from the frames of the current video.
for i, batch in enumerate(frames):
video_length = batch.shape[0]
length = min(MAX_SEQ_LENGTH, video_length)
for j in range(length):
if np.mean(batch[j, :]) > 0.0:
temp_frame_features[i, j, :] = feature_extractor.predict(
batch[None, j, :]
)
else:
temp_frame_features[i, j, :] = 0.0
frame_features[idx,] = temp_frame_features.squeeze()
return frame_features, labels<jupyter_output><empty_output><jupyter_text>Calling `prepare_all_videos()` on `train_df` and `test_df` takes ~20 minutes tocomplete. For this reason, to save time, here we download already preprocessed NumPy arrays:<jupyter_code>!!wget -q https://git.io/JZmf4 -O top5_data_prepared.tar.gz
!!tar -xf top5_data_prepared.tar.gz
train_data, train_labels = np.load("train_data.npy"), np.load("train_labels.npy")
test_data, test_labels = np.load("test_data.npy"), np.load("test_labels.npy")
print(f"Frame features in train set: {train_data.shape}")<jupyter_output><empty_output><jupyter_text>Building the Transformer-based modelWe will be building on top of the code shared in[this book chapter](https://livebook.manning.com/book/deep-learning-with-python-second-edition/chapter-11) of[Deep Learning with Python (Second ed.)](https://www.manning.com/books/deep-learning-with-python)by François Chollet.First, self-attention layers that form the basic blocks of a Transformer areorder-agnostic. Since videos are ordered sequences of frames, we need ourTransformer model to take into account order information.We do this via **positional encoding**.We simply embed the positions of the frames present inside videos with an[`Embedding` layer](https://keras.io/api/layers/core_layers/embedding). We thenadd these positional embeddings to the precomputed CNN feature maps.<jupyter_code>class PositionalEmbedding(layers.Layer):
def __init__(self, sequence_length, output_dim, **kwargs):
super().__init__(**kwargs)
self.position_embeddings = layers.Embedding(
input_dim=sequence_length, output_dim=output_dim
)
self.sequence_length = sequence_length
self.output_dim = output_dim
def build(self, input_shape):
self.position_embeddings.build(input_shape)
def call(self, inputs):
# The inputs are of shape: `(batch_size, frames, num_features)`
inputs = keras.ops.cast(inputs, self.compute_dtype)
length = keras.ops.shape(inputs)[1]
positions = keras.ops.arange(start=0, stop=length, step=1)
embedded_positions = self.position_embeddings(positions)
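        # `embedded_positions` has shape (frames, output_dim) and is broadcast over the
        # batch dimension when added to `inputs` of shape (batch, frames, output_dim).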
return inputs + embedded_positions<jupyter_output><empty_output><jupyter_text>Now, we can create a subclassed layer for the Transformer.<jupyter_code>class TransformerEncoder(layers.Layer):
def __init__(self, embed_dim, dense_dim, num_heads, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
self.dense_dim = dense_dim
self.num_heads = num_heads
self.attention = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim, dropout=0.3
)
self.dense_proj = keras.Sequential(
[
layers.Dense(dense_dim, activation=keras.activations.gelu),
layers.Dense(embed_dim),
]
)
self.layernorm_1 = layers.LayerNormalization()
self.layernorm_2 = layers.LayerNormalization()
def call(self, inputs, mask=None):
attention_output = self.attention(inputs, inputs, attention_mask=mask)
proj_input = self.layernorm_1(inputs + attention_output)
proj_output = self.dense_proj(proj_input)
return self.layernorm_2(proj_input + proj_output)<jupyter_output><empty_output><jupyter_text>Utility functions for training<jupyter_code>def get_compiled_model(shape):
sequence_length = MAX_SEQ_LENGTH
embed_dim = NUM_FEATURES
dense_dim = 4
num_heads = 1
classes = len(label_processor.get_vocabulary())
inputs = keras.Input(shape=shape)
x = PositionalEmbedding(
sequence_length, embed_dim, name="frame_position_embedding"
)(inputs)
x = TransformerEncoder(embed_dim, dense_dim, num_heads, name="transformer_layer")(x)
x = layers.GlobalMaxPooling1D()(x)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(classes, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
return model
def run_experiment():
filepath = "/tmp/video_classifier.weights.h5"
checkpoint = keras.callbacks.ModelCheckpoint(
filepath, save_weights_only=True, save_best_only=True, verbose=1
)
model = get_compiled_model(train_data.shape[1:])
history = model.fit(
train_data,
train_labels,
validation_split=0.15,
epochs=EPOCHS,
callbacks=[checkpoint],
)
model.load_weights(filepath)
_, accuracy = model.evaluate(test_data, test_labels)
print(f"Test accuracy: {round(accuracy * 100, 2)}%")
return model<jupyter_output><empty_output><jupyter_text>Model training and inference<jupyter_code>trained_model = run_experiment()<jupyter_output><empty_output><jupyter_text>**Note**: This model has ~4.23 Million parameters, which is way more than the sequencemodel (99918 parameters) we used in the prequel of this example. This kind ofTransformer model works best with a larger dataset and a longer pre-training schedule.<jupyter_code>def prepare_single_video(frames):
frame_features = np.zeros(shape=(1, MAX_SEQ_LENGTH, NUM_FEATURES), dtype="float32")
# Pad shorter videos.
if len(frames) < MAX_SEQ_LENGTH:
diff = MAX_SEQ_LENGTH - len(frames)
padding = np.zeros((diff, IMG_SIZE, IMG_SIZE, 3))
        frames = np.concatenate((frames, padding))
frames = frames[None, ...]
# Extract features from the frames of the current video.
for i, batch in enumerate(frames):
video_length = batch.shape[0]
length = min(MAX_SEQ_LENGTH, video_length)
for j in range(length):
if np.mean(batch[j, :]) > 0.0:
frame_features[i, j, :] = feature_extractor.predict(batch[None, j, :])
else:
frame_features[i, j, :] = 0.0
return frame_features
def predict_action(path):
class_vocab = label_processor.get_vocabulary()
frames = load_video(os.path.join("test", path), offload_to_cpu=True)
frame_features = prepare_single_video(frames)
probabilities = trained_model.predict(frame_features)[0]
plot_x_axis, plot_y_axis = [], []
for i in np.argsort(probabilities)[::-1]:
plot_x_axis.append(class_vocab[i])
plot_y_axis.append(probabilities[i])
print(f" {class_vocab[i]}: {probabilities[i] * 100:5.2f}%")
plt.bar(plot_x_axis, plot_y_axis, label=plot_x_axis)
plt.xlabel("class_label")
plt.xlabel("Probability")
plt.show()
return frames
# This utility is for visualization.
# Referenced from:
# https://www.tensorflow.org/hub/tutorials/action_recognition_with_tf_hub
def to_gif(images):
converted_images = images.astype(np.uint8)
imageio.mimsave("animation.gif", converted_images, fps=10)
return embed.embed_file("animation.gif")
test_video = np.random.choice(test_df["video_name"].values.tolist())
print(f"Test video path: {test_video}")
test_frames = predict_action(test_video)
to_gif(test_frames[:MAX_SEQ_LENGTH])<jupyter_output><empty_output> | keras-io/examples/vision/ipynb/video_transformers.ipynb/0 | {
"file_path": "keras-io/examples/vision/ipynb/video_transformers.ipynb",
"repo_id": "keras-io",
"token_count": 5325
} | 100 |
# Using the Forward-Forward Algorithm for Image Classification
**Author:** [Suvaditya Mukherjee](https://twitter.com/halcyonrayes)<br>
**Date created:** 2023/01/08<br>
**Last modified:** 2023/01/08<br>
**Description:** Training a Dense-layer model using the Forward-Forward algorithm.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/examples/vision/ipynb/forwardforward.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/examples/vision/forwardforward.py)
---
## Introduction
The following example explores how to use the Forward-Forward algorithm to perform
training instead of the traditionally-used method of backpropagation, as proposed by
Hinton in
[The Forward-Forward Algorithm: Some Preliminary Investigations](https://www.cs.toronto.edu/~hinton/FFA13.pdf)
(2022).
The concept was inspired by the understanding behind
[Boltzmann Machines](http://www.cs.toronto.edu/~fritz/absps/dbm.pdf). Backpropagation
involves calculating the difference between actual and predicted output via a cost
function to adjust network weights. On the other hand, the FF Algorithm suggests the
analogy of neurons which get "excited" based on looking at a certain recognized
combination of an image and its correct corresponding label.
This method takes certain inspiration from the biological learning process that occurs in
the cortex. A significant advantage that this method brings is the fact that
backpropagation through the network does not need to be performed anymore, and that
weight updates are local to the layer itself.
As this is still an experimental method, it does not yield state-of-the-art results.
But with proper tuning, it is expected to come close to state-of-the-art performance.
Through this example, we will examine a process that allows us to implement the
Forward-Forward algorithm within the layers themselves, instead of the traditional method
of relying on the global loss functions and optimizers.
The tutorial is structured as follows:
- Perform necessary imports
- Load the [MNIST dataset](http://yann.lecun.com/exdb/mnist/)
- Visualize Random samples from the MNIST dataset
- Define a `FFDense` Layer to override `call` and implement a custom `forwardforward`
method which performs weight updates.
- Define a `FFNetwork` Layer to override `train_step`, `predict` and implement 2 custom
functions for per-sample prediction and overlaying labels
- Convert MNIST from `NumPy` arrays to `tf.data.Dataset`
- Fit the network
- Visualize results
- Perform inference on test samples
As this example requires the customization of certain core functions with
`keras.layers.Layer` and `keras.models.Model`, refer to the following resources for
a primer on how to do so:
- [Customizing what happens in `model.fit()`](https://www.tensorflow.org/guide/keras/customizing_what_happens_in_fit)
- [Making new Layers and Models via subclassing](https://www.tensorflow.org/guide/keras/custom_layers_and_models)
---
## Setup imports
```python
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
import random
from tensorflow.compiler.tf2xla.python import xla
```
---
## Load the dataset and visualize the data
We use the `keras.datasets.mnist.load_data()` utility to directly pull the MNIST dataset
in the form of `NumPy` arrays. We then arrange it in the form of the train and test
splits.
Following loading the dataset, we select 4 random samples from within the training set
and visualize them using `matplotlib.pyplot`.
```python
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
print("4 Random Training samples and labels")
idx1, idx2, idx3, idx4 = random.sample(range(0, x_train.shape[0]), 4)
img1 = (x_train[idx1], y_train[idx1])
img2 = (x_train[idx2], y_train[idx2])
img3 = (x_train[idx3], y_train[idx3])
img4 = (x_train[idx4], y_train[idx4])
imgs = [img1, img2, img3, img4]
plt.figure(figsize=(10, 10))
for idx, item in enumerate(imgs):
image, label = item[0], item[1]
plt.subplot(2, 2, idx + 1)
plt.imshow(image, cmap="gray")
plt.title(f"Label : {label}")
plt.show()
```
<div class="k-default-codeblock">
```
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11490434/11490434 [==============================] - 0s 0us/step
4 Random Training samples and labels
```
</div>

---
## Define `FFDense` custom layer
In this custom layer, we have a base `keras.layers.Dense` object which acts as the
base `Dense` layer within. Since weight updates will happen within the layer itself, we
add an `keras.optimizers.Optimizer` object that is accepted from the user. Here, we
use `Adam` as our optimizer with a rather higher learning rate of `0.03`.
Following the algorithm's specifics, we must set a `threshold` parameter that will be
used to make the positive-negative decision in each prediction. This is set to a default
of 1.5.
As the epochs are localized to the layer itself, we also set a `num_epochs` parameter
(defaults to 50).
We override the `call` method in order to perform a normalization over the complete
input space followed by running it through the base `Dense` layer as would happen in a
normal `Dense` layer call.
We implement the Forward-Forward algorithm which accepts 2 kinds of input tensors, each
representing the positive and negative samples respectively. We write a custom training
loop here with the use of `tf.GradientTape()`, within which we calculate a loss per
sample by taking the distance of the prediction from the threshold to understand the
error and taking its mean to get a `mean_loss` metric.
With the help of `tf.GradientTape()` we calculate the gradient updates for the trainable
base `Dense` layer and apply them using the layer's local optimizer.
Finally, we return the `call` result as the `Dense` results of the positive and negative
samples while also returning the last `mean_loss` metric and all the loss values over a
certain all-epoch run.
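To make this objective concrete, here is a small standalone sketch (our own illustration
with made-up goodness values, not part of the original example). A layer's "goodness" is
the mean of its squared activations; the loss is the softplus of the signed distance
between that goodness and the threshold, which pushes positive samples above the
threshold and negative samples below it:
```python
import numpy as np

# Made-up goodness values for one positive and one negative sample (illustration only).
threshold = 1.5
g_pos, g_neg = 2.3, 0.7
loss_pos = np.log(1.0 + np.exp(-g_pos + threshold))  # small when g_pos is above the threshold
loss_neg = np.log(1.0 + np.exp(g_neg - threshold))  # small when g_neg is below the threshold
mean_loss = (loss_pos + loss_neg) / 2.0  # mirrors the mean over the concatenated losses below
print(mean_loss)
```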
```python
class FFDense(keras.layers.Layer):
"""
A custom ForwardForward-enabled Dense layer. It has an implementation of the
Forward-Forward network internally for use.
This layer must be used in conjunction with the `FFNetwork` model.
"""
def __init__(
self,
units,
optimizer,
loss_metric,
num_epochs=50,
use_bias=True,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
**kwargs,
):
super().__init__(**kwargs)
self.dense = keras.layers.Dense(
units=units,
use_bias=use_bias,
kernel_initializer=kernel_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
bias_regularizer=bias_regularizer,
)
self.relu = keras.layers.ReLU()
self.optimizer = optimizer
self.loss_metric = loss_metric
self.threshold = 1.5
self.num_epochs = num_epochs
# We perform a normalization step before we run the input through the Dense
# layer.
def call(self, x):
x_norm = tf.norm(x, ord=2, axis=1, keepdims=True)
x_norm = x_norm + 1e-4
x_dir = x / x_norm
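        # Only the direction of the activity vector is kept here: as described in the
        # paper, this strips away the previous layer's "goodness" (encoded in the vector
        # length) so that the current layer must learn new features rather than reuse it.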
res = self.dense(x_dir)
return self.relu(res)
# The Forward-Forward algorithm is below. We first perform the Dense-layer
# operation and then get a Mean Square value for all positive and negative
# samples respectively.
# The custom loss function finds the distance between the Mean-squared
# result and the threshold value we set (a hyperparameter) that will define
# whether the prediction is positive or negative in nature. Once the loss is
# calculated, we get a mean across the entire batch combined and perform a
# gradient calculation and optimization step. This does not technically
# qualify as backpropagation since there is no gradient being
# sent to any previous layer and is completely local in nature.
def forward_forward(self, x_pos, x_neg):
for i in range(self.num_epochs):
with tf.GradientTape() as tape:
g_pos = tf.math.reduce_mean(tf.math.pow(self.call(x_pos), 2), 1)
g_neg = tf.math.reduce_mean(tf.math.pow(self.call(x_neg), 2), 1)
loss = tf.math.log(
1
+ tf.math.exp(
tf.concat([-g_pos + self.threshold, g_neg - self.threshold], 0)
)
)
mean_loss = tf.cast(tf.math.reduce_mean(loss), tf.float32)
self.loss_metric.update_state([mean_loss])
gradients = tape.gradient(mean_loss, self.dense.trainable_weights)
self.optimizer.apply_gradients(zip(gradients, self.dense.trainable_weights))
return (
tf.stop_gradient(self.call(x_pos)),
tf.stop_gradient(self.call(x_neg)),
self.loss_metric.result(),
)
```
---
## Define the `FFNetwork` Custom Model
With our custom layer defined, we also need to override the `train_step` method and
define a custom `keras.models.Model` that works with our `FFDense` layer.
For this algorithm, we must 'embed' the labels onto the original image. To do so, we
exploit the structure of MNIST images where the top-left 10 pixels are always zeros. We
use that as a label space in order to visually one-hot-encode the labels within the image
itself. This action is performed by the `overlay_y_on_x` function.
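As a rough standalone illustration (ours, using plain NumPy instead of the XLA ops used
below), overlaying label `3` on a flattened 28x28 image amounts to writing a one-hot
"pixel code" into its first 10 positions, scaled by the image's maximum intensity:
```python
import numpy as np

flat_image = np.random.rand(28 * 28)  # a stand-in for a flattened MNIST image
label = 3
one_hot_pixels = np.zeros(10)
one_hot_pixels[label] = flat_image.max()  # the "on" pixel uses the sample's maximum value
flat_image[:10] = one_hot_pixels
print(flat_image[:10])
```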
We break down the prediction function with a per-sample prediction function which is then
called over the entire test set by the overridden `predict()` function. The prediction is
performed here with the help of measuring the `excitation` of the neurons per layer for
each image. This is then summed over all layers to calculate a network-wide 'goodness
score'. The label with the highest 'goodness score' is then chosen as the sample
prediction.
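The prediction rule can be summarized with the following sketch (ours; `overlay_fn` and
`layer_fns` are placeholders for the real overlay function and the trained `FFDense`
layers):
```python
import numpy as np

def predict_by_goodness(flat_image, layer_fns, overlay_fn, num_labels=10):
    # Try every candidate label, accumulate the per-layer goodness, pick the best one.
    scores = []
    for label in range(num_labels):
        h = overlay_fn(flat_image, label)
        goodness = 0.0
        for layer_fn in layer_fns:
            h = layer_fn(h)
            goodness += float(np.mean(np.square(h)))
        scores.append(goodness)
    return int(np.argmax(scores))
```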
The `train_step` function is overridden to act as the main controlling loop for running
training on each layer as per the number of epochs per layer.
```python
class FFNetwork(keras.Model):
"""
A `keras.Model` that supports a `FFDense` network creation. This model
can work for any kind of classification task. It has an internal
implementation with some details specific to the MNIST dataset which can be
changed as per the use-case.
"""
# Since each layer runs gradient-calculation and optimization locally, each
# layer has its own optimizer that we pass. As a standard choice, we pass
# the `Adam` optimizer with a default learning rate of 0.03 as that was
# found to be the best rate after experimentation.
# Loss is tracked using `loss_var` and `loss_count` variables.
# Use legacy optimizer for Layer Optimizer to fix issue
# https://github.com/keras-team/keras-io/issues/1241
def __init__(
self,
dims,
layer_optimizer=keras.optimizers.legacy.Adam(learning_rate=0.03),
**kwargs,
):
super().__init__(**kwargs)
self.layer_optimizer = layer_optimizer
self.loss_var = tf.Variable(0.0, trainable=False, dtype=tf.float32)
self.loss_count = tf.Variable(0.0, trainable=False, dtype=tf.float32)
self.layer_list = [keras.Input(shape=(dims[0],))]
for d in range(len(dims) - 1):
self.layer_list += [
FFDense(
dims[d + 1],
optimizer=self.layer_optimizer,
loss_metric=keras.metrics.Mean(),
)
]
# This function makes a dynamic change to the image wherein the labels are
# put on top of the original image (for this example, as MNIST has 10
# unique labels, we take the top-left corner's first 10 pixels). This
# function returns the original data tensor with the first 10 pixels being
# a pixel-based one-hot representation of the labels.
@tf.function(reduce_retracing=True)
def overlay_y_on_x(self, data):
X_sample, y_sample = data
max_sample = tf.reduce_max(X_sample, axis=0, keepdims=True)
max_sample = tf.cast(max_sample, dtype=tf.float64)
X_zeros = tf.zeros([10], dtype=tf.float64)
X_update = xla.dynamic_update_slice(X_zeros, max_sample, [y_sample])
X_sample = xla.dynamic_update_slice(X_sample, X_update, [0])
return X_sample, y_sample
# A custom `predict_one_sample` performs predictions by passing the images
# through the network, measures the results produced by each layer (i.e.
# how high/low the output values are with respect to the set threshold for
# each label) and then simply finding the label with the highest values.
# In such a case, the images are tested for their 'goodness' with all
# labels.
@tf.function(reduce_retracing=True)
def predict_one_sample(self, x):
goodness_per_label = []
x = tf.reshape(x, [tf.shape(x)[0] * tf.shape(x)[1]])
for label in range(10):
h, label = self.overlay_y_on_x(data=(x, label))
h = tf.reshape(h, [-1, tf.shape(h)[0]])
goodness = []
for layer_idx in range(1, len(self.layer_list)):
layer = self.layer_list[layer_idx]
h = layer(h)
goodness += [tf.math.reduce_mean(tf.math.pow(h, 2), 1)]
goodness_per_label += [
tf.expand_dims(tf.reduce_sum(goodness, keepdims=True), 1)
]
goodness_per_label = tf.concat(goodness_per_label, 1)
return tf.cast(tf.argmax(goodness_per_label, 1), tf.float64)
def predict(self, data):
x = data
        preds = tf.map_fn(fn=self.predict_one_sample, elems=x)
return np.asarray(preds, dtype=int)
# This custom `train_step` function overrides the internal `train_step`
# implementation. We take all the input image tensors, flatten them and
# subsequently produce positive and negative samples on the images.
# A positive sample is an image that has the right label encoded on it with
# the `overlay_y_on_x` function. A negative sample is an image that has an
# erroneous label present on it.
# With the samples ready, we pass them through each `FFLayer` and perform
# the Forward-Forward computation on it. The returned loss is the final
# loss value over all the layers.
@tf.function(jit_compile=True)
def train_step(self, data):
x, y = data
# Flatten op
x = tf.reshape(x, [-1, tf.shape(x)[1] * tf.shape(x)[2]])
x_pos, y = tf.map_fn(fn=self.overlay_y_on_x, elems=(x, y))
random_y = tf.random.shuffle(y)
x_neg, y = tf.map_fn(fn=self.overlay_y_on_x, elems=(x, random_y))
h_pos, h_neg = x_pos, x_neg
for idx, layer in enumerate(self.layers):
if isinstance(layer, FFDense):
print(f"Training layer {idx+1} now : ")
h_pos, h_neg, loss = layer.forward_forward(h_pos, h_neg)
self.loss_var.assign_add(loss)
self.loss_count.assign_add(1.0)
else:
print(f"Passing layer {idx+1} now : ")
x = layer(x)
mean_res = tf.math.divide(self.loss_var, self.loss_count)
return {"FinalLoss": mean_res}
```
---
## Convert MNIST `NumPy` arrays to `tf.data.Dataset`
We now perform some preliminary processing on the `NumPy` arrays and then convert them
into the `tf.data.Dataset` format which allows for optimized loading.
```python
x_train = x_train.astype(float) / 255
x_test = x_test.astype(float) / 255
y_train = y_train.astype(int)
y_test = y_test.astype(int)
train_dataset = tf.data.Dataset.from_tensor_slices((x_train, y_train))
test_dataset = tf.data.Dataset.from_tensor_slices((x_test, y_test))
train_dataset = train_dataset.batch(60000)
test_dataset = test_dataset.batch(10000)
```
---
## Fit the network and visualize results
Having performed all previous set-up, we are now going to run `model.fit()` and run 250
model epochs, which will perform 50*250 epochs on each layer. We get to see the plotted loss
curve as each layer is trained.
```python
model = FFNetwork(dims=[784, 500, 500])
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=0.03),
loss="mse",
jit_compile=True,
metrics=[keras.metrics.Mean()],
)
epochs = 250
history = model.fit(train_dataset, epochs=epochs)
```
<div class="k-default-codeblock">
```
Epoch 1/250
Training layer 1 now :
Training layer 2 now :
Training layer 1 now :
Training layer 2 now :
1/1 [==============================] - 72s 72s/step - FinalLoss: 0.7279
Epoch 2/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.7082
Epoch 3/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.7031
Epoch 4/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.6806
Epoch 5/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.6564
Epoch 6/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.6333
Epoch 7/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.6126
Epoch 8/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.5946
Epoch 9/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.5786
Epoch 10/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.5644
Epoch 11/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.5518
Epoch 12/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.5405
Epoch 13/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.5301
Epoch 14/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.5207
Epoch 15/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.5122
Epoch 16/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.5044
Epoch 17/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4972
Epoch 18/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4906
Epoch 19/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4845
Epoch 20/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4787
Epoch 21/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4734
Epoch 22/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4685
Epoch 23/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4639
Epoch 24/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4596
Epoch 25/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4555
Epoch 26/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4516
Epoch 27/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4479
Epoch 28/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4445
Epoch 29/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4411
Epoch 30/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4380
Epoch 31/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4350
Epoch 32/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4322
Epoch 33/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4295
Epoch 34/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4269
Epoch 35/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4245
Epoch 36/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4222
Epoch 37/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4199
Epoch 38/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4178
Epoch 39/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4157
Epoch 40/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4136
Epoch 41/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4117
Epoch 42/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4098
Epoch 43/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4079
Epoch 44/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4062
Epoch 45/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4045
Epoch 46/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4028
Epoch 47/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.4012
Epoch 48/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3996
Epoch 49/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3982
Epoch 50/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3967
Epoch 51/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3952
Epoch 52/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3938
Epoch 53/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3925
Epoch 54/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3912
Epoch 55/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3899
Epoch 56/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3886
Epoch 57/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3874
Epoch 58/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3862
Epoch 59/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3851
Epoch 60/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3840
Epoch 61/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3829
Epoch 62/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3818
Epoch 63/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3807
Epoch 64/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3797
Epoch 65/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3787
Epoch 66/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3777
Epoch 67/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3767
Epoch 68/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3758
Epoch 69/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3748
Epoch 70/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3739
Epoch 71/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3730
Epoch 72/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3721
Epoch 73/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3712
Epoch 74/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3704
Epoch 75/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3695
Epoch 76/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3688
Epoch 77/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3680
Epoch 78/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3671
Epoch 79/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3664
Epoch 80/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3656
Epoch 81/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3648
Epoch 82/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3641
Epoch 83/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3634
Epoch 84/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3627
Epoch 85/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3620
Epoch 86/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3613
Epoch 87/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3606
Epoch 88/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3599
Epoch 89/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3593
Epoch 90/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3586
Epoch 91/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3580
Epoch 92/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3574
Epoch 93/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3568
Epoch 94/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3561
Epoch 95/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3555
Epoch 96/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3549
Epoch 97/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3544
Epoch 98/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3538
Epoch 99/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3532
Epoch 100/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3526
Epoch 101/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3521
Epoch 102/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3515
Epoch 103/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3510
Epoch 104/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3505
Epoch 105/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3499
Epoch 106/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3494
Epoch 107/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3489
Epoch 108/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3484
Epoch 109/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3478
Epoch 110/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3474
Epoch 111/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3468
Epoch 112/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3464
Epoch 113/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3459
Epoch 114/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3454
Epoch 115/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3450
Epoch 116/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3445
Epoch 117/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3440
Epoch 118/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3436
Epoch 119/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3432
Epoch 120/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3427
Epoch 121/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3423
Epoch 122/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3419
Epoch 123/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3414
Epoch 124/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3410
Epoch 125/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3406
Epoch 126/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3402
Epoch 127/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3398
Epoch 128/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3394
Epoch 129/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3390
Epoch 130/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3386
Epoch 131/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3382
Epoch 132/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3378
Epoch 133/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3375
Epoch 134/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3371
Epoch 135/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3368
Epoch 136/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3364
Epoch 137/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3360
Epoch 138/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3357
Epoch 139/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3353
Epoch 140/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3350
Epoch 141/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3346
Epoch 142/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3343
Epoch 143/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3339
Epoch 144/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3336
Epoch 145/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3333
Epoch 146/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3329
Epoch 147/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3326
Epoch 148/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3323
Epoch 149/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3320
Epoch 150/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3317
Epoch 151/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3313
Epoch 152/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3310
Epoch 153/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3307
Epoch 154/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3304
Epoch 155/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3302
Epoch 156/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3299
Epoch 157/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3296
Epoch 158/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3293
Epoch 159/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3290
Epoch 160/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3287
Epoch 161/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3284
Epoch 162/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3281
Epoch 163/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3279
Epoch 164/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3276
Epoch 165/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3273
Epoch 166/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3270
Epoch 167/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3268
Epoch 168/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3265
Epoch 169/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3262
Epoch 170/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3260
Epoch 171/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3257
Epoch 172/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3255
Epoch 173/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3252
Epoch 174/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3250
Epoch 175/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3247
Epoch 176/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3244
Epoch 177/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3242
Epoch 178/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3240
Epoch 179/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3237
Epoch 180/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3235
Epoch 181/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3232
Epoch 182/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3230
Epoch 183/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3228
Epoch 184/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3225
Epoch 185/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3223
Epoch 186/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3221
Epoch 187/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3219
Epoch 188/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3216
Epoch 189/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3214
Epoch 190/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3212
Epoch 191/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3210
Epoch 192/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3208
Epoch 193/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3205
Epoch 194/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3203
Epoch 195/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3201
Epoch 196/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3199
Epoch 197/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3197
Epoch 198/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3195
Epoch 199/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3193
Epoch 200/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3191
Epoch 201/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3189
Epoch 202/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3187
Epoch 203/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3185
Epoch 204/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3183
Epoch 205/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3181
Epoch 206/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3179
Epoch 207/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3177
Epoch 208/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3175
Epoch 209/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3174
Epoch 210/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3172
Epoch 211/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3170
Epoch 212/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3168
Epoch 213/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3166
Epoch 214/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3165
Epoch 215/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3163
Epoch 216/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3161
Epoch 217/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3159
Epoch 218/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3157
Epoch 219/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3155
Epoch 220/250
1/1 [==============================] - 5s 5s/step - FinalLoss: 0.3154
Epoch 221/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3152
Epoch 222/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3150
Epoch 223/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3148
Epoch 224/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3147
Epoch 225/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3145
Epoch 226/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3143
Epoch 227/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3142
Epoch 228/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3140
Epoch 229/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3139
Epoch 230/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3137
Epoch 231/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3135
Epoch 232/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3134
Epoch 233/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3132
Epoch 234/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3131
Epoch 235/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3129
Epoch 236/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3127
Epoch 237/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3126
Epoch 238/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3124
Epoch 239/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3123
Epoch 240/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3121
Epoch 241/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3120
Epoch 242/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3118
Epoch 243/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3117
Epoch 244/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3116
Epoch 245/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3114
Epoch 246/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3113
Epoch 247/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3111
Epoch 248/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3110
Epoch 249/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3108
Epoch 250/250
1/1 [==============================] - 6s 6s/step - FinalLoss: 0.3107
```
</div>
---
## Perform inference and testing
Having trained the model, we can now evaluate how it performs on the
test set. We compute the accuracy score to quantify the results.
```python
preds = model.predict(tf.convert_to_tensor(x_test))
preds = preds.reshape((preds.shape[0], preds.shape[1]))
results = accuracy_score(preds, y_test)
print(f"Test Accuracy score : {results*100}%")
plt.plot(range(len(history.history["FinalLoss"])), history.history["FinalLoss"])
plt.title("Loss over training")
plt.show()
```
<div class="k-default-codeblock">
```
Test Accuracy score : 97.64%
```
</div>

---
## Conclusion
This example demonstrated how the Forward-Forward algorithm works using
the TensorFlow and Keras packages. While the results presented by Prof. Hinton
in his paper are currently limited to smaller models and datasets like MNIST and
Fashion-MNIST, subsequent results on larger models like LLMs are expected in future
papers.
In the paper, Prof. Hinton reports a test error rate of 1.36% with a
fully-connected network of 4 hidden layers with 2000 units each, run over 60 epochs (while
mentioning that backpropagation takes only 20 epochs to achieve similar performance).
Another run with a doubled learning rate and 40 epochs of training yields a slightly worse
error rate of 1.46%.
The current example does not yield state-of-the-art results. But with proper tuning of
the learning rate and the model architecture (number of units in `Dense` layers, kernel
activations, initializations, regularization, etc.), the results can be improved
to match the claims of the paper.
| keras-io/examples/vision/md/forwardforward.md/0 | {
"file_path": "keras-io/examples/vision/md/forwardforward.md",
"repo_id": "keras-io",
"token_count": 14508
} | 101 |
"""
Title: MobileViT: A mobile-friendly Transformer-based model for image classification
Author: [Sayak Paul](https://twitter.com/RisingSayak)
Date created: 2021/10/20
Last modified: 2024/02/11
Description: MobileViT for image classification with combined benefits of convolutions and Transformers.
Accelerator: GPU
"""
"""
## Introduction
In this example, we implement the MobileViT architecture
([Mehta et al.](https://arxiv.org/abs/2110.02178)),
which combines the benefits of Transformers
([Vaswani et al.](https://arxiv.org/abs/1706.03762))
and convolutions. With Transformers, we can capture long-range dependencies that result
in global representations. With convolutions, we can capture spatial relationships that
model locality.
Besides combining the properties of Transformers and convolutions, the authors introduce
MobileViT as a general-purpose mobile-friendly backbone for different image recognition
tasks. Their findings suggest that, performance-wise, MobileViT is better than other
models with the same or higher complexity ([MobileNetV3](https://arxiv.org/abs/1905.02244),
for example), while being efficient on mobile devices.
Note: This example should be run with TensorFlow 2.13 and higher.
"""
"""
## Imports
"""
import os
import tensorflow as tf
os.environ["KERAS_BACKEND"] = "tensorflow"
import keras
from keras import layers
from keras import backend
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
"""
## Hyperparameters
"""
# Values are from table 4.
patch_size = 4 # Patch area (2x2 patches) used when unfolding feature maps for the Transformer blocks.
image_size = 256
expansion_factor = 2 # expansion factor for the MobileNetV2 blocks.
"""
## MobileViT utilities
The MobileViT architecture is comprised of the following blocks:
* Strided 3x3 convolutions that process the input image.
* [MobileNetV2](https://arxiv.org/abs/1801.04381)-style inverted residual blocks for
downsampling the resolution of the intermediate feature maps.
* MobileViT blocks that combine the benefits of Transformers and convolutions. It is
presented in the figure below (taken from the
[original paper](https://arxiv.org/abs/2110.02178)):

"""
def conv_block(x, filters=16, kernel_size=3, strides=2):
conv_layer = layers.Conv2D(
filters,
kernel_size,
strides=strides,
activation=keras.activations.swish,
padding="same",
)
return conv_layer(x)
# Reference: https://github.com/keras-team/keras/blob/e3858739d178fe16a0c77ce7fab88b0be6dbbdc7/keras/applications/imagenet_utils.py#L413C17-L435
def correct_pad(inputs, kernel_size):
img_dim = 2 if backend.image_data_format() == "channels_first" else 1
input_size = inputs.shape[img_dim : (img_dim + 2)]
if isinstance(kernel_size, int):
kernel_size = (kernel_size, kernel_size)
if input_size[0] is None:
adjust = (1, 1)
else:
adjust = (1 - input_size[0] % 2, 1 - input_size[1] % 2)
correct = (kernel_size[0] // 2, kernel_size[1] // 2)
return (
(correct[0] - adjust[0], correct[0]),
(correct[1] - adjust[1], correct[1]),
)
# Reference: https://git.io/JKgtC
def inverted_residual_block(x, expanded_channels, output_channels, strides=1):
m = layers.Conv2D(expanded_channels, 1, padding="same", use_bias=False)(x)
m = layers.BatchNormalization()(m)
m = keras.activations.swish(m)
if strides == 2:
m = layers.ZeroPadding2D(padding=correct_pad(m, 3))(m)
m = layers.DepthwiseConv2D(
3, strides=strides, padding="same" if strides == 1 else "valid", use_bias=False
)(m)
m = layers.BatchNormalization()(m)
m = keras.activations.swish(m)
m = layers.Conv2D(output_channels, 1, padding="same", use_bias=False)(m)
m = layers.BatchNormalization()(m)
if keras.ops.equal(x.shape[-1], output_channels) and strides == 1:
return layers.Add()([m, x])
return m
# Reference:
# https://keras.io/examples/vision/image_classification_with_vision_transformer/
def mlp(x, hidden_units, dropout_rate):
for units in hidden_units:
x = layers.Dense(units, activation=keras.activations.swish)(x)
x = layers.Dropout(dropout_rate)(x)
return x
def transformer_block(x, transformer_layers, projection_dim, num_heads=2):
for _ in range(transformer_layers):
# Layer normalization 1.
x1 = layers.LayerNormalization(epsilon=1e-6)(x)
# Create a multi-head attention layer.
attention_output = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=projection_dim, dropout=0.1
)(x1, x1)
# Skip connection 1.
x2 = layers.Add()([attention_output, x])
# Layer normalization 2.
x3 = layers.LayerNormalization(epsilon=1e-6)(x2)
# MLP.
x3 = mlp(
x3,
hidden_units=[x.shape[-1] * 2, x.shape[-1]],
dropout_rate=0.1,
)
# Skip connection 2.
x = layers.Add()([x3, x2])
return x
def mobilevit_block(x, num_blocks, projection_dim, strides=1):
# Local projection with convolutions.
local_features = conv_block(x, filters=projection_dim, strides=strides)
local_features = conv_block(
local_features, filters=projection_dim, kernel_size=1, strides=strides
)
# Unfold into patches and then pass through Transformers.
num_patches = int((local_features.shape[1] * local_features.shape[2]) / patch_size)
non_overlapping_patches = layers.Reshape((patch_size, num_patches, projection_dim))(
local_features
)
global_features = transformer_block(
non_overlapping_patches, num_blocks, projection_dim
)
# Fold into conv-like feature-maps.
folded_feature_map = layers.Reshape((*local_features.shape[1:-1], projection_dim))(
global_features
)
# Apply point-wise conv -> concatenate with the input features.
folded_feature_map = conv_block(
folded_feature_map, filters=x.shape[-1], kernel_size=1, strides=strides
)
local_global_features = layers.Concatenate(axis=-1)([x, folded_feature_map])
# Fuse the local and global features using a convolution layer.
local_global_features = conv_block(
local_global_features, filters=projection_dim, strides=strides
)
return local_global_features
"""
**More on the MobileViT block**:
* First, the feature representations (A) go through convolution blocks that capture local
relationships. The expected shape of a single entry here would be `(h, w, num_channels)`.
* Then they get unfolded into another vector with shape `(p, n, num_channels)`,
where `p` is the area of a small patch, and `n` is `(h * w) / p`. So, we end up with `n`
non-overlapping patches.
* This unfolded vector is then passed through a Transformer block that captures global
relationships between the patches.
* The output vector (B) is again folded into a vector of shape `(h, w, num_channels)`
resembling a feature map coming out of convolutions.
Vectors A and B are then passed through two more convolutional layers to fuse the local
and global representations. Notice how the spatial resolution of the final vector remains
unchanged at this point. The authors also present an explanation of how the MobileViT
block resembles a convolution block of a CNN. For more details, please refer to the
original paper.
"""
"""
Next, we combine these blocks together and implement the MobileViT architecture (XXS
variant). The following figure (taken from the original paper) presents a schematic
representation of the architecture:

"""
def create_mobilevit(num_classes=5):
inputs = keras.Input((image_size, image_size, 3))
x = layers.Rescaling(scale=1.0 / 255)(inputs)
# Initial conv-stem -> MV2 block.
x = conv_block(x, filters=16)
x = inverted_residual_block(
x, expanded_channels=16 * expansion_factor, output_channels=16
)
# Downsampling with MV2 block.
x = inverted_residual_block(
x, expanded_channels=16 * expansion_factor, output_channels=24, strides=2
)
x = inverted_residual_block(
x, expanded_channels=24 * expansion_factor, output_channels=24
)
x = inverted_residual_block(
x, expanded_channels=24 * expansion_factor, output_channels=24
)
# First MV2 -> MobileViT block.
x = inverted_residual_block(
x, expanded_channels=24 * expansion_factor, output_channels=48, strides=2
)
x = mobilevit_block(x, num_blocks=2, projection_dim=64)
# Second MV2 -> MobileViT block.
x = inverted_residual_block(
x, expanded_channels=64 * expansion_factor, output_channels=64, strides=2
)
x = mobilevit_block(x, num_blocks=4, projection_dim=80)
# Third MV2 -> MobileViT block.
x = inverted_residual_block(
x, expanded_channels=80 * expansion_factor, output_channels=80, strides=2
)
x = mobilevit_block(x, num_blocks=3, projection_dim=96)
x = conv_block(x, filters=320, kernel_size=1, strides=1)
# Classification head.
x = layers.GlobalAvgPool2D()(x)
outputs = layers.Dense(num_classes, activation="softmax")(x)
return keras.Model(inputs, outputs)
mobilevit_xxs = create_mobilevit()
mobilevit_xxs.summary()
"""
## Dataset preparation
We will be using the
[`tf_flowers`](https://www.tensorflow.org/datasets/catalog/tf_flowers)
dataset to demonstrate the model. Unlike other Transformer-based architectures,
MobileViT uses a simple augmentation pipeline primarily because it has the properties
of a CNN.
"""
batch_size = 64
auto = tf.data.AUTOTUNE
resize_bigger = 280
num_classes = 5
def preprocess_dataset(is_training=True):
def _pp(image, label):
if is_training:
# Resize to a bigger spatial resolution and take the random
# crops.
image = tf.image.resize(image, (resize_bigger, resize_bigger))
image = tf.image.random_crop(image, (image_size, image_size, 3))
image = tf.image.random_flip_left_right(image)
else:
image = tf.image.resize(image, (image_size, image_size))
label = tf.one_hot(label, depth=num_classes)
return image, label
return _pp
def prepare_dataset(dataset, is_training=True):
if is_training:
dataset = dataset.shuffle(batch_size * 10)
dataset = dataset.map(preprocess_dataset(is_training), num_parallel_calls=auto)
return dataset.batch(batch_size).prefetch(auto)
"""
The authors use a multi-scale data sampler to help the model learn representations of
varied scales. In this example, we omit that part for simplicity.
"""
"""
## Load and prepare the dataset
"""
train_dataset, val_dataset = tfds.load(
"tf_flowers", split=["train[:90%]", "train[90%:]"], as_supervised=True
)
num_train = train_dataset.cardinality()
num_val = val_dataset.cardinality()
print(f"Number of training examples: {num_train}")
print(f"Number of validation examples: {num_val}")
train_dataset = prepare_dataset(train_dataset, is_training=True)
val_dataset = prepare_dataset(val_dataset, is_training=False)
"""
## Train a MobileViT (XXS) model
"""
learning_rate = 0.002
label_smoothing_factor = 0.1
epochs = 30
optimizer = keras.optimizers.Adam(learning_rate=learning_rate)
loss_fn = keras.losses.CategoricalCrossentropy(label_smoothing=label_smoothing_factor)
def run_experiment(epochs=epochs):
mobilevit_xxs = create_mobilevit(num_classes=num_classes)
mobilevit_xxs.compile(optimizer=optimizer, loss=loss_fn, metrics=["accuracy"])
# When using `save_weights_only=True` in `ModelCheckpoint`, the filepath provided must end in `.weights.h5`
checkpoint_filepath = "/tmp/checkpoint.weights.h5"
checkpoint_callback = keras.callbacks.ModelCheckpoint(
checkpoint_filepath,
monitor="val_accuracy",
save_best_only=True,
save_weights_only=True,
)
mobilevit_xxs.fit(
train_dataset,
validation_data=val_dataset,
epochs=epochs,
callbacks=[checkpoint_callback],
)
mobilevit_xxs.load_weights(checkpoint_filepath)
_, accuracy = mobilevit_xxs.evaluate(val_dataset)
print(f"Validation accuracy: {round(accuracy * 100, 2)}%")
return mobilevit_xxs
mobilevit_xxs = run_experiment()
"""
## Results and TFLite conversion
With about one million parameters, getting to ~85% top-1 accuracy at 256x256 resolution is
a strong result. This MobileViT model is fully compatible with TensorFlow Lite (TFLite)
and can be converted with the following code:
"""
# Serialize the model as a SavedModel.
tf.saved_model.save(mobilevit_xxs, "mobilevit_xxs")
# Convert to TFLite. This form of quantization is called
# post-training dynamic-range quantization in TFLite.
converter = tf.lite.TFLiteConverter.from_saved_model("mobilevit_xxs")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
tf.lite.OpsSet.TFLITE_BUILTINS, # Enable TensorFlow Lite ops.
tf.lite.OpsSet.SELECT_TF_OPS, # Enable TensorFlow ops.
]
tflite_model = converter.convert()
open("mobilevit_xxs.tflite", "wb").write(tflite_model)
"""
To learn more about different quantization recipes available in TFLite and running
inference with TFLite models, check out
[this official resource](https://www.tensorflow.org/lite/performance/post_training_quantization).
You can use the trained model hosted on [Hugging Face Hub](https://huggingface.co/keras-io/mobile-vit-xxs)
and try the demo on [Hugging Face Spaces](https://huggingface.co/spaces/keras-io/Flowers-Classification-MobileViT).
"""
| keras-io/examples/vision/mobilevit.py/0 | {
"file_path": "keras-io/examples/vision/mobilevit.py",
"repo_id": "keras-io",
"token_count": 5004
} | 102 |
"""
Title: Semantic segmentation with SegFormer and Hugging Face Transformers
Author: [Sayak Paul](https://twitter.com/RisingSayak)
Date created: 2023/01/25
Last modified: 2023/01/29
Description: Fine-tuning a SegFormer model variant for semantic segmentation.
Accelerator: GPU
"""
"""
## Introduction
In this example, we show how to fine-tune a SegFormer model variant to do
semantic segmentation on a custom dataset. Semantic segmentation is the task of
assigning a category to each and every pixel of an image. SegFormer was proposed in
[SegFormer: Simple and Efficient Design for Semantic Segmentation with Transformers](https://arxiv.org/abs/2105.15203).
SegFormer uses a hierarchical Transformer architecture (called "Mix Transformer") as
its encoder and a lightweight decoder for segmentation. As a result, it yields
state-of-the-art performance on semantic segmentation while being more efficient than
existing models. For more details, check out the original paper.

We leverage
[Hugging Face Transformers](https://github.com/huggingface/transformers)
to load a pretrained SegFormer checkpoint and fine-tune it on a custom dataset.
**Note:** this example reuses code from the following sources:
* [Official tutorial on segmentation from the TensorFlow team](https://www.tensorflow.org/tutorials/images/segmentation)
* [Hugging Face Task guide on segmentation](https://huggingface.co/docs/transformers/main/en/tasks/semantic_segmentation)
To run this example, we need to install the `transformers` library:
"""
"""shell
pip install transformers -q
"""
"""
## Load the data
We use the [Oxford-IIIT Pets](https://www.robots.ox.ac.uk/~vgg/data/pets/) dataset for
this example. We leverage `tensorflow_datasets` to load the dataset.
"""
import tensorflow_datasets as tfds
dataset, info = tfds.load("oxford_iiit_pet:3.*.*", with_info=True)
"""
## Prepare the datasets
For preparing the datasets for training and evaluation, we:
* Normalize the images with the mean and standard deviation used during pre-training
SegFormer.
* Subtract 1 from the segmentation masks so that the pixel values start from 0.
* Resize the images.
* Transpose the images such that they are in `"channels_first"` format. This is to make
them compatible with the SegFormer model from Hugging Face Transformers.
"""
import tensorflow as tf
from tensorflow.keras import backend
image_size = 512
mean = tf.constant([0.485, 0.456, 0.406])
std = tf.constant([0.229, 0.224, 0.225])
def normalize(input_image, input_mask):
input_image = tf.image.convert_image_dtype(input_image, tf.float32)
input_image = (input_image - mean) / tf.maximum(std, backend.epsilon())
input_mask -= 1
return input_image, input_mask
def load_image(datapoint):
input_image = tf.image.resize(datapoint["image"], (image_size, image_size))
input_mask = tf.image.resize(
datapoint["segmentation_mask"],
(image_size, image_size),
method="bilinear",
)
input_image, input_mask = normalize(input_image, input_mask)
input_image = tf.transpose(input_image, (2, 0, 1))
return {"pixel_values": input_image, "labels": tf.squeeze(input_mask)}
"""
We now use the above utilities to prepare `tf.data.Dataset` objects, including
`prefetch()` for performance. Change the `batch_size` to match the amount of memory
available on the GPU that you're using for training.
"""
auto = tf.data.AUTOTUNE
batch_size = 4
train_ds = (
dataset["train"]
.cache()
.shuffle(batch_size * 10)
.map(load_image, num_parallel_calls=auto)
.batch(batch_size)
.prefetch(auto)
)
test_ds = (
dataset["test"]
.map(load_image, num_parallel_calls=auto)
.batch(batch_size)
.prefetch(auto)
)
"""
We can check the shapes of the input images and their segmentation maps:
"""
print(train_ds.element_spec)
"""
## Visualize dataset
"""
import matplotlib.pyplot as plt
def display(display_list):
plt.figure(figsize=(15, 15))
title = ["Input Image", "True Mask", "Predicted Mask"]
for i in range(len(display_list)):
plt.subplot(1, len(display_list), i + 1)
plt.title(title[i])
plt.imshow(tf.keras.utils.array_to_img(display_list[i]))
plt.axis("off")
plt.show()
for samples in train_ds.take(2):
sample_image, sample_mask = samples["pixel_values"][0], samples["labels"][0]
sample_image = tf.transpose(sample_image, (1, 2, 0))
sample_mask = tf.expand_dims(sample_mask, -1)
display([sample_image, sample_mask])
"""
## Load a pretrained SegFormer checkpoint
We now load a pretrained SegFormer model variant from Hugging Face Transformers. The
SegFormer model comes in different variants dubbed as **MiT-B0** to **MiT-B5**. You can
find these checkpoints
[here](https://huggingface.co/models?pipeline_tag=image-segmentation&sort=downloads&search=segformer).
We load the smallest variant, MiT-B0, which provides a good trade-off
between inference efficiency and predictive performance.
"""
from transformers import TFSegformerForSemanticSegmentation
model_checkpoint = "nvidia/mit-b0"
id2label = {0: "outer", 1: "inner", 2: "border"}
label2id = {label: id for id, label in id2label.items()}
num_labels = len(id2label)
model = TFSegformerForSemanticSegmentation.from_pretrained(
model_checkpoint,
num_labels=num_labels,
id2label=id2label,
label2id=label2id,
ignore_mismatched_sizes=True,
)
"""
The warning is telling us that we're throwing away some weights and newly initializing
some others. Don't panic! This is absolutely normal. Since we're using a custom dataset
which has a different set of semantic class labels than the pre-training dataset,
[`TFSegformerForSemanticSegmentation`](https://huggingface.co/docs/transformers/model_doc/segformer#transformers.TFSegformerForSemanticSegmentation)
is initializing a new decoder head.
We can now initialize an optimizer and compile the model with it.
"""
"""
## Compile the model
"""
lr = 0.00006
optimizer = tf.keras.optimizers.Adam(learning_rate=lr)
model.compile(optimizer=optimizer)
"""
Notice that we are not using any loss function for compiling the model. This is because
the forward pass of the model
[implements](https://github.com/huggingface/transformers/blob/820c46a707ddd033975bc3b0549eea200e64c7da/src/transformers/models/segformer/modeling_tf_segformer.py#L873)
the loss computation part when we provide labels alongside the input images. After
computing the loss, the model returns a structured `dataclass` object which is
then used to guide the training process.
With the compiled model, we can proceed and call `fit()` on it to begin the fine-tuning
process!
"""
"""
## Prediction callback to monitor training progress
This callback visualizes some sample predictions while the model is being fine-tuned,
which helps us monitor the progress of training. It is inspired by
[this tutorial](https://www.tensorflow.org/tutorials/images/segmentation).
"""
from IPython.display import clear_output
def create_mask(pred_mask):
pred_mask = tf.math.argmax(pred_mask, axis=1)
pred_mask = tf.expand_dims(pred_mask, -1)
return pred_mask[0]
def show_predictions(dataset=None, num=1):
if dataset:
for sample in dataset.take(num):
images, masks = sample["pixel_values"], sample["labels"]
masks = tf.expand_dims(masks, -1)
pred_masks = model.predict(images).logits
images = tf.transpose(images, (0, 2, 3, 1))
display([images[0], masks[0], create_mask(pred_masks)])
else:
display(
[
sample_image,
sample_mask,
create_mask(model.predict(tf.expand_dims(sample_image, 0))),
]
)
class DisplayCallback(tf.keras.callbacks.Callback):
def __init__(self, dataset, **kwargs):
super().__init__(**kwargs)
self.dataset = dataset
def on_epoch_end(self, epoch, logs=None):
clear_output(wait=True)
show_predictions(self.dataset)
print("\nSample Prediction after epoch {}\n".format(epoch + 1))
"""
## Train model
"""
# Increase the number of epochs if the results are not of expected quality.
epochs = 5
history = model.fit(
train_ds,
validation_data=test_ds,
callbacks=[DisplayCallback(test_ds)],
epochs=epochs,
)
"""
## Inference
We perform inference on a few samples from the test set.
"""
show_predictions(test_ds, 5)
"""
## Conclusion
In this example, we learned how to fine-tune a SegFormer model variant on a custom
dataset for semantic segmentation. In the interest of brevity, the example
was kept short. However, there are a couple of things you can try out further:
* Incorporate data augmentation to potentially improve the results.
* Use a larger SegFormer model checkpoint to see how the results are affected.
* Push the fine-tuned model to the Hugging Face Hub for sharing with the community easily.
You can do so just by doing `model.push_to_hub("your-username/your-awesome-model")`.
And then you can load the model by doing
`TFSegformerForSemanticSegmentation.from_pretrained("your-username/your-awesome-model")`.
[Here](https://github.com/huggingface/notebooks/blob/main/examples/semantic_segmentation-tf.ipynb)
is an end-to-end example if you're looking for a reference.
* If you'd rather push the model checkpoints to the Hub as the model is being
fine-tuned, you can instead use the `PushToHubCallback` Keras callback.
[Here](https://gist.github.com/sayakpaul/f474ffb01f0cdcc8ba239357965c3bca) is an example.
[Here](https://huggingface.co/sayakpaul/mit-b0-finetuned-pets) is an example of a model
repository that was created using this callback.
"""
| keras-io/examples/vision/segformer.py/0 | {
"file_path": "keras-io/examples/vision/segformer.py",
"repo_id": "keras-io",
"token_count": 3315
} | 103 |
<jupyter_start><jupyter_text>Getting started with KerasTuner**Authors:** Luca Invernizzi, James Long, Francois Chollet, Tom O'Malley, Haifeng Jin**Date created:** 2019/05/31**Last modified:** 2021/10/27**Description:** The basics of using KerasTuner to tune model hyperparameters.<jupyter_code>!pip install keras-tuner -q<jupyter_output><empty_output><jupyter_text>IntroductionKerasTuner is a general-purpose hyperparameter tuning library. It has strongintegration with Keras workflows, but it isn't limited to them: you could useit to tune scikit-learn models, or anything else. In this tutorial, you willsee how to tune model architecture, training process, and data preprocessingsteps with KerasTuner. Let's start from a simple example. Tune the model architectureThe first thing we need to do is writing a function, which returns a compiledKeras model. It takes an argument `hp` for defining the hyperparameters whilebuilding the model. Define the search spaceIn the following code example, we define a Keras model with two `Dense` layers.We want to tune the number of units in the first `Dense` layer. We just definean integer hyperparameter with `hp.Int('units', min_value=32, max_value=512, step=32)`,whose range is from 32 to 512 inclusive. When sampling from it, the minimumstep for walking through the interval is 32.<jupyter_code>import keras
from keras import layers
def build_model(hp):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(
layers.Dense(
# Define the hyperparameter.
units=hp.Int("units", min_value=32, max_value=512, step=32),
activation="relu",
)
)
model.add(layers.Dense(10, activation="softmax"))
model.compile(
optimizer="adam",
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model<jupyter_output><empty_output><jupyter_text>You can quickly test if the model builds successfully.<jupyter_code>import keras_tuner
build_model(keras_tuner.HyperParameters())<jupyter_output><empty_output><jupyter_text>There are many other types of hyperparameters as well. We can define multiplehyperparameters in the function. In the following code, we tune whether touse a `Dropout` layer with `hp.Boolean()`, tune which activation function touse with `hp.Choice()`, tune the learning rate of the optimizer with`hp.Float()`.<jupyter_code>def build_model(hp):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(
layers.Dense(
# Tune number of units.
units=hp.Int("units", min_value=32, max_value=512, step=32),
# Tune the activation function to use.
activation=hp.Choice("activation", ["relu", "tanh"]),
)
)
# Tune whether to use dropout.
if hp.Boolean("dropout"):
model.add(layers.Dropout(rate=0.25))
model.add(layers.Dense(10, activation="softmax"))
# Define the optimizer learning rate as a hyperparameter.
learning_rate = hp.Float("lr", min_value=1e-4, max_value=1e-2, sampling="log")
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
build_model(keras_tuner.HyperParameters())<jupyter_output><empty_output><jupyter_text>As shown below, the hyperparameters are actual values. In fact, they are justfunctions returning actual values. For example, `hp.Int()` returns an `int`value. Therefore, you can put them into variables, for loops, or ifconditions.<jupyter_code>hp = keras_tuner.HyperParameters()
print(hp.Int("units", min_value=32, max_value=512, step=32))<jupyter_output><empty_output><jupyter_text>You can also define the hyperparameters in advance and keep your Keras code ina separate function.<jupyter_code>def call_existing_code(units, activation, dropout, lr):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(layers.Dense(units=units, activation=activation))
if dropout:
model.add(layers.Dropout(rate=0.25))
model.add(layers.Dense(10, activation="softmax"))
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=lr),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
def build_model(hp):
units = hp.Int("units", min_value=32, max_value=512, step=32)
activation = hp.Choice("activation", ["relu", "tanh"])
dropout = hp.Boolean("dropout")
lr = hp.Float("lr", min_value=1e-4, max_value=1e-2, sampling="log")
# call existing model-building code with the hyperparameter values.
model = call_existing_code(
units=units, activation=activation, dropout=dropout, lr=lr
)
return model
build_model(keras_tuner.HyperParameters())<jupyter_output><empty_output><jupyter_text>Each of the hyperparameters is uniquely identified by its name (the firstargument). To tune the number of units in different `Dense` layers separatelyas different hyperparameters, we give them different names as `f"units_{i}"`.Notably, this is also an example of creating conditional hyperparameters.There are many hyperparameters specifying the number of units in the `Dense`layers. The number of such hyperparameters is decided by the number of layers,which is also a hyperparameter. Therefore, the total number of hyperparametersused may be different from trial to trial. Some hyperparameter is only usedwhen a certain condition is satisfied. For example, `units_3` is only usedwhen `num_layers` is larger than 3. With KerasTuner, you can easily definesuch hyperparameters dynamically while creating the model.<jupyter_code>def build_model(hp):
model = keras.Sequential()
model.add(layers.Flatten())
# Tune the number of layers.
for i in range(hp.Int("num_layers", 1, 3)):
model.add(
layers.Dense(
# Tune number of units separately.
units=hp.Int(f"units_{i}", min_value=32, max_value=512, step=32),
activation=hp.Choice("activation", ["relu", "tanh"]),
)
)
if hp.Boolean("dropout"):
model.add(layers.Dropout(rate=0.25))
model.add(layers.Dense(10, activation="softmax"))
learning_rate = hp.Float("lr", min_value=1e-4, max_value=1e-2, sampling="log")
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
build_model(keras_tuner.HyperParameters())<jupyter_output><empty_output><jupyter_text>Start the searchAfter defining the search space, we need to select a tuner class to run thesearch. You may choose from `RandomSearch`, `BayesianOptimization` and`Hyperband`, which correspond to different tuning algorithms. Here we use`RandomSearch` as an example.To initialize the tuner, we need to specify several arguments in the initializer.* `hypermodel`. The model-building function, which is `build_model` in our case.* `objective`. The name of the objective to optimize (whether to minimize ormaximize is automatically inferred for built-in metrics). We will introduce howto use custom metrics later in this tutorial.* `max_trials`. The total number of trials to run during the search.* `executions_per_trial`. The number of models that should be built and fit foreach trial. Different trials have different hyperparameter values. Theexecutions within the same trial have the same hyperparameter values. Thepurpose of having multiple executions per trial is to reduce results varianceand therefore be able to more accurately assess the performance of a model. Ifyou want to get results faster, you could set `executions_per_trial=1` (singleround of training for each model configuration).* `overwrite`. Control whether to overwrite the previous results in the samedirectory or resume the previous search instead. Here we set `overwrite=True`to start a new search and ignore any previous results.* `directory`. A path to a directory for storing the search results.* `project_name`. The name of the sub-directory in the `directory`.<jupyter_code>tuner = keras_tuner.RandomSearch(
hypermodel=build_model,
objective="val_accuracy",
max_trials=3,
executions_per_trial=2,
overwrite=True,
directory="my_dir",
project_name="helloworld",
)<jupyter_output><empty_output><jupyter_text>You can print a summary of the search space:<jupyter_code>tuner.search_space_summary()<jupyter_output><empty_output><jupyter_text>Before starting the search, let's prepare the MNIST dataset.<jupyter_code>import keras
import numpy as np
(x, y), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x[:-10000]
x_val = x[-10000:]
y_train = y[:-10000]
y_val = y[-10000:]
x_train = np.expand_dims(x_train, -1).astype("float32") / 255.0
x_val = np.expand_dims(x_val, -1).astype("float32") / 255.0
x_test = np.expand_dims(x_test, -1).astype("float32") / 255.0
num_classes = 10
y_train = keras.utils.to_categorical(y_train, num_classes)
y_val = keras.utils.to_categorical(y_val, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)<jupyter_output><empty_output><jupyter_text>Then, start the search for the best hyperparameter configuration.All the arguments passed to `search` is passed to `model.fit()` in eachexecution. Remember to pass `validation_data` to evaluate the model.<jupyter_code>tuner.search(x_train, y_train, epochs=2, validation_data=(x_val, y_val))<jupyter_output><empty_output><jupyter_text>During the `search`, the model-building function is called with differenthyperparameter values in different trial. In each trial, the tuner wouldgenerate a new set of hyperparameter values to build the model. The model isthen fit and evaluated. The metrics are recorded. The tuner progressivelyexplores the space and finally finds a good set of hyperparameter values. Query the resultsWhen search is over, you can retrieve the best model(s). The model is saved atits best performing epoch evaluated on the `validation_data`.<jupyter_code># Get the top 2 models.
models = tuner.get_best_models(num_models=2)
best_model = models[0]
best_model.summary()<jupyter_output><empty_output><jupyter_text>You can also print a summary of the search results.<jupyter_code>tuner.results_summary()<jupyter_output><empty_output><jupyter_text>You will find detailed logs, checkpoints, etc, in the folder`my_dir/helloworld`, i.e. `directory/project_name`.You can also visualize the tuning results using TensorBoard and HParams plugin.For more information, please following[this link](https://keras.io/guides/keras_tuner/visualize_tuning/). Retrain the modelIf you want to train the model with the entire dataset, you may retrieve thebest hyperparameters and retrain the model by yourself.<jupyter_code># Get the top 2 hyperparameters.
best_hps = tuner.get_best_hyperparameters(5)
# Build the model with the best hp.
model = build_model(best_hps[0])
# Fit with the entire dataset.
x_all = np.concatenate((x_train, x_val))
y_all = np.concatenate((y_train, y_val))
model.fit(x=x_all, y=y_all, epochs=1)<jupyter_output><empty_output><jupyter_text>Tune model trainingTo tune the model building process, we need to subclass the `HyperModel` class,which also makes it easy to share and reuse hypermodels.We need to override `HyperModel.build()` and `HyperModel.fit()` to tune themodel building and training process respectively. A `HyperModel.build()`method is the same as the model-building function, which creates a Keras modelusing the hyperparameters and returns it.In `HyperModel.fit()`, you can access the model returned by`HyperModel.build()`,`hp` and all the arguments passed to `search()`. You needto train the model and return the training history.In the following code, we will tune the `shuffle` argument in `model.fit()`.It is generally not needed to tune the number of epochs because a built-incallback is passed to `model.fit()` to save the model at its best epochevaluated by the `validation_data`.> **Note**: The `**kwargs` should always be passed to `model.fit()` because itcontains the callbacks for model saving and tensorboard plugins.<jupyter_code>class MyHyperModel(keras_tuner.HyperModel):
def build(self, hp):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(
layers.Dense(
units=hp.Int("units", min_value=32, max_value=512, step=32),
activation="relu",
)
)
model.add(layers.Dense(10, activation="softmax"))
model.compile(
optimizer="adam",
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
def fit(self, hp, model, *args, **kwargs):
return model.fit(
*args,
# Tune whether to shuffle the data in each epoch.
shuffle=hp.Boolean("shuffle"),
**kwargs,
)<jupyter_output><empty_output><jupyter_text>Again, we can do a quick check to see if the code works correctly.<jupyter_code>hp = keras_tuner.HyperParameters()
hypermodel = MyHyperModel()
model = hypermodel.build(hp)
hypermodel.fit(hp, model, np.random.rand(100, 28, 28), np.random.rand(100, 10))<jupyter_output><empty_output><jupyter_text>Tune data preprocessingTo tune data preprocessing, we just add an additional step in`HyperModel.fit()`, where we can access the dataset from the arguments. In thefollowing code, we tune whether to normalize the data before training themodel. This time we explicitly put `x` and `y` in the function signaturebecause we need to use them.<jupyter_code>class MyHyperModel(keras_tuner.HyperModel):
def build(self, hp):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(
layers.Dense(
units=hp.Int("units", min_value=32, max_value=512, step=32),
activation="relu",
)
)
model.add(layers.Dense(10, activation="softmax"))
model.compile(
optimizer="adam",
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
def fit(self, hp, model, x, y, **kwargs):
if hp.Boolean("normalize"):
x = layers.Normalization()(x)
return model.fit(
x,
y,
# Tune whether to shuffle the data in each epoch.
shuffle=hp.Boolean("shuffle"),
**kwargs,
)
hp = keras_tuner.HyperParameters()
hypermodel = MyHyperModel()
model = hypermodel.build(hp)
hypermodel.fit(hp, model, np.random.rand(100, 28, 28), np.random.rand(100, 10))<jupyter_output><empty_output><jupyter_text>If a hyperparameter is used both in `build()` and `fit()`, you can define it in`build()` and use `hp.get(hp_name)` to retrieve it in `fit()`. We use theimage size as an example. It is both used as the input shape in `build()`, andused by the data preprocessing step to crop the images in `fit()`.<jupyter_code>class MyHyperModel(keras_tuner.HyperModel):
def build(self, hp):
image_size = hp.Int("image_size", 10, 28)
inputs = keras.Input(shape=(image_size, image_size))
outputs = layers.Flatten()(inputs)
outputs = layers.Dense(
units=hp.Int("units", min_value=32, max_value=512, step=32),
activation="relu",
)(outputs)
outputs = layers.Dense(10, activation="softmax")(outputs)
model = keras.Model(inputs, outputs)
model.compile(
optimizer="adam",
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
def fit(self, hp, model, x, y, validation_data=None, **kwargs):
if hp.Boolean("normalize"):
x = layers.Normalization()(x)
image_size = hp.get("image_size")
cropped_x = x[:, :image_size, :image_size, :]
if validation_data:
x_val, y_val = validation_data
cropped_x_val = x_val[:, :image_size, :image_size, :]
validation_data = (cropped_x_val, y_val)
return model.fit(
cropped_x,
y,
# Tune whether to shuffle the data in each epoch.
shuffle=hp.Boolean("shuffle"),
validation_data=validation_data,
**kwargs,
)
tuner = keras_tuner.RandomSearch(
MyHyperModel(),
objective="val_accuracy",
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="tune_hypermodel",
)
tuner.search(x_train, y_train, epochs=2, validation_data=(x_val, y_val))<jupyter_output><empty_output><jupyter_text>Retrain the modelUsing `HyperModel` also allows you to retrain the best model by yourself.<jupyter_code>hypermodel = MyHyperModel()
best_hp = tuner.get_best_hyperparameters()[0]
model = hypermodel.build(best_hp)
hypermodel.fit(best_hp, model, x_all, y_all, epochs=1)<jupyter_output><empty_output><jupyter_text>Specify the tuning objectiveIn all previous examples, we all just used validation accuracy(`"val_accuracy"`) as the tuning objective to select the best model. Actually,you can use any metric as the objective. The most commonly used metric is`"val_loss"`, which is the validation loss. Built-in metric as the objectiveThere are many other built-in metrics in Keras you can use as the objective.Here is [a list of the built-in metrics](https://keras.io/api/metrics/).To use a built-in metric as the objective, you need to follow these steps:* Compile the model with the the built-in metric. For example, you want to use`MeanAbsoluteError()`. You need to compile the model with`metrics=[MeanAbsoluteError()]`. You may also use its name string instead:`metrics=["mean_absolute_error"]`. The name string of the metric is alwaysthe snake case of the class name.* Identify the objective name string. The name string of the objective isalways in the format of `f"val_{metric_name_string}"`. For example, theobjective name string of mean squared error evaluated on the validation datashould be `"val_mean_absolute_error"`.* Wrap it into `keras_tuner.Objective`. We usually need to wrap the objectiveinto a `keras_tuner.Objective` object to specify the direction to optimize theobjective. For example, we want to minimize the mean squared error, we can use`keras_tuner.Objective("val_mean_absolute_error", "min")`. The direction shouldbe either `"min"` or `"max"`.* Pass the wrapped objective to the tuner.You can see the following barebone code example.<jupyter_code>def build_regressor(hp):
model = keras.Sequential(
[
layers.Dense(units=hp.Int("units", 32, 128, 32), activation="relu"),
layers.Dense(units=1),
]
)
model.compile(
optimizer="adam",
loss="mean_squared_error",
# Objective is one of the metrics.
metrics=[keras.metrics.MeanAbsoluteError()],
)
return model
tuner = keras_tuner.RandomSearch(
hypermodel=build_regressor,
# The objective name and direction.
# Name is the f"val_{snake_case_metric_class_name}".
objective=keras_tuner.Objective("val_mean_absolute_error", direction="min"),
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="built_in_metrics",
)
tuner.search(
x=np.random.rand(100, 10),
y=np.random.rand(100, 1),
validation_data=(np.random.rand(20, 10), np.random.rand(20, 1)),
)
tuner.results_summary()<jupyter_output><empty_output><jupyter_text>Custom metric as the objectiveYou may implement your own metric and use it as the hyperparameter searchobjective. Here, we use mean squared error (MSE) as an example. First, weimplement the MSE metric by subclassing `keras.metrics.Metric`. Remember togive a name to your metric using the `name` argument of `super().__init__()`,which will be used later. Note: MSE is actually a built-in metric, which can beimported with `keras.metrics.MeanSquaredError`. This is just an example to showhow to use a custom metric as the hyperparameter search objective.For more information about implementing custom metrics, please see [thistutorial](https://keras.io/api/metrics/creating-custom-metrics). If you wouldlike a metric with a different function signature than `update_state(y_true,y_pred, sample_weight)`, you can override the `train_step()` method of yourmodel following [thistutorial](https://keras.io/guides/customizing_what_happens_in_fit/going-lowerlevel).<jupyter_code>from keras import ops
class CustomMetric(keras.metrics.Metric):
def __init__(self, **kwargs):
# Specify the name of the metric as "custom_metric".
super().__init__(name="custom_metric", **kwargs)
self.sum = self.add_weight(name="sum", initializer="zeros")
self.count = self.add_weight(name="count", dtype="int32", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
values = ops.square(y_true - y_pred)
count = ops.shape(y_true)[0]
if sample_weight is not None:
sample_weight = ops.cast(sample_weight, self.dtype)
values *= sample_weight
count *= sample_weight
self.sum.assign_add(ops.sum(values))
self.count.assign_add(count)
def result(self):
return self.sum / ops.cast(self.count, "float32")
def reset_states(self):
self.sum.assign(0)
self.count.assign(0)<jupyter_output><empty_output><jupyter_text>Run the search with the custom objective.<jupyter_code>def build_regressor(hp):
model = keras.Sequential(
[
layers.Dense(units=hp.Int("units", 32, 128, 32), activation="relu"),
layers.Dense(units=1),
]
)
model.compile(
optimizer="adam",
loss="mean_squared_error",
# Put custom metric into the metrics.
metrics=[CustomMetric()],
)
return model
tuner = keras_tuner.RandomSearch(
hypermodel=build_regressor,
# Specify the name and direction of the objective.
objective=keras_tuner.Objective("val_custom_metric", direction="min"),
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="custom_metrics",
)
tuner.search(
x=np.random.rand(100, 10),
y=np.random.rand(100, 1),
validation_data=(np.random.rand(20, 10), np.random.rand(20, 1)),
)
tuner.results_summary()<jupyter_output><empty_output><jupyter_text>If your custom objective is hard to put into a custom metric, you can alsoevaluate the model by yourself in `HyperModel.fit()` and return the objectivevalue. The objective value would be minimized by default. In this case, youdon't need to specify the `objective` when initializing the tuner. However, inthis case, the metric value will not be tracked in the Keras logs by onlyKerasTuner logs. Therefore, these values would not be displayed by anyTensorBoard view using the Keras metrics.<jupyter_code>class HyperRegressor(keras_tuner.HyperModel):
def build(self, hp):
model = keras.Sequential(
[
layers.Dense(units=hp.Int("units", 32, 128, 32), activation="relu"),
layers.Dense(units=1),
]
)
model.compile(
optimizer="adam",
loss="mean_squared_error",
)
return model
def fit(self, hp, model, x, y, validation_data, **kwargs):
model.fit(x, y, **kwargs)
x_val, y_val = validation_data
y_pred = model.predict(x_val)
# Return a single float to minimize.
return np.mean(np.abs(y_pred - y_val))
tuner = keras_tuner.RandomSearch(
hypermodel=HyperRegressor(),
# No objective to specify.
# Objective is the return value of `HyperModel.fit()`.
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="custom_eval",
)
tuner.search(
x=np.random.rand(100, 10),
y=np.random.rand(100, 1),
validation_data=(np.random.rand(20, 10), np.random.rand(20, 1)),
)
tuner.results_summary()<jupyter_output><empty_output><jupyter_text>If you have multiple metrics to track in KerasTuner, but only use one of themas the objective, you can return a dictionary, whose keys are the metric namesand the values are the metric values, for example, return `{"metric_a": 1.0,"metric_b": 2.0}`. Use one of the keys as the objective name, for example,`keras_tuner.Objective("metric_a", "min")`.<jupyter_code>class HyperRegressor(keras_tuner.HyperModel):
def build(self, hp):
model = keras.Sequential(
[
layers.Dense(units=hp.Int("units", 32, 128, 32), activation="relu"),
layers.Dense(units=1),
]
)
model.compile(
optimizer="adam",
loss="mean_squared_error",
)
return model
def fit(self, hp, model, x, y, validation_data, **kwargs):
model.fit(x, y, **kwargs)
x_val, y_val = validation_data
y_pred = model.predict(x_val)
# Return a dictionary of metrics for KerasTuner to track.
return {
"metric_a": -np.mean(np.abs(y_pred - y_val)),
"metric_b": np.mean(np.square(y_pred - y_val)),
}
tuner = keras_tuner.RandomSearch(
hypermodel=HyperRegressor(),
# Objective is one of the keys.
# Maximize the negative MAE, equivalent to minimize MAE.
objective=keras_tuner.Objective("metric_a", "max"),
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="custom_eval_dict",
)
tuner.search(
x=np.random.rand(100, 10),
y=np.random.rand(100, 1),
validation_data=(np.random.rand(20, 10), np.random.rand(20, 1)),
)
tuner.results_summary()<jupyter_output><empty_output><jupyter_text>Tune end-to-end workflowsIn some cases, it is hard to align your code into build and fit functions. Youcan also keep your end-to-end workflow in one place by overriding`Tuner.run_trial()`, which gives you full control of a trial. You can see itas a black-box optimizer for anything. Tune any functionFor example, you can find a value of `x`, which minimizes `f(x)=x*x+1`. In thefollowing code, we just define `x` as a hyperparameter, and return `f(x)` asthe objective value. The `hypermodel` and `objective` argument for initializingthe tuner can be omitted.<jupyter_code>class MyTuner(keras_tuner.RandomSearch):
def run_trial(self, trial, *args, **kwargs):
# Get the hp from trial.
hp = trial.hyperparameters
# Define "x" as a hyperparameter.
x = hp.Float("x", min_value=-1.0, max_value=1.0)
# Return the objective value to minimize.
return x * x + 1
tuner = MyTuner(
# No hypermodel or objective specified.
max_trials=20,
overwrite=True,
directory="my_dir",
project_name="tune_anything",
)
# No need to pass anything to search()
# unless you use them in run_trial().
tuner.search()
print(tuner.get_best_hyperparameters()[0].get("x"))<jupyter_output><empty_output><jupyter_text>Keep Keras code separateYou can keep all your Keras code unchanged and use KerasTuner to tune it. Itis useful if you cannot modify the Keras code for some reason.It also gives you more flexibility. You don't have to separate the modelbuilding and training code apart. However, this workflow would not help yousave the model or connect with the TensorBoard plugins.To save the model, you can use `trial.trial_id`, which is a string to uniquelyidentify a trial, to construct different paths to save the models fromdifferent trials.<jupyter_code>import os
def keras_code(units, optimizer, saving_path):
# Build model
model = keras.Sequential(
[
layers.Dense(units=units, activation="relu"),
layers.Dense(units=1),
]
)
model.compile(
optimizer=optimizer,
loss="mean_squared_error",
)
# Prepare data
x_train = np.random.rand(100, 10)
y_train = np.random.rand(100, 1)
x_val = np.random.rand(20, 10)
y_val = np.random.rand(20, 1)
# Train & eval model
model.fit(x_train, y_train)
# Save model
model.save(saving_path)
# Return a single float as the objective value.
# You may also return a dictionary
# of {metric_name: metric_value}.
y_pred = model.predict(x_val)
return np.mean(np.abs(y_pred - y_val))
class MyTuner(keras_tuner.RandomSearch):
def run_trial(self, trial, **kwargs):
hp = trial.hyperparameters
return keras_code(
units=hp.Int("units", 32, 128, 32),
optimizer=hp.Choice("optimizer", ["adam", "adadelta"]),
saving_path=os.path.join("/tmp", f"{trial.trial_id}.keras"),
)
tuner = MyTuner(
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="keep_code_separate",
)
tuner.search()
# Retraining the model
best_hp = tuner.get_best_hyperparameters()[0]
keras_code(**best_hp.values, saving_path="/tmp/best_model.keras")<jupyter_output><empty_output><jupyter_text>KerasTuner includes pre-made tunable applications: HyperResNet and HyperXceptionThese are ready-to-use hypermodels for computer vision.They come pre-compiled with `loss="categorical_crossentropy"` and`metrics=["accuracy"]`.<jupyter_code>from keras_tuner.applications import HyperResNet
hypermodel = HyperResNet(input_shape=(28, 28, 1), classes=10)
tuner = keras_tuner.RandomSearch(
hypermodel,
objective="val_accuracy",
max_trials=2,
overwrite=True,
directory="my_dir",
project_name="built_in_hypermodel",
)<jupyter_output><empty_output> | keras-io/guides/ipynb/keras_tuner/getting_started.ipynb/0 | {
"file_path": "keras-io/guides/ipynb/keras_tuner/getting_started.ipynb",
"repo_id": "keras-io",
"token_count": 10855
} | 104 |
"""
Title: Getting started with KerasTuner
Authors: Luca Invernizzi, James Long, Francois Chollet, Tom O'Malley, Haifeng Jin
Date created: 2019/05/31
Last modified: 2021/10/27
Description: The basics of using KerasTuner to tune model hyperparameters.
Accelerator: GPU
"""
"""shell
pip install keras-tuner -q
"""
"""
## Introduction
KerasTuner is a general-purpose hyperparameter tuning library. It has strong
integration with Keras workflows, but it isn't limited to them: you could use
it to tune scikit-learn models, or anything else. In this tutorial, you will
see how to tune model architecture, training process, and data preprocessing
steps with KerasTuner. Let's start from a simple example.
## Tune the model architecture
The first thing we need to do is writing a function, which returns a compiled
Keras model. It takes an argument `hp` for defining the hyperparameters while
building the model.
### Define the search space
In the following code example, we define a Keras model with two `Dense` layers.
We want to tune the number of units in the first `Dense` layer. We just define
an integer hyperparameter with `hp.Int('units', min_value=32, max_value=512, step=32)`,
whose range is from 32 to 512 inclusive. When sampling from it, the minimum
step for walking through the interval is 32.
"""
import keras
from keras import layers
def build_model(hp):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(
layers.Dense(
# Define the hyperparameter.
units=hp.Int("units", min_value=32, max_value=512, step=32),
activation="relu",
)
)
model.add(layers.Dense(10, activation="softmax"))
model.compile(
optimizer="adam",
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
"""
You can quickly test if the model builds successfully.
"""
import keras_tuner
build_model(keras_tuner.HyperParameters())
"""
There are many other types of hyperparameters as well. We can define multiple
hyperparameters in the function. In the following code, we tune whether to
use a `Dropout` layer with `hp.Boolean()`, tune which activation function to
use with `hp.Choice()`, tune the learning rate of the optimizer with
`hp.Float()`.
"""
def build_model(hp):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(
layers.Dense(
# Tune number of units.
units=hp.Int("units", min_value=32, max_value=512, step=32),
# Tune the activation function to use.
activation=hp.Choice("activation", ["relu", "tanh"]),
)
)
# Tune whether to use dropout.
if hp.Boolean("dropout"):
model.add(layers.Dropout(rate=0.25))
model.add(layers.Dense(10, activation="softmax"))
# Define the optimizer learning rate as a hyperparameter.
learning_rate = hp.Float("lr", min_value=1e-4, max_value=1e-2, sampling="log")
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
build_model(keras_tuner.HyperParameters())
"""
As shown below, the hyperparameters are actual values. In fact, they are just
functions returning actual values. For example, `hp.Int()` returns an `int`
value. Therefore, you can put them into variables, for loops, or if
conditions.
"""
hp = keras_tuner.HyperParameters()
print(hp.Int("units", min_value=32, max_value=512, step=32))
"""
You can also define the hyperparameters in advance and keep your Keras code in
a separate function.
"""
def call_existing_code(units, activation, dropout, lr):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(layers.Dense(units=units, activation=activation))
if dropout:
model.add(layers.Dropout(rate=0.25))
model.add(layers.Dense(10, activation="softmax"))
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=lr),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
def build_model(hp):
units = hp.Int("units", min_value=32, max_value=512, step=32)
activation = hp.Choice("activation", ["relu", "tanh"])
dropout = hp.Boolean("dropout")
lr = hp.Float("lr", min_value=1e-4, max_value=1e-2, sampling="log")
# call existing model-building code with the hyperparameter values.
model = call_existing_code(
units=units, activation=activation, dropout=dropout, lr=lr
)
return model
build_model(keras_tuner.HyperParameters())
"""
Each of the hyperparameters is uniquely identified by its name (the first
argument). To tune the number of units in different `Dense` layers separately
as different hyperparameters, we give them different names as `f"units_{i}"`.
Notably, this is also an example of creating conditional hyperparameters.
There are many hyperparameters specifying the number of units in the `Dense`
layers. The number of such hyperparameters is decided by the number of layers,
which is also a hyperparameter. Therefore, the total number of hyperparameters
used may be different from trial to trial. Some hyperparameters are only used
when a certain condition is satisfied. For example, `units_3` is only used
when `num_layers` is larger than 3. With KerasTuner, you can easily define
such hyperparameters dynamically while creating the model.
"""
def build_model(hp):
model = keras.Sequential()
model.add(layers.Flatten())
# Tune the number of layers.
for i in range(hp.Int("num_layers", 1, 3)):
model.add(
layers.Dense(
# Tune number of units separately.
units=hp.Int(f"units_{i}", min_value=32, max_value=512, step=32),
activation=hp.Choice("activation", ["relu", "tanh"]),
)
)
if hp.Boolean("dropout"):
model.add(layers.Dropout(rate=0.25))
model.add(layers.Dense(10, activation="softmax"))
learning_rate = hp.Float("lr", min_value=1e-4, max_value=1e-2, sampling="log")
model.compile(
optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
build_model(keras_tuner.HyperParameters())
"""
### Start the search
After defining the search space, we need to select a tuner class to run the
search. You may choose from `RandomSearch`, `BayesianOptimization` and
`Hyperband`, which correspond to different tuning algorithms. Here we use
`RandomSearch` as an example.
To initialize the tuner, we need to specify several arguments in the initializer.
* `hypermodel`. The model-building function, which is `build_model` in our case.
* `objective`. The name of the objective to optimize (whether to minimize or
maximize is automatically inferred for built-in metrics). We will introduce how
to use custom metrics later in this tutorial.
* `max_trials`. The total number of trials to run during the search.
* `executions_per_trial`. The number of models that should be built and fit for
each trial. Different trials have different hyperparameter values. The
executions within the same trial have the same hyperparameter values. The
purpose of having multiple executions per trial is to reduce results variance
and therefore be able to more accurately assess the performance of a model. If
you want to get results faster, you could set `executions_per_trial=1` (single
round of training for each model configuration).
* `overwrite`. Control whether to overwrite the previous results in the same
directory or resume the previous search instead. Here we set `overwrite=True`
to start a new search and ignore any previous results.
* `directory`. A path to a directory for storing the search results.
* `project_name`. The name of the sub-directory in the `directory`.
"""
tuner = keras_tuner.RandomSearch(
hypermodel=build_model,
objective="val_accuracy",
max_trials=3,
executions_per_trial=2,
overwrite=True,
directory="my_dir",
project_name="helloworld",
)
"""
You can print a summary of the search space:
"""
tuner.search_space_summary()
"""
Before starting the search, let's prepare the MNIST dataset.
"""
import keras
import numpy as np
(x, y), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train = x[:-10000]
x_val = x[-10000:]
y_train = y[:-10000]
y_val = y[-10000:]
x_train = np.expand_dims(x_train, -1).astype("float32") / 255.0
x_val = np.expand_dims(x_val, -1).astype("float32") / 255.0
x_test = np.expand_dims(x_test, -1).astype("float32") / 255.0
num_classes = 10
y_train = keras.utils.to_categorical(y_train, num_classes)
y_val = keras.utils.to_categorical(y_val, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
"""
Then, start the search for the best hyperparameter configuration.
All the arguments passed to `search` are passed to `model.fit()` in each
execution. Remember to pass `validation_data` to evaluate the model.
"""
tuner.search(x_train, y_train, epochs=2, validation_data=(x_val, y_val))
"""
During the `search`, the model-building function is called with different
hyperparameter values in different trials. In each trial, the tuner generates
a new set of hyperparameter values to build the model. The model is
then fit and evaluated. The metrics are recorded. The tuner progressively
explores the space and finally finds a good set of hyperparameter values.
### Query the results
When the search is over, you can retrieve the best model(s). The model is saved
at its best-performing epoch, as evaluated on the `validation_data`.
"""
# Get the top 2 models.
models = tuner.get_best_models(num_models=2)
best_model = models[0]
best_model.summary()
"""
You can also print a summary of the search results.
"""
tuner.results_summary()
"""
You will find detailed logs, checkpoints, etc., in the folder
`my_dir/helloworld`, i.e. `directory/project_name`.
You can also visualize the tuning results using TensorBoard and the HParams plugin.
For more information, please follow
[this link](https://keras.io/guides/keras_tuner/visualize_tuning/).
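For example (a minimal sketch, not executed here), you can pass a
`keras.callbacks.TensorBoard` callback to `search()` so that KerasTuner writes the
per-trial metrics and hyperparameter values for TensorBoard and its HParams plugin,
and then launch TensorBoard pointing at that log directory
(e.g. `tensorboard --logdir /tmp/tb_logs`); the path below is just a placeholder:

```python
tuner.search(
    x_train,
    y_train,
    epochs=2,
    validation_data=(x_val, y_val),
    # KerasTuner hooks into this callback to log each trial.
    callbacks=[keras.callbacks.TensorBoard("/tmp/tb_logs")],
)
```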
### Retrain the model
If you want to train the model with the entire dataset, you may retrieve the
best hyperparameters and retrain the model by yourself.
"""
# Get the top 5 hyperparameters.
best_hps = tuner.get_best_hyperparameters(5)
# Build the model with the best hp.
model = build_model(best_hps[0])
# Fit with the entire dataset.
x_all = np.concatenate((x_train, x_val))
y_all = np.concatenate((y_train, y_val))
model.fit(x=x_all, y=y_all, epochs=1)
"""
## Tune model training
To tune the model training process, we need to subclass the `HyperModel` class,
which also makes it easy to share and reuse hypermodels.
We need to override `HyperModel.build()` and `HyperModel.fit()` to tune the
model building and training process respectively. A `HyperModel.build()`
method is the same as the model-building function, which creates a Keras model
using the hyperparameters and returns it.
In `HyperModel.fit()`, you can access the model returned by
`HyperModel.build()`, `hp`, and all the arguments passed to `search()`. You need
to train the model and return the training history.
In the following code, we will tune the `shuffle` argument in `model.fit()`.
It is generally not necessary to tune the number of epochs, because a built-in
callback is passed to `model.fit()` to save the model at its best epoch, as
evaluated on the `validation_data`.
> **Note**: The `**kwargs` should always be passed to `model.fit()` because it
contains the callbacks for model saving and the TensorBoard plugins.
"""
class MyHyperModel(keras_tuner.HyperModel):
def build(self, hp):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(
layers.Dense(
units=hp.Int("units", min_value=32, max_value=512, step=32),
activation="relu",
)
)
model.add(layers.Dense(10, activation="softmax"))
model.compile(
optimizer="adam",
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
def fit(self, hp, model, *args, **kwargs):
return model.fit(
*args,
# Tune whether to shuffle the data in each epoch.
shuffle=hp.Boolean("shuffle"),
**kwargs,
)
"""
Again, we can do a quick check to see if the code works correctly.
"""
hp = keras_tuner.HyperParameters()
hypermodel = MyHyperModel()
model = hypermodel.build(hp)
hypermodel.fit(hp, model, np.random.rand(100, 28, 28), np.random.rand(100, 10))
"""
## Tune data preprocessing
To tune data preprocessing, we just add an additional step in
`HyperModel.fit()`, where we can access the dataset from the arguments. In the
following code, we tune whether to normalize the data before training the
model. This time we explicitly put `x` and `y` in the function signature
because we need to use them.
"""
class MyHyperModel(keras_tuner.HyperModel):
def build(self, hp):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(
layers.Dense(
units=hp.Int("units", min_value=32, max_value=512, step=32),
activation="relu",
)
)
model.add(layers.Dense(10, activation="softmax"))
model.compile(
optimizer="adam",
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
def fit(self, hp, model, x, y, **kwargs):
if hp.Boolean("normalize"):
x = layers.Normalization()(x)
return model.fit(
x,
y,
# Tune whether to shuffle the data in each epoch.
shuffle=hp.Boolean("shuffle"),
**kwargs,
)
hp = keras_tuner.HyperParameters()
hypermodel = MyHyperModel()
model = hypermodel.build(hp)
hypermodel.fit(hp, model, np.random.rand(100, 28, 28), np.random.rand(100, 10))
"""
If a hyperparameter is used both in `build()` and `fit()`, you can define it in
`build()` and use `hp.get(hp_name)` to retrieve it in `fit()`. We use the
image size as an example. It is both used as the input shape in `build()`, and
used by the data preprocessing step to crop the images in `fit()`.
"""
class MyHyperModel(keras_tuner.HyperModel):
def build(self, hp):
image_size = hp.Int("image_size", 10, 28)
inputs = keras.Input(shape=(image_size, image_size))
outputs = layers.Flatten()(inputs)
outputs = layers.Dense(
units=hp.Int("units", min_value=32, max_value=512, step=32),
activation="relu",
)(outputs)
outputs = layers.Dense(10, activation="softmax")(outputs)
model = keras.Model(inputs, outputs)
model.compile(
optimizer="adam",
loss="categorical_crossentropy",
metrics=["accuracy"],
)
return model
def fit(self, hp, model, x, y, validation_data=None, **kwargs):
if hp.Boolean("normalize"):
x = layers.Normalization()(x)
image_size = hp.get("image_size")
cropped_x = x[:, :image_size, :image_size, :]
if validation_data:
x_val, y_val = validation_data
cropped_x_val = x_val[:, :image_size, :image_size, :]
validation_data = (cropped_x_val, y_val)
return model.fit(
cropped_x,
y,
# Tune whether to shuffle the data in each epoch.
shuffle=hp.Boolean("shuffle"),
validation_data=validation_data,
**kwargs,
)
tuner = keras_tuner.RandomSearch(
MyHyperModel(),
objective="val_accuracy",
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="tune_hypermodel",
)
tuner.search(x_train, y_train, epochs=2, validation_data=(x_val, y_val))
"""
### Retrain the model
Using `HyperModel` also allows you to retrain the best model by yourself.
"""
hypermodel = MyHyperModel()
best_hp = tuner.get_best_hyperparameters()[0]
model = hypermodel.build(best_hp)
hypermodel.fit(best_hp, model, x_all, y_all, epochs=1)
"""
## Specify the tuning objective
In all previous examples, we just used validation accuracy
(`"val_accuracy"`) as the tuning objective to select the best model. Actually,
you can use any metric as the objective. The most commonly used metric is
`"val_loss"`, which is the validation loss.
### Built-in metric as the objective
There are many other built-in metrics in Keras you can use as the objective.
Here is [a list of the built-in metrics](https://keras.io/api/metrics/).
To use a built-in metric as the objective, you need to follow these steps:
* Compile the model with the built-in metric. For example, if you want to use
`MeanAbsoluteError()`, you need to compile the model with
`metrics=[MeanAbsoluteError()]`. You may also use its name string instead:
`metrics=["mean_absolute_error"]`. The name string of the metric is always
the snake case of the class name.
* Identify the objective name string. The name string of the objective is
always in the format of `f"val_{metric_name_string}"`. For example, the
objective name string of mean absolute error evaluated on the validation data
should be `"val_mean_absolute_error"`.
* Wrap it into `keras_tuner.Objective`. We usually need to wrap the objective
into a `keras_tuner.Objective` object to specify the direction to optimize the
objective. For example, if we want to minimize the mean absolute error, we can
use `keras_tuner.Objective("val_mean_absolute_error", "min")`. The direction
should be either `"min"` or `"max"`.
* Pass the wrapped objective to the tuner.
You can see the following bare-bones code example.
"""
def build_regressor(hp):
model = keras.Sequential(
[
layers.Dense(units=hp.Int("units", 32, 128, 32), activation="relu"),
layers.Dense(units=1),
]
)
model.compile(
optimizer="adam",
loss="mean_squared_error",
# Objective is one of the metrics.
metrics=[keras.metrics.MeanAbsoluteError()],
)
return model
tuner = keras_tuner.RandomSearch(
hypermodel=build_regressor,
# The objective name and direction.
# Name is the f"val_{snake_case_metric_class_name}".
objective=keras_tuner.Objective("val_mean_absolute_error", direction="min"),
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="built_in_metrics",
)
tuner.search(
x=np.random.rand(100, 10),
y=np.random.rand(100, 1),
validation_data=(np.random.rand(20, 10), np.random.rand(20, 1)),
)
tuner.results_summary()
"""
### Custom metric as the objective
You may implement your own metric and use it as the hyperparameter search
objective. Here, we use mean squared error (MSE) as an example. First, we
implement the MSE metric by subclassing `keras.metrics.Metric`. Remember to
give a name to your metric using the `name` argument of `super().__init__()`,
which will be used later. Note: MSE is actually a built-in metric, which can be
imported with `keras.metrics.MeanSquaredError`. This is just an example to show
how to use a custom metric as the hyperparameter search objective.
For more information about implementing custom metrics, please see [this
tutorial](https://keras.io/api/metrics/#creating-custom-metrics). If you would
like a metric with a different function signature than `update_state(y_true,
y_pred, sample_weight)`, you can override the `train_step()` method of your
model following [this
tutorial](https://keras.io/guides/customizing_what_happens_in_fit/#going-lowerlevel).
"""
from keras import ops
class CustomMetric(keras.metrics.Metric):
def __init__(self, **kwargs):
# Specify the name of the metric as "custom_metric".
super().__init__(name="custom_metric", **kwargs)
self.sum = self.add_weight(name="sum", initializer="zeros")
self.count = self.add_weight(name="count", dtype="int32", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
values = ops.square(y_true - y_pred)
count = ops.shape(y_true)[0]
if sample_weight is not None:
sample_weight = ops.cast(sample_weight, self.dtype)
values *= sample_weight
count *= sample_weight
self.sum.assign_add(ops.sum(values))
self.count.assign_add(count)
def result(self):
return self.sum / ops.cast(self.count, "float32")
    def reset_state(self):
self.sum.assign(0)
self.count.assign(0)
"""
Run the search with the custom objective.
"""
def build_regressor(hp):
model = keras.Sequential(
[
layers.Dense(units=hp.Int("units", 32, 128, 32), activation="relu"),
layers.Dense(units=1),
]
)
model.compile(
optimizer="adam",
loss="mean_squared_error",
# Put custom metric into the metrics.
metrics=[CustomMetric()],
)
return model
tuner = keras_tuner.RandomSearch(
hypermodel=build_regressor,
# Specify the name and direction of the objective.
objective=keras_tuner.Objective("val_custom_metric", direction="min"),
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="custom_metrics",
)
tuner.search(
x=np.random.rand(100, 10),
y=np.random.rand(100, 1),
validation_data=(np.random.rand(20, 10), np.random.rand(20, 1)),
)
tuner.results_summary()
"""
If your custom objective is hard to put into a custom metric, you can also
evaluate the model by yourself in `HyperModel.fit()` and return the objective
value. The objective value would be minimized by default. In this case, you
don't need to specify the `objective` when initializing the tuner. However,
in this case the metric value will not be tracked in the Keras logs, only in the
KerasTuner logs. Therefore, these values would not be displayed by any
TensorBoard view that uses the Keras metrics.
"""
class HyperRegressor(keras_tuner.HyperModel):
def build(self, hp):
model = keras.Sequential(
[
layers.Dense(units=hp.Int("units", 32, 128, 32), activation="relu"),
layers.Dense(units=1),
]
)
model.compile(
optimizer="adam",
loss="mean_squared_error",
)
return model
def fit(self, hp, model, x, y, validation_data, **kwargs):
model.fit(x, y, **kwargs)
x_val, y_val = validation_data
y_pred = model.predict(x_val)
# Return a single float to minimize.
return np.mean(np.abs(y_pred - y_val))
tuner = keras_tuner.RandomSearch(
hypermodel=HyperRegressor(),
# No objective to specify.
# Objective is the return value of `HyperModel.fit()`.
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="custom_eval",
)
tuner.search(
x=np.random.rand(100, 10),
y=np.random.rand(100, 1),
validation_data=(np.random.rand(20, 10), np.random.rand(20, 1)),
)
tuner.results_summary()
"""
If you have multiple metrics to track in KerasTuner, but only use one of them
as the objective, you can return a dictionary whose keys are the metric names
and whose values are the metric values, for example, return `{"metric_a": 1.0,
"metric_b": 2.0}`. Use one of the keys as the objective name, for example,
`keras_tuner.Objective("metric_a", "min")`.
"""
class HyperRegressor(keras_tuner.HyperModel):
def build(self, hp):
model = keras.Sequential(
[
layers.Dense(units=hp.Int("units", 32, 128, 32), activation="relu"),
layers.Dense(units=1),
]
)
model.compile(
optimizer="adam",
loss="mean_squared_error",
)
return model
def fit(self, hp, model, x, y, validation_data, **kwargs):
model.fit(x, y, **kwargs)
x_val, y_val = validation_data
y_pred = model.predict(x_val)
# Return a dictionary of metrics for KerasTuner to track.
return {
"metric_a": -np.mean(np.abs(y_pred - y_val)),
"metric_b": np.mean(np.square(y_pred - y_val)),
}
tuner = keras_tuner.RandomSearch(
hypermodel=HyperRegressor(),
# Objective is one of the keys.
# Maximize the negative MAE, equivalent to minimize MAE.
objective=keras_tuner.Objective("metric_a", "max"),
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="custom_eval_dict",
)
tuner.search(
x=np.random.rand(100, 10),
y=np.random.rand(100, 1),
validation_data=(np.random.rand(20, 10), np.random.rand(20, 1)),
)
tuner.results_summary()
"""
## Tune end-to-end workflows
In some cases, it is hard to split your code into build and fit functions. You
can also keep your end-to-end workflow in one place by overriding
`Tuner.run_trial()`, which gives you full control of a trial. You can see it
as a black-box optimizer for anything.
### Tune any function
For example, you can find a value of `x` that minimizes `f(x)=x*x+1`. In the
following code, we just define `x` as a hyperparameter, and return `f(x)` as
the objective value. The `hypermodel` and `objective` arguments for initializing
the tuner can be omitted.
"""
class MyTuner(keras_tuner.RandomSearch):
def run_trial(self, trial, *args, **kwargs):
# Get the hp from trial.
hp = trial.hyperparameters
# Define "x" as a hyperparameter.
x = hp.Float("x", min_value=-1.0, max_value=1.0)
# Return the objective value to minimize.
return x * x + 1
tuner = MyTuner(
# No hypermodel or objective specified.
max_trials=20,
overwrite=True,
directory="my_dir",
project_name="tune_anything",
)
# No need to pass anything to search()
# unless you use them in run_trial().
tuner.search()
print(tuner.get_best_hyperparameters()[0].get("x"))
"""
### Keep Keras code separate
You can keep all your Keras code unchanged and use KerasTuner to tune it. It
is useful if you cannot modify the Keras code for some reason.
It also gives you more flexibility. You don't have to separate the
model-building and training code. However, this workflow would not help you
save the model or connect with the TensorBoard plugins.
To save the model, you can use `trial.trial_id`, which is a string to uniquely
identify a trial, to construct different paths to save the models from
different trials.
"""
import os
def keras_code(units, optimizer, saving_path):
# Build model
model = keras.Sequential(
[
layers.Dense(units=units, activation="relu"),
layers.Dense(units=1),
]
)
model.compile(
optimizer=optimizer,
loss="mean_squared_error",
)
# Prepare data
x_train = np.random.rand(100, 10)
y_train = np.random.rand(100, 1)
x_val = np.random.rand(20, 10)
y_val = np.random.rand(20, 1)
# Train & eval model
model.fit(x_train, y_train)
# Save model
model.save(saving_path)
# Return a single float as the objective value.
# You may also return a dictionary
# of {metric_name: metric_value}.
y_pred = model.predict(x_val)
return np.mean(np.abs(y_pred - y_val))
class MyTuner(keras_tuner.RandomSearch):
def run_trial(self, trial, **kwargs):
hp = trial.hyperparameters
return keras_code(
units=hp.Int("units", 32, 128, 32),
optimizer=hp.Choice("optimizer", ["adam", "adadelta"]),
saving_path=os.path.join("/tmp", f"{trial.trial_id}.keras"),
)
tuner = MyTuner(
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="keep_code_separate",
)
tuner.search()
# Retraining the model
best_hp = tuner.get_best_hyperparameters()[0]
keras_code(**best_hp.values, saving_path="/tmp/best_model.keras")
"""
## KerasTuner includes pre-made tunable applications: HyperResNet and HyperXception
These are ready-to-use hypermodels for computer vision.
They come pre-compiled with `loss="categorical_crossentropy"` and
`metrics=["accuracy"]`.
"""
from keras_tuner.applications import HyperResNet
hypermodel = HyperResNet(input_shape=(28, 28, 1), classes=10)
tuner = keras_tuner.RandomSearch(
hypermodel,
objective="val_accuracy",
max_trials=2,
overwrite=True,
directory="my_dir",
project_name="built_in_hypermodel",
)
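"""
As a quick, hedged example, you could run a short search on a small subset of
the MNIST data prepared earlier (the labels are already one-hot encoded, which
matches the hypermodel's built-in `categorical_crossentropy` loss). A real
search would use the full dataset and more epochs and trials.
"""
tuner.search(
    x_train[:100], y_train[:100], epochs=1, validation_data=(x_val[:100], y_val[:100])
)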
# Introduction to Keras for Researchers
**Author:** [fchollet](https://twitter.com/fchollet)<br>
**Date created:** 2020/04/01<br>
**Last modified:** 2020/10/02<br>
**Description:** Everything you need to know to use Keras & TensorFlow for deep learning research.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/guides/ipynb/intro_to_keras_for_researchers.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/guides/intro_to_keras_for_researchers.py)
---
## Setup
```python
import tensorflow as tf
import keras
```
---
## Introduction
Are you a machine learning researcher? Do you publish at NeurIPS and push the
state-of-the-art in CV and NLP? This guide will serve as your first introduction to core
Keras & TensorFlow API concepts.
In this guide, you will learn about:
- Tensors, variables, and gradients in TensorFlow
- Creating layers by subclassing the `Layer` class
- Writing low-level training loops
- Tracking losses created by layers via the `add_loss()` method
- Tracking metrics in a low-level training loop
- Speeding up execution with a compiled `tf.function`
- Executing layers in training or inference mode
- The Keras Functional API
You will also see the Keras API in action in two end-to-end research examples:
a Variational Autoencoder, and a Hypernetwork.
---
## Tensors
TensorFlow is an infrastructure layer for differentiable programming.
At its heart, it's a framework for manipulating N-dimensional arrays (tensors),
much like NumPy.
However, there are three key differences between NumPy and TensorFlow:
- TensorFlow can leverage hardware accelerators such as GPUs and TPUs.
- TensorFlow can automatically compute the gradient of arbitrary differentiable tensor expressions.
- TensorFlow computation can be distributed to large numbers of devices on a single machine, and to large numbers of
machines (potentially with multiple devices each).
Let's take a look at the object that is at the core of TensorFlow: the Tensor.
Here's a constant tensor:
```python
x = tf.constant([[5, 2], [1, 3]])
print(x)
```
<div class="k-default-codeblock">
```
tf.Tensor(
[[5 2]
[1 3]], shape=(2, 2), dtype=int32)
```
</div>
You can get its value as a NumPy array by calling `.numpy()`:
```python
x.numpy()
```
<div class="k-default-codeblock">
```
array([[5, 2],
[1, 3]], dtype=int32)
```
</div>
Much like a NumPy array, it features the attributes `dtype` and `shape`:
```python
print("dtype:", x.dtype)
print("shape:", x.shape)
```
<div class="k-default-codeblock">
```
dtype: <dtype: 'int32'>
shape: (2, 2)
```
</div>
A common way to create constant tensors is via `tf.ones` and `tf.zeros` (just like `np.ones` and `np.zeros`):
```python
print(tf.ones(shape=(2, 1)))
print(tf.zeros(shape=(2, 1)))
```
<div class="k-default-codeblock">
```
tf.Tensor(
[[1.]
[1.]], shape=(2, 1), dtype=float32)
tf.Tensor(
[[0.]
[0.]], shape=(2, 1), dtype=float32)
```
</div>
You can also create random constant tensors:
```python
x = tf.random.normal(shape=(2, 2), mean=0.0, stddev=1.0)
x = tf.random.uniform(shape=(2, 2), minval=0, maxval=10, dtype="int32")
```
---
## Variables
Variables are special tensors used to store mutable state (such as the weights of a neural network).
You create a `Variable` using some initial value:
```python
initial_value = tf.random.normal(shape=(2, 2))
a = tf.Variable(initial_value)
print(a)
```
<div class="k-default-codeblock">
```
<tf.Variable 'Variable:0' shape=(2, 2) dtype=float32, numpy=
array([[ 0.11058521, 0.55781174],
[-0.7643957 , -2.106184 ]], dtype=float32)>
```
</div>
You update the value of a `Variable` by using the methods `.assign(value)`, `.assign_add(increment)`, or `.assign_sub(decrement)`:
```python
new_value = tf.random.normal(shape=(2, 2))
a.assign(new_value)
for i in range(2):
for j in range(2):
assert a[i, j] == new_value[i, j]
added_value = tf.random.normal(shape=(2, 2))
a.assign_add(added_value)
for i in range(2):
for j in range(2):
assert a[i, j] == new_value[i, j] + added_value[i, j]
```
---
## Doing math in TensorFlow
If you've used NumPy, doing math in TensorFlow will look very familiar.
The main difference is that your TensorFlow code can run on GPU and TPU.
```python
a = tf.random.normal(shape=(2, 2))
b = tf.random.normal(shape=(2, 2))
c = a + b
d = tf.square(c)
e = tf.exp(d)
```
---
## Gradients
Here's another big difference with NumPy: you can automatically retrieve the gradient of any differentiable expression.
Just open a `GradientTape`, start "watching" a tensor via `tape.watch()`,
and compose a differentiable expression using this tensor as input:
```python
a = tf.random.normal(shape=(2, 2))
b = tf.random.normal(shape=(2, 2))
with tf.GradientTape() as tape:
tape.watch(a) # Start recording the history of operations applied to `a`
c = tf.sqrt(tf.square(a) + tf.square(b)) # Do some math using `a`
# What's the gradient of `c` with respect to `a`?
dc_da = tape.gradient(c, a)
print(dc_da)
```
<div class="k-default-codeblock">
```
tf.Tensor(
[[0.6567579 0.4763136]
[0.9858142 0.3558683]], shape=(2, 2), dtype=float32)
```
</div>
By default, variables are watched automatically, so you don't need to manually `watch` them:
```python
a = tf.Variable(a)
with tf.GradientTape() as tape:
c = tf.sqrt(tf.square(a) + tf.square(b))
dc_da = tape.gradient(c, a)
print(dc_da)
```
<div class="k-default-codeblock">
```
tf.Tensor(
[[0.6567579 0.4763136]
[0.9858142 0.3558683]], shape=(2, 2), dtype=float32)
```
</div>
Note that you can compute higher-order derivatives by nesting tapes:
```python
with tf.GradientTape() as outer_tape:
with tf.GradientTape() as tape:
c = tf.sqrt(tf.square(a) + tf.square(b))
dc_da = tape.gradient(c, a)
d2c_da2 = outer_tape.gradient(dc_da, a)
print(d2c_da2)
```
<div class="k-default-codeblock">
```
tf.Tensor(
[[1.4240768 0.9168595 ]
[0.02550167 1.5579035 ]], shape=(2, 2), dtype=float32)
```
</div>
---
## Keras layers
While TensorFlow is an **infrastructure layer for differentiable programming**,
dealing with tensors, variables, and gradients,
Keras is a **user interface for deep learning**, dealing with
layers, models, optimizers, loss functions, metrics, and more.
Keras serves as the high-level API for TensorFlow:
Keras is what makes TensorFlow simple and productive.
The `Layer` class is the fundamental abstraction in Keras.
A `Layer` encapsulates a state (weights) and some computation
(defined in the call method).
A simple layer looks like this.
The `self.add_weight()` method gives you a shortcut for creating weights:
```python
class Linear(keras.layers.Layer):
"""y = w.x + b"""
def __init__(self, units=32, input_dim=32):
super().__init__()
self.w = self.add_weight(
shape=(input_dim, units), initializer="random_normal", trainable=True
)
self.b = self.add_weight(shape=(units,), initializer="zeros", trainable=True)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
```
You would use a `Layer` instance much like a Python function:
```python
# Instantiate our layer.
linear_layer = Linear(units=4, input_dim=2)
# The layer can be treated as a function.
# Here we call it on some data.
y = linear_layer(tf.ones((2, 2)))
assert y.shape == (2, 4)
```
The weight variables (created in `__init__`) are automatically
tracked under the `weights` property:
```python
assert linear_layer.weights == [linear_layer.w, linear_layer.b]
```
You have many built-in layers available, from `Dense` to `Conv2D` to `LSTM` to
fancier ones like `Conv3DTranspose` or `ConvLSTM2D`. Be smart about reusing
built-in functionality.
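As a minimal illustration, a built-in layer is used in exactly the same way as the
custom `Linear` layer above, and it creates its weights the first time it is called:

```python
# Built-in layers follow the same pattern as our custom layers.
dense = keras.layers.Dense(units=4, activation="relu")
y = dense(tf.ones((2, 2)))  # Weights get created on the first call.
assert y.shape == (2, 4)
assert len(dense.weights) == 2  # A kernel and a bias.
```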
---
## Layer weight creation in `build(input_shape)`
It's often a good idea to defer weight creation to the `build()` method, so
that you don't need to specify the input dim/shape at layer construction time:
```python
class Linear(keras.layers.Layer):
"""y = w.x + b"""
def __init__(self, units=32):
super().__init__()
self.units = units
def build(self, input_shape):
self.w = self.add_weight(
shape=(input_shape[-1], self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="random_normal", trainable=True
)
def call(self, inputs):
return tf.matmul(inputs, self.w) + self.b
# Instantiate our layer.
linear_layer = Linear(4)
# This will also call `build(input_shape)` and create the weights.
y = linear_layer(tf.ones((2, 2)))
```
---
## Layer gradients
You can automatically retrieve the gradients of the weights of a layer by
calling it inside a `GradientTape`. Using these gradients, you can update the
weights of the layer, either manually, or using an optimizer object. Of course,
you can modify the gradients before using them, if you need to.
```python
# Prepare a dataset.
(x_train, y_train), _ = keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(x_train.reshape(60000, 784).astype("float32") / 255, y_train)
)
dataset = dataset.shuffle(buffer_size=1024).batch(64)
# Instantiate our linear layer (defined above) with 10 units.
linear_layer = Linear(10)
# Instantiate a logistic loss function that expects integer targets.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
# Instantiate an optimizer.
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
# Iterate over the batches of the dataset.
for step, (x, y) in enumerate(dataset):
# Open a GradientTape.
with tf.GradientTape() as tape:
# Forward pass.
logits = linear_layer(x)
# Loss value for this batch.
loss = loss_fn(y, logits)
# Get gradients of the loss wrt the weights.
gradients = tape.gradient(loss, linear_layer.trainable_weights)
# Update the weights of our linear layer.
optimizer.apply_gradients(zip(gradients, linear_layer.trainable_weights))
# Logging.
if step % 100 == 0:
print("Step:", step, "Loss:", float(loss))
```
<div class="k-default-codeblock">
```
Step: 0 Loss: 2.4040849208831787
Step: 100 Loss: 2.2059175968170166
Step: 200 Loss: 2.1891114711761475
Step: 300 Loss: 2.0599637031555176
Step: 400 Loss: 2.021326780319214
Step: 500 Loss: 1.9289535284042358
Step: 600 Loss: 1.758760929107666
Step: 700 Loss: 1.7004988193511963
Step: 800 Loss: 1.7745165824890137
Step: 900 Loss: 1.6547822952270508
```
</div>
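The loop above applies the raw gradients directly. As noted, you can also modify the
gradients before applying them. Here is a minimal sketch (reusing the layer, loss,
optimizer, and dataset defined above) that clips the gradients by global norm; the
clipping value is an arbitrary choice:

```python
# Take one batch and compute gradients as before.
x, y = next(iter(dataset))
with tf.GradientTape() as tape:
    loss = loss_fn(y, linear_layer(x))
gradients = tape.gradient(loss, linear_layer.trainable_weights)

# Modify the gradients (here: clip them by global norm) before applying them.
clipped_gradients, _ = tf.clip_by_global_norm(gradients, clip_norm=1.0)
optimizer.apply_gradients(zip(clipped_gradients, linear_layer.trainable_weights))
```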
---
## Trainable and non-trainable weights
Weights created by layers can be either trainable or non-trainable. They're
exposed in `trainable_weights` and `non_trainable_weights` respectively.
Here's a layer with a non-trainable weight:
```python
class ComputeSum(keras.layers.Layer):
"""Returns the sum of the inputs."""
def __init__(self, input_dim):
super().__init__()
# Create a non-trainable weight.
self.total = self.add_weight(
initializer="zeros", shape=(input_dim,), trainable=False
)
def call(self, inputs):
self.total.assign_add(tf.reduce_sum(inputs, axis=0))
return self.total
my_sum = ComputeSum(2)
x = tf.ones((2, 2))
y = my_sum(x)
print(y.numpy()) # [2. 2.]
y = my_sum(x)
print(y.numpy()) # [4. 4.]
assert my_sum.weights == [my_sum.total]
assert my_sum.non_trainable_weights == [my_sum.total]
assert my_sum.trainable_weights == []
```
<div class="k-default-codeblock">
```
[2. 2.]
[4. 4.]
```
</div>
---
## Layers that own layers
Layers can be recursively nested to create bigger computation blocks.
Each layer will track the weights of its sublayers
(both trainable and non-trainable).
```python
# Let's reuse the Linear class
# with a `build` method that we defined above.
class MLP(keras.layers.Layer):
"""Simple stack of Linear layers."""
def __init__(self):
super().__init__()
self.linear_1 = Linear(32)
self.linear_2 = Linear(32)
self.linear_3 = Linear(10)
def call(self, inputs):
x = self.linear_1(inputs)
x = tf.nn.relu(x)
x = self.linear_2(x)
x = tf.nn.relu(x)
return self.linear_3(x)
mlp = MLP()
# The first call to the `mlp` object will create the weights.
y = mlp(tf.ones(shape=(3, 64)))
# Weights are recursively tracked.
assert len(mlp.weights) == 6
```
Note that our manually-created MLP above is equivalent to the following
built-in option:
```python
mlp = keras.Sequential(
[
keras.layers.Dense(32, activation=tf.nn.relu),
keras.layers.Dense(32, activation=tf.nn.relu),
keras.layers.Dense(10),
]
)
```
---
## Tracking losses created by layers
Layers can create losses during the forward pass via the `add_loss()` method.
This is especially useful for regularization losses.
The losses created by sublayers are recursively tracked by the parent layers.
Here's a layer that creates an activity regularization loss:
```python
class ActivityRegularization(keras.layers.Layer):
"""Layer that creates an activity sparsity regularization loss."""
def __init__(self, rate=1e-2):
super().__init__()
self.rate = rate
def call(self, inputs):
# We use `add_loss` to create a regularization loss
# that depends on the inputs.
self.add_loss(self.rate * tf.reduce_sum(inputs))
return inputs
```
Any model incorporating this layer will track this regularization loss:
```python
# Let's use the loss layer in a MLP block.
class SparseMLP(keras.layers.Layer):
"""Stack of Linear layers with a sparsity regularization loss."""
def __init__(self):
super().__init__()
self.linear_1 = Linear(32)
self.regularization = ActivityRegularization(1e-2)
self.linear_3 = Linear(10)
def call(self, inputs):
x = self.linear_1(inputs)
x = tf.nn.relu(x)
x = self.regularization(x)
return self.linear_3(x)
mlp = SparseMLP()
y = mlp(tf.ones((10, 10)))
print(mlp.losses) # List containing one float32 scalar
```
<div class="k-default-codeblock">
```
[<tf.Tensor: shape=(), dtype=float32, numpy=0.24654198>]
```
</div>
These losses are cleared by the top-level layer at the start of each forward
pass -- they don't accumulate. `layer.losses` always contains only the losses
created during the last forward pass. You would typically use these losses by
summing them before computing your gradients when writing a training loop.
```python
# Losses correspond to the *last* forward pass.
mlp = SparseMLP()
mlp(tf.ones((10, 10)))
assert len(mlp.losses) == 1
mlp(tf.ones((10, 10)))
assert len(mlp.losses) == 1 # No accumulation.
# Let's demonstrate how to use these losses in a training loop.
# Prepare a dataset.
(x_train, y_train), _ = keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(x_train.reshape(60000, 784).astype("float32") / 255, y_train)
)
dataset = dataset.shuffle(buffer_size=1024).batch(64)
# A new MLP.
mlp = SparseMLP()
# Loss and optimizer.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = keras.optimizers.SGD(learning_rate=1e-3)
for step, (x, y) in enumerate(dataset):
with tf.GradientTape() as tape:
# Forward pass.
logits = mlp(x)
# External loss value for this batch.
loss = loss_fn(y, logits)
# Add the losses created during the forward pass.
loss += sum(mlp.losses)
# Get gradients of the loss wrt the weights.
gradients = tape.gradient(loss, mlp.trainable_weights)
# Update the weights of our linear layer.
optimizer.apply_gradients(zip(gradients, mlp.trainable_weights))
# Logging.
if step % 100 == 0:
print("Step:", step, "Loss:", float(loss))
```
<div class="k-default-codeblock">
```
Step: 0 Loss: 5.629672050476074
Step: 100 Loss: 2.6190948486328125
Step: 200 Loss: 2.4041364192962646
Step: 300 Loss: 2.385746479034424
Step: 400 Loss: 2.3336474895477295
Step: 500 Loss: 2.3487167358398438
Step: 600 Loss: 2.3277230262756348
Step: 700 Loss: 2.3347654342651367
Step: 800 Loss: 2.318131446838379
Step: 900 Loss: 2.313291549682617
```
</div>
---
## Keeping track of training metrics
Keras offers a broad range of built-in metrics, like `keras.metrics.AUC`
or `keras.metrics.PrecisionAtRecall`. It's also easy to create your
own metrics in a few lines of code.
To use a metric in a custom training loop, you would:
- Instantiate the metric object, e.g. `metric = keras.metrics.AUC()`
- Call its `metric.update_state(targets, predictions)` method for each batch of data
- Query its result via `metric.result()`
- Reset the metric's state at the end of an epoch or at the start of an evaluation via
`metric.reset_state()`
Here's a simple example:
```python
# Instantiate a metric object
accuracy = keras.metrics.SparseCategoricalAccuracy()
# Prepare our layer, loss, and optimizer.
model = keras.Sequential(
[
keras.layers.Dense(32, activation="relu"),
keras.layers.Dense(32, activation="relu"),
keras.layers.Dense(10),
]
)
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = keras.optimizers.Adam(learning_rate=1e-3)
for epoch in range(2):
# Iterate over the batches of a dataset.
for step, (x, y) in enumerate(dataset):
with tf.GradientTape() as tape:
logits = model(x)
# Compute the loss value for this batch.
loss_value = loss_fn(y, logits)
# Update the state of the `accuracy` metric.
accuracy.update_state(y, logits)
# Update the weights of the model to minimize the loss value.
gradients = tape.gradient(loss_value, model.trainable_weights)
optimizer.apply_gradients(zip(gradients, model.trainable_weights))
# Logging the current accuracy value so far.
if step % 200 == 0:
print("Epoch:", epoch, "Step:", step)
print("Total running accuracy so far: %.3f" % accuracy.result())
# Reset the metric's state at the end of an epoch
accuracy.reset_state()
```
<div class="k-default-codeblock">
```
Epoch: 0 Step: 0
Total running accuracy so far: 0.047
Epoch: 0 Step: 200
Total running accuracy so far: 0.751
Epoch: 0 Step: 400
Total running accuracy so far: 0.826
Epoch: 0 Step: 600
Total running accuracy so far: 0.856
Epoch: 0 Step: 800
Total running accuracy so far: 0.872
Epoch: 1 Step: 0
Total running accuracy so far: 0.891
Epoch: 1 Step: 200
Total running accuracy so far: 0.936
Epoch: 1 Step: 400
Total running accuracy so far: 0.939
Epoch: 1 Step: 600
Total running accuracy so far: 0.940
Epoch: 1 Step: 800
Total running accuracy so far: 0.941
```
</div>
You can also define your own metrics by subclassing `keras.metrics.Metric`.
You need to override the three functions called above:
- Override `update_state()` to update the statistic values.
- Override `result()` to return the metric value.
- Override `reset_state()` to reset the metric to its initial state.
Here is an example where we implement the F1-score metric
(with support for sample weighting).
```python
class F1Score(keras.metrics.Metric):
def __init__(self, name="f1_score", dtype="float32", threshold=0.5, **kwargs):
super().__init__(name=name, dtype=dtype, **kwargs)
        self.threshold = threshold
self.true_positives = self.add_weight(
name="tp", dtype=dtype, initializer="zeros"
)
self.false_positives = self.add_weight(
name="fp", dtype=dtype, initializer="zeros"
)
self.false_negatives = self.add_weight(
name="fn", dtype=dtype, initializer="zeros"
)
def update_state(self, y_true, y_pred, sample_weight=None):
y_pred = tf.math.greater_equal(y_pred, self.threshold)
y_true = tf.cast(y_true, tf.bool)
y_pred = tf.cast(y_pred, tf.bool)
true_positives = tf.cast(y_true & y_pred, self.dtype)
false_positives = tf.cast(~y_true & y_pred, self.dtype)
false_negatives = tf.cast(y_true & ~y_pred, self.dtype)
if sample_weight is not None:
sample_weight = tf.cast(sample_weight, self.dtype)
true_positives *= sample_weight
false_positives *= sample_weight
false_negatives *= sample_weight
self.true_positives.assign_add(tf.reduce_sum(true_positives))
self.false_positives.assign_add(tf.reduce_sum(false_positives))
self.false_negatives.assign_add(tf.reduce_sum(false_negatives))
def result(self):
precision = self.true_positives / (self.true_positives + self.false_positives)
recall = self.true_positives / (self.true_positives + self.false_negatives)
return precision * recall * 2.0 / (precision + recall)
def reset_state(self):
self.true_positives.assign(0)
self.false_positives.assign(0)
self.false_negatives.assign(0)
```
Let's test-drive it:
```python
m = F1Score()
m.update_state([0, 1, 0, 0], [0.3, 0.5, 0.8, 0.9])
print("Intermediate result:", float(m.result()))
m.update_state([1, 1, 1, 1], [0.1, 0.7, 0.6, 0.0])
print("Final result:", float(m.result()))
```
<div class="k-default-codeblock">
```
Intermediate result: 0.5
Final result: 0.6000000238418579
```
</div>
---
## Compiled functions
Running eagerly is great for debugging, but you will get better performance by
compiling your computation into static graphs. Static graphs are a researcher's
best friends. You can compile any function by wrapping it in a `tf.function`
decorator.
```python
# Prepare our layer, loss, and optimizer.
model = keras.Sequential(
[
keras.layers.Dense(32, activation="relu"),
keras.layers.Dense(32, activation="relu"),
keras.layers.Dense(10),
]
)
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = keras.optimizers.Adam(learning_rate=1e-3)
# Create a training step function.
@tf.function # Make it fast.
def train_on_batch(x, y):
with tf.GradientTape() as tape:
logits = model(x)
loss = loss_fn(y, logits)
gradients = tape.gradient(loss, model.trainable_weights)
optimizer.apply_gradients(zip(gradients, model.trainable_weights))
return loss
# Prepare a dataset.
(x_train, y_train), _ = keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(x_train.reshape(60000, 784).astype("float32") / 255, y_train)
)
dataset = dataset.shuffle(buffer_size=1024).batch(64)
for step, (x, y) in enumerate(dataset):
loss = train_on_batch(x, y)
if step % 100 == 0:
print("Step:", step, "Loss:", float(loss))
```
<div class="k-default-codeblock">
```
Step: 0 Loss: 2.3094160556793213
Step: 100 Loss: 0.53387850522995
Step: 200 Loss: 0.3349820375442505
Step: 300 Loss: 0.23337996006011963
Step: 400 Loss: 0.304066926240921
Step: 500 Loss: 0.180154949426651
Step: 600 Loss: 0.4450702667236328
Step: 700 Loss: 0.16045540571212769
Step: 800 Loss: 0.27985841035842896
Step: 900 Loss: 0.19074323773384094
```
</div>
---
## Training mode & inference mode
Some layers, in particular the `BatchNormalization` layer and the `Dropout`
layer, have different behaviors during training and inference. For such layers,
it is standard practice to expose a `training` (boolean) argument in the `call`
method.
By exposing this argument in `call`, you enable the built-in training and
evaluation loops (e.g. fit) to correctly use the layer in training and
inference modes.
```python
class Dropout(keras.layers.Layer):
def __init__(self, rate):
super().__init__()
self.rate = rate
def call(self, inputs, training=None):
if training:
return tf.nn.dropout(inputs, rate=self.rate)
return inputs
class MLPWithDropout(keras.layers.Layer):
def __init__(self):
super().__init__()
self.linear_1 = Linear(32)
self.dropout = Dropout(0.5)
self.linear_3 = Linear(10)
def call(self, inputs, training=None):
x = self.linear_1(inputs)
x = tf.nn.relu(x)
x = self.dropout(x, training=training)
return self.linear_3(x)
mlp = MLPWithDropout()
y_train = mlp(tf.ones((2, 2)), training=True)
y_test = mlp(tf.ones((2, 2)), training=False)
```
---
## The Functional API for model-building
To build deep learning models, you don't have to use object-oriented programming all the
time. All layers we've seen so far can also be composed functionally, like this (we call
it the "Functional API"):
```python
# We use an `Input` object to describe the shape and dtype of the inputs.
# This is the deep learning equivalent of *declaring a type*.
# The shape argument is per-sample; it does not include the batch size.
# The Functional API focuses on defining per-sample transformations.
# The model we create will automatically batch the per-sample transformations,
# so that it can be called on batches of data.
inputs = keras.Input(shape=(16,), dtype="float32")
# We call layers on these "type" objects
# and they return updated types (new shapes/dtypes).
x = Linear(32)(inputs) # We are reusing the Linear layer we defined earlier.
x = Dropout(0.5)(x) # We are reusing the Dropout layer we defined earlier.
outputs = Linear(10)(x)
# A functional `Model` can be defined by specifying inputs and outputs.
# A model is itself a layer like any other.
model = keras.Model(inputs, outputs)
# A functional model already has weights, before being called on any data.
# That's because we defined its input shape in advance (in `Input`).
assert len(model.weights) == 4
# Let's call our model on some data, for fun.
y = model(tf.ones((2, 16)))
assert y.shape == (2, 10)
# You can pass a `training` argument in `__call__`
# (it will get passed down to the Dropout layer).
y = model(tf.ones((2, 16)), training=True)
```
The Functional API tends to be more concise than subclassing, and provides a few other
advantages (generally the same advantages that functional, typed languages provide over
untyped OO development). However, it can only be used to define DAGs of layers --
recursive networks should be defined as Layer subclasses instead.
Learn more about the Functional API [here](/guides/functional_api/).
In your research workflows, you may often find yourself mix-and-matching OO models and
Functional models.
Note that the `Model` class also features built-in training & evaluation loops:
`fit()`, `predict()` and `evaluate()` (configured via the `compile()` method).
These built-in functions give you access to the
following built-in training infrastructure features:
* [Callbacks](/api/callbacks/). You can leverage built-in
callbacks for early-stopping, model checkpointing,
and monitoring training with TensorBoard. You can also
[implement custom callbacks](/guides/writing_your_own_callbacks/) if needed.
* [Distributed training](https://keras.io/guides/distributed_training/). You
can easily scale up your training to multiple GPUs, TPUs, or even multiple machines
with the `tf.distribute` API -- with no changes to your code.
* [Step fusing](https://keras.io/api/models/model_training_apis/#compile-method).
With the `steps_per_execution` argument in `Model.compile()`, you can process
multiple batches in a single `tf.function` call, which greatly improves
device utilization on TPUs.
We won't go into the details, but we provide a simple code example
below. It leverages the built-in training infrastructure to implement the MNIST
example above.
```python
inputs = keras.Input(shape=(784,), dtype="float32")
x = keras.layers.Dense(32, activation="relu")(inputs)
x = keras.layers.Dense(32, activation="relu")(x)
outputs = keras.layers.Dense(10)(x)
model = keras.Model(inputs, outputs)
# Specify the loss, optimizer, and metrics with `compile()`.
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.Adam(learning_rate=1e-3),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
)
# Train the model with the dataset for 2 epochs.
model.fit(dataset, epochs=2)
model.predict(dataset)
model.evaluate(dataset)
```
<div class="k-default-codeblock">
```
Epoch 1/2
938/938 [==============================] - 2s 1ms/step - loss: 0.3988 - sparse_categorical_accuracy: 0.8862
Epoch 2/2
938/938 [==============================] - 1s 1ms/step - loss: 0.1866 - sparse_categorical_accuracy: 0.9461
938/938 [==============================] - 1s 803us/step
938/938 [==============================] - 1s 903us/step - loss: 0.1536 - sparse_categorical_accuracy: 0.9543
[0.15355238318443298, 0.9542833566665649]
```
</div>
You can always subclass the `Model` class (it works exactly like subclassing
`Layer`) if you want to leverage built-in training loops for your OO models.
Just override the `Model.train_step()` method to
customize what happens in `fit()` while retaining support
for the built-in infrastructure features outlined above -- callbacks,
zero-code distribution support, and step fusing support.
You may also override `test_step()` to customize what happens in `evaluate()`,
and override `predict_step()` to customize what happens in `predict()`. For more
information, please refer to
[this guide](https://keras.io/guides/customizing_what_happens_in_fit/).
```python
class CustomModel(keras.Model):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.loss_tracker = keras.metrics.Mean(name="loss")
self.accuracy = keras.metrics.SparseCategoricalAccuracy()
self.loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
self.optimizer = keras.optimizers.Adam(learning_rate=1e-3)
def train_step(self, data):
# Unpack the data. Its structure depends on your model and
# on what you pass to `fit()`.
x, y = data
with tf.GradientTape() as tape:
y_pred = self(x, training=True) # Forward pass
loss = self.loss_fn(y, y_pred)
gradients = tape.gradient(loss, self.trainable_weights)
self.optimizer.apply_gradients(zip(gradients, self.trainable_weights))
# Update metrics (includes the metric that tracks the loss)
self.loss_tracker.update_state(loss)
self.accuracy.update_state(y, y_pred)
# Return a dict mapping metric names to current value
return {"loss": self.loss_tracker.result(), "accuracy": self.accuracy.result()}
@property
def metrics(self):
        # We list our `Metric` objects here so that `reset_state()` can be
# called automatically at the start of each epoch.
return [self.loss_tracker, self.accuracy]
inputs = keras.Input(shape=(784,), dtype="float32")
x = keras.layers.Dense(32, activation="relu")(inputs)
x = keras.layers.Dense(32, activation="relu")(x)
outputs = keras.layers.Dense(10)(x)
model = CustomModel(inputs, outputs)
model.compile()
model.fit(dataset, epochs=2)
```
<div class="k-default-codeblock">
```
Epoch 1/2
938/938 [==============================] - 1s 1ms/step - loss: 0.3952 - accuracy: 0.8208
Epoch 2/2
938/938 [==============================] - 1s 1ms/step - loss: 0.2055 - accuracy: 0.9364
<keras.src.callbacks.History at 0x7f12882deb10>
```
</div>
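For illustration, here is a hedged sketch of what a `test_step()` override might look
like, mirroring the `train_step()` above (the subclass name and the evaluation call
are just examples):

```python
class CustomModelWithEval(CustomModel):
    def test_step(self, data):
        # Unpack the data and run a forward pass in inference mode.
        x, y = data
        y_pred = self(x, training=False)
        loss = self.loss_fn(y, y_pred)
        # Update the same metrics tracked in `train_step()`.
        self.loss_tracker.update_state(loss)
        self.accuracy.update_state(y, y_pred)
        return {"loss": self.loss_tracker.result(), "accuracy": self.accuracy.result()}


eval_model = CustomModelWithEval(inputs, outputs)
eval_model.compile()
eval_model.evaluate(dataset)  # Runs our custom `test_step()` on each batch.
```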
---
## End-to-end experiment example 1: variational autoencoders.
Here are some of the things you've learned so far:
- A `Layer` encapsulates a state (created in `__init__` or `build`) and some computation
(defined in `call`).
- Layers can be recursively nested to create new, bigger computation blocks.
- You can easily write highly hackable training loops by opening a
`GradientTape`, calling your model inside the tape's scope, then retrieving
gradients and applying them via an optimizer.
- You can speed up your training loops using the `@tf.function` decorator.
- Layers can create and track losses (typically regularization losses) via
`self.add_loss()`.
Let's put all of these things together into an end-to-end example: we're going to
implement a Variational AutoEncoder (VAE). We'll train it on MNIST digits.
Our VAE will be a subclass of `Layer`, built as a nested composition of layers that
subclass `Layer`. It will feature a regularization loss (KL divergence).
Below is our model definition.
First, we have an `Encoder` class, which uses a `Sampling` layer to map a MNIST digit to
a latent-space triplet `(z_mean, z_log_var, z)`.
```python
from tensorflow.keras import layers
class Sampling(layers.Layer):
"""Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""
def call(self, inputs):
z_mean, z_log_var = inputs
batch = tf.shape(z_mean)[0]
dim = tf.shape(z_mean)[1]
epsilon = keras.backend.random_normal(shape=(batch, dim))
return z_mean + tf.exp(0.5 * z_log_var) * epsilon
class Encoder(layers.Layer):
"""Maps MNIST digits to a triplet (z_mean, z_log_var, z)."""
def __init__(self, latent_dim=32, intermediate_dim=64, **kwargs):
super().__init__(**kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation=tf.nn.relu)
self.dense_mean = layers.Dense(latent_dim)
self.dense_log_var = layers.Dense(latent_dim)
self.sampling = Sampling()
def call(self, inputs):
x = self.dense_proj(inputs)
z_mean = self.dense_mean(x)
z_log_var = self.dense_log_var(x)
z = self.sampling((z_mean, z_log_var))
return z_mean, z_log_var, z
```
Next, we have a `Decoder` class, which maps the probabilistic latent space coordinates
back to a MNIST digit.
```python
class Decoder(layers.Layer):
"""Converts z, the encoded digit vector, back into a readable digit."""
def __init__(self, original_dim, intermediate_dim=64, **kwargs):
super().__init__(**kwargs)
self.dense_proj = layers.Dense(intermediate_dim, activation=tf.nn.relu)
self.dense_output = layers.Dense(original_dim, activation=tf.nn.sigmoid)
def call(self, inputs):
x = self.dense_proj(inputs)
return self.dense_output(x)
```
Finally, our `VariationalAutoEncoder` composes together an encoder and a decoder, and
creates a KL divergence regularization loss via `add_loss()`.
```python
class VariationalAutoEncoder(layers.Layer):
"""Combines the encoder and decoder into an end-to-end model for training."""
def __init__(self, original_dim, intermediate_dim=64, latent_dim=32, **kwargs):
super().__init__(**kwargs)
self.original_dim = original_dim
self.encoder = Encoder(latent_dim=latent_dim, intermediate_dim=intermediate_dim)
self.decoder = Decoder(original_dim, intermediate_dim=intermediate_dim)
def call(self, inputs):
z_mean, z_log_var, z = self.encoder(inputs)
reconstructed = self.decoder(z)
# Add KL divergence regularization loss.
kl_loss = -0.5 * tf.reduce_mean(
z_log_var - tf.square(z_mean) - tf.exp(z_log_var) + 1
)
self.add_loss(kl_loss)
return reconstructed
```
Now, let's write a training loop. Our training step is decorated with a `@tf.function` to
compile into a super fast graph function.
```python
# Our model.
vae = VariationalAutoEncoder(original_dim=784, intermediate_dim=64, latent_dim=32)
# Loss and optimizer.
loss_fn = keras.losses.MeanSquaredError()
optimizer = keras.optimizers.Adam(learning_rate=1e-3)
# Prepare a dataset.
(x_train, _), _ = keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
x_train.reshape(60000, 784).astype("float32") / 255
)
dataset = dataset.shuffle(buffer_size=1024).batch(32)
@tf.function
def training_step(x):
with tf.GradientTape() as tape:
reconstructed = vae(x) # Compute input reconstruction.
# Compute loss.
loss = loss_fn(x, reconstructed)
loss += sum(vae.losses) # Add KLD term.
# Update the weights of the VAE.
grads = tape.gradient(loss, vae.trainable_weights)
optimizer.apply_gradients(zip(grads, vae.trainable_weights))
return loss
losses = [] # Keep track of the losses over time.
for step, x in enumerate(dataset):
loss = training_step(x)
# Logging.
losses.append(float(loss))
if step % 100 == 0:
print("Step:", step, "Loss:", sum(losses) / len(losses))
# Stop after 1000 steps.
# Training the model to convergence is left
# as an exercise to the reader.
if step >= 1000:
break
```
<div class="k-default-codeblock">
```
Step: 0 Loss: 0.327964723110199
Step: 100 Loss: 0.1264294325420172
Step: 200 Loss: 0.10020137063009822
Step: 300 Loss: 0.08990733624989804
Step: 400 Loss: 0.0848350128962512
Step: 500 Loss: 0.081730601152855
Step: 600 Loss: 0.07928250531066278
Step: 700 Loss: 0.07791465763720058
Step: 800 Loss: 0.07670121117217116
Step: 900 Loss: 0.07572131670937025
Step: 1000 Loss: 0.07478016477960212
```
</div>
As you can see, building and training this type of model in Keras
is quick and painless.
---
## End-to-end experiment example 2: hypernetworks.
Let's take a look at another kind of research experiment: hypernetworks.
The idea is to use a small deep neural network (the hypernetwork) to generate
the weights for a larger network (the main network).
Let's implement a really trivial hypernetwork: we'll use a small 2-layer network to
generate the weights of a larger 3-layer network.
```python
import numpy as np
input_dim = 784
classes = 10
# This is the main network we'll actually use to predict labels.
main_network = keras.Sequential(
[
keras.layers.Dense(64, activation=tf.nn.relu),
keras.layers.Dense(classes),
]
)
# It doesn't need to create its own weights, so let's mark its layers
# as already built. That way, calling `main_network` won't create new variables.
for layer in main_network.layers:
layer.built = True
# This is the number of weight coefficients to generate. Each layer in the
# main network requires output_dim * input_dim + output_dim coefficients.
num_weights_to_generate = (classes * 64 + classes) + (64 * input_dim + 64)
# This is the hypernetwork that generates the weights of the `main_network` above.
hypernetwork = keras.Sequential(
[
keras.layers.Dense(16, activation=tf.nn.relu),
keras.layers.Dense(num_weights_to_generate, activation=tf.nn.sigmoid),
]
)
```
This is our training loop. For each batch of data:
- We use `hypernetwork` to generate an array of weight coefficients, `weights_pred`
- We reshape these coefficients into kernel & bias tensors for the `main_network`
- We run the forward pass of the `main_network` to compute the actual MNIST predictions
- We run backprop through the weights of the `hypernetwork` to minimize the
final classification loss
```python
# Loss and optimizer.
loss_fn = keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = keras.optimizers.Adam(learning_rate=1e-4)
# Prepare a dataset.
(x_train, y_train), _ = keras.datasets.mnist.load_data()
dataset = tf.data.Dataset.from_tensor_slices(
(x_train.reshape(60000, 784).astype("float32") / 255, y_train)
)
# We'll use a batch size of 1 for this experiment.
dataset = dataset.shuffle(buffer_size=1024).batch(1)
@tf.function
def train_step(x, y):
with tf.GradientTape() as tape:
# Predict weights for the outer model.
weights_pred = hypernetwork(x)
# Reshape them to the expected shapes for w and b for the outer model.
# Layer 0 kernel.
start_index = 0
w0_shape = (input_dim, 64)
w0_coeffs = weights_pred[:, start_index : start_index + np.prod(w0_shape)]
w0 = tf.reshape(w0_coeffs, w0_shape)
start_index += np.prod(w0_shape)
# Layer 0 bias.
b0_shape = (64,)
b0_coeffs = weights_pred[:, start_index : start_index + np.prod(b0_shape)]
b0 = tf.reshape(b0_coeffs, b0_shape)
start_index += np.prod(b0_shape)
# Layer 1 kernel.
w1_shape = (64, classes)
w1_coeffs = weights_pred[:, start_index : start_index + np.prod(w1_shape)]
w1 = tf.reshape(w1_coeffs, w1_shape)
start_index += np.prod(w1_shape)
# Layer 1 bias.
b1_shape = (classes,)
b1_coeffs = weights_pred[:, start_index : start_index + np.prod(b1_shape)]
b1 = tf.reshape(b1_coeffs, b1_shape)
start_index += np.prod(b1_shape)
# Set the weight predictions as the weight variables on the outer model.
main_network.layers[0].kernel = w0
main_network.layers[0].bias = b0
main_network.layers[1].kernel = w1
main_network.layers[1].bias = b1
# Inference on the outer model.
preds = main_network(x)
loss = loss_fn(y, preds)
# Train only inner model.
grads = tape.gradient(loss, hypernetwork.trainable_weights)
optimizer.apply_gradients(zip(grads, hypernetwork.trainable_weights))
return loss
losses = [] # Keep track of the losses over time.
for step, (x, y) in enumerate(dataset):
loss = train_step(x, y)
# Logging.
losses.append(float(loss))
if step % 100 == 0:
print("Step:", step, "Loss:", sum(losses) / len(losses))
# Stop after 1000 steps.
# Training the model to convergence is left
# as an exercise to the reader.
if step >= 1000:
break
```
<div class="k-default-codeblock">
```
Step: 0 Loss: 1.2556400299072266
Step: 100 Loss: 2.5476599238296544
Step: 200 Loss: 2.1573401512346457
Step: 300 Loss: 1.918845683104201
Step: 400 Loss: 1.8333103110458693
Step: 500 Loss: 1.7798502995807328
Step: 600 Loss: 1.6786754470412841
Step: 700 Loss: 1.603073729164222
Step: 800 Loss: 1.532632532587611
Step: 900 Loss: 1.499125787840248
Step: 1000 Loss: 1.4645580406379608
```
</div>
Implementing arbitrary research ideas with Keras is straightforward and highly
productive. Imagine trying out 25 ideas per day (20 minutes per experiment on average)!
Keras has been designed to go from idea to results as fast as possible, because we
believe this is
the key to doing great research.
We hope you enjoyed this quick introduction. Let us know what you build with Keras!
| keras-io/guides/md/intro_to_keras_for_researchers.md/0 | {
"file_path": "keras-io/guides/md/intro_to_keras_for_researchers.md",
"repo_id": "keras-io",
"token_count": 15927
} | 106 |
# Tailor the search space
**Authors:** Luca Invernizzi, James Long, Francois Chollet, Tom O'Malley, Haifeng Jin<br>
**Date created:** 2019/05/31<br>
**Last modified:** 2021/10/27<br>
**Description:** Tune a subset of the hyperparameters without changing the hypermodel.
<img class="k-inline-icon" src="https://colab.research.google.com/img/colab_favicon.ico"/> [**View in Colab**](https://colab.research.google.com/github/keras-team/keras-io/blob/master/guides/ipynb/keras_tuner/tailor_the_search_space.ipynb) <span class="k-dot">•</span><img class="k-inline-icon" src="https://github.com/favicon.ico"/> [**GitHub source**](https://github.com/keras-team/keras-io/blob/master/guides/keras_tuner/tailor_the_search_space.py)
```python
!pip install keras-tuner -q
```
In this guide, we will show how to tailor the search space without changing the
`HyperModel` code directly. For example, you can only tune some of the
hyperparameters and keep the rest fixed, or you can override the compile
arguments, like `optimizer`, `loss`, and `metrics`.
---
## The default value of a hyperparameter
Before we tailor the search space, it is important to know that every
hyperparameter has a default value. This default value is used as the
hyperparameter's value whenever that hyperparameter is not being tuned while we
tailor the search space.
Whenever you register a hyperparameter, you can use the `default` argument to
specify a default value:
```python
hp.Int("units", min_value=32, max_value=128, step=32, default=64)
```
If you don't specify one, each hyperparameter falls back to a built-in default (for `Int`, it is
equal to `min_value`).
In the following model-building function, we specified the default value for
the `units` hyperparameter as 64.
```python
import keras
from keras import layers
import keras_tuner
import numpy as np
def build_model(hp):
model = keras.Sequential()
model.add(layers.Flatten())
model.add(
layers.Dense(
units=hp.Int("units", min_value=32, max_value=128, step=32, default=64)
)
)
if hp.Boolean("dropout"):
model.add(layers.Dropout(rate=0.25))
model.add(layers.Dense(units=10, activation="softmax"))
model.compile(
optimizer=keras.optimizers.Adam(
learning_rate=hp.Choice("learning_rate", values=[1e-2, 1e-3, 1e-4])
),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
return model
```
We will reuse this search space in the rest of the tutorial by overriding the
hyperparameters without defining a new search space.
---
## Search a few and fix the rest
If you have an existing hypermodel, and you want to search over only a few
hyperparameters, and keep the rest fixed, you don't have to change the code in
the model-building function or the `HyperModel`. You can pass a
`HyperParameters` object containing all the hyperparameters you want to tune to the
`hyperparameters` argument of the tuner constructor. Specify
`tune_new_entries=False` to prevent the tuner from tuning other hyperparameters;
their default values will be used instead.
In the following example, we only tune the `learning_rate` hyperparameter, and
change its type and value range.
```python
hp = keras_tuner.HyperParameters()
# This will override the `learning_rate` parameter with your
# own selection of choices
hp.Float("learning_rate", min_value=1e-4, max_value=1e-2, sampling="log")
tuner = keras_tuner.RandomSearch(
hypermodel=build_model,
hyperparameters=hp,
# Prevents unlisted parameters from being tuned
tune_new_entries=False,
objective="val_accuracy",
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="search_a_few",
)
# Generate random data
x_train = np.random.rand(100, 28, 28, 1)
y_train = np.random.randint(0, 10, (100, 1))
x_val = np.random.rand(20, 28, 28, 1)
y_val = np.random.randint(0, 10, (20, 1))
# Run the search
tuner.search(x_train, y_train, epochs=1, validation_data=(x_val, y_val))
```
<div class="k-default-codeblock">
```
Trial 3 Complete [00h 00m 01s]
val_accuracy: 0.20000000298023224
```
</div>
<div class="k-default-codeblock">
```
Best val_accuracy So Far: 0.25
Total elapsed time: 00h 00m 03s
```
</div>
If you summarize the search space, you will see only one hyperparameter.
```python
tuner.search_space_summary()
```
<div class="k-default-codeblock">
```
Search space summary
Default search space size: 1
learning_rate (Float)
{'default': 0.0001, 'conditions': [], 'min_value': 0.0001, 'max_value': 0.01, 'step': None, 'sampling': 'log'}
```
</div>
---
## Fix a few and tune the rest
In the example above we showed how to tune only a few hyperparameters and keep
the rest fixed. You can also do the reverse: only fix a few hyperparameters
and tune all the rest.
In the following example, we fix the value of the `learning_rate`
hyperparameter. Pass a `hyperparameters` argument with a `Fixed` entry (or any
number of `Fixed` entries). Also remember to specify `tune_new_entries=True`,
which allows us to tune the rest of the hyperparameters.
```python
hp = keras_tuner.HyperParameters()
hp.Fixed("learning_rate", value=1e-4)
tuner = keras_tuner.RandomSearch(
build_model,
hyperparameters=hp,
tune_new_entries=True,
objective="val_accuracy",
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="fix_a_few",
)
tuner.search(x_train, y_train, epochs=1, validation_data=(x_val, y_val))
```
<div class="k-default-codeblock">
```
Trial 3 Complete [00h 00m 01s]
val_accuracy: 0.15000000596046448
```
</div>
<div class="k-default-codeblock">
```
Best val_accuracy So Far: 0.15000000596046448
Total elapsed time: 00h 00m 03s
```
</div>
If you summarize the search space, you will see the `learning_rate` is marked
as fixed, and the rest of the hyperparameters are being tuned.
```python
tuner.search_space_summary()
```
<div class="k-default-codeblock">
```
Search space summary
Default search space size: 3
learning_rate (Fixed)
{'conditions': [], 'value': 0.0001}
units (Int)
{'default': 64, 'conditions': [], 'min_value': 32, 'max_value': 128, 'step': 32, 'sampling': 'linear'}
dropout (Boolean)
{'default': False, 'conditions': []}
```
</div>
---
## Overriding compilation arguments
If you have a hypermodel for which you want to change the existing optimizer,
loss, or metrics, you can do so by passing these arguments to the tuner
constructor:
```python
tuner = keras_tuner.RandomSearch(
build_model,
optimizer=keras.optimizers.Adam(1e-3),
loss="mse",
metrics=[
"sparse_categorical_crossentropy",
],
objective="val_loss",
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="override_compile",
)
tuner.search(x_train, y_train, epochs=1, validation_data=(x_val, y_val))
```
<div class="k-default-codeblock">
```
Trial 3 Complete [00h 00m 01s]
val_loss: 29.39796257019043
```
</div>
<div class="k-default-codeblock">
```
Best val_loss So Far: 29.39630699157715
Total elapsed time: 00h 00m 04s
```
</div>
If you get the best model, you can see the loss function has changed to MSE.
```python
tuner.get_best_models()[0].loss
```
<div class="k-default-codeblock">
```
/usr/local/python/3.10.13/lib/python3.10/site-packages/keras/src/saving/saving_lib.py:388: UserWarning: Skipping variable loading for optimizer 'adam', because it has 2 variables whereas the saved optimizer has 10 variables.
trackable.load_own_variables(weights_store.get(inner_path))
'mse'
```
</div>
---
## Tailor the search space of pre-built HyperModels
You can also use these techniques with the pre-built models in KerasTuner, like
`HyperResNet` or `HyperXception`. However, to see which hyperparameters these
pre-built `HyperModel`s define, you will have to read their source code.
In the following example, we only tune the `learning_rate` of `HyperXception`
and fix all the remaining hyperparameters. Because the default loss of
`HyperXception` is `categorical_crossentropy`, which expects one-hot encoded
labels, it doesn't match our raw integer label data, so we change it by
overriding the `loss` in the compile args to
`sparse_categorical_crossentropy`.
```python
hypermodel = keras_tuner.applications.HyperXception(input_shape=(28, 28, 1), classes=10)
hp = keras_tuner.HyperParameters()
# This will override the `learning_rate` parameter with your
# own selection of choices
hp.Choice("learning_rate", values=[1e-2, 1e-3, 1e-4])
tuner = keras_tuner.RandomSearch(
hypermodel,
hyperparameters=hp,
# Prevents unlisted parameters from being tuned
tune_new_entries=False,
# Override the loss.
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
objective="val_accuracy",
max_trials=3,
overwrite=True,
directory="my_dir",
project_name="helloworld",
)
# Run the search
tuner.search(x_train, y_train, epochs=1, validation_data=(x_val, y_val))
tuner.search_space_summary()
```
<div class="k-default-codeblock">
```
Trial 3 Complete [00h 00m 19s]
val_accuracy: 0.15000000596046448
```
</div>
<div class="k-default-codeblock">
```
Best val_accuracy So Far: 0.20000000298023224
Total elapsed time: 00h 00m 58s
Search space summary
Default search space size: 1
learning_rate (Choice)
{'default': 0.01, 'conditions': [], 'values': [0.01, 0.001, 0.0001], 'ordered': True}
```
</div> | keras-io/guides/md/keras_tuner/tailor_the_search_space.md/0 | {
"file_path": "keras-io/guides/md/keras_tuner/tailor_the_search_space.md",
"repo_id": "keras-io",
"token_count": 3317
} | 107 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/api/keras_nlp/modeling_layers/sine_position_encoding/'" />
| keras-io/redirects/api/keras_nlp/layers/sine_position_encoding/index.html/0 | {
"file_path": "keras-io/redirects/api/keras_nlp/layers/sine_position_encoding/index.html",
"repo_id": "keras-io",
"token_count": 49
} | 108 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/examples/vision/nl_image_search/'" />
| keras-io/redirects/examples/nlp/nl_image_search/index.html/0 | {
"file_path": "keras-io/redirects/examples/nlp/nl_image_search/index.html",
"repo_id": "keras-io",
"token_count": 38
} | 109 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/api/layers/regularization_layers/'" />
| keras-io/redirects/layers/noise/index.html/0 | {
"file_path": "keras-io/redirects/layers/noise/index.html",
"repo_id": "keras-io",
"token_count": 38
} | 110 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/api/layers/regularizers/'" />
| keras-io/redirects/regularizers/index.html/0 | {
"file_path": "keras-io/redirects/regularizers/index.html",
"repo_id": "keras-io",
"token_count": 35
} | 111 |
from guides_master import GUIDES_MASTER
from examples_master import EXAMPLES_MASTER
from api_master import API_MASTER
from keras2_api_master import KERAS2_API_MASTER
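# Top-level navigation tree for keras.io, assembled from the per-section
# masters imported above.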
MASTER = {
"path": "/",
"title": "Keras: the Python Deep Learning library",
"children": [
{
"path": "about",
"title": "About Keras",
},
{
"path": "getting_started/",
"title": "Getting started",
"children": [
{
"path": "intro_to_keras_for_engineers",
"title": "Introduction to Keras for engineers",
},
{
"path": "ecosystem",
"title": "The Keras ecosystem",
},
{
"path": "faq",
"title": "Frequently Asked Questions",
"outline": False,
},
],
},
GUIDES_MASTER,
API_MASTER,
KERAS2_API_MASTER,
EXAMPLES_MASTER,
{
"path": "keras_tuner/",
"title": "KerasTuner: Hyperparameter Tuning",
},
{
"path": "keras_cv/",
"title": "KerasCV: Computer Vision Workflows",
},
{
"path": "keras_nlp/",
"title": "KerasNLP: Natural Language Workflows",
},
],
}
| keras-io/scripts/master.py/0 | {
"file_path": "keras-io/scripts/master.py",
"repo_id": "keras-io",
"token_count": 817
} | 112 |
# KerasCV Metrics
KerasCV metrics are `keras.metrics.Metric` subclasses for computer vision specific use cases.
See also the [*COCO* metrics usage guide](https://keras.io/guides/keras_cv/coco_metrics/).
{{toc}}
| keras-io/templates/api/keras_cv/metrics/index.md/0 | {
"file_path": "keras-io/templates/api/keras_cv/metrics/index.md",
"repo_id": "keras-io",
"token_count": 75
} | 113 |
# Layer activation functions
## Usage of activations
Activations can either be used through an `Activation` layer, or through the `activation` argument supported by all forward layers:
```python
model.add(layers.Dense(64, activation=activations.relu))
```
This is equivalent to:
```python
from keras import layers
from keras import activations
model.add(layers.Dense(64))
model.add(layers.Activation(activations.relu))
```
All built-in activations may also be passed via their string identifier:
```python
model.add(layers.Dense(64, activation='relu'))
```
---
## Available activations
{{autogenerated}}
---
## Creating custom activations
You can also use a callable as an activation
(in this case it should take a tensor and return a tensor of the same shape and dtype):
```python
model.add(layers.Dense(64, activation=keras.ops.tanh))
```
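For instance, here is a minimal sketch of a custom activation defined as a plain function (the name `scaled_tanh` is just illustrative):
```python
from keras import ops
def scaled_tanh(x):
    # Takes a tensor and returns a tensor of the same shape and dtype.
    return 0.5 * ops.tanh(x)
model.add(layers.Dense(64, activation=scaled_tanh))
```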
---
## About "advanced activation" layers
Activations that are more complex than a simple function (e.g. learnable activations, which maintain a state)
are available as [Advanced Activation layers](/api/layers/activation_layers/).
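For instance, `PReLU` is a built-in advanced activation layer with a learnable slope for negative inputs; it is used like any other layer:
```python
model.add(layers.Dense(64))
model.add(layers.PReLU())  # learns one slope parameter per unit
```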
| keras-io/templates/api/layers/activations.md/0 | {
"file_path": "keras-io/templates/api/layers/activations.md",
"repo_id": "keras-io",
"token_count": 327
} | 114 |
# KerasNLP
<a class="github-button" href="https://github.com/keras-team/keras-nlp" data-size="large" data-show-count="true" aria-label="Star keras-team/keras-nlp on GitHub">Star</a>
KerasNLP is a natural language processing library that works natively
with TensorFlow, JAX, or PyTorch. Built on Keras 3, these models, layers,
metrics, and tokenizers can be trained and serialized in any framework and
re-used in another without costly migrations.
KerasNLP supports users through their entire development cycle. Our workflows
are built from modular components that have state-of-the-art preset weights when
used out-of-the-box and are easily customizable when more control is needed.
This library is an extension of the core Keras API; all high-level modules are
[`Layers`](/api/layers/) or
[`Models`](/api/models/) that receive that same level of polish
as core Keras. If you are familiar with Keras, congratulations! You already
understand most of KerasNLP.
See our [Getting Started guide](/guides/keras_nlp/getting_started)
to start learning our API. We welcome
[contributions](https://github.com/keras-team/keras-nlp/blob/master/CONTRIBUTING.md).
---
## Quick links
* [KerasNLP API reference](/api/keras_nlp/)
* [KerasNLP on GitHub](https://github.com/keras-team/keras-nlp)
* [List of available pre-trained models](/api/keras_nlp/models/)
## Guides
* [Getting Started with KerasNLP](/guides/keras_nlp/getting_started/)
* [Pretraining a Transformer from scratch](/guides/keras_nlp/transformer_pretraining/)
## Examples
* [GPT-2 text generation](/examples/generative/gpt2_text_generation_with_kerasnlp/)
* [Parameter-efficient fine-tuning of GPT-2 with LoRA](/examples/nlp/parameter_efficient_finetuning_of_gpt2_with_lora/)
* [Semantic Similarity](/examples/nlp/semantic_similarity_with_keras_nlp/)
* [Sentence embeddings using Siamese RoBERTa-networks](/examples/nlp/sentence_embeddings_with_sbert/)
* [Data Parallel Training with tf.distribute](/examples/nlp/data_parallel_training_with_keras_nlp/)
* [English-to-Spanish translation](/examples/nlp/neural_machine_translation_with_keras_nlp/)
* [GPT text generation from scratch](/examples/generative/text_generation_gpt/)
* [Text Classification using FNet](/examples/nlp/fnet_classification_with_keras_nlp/)
---
## Installation
KerasNLP supports both Keras 2 and Keras 3. We recommend Keras 3 for all new
users, as it enables using KerasNLP models and layers with JAX, TensorFlow and
PyTorch.
### Keras 2 Installation
To install the latest KerasNLP release with Keras 2, simply run:
```
pip install --upgrade keras-nlp
```
### Keras 3 Installation
There are currently two ways to install Keras 3 with KerasNLP. To install the
stable versions of KerasNLP and Keras 3, you should install Keras 3 **after**
installing KerasNLP. This is a temporary step while TensorFlow is pinned to
Keras 2, and will no longer be necessary after TensorFlow 2.16.
```
pip install --upgrade keras-nlp
pip install --upgrade keras
```
To install the latest nightly changes for both KerasNLP and Keras, you can use
our nightly package.
```
pip install --upgrade keras-nlp-nightly
```
**Note:** Keras 3 will not function with TensorFlow 2.14 or earlier.
See [Getting started with Keras](/getting_started/) for more information on
installing Keras generally and compatibility with different frameworks.
---
## Quickstart
Fine-tune BERT on a small sentiment analysis task using the
[`keras_nlp.models`](/api/keras_nlp/models/) API:
```python
import os
os.environ["KERAS_BACKEND"] = "tensorflow" # Or "jax" or "torch"!
import keras_nlp
import tensorflow_datasets as tfds
imdb_train, imdb_test = tfds.load(
"imdb_reviews",
split=["train", "test"],
as_supervised=True,
batch_size=16,
)
# Load a BERT model.
classifier = keras_nlp.models.BertClassifier.from_preset(
"bert_base_en_uncased",
num_classes=2,
)
# Fine-tune on IMDb movie reviews.
classifier.fit(imdb_train, validation_data=imdb_test)
# Predict two new examples.
classifier.predict(["What an amazing movie!", "A total waste of my time."])
```
---
## Compatibility
We follow [Semantic Versioning](https://semver.org/), and plan to
provide backwards compatibility guarantees both for code and saved models built
with our components. While we continue with pre-release `0.y.z` development, we
may break compatibility at any time and APIs should not be considered stable.
## Disclaimer
KerasNLP provides access to pre-trained models via the `keras_nlp.models` API.
These pre-trained models are provided on an "as is" basis, without warranties
or conditions of any kind. The following underlying models are provided by third
parties, and subject to separate licenses:
BART, DeBERTa, DistilBERT, GPT-2, OPT, RoBERTa, Whisper, and XLM-RoBERTa.
## Citing KerasNLP
If KerasNLP helps your research, we appreciate your citations.
Here is the BibTeX entry:
```bibtex
@misc{kerasnlp2022,
title={KerasNLP},
author={Watson, Matthew, and Qian, Chen, and Bischof, Jonathan and Chollet,
Fran\c{c}ois and others},
year={2022},
howpublished={\url{https://github.com/keras-team/keras-nlp}},
}
```
| keras-io/templates/keras_nlp/index.md/0 | {
"file_path": "keras-io/templates/keras_nlp/index.md",
"repo_id": "keras-io",
"token_count": 1688
} | 115 |
# KerasNLP Benchmarks
This directory houses a collection of scripts for benchmarking APIs and utility
functions which KerasNLP provides.
## Text Generation
For benchmarking text generation functions, the following command can be run
from the root of the repository:
```sh
python3 ./keras_nlp/benchmarks/text_generation.py
```
On running this script on Google Colab (with 3090 GPU, and TensorFlow 2.11.0),
the following results were obtained:
| **Decoding Strategy** | **Graph Mode (sec)** | **Graph Mode with XLA (sec)** |
|:---------------------: |:--------------------: |:-----------------------------: |
| Greedy Search | 470.23 | 61.79 |
| Beam Search | 530.13 | 189.61 |
| Top-k Search | 374.05 | 62.87 |
| Top-p Search | 401.97 | 260.31 |
To change the configuration (for example, the number of layers in the transformer
model used for inference), modify the config dictionaries given at
the top of the script, as sketched below.
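The dictionaries below are purely illustrative (the exact keys live at the top of `text_generation.py` and may differ); they just show the kind of knobs the script exposes:
```python
# Illustrative only -- check keras_nlp/benchmarks/text_generation.py
# for the real dictionaries and key names.
MODEL_CONFIG = {
    "num_layers": 2,  # depth of the toy transformer
    "num_heads": 2,
    "hidden_dim": 64,
    "intermediate_dim": 128,
}
GENERATION_CONFIG = {
    "batch_size": 32,
    "max_length": 64,  # maximum generated sequence length
}
```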
## Sentiment Analysis
For benchmarking classification models, the following command can be run
from the root of the repository:
```sh
python3 keras_nlp/benchmarks/sentiment_analysis.py \
--model="BertClassifier" \
--preset="bert_small_en_uncased" \
--learning_rate=5e-5 \
--num_epochs=5 \
--batch_size=32
--mixed_precision_policy="mixed_float16"
```
The `--model` flag specifies the model name, and `--preset` specifies the preset under test. `--preset` may be `None`,
while `--model` is required. The other flags are standard training flags.
This script outputs:
- validation accuracy for each epoch.
- testing accuracy after training is done.
- total elapsed time (in seconds). | keras-nlp/benchmarks/README.md/0 | {
"file_path": "keras-nlp/benchmarks/README.md",
"repo_id": "keras-nlp",
"token_count": 705
} | 116 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import keras_nlp.layers.modeling.transformer_layer_utils as utils
from keras_nlp.backend import ops
from keras_nlp.backend import random
from keras_nlp.tests.test_case import TestCase
class TransformerLayerUtilsTest(TestCase):
def test_compute_causal_mask(self):
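        # `compute_causal_mask(batch_size, q_len, kv_len)` returns a lower
        # triangular mask, so position i can only attend to positions <= i.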
mask = utils.compute_causal_mask(1, 2, 2)
self.assertAllEqual(mask, [[[1, 0], [1, 1]]])
def test_merge_padding_and_attention_mask(self):
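        # The merged mask is the elementwise AND of the attention mask and
        # the padding mask (broadcast across the query dimension).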
padding_mask = ops.array([[1, 1, 0]])
attention_mask = ops.array([[[0, 0, 1], [0, 1, 0], [1, 0, 0]]])
inputs = random.uniform(shape=[1, 3, 2])
merged_mask = utils.merge_padding_and_attention_mask(
inputs,
padding_mask,
attention_mask,
)
self.assertAllEqual(merged_mask, [[[0, 0, 0], [0, 1, 0], [1, 0, 0]]])
def test_bad_mask_shapes(self):
with self.assertRaises(ValueError):
padding_mask = ops.array([[[1, 1, 0], [1, 0, 0]]])
attention_mask = ops.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])
inputs = random.uniform(shape=[1, 3, 2])
utils.merge_padding_and_attention_mask(
inputs,
padding_mask,
attention_mask,
)
with self.assertRaises(ValueError):
padding_mask = ops.array([[1, 1, 0]])
attention_mask = ops.array([[0, 0, 1], [1, 0, 0]])
inputs = random.uniform(shape=[1, 3, 2])
utils.merge_padding_and_attention_mask(
inputs,
padding_mask,
attention_mask,
)
| keras-nlp/keras_nlp/layers/modeling/transformer_layer_utils_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/layers/modeling/transformer_layer_utils_test.py",
"repo_id": "keras-nlp",
"token_count": 976
} | 117 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.utils.tensor_utils import is_float_dtype
@keras_nlp_export("keras_nlp.metrics.EditDistance")
class EditDistance(keras.metrics.Metric):
"""Edit Distance metric.
This class implements the edit distance metric, sometimes called
Levenshtein Distance, as a `keras.metrics.Metric`. Essentially, edit
distance is the least number of operations required to convert one string to
another, where an operation can be one of substitution, deletion or
insertion. By default, this metric will compute the normalized score, where
the unnormalized edit distance score is divided by the number of tokens in
the reference text.
This class can be used to compute character error rate (CER) and word error
rate (WER). You simply have to pass the appropriate tokenized text, and set
`normalize` to True.
Note on input shapes:
`y_true` and `y_pred` can either be tensors of rank 1 or ragged tensors of
rank 2. These tensors contain tokenized text.
Args:
normalize: bool. If True, the computed number of operations
(substitutions + deletions + insertions) across all samples is
divided by the aggregate number of tokens in all reference texts. If
False, number of operations are calculated for every sample, and
averaged over all the samples.
dtype: string or tf.dtypes.Dtype. Precision of metric computation. If
not specified, it defaults to `"float32"`.
name: string. Name of the metric instance.
**kwargs: Other keyword arguments.
References:
- [Morris et al.](https://www.researchgate.net/publication/221478089)
Examples:
Various Input Types.
Single-level Python list.
>>> edit_distance = keras_nlp.metrics.EditDistance()
>>> y_true = "the tiny little cat was found under the big funny bed".split()
>>> y_pred = "the cat was found under the bed".split()
>>> edit_distance(y_true, y_pred)
<tf.Tensor: shape=(), dtype=float32, numpy=0.36363637>
Nested Python list.
>>> edit_distance = keras_nlp.metrics.EditDistance()
>>> y_true = [
... "the tiny little cat was found under the big funny bed".split(),
... "it is sunny today".split(),
... ]
>>> y_pred = [
... "the cat was found under the bed".split(),
... "it is sunny but with a hint of cloud cover".split(),
... ]
>>> edit_distance(y_true, y_pred)
<tf.Tensor: shape=(), dtype=float32, numpy=0.73333335>
"""
def __init__(
self,
normalize=True,
dtype="float32",
name="edit_distance",
**kwargs,
):
super().__init__(name=name, dtype=dtype, **kwargs)
if not is_float_dtype(dtype):
raise ValueError(
"`dtype` must be a floating point type. "
f"Received: dtype={dtype}"
)
self.normalize = normalize
self._aggregate_unnormalized_edit_distance = self.add_weight(
shape=(),
initializer="zeros",
dtype=self.dtype,
name="aggregate_unnormalized_edit_distance",
)
if normalize:
self._aggregate_reference_length = self.add_weight(
shape=(),
initializer="zeros",
dtype=self.dtype,
name="aggregate_reference_length",
)
else:
self._number_of_samples = self.add_weight(
shape=(),
initializer="zeros",
dtype=self.dtype,
name="number_of_samples",
)
def update_state(self, y_true, y_pred, sample_weight=None):
def validate_and_fix_rank(inputs, tensor_name):
if not isinstance(inputs, (tf.Tensor, tf.RaggedTensor)):
inputs = tf.ragged.constant(inputs)
if inputs.shape.rank == 1:
return tf.RaggedTensor.from_tensor(inputs[tf.newaxis])
elif inputs.shape.rank == 2:
return inputs
else:
raise ValueError(
f"{tensor_name} must be of rank 1 or 2. "
f"Found rank: {inputs.shape.rank}"
)
y_true = validate_and_fix_rank(y_true, "y_true")
y_pred = validate_and_fix_rank(y_pred, "y_pred")
if self.normalize:
self._aggregate_reference_length.assign_add(
tf.cast(tf.size(y_true.flat_values), dtype=self.dtype)
)
def calculate_edit_distance(args):
reference, hypothesis = args
reference = tf.sparse.from_dense([reference])
hypothesis = tf.sparse.from_dense([hypothesis])
edit_distance = tf.squeeze(
tf.edit_distance(
hypothesis=hypothesis,
truth=reference,
normalize=False,
)
)
self._aggregate_unnormalized_edit_distance.assign_add(
tf.cast(edit_distance, dtype=self.dtype)
)
if not self.normalize:
self._number_of_samples.assign_add(tf.cast(1, dtype=self.dtype))
return 0
_ = tf.map_fn(
fn=calculate_edit_distance,
elems=(y_true, y_pred),
fn_output_signature="int8",
)
def result(self):
if self.normalize:
if self._aggregate_reference_length == 0:
return 0.0
return (
self._aggregate_unnormalized_edit_distance
/ self._aggregate_reference_length
)
if self._number_of_samples == 0:
return 0.0
return (
self._aggregate_unnormalized_edit_distance / self._number_of_samples
)
def reset_state(self):
self._aggregate_unnormalized_edit_distance.assign(0.0)
if self.normalize:
self._aggregate_reference_length.assign(0.0)
else:
self._number_of_samples.assign(0.0)
def get_config(self):
config = super().get_config()
config.update({"normalize": self.normalize})
return config
| keras-nlp/keras_nlp/metrics/edit_distance.py/0 | {
"file_path": "keras-nlp/keras_nlp/metrics/edit_distance.py",
"repo_id": "keras-nlp",
"token_count": 3060
} | 118 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
import tensorflow as tf
from absl import logging
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import ops
from keras_nlp.models.bart.bart_preprocessor import BartPreprocessor
from keras_nlp.models.bart.bart_presets import backbone_presets
from keras_nlp.utils.keras_utils import (
convert_inputs_to_list_of_tensor_segments,
)
from keras_nlp.utils.keras_utils import pack_x_y_sample_weight
from keras_nlp.utils.python_utils import classproperty
@keras_nlp_export("keras_nlp.models.BartSeq2SeqLMPreprocessor")
class BartSeq2SeqLMPreprocessor(BartPreprocessor):
"""BART Seq2Seq LM preprocessor.
This layer is used as preprocessor for seq2seq tasks using the BART model.
This class subclasses `keras_nlp.models.BartPreprocessor` and keeps most of
its functionality. It has two changes from the superclass:
1. Sets the `y` (label) and `sample_weights` fields by shifting the
decoder input sequence one step towards the left. Both these fields are
inferred internally, and any passed values will be ignored.
2. Drops the last token from the decoder input sequence as it does not have
a successor.
Args:
tokenizer: A `keras_nlp.models.BartTokenizer` instance.
encoder_sequence_length: The length of the packed encoder inputs.
decoder_sequence_length: The length of the packed decoder inputs.
Call arguments:
x: A dictionary with `encoder_text` and `decoder_text` as its keys.
Each value in the dictionary should be a tensor of single string
sequences. Inputs may be batched or unbatched. Raw python inputs
will be converted to tensors.
y: Label data. Should always be `None` as the layer generates labels by
shifting the decoder input sequence one step to the left.
sample_weight: Label weights. Should always be `None` as the layer
generates label weights by shifting the padding mask one step to the
left.
Examples:
Directly calling the layer on data
```python
preprocessor = keras_nlp.models.BartPreprocessor.from_preset("bart_base_en")
# Preprocess unbatched inputs.
inputs = {
"encoder_text": "The fox was sleeping.",
"decoder_text": "The fox was awake."
}
preprocessor(inputs)
# Preprocess batched inputs.
inputs = {
"encoder_text": ["The fox was sleeping.", "The lion was quiet."],
"decoder_text": ["The fox was awake.", "The lion was roaring."]
}
preprocessor(inputs)
# Custom vocabulary.
vocab = {
"<s>": 0,
"<pad>": 1,
"</s>": 2,
"Ġafter": 5,
"noon": 6,
"Ġsun": 7,
}
merges = ["Ġ a", "Ġ s", "Ġ n", "e r", "n o", "o n", "Ġs u", "Ġa f", "no on"]
merges += ["Ġsu n", "Ġaf t", "Ġaft er"]
tokenizer = keras_nlp.models.BartTokenizer(
vocabulary=vocab,
merges=merges,
)
preprocessor = keras_nlp.models.BartPreprocessor(
tokenizer=tokenizer,
encoder_sequence_length=20,
decoder_sequence_length=10,
)
inputs = {
"encoder_text": "The fox was sleeping.",
"decoder_text": "The fox was awake."
}
preprocessor(inputs)
```
Mapping with `tf.data.Dataset`.
```python
preprocessor = keras_nlp.models.BartPreprocessor.from_preset("bart_base_en")
# Map single sentences.
features = {
"encoder_text": tf.constant(
["The fox was sleeping.", "The lion was quiet."]
),
"decoder_text": tf.constant(
["The fox was awake.", "The lion was roaring."]
)
}
ds = tf.data.Dataset.from_tensor_slices(features)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
```
"""
def call(
self,
x,
y=None,
sample_weight=None,
*,
encoder_sequence_length=None,
decoder_sequence_length=None,
# `sequence_length` is an alias for `decoder_sequence_length`
sequence_length=None,
):
if y is not None or sample_weight is not None:
            logging.warning(
                "`BartSeq2SeqLMPreprocessor` infers `y` and `sample_weight` "
                "from the provided input data, i.e., `x`. However, non-`None` "
                "values have been passed for `y` or `sample_weight` or both. "
                "These values will be ignored."
            )
if encoder_sequence_length is None:
encoder_sequence_length = self.encoder_sequence_length
decoder_sequence_length = decoder_sequence_length or sequence_length
if decoder_sequence_length is None:
decoder_sequence_length = self.decoder_sequence_length
x = super().call(
x,
encoder_sequence_length=encoder_sequence_length,
decoder_sequence_length=decoder_sequence_length + 1,
)
decoder_token_ids = x.pop("decoder_token_ids")
decoder_padding_mask = x.pop("decoder_padding_mask")
# The last token does not have a next token. Hence, we truncate it.
x = {
**x,
"decoder_token_ids": decoder_token_ids[..., :-1],
"decoder_padding_mask": decoder_padding_mask[..., :-1],
}
# Target `y` will be the decoder input sequence shifted one step to the
# left (i.e., the next token).
y = decoder_token_ids[..., 1:]
sample_weight = decoder_padding_mask[..., 1:]
return pack_x_y_sample_weight(x, y, sample_weight)
def generate_preprocess(
self,
x,
*,
encoder_sequence_length=None,
# `sequence_length` is an alias for `decoder_sequence_length`
decoder_sequence_length=None,
sequence_length=None,
):
"""Convert encoder and decoder input strings to integer token inputs for generation.
Similar to calling the layer for training, this method takes in a dict
containing `"encoder_text"` and `"decoder_text"`, with strings or tensor
strings for values, tokenizes and packs the input, and computes a
padding mask masking all inputs not filled in with a padded value.
Unlike calling the layer for training, this method does not compute
labels and will never append a tokenizer.end_token_id to the end of
the decoder sequence (as generation is expected to continue at the end
of the inputted decoder prompt).
"""
if not self.built:
self.build(None)
if isinstance(x, dict):
encoder_text = x["encoder_text"]
decoder_text = x["decoder_text"]
else:
encoder_text = x
# Initialize empty prompt for the decoder.
decoder_text = tf.fill((tf.shape(encoder_text)[0],), "")
if encoder_sequence_length is None:
encoder_sequence_length = self.encoder_sequence_length
decoder_sequence_length = decoder_sequence_length or sequence_length
if decoder_sequence_length is None:
decoder_sequence_length = self.decoder_sequence_length
# Tokenize and pack the encoder inputs.
# TODO: Remove `[0]` once we have shifted to `MultiSegmentPacker`.
encoder_text = convert_inputs_to_list_of_tensor_segments(encoder_text)[
0
]
encoder_token_ids = self.tokenizer(encoder_text)
encoder_token_ids, encoder_padding_mask = self.encoder_packer(
encoder_token_ids,
sequence_length=encoder_sequence_length,
)
# Tokenize and pack the decoder inputs.
decoder_text = convert_inputs_to_list_of_tensor_segments(decoder_text)[
0
]
decoder_token_ids = self.tokenizer(decoder_text)
decoder_token_ids, decoder_padding_mask = self.decoder_packer(
decoder_token_ids,
sequence_length=decoder_sequence_length,
add_end_value=False,
)
return {
"encoder_token_ids": encoder_token_ids,
"encoder_padding_mask": encoder_padding_mask,
"decoder_token_ids": decoder_token_ids,
"decoder_padding_mask": decoder_padding_mask,
}
def generate_postprocess(
self,
x,
):
"""Convert integer token output to strings for generation.
This method reverses `generate_preprocess()`, by first removing all
padding and start/end tokens, and then converting the integer sequence
back to a string.
"""
if not self.built:
self.build(None)
decoder_token_ids, decoder_padding_mask = (
x["decoder_token_ids"],
x["decoder_padding_mask"],
)
decoder_token_ids = ops.convert_to_numpy(decoder_token_ids)
decoder_padding_mask = ops.convert_to_numpy(decoder_padding_mask)
# Strip any special tokens during detokenization, i.e., the start and
# end markers. In the future, we could make this configurable.
decoder_padding_mask = (
decoder_padding_mask
& (decoder_token_ids != self.tokenizer.end_token_id)
& (decoder_token_ids != self.tokenizer.start_token_id)
)
decoder_token_ids = tf.ragged.boolean_mask(
decoder_token_ids, decoder_padding_mask
)
return self.tokenizer.detokenize(decoder_token_ids)
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
| keras-nlp/keras_nlp/models/bart/bart_seq_2_seq_lm_preprocessor.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/bart/bart_seq_2_seq_lm_preprocessor.py",
"repo_id": "keras-nlp",
"token_count": 4257
} | 119 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""BERT model preset configurations."""
backbone_presets = {
"bert_tiny_en_uncased": {
"metadata": {
"description": (
"2-layer BERT model where all input is lowercased. "
"Trained on English Wikipedia + BooksCorpus."
),
"params": 4385920,
"official_name": "BERT",
"path": "bert",
"model_card": "https://github.com/google-research/bert/blob/master/README.md",
},
"kaggle_handle": "kaggle://keras/bert/keras/bert_tiny_en_uncased/2",
},
"bert_small_en_uncased": {
"metadata": {
"description": (
"4-layer BERT model where all input is lowercased. "
"Trained on English Wikipedia + BooksCorpus."
),
"params": 28763648,
"official_name": "BERT",
"path": "bert",
"model_card": "https://github.com/google-research/bert/blob/master/README.md",
},
"kaggle_handle": "kaggle://keras/bert/keras/bert_small_en_uncased/2",
},
"bert_medium_en_uncased": {
"metadata": {
"description": (
"8-layer BERT model where all input is lowercased. "
"Trained on English Wikipedia + BooksCorpus."
),
"params": 41373184,
"official_name": "BERT",
"path": "bert",
"model_card": "https://github.com/google-research/bert/blob/master/README.md",
},
"kaggle_handle": "kaggle://keras/bert/keras/bert_medium_en_uncased/2",
},
"bert_base_en_uncased": {
"metadata": {
"description": (
"12-layer BERT model where all input is lowercased. "
"Trained on English Wikipedia + BooksCorpus."
),
"params": 109482240,
"official_name": "BERT",
"path": "bert",
"model_card": "https://github.com/google-research/bert/blob/master/README.md",
},
"kaggle_handle": "kaggle://keras/bert/keras/bert_base_en_uncased/2",
},
"bert_base_en": {
"metadata": {
"description": (
"12-layer BERT model where case is maintained. "
"Trained on English Wikipedia + BooksCorpus."
),
"params": 108310272,
"official_name": "BERT",
"path": "bert",
"model_card": "https://github.com/google-research/bert/blob/master/README.md",
},
"kaggle_handle": "kaggle://keras/bert/keras/bert_base_en/2",
},
"bert_base_zh": {
"metadata": {
"description": (
"12-layer BERT model. Trained on Chinese Wikipedia."
),
"params": 102267648,
"official_name": "BERT",
"path": "bert",
"model_card": "https://github.com/google-research/bert/blob/master/README.md",
},
"kaggle_handle": "kaggle://keras/bert/keras/bert_base_zh/2",
    },
    "bert_base_multi": {
        "metadata": {
            "description": (
                "12-layer BERT model where case is maintained. Trained on Wikipedias of 104 languages."
            ),
"params": 177853440,
"official_name": "BERT",
"path": "bert",
"model_card": "https://github.com/google-research/bert/blob/master/README.md",
},
"kaggle_handle": "kaggle://keras/bert/keras/bert_base_multi/2",
},
"bert_large_en_uncased": {
"metadata": {
"description": (
"24-layer BERT model where all input is lowercased. "
"Trained on English Wikipedia + BooksCorpus."
),
"params": 335141888,
"official_name": "BERT",
"path": "bert",
"model_card": "https://github.com/google-research/bert/blob/master/README.md",
},
"kaggle_handle": "kaggle://keras/bert/keras/bert_large_en_uncased/2",
},
"bert_large_en": {
"metadata": {
"description": (
"24-layer BERT model where case is maintained. "
"Trained on English Wikipedia + BooksCorpus."
),
"params": 333579264,
"official_name": "BERT",
"path": "bert",
"model_card": "https://github.com/google-research/bert/blob/master/README.md",
},
"kaggle_handle": "kaggle://keras/bert/keras/bert_large_en/2",
},
}
classifier_presets = {
"bert_tiny_en_uncased_sst2": {
"metadata": {
"description": (
"The bert_tiny_en_uncased backbone model fine-tuned on the SST-2 sentiment analysis dataset."
),
"params": 4385920,
"official_name": "BERT",
"path": "bert",
"model_card": "https://github.com/google-research/bert/blob/master/README.md",
},
"kaggle_handle": "kaggle://keras/bert/keras/bert_tiny_en_uncased_sst2/3",
}
}
| keras-nlp/keras_nlp/models/bert/bert_presets.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/bert/bert_presets.py",
"repo_id": "keras-nlp",
"token_count": 2765
} | 120 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.layers.modeling.reversible_embedding import ReversibleEmbedding
from keras_nlp.models.backbone import Backbone
from keras_nlp.models.deberta_v3.deberta_v3_presets import backbone_presets
from keras_nlp.models.deberta_v3.disentangled_attention_encoder import (
DisentangledAttentionEncoder,
)
from keras_nlp.models.deberta_v3.relative_embedding import RelativeEmbedding
from keras_nlp.utils.python_utils import classproperty
def deberta_kernel_initializer(stddev=0.02):
return keras.initializers.TruncatedNormal(stddev=stddev)
@keras_nlp_export("keras_nlp.models.DebertaV3Backbone")
class DebertaV3Backbone(Backbone):
"""DeBERTa encoder network.
This network implements a bi-directional Transformer-based encoder as
described in
["DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing"](https://arxiv.org/abs/2111.09543).
It includes the embedding lookups and transformer layers, but does not
include the enhanced masked decoding head used during pretraining.
The default constructor gives a fully customizable, randomly initialized
DeBERTa encoder with any number of layers, heads, and embedding
dimensions. To load preset architectures and weights, use the `from_preset`
constructor.
Note: `DebertaV3Backbone` has a performance issue on TPUs, and we recommend
other models for TPU training and inference.
Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind. The underlying model is provided by a
third party and subject to a separate license, available
[here](https://github.com/microsoft/DeBERTa).
Args:
vocabulary_size: int. The size of the token vocabulary.
num_layers: int. The number of transformer layers.
num_heads: int. The number of attention heads for each transformer.
The hidden size must be divisible by the number of attention heads.
hidden_dim: int. The size of the transformer encoding layer.
intermediate_dim: int. The output dimension of the first Dense layer in
a two-layer feedforward network for each transformer.
dropout: float. Dropout probability for the DeBERTa model.
max_sequence_length: int. The maximum sequence length this encoder can
consume. The sequence length of the input must be less than
`max_sequence_length`.
bucket_size: int. The size of the relative position buckets. Generally
equal to `max_sequence_length // 2`.
dtype: string or `keras.mixed_precision.DTypePolicy`. The dtype to use
for model computations and weights. Note that some computations,
such as softmax and layer normalization, will always be done at
float32 precision regardless of dtype.
Example:
```python
input_data = {
"token_ids": np.ones(shape=(1, 12), dtype="int32"),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]]),
}
# Pretrained DeBERTa encoder.
model = keras_nlp.models.DebertaV3Backbone.from_preset(
"deberta_v3_base_en",
)
model(input_data)
# Randomly initialized DeBERTa encoder with custom config
model = keras_nlp.models.DebertaV3Backbone(
vocabulary_size=128100,
num_layers=12,
num_heads=6,
hidden_dim=384,
intermediate_dim=1536,
max_sequence_length=512,
bucket_size=256,
)
# Call the model on the input data.
model(input_data)
```
"""
def __init__(
self,
vocabulary_size,
num_layers,
num_heads,
hidden_dim,
intermediate_dim,
dropout=0.1,
max_sequence_length=512,
bucket_size=256,
dtype=None,
**kwargs,
):
# === Layers ===
self.token_embedding = ReversibleEmbedding(
input_dim=vocabulary_size,
output_dim=hidden_dim,
embeddings_initializer=deberta_kernel_initializer(),
dtype=dtype,
name="token_embedding",
)
self.embeddings_layer_norm = keras.layers.LayerNormalization(
epsilon=1e-7,
dtype=dtype,
name="embeddings_layer_norm",
)
self.embeddings_dropout = keras.layers.Dropout(
dropout,
dtype=dtype,
name="embeddings_dropout",
)
self.relative_embeddings = RelativeEmbedding(
hidden_dim=hidden_dim,
bucket_size=bucket_size,
layer_norm_epsilon=1e-7,
kernel_initializer=deberta_kernel_initializer(),
dtype=dtype,
name="rel_embedding",
)
self.transformer_layers = []
for i in range(num_layers):
layer = DisentangledAttentionEncoder(
num_heads=num_heads,
intermediate_dim=intermediate_dim,
max_position_embeddings=max_sequence_length,
bucket_size=bucket_size,
dropout=dropout,
activation=keras.activations.gelu,
layer_norm_epsilon=1e-7,
kernel_initializer=deberta_kernel_initializer(),
dtype=dtype,
name=f"disentangled_attention_encoder_layer_{i}",
)
self.transformer_layers.append(layer)
# === Functional Model ===
token_id_input = keras.Input(
shape=(None,), dtype="int32", name="token_ids"
)
padding_mask_input = keras.Input(
shape=(None,), dtype="int32", name="padding_mask"
)
x = self.token_embedding(token_id_input)
x = self.embeddings_layer_norm(x)
x = self.embeddings_dropout(x)
rel_embeddings = self.relative_embeddings(x)
for transformer_layer in self.transformer_layers:
x = transformer_layer(
x,
rel_embeddings=rel_embeddings,
padding_mask=padding_mask_input,
)
super().__init__(
inputs={
"token_ids": token_id_input,
"padding_mask": padding_mask_input,
},
outputs=x,
**kwargs,
)
# === Config ===
self.vocabulary_size = vocabulary_size
self.num_layers = num_layers
self.num_heads = num_heads
self.hidden_dim = hidden_dim
self.intermediate_dim = intermediate_dim
self.dropout = dropout
self.max_sequence_length = max_sequence_length
self.bucket_size = bucket_size
self.start_token_index = 0
def get_config(self):
config = super().get_config()
config.update(
{
"vocabulary_size": self.vocabulary_size,
"num_layers": self.num_layers,
"num_heads": self.num_heads,
"hidden_dim": self.hidden_dim,
"intermediate_dim": self.intermediate_dim,
"dropout": self.dropout,
"max_sequence_length": self.max_sequence_length,
"bucket_size": self.bucket_size,
}
)
return config
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
| keras-nlp/keras_nlp/models/deberta_v3/deberta_v3_backbone.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/deberta_v3/deberta_v3_backbone.py",
"repo_id": "keras-nlp",
"token_count": 3496
} | 121 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
from keras_nlp.backend import ops
from keras_nlp.models.electra.electra_backbone import ElectraBackbone
from keras_nlp.tests.test_case import TestCase
class ElectraBackboneTest(TestCase):
def setUp(self):
self.init_kwargs = {
"vocab_size": 10,
"num_layers": 2,
"num_heads": 2,
"hidden_dim": 2,
"embedding_dim": 2,
"intermediate_dim": 4,
"max_sequence_length": 5,
}
self.input_data = {
"token_ids": ops.ones((2, 5), dtype="int32"),
"segment_ids": ops.zeros((2, 5), dtype="int32"),
"padding_mask": ops.ones((2, 5), dtype="int32"),
}
def test_backbone_basics(self):
self.run_backbone_test(
cls=ElectraBackbone,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
expected_output_shape={
"sequence_output": (2, 5, 2),
"pooled_output": (2, 2),
},
)
@pytest.mark.large
def test_saved_model(self):
self.run_model_saving_test(
cls=ElectraBackbone,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
)
| keras-nlp/keras_nlp/models/electra/electra_backbone_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/electra/electra_backbone_test.py",
"repo_id": "keras-nlp",
"token_count": 826
} | 122 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pytest
from keras_nlp.models.f_net.f_net_tokenizer import FNetTokenizer
from keras_nlp.tests.test_case import TestCase
class FNetTokenizerTest(TestCase):
def setUp(self):
self.init_kwargs = {
# Generated using create_f_net_test_proto.py
"proto": os.path.join(
self.get_test_data_dir(), "f_net_test_vocab.spm"
)
}
self.input_data = ["the quick brown fox", "the earth is round"]
def test_tokenizer_basics(self):
self.run_preprocessing_layer_test(
cls=FNetTokenizer,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
expected_output=[[5, 10, 6, 8], [5, 7, 9, 11]],
)
def test_errors_missing_special_tokens(self):
with self.assertRaises(ValueError):
FNetTokenizer(
# Generated using create_no_special_token_proto.py
proto=os.path.join(
self.get_test_data_dir(), "no_special_token_vocab.spm"
)
)
@pytest.mark.large
def test_smallest_preset(self):
self.run_preset_test(
cls=FNetTokenizer,
preset="f_net_base_en",
input_data=["The quick brown fox."],
expected_output=[[97, 1467, 5187, 26, 2521, 16678]],
)
@pytest.mark.extra_large
def test_all_presets(self):
for preset in FNetTokenizer.presets:
self.run_preset_test(
cls=FNetTokenizer,
preset=preset,
input_data=self.input_data,
)
| keras-nlp/keras_nlp/models/f_net/f_net_tokenizer_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/f_net/f_net_tokenizer_test.py",
"repo_id": "keras-nlp",
"token_count": 1008
} | 123 |
# Copyright 2024 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_nlp.backend import keras
from keras_nlp.backend import ops
class RMSNormalization(keras.layers.Layer):
def __init__(self, epsilon=1e-6, **kwargs):
super().__init__(**kwargs)
self.epsilon = epsilon
def build(self, input_shape):
self.scale = self.add_weight(
name="scale",
trainable=True,
shape=(input_shape[-1],),
initializer="zeros",
)
self.built = True
def call(self, x):
# Always compute normalization in float32.
x = ops.cast(x, "float32")
scale = ops.cast(self.scale, "float32")
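        # RMS normalization: divide by the root mean square over the last
        # axis, then apply the learned (1 + scale) gain.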
var = ops.mean(ops.square(x), axis=-1, keepdims=True)
normed_inputs = x * ops.reciprocal(ops.sqrt(var + 1e-06))
normed_inputs = normed_inputs * (1 + scale)
return ops.cast(normed_inputs, self.compute_dtype)
| keras-nlp/keras_nlp/models/gemma/rms_normalization.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/gemma/rms_normalization.py",
"repo_id": "keras-nlp",
"token_count": 565
} | 124 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.layers.modeling.reversible_embedding import ReversibleEmbedding
from keras_nlp.models.backbone import Backbone
from keras_nlp.models.gpt_neo_x.gpt_neo_x_decoder import GPTNeoXDecoder
from keras_nlp.utils.keras_utils import gelu_approximate
def _gpt_neo_x_kernel_initializer(stddev=0.02):
return keras.initializers.RandomNormal(stddev=stddev)
@keras_nlp_export("keras_nlp.models.GPTNeoXBackbone")
class GPTNeoXBackbone(Backbone):
"""GPT-NeoX core network with hyperparameters.
This network implements a Transformer-based decoder network,
Generative Pretrained Transformer-Neo-X (GPTNeoX), as described in
["GPT-NeoX-20B: An Open-Source Autoregressive Language Model"](https://arxiv.org/abs/2204.06745).
It includes the embedding lookups and transformer layers.
The default constructor gives a fully customizable, randomly initialized
GPT-NeoX model with any number of layers, heads, and embedding
dimensions.
Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind. The underlying model is provided by a
third party and subject to a separate license, available
[here](https://github.com/EleutherAI/gpt-neox/).
Args:
vocabulary_size: int. The size of the token vocabulary.
num_layers: int. The number of transformer layers.
num_heads: int. The number of attention heads for each transformer.
The hidden size must be divisible by the number of attention heads.
hidden_dim: int. The size of the transformer encoding and pooler layers.
intermediate_dim: int. The output dimension of the first Dense layer in
a two-layer feedforward network for each transformer.
dropout: float. Dropout probability for the Transformer encoder.
layer_norm_epsilon: float. a value added to the denominator for
numerical stability.
rotary_max_wavelength: int. The maximum angular wavelength of the
sine/cosine curves, for rotary embeddings.
        rotary_percentage: float. The fraction of the query, key, value
            projection dimensions to which rotary embeddings are applied.
        max_sequence_length: int. The maximum sequence length that this encoder
            can consume. If `None`, `max_sequence_length` defaults to the input
            sequence length. This determines the variable shape for positional
            embeddings.
dtype: string or `keras.mixed_precision.DTypePolicy`. The dtype to use
for model computations and weights. Note that some computations,
such as softmax and layer normalization, will always be done at
float32 precision regardless of dtype.
"""
def __init__(
self,
vocabulary_size,
num_layers,
num_heads,
hidden_dim,
intermediate_dim,
dropout=0.0,
rotary_percentage=0.25,
rotary_max_wavelength=10000,
layer_norm_epsilon=1e-5,
max_sequence_length=512,
dtype=None,
**kwargs,
):
# === Layers ===
self.token_embedding = ReversibleEmbedding(
input_dim=vocabulary_size,
output_dim=hidden_dim,
embeddings_initializer=_gpt_neo_x_kernel_initializer(stddev=0.01),
dtype=dtype,
name="token_embedding",
)
self.embeddings_dropout = keras.layers.Dropout(
dropout,
dtype=dtype,
name="embeddings_dropout",
)
self.transformer_layers = []
for i in range(num_layers):
layer = GPTNeoXDecoder(
intermediate_dim=intermediate_dim,
num_heads=num_heads,
dropout=dropout,
max_sequence_length=max_sequence_length,
rotary_percentage=rotary_percentage,
rotary_max_wavelength=rotary_max_wavelength,
layer_norm_epsilon=layer_norm_epsilon,
activation=gelu_approximate,
kernel_initializer=_gpt_neo_x_kernel_initializer(stddev=0.02),
dtype=dtype,
name=f"transformer_layer_{i}",
)
self.transformer_layers.append(layer)
self.layer_norm = keras.layers.LayerNormalization(
axis=-1,
epsilon=layer_norm_epsilon,
dtype=dtype,
name="layer_norm",
)
# === Functional Model ===
token_id_input = keras.Input(
shape=(None,), dtype="int32", name="token_ids"
)
padding_mask_input = keras.Input(
shape=(None,), dtype="int32", name="padding_mask"
)
# Embed tokens.
x = self.token_embedding(token_id_input)
x = self.embeddings_dropout(x)
for transformer_layer in self.transformer_layers:
x = transformer_layer(x, decoder_padding_mask=padding_mask_input)
sequence_output = self.layer_norm(x)
super().__init__(
inputs={
"token_ids": token_id_input,
"padding_mask": padding_mask_input,
},
outputs=sequence_output,
**kwargs,
)
# === Config ===
self.vocabulary_size = vocabulary_size
self.num_layers = num_layers
self.num_heads = num_heads
self.hidden_dim = hidden_dim
self.intermediate_dim = intermediate_dim
self.dropout = dropout
self.rotary_percentage = rotary_percentage
self.rotary_max_wavelength = rotary_max_wavelength
self.max_sequence_length = max_sequence_length
self.layer_norm_epsilon = layer_norm_epsilon
def get_config(self):
config = super().get_config()
config.update(
{
"vocabulary_size": self.vocabulary_size,
"num_layers": self.num_layers,
"num_heads": self.num_heads,
"hidden_dim": self.hidden_dim,
"intermediate_dim": self.intermediate_dim,
"dropout": self.dropout,
"rotary_percentage": self.rotary_percentage,
"rotary_max_wavelength": self.rotary_max_wavelength,
"max_sequence_length": self.max_sequence_length,
"layer_norm_epsilon": self.layer_norm_epsilon,
}
)
return config
| keras-nlp/keras_nlp/models/gpt_neo_x/gpt_neo_x_backbone.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/gpt_neo_x/gpt_neo_x_backbone.py",
"repo_id": "keras-nlp",
"token_count": 3081
} | 125 |
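The constructor arguments above are enough to build a small, randomly initialized backbone. A minimal sketch with illustrative sizes (the input dictionary keys match the functional model defined in the file):

```python
import numpy as np
import keras_nlp

input_data = {
    "token_ids": np.ones(shape=(1, 12), dtype="int32"),
    "padding_mask": np.array([[1] * 10 + [0] * 2], dtype="int32"),
}

# Randomly initialized GPT-NeoX backbone with a small custom config.
model = keras_nlp.models.GPTNeoXBackbone(
    vocabulary_size=10,
    num_layers=2,
    num_heads=2,
    hidden_dim=32,
    intermediate_dim=64,
    max_sequence_length=128,
)
output = model(input_data)  # Shape: (1, 12, 32).
```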
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pytest
from keras_nlp.models.mistral.mistral_tokenizer import MistralTokenizer
from keras_nlp.tests.test_case import TestCase
class MistralTokenizerTest(TestCase):
def setUp(self):
self.init_kwargs = {
# Generated using create_mistral_test_proto.py
"proto": os.path.join(
self.get_test_data_dir(), "mistral_test_vocab.spm"
)
}
self.input_data = ["the quick brown fox", "the earth is round"]
def test_tokenizer_basics(self):
self.run_preprocessing_layer_test(
cls=MistralTokenizer,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
expected_output=[[3, 8, 4, 6], [3, 5, 7, 9]],
)
def test_errors_missing_special_tokens(self):
with self.assertRaises(ValueError):
MistralTokenizer(
# Generated using create_no_special_token_proto.py
proto=os.path.join(
self.get_test_data_dir(), "no_special_token_vocab.spm"
)
)
@pytest.mark.large
def test_smallest_preset(self):
self.run_preset_test(
cls=MistralTokenizer,
preset="mistral_7b_en",
input_data=["The quick brown fox."],
expected_output=[[415, 2936, 9060, 285, 1142, 28723]],
)
@pytest.mark.extra_large
def test_all_presets(self):
for preset in MistralTokenizer.presets:
self.run_preset_test(
cls=MistralTokenizer,
preset=preset,
input_data=self.input_data,
)
| keras-nlp/keras_nlp/models/mistral/mistral_tokenizer_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/mistral/mistral_tokenizer_test.py",
"repo_id": "keras-nlp",
"token_count": 1004
} | 126 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.layers.modeling.token_and_position_embedding import (
TokenAndPositionEmbedding,
)
from keras_nlp.layers.modeling.transformer_encoder import TransformerEncoder
from keras_nlp.models.backbone import Backbone
from keras_nlp.models.roberta.roberta_presets import backbone_presets
from keras_nlp.utils.python_utils import classproperty
def roberta_kernel_initializer(stddev=0.02):
return keras.initializers.TruncatedNormal(stddev=stddev)
@keras_nlp_export("keras_nlp.models.RobertaBackbone")
class RobertaBackbone(Backbone):
"""A RoBERTa encoder network.
This network implements a bi-directional Transformer-based encoder as
described in ["RoBERTa: A Robustly Optimized BERT Pretraining Approach"](https://arxiv.org/abs/1907.11692).
It includes the embedding lookups and transformer layers, but does not
include the masked language model head used during pretraining.
The default constructor gives a fully customizable, randomly initialized
RoBERTa encoder with any number of layers, heads, and embedding
dimensions. To load preset architectures and weights, use the `from_preset()`
constructor.
Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind. The underlying model is provided by a
third party and subject to a separate license, available
[here](https://github.com/facebookresearch/fairseq).
Args:
vocabulary_size: int. The size of the token vocabulary.
num_layers: int. The number of transformer layers.
num_heads: int. The number of attention heads for each transformer.
The hidden size must be divisible by the number of attention heads.
hidden_dim: int. The size of the transformer encoding layer.
intermediate_dim: int. The output dimension of the first Dense layer in
a two-layer feedforward network for each transformer.
dropout: float. Dropout probability for the Transformer encoder.
max_sequence_length: int. The maximum sequence length this encoder can
consume. The sequence length of the input must be less than
`max_sequence_length` default value. This determines the variable
shape for positional embeddings.
dtype: string or `keras.mixed_precision.DTypePolicy`. The dtype to use
for model computations and weights. Note that some computations,
such as softmax and layer normalization, will always be done at
float32 precision regardless of dtype.
Examples:
```python
input_data = {
"token_ids": np.ones(shape=(1, 12), dtype="int32"),
"padding_mask": np.array(
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0], shape=(1, 12)),
}
# Pretrained RoBERTa encoder
model = keras_nlp.models.RobertaBackbone.from_preset("roberta_base_en")
model(input_data)
# Randomly initialized RoBERTa model with custom config
model = keras_nlp.models.RobertaBackbone(
vocabulary_size=50265,
num_layers=4,
num_heads=4,
hidden_dim=256,
intermediate_dim=512,
max_sequence_length=128,
)
model(input_data)
```
"""
def __init__(
self,
vocabulary_size,
num_layers,
num_heads,
hidden_dim,
intermediate_dim,
dropout=0.1,
max_sequence_length=512,
dtype=None,
**kwargs,
):
# === Layers ===
self.embeddings = TokenAndPositionEmbedding(
vocabulary_size=vocabulary_size,
sequence_length=max_sequence_length,
embedding_dim=hidden_dim,
embeddings_initializer=roberta_kernel_initializer(),
dtype=dtype,
name="embeddings",
)
self.token_embedding = self.embeddings.token_embedding
self.embeddings_layer_norm = keras.layers.LayerNormalization(
axis=-1,
epsilon=1e-5, # Original paper uses this epsilon value
dtype=dtype,
name="embeddings_layer_norm",
)
self.embeddings_dropout = keras.layers.Dropout(
dropout,
dtype=dtype,
name="embeddings_dropout",
)
self.transformer_layers = []
for i in range(num_layers):
layer = TransformerEncoder(
num_heads=num_heads,
intermediate_dim=intermediate_dim,
activation="gelu",
dropout=dropout,
layer_norm_epsilon=1e-5,
kernel_initializer=roberta_kernel_initializer(),
dtype=dtype,
name=f"transformer_layer_{i}",
)
self.transformer_layers.append(layer)
# === Functional Model ===
token_id_input = keras.Input(
shape=(None,), dtype="int32", name="token_ids"
)
padding_mask_input = keras.Input(
shape=(None,), dtype="int32", name="padding_mask"
)
x = self.embeddings(token_id_input)
x = self.embeddings_layer_norm(x)
x = self.embeddings_dropout(x)
for transformer_layer in self.transformer_layers:
x = transformer_layer(x, padding_mask=padding_mask_input)
super().__init__(
inputs={
"token_ids": token_id_input,
"padding_mask": padding_mask_input,
},
outputs=x,
**kwargs,
)
# === Config ===
self.vocabulary_size = vocabulary_size
self.num_layers = num_layers
self.num_heads = num_heads
self.hidden_dim = hidden_dim
self.intermediate_dim = intermediate_dim
self.dropout = dropout
self.max_sequence_length = max_sequence_length
self.start_token_index = 0
def get_config(self):
config = super().get_config()
config.update(
{
"vocabulary_size": self.vocabulary_size,
"num_layers": self.num_layers,
"num_heads": self.num_heads,
"hidden_dim": self.hidden_dim,
"intermediate_dim": self.intermediate_dim,
"dropout": self.dropout,
"max_sequence_length": self.max_sequence_length,
}
)
return config
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
| keras-nlp/keras_nlp/models/roberta/roberta_backbone.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/roberta/roberta_backbone.py",
"repo_id": "keras-nlp",
"token_count": 3022
} | 127 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from absl import logging
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.layers.preprocessing.start_end_packer import StartEndPacker
from keras_nlp.models.preprocessor import Preprocessor
from keras_nlp.models.whisper.whisper_audio_feature_extractor import (
WhisperAudioFeatureExtractor,
)
from keras_nlp.models.whisper.whisper_presets import backbone_presets
from keras_nlp.models.whisper.whisper_tokenizer import WhisperTokenizer
from keras_nlp.utils.keras_utils import (
convert_inputs_to_list_of_tensor_segments,
)
from keras_nlp.utils.keras_utils import pack_x_y_sample_weight
from keras_nlp.utils.python_utils import classproperty
@keras_nlp_export("keras_nlp.models.WhisperPreprocessor")
class WhisperPreprocessor(Preprocessor):
"""A Whisper preprocessing layer which handles audio and text input.
    This preprocessing layer will do four things:
    1. Compute the log-mel spectrogram of the audio tensor inputs using
       `audio_feature_extractor`.
    2. Tokenize decoder inputs using the `tokenizer`.
    3. Add the appropriate special tokens - `"<|startoftranscript|>"`, task
       token, language token, `"<|endoftext|>"`, etc.
    4. Construct a dictionary with keys `"encoder_features"`,
       `"decoder_token_ids"`, `"decoder_padding_mask"` that can be passed
       directly to a Whisper model.
Args:
tokenizer: A `keras_nlp.models.WhisperTokenizer` instance.
audio_feature_extractor: A
`keras_nlp.models.WhisperAudioFeatureExtractor` instance or `None`.
If `None` a feature extractor with default parameters will be
created.
decoder_sequence_length: The length of the packed decoder inputs.
language: string, language token. Should only be passed if your
tokenizer is multilingual.
task: string, task name. One of `"transcribe"`, `"translate"`. Should
only be passed if your tokenizer is multilingual.
        no_timestamps: bool. If True, the `"<|notimestamps|>"` special token
            will be added to your input.
Call arguments:
x: A dictionary with `"encoder_audio"` and `"decoder_text"` as its keys.
`"encoder_audio"` should correspond to the input audio tensor.
`"decoder_text"` should be a tensor of single string sequences.
Inputs may be batched or unbatched. Raw python inputs will be
converted to tensors.
y: Any label data. Will be passed through unaltered.
sample_weight: Any label weight data. Will be passed through unaltered.
Examples:
Directly calling the layer on data.
```python
preprocessor = keras_nlp.models.WhisperPreprocessor.from_preset(
"whisper_tiny_en",
)
# Preprocess unbatched inputs.
input_data = {
"encoder_audio": tf.ones((200,)),
"decoder_text": "The quick brown fox jumped.",
}
preprocessor(input_data)
# Preprocess batched inputs.
input_data = {
"encoder_audio": tf.ones((2, 200)),
"decoder_text": ["The quick brown fox jumped.", "Call me Ishmael."],
}
preprocessor(input_data)
# Custom audio feature extractor and vocabulary.
audio_feature_extractor = keras_nlp.models.WhisperAudioFeatureExtractor(
num_mels=80,
num_fft_bins=400,
stride=100,
sampling_rate=100,
max_audio_length=5,
)
features = ["a quick fox.", "a fox quick."]
vocab = {"<|endoftext|>": 0, "a": 4, "Ġquick": 5, "Ġfox": 6}
merges = ["Ġ q", "u i", "c k", "ui ck", "Ġq uick"]
merges += ["Ġ f", "o x", "Ġf ox"]
special_tokens = {
"<|startoftranscript|>": 9,
"<|endoftext|>": 10,
"<|notimestamps|>": 11,
"<|transcribe|>": 12,
"<|translate|>": 13,
}
tokenizer = keras_nlp.models.WhisperTokenizer(
vocabulary=vocab,
merges=merges,
special_tokens=special_tokens,
)
preprocessor = keras_nlp.models.WhisperPreprocessor(
audio_feature_extractor=audio_feature_extractor,
tokenizer=tokenizer,
)
input_data = {
"encoder_audio": tf.ones((200,)),
"decoder_text": "The quick brown fox jumped.",
}
preprocessor(input_data)
```
Mapping with `tf.data.Dataset`.
```python
preprocessor = keras_nlp.models.WhisperPreprocessor.from_preset(
"whisper_tiny_en")
# Map labeled single sentences.
features = {
"encoder_audio": tf.ones((2, 200)),
"decoder_text": ["The quick brown fox jumped.", "Call me Ishmael."],
}
labels = tf.constant(["True", "False"])
ds = tf.data.Dataset.from_tensor_slices((features, labels))
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
# Map unlabeled single sentences.
features = {
"encoder_audio": tf.ones((2, 200)),
"decoder_text": ["The quick brown fox jumped.", "Call me Ishmael."],
}
ds = tf.data.Dataset.from_tensor_slices(features)
ds = ds.map(preprocessor, num_parallel_calls=tf.data.AUTOTUNE)
```
"""
def __init__(
self,
tokenizer,
audio_feature_extractor=None,
decoder_sequence_length=448,
language=None,
task=None,
no_timestamps=True,
**kwargs,
):
super().__init__(**kwargs)
if audio_feature_extractor is None:
audio_feature_extractor = WhisperAudioFeatureExtractor()
self.audio_feature_extractor = audio_feature_extractor
self.tokenizer = tokenizer
self.decoder_packer = None
self.decoder_sequence_length = decoder_sequence_length
self.language = language
self.task = task
self.no_timestamps = no_timestamps
def build(self, input_shape):
# Defer packer creation to `build()` so that we can be sure tokenizer
# assets have loaded when restoring a saved model.
# Create list of tokens to be prepended to decoder inputs.
bos_tokens = [self.tokenizer.bos_token_id]
if self.tokenizer.language_tokens is not None:
if (
self.language is None
or self.language not in self.tokenizer.language_tokens
):
raise ValueError(
"You must pass a non-None value for `language` when using "
"a multilingual tokenizer. The value must be one of "
f'{",".join(self.tokenizer.language_tokens.keys())}. '
f"Received: language={self.language}."
)
if self.task is None or self.task not in [
"transcribe",
"translate",
]:
raise ValueError(
"You must pass a non-None value for `task` when using "
"a multilingual tokenizer. The value must be one of "
'`"transcribe"`, `"translate"`. '
f"Received: task={self.task}."
)
bos_tokens += [self.tokenizer.language_tokens[self.language]]
if self.task == "transcribe":
bos_tokens += [self.tokenizer.special_tokens["<|transcribe|>"]]
elif self.task == "translate":
bos_tokens += [self.tokenizer.special_tokens["<|translate|>"]]
else:
if self.language is not None:
logging.info(
"`tokenizer` is monolingual, and `language` has a "
"non-`None` value. Setting `language` to `None`."
)
self.language = None
if self.task is not None:
logging.info(
"`tokenizer` is monolingual, and `task` has a "
"non-`None` value. Setting `task` to `None`."
)
self.task = None
if self.no_timestamps:
bos_tokens += [self.tokenizer.no_timestamps_token_id]
# TODO: Use `MultiSegmentPacker` instead of `StartEndPacker` once we
# want to move to multi-segment packing and have improved
# `MultiSegmentPacker`'s performance.
self.decoder_packer = StartEndPacker(
start_value=bos_tokens,
end_value=self.tokenizer.eos_token_id,
pad_value=self.tokenizer.pad_token_id,
sequence_length=self.decoder_sequence_length,
return_padding_mask=True,
)
def call(self, x, y=None, sample_weight=None, decoder_sequence_length=None):
if not (
isinstance(x, dict)
and ["encoder_audio", "decoder_text"] == list(x.keys())
):
raise ValueError(
'`x` must be a dictionary, containing the keys `"encoder_audio"`'
f' and `"decoder_text"`. Received x={x}.'
)
encoder_audio = x["encoder_audio"]
decoder_text = x["decoder_text"]
encoder_audio = convert_inputs_to_list_of_tensor_segments(encoder_audio)
decoder_text = convert_inputs_to_list_of_tensor_segments(decoder_text)
if len(encoder_audio) > 1 or len(decoder_text) > 1:
raise ValueError(
'`WhisperPreprocessor` requires both `"encoder_audio"` and '
f'`"decoder_text"` to contain only one segment, but received '
f"{len(encoder_audio)} and {len(decoder_text)}, respectively."
)
encoder_features = self.audio_feature_extractor(encoder_audio[0])
decoder_sequence_length = (
decoder_sequence_length or self.decoder_sequence_length
)
decoder_inputs = self.tokenizer(decoder_text[0])
decoder_token_ids, decoder_padding_mask = self.decoder_packer(
decoder_inputs,
sequence_length=decoder_sequence_length,
)
x = {
"encoder_features": encoder_features,
"decoder_token_ids": decoder_token_ids,
"decoder_padding_mask": decoder_padding_mask,
}
return pack_x_y_sample_weight(x, y, sample_weight)
def get_config(self):
config = super().get_config()
config.update(
{
"audio_feature_extractor": keras.layers.serialize(
self.audio_feature_extractor
),
"decoder_sequence_length": self.decoder_sequence_length,
"language": self.language,
"task": self.task,
"no_timestamps": self.no_timestamps,
}
)
return config
@classmethod
def from_config(cls, config):
if "tokenizer" in config and isinstance(config["tokenizer"], dict):
config["tokenizer"] = keras.layers.deserialize(config["tokenizer"])
if "audio_feature_extractor" in config and isinstance(
config["audio_feature_extractor"], dict
):
config["audio_feature_extractor"] = keras.layers.deserialize(
config["audio_feature_extractor"]
)
return cls(**config)
@property
def decoder_sequence_length(self):
"""The padded length of decoder input sequences."""
return self._decoder_sequence_length
@decoder_sequence_length.setter
def decoder_sequence_length(self, value):
self._decoder_sequence_length = value
if self.decoder_packer is not None:
self.decoder_packer.sequence_length = value
@property
def sequence_length(self):
"""Alias for `decoder_sequence_length`."""
return self.decoder_sequence_length
@sequence_length.setter
def sequence_length(self, value):
self.decoder_sequence_length = value
@classproperty
def audio_feature_extractor_cls(cls):
return WhisperAudioFeatureExtractor
@classproperty
def tokenizer_cls(cls):
return WhisperTokenizer
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
| keras-nlp/keras_nlp/models/whisper/whisper_preprocessor.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/whisper/whisper_preprocessor.py",
"repo_id": "keras-nlp",
"token_count": 5597
} | 128 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import ops
from keras_nlp.backend import random
from keras_nlp.samplers.sampler import Sampler
@keras_nlp_export("keras_nlp.samplers.RandomSampler")
class RandomSampler(Sampler):
"""Random Sampler class.
This sampler implements random sampling. Briefly, random sampler randomly
selects a token from the entire distribution of the tokens, with selection
chance determined by the probability of each token.
Args:
seed: int. The random seed. Defaults to `None`.
Call arguments:
{{call_args}}
Examples:
```python
causal_lm = keras_nlp.models.GPT2CausalLM.from_preset("gpt2_base_en")
# Pass by name to compile.
causal_lm.compile(sampler="random")
causal_lm.generate(["Keras is a"])
# Pass by object to compile.
sampler = keras_nlp.samplers.RandomSampler(temperature=0.7)
causal_lm.compile(sampler=sampler)
causal_lm.generate(["Keras is a"])
```
"""
def __init__(
self,
seed=None,
**kwargs,
):
super().__init__(**kwargs)
self.seed = seed
self.seed_generator = random.SeedGenerator(seed)
def get_next_token(self, probabilities):
# Sample the next token from the probability distribution.
next_token_id = random.categorical(
ops.log(probabilities),
1,
seed=self.seed_generator,
dtype="int32",
)
return ops.squeeze(next_token_id, axis=-1)
def get_config(self):
config = super().get_config()
config.update(
{
"seed": self.seed,
}
)
return config
| keras-nlp/keras_nlp/samplers/random_sampler.py/0 | {
"file_path": "keras-nlp/keras_nlp/samplers/random_sampler.py",
"repo_id": "keras-nlp",
"token_count": 911
} | 129 |
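At each decoding step the sampler draws the next token in proportion to its predicted probability rather than taking an argmax. A toy NumPy illustration of that idea (not library code):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
probabilities = np.array([0.1, 0.6, 0.3])  # Toy next-token distribution.

# Draw a token id with chance proportional to its probability; over many
# draws, token 1 is picked roughly 60% of the time.
next_token = rng.choice(len(probabilities), p=probabilities)
```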
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import io
import tensorflow as tf
try:
import sentencepiece as spm
except ImportError:
spm = None
from keras_nlp.api_export import keras_nlp_export
@keras_nlp_export("keras_nlp.tokenizers.compute_sentence_piece_proto")
def compute_sentence_piece_proto(
data,
vocabulary_size,
model_type="unigram",
proto_output_file=None,
lowercase=False,
):
r"""A utility to train a SentencePiece vocabulary.
Trains a SentencePiece vocabulary from an input dataset or a list of
filenames.
If `data` is a list of filenames, the file format is required to be plain
text files, and the text will be read in line by line during training.
Args:
data: A `tf.data.Dataset`, or a list of filenames.
vocabulary_size: int. The maximum size of a vocabulary to be trained.
model_type: str. The model algorithm must be one of
`"unigram"`, `"bpe"`, `"word"` or `"char"`. Defaults to `"unigram"`.
proto_output_file: str. If provided it will be used
as model_file which is passed to model_writer.
If `None`, the model_file will be `io.BytesIO` object.
Defaults to `None`.
lowercase: bool. If True, the input text will be
lowercased before tokenization. Defaults to `False`.
Returns:
        A `bytes` object with a serialized SentencePiece proto, or
        `None` if `proto_output_file` is provided.
Examples:
Basic Usage (from Dataset).
>>> inputs = tf.data.Dataset.from_tensor_slices(["Drifting Along"])
>>> proto = keras_nlp.tokenizers.compute_sentence_piece_proto(inputs, vocabulary_size=15)
>>> tokenizer = keras_nlp.tokenizers.SentencePieceTokenizer(proto=proto)
>>> outputs = inputs.map(tokenizer)
>>> for output in outputs:
... print(output)
tf.Tensor([ 4 8 12 5 9 14 5 6 13 4 7 10 11 6 13],
shape=(15,), dtype=int32)
Basic Usage (with files).
``` python
with open("test.txt", "w+") as f: f.write("Drifting Along\n")
inputs = ["test.txt"]
proto = keras_nlp.tokenizers.compute_sentence_piece_proto(
inputs, vocabulary_size=15, proto_output_file="model.spm")
tokenizer = keras_nlp.tokenizers.SentencePieceTokenizer(proto="model.spm")
ds = tf.data.Dataset.from_tensor_slices(["the quick brown fox."])
ds = ds.map(tokenizer)
```
Usage with lowercase
>>> inputs = tf.data.Dataset.from_tensor_slices(["Drifting Along"])
>>> proto = keras_nlp.tokenizers.compute_sentence_piece_proto(
... inputs, vocabulary_size=15, lowercase=True)
>>> tokenizer = keras_nlp.tokenizers.SentencePieceTokenizer(proto=proto)
>>> outputs = inputs.map(tokenizer)
>>> for output in outputs:
... print(output)
tf.Tensor([ 4 8 12 5 9 14 5 6 13 4 7 10 11 6 13],
shape=(15,), dtype=int32)
"""
if spm is None:
raise ImportError(
f"{compute_sentence_piece_proto.__name__} requires the "
"`sentencepiece` package. Please install it with "
"`pip install sentencepiece`."
)
if not isinstance(data, (list, tuple, tf.data.Dataset)):
raise ValueError(
"The `data` argument must be either `tf.data.Dataset` or `tuple` or `list`. "
f"Received: type(data)={type(data)}."
)
if model_type not in ["unigram", "bpe", "word", "char"]:
raise ValueError(
"The `model_type` argument must be one of `unigram`, `bpe`, `word`"
f"or `char`. Received: model_type={model_type}."
)
model_writer = (
open(proto_output_file, "wb") if proto_output_file else io.BytesIO()
)
is_dataset = isinstance(data, tf.data.Dataset)
if is_dataset:
spm.SentencePieceTrainer.train(
sentence_iterator=data.as_numpy_iterator(),
model_writer=model_writer,
vocab_size=vocabulary_size,
model_type=model_type,
normalization_rule_name="nmt_nfkc_cf" if lowercase else "nmt_nfkc",
pad_id=0,
unk_id=1,
bos_id=2,
eos_id=3,
)
else:
spm.SentencePieceTrainer.train(
input=data,
model_writer=model_writer,
vocab_size=vocabulary_size,
model_type=model_type,
normalization_rule_name="nmt_nfkc_cf" if lowercase else "nmt_nfkc",
pad_id=0,
unk_id=1,
bos_id=2,
eos_id=3,
)
if proto_output_file:
model_writer.close()
else:
return model_writer.getvalue()
| keras-nlp/keras_nlp/tokenizers/sentence_piece_tokenizer_trainer.py/0 | {
"file_path": "keras-nlp/keras_nlp/tokenizers/sentence_piece_tokenizer_trainer.py",
"repo_id": "keras-nlp",
"token_count": 2231
} | 130 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
import pytest
from absl.testing import parameterized
from keras_nlp.models.albert.albert_classifier import AlbertClassifier
from keras_nlp.models.backbone import Backbone
from keras_nlp.models.bert.bert_classifier import BertClassifier
from keras_nlp.models.roberta.roberta_classifier import RobertaClassifier
from keras_nlp.models.task import Task
from keras_nlp.tests.test_case import TestCase
from keras_nlp.utils.preset_utils import check_preset_class
from keras_nlp.utils.preset_utils import load_from_preset
from keras_nlp.utils.preset_utils import save_to_preset
class PresetUtilsTest(TestCase):
@parameterized.parameters(
(AlbertClassifier, "albert_base_en_uncased", "sentencepiece"),
(RobertaClassifier, "roberta_base_en", "bytepair"),
(BertClassifier, "bert_tiny_en_uncased", "wordpiece"),
)
@pytest.mark.keras_3_only
@pytest.mark.large
def test_preset_saving(self, cls, preset_name, tokenizer_type):
save_dir = self.get_temp_dir()
model = cls.from_preset(preset_name, num_classes=2)
save_to_preset(model, save_dir)
if tokenizer_type == "bytepair":
vocab_filename = "assets/tokenizer/vocabulary.json"
expected_assets = [
"assets/tokenizer/vocabulary.json",
"assets/tokenizer/merges.txt",
]
elif tokenizer_type == "sentencepiece":
vocab_filename = "assets/tokenizer/vocabulary.spm"
expected_assets = ["assets/tokenizer/vocabulary.spm"]
else:
vocab_filename = "assets/tokenizer/vocabulary.txt"
expected_assets = ["assets/tokenizer/vocabulary.txt"]
# Check existence of files
self.assertTrue(os.path.exists(os.path.join(save_dir, vocab_filename)))
self.assertTrue(os.path.exists(os.path.join(save_dir, "config.json")))
self.assertTrue(
os.path.exists(os.path.join(save_dir, "model.weights.h5"))
)
self.assertTrue(os.path.exists(os.path.join(save_dir, "metadata.json")))
# Check the model config (`config.json`)
config_json = open(os.path.join(save_dir, "config.json"), "r").read()
self.assertTrue(
"build_config" not in config_json
) # Test on raw json to include nested keys
self.assertTrue(
"compile_config" not in config_json
) # Test on raw json to include nested keys
config = json.loads(config_json)
self.assertEqual(set(config["assets"]), set(expected_assets))
self.assertEqual(config["weights"], "model.weights.h5")
# Try loading the model from preset directory
self.assertEqual(cls, check_preset_class(save_dir, cls))
self.assertEqual(cls, check_preset_class(save_dir, Task))
with self.assertRaises(ValueError):
# Preset is a subclass of Task, not Backbone.
check_preset_class(save_dir, Backbone)
# Try loading the model from preset directory
restored_model = load_from_preset(save_dir)
train_data = (
["the quick brown fox.", "the slow brown fox."], # Features.
)
model_input_data = model.preprocessor(*train_data)
restored_model_input_data = restored_model.preprocessor(*train_data)
# Check that saved vocab is equal to the original preset vocab
self.assertAllClose(model_input_data, restored_model_input_data)
# Check model outputs
self.assertAllEqual(
model(model_input_data), restored_model(restored_model_input_data)
)
def test_preset_errors(self):
with self.assertRaisesRegex(ValueError, "must be a string"):
AlbertClassifier.from_preset(AlbertClassifier)
with self.assertRaisesRegex(ValueError, "Unknown preset identifier"):
AlbertClassifier.from_preset("snaggle://bort/bort/bort")
| keras-nlp/keras_nlp/utils/preset_utils_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/utils/preset_utils_test.py",
"repo_id": "keras-nlp",
"token_count": 1805
} | 131 |
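Outside of the test harness, the same round trip can be condensed to a few calls. A sketch with illustrative paths (the preset name is one of the parameterized cases above):

```python
from keras_nlp.models.bert.bert_classifier import BertClassifier
from keras_nlp.utils.preset_utils import load_from_preset
from keras_nlp.utils.preset_utils import save_to_preset

# Save a task model to a local preset directory, then restore it.
model = BertClassifier.from_preset("bert_tiny_en_uncased", num_classes=2)
save_to_preset(model, "./my_preset")  # Writes config.json, weights, assets.
restored_model = load_from_preset("./my_preset")
```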
#!/bin/bash -e
base_dir=$(dirname $(dirname $0))
targets="${base_dir}/*.py ${base_dir}/examples/ ${base_dir}/keras_nlp/ ${base_dir}/tools/"
isort --sp "${base_dir}/pyproject.toml" ${targets}
black --config "${base_dir}/pyproject.toml" ${targets}
for i in $(find ${targets} -name '*.py'); do
if ! grep -q Copyright $i; then
echo $i
cat shell/copyright.txt $i >$i.new && mv $i.new $i
fi
done
flake8 --config "${base_dir}/setup.cfg" ${targets}
| keras-nlp/shell/format.sh/0 | {
"file_path": "keras-nlp/shell/format.sh",
"repo_id": "keras-nlp",
"token_count": 203
} | 132 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
os.environ["KERAS_BACKEND"] = "torch"
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"
import huggingface_hub # noqa: E402
import numpy as np # noqa: E402
import torch # noqa: E402
import transformers # noqa: E402
from absl import app # noqa: E402
from absl import flags # noqa: E402
import keras_nlp # noqa: E402
from keras_nlp.models import BloomBackbone # noqa: E402
from keras_nlp.models import BloomTokenizer # noqa: E402
FLAGS = flags.FLAGS
PRESET_MAP = {
"bloom_560m_multi": "bigscience/bloom-560m",
"bloom_1.1b_multi": "bigscience/bloom-1b1",
"bloom_1.7b_multi": "bigscience/bloom-1b7",
"bloom_3b_multi": "bigscience/bloom-3b",
"bloom_7b_multi": "bigscience/bloom-7b1",
"bloom_176b_multi": "bigscience/bloom",
}
EXTRACT_DIR = "./model"
flags.DEFINE_string(
"preset", None, f'Must be one of {",".join(PRESET_MAP.keys())}'
)
flags.mark_flag_as_required("preset")
def download_hf_model(hf_model_name):
hf_model_dir = huggingface_hub.snapshot_download(
repo_id=hf_model_name,
allow_patterns=["*.json", "*.bin"],
ignore_patterns=["onnx/*"],
local_dir=EXTRACT_DIR,
)
return hf_model_dir
def convert_model(hf_model):
# get huggingface model configuration.
hf_config = hf_model.config.to_dict()
kwargs = {}
kwargs["vocabulary_size"] = hf_config["vocab_size"]
kwargs["num_layers"] = hf_config["n_layer"]
kwargs["num_heads"] = hf_config["n_head"]
kwargs["hidden_dim"] = hf_config["hidden_size"]
kwargs["intermediate_dim"] = hf_config["hidden_size"] * 4
kwargs["dropout"] = hf_config["hidden_dropout"]
kwargs["layer_norm_epsilon"] = hf_config["layer_norm_epsilon"]
return BloomBackbone(**kwargs)
def convert_tokenizer(hf_model_dir):
tokenizer_file_path = os.path.join(hf_model_dir, "tokenizer.json")
with open(tokenizer_file_path) as tokenizer_file:
hf_tokenizer = json.load(tokenizer_file)
vocab = hf_tokenizer["model"]["vocab"]
merges = hf_tokenizer["model"]["merges"]
return BloomTokenizer(vocabulary=vocab, merges=merges)
def convert_weights(keras_model, hf_model):
hidden_dim = keras_model.hidden_dim
num_heads = keras_model.num_heads
head_dim = hidden_dim // num_heads
num_layers = keras_model.num_layers
# get huggingface model weights.
hf_wts = hf_model.state_dict()
# assign huggingface weights to the keras model.
# Embedding layer.
keras_model.get_layer("token_embedding").embeddings.assign(
hf_wts["word_embeddings.weight"]
)
# LayerNorm.
keras_model.get_layer("token_embedding_layernorm").gamma.assign(
hf_wts["word_embeddings_layernorm.weight"]
)
keras_model.get_layer("token_embedding_layernorm").beta.assign(
hf_wts["word_embeddings_layernorm.bias"]
)
keras_model.get_layer("final_layernorm").gamma.assign(hf_wts["ln_f.weight"])
keras_model.get_layer("final_layernorm").beta.assign(hf_wts["ln_f.bias"])
# Decoder layers.
for i in range(num_layers):
decoder_layer = keras_model.get_layer(f"transformer_layer_{i}")
        # LayerNorm.
decoder_layer._pre_attention_layernorm.gamma.assign(
hf_wts[f"h.{i}.input_layernorm.weight"]
)
decoder_layer._pre_attention_layernorm.beta.assign(
hf_wts[f"h.{i}.input_layernorm.bias"]
)
decoder_layer._post_attention_layernorm.gamma.assign(
hf_wts[f"h.{i}.post_attention_layernorm.weight"]
)
decoder_layer._post_attention_layernorm.beta.assign(
hf_wts[f"h.{i}.post_attention_layernorm.bias"]
)
# Attention layer.
attention_layer = decoder_layer._self_attention_layer
        fused_qkv_kernel = hf_wts[
            f"h.{i}.self_attention.query_key_value.weight"
        ].T
        fused_qkv_kernel = fused_qkv_kernel.view(
            hidden_dim, num_heads, 3, head_dim
        )
        query_kernel = fused_qkv_kernel[..., 0, :]
        key_kernel = fused_qkv_kernel[..., 1, :]
        value_kernel = fused_qkv_kernel[..., 2, :]
        fused_qkv_bias = hf_wts[f"h.{i}.self_attention.query_key_value.bias"]
        fused_qkv_bias = fused_qkv_bias.view(num_heads, 3, head_dim)
        query_bias = fused_qkv_bias[:, 0, :]
        key_bias = fused_qkv_bias[:, 1, :]
        value_bias = fused_qkv_bias[:, 2, :]
        attention_layer._query_dense.kernel.assign(query_kernel)
        attention_layer._query_dense.bias.assign(query_bias)
        attention_layer._key_dense.kernel.assign(key_kernel)
        attention_layer._key_dense.bias.assign(key_bias)
        attention_layer._value_dense.kernel.assign(value_kernel)
        attention_layer._value_dense.bias.assign(value_bias)
attention_layer._output_dense.kernel.assign(
hf_wts[f"h.{i}.self_attention.dense.weight"].T
)
attention_layer._output_dense.bias.assign(
hf_wts[f"h.{i}.self_attention.dense.bias"]
)
# mlp.
decoder_layer._mlp_intermediate_dense.kernel.assign(
hf_wts[f"h.{i}.mlp.dense_h_to_4h.weight"].T
)
decoder_layer._mlp_intermediate_dense.bias.assign(
hf_wts[f"h.{i}.mlp.dense_h_to_4h.bias"]
)
decoder_layer._mlp_output_dense.kernel.assign(
hf_wts[f"h.{i}.mlp.dense_4h_to_h.weight"].T
)
decoder_layer._mlp_output_dense.bias.assign(
hf_wts[f"h.{i}.mlp.dense_4h_to_h.bias"]
)
def validate_output(
hf_model,
keras_model,
hf_tokenizer,
keras_tokenizer,
):
input_str = ["the quick brown fox ran, galloped and jumped."]
# KerasNLP
token_ids = torch.tensor(keras_tokenizer(input_str))
padding_mask = token_ids != 3
keras_model_input = {
"token_ids": token_ids,
"padding_mask": padding_mask,
}
keras_model_outputs = keras_model.predict(keras_model_input)
hf_model_input = hf_tokenizer(input_str, return_tensors="pt")
hf_model_outputs = hf_model(**hf_model_input).last_hidden_state
hf_model_outputs = hf_model_outputs.detach().numpy()
# Comparing the outputs.
print("🔶 KerasNLP output:", keras_model_outputs[0, 0, :10])
print("🔶 HF output:", hf_model_outputs[0, 0, :10])
print("🔶 Difference:", np.mean(keras_model_outputs - hf_model_outputs))
def main(_):
preset = FLAGS.preset
assert (
preset in PRESET_MAP.keys()
), f'Invalid preset {preset}. Must be one of {",".join(PRESET_MAP.keys())}'
print(f"✅ Coverting {preset}")
hf_model_name = PRESET_MAP[preset]
hf_model_dir = download_hf_model(hf_model_name)
print("✅ Huggingface model downloaded from hub")
hf_model = transformers.BloomModel.from_pretrained(hf_model_dir)
hf_tokenizer = transformers.BloomTokenizerFast.from_pretrained(hf_model_dir)
print("✅ Huggingface model loaded")
keras_model = convert_model(hf_model)
keras_tokenizer = convert_tokenizer(hf_model_dir)
print("✅ Keras model loaded")
convert_weights(keras_model, hf_model)
print("✅ Weights converted")
validate_output(
hf_model,
keras_model,
hf_tokenizer,
keras_tokenizer,
)
print("✅ Numerics validated")
keras_nlp.src.utils.preset_utils.save_to_preset(keras_model, preset)
keras_nlp.src.utils.preset_utils.save_to_preset(
keras_tokenizer, preset, config_filename="tokenizer.json"
)
print("✅ Preset saved")
if __name__ == "__main__":
app.run(main)
| keras-nlp/tools/checkpoint_conversion/convert_bloom_checkpoints.py/0 | {
"file_path": "keras-nlp/tools/checkpoint_conversion/convert_bloom_checkpoints.py",
"repo_id": "keras-nlp",
"token_count": 3734
} | 133 |
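The trickiest step in `convert_weights` above is splitting the fused query/key/value matrix into per-head pieces. A toy NumPy sketch of that reshape-and-slice idea (dimensions are illustrative; the real script operates on torch tensors):

```python
import numpy as np

hidden_dim, num_heads = 8, 2
head_dim = hidden_dim // num_heads

# Hugging Face stores QKV as one fused (3 * hidden_dim, hidden_dim) matrix.
fused = np.random.randn(3 * hidden_dim, hidden_dim)

# Transpose, reinterpret as (hidden_dim, num_heads, 3, head_dim), and slice.
fused = fused.T.reshape(hidden_dim, num_heads, 3, head_dim)
query_kernel = fused[..., 0, :]  # (hidden_dim, num_heads, head_dim)
key_kernel = fused[..., 1, :]
value_kernel = fused[..., 2, :]
```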
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Small utility script to count parameters in our preset checkpoints.
Usage:
python tools/count_preset_params.py
python tools/count_preset_params.py --model BertBackbone
python tools/count_preset_params.py --preset bert_base_multi
"""
import inspect
from absl import app
from absl import flags
from keras.utils.layer_utils import count_params
from tensorflow import keras
import keras_nlp
FLAGS = flags.FLAGS
flags.DEFINE_string("model", None, "The name of a model, e.g. BertBackbone.")
flags.DEFINE_string(
"preset", None, "The name of a preset, e.g. bert_base_multi."
)
def main(_):
for name, symbol in keras_nlp.models.__dict__.items():
if inspect.isclass(symbol) and issubclass(symbol, keras.Model):
if FLAGS.model and name != FLAGS.model:
continue
if not hasattr(symbol, "from_preset"):
continue
for preset in symbol.presets:
if FLAGS.preset and preset != FLAGS.preset:
continue
model = symbol.from_preset(preset)
params = count_params(model.weights)
print(f"{name} {preset} {params}")
if __name__ == "__main__":
app.run(main)
| keras-nlp/tools/count_preset_params.py/0 | {
"file_path": "keras-nlp/tools/count_preset_params.py",
"repo_id": "keras-nlp",
"token_count": 672
} | 134 |
# -*- coding: utf-8 -*-
"""Utilities for preprocessing sequence data.
"""
import json
import random
import numpy as np
def pad_sequences(sequences, maxlen=None, dtype='int32',
padding='pre', truncating='pre', value=0.):
"""Pads sequences to the same length.
This function transforms a list of
`num_samples` sequences (lists of integers)
into a 2D Numpy array of shape `(num_samples, num_timesteps)`.
`num_timesteps` is either the `maxlen` argument if provided,
or the length of the longest sequence otherwise.
    Sequences that are shorter than `num_timesteps`
    are padded with `value` either at the beginning or at the end,
    depending on the `padding` argument.
Sequences longer than `num_timesteps` are truncated
so that they fit the desired length.
The position where padding or truncation happens is determined by
the arguments `padding` and `truncating`, respectively.
Pre-padding is the default.
# Arguments
sequences: List of lists, where each element is a sequence.
maxlen: Int, maximum length of all sequences.
dtype: Type of the output sequences.
To pad sequences with variable length strings, you can use `object`.
padding: String, 'pre' or 'post':
pad either before or after each sequence.
truncating: String, 'pre' or 'post':
remove values from sequences larger than
`maxlen`, either at the beginning or at the end of the sequences.
value: Float or String, padding value.
# Returns
x: Numpy array with shape `(len(sequences), maxlen)`
# Raises
ValueError: In case of invalid values for `truncating` or `padding`,
or in case of invalid shape for a `sequences` entry.
"""
if not hasattr(sequences, '__len__'):
raise ValueError('`sequences` must be iterable.')
num_samples = len(sequences)
lengths = []
sample_shape = ()
flag = True
# take the sample shape from the first non empty sequence
# checking for consistency in the main loop below.
for x in sequences:
try:
lengths.append(len(x))
if flag and len(x):
sample_shape = np.asarray(x).shape[1:]
flag = False
except TypeError:
raise ValueError('`sequences` must be a list of iterables. '
'Found non-iterable: ' + str(x))
if maxlen is None:
maxlen = np.max(lengths)
is_dtype_str = np.issubdtype(dtype, np.str_) or np.issubdtype(dtype, np.unicode_)
if isinstance(value, str) and dtype != object and not is_dtype_str:
raise ValueError("`dtype` {} is not compatible with `value`'s type: {}\n"
"You should set `dtype=object` for variable length strings."
.format(dtype, type(value)))
x = np.full((num_samples, maxlen) + sample_shape, value, dtype=dtype)
for idx, s in enumerate(sequences):
if not len(s):
continue # empty list/array was found
if truncating == 'pre':
trunc = s[-maxlen:]
elif truncating == 'post':
trunc = s[:maxlen]
else:
raise ValueError('Truncating type "%s" '
'not understood' % truncating)
# check `trunc` has expected shape
trunc = np.asarray(trunc, dtype=dtype)
if trunc.shape[1:] != sample_shape:
raise ValueError('Shape of sample %s of sequence at position %s '
'is different from expected shape %s' %
(trunc.shape[1:], idx, sample_shape))
if padding == 'post':
x[idx, :len(trunc)] = trunc
elif padding == 'pre':
x[idx, -len(trunc):] = trunc
else:
raise ValueError('Padding type "%s" not understood' % padding)
return x
def make_sampling_table(size, sampling_factor=1e-5):
"""Generates a word rank-based probabilistic sampling table.
Used for generating the `sampling_table` argument for `skipgrams`.
`sampling_table[i]` is the probability of sampling
the word i-th most common word in a dataset
(more common words should be sampled less frequently, for balance).
The sampling probabilities are generated according
to the sampling distribution used in word2vec:
```
p(word) = (min(1, sqrt(word_frequency / sampling_factor) /
(word_frequency / sampling_factor)))
```
We assume that the word frequencies follow Zipf's law (s=1) to derive
a numerical approximation of frequency(rank):
`frequency(rank) ~ 1/(rank * (log(rank) + gamma) + 1/2 - 1/(12*rank))`
where `gamma` is the Euler-Mascheroni constant.
# Arguments
size: Int, number of possible words to sample.
sampling_factor: The sampling factor in the word2vec formula.
# Returns
A 1D Numpy array of length `size` where the ith entry
is the probability that a word of rank i should be sampled.
"""
gamma = 0.577
rank = np.arange(size)
rank[0] = 1
inv_fq = rank * (np.log(rank) + gamma) + 0.5 - 1. / (12. * rank)
f = sampling_factor * inv_fq
return np.minimum(1., f / np.sqrt(f))
def skipgrams(sequence, vocabulary_size,
window_size=4, negative_samples=1., shuffle=True,
categorical=False, sampling_table=None, seed=None):
"""Generates skipgram word pairs.
This function transforms a sequence of word indexes (list of integers)
into tuples of words of the form:
- (word, word in the same window), with label 1 (positive samples).
- (word, random word from the vocabulary), with label 0 (negative samples).
Read more about Skipgram in this gnomic paper by Mikolov et al.:
[Efficient Estimation of Word Representations in
Vector Space](http://arxiv.org/pdf/1301.3781v3.pdf)
# Arguments
sequence: A word sequence (sentence), encoded as a list
of word indices (integers). If using a `sampling_table`,
word indices are expected to match the rank
of the words in a reference dataset (e.g. 10 would encode
the 10-th most frequently occurring token).
Note that index 0 is expected to be a non-word and will be skipped.
vocabulary_size: Int, maximum possible word index + 1
window_size: Int, size of sampling windows (technically half-window).
The window of a word `w_i` will be
`[i - window_size, i + window_size+1]`.
negative_samples: Float >= 0. 0 for no negative (i.e. random) samples.
1 for same number as positive samples.
shuffle: Whether to shuffle the word couples before returning them.
categorical: bool. if False, labels will be
integers (eg. `[0, 1, 1 .. ]`),
if `True`, labels will be categorical, e.g.
`[[1,0],[0,1],[0,1] .. ]`.
sampling_table: 1D array of size `vocabulary_size` where the entry i
encodes the probability to sample a word of rank i.
seed: Random seed.
# Returns
couples, labels: where `couples` are int pairs and
`labels` are either 0 or 1.
# Note
By convention, index 0 in the vocabulary is
a non-word and will be skipped.
"""
couples = []
labels = []
for i, wi in enumerate(sequence):
if not wi:
continue
if sampling_table is not None:
if sampling_table[wi] < random.random():
continue
window_start = max(0, i - window_size)
window_end = min(len(sequence), i + window_size + 1)
for j in range(window_start, window_end):
if j != i:
wj = sequence[j]
if not wj:
continue
couples.append([wi, wj])
if categorical:
labels.append([0, 1])
else:
labels.append(1)
if negative_samples > 0:
num_negative_samples = int(len(labels) * negative_samples)
words = [c[0] for c in couples]
random.shuffle(words)
couples += [[words[i % len(words)],
random.randint(1, vocabulary_size - 1)]
for i in range(num_negative_samples)]
if categorical:
labels += [[1, 0]] * num_negative_samples
else:
labels += [0] * num_negative_samples
if shuffle:
if seed is None:
seed = random.randint(0, 10e6)
random.seed(seed)
random.shuffle(couples)
random.seed(seed)
random.shuffle(labels)
return couples, labels
def _remove_long_seq(maxlen, seq, label):
"""Removes sequences that exceed the maximum length.
# Arguments
maxlen: Int, maximum length of the output sequences.
seq: List of lists, where each sublist is a sequence.
label: List where each element is an integer.
# Returns
new_seq, new_label: shortened lists for `seq` and `label`.
"""
new_seq, new_label = [], []
for x, y in zip(seq, label):
if len(x) < maxlen:
new_seq.append(x)
new_label.append(y)
return new_seq, new_label
class TimeseriesGenerator(object):
"""Utility class for generating batches of temporal data.
This class takes in a sequence of data-points gathered at
equal intervals, along with time series parameters such as
stride, length of history, etc., to produce batches for
training/validation.
# Arguments
data: Indexable generator (such as list or Numpy array)
containing consecutive data points (timesteps).
            The data should be 2D, and axis 0 is expected
to be the time dimension.
targets: Targets corresponding to timesteps in `data`.
It should have same length as `data`.
length: Length of the output sequences (in number of timesteps).
sampling_rate: Period between successive individual timesteps
within sequences. For rate `r`, timesteps
`data[i]`, `data[i-r]`, ... `data[i - length]`
            are used to create a sample sequence.
stride: Period between successive output sequences.
For stride `s`, consecutive output samples would
be centered around `data[i]`, `data[i+s]`, `data[i+2*s]`, etc.
start_index: Data points earlier than `start_index` will not be used
in the output sequences. This is useful to reserve part of the
data for test or validation.
end_index: Data points later than `end_index` will not be used
in the output sequences. This is useful to reserve part of the
data for test or validation.
shuffle: Whether to shuffle output samples,
or instead draw them in chronological order.
reverse: Boolean: if `true`, timesteps in each output sample will be
in reverse chronological order.
batch_size: Number of timeseries samples in each batch
(except maybe the last one).
# Returns
A [Sequence](/utils/#sequence) instance.
# Examples
```python
from keras.preprocessing.sequence import TimeseriesGenerator
import numpy as np
data = np.array([[i] for i in range(50)])
targets = np.array([[i] for i in range(50)])
data_gen = TimeseriesGenerator(data, targets,
length=10, sampling_rate=2,
batch_size=2)
assert len(data_gen) == 20
batch_0 = data_gen[0]
x, y = batch_0
assert np.array_equal(x,
np.array([[[0], [2], [4], [6], [8]],
[[1], [3], [5], [7], [9]]]))
assert np.array_equal(y,
np.array([[10], [11]]))
```
"""
def __init__(self, data, targets, length,
sampling_rate=1,
stride=1,
start_index=0,
end_index=None,
shuffle=False,
reverse=False,
batch_size=128):
if len(data) != len(targets):
raise ValueError('Data and targets have to be' +
' of same length. '
'Data length is {}'.format(len(data)) +
' while target length is {}'.format(len(targets)))
self.data = data
self.targets = targets
self.length = length
self.sampling_rate = sampling_rate
self.stride = stride
self.start_index = start_index + length
if end_index is None:
end_index = len(data) - 1
self.end_index = end_index
self.shuffle = shuffle
self.reverse = reverse
self.batch_size = batch_size
if self.start_index > self.end_index:
raise ValueError('`start_index+length=%i > end_index=%i` '
'is disallowed, as no part of the sequence '
'would be left to be used as current step.'
% (self.start_index, self.end_index))
def __len__(self):
return (self.end_index - self.start_index +
self.batch_size * self.stride) // (self.batch_size * self.stride)
def __getitem__(self, index):
if self.shuffle:
rows = np.random.randint(
self.start_index, self.end_index + 1, size=self.batch_size)
else:
i = self.start_index + self.batch_size * self.stride * index
rows = np.arange(i, min(i + self.batch_size *
self.stride, self.end_index + 1), self.stride)
samples = np.array([self.data[row - self.length:row:self.sampling_rate]
for row in rows])
targets = np.array([self.targets[row] for row in rows])
if self.reverse:
return samples[:, ::-1, ...], targets
return samples, targets
def get_config(self):
'''Returns the TimeseriesGenerator configuration as Python dictionary.
# Returns
A Python dictionary with the TimeseriesGenerator configuration.
'''
data = self.data
if type(self.data).__module__ == np.__name__:
data = self.data.tolist()
try:
json_data = json.dumps(data)
except TypeError:
raise TypeError('Data not JSON Serializable:', data)
targets = self.targets
if type(self.targets).__module__ == np.__name__:
targets = self.targets.tolist()
try:
json_targets = json.dumps(targets)
except TypeError:
raise TypeError('Targets not JSON Serializable:', targets)
return {
'data': json_data,
'targets': json_targets,
'length': self.length,
'sampling_rate': self.sampling_rate,
'stride': self.stride,
'start_index': self.start_index,
'end_index': self.end_index,
'shuffle': self.shuffle,
'reverse': self.reverse,
'batch_size': self.batch_size
}
def to_json(self, **kwargs):
"""Returns a JSON string containing the timeseries generator
configuration. To load a generator from a JSON string, use
`keras.preprocessing.sequence.timeseries_generator_from_json(json_string)`.
# Arguments
**kwargs: Additional keyword arguments
to be passed to `json.dumps()`.
# Returns
            A JSON string containing the timeseries generator configuration.
"""
config = self.get_config()
timeseries_generator_config = {
'class_name': self.__class__.__name__,
'config': config
}
return json.dumps(timeseries_generator_config, **kwargs)
def timeseries_generator_from_json(json_string):
"""Parses a JSON timeseries generator configuration file and
returns a timeseries generator instance.
# Arguments
json_string: JSON string encoding a timeseries
generator configuration.
# Returns
A Keras TimeseriesGenerator instance
"""
full_config = json.loads(json_string)
config = full_config.get('config')
data = json.loads(config.pop('data'))
config['data'] = data
targets = json.loads(config.pop('targets'))
config['targets'] = targets
return TimeseriesGenerator(**config)
| keras-preprocessing/keras_preprocessing/sequence.py/0 | {
"file_path": "keras-preprocessing/keras_preprocessing/sequence.py",
"repo_id": "keras-preprocessing",
"token_count": 7185
} | 135 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/keras_tuner/'" />
| keras-tuner/docs/site/index.html/0 | {
"file_path": "keras-tuner/docs/site/index.html",
"repo_id": "keras-tuner",
"token_count": 32
} | 136 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import glob as built_in_glob
import os
import shutil
from keras_tuner.backend import config
if config.backend() == "tensorflow":
import tensorflow as tf
else:
tf = None
def exists(path):
if tf is None:
return os.path.exists(path)
return tf.io.gfile.exists(path)
def rmtree(path):
if tf is None:
return shutil.rmtree(path)
return tf.io.gfile.rmtree(path)
def File(filename, mode):
if tf is None:
file = open(filename, mode)
else:
file = tf.io.gfile.GFile(filename, mode)
return file
def glob(path):
if tf is None:
return built_in_glob.glob(path)
return tf.io.gfile.glob(path)
def makedirs(path):
if tf is None:
return os.makedirs(path, exist_ok=True)
return tf.io.gfile.makedirs(path)
# The following code is forked from Keras:
# https://github.com/keras-team/keras/blob/master/keras/distribute/distributed_file_utils.py
"""Utilities that help manage directory path in distributed settings.
In multi-worker training, the need to write a file to a distributed file
location often requires that only one copy be written by one worker, despite
the many workers that are involved in training. The option to only perform
saving by the chief is
not feasible for a couple of reasons: 1) Chief and workers may each contain
a client that runs the same piece of code and it's preferred not to make
any distinction between the code run by chief and other workers, and 2)
saving of model or model's related information may require SyncOnRead
variables to be read, which needs the cooperation of all workers to perform
all-reduce.
This set of utilities is used so that only one copy is written to the needed
directory, by supplying a temporary write directory path for workers that don't
need to save, and removing the temporary directory once file writing is done.
Example usage:
```
# Before using a directory to write file to.
self.log_write_dir = write_dirpath(self.log_dir, get_distribution_strategy())
# Now `self.log_write_dir` can be safely used to write file to.
...
# After the file is written to the directory.
remove_temp_dirpath(self.log_dir, get_distribution_strategy())
```
Experimental. API is subject to change.
"""
def _get_base_dirpath(strategy):
task_id = strategy.extended._task_id # pylint: disable=protected-access
return f"workertemp_{str(task_id)}"
def _is_temp_dir(dirpath, strategy):
return dirpath.endswith(_get_base_dirpath(strategy))
def _get_temp_dir(dirpath, strategy):
if _is_temp_dir(dirpath, strategy):
temp_dir = dirpath
else:
temp_dir = os.path.join(dirpath, _get_base_dirpath(strategy))
tf.io.gfile.makedirs(temp_dir)
return temp_dir
def write_dirpath(dirpath, strategy):
"""Returns the writing dir that should be used to save file distributedly.
`dirpath` would be created if it doesn't exist.
Args:
dirpath: Original dirpath that would be used without distribution.
strategy: The tf.distribute strategy object currently used.
Returns:
The writing dir path that should be used to save with distribution.
"""
if tf is None:
return
if strategy is None:
# Infer strategy if not given.
strategy = tf.distribute.get_strategy()
if strategy is None:
# If strategy is still not available, this is not in distributed
# training. Fallback to original dirpath.
return dirpath
if (
not strategy.extended._in_multi_worker_mode()
): # pylint: disable=protected-access
return dirpath
if strategy.extended.should_checkpoint:
return dirpath
# If this worker is not chief and hence should not save file, save it to a
# temporary directory to be removed later.
return _get_temp_dir(dirpath, strategy)
def remove_temp_dirpath(dirpath, strategy):
"""Removes the temp path after writing is finished.
Args:
dirpath: Original dirpath that would be used without distribution, or
the temporary dirpath used with distribution.
strategy: The tf.distribute strategy object currently used.
"""
if tf is None:
return
if strategy is None:
# Infer strategy if not given.
strategy = tf.distribute.get_strategy()
if strategy is None:
# If strategy is still not available, this is not in distributed
# training. Fallback to no-op.
return
# TODO(anjalisridhar): Consider removing the check for multi worker mode
# since it is redundant when used with the should_checkpoint property.
if (
strategy.extended._in_multi_worker_mode()
and not strategy.extended.should_checkpoint
):
# If this worker is not chief and hence should not save file, remove
# the temporary directory.
tf.io.gfile.rmtree(_get_temp_dir(dirpath, strategy))
def write_filepath(filepath, strategy):
"""Returns the writing file path to be used to save file distributedly.
Directory to contain `filepath` would be created if it doesn't exist.
Args:
filepath: Original filepath that would be used without distribution.
strategy: The tf.distribute strategy object currently used.
Returns:
The writing filepath that should be used to save file with distribution.
"""
if tf is None:
return
dirpath = os.path.dirname(filepath)
base = os.path.basename(filepath)
return os.path.join(write_dirpath(dirpath, strategy), base)
def remove_temp_dir_with_filepath(filepath, strategy):
"""Removes the temp path for file after writing is finished.
Args:
filepath: Original filepath that would be used without distribution, or
the temporary filepath used with distribution.
strategy: The tf.distribute strategy object currently used.
"""
if tf is None:
return
remove_temp_dirpath(os.path.dirname(filepath), strategy)
| keras-tuner/keras_tuner/backend/io.py/0 | {
"file_path": "keras-tuner/keras_tuner/backend/io.py",
"repo_id": "keras-tuner",
"token_count": 2201
} | 137 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"HyperParameters logic."
import abc
import six
from keras_tuner import protos
from keras_tuner import utils
@six.add_metaclass(abc.ABCMeta)
class Condition:
"""Abstract condition for a conditional hyperparameter.
Subclasses of this object can be passed to a `HyperParameter` to specify
that this condition must be met for the hyperparameter to be active for the
`Trial`.
"""
@abc.abstractmethod
def is_active(self, values):
"""Whether this condition should be considered active.
Determines whether this condition is true for the current `Trial`.
Args:
values: Dict. The active values for this `Trial`. Keys are the
names of the hyperparameters.
Returns:
A boolean value of whether the condition is true.
"""
raise NotImplementedError("Must be implemented in subclasses.")
@abc.abstractmethod
def __eq__(self, other):
raise NotImplementedError("Must be implemented in subclasses.")
@abc.abstractmethod
def get_config(self):
raise NotImplementedError("Must be implemented in subclasses.")
@classmethod
def from_config(cls, config):
return cls(**config) # pytype: disable=not-instantiable
@classmethod
def from_proto(cls, proto):
kind = proto.WhichOneof("kind")
if kind == "parent":
parent = getattr(proto, kind)
name = parent.name
values = parent.values
values = [getattr(v, v.WhichOneof("kind")) for v in values]
return Parent(name=name, values=values)
raise ValueError(f"Unrecognized condition of type: {kind}")
class Parent(Condition):
"""Condition checking a `HyperParameter`'s value is in a list of values.
It specifies a condition that a `HyperParameter`'s value is in a list of
values. It can be used as the condition to activate another
`HyperParameter` for the `Trial`.
Example:
```python
a = Choice('model', ['linear', 'dnn'])
    condition = Parent(name='model', values=['dnn'])
b = Int('num_layers', 5, 10, conditions=[condition])
```
Args:
name: A string, the name of the `HyperParameter` to use in the
condition.
values: A list of values of the `HyperParameter` to activate the
condition.
"""
def __init__(self, name, values):
self.name = name
# Standardize on str, int, float, bool.
values = utils.to_list(values)
first_val = values[0]
if isinstance(first_val, six.string_types):
values = [str(v) for v in values]
elif isinstance(first_val, bool):
            # Bool check needs to be before integer check because `bool` is a
            # subclass of `int`; otherwise bool values would fall into the
            # integer branch.
pass
elif isinstance(first_val, six.integer_types):
values = [int(v) for v in values]
elif not isinstance(first_val, float):
            raise TypeError(
                "Can contain only `int`, `float`, `str`, or "
                "`bool`, found values: " + str(values) + " with "
                "types: " + str(type(first_val))
)
self.values = values
def is_active(self, values):
return self.name in values and values[self.name] in self.values
def __eq__(self, other):
return (
isinstance(other, Parent)
and other.name == self.name
and other.values == self.values
)
def get_config(self):
return {"name": self.name, "values": self.values}
def to_proto(self):
if isinstance(self.values[0], six.string_types):
values = [
protos.get_proto().Value(string_value=v) for v in self.values
]
elif isinstance(self.values[0], bool):
values = [
protos.get_proto().Value(boolean_value=v) for v in self.values
]
elif isinstance(self.values[0], six.integer_types):
values = [
protos.get_proto().Value(int_value=v) for v in self.values
]
else:
values = [
protos.get_proto().Value(float_value=v) for v in self.values
]
return protos.get_proto().Condition(
parent=protos.get_proto().Condition.Parent(
name=self.name, values=values
)
)
OBJECTS = (
Condition,
Parent,
)
ALL_CLASSES = {cls.__name__: cls for cls in OBJECTS}
def deserialize(config):
return utils.deserialize_keras_object(config, module_objects=ALL_CLASSES)
def serialize(obj):
return utils.serialize_keras_object(obj)
| keras-tuner/keras_tuner/engine/conditions.py/0 | {
"file_path": "keras-tuner/keras_tuner/engine/conditions.py",
"repo_id": "keras-tuner",
"token_count": 2181
} | 138 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import math
from keras_tuner import protos
def sampling_from_proto(sampling):
if sampling == protos.get_proto().Sampling.LINEAR:
return "linear"
if sampling == protos.get_proto().Sampling.LOG:
return "log"
if sampling == protos.get_proto().Sampling.REVERSE_LOG:
return "reverse_log"
raise ValueError(
"Expected sampling to be one of predefined proto values. "
f"Received: '{sampling}'."
)
def sampling_to_proto(sampling):
if sampling == "linear":
return protos.get_proto().Sampling.LINEAR
if sampling == "log":
return protos.get_proto().Sampling.LOG
if sampling == "reverse_log":
return protos.get_proto().Sampling.REVERSE_LOG
raise ValueError(
"Expected sampling to be 'linear', 'log', or 'reverse_log'. "
f"Received: '{sampling}'."
)
def prob_to_index(prob, n_index):
"""Convert cumulative probability to 0-based index in the given range."""
ele_prob = 1 / n_index
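    # Worked example: with n_index=4, ele_prob is 0.25 and the buckets are
    # [0, 0.25), [0.25, 0.5), [0.5, 0.75), [0.75, 1], so
    # prob_to_index(0.6, 4) == floor(0.6 / 0.25) == 2.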
index = int(math.floor(prob / ele_prob))
# Can happen when `prob` is very close to 1.
if index == n_index:
index -= 1
return index
def index_to_prob(index, n_index):
"""Convert 0-based index in the given range to cumulative probability."""
ele_prob = 1 / n_index
# Center the value in its probability bucket.
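    # For example, index_to_prob(2, 4) == (2 + 0.5) * 0.25 == 0.625, which
    # prob_to_index maps back to index 2.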
return (index + 0.5) * ele_prob
| keras-tuner/keras_tuner/engine/hyperparameters/hp_utils.py/0 | {
"file_path": "keras-tuner/keras_tuner/engine/hyperparameters/hp_utils.py",
"repo_id": "keras-tuner",
"token_count": 705
} | 139 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from io import StringIO
from unittest.mock import patch
import numpy as np
import pytest
from tensorboard.plugins.hparams import api as hparams_api
import keras_tuner
from keras_tuner import errors
from keras_tuner.backend import config
from keras_tuner.backend import keras
from keras_tuner.engine import tuner as tuner_module
INPUT_DIM = 2
NUM_CLASSES = 3
NUM_SAMPLES = 64
TRAIN_INPUTS = np.random.random(size=(NUM_SAMPLES, INPUT_DIM))
TRAIN_TARGETS = np.random.randint(0, NUM_CLASSES, size=(NUM_SAMPLES, 1))
VAL_INPUTS = np.random.random(size=(NUM_SAMPLES, INPUT_DIM))
VAL_TARGETS = np.random.randint(0, NUM_CLASSES, size=(NUM_SAMPLES, 1))
def build_model(hp):
inputs = keras.Input(shape=(INPUT_DIM,))
x = inputs
for i in range(hp.Int("num_layers", 1, 4)):
x = keras.layers.Dense(
units=hp.Int(f"units_{str(i)}", 5, 9, 1, default=6),
activation="relu",
)(x)
outputs = keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(
optimizer=keras.optimizers.Adam(
hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])
),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
return model
class MockModel(keras.Model):
def __init__(self, full_history):
super().__init__()
self.full_history = full_history
self.callbacks = []
self.optimizer = True
def call_callbacks(self, callbacks, method_name, *args, **kwargs):
for callback in callbacks:
method = getattr(callback, method_name)
method(*args, **kwargs)
def on_epoch_begin(self, epoch):
for callback in self.callbacks:
callback.on_epoch_begin(epoch, logs=None)
def on_epoch_end(self, epoch):
logs = {"loss": np.average(self.full_history[epoch])}
for callback in self.callbacks:
callback.on_epoch_end(epoch, logs=logs)
def on_batch_begin(self, epoch, batch):
for callback in self.callbacks:
callback.on_batch_begin(batch, logs=None)
def on_batch_end(self, epoch, batch):
logs = {"loss": self.full_history[epoch][batch]}
for callback in self.callbacks:
callback.on_batch_end(epoch, logs=logs)
def fit(self, *args, **kwargs):
self.callbacks = kwargs["callbacks"]
for callback in self.callbacks:
callback.set_model(self)
for epoch in range(len(self.full_history)):
self.on_epoch_begin(epoch)
for batch in range(len(self.full_history[epoch])):
self.on_batch_begin(epoch, batch)
self.on_batch_end(epoch, batch)
self.on_epoch_end(epoch)
history = keras.callbacks.History()
history.history = {
"loss": [
np.average(epoch_values) for epoch_values in self.full_history
]
}
return history
def save_weights(self, fname, **kwargs):
pass
def get_config(self):
return {}
class MockHyperModel(keras_tuner.HyperModel):
mode_0 = [[10, 9, 8], [7, 6, 5], [4, 3, 2]]
mode_1 = [[13, 13, 13], [12, 12, 12], [11, 11, 11]]
def __init__(self):
# The first call to `build` in tuner __init__
# will reset this to 0
self.mode_0_execution_count = -1
def build(self, hp):
if hp.Choice("mode", [0, 1]) == 0:
return MockModel(self.mode_0)
return MockModel(self.mode_1)
def build_subclass_model(hp):
class MyModel(keras.Model):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.layer = keras.layers.Dense(NUM_CLASSES, activation="softmax")
def build(self, input_shape):
self.layer.build(input_shape)
super().build(input_shape)
def call(self, x):
x = x + hp.Float("bias", 0, 10)
return self.layer(x)
# Currently necessary, because we save the model.
# Note that this model is not written w/ best practices,
# because the hp.Float value of the best model cannot be
# inferred from `get_config()`. The best practice is to pass
# HPs as __init__ arguments to subclass Layers and Models.
def get_config(self):
return {}
model = MyModel()
model.compile(
optimizer=keras.optimizers.Adam(
hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])
),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
return model
class ExampleHyperModel(keras_tuner.HyperModel):
def build(self, hp):
inputs = keras.Input(shape=(INPUT_DIM,))
x = inputs
for i in range(hp.Int("num_layers", 1, 4)):
x = keras.layers.Dense(
units=hp.Int(f"units_{str(i)}", 5, 9, 1, default=6),
activation="relu",
)(x)
outputs = keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(
optimizer=keras.optimizers.Adam(
hp.Choice("learning_rate", [1e-2, 1e-3, 1e-4])
),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
return model
def fit(self, hp, model, *args, **kwargs):
return model.fit(*args, shuffle=hp.Boolean("shuffle"), **kwargs)
def test_basic_tuner_attributes(tmp_path):
tuner = keras_tuner.tuners.RandomSearch(
build_model,
objective="val_accuracy",
max_trials=2,
executions_per_trial=3,
directory=tmp_path,
)
assert tuner.oracle.objective.name == "val_accuracy"
assert tuner.oracle.max_trials == 2
assert tuner.executions_per_trial == 3
assert tuner.directory == tmp_path
assert tuner.hypermodel.__class__.__name__ == "DefaultHyperModel"
assert len(tuner.oracle.hyperparameters.space) == 3 # default search space
assert len(tuner.oracle.hyperparameters.values) == 3 # default search space
tuner.search_space_summary()
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
tuner.results_summary()
assert len(tuner.oracle.trials) == 2
assert os.path.exists(os.path.join(str(tmp_path), "untitled_project"))
def test_multi_objective(tmp_path):
tuner = keras_tuner.tuners.RandomSearch(
build_model,
objective=["val_accuracy", "val_loss"],
max_trials=2,
executions_per_trial=3,
directory=tmp_path,
)
assert tuner.oracle.objective.name == "multi_objective"
tuner.search_space_summary()
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
tuner.results_summary()
def test_no_hypermodel_with_objective(tmp_path):
class MyTuner(keras_tuner.tuners.RandomSearch):
def run_trial(self, trial, *args, **kwargs):
hp = trial.hyperparameters
return {"val_loss": hp.Float("value", 0, 10)}
tuner = MyTuner(
objective="val_loss",
max_trials=2,
directory=tmp_path,
)
tuner.search()
assert len(tuner.oracle.trials) == 2
def test_no_objective_with_hypermodel(tmp_path):
class MyHyperModel(ExampleHyperModel):
def fit(self, hp, model, *args, **kwargs):
return hp.Float("value", 0, 10)
tuner = keras_tuner.tuners.RandomSearch(
hypermodel=MyHyperModel(),
max_trials=2,
executions_per_trial=3,
directory=tmp_path,
)
tuner.search()
assert len(tuner.oracle.trials) == 2
def test_no_hypermodel_no_objective(tmp_path):
class MyTuner(keras_tuner.tuners.RandomSearch):
def run_trial(self, trial, *args, **kwargs):
hp = trial.hyperparameters
return hp.Float("value", 0, 10)
tuner = MyTuner(
objective="val_loss",
max_trials=2,
directory=tmp_path,
)
tuner.search()
assert len(tuner.oracle.trials) == 2
def test_no_hypermodel_without_override_run_trial_error(tmp_path):
with pytest.raises(ValueError, match="Received `hypermodel=None`"):
keras_tuner.tuners.RandomSearch(
max_trials=2,
executions_per_trial=3,
directory=tmp_path,
)
def test_fit_return_string(tmp_path):
class MyHyperModel(ExampleHyperModel):
def fit(self, hp, model, *args, **kwargs):
return hp.Choice("value", ["a", "b"])
tuner = keras_tuner.tuners.RandomSearch(
objective="val_loss",
hypermodel=MyHyperModel(),
max_trials=2,
executions_per_trial=3,
directory=tmp_path,
)
    with pytest.raises(TypeError, match=r"HyperModel\.fit\(\) to be one of"):
tuner.search()
def test_run_trial_return_string(tmp_path):
class MyTuner(keras_tuner.tuners.RandomSearch):
def run_trial(self, trial, **kwargs):
return trial.hyperparameters.Choice("value", ["a", "b"])
tuner = MyTuner(
objective="val_loss",
max_trials=2,
executions_per_trial=3,
directory=tmp_path,
)
    with pytest.raises(TypeError, match=r"Tuner\.run_trial\(\) to be one of"):
tuner.search()
def test_no_objective_fit_return_not_float(tmp_path):
class MyHyperModel(ExampleHyperModel):
def fit(self, hp, model, *args, **kwargs):
return {"val_loss": hp.Float("value", 0, 10)}
tuner = keras_tuner.tuners.RandomSearch(
hypermodel=MyHyperModel(),
max_trials=2,
executions_per_trial=3,
directory=tmp_path,
)
with pytest.raises(
        TypeError, match=r"HyperModel\.fit\(\) to be a single float"
):
tuner.search()
def test_no_objective_run_trial_return_not_float(tmp_path):
class MyTuner(keras_tuner.tuners.RandomSearch):
def run_trial(self, trial, **kwargs):
return {"val_loss": trial.hyperparameters.Float("value", 0, 10)}
tuner = MyTuner(
max_trials=2,
executions_per_trial=3,
directory=tmp_path,
)
with pytest.raises(
        TypeError, match=r"Tuner\.run_trial\(\) to be a single float"
):
tuner.search()
def test_callbacks_in_fit_kwargs(tmp_path):
tuner = keras_tuner.tuners.RandomSearch(
build_model,
objective="val_accuracy",
max_trials=2,
executions_per_trial=3,
directory=tmp_path,
)
with patch.object(
tuner, "_build_and_fit_model", wraps=tuner._build_and_fit_model
) as mock_build_and_fit_model:
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
callbacks=[
keras.callbacks.EarlyStopping(),
keras.callbacks.TensorBoard(tmp_path),
],
)
assert len(tuner.oracle.trials) == 2
callback_class_names = [
x.__class__.__name__
for x in mock_build_and_fit_model.call_args[1]["callbacks"]
]
assert {
"EarlyStopping",
"TensorBoard",
"TunerCallback",
"SaveBestEpoch",
}.issubset(
set(callback_class_names),
)
def test_hypermodel_with_dynamic_space(tmp_path):
hypermodel = ExampleHyperModel()
tuner = keras_tuner.tuners.RandomSearch(
hypermodel,
objective="val_accuracy",
max_trials=2,
executions_per_trial=3,
directory=tmp_path,
)
assert tuner.hypermodel == hypermodel
tuner.search_space_summary()
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
tuner.results_summary()
assert len(tuner.oracle.trials) == 2
tuner.oracle.hyperparameters.get("shuffle")
def test_override_compile(tmp_path):
class MyHyperModel(ExampleHyperModel):
def fit(self, hp, model, *args, **kwargs):
history = super().fit(hp, model, *args, **kwargs)
assert model.optimizer.__class__.__name__ == "RMSprop"
assert model.loss == "mse"
assert len(model.metrics) >= 2
assert model.metrics[-2].__class__.__name__ in (
"mean_squared_error",
"Mean",
)
assert model.metrics[-1].__class__.__name__ in (
"sparse_categorical_accuracy",
"CompileMetrics",
)
return history
tuner = keras_tuner.tuners.RandomSearch(
MyHyperModel(),
objective="val_mse",
max_trials=2,
executions_per_trial=1,
metrics=["mse", "accuracy"],
loss="mse",
optimizer="rmsprop",
directory=tmp_path,
)
assert tuner.oracle.objective.name == "val_mse"
assert tuner.optimizer == "rmsprop"
assert tuner.loss == "mse"
assert tuner.metrics == ["mse", "accuracy"]
tuner.search_space_summary()
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
tuner.results_summary()
model = tuner.get_best_models()[0]
assert model.loss == "mse"
def test_override_optimizer_with_actual_optimizer_object(tmp_path):
tuner = keras_tuner.tuners.RandomSearch(
build_model,
objective="val_loss",
max_trials=4,
optimizer=keras.optimizers.Adam(0.01),
directory=tmp_path,
)
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
def test_static_space(tmp_path):
def build_model_static(hp):
inputs = keras.Input(shape=(INPUT_DIM,))
x = inputs
for i in range(hp.get("num_layers")):
x = keras.layers.Dense(
units=hp.get(f"units_{str(i)}"), activation="relu"
)(x)
outputs = keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(
optimizer=keras.optimizers.Adam(hp.get("learning_rate")),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
return model
hp = keras_tuner.HyperParameters()
hp.Int("num_layers", 1, 3, 1, default=2)
hp.Int("units_0", 4, 6, 1, default=5)
hp.Int("units_1", 4, 6, 1, default=5)
hp.Int("units_2", 4, 6, 1, default=5)
hp.Choice("learning_rate", [0.01, 0.001])
tuner = keras_tuner.tuners.RandomSearch(
build_model_static,
objective="val_accuracy",
max_trials=4,
directory=tmp_path,
hyperparameters=hp,
allow_new_entries=False,
)
assert tuner.oracle.hyperparameters == hp
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
assert len(tuner.oracle.trials) == 4
def test_static_space_errors(tmp_path):
def build_model_static(hp):
inputs = keras.Input(shape=(INPUT_DIM,))
x = inputs
for i in range(hp.get("num_layers")):
x = keras.layers.Dense(
units=hp.get(f"units_{str(i)}"), activation="relu"
)(x)
outputs = keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(
optimizer=keras.optimizers.Adam(
hp.Float("learning_rate", 1e-5, 1e-2)
),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
return model
hp = keras_tuner.HyperParameters()
hp.Int("num_layers", 1, 3, 1, default=2)
hp.Int("units_0", 4, 6, 1, default=5)
hp.Int("units_1", 4, 6, 1, default=5)
with pytest.raises(RuntimeError, match="`allow_new_entries` is `False`"):
tuner = keras_tuner.tuners.RandomSearch(
build_model_static,
objective="val_accuracy",
max_trials=2,
directory=tmp_path,
hyperparameters=hp,
allow_new_entries=False,
)
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
def build_model_static_invalid(hp):
inputs = keras.Input(shape=(INPUT_DIM,))
x = inputs
for i in range(hp.get("num_layers")):
x = keras.layers.Dense(
units=hp.get(f"units_{str(i)}"), activation="relu"
)(x)
outputs = keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = keras.Model(inputs, outputs)
model.compile(
optimizer=keras.optimizers.Adam(
hp.Float("learning_rate", 0.001, 0.008, 0.001)
),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
return model
with pytest.raises(RuntimeError, match="`allow_new_entries` is `False`"):
tuner = keras_tuner.tuners.RandomSearch(
build_model_static_invalid,
objective="val_accuracy",
max_trials=2,
directory=tmp_path,
hyperparameters=hp,
allow_new_entries=False,
)
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
def test_restricted_space_using_defaults(tmp_path):
hp = keras_tuner.HyperParameters()
hp.Int("num_layers", 1, 5, 1, default=2)
hp.Choice("learning_rate", [0.01, 0.001, 0.0001])
tuner = keras_tuner.tuners.RandomSearch(
build_model,
objective="val_accuracy",
max_trials=4,
directory=tmp_path,
hyperparameters=hp,
allow_new_entries=True,
tune_new_entries=False,
)
assert len(tuner.oracle.hyperparameters.space) == 2
new_lr = [
p
for p in tuner.oracle.hyperparameters.space
if p.name == "learning_rate"
][0]
assert new_lr.values == [0.01, 0.001, 0.0001]
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=1,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
assert len(tuner.oracle.trials) == 4
assert len(tuner.oracle.hyperparameters.space) == 2 # Nothing added
for trial in tuner.oracle.trials.values():
# Trials get default values but don't pass these on to the oracle.
assert len(trial.hyperparameters.space) >= 2
def test_restricted_space_with_custom_defaults(tmp_path):
hp = keras_tuner.HyperParameters()
hp.Int("num_layers", 1, 3, 1, default=2)
hp.Choice("learning_rate", [0.01, 0.001, 0.0001])
hp.Fixed("units_0", 4)
hp.Fixed("units_1", 3)
tuner = keras_tuner.tuners.RandomSearch(
build_model,
objective="val_accuracy",
max_trials=4,
directory=tmp_path,
hyperparameters=hp,
allow_new_entries=True,
tune_new_entries=False,
)
assert len(tuner.oracle.hyperparameters.space) == 4
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=1,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
assert len(tuner.oracle.trials) == 4
def test_reparameterized_space(tmp_path):
hp = keras_tuner.HyperParameters()
hp.Int("num_layers", 1, 3, 1, default=3)
hp.Choice("learning_rate", [0.01, 0.001, 0.0001])
tuner = keras_tuner.tuners.RandomSearch(
build_model,
seed=1337,
objective="val_accuracy",
max_trials=4,
directory=tmp_path,
hyperparameters=hp,
allow_new_entries=True,
tune_new_entries=True,
)
# Initial build model adds to the space.
assert len(tuner.oracle.hyperparameters.space) == 5
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=1,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
assert len(tuner.oracle.trials) == 4
assert len(tuner.oracle.hyperparameters.space) == 5
def test_get_best_models(tmp_path):
tuner = keras_tuner.tuners.RandomSearch(
build_model, objective="val_accuracy", max_trials=4, directory=tmp_path
)
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
models = tuner.get_best_models(2)
assert len(models) == 2
assert isinstance(models[0], keras.Model)
assert isinstance(models[1], keras.Model)
def test_saving_and_reloading(tmp_path):
tuner = keras_tuner.tuners.RandomSearch(
build_model,
objective="val_accuracy",
max_trials=4,
executions_per_trial=2,
directory=tmp_path,
)
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
new_tuner = keras_tuner.tuners.RandomSearch(
build_model,
objective="val_accuracy",
max_trials=4,
executions_per_trial=2,
directory=tmp_path,
)
new_tuner.reload()
assert len(new_tuner.oracle.trials) == 4
new_tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
def test_subclass_model(tmp_path):
tuner = keras_tuner.tuners.RandomSearch(
build_subclass_model,
objective="val_accuracy",
max_trials=2,
directory=tmp_path,
)
tuner.search_space_summary()
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
tuner.results_summary()
assert len(tuner.oracle.trials) == 2
def test_subclass_model_loading(tmp_path):
tuner = keras_tuner.tuners.RandomSearch(
build_subclass_model,
objective="val_accuracy",
max_trials=2,
directory=tmp_path,
)
tuner.search_space_summary()
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
best_trial_score = tuner.oracle.get_best_trials()[0].score
best_model = tuner.get_best_models()[0]
best_model_score = best_model.evaluate(VAL_INPUTS, VAL_TARGETS)[1]
assert best_model_score == best_trial_score
def test_update_trial(tmp_path):
# Test stop the oracle in update_trial.
class MyOracle(keras_tuner.Oracle):
def populate_space(self, _):
values = {
p.name: p.random_sample() for p in self.hyperparameters.space
}
return {"values": values, "status": "RUNNING"}
def update_trial(self, trial_id, metrics, step=0):
super().update_trial(trial_id, metrics, step)
trial = self.trials[trial_id]
trial.status = "STOPPED"
return trial.status
my_oracle = MyOracle(objective="val_accuracy", max_trials=2)
tuner = keras_tuner.Tuner(
oracle=my_oracle, hypermodel=build_model, directory=tmp_path
)
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=5,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
assert len(my_oracle.trials) == 2
for trial in my_oracle.trials.values():
# Test that early stopping worked.
assert len(trial.metrics.get_history("val_accuracy")) == 1
@pytest.mark.skipif(
config.multi_backend(),
reason="The test is too slow.",
)
def test_tunable_false_hypermodel(tmp_path):
def build_model(hp):
input_shape = (256, 256, 3)
inputs = keras.Input(shape=input_shape)
with hp.name_scope("xception"):
# Tune the pooling of Xception by supplying the search space
# beforehand.
hp.Choice("pooling", ["avg", "max"])
xception = keras_tuner.applications.HyperXception(
include_top=False, input_shape=input_shape, tunable=False
).build(hp)
x = xception(inputs)
x = keras.layers.Dense(
hp.Int("hidden_units", 50, 100, step=10), activation="relu"
)(x)
outputs = keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
model = keras.Model(inputs, outputs)
optimizer = keras.optimizers.get(
hp.Choice("optimizer", ["adam", "sgd"])
)
optimizer.learning_rate = hp.Float(
"learning_rate", 1e-4, 1e-2, sampling="log"
)
model.compile(optimizer, loss="sparse_categorical_crossentropy")
return model
tuner = keras_tuner.RandomSearch(
objective="val_loss",
hypermodel=build_model,
max_trials=4,
directory=tmp_path,
)
x = np.random.random(size=(2, 256, 256, 3))
y = np.random.randint(0, NUM_CLASSES, size=(2,))
tuner.search(x, y, validation_data=(x, y), batch_size=2)
hps = tuner.oracle.get_space()
assert "xception/pooling" in hps
assert "hidden_units" in hps
assert "optimizer" in hps
assert "learning_rate" in hps
# Make sure no HPs from building xception were added.
assert len(hps.space) == 4
def test_get_best_hyperparameters(tmp_path):
hp1 = keras_tuner.HyperParameters()
hp1.Fixed("a", 1)
trial1 = keras_tuner.engine.trial.Trial(hyperparameters=hp1)
trial1.status = "COMPLETED"
trial1.score = 10
hp2 = keras_tuner.HyperParameters()
hp2.Fixed("a", 2)
trial2 = keras_tuner.engine.trial.Trial(hyperparameters=hp2)
trial2.status = "COMPLETED"
trial2.score = 9
tuner = keras_tuner.RandomSearch(
objective="val_accuracy",
hypermodel=build_model,
max_trials=2,
directory=tmp_path,
)
tuner.oracle.trials = {trial1.trial_id: trial1, trial2.trial_id: trial2}
hps = tuner.get_best_hyperparameters()[0]
assert hps["a"] == 1
def test_reloading_error_message(tmp_path):
shared_dir = tmp_path
tuner = keras_tuner.tuners.RandomSearch(
build_model,
objective="val_accuracy",
max_trials=2,
executions_per_trial=3,
directory=shared_dir,
)
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
with pytest.raises(RuntimeError, match="pass `overwrite=True`"):
keras_tuner.tuners.BayesianOptimization(
build_model,
objective="val_accuracy",
max_trials=2,
executions_per_trial=3,
directory=shared_dir,
)
def test_search_logging_verbosity(tmp_path):
tuner = keras_tuner.tuners.RandomSearch(
build_model,
objective="val_accuracy",
max_trials=2,
executions_per_trial=3,
directory=tmp_path,
)
with patch("sys.stdout", new=StringIO()) as output:
tuner.search(
x=TRAIN_INPUTS,
y=TRAIN_TARGETS,
epochs=2,
validation_data=(VAL_INPUTS, VAL_TARGETS),
verbose=0,
)
assert output.getvalue().strip() == ""
@pytest.mark.skipif(
keras_tuner.backend.config.backend() != "tensorflow",
reason="KerasTuner can only use TensorBoard with TensorFlow backend.",
)
def test_convert_hyperparams_to_hparams():
def _check_hparams_equal(hp1, hp2):
assert (
hparams_api.hparams_pb(hp1, start_time_secs=0).SerializeToString()
== hparams_api.hparams_pb(
hp2, start_time_secs=0
).SerializeToString()
)
hps = keras_tuner.engine.hyperparameters.HyperParameters()
hps.Choice("learning_rate", [1e-4, 1e-3, 1e-2])
hparams = keras_tuner.engine.tuner_utils.convert_hyperparams_to_hparams(
hps, hparams_api
)
_check_hparams_equal(
hparams,
{
hparams_api.HParam(
"learning_rate", hparams_api.Discrete([1e-4, 1e-3, 1e-2])
): 1e-4
},
)
hps = keras_tuner.engine.hyperparameters.HyperParameters()
hps.Int("units", min_value=2, max_value=16)
hparams = keras_tuner.engine.tuner_utils.convert_hyperparams_to_hparams(
hps, hparams_api
)
_check_hparams_equal(
hparams,
{hparams_api.HParam("units", hparams_api.IntInterval(2, 16)): 2},
)
hps = keras_tuner.engine.hyperparameters.HyperParameters()
hps.Int("units", min_value=32, max_value=128, step=32)
hparams = keras_tuner.engine.tuner_utils.convert_hyperparams_to_hparams(
hps, hparams_api
)
_check_hparams_equal(
hparams,
{
hparams_api.HParam(
"units", hparams_api.Discrete([32, 64, 96, 128])
): 32
},
)
hps = keras_tuner.engine.hyperparameters.HyperParameters()
hps.Float("learning_rate", min_value=0.5, max_value=1.25, step=0.25)
hparams = keras_tuner.engine.tuner_utils.convert_hyperparams_to_hparams(
hps, hparams_api
)
_check_hparams_equal(
hparams,
{
hparams_api.HParam(
"learning_rate", hparams_api.Discrete([0.5, 0.75, 1.0, 1.25])
): 0.5
},
)
hps = keras_tuner.engine.hyperparameters.HyperParameters()
hps.Float("learning_rate", min_value=1e-4, max_value=1e-1)
hparams = keras_tuner.engine.tuner_utils.convert_hyperparams_to_hparams(
hps, hparams_api
)
_check_hparams_equal(
hparams,
{
hparams_api.HParam(
"learning_rate", hparams_api.RealInterval(1e-4, 1e-1)
): 1e-4
},
)
hps = keras_tuner.engine.hyperparameters.HyperParameters()
hps.Float("theta", min_value=0.0, max_value=1.57)
hps.Float("r", min_value=0.0, max_value=1.0)
hparams = keras_tuner.engine.tuner_utils.convert_hyperparams_to_hparams(
hps, hparams_api
)
expected_hparams = {
hparams_api.HParam("theta", hparams_api.RealInterval(0.0, 1.57)): 0.0,
hparams_api.HParam("r", hparams_api.RealInterval(0.0, 1.0)): 0.0,
}
hparams_repr_list = [repr(hparams[x]) for x in hparams.keys()]
expected_hparams_repr_list = [
repr(expected_hparams[x]) for x in expected_hparams
]
assert sorted(hparams_repr_list) == sorted(expected_hparams_repr_list)
hps = keras_tuner.engine.hyperparameters.HyperParameters()
hps.Boolean("has_beta")
hparams = keras_tuner.engine.tuner_utils.convert_hyperparams_to_hparams(
hps, hparams_api
)
_check_hparams_equal(
hparams,
{
hparams_api.HParam(
"has_beta", hparams_api.Discrete([True, False])
): False
},
)
hps = keras_tuner.engine.hyperparameters.HyperParameters()
hps.Fixed("beta", 0.1)
hparams = keras_tuner.engine.tuner_utils.convert_hyperparams_to_hparams(
hps, hparams_api
)
_check_hparams_equal(
hparams, {hparams_api.HParam("beta", hparams_api.Discrete([0.1])): 0.1}
)
hps = keras_tuner.engine.hyperparameters.HyperParameters()
hps.Fixed("type", "WIDE_AND_DEEP")
hparams = keras_tuner.engine.tuner_utils.convert_hyperparams_to_hparams(
hps, hparams_api
)
_check_hparams_equal(
hparams,
{
hparams_api.HParam(
"type", hparams_api.Discrete(["WIDE_AND_DEEP"])
): "WIDE_AND_DEEP"
},
)
hps = keras_tuner.engine.hyperparameters.HyperParameters()
hps.Fixed("condition", True)
hparams = keras_tuner.engine.tuner_utils.convert_hyperparams_to_hparams(
hps, hparams_api
)
_check_hparams_equal(
hparams,
{hparams_api.HParam("condition", hparams_api.Discrete([True])): True},
)
hps = keras_tuner.engine.hyperparameters.HyperParameters()
hps.Fixed("num_layers", 2)
hparams = keras_tuner.engine.tuner_utils.convert_hyperparams_to_hparams(
hps, hparams_api
)
_check_hparams_equal(
hparams,
{hparams_api.HParam("num_layers", hparams_api.Discrete([2])): 2},
)
def test_tuning_correctness(tmp_path):
tuner = keras_tuner.Tuner(
oracle=keras_tuner.tuners.randomsearch.RandomSearchOracle(
objective="loss", max_trials=2, seed=1337
),
hypermodel=MockHyperModel(),
directory=tmp_path,
)
tuner.search()
assert len(tuner.oracle.trials) == 2
m0_epochs = [float(np.average(x)) for x in MockHyperModel.mode_0]
m1_epochs = [float(np.average(x)) for x in MockHyperModel.mode_1]
# Score tracking correctness
first_trial, second_trial = sorted(
tuner.oracle.trials.values(), key=lambda t: t.score
)
assert first_trial.score == min(m0_epochs)
assert second_trial.score == min(m1_epochs)
assert tuner.oracle.get_best_trials(1)[0].trial_id == first_trial.trial_id
def assert_found_best_score(
tmp_path, hypermodel, tuner_class=keras_tuner.Tuner
):
tuner = tuner_class(
oracle=keras_tuner.tuners.randomsearch.RandomSearchOracle(
objective="loss", max_trials=2, seed=1337
),
hypermodel=hypermodel,
directory=tmp_path,
)
tuner.search(callbacks=[])
assert tuner.oracle.get_best_trials(1)[0].score == 3.0
def test_hypermodel_fit_return_a_dict(tmp_path):
class MyHyperModel(MockHyperModel):
def fit(self, hp, model, *args, **kwargs):
history = super().fit(hp, model, *args, **kwargs)
return {
"loss": min(history.history["loss"]),
"other_metric": np.random.rand(),
}
assert_found_best_score(tmp_path, MyHyperModel())
def test_hypermodel_fit_return_a_float(tmp_path):
class MyHyperModel(MockHyperModel):
def fit(self, hp, model, *args, **kwargs):
history = super().fit(hp, model, *args, **kwargs)
return min(history.history["loss"])
assert_found_best_score(tmp_path, MyHyperModel())
def test_hypermodel_fit_return_an_int(tmp_path):
class MyHyperModel(MockHyperModel):
def fit(self, hp, model, *args, **kwargs):
history = super().fit(hp, model, *args, **kwargs)
return int(min(history.history["loss"]))
assert_found_best_score(tmp_path, MyHyperModel())
def test_run_trial_return_none_without_update_trial(tmp_path):
class MyTuner(keras_tuner.Tuner):
def run_trial(self, trial, *fit_args, **fit_kwargs):
self.hypermodel.build(trial.hyperparameters).fit(
*fit_args, **fit_kwargs
)
with pytest.raises(
errors.FatalTypeError,
match="Did you forget",
):
assert_found_best_score(tmp_path, MockHyperModel(), MyTuner)
def test_run_trial_return_none_with_update_trial(tmp_path):
class MyTuner(keras_tuner.Tuner):
def run_trial(self, trial, *fit_args, **fit_kwargs):
history = self.hypermodel.build(trial.hyperparameters).fit(
*fit_args, **fit_kwargs
)
self.oracle.update_trial(
trial.trial_id, {"loss": min(history.history["loss"])}
)
with pytest.deprecated_call(match="Please remove the call"):
assert_found_best_score(tmp_path, MockHyperModel(), MyTuner)
def test_run_trial_return_history(tmp_path):
class MyTuner(keras_tuner.Tuner):
def run_trial(self, trial, *fit_args, **fit_kwargs):
return self.hypermodel.build(trial.hyperparameters).fit(
*fit_args, **fit_kwargs
)
assert_found_best_score(tmp_path, MockHyperModel(), MyTuner)
def test_run_trial_return_a_dict(tmp_path):
class MyTuner(keras_tuner.Tuner):
def run_trial(self, trial, *fit_args, **fit_kwargs):
history = self.hypermodel.build(trial.hyperparameters).fit(
*fit_args, **fit_kwargs
)
return {"loss": min(history.history["loss"])}
assert_found_best_score(tmp_path, MockHyperModel(), MyTuner)
def test_run_trial_return_a_float(tmp_path):
class MyTuner(keras_tuner.Tuner):
def run_trial(self, trial, *fit_args, **fit_kwargs):
history = self.hypermodel.build(trial.hyperparameters).fit(
*fit_args, **fit_kwargs
)
return min(history.history["loss"])
assert_found_best_score(tmp_path, MockHyperModel(), MyTuner)
def test_run_trial_return_float_list(tmp_path):
class MyTuner(keras_tuner.Tuner):
def run_trial(self, trial, *fit_args, **fit_kwargs):
ret = []
for _ in range(3):
history = self.hypermodel.build(trial.hyperparameters).fit(
*fit_args, **fit_kwargs
)
ret.append(min(history.history["loss"]))
return ret
assert_found_best_score(tmp_path, MockHyperModel(), MyTuner)
def test_tuner_errors(tmp_path):
# invalid oracle
with pytest.raises(
ValueError,
match="Expected `oracle` argument to be an instance of `Oracle`",
):
tuner_module.Tuner(
oracle="invalid", hypermodel=build_model, directory=tmp_path
)
# invalid hypermodel
with pytest.raises(
ValueError, match="`hypermodel` argument should be either"
):
tuner_module.Tuner(
oracle=keras_tuner.tuners.randomsearch.RandomSearchOracle(
objective="val_accuracy", max_trials=3
),
hypermodel="build_model",
directory=tmp_path,
)
# oversize model
with pytest.raises(RuntimeError, match="Oversized model"):
tuner = tuner_module.Tuner(
oracle=keras_tuner.tuners.randomsearch.RandomSearchOracle(
objective="val_accuracy", max_trials=3
),
hypermodel=build_model,
max_model_size=4,
directory=tmp_path,
)
tuner.search(
TRAIN_INPUTS,
TRAIN_TARGETS,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
# TODO: test no optimizer
def test_metric_direction_inferred_from_objective(tmp_path):
oracle = keras_tuner.tuners.randomsearch.RandomSearchOracle(
objective=keras_tuner.Objective("a", "max"), max_trials=1
)
oracle._set_project_dir(tmp_path, "untitled_project")
trial = oracle.create_trial("tuner0")
oracle.update_trial(trial.trial_id, {"a": 1})
trial = oracle.get_trial(trial.trial_id)
assert trial.metrics.get_direction("a") == "max"
oracle = keras_tuner.tuners.randomsearch.RandomSearchOracle(
objective=keras_tuner.Objective("a", "min"), max_trials=1
)
oracle._set_project_dir(tmp_path, "untitled_project2")
trial = oracle.create_trial("tuner0")
oracle.update_trial(trial.trial_id, {"a": 1})
trial = oracle.get_trial(trial.trial_id)
assert trial.metrics.get_direction("a") == "min"
def test_overwrite_true(tmp_path):
tuner = keras_tuner.tuners.RandomSearch(
hypermodel=build_model,
objective="val_accuracy",
max_trials=2,
directory=tmp_path,
)
tuner.search(
TRAIN_INPUTS, TRAIN_TARGETS, validation_data=(VAL_INPUTS, VAL_TARGETS)
)
assert len(tuner.oracle.trials) == 2
new_tuner = keras_tuner.tuners.RandomSearch(
hypermodel=build_model,
objective="val_accuracy",
max_trials=2,
directory=tmp_path,
overwrite=True,
)
assert len(new_tuner.oracle.trials) == 0
def test_correct_display_trial_number(tmp_path):
tuner = keras_tuner.tuners.RandomSearch(
hypermodel=build_model,
objective="val_accuracy",
max_trials=2,
directory=tmp_path,
)
tuner.search(
TRAIN_INPUTS, TRAIN_TARGETS, validation_data=(VAL_INPUTS, VAL_TARGETS)
)
new_tuner = keras_tuner.tuners.RandomSearch(
hypermodel=build_model,
objective="val_accuracy",
max_trials=6,
directory=tmp_path,
overwrite=False,
)
new_tuner.search(
TRAIN_INPUTS, TRAIN_TARGETS, validation_data=(VAL_INPUTS, VAL_TARGETS)
)
new_tuner.oracle._display.trial_number.items()
assert len(new_tuner.oracle.trials) == max(
new_tuner.oracle._display.trial_number.values()
)
def test_error_on_unknown_objective_direction(tmp_path):
with pytest.raises(
ValueError, match="Could not infer optimization direction"
):
keras_tuner.tuners.RandomSearch(
hypermodel=build_model,
objective="custom_metric",
max_trials=2,
directory=tmp_path,
)
def test_callbacks_run_each_execution(tmp_path):
callback_instances = set()
class LoggingCallback(keras.callbacks.Callback):
def on_train_begin(self, logs):
callback_instances.add(id(self))
logging_callback = LoggingCallback()
tuner = keras_tuner.tuners.RandomSearch(
hypermodel=build_model,
objective="val_accuracy",
max_trials=2,
executions_per_trial=3,
directory=tmp_path,
)
tuner.search(
TRAIN_INPUTS,
TRAIN_TARGETS,
validation_data=(VAL_INPUTS, VAL_TARGETS),
callbacks=[logging_callback],
)
    # For an unknown reason the callback sometimes runs only 5 times instead
    # of the expected 6 (2 trials x 3 executions). Accept both values until
    # the cause is found.
assert len(callback_instances) in {5, 6}
def test_build_and_fit_model(tmp_path):
class MyTuner(keras_tuner.tuners.RandomSearch):
def _build_and_fit_model(self, trial, *args, **kwargs):
self.was_called = True
return super()._build_and_fit_model(trial, *args, **kwargs)
tuner = MyTuner(
hypermodel=build_model,
objective="val_accuracy",
max_trials=2,
executions_per_trial=3,
directory=tmp_path,
)
tuner.run_trial(
tuner.oracle.create_trial("tuner0"),
TRAIN_INPUTS,
TRAIN_TARGETS,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
assert tuner.was_called
def test_build_and_fit_model_in_tuner(tmp_path):
class MyTuner(tuner_module.Tuner):
def _build_and_fit_model(self, trial, *args, **kwargs):
self.was_called = True
return super()._build_and_fit_model(trial, *args, **kwargs)
tuner = MyTuner(
oracle=keras_tuner.tuners.randomsearch.RandomSearchOracle(
objective="val_loss",
max_trials=2,
),
hypermodel=build_model,
directory=tmp_path,
)
tuner.run_trial(
tuner.oracle.create_trial("tuner0"),
TRAIN_INPUTS,
TRAIN_TARGETS,
validation_data=(VAL_INPUTS, VAL_TARGETS),
)
assert tuner.was_called
def test_init_build_all_hps_in_all_conditions(tmp_path):
class ConditionalHyperModel(MockHyperModel):
def build(self, hp):
model_type = hp.Choice("model_type", ["cnn", "mlp"])
with hp.conditional_scope("model_type", ["cnn"]):
if model_type == "cnn":
sub_cnn = hp.Choice("sub_cnn", ["a", "b"])
with hp.conditional_scope("sub_cnn", ["a"]):
if sub_cnn == "a":
hp.Int("n_filters_a", 2, 4)
with hp.conditional_scope("sub_cnn", ["b"]):
if sub_cnn == "b":
hp.Int("n_filters_b", 6, 8)
with hp.conditional_scope("model_type", ["mlp"]):
if model_type == "mlp":
sub_mlp = hp.Choice("sub_mlp", ["a", "b"])
with hp.conditional_scope("sub_mlp", ["a"]):
if sub_mlp == "a":
hp.Int("n_units_a", 2, 4)
with hp.conditional_scope("sub_mlp", ["b"]):
if sub_mlp == "b":
hp.Int("n_units_b", 6, 8)
more_block = hp.Boolean("more_block", default=False)
with hp.conditional_scope("more_block", [True]):
if more_block:
hp.Int("new_block_hp", 1, 3)
return super().build(hp)
def name_in_hp(name, hp):
return any(name == single_hp.name for single_hp in hp.space)
class MyTuner(tuner_module.Tuner):
def _populate_initial_space(self):
super()._populate_initial_space()
hp = self.oracle.hyperparameters
assert name_in_hp("model_type", hp)
assert name_in_hp("sub_cnn", hp)
assert name_in_hp("n_filters_a", hp)
assert name_in_hp("n_filters_b", hp)
assert name_in_hp("sub_mlp", hp)
assert name_in_hp("n_units_a", hp)
assert name_in_hp("n_units_b", hp)
assert name_in_hp("more_block", hp)
assert name_in_hp("new_block_hp", hp)
MyTuner(
oracle=keras_tuner.tuners.randomsearch.RandomSearchOracle(
objective="loss", max_trials=2, seed=1337
),
hypermodel=ConditionalHyperModel(),
directory=tmp_path,
)
def test_populate_initial_space_with_hp_parent_arg(tmp_path):
def build_model(hp):
hp.Boolean("parent", default=True)
hp.Boolean(
"child",
parent_name="parent",
parent_values=[False],
)
return keras.Sequential()
keras_tuner.RandomSearch(
build_model,
objective="val_accuracy",
directory=tmp_path,
max_trials=1,
)
def test_populate_initial_space_with_declare_hp(tmp_path):
class MyHyperModel(keras_tuner.HyperModel):
def declare_hyperparameters(self, hp):
hp.Boolean("bool")
def build(self, hp):
return keras.Sequential()
keras_tuner.RandomSearch(
MyHyperModel(),
objective="val_accuracy",
directory=tmp_path,
max_trials=1,
)
def test_build_did_not_return_keras_model(tmp_path):
tuner = keras_tuner.tuners.RandomSearch(
hypermodel=lambda hp: None,
objective="val_accuracy",
directory=tmp_path,
)
with pytest.raises(
errors.FatalTypeError,
match="Expected the model-building function",
):
tuner.search()
def test_callback_cannot_be_deep_copied(tmp_path):
tuner = keras_tuner.tuners.RandomSearch(
hypermodel=lambda hp: keras.Sequential(),
objective="val_accuracy",
directory=tmp_path,
)
with pytest.raises(
errors.FatalValueError,
match="All callbacks used during a search should be deep-copyable",
):
tuner.search(callbacks=[keras_tuner])
| keras-tuner/keras_tuner/engine/tuner_test.py/0 | {
"file_path": "keras-tuner/keras_tuner/engine/tuner_test.py",
"repo_id": "keras-tuner",
"token_count": 22946
} | 140 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"Basic random search tuner."
from keras_tuner.api_export import keras_tuner_export
from keras_tuner.engine import oracle as oracle_module
from keras_tuner.engine import trial as trial_module
from keras_tuner.engine import tuner as tuner_module
@keras_tuner_export("keras_tuner.oracles.RandomSearchOracle")
class RandomSearchOracle(oracle_module.Oracle):
"""Random search oracle.
Args:
objective: A string, `keras_tuner.Objective` instance, or a list of
`keras_tuner.Objective`s and strings. If a string, the direction of
the optimization (min or max) will be inferred. If a list of
`keras_tuner.Objective`, we will minimize the sum of all the
            objectives to minimize minus the sum of all the objectives to
maximize. The `objective` argument is optional when
`Tuner.run_trial()` or `HyperModel.fit()` returns a single float as
the objective to minimize.
max_trials: Integer, the total number of trials (model configurations)
to test at most. Note that the oracle may interrupt the search
            before `max_trials` models have been tested if the search space has
been exhausted. Defaults to 10.
seed: Optional integer, the random seed.
hyperparameters: Optional `HyperParameters` instance. Can be used to
override (or register in advance) hyperparameters in the search
space.
tune_new_entries: Boolean, whether hyperparameter entries that are
requested by the hypermodel but that were not specified in
`hyperparameters` should be added to the search space, or not. If
not, then the default value for these parameters will be used.
Defaults to True.
allow_new_entries: Boolean, whether the hypermodel is allowed to
request hyperparameter entries not listed in `hyperparameters`.
Defaults to True.
max_retries_per_trial: Integer. Defaults to 0. The maximum number of
times to retry a `Trial` if the trial crashed or the results are
invalid.
max_consecutive_failed_trials: Integer. Defaults to 3. The maximum
number of consecutive failed `Trial`s. When this number is reached,
the search will be stopped. A `Trial` is marked as failed when none
of the retries succeeded.
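    Example:
    A minimal sketch of pairing this oracle with a base `Tuner`; the
    `build_model` function and the `"results"` directory below are
    illustrative placeholders, not part of this module.
    ```python
    import keras_tuner
    oracle = keras_tuner.oracles.RandomSearchOracle(
        objective="val_loss",
        max_trials=5,
        seed=1337,
    )
    tuner = keras_tuner.Tuner(
        oracle=oracle,
        hypermodel=build_model,  # placeholder model-building function
        directory="results",  # placeholder output directory
    )
    ```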
"""
def __init__(
self,
objective=None,
max_trials=10,
seed=None,
hyperparameters=None,
allow_new_entries=True,
tune_new_entries=True,
max_retries_per_trial=0,
max_consecutive_failed_trials=3,
):
super().__init__(
objective=objective,
max_trials=max_trials,
hyperparameters=hyperparameters,
tune_new_entries=tune_new_entries,
allow_new_entries=allow_new_entries,
seed=seed,
max_retries_per_trial=max_retries_per_trial,
max_consecutive_failed_trials=max_consecutive_failed_trials,
)
def populate_space(self, trial_id):
"""Fill the hyperparameter space with values.
Args:
trial_id: A string, the ID for this Trial.
Returns:
A dictionary with keys "values" and "status", where "values" is
a mapping of parameter names to suggested values, and "status"
should be one of "RUNNING" (the trial can start normally), "IDLE"
(the oracle is waiting on something and cannot create a trial), or
"STOPPED" (the oracle has finished searching and no new trial should
be created).
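            For example, a "RUNNING" return value typically looks like
            `{"values": {"learning_rate": 0.001}, "status": "RUNNING"}`, where
            the hyperparameter name and value shown are only illustrative.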
"""
values = self._random_values()
if values is None:
return {"status": trial_module.TrialStatus.STOPPED, "values": None}
return {"status": trial_module.TrialStatus.RUNNING, "values": values}
@keras_tuner_export(
["keras_tuner.RandomSearch", "keras_tuner.tuners.RandomSearch"]
)
class RandomSearch(tuner_module.Tuner):
"""Random search tuner.
Args:
hypermodel: Instance of `HyperModel` class (or callable that takes
hyperparameters and returns a Model instance). It is optional when
`Tuner.run_trial()` is overridden and does not use
`self.hypermodel`.
objective: A string, `keras_tuner.Objective` instance, or a list of
`keras_tuner.Objective`s and strings. If a string, the direction of
the optimization (min or max) will be inferred. If a list of
`keras_tuner.Objective`, we will minimize the sum of all the
            objectives to minimize minus the sum of all the objectives to
maximize. The `objective` argument is optional when
`Tuner.run_trial()` or `HyperModel.fit()` returns a single float as
the objective to minimize.
max_trials: Integer, the total number of trials (model configurations)
to test at most. Note that the oracle may interrupt the search
            before `max_trials` models have been tested if the search space has
been exhausted. Defaults to 10.
seed: Optional integer, the random seed.
hyperparameters: Optional `HyperParameters` instance. Can be used to
override (or register in advance) hyperparameters in the search
space.
tune_new_entries: Boolean, whether hyperparameter entries that are
requested by the hypermodel but that were not specified in
`hyperparameters` should be added to the search space, or not. If
not, then the default value for these parameters will be used.
Defaults to True.
allow_new_entries: Boolean, whether the hypermodel is allowed to
request hyperparameter entries not listed in `hyperparameters`.
Defaults to True.
max_retries_per_trial: Integer. Defaults to 0. The maximum number of
times to retry a `Trial` if the trial crashed or the results are
invalid.
max_consecutive_failed_trials: Integer. Defaults to 3. The maximum
number of consecutive failed `Trial`s. When this number is reached,
the search will be stopped. A `Trial` is marked as failed when none
of the retries succeeded.
**kwargs: Keyword arguments relevant to all `Tuner` subclasses.
Please see the docstring for `Tuner`.
"""
def __init__(
self,
hypermodel=None,
objective=None,
max_trials=10,
seed=None,
hyperparameters=None,
tune_new_entries=True,
allow_new_entries=True,
max_retries_per_trial=0,
max_consecutive_failed_trials=3,
**kwargs
):
self.seed = seed
oracle = RandomSearchOracle(
objective=objective,
max_trials=max_trials,
seed=seed,
hyperparameters=hyperparameters,
tune_new_entries=tune_new_entries,
allow_new_entries=allow_new_entries,
max_retries_per_trial=max_retries_per_trial,
max_consecutive_failed_trials=max_consecutive_failed_trials,
)
super().__init__(oracle, hypermodel, **kwargs)
| keras-tuner/keras_tuner/tuners/randomsearch.py/0 | {
"file_path": "keras-tuner/keras_tuner/tuners/randomsearch.py",
"repo_id": "keras-tuner",
"token_count": 3140
} | 141 |
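A minimal usage sketch for the `RandomSearch` tuner defined above; the model-building function, the hyperparameter range, and the directory/project names are illustrative assumptions, not part of the source file.

import keras
import keras_tuner

def build_model(hp):
    # `hp.Int` registers a hyperparameter that RandomSearchOracle will sample.
    model = keras.Sequential(
        [
            keras.layers.Dense(hp.Int("units", 32, 128, step=32), activation="relu"),
            keras.layers.Dense(1),
        ]
    )
    model.compile(optimizer="adam", loss="mse")
    return model

tuner = keras_tuner.RandomSearch(
    hypermodel=build_model,
    objective="val_loss",
    max_trials=5,
    overwrite=True,
    directory="tuner_dir",  # hypothetical output directory
    project_name="random_search_demo",
)
# tuner.search(x_train, y_train, validation_data=(x_val, y_val))  # data assumed to exist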
# Rebuilding the gRPC protos
python -m grpc_tools.protoc --python_out=. --grpc_python_out=. --proto_path=. keras_tuner/protos/keras_tuner.proto
python -m grpc_tools.protoc --python_out=. --grpc_python_out=. --proto_path=. keras_tuner/protos/service.proto
| keras-tuner/shell/grpc.sh/0 | {
"file_path": "keras-tuner/shell/grpc.sh",
"repo_id": "keras-tuner",
"token_count": 107
} | 142 |
import os
# When using jax.experimental.enable_x64 in unit test, we want to keep the
# default dtype with 32 bits, aligning it with Keras's default.
os.environ["JAX_DEFAULT_DTYPE_BITS"] = "32"
try:
# When using torch and tensorflow, torch needs to be imported first,
# otherwise it will segfault upon import. This should force the torch
# import to happen first for all tests.
import torch # noqa: F401
except ImportError:
pass
import pytest # noqa: E402
from keras.backend import backend # noqa: E402
def pytest_configure(config):
config.addinivalue_line(
"markers",
"requires_trainable_backend: mark test for trainable backend only",
)
def pytest_collection_modifyitems(config, items):
requires_trainable_backend = pytest.mark.skipif(
backend() == "numpy",
reason="Trainer not implemented for NumPy backend.",
)
for item in items:
if "requires_trainable_backend" in item.keywords:
item.add_marker(requires_trainable_backend)
| keras/conftest.py/0 | {
"file_path": "keras/conftest.py",
"repo_id": "keras",
"token_count": 376
} | 143 |
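A hedged illustration (an assumed test file, not part of the dump above) of how the `requires_trainable_backend` marker registered in `conftest.py` is used; `pytest_collection_modifyitems` turns it into a skip on the NumPy backend.

import pytest

@pytest.mark.requires_trainable_backend
def test_fit_one_step():
    # Skipped automatically when the active backend is "numpy", because
    # conftest.py attaches a skipif marker to it at collection time.
    ...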
from keras import activations
from keras import applications
from keras import backend
from keras import constraints
from keras import datasets
from keras import initializers
from keras import layers
from keras import models
from keras import ops
from keras import optimizers
from keras import regularizers
from keras import utils
from keras.backend import KerasTensor
from keras.layers import Input
from keras.layers import Layer
from keras.models import Functional
from keras.models import Model
from keras.models import Sequential
from keras.version import __version__
| keras/keras/__init__.py/0 | {
"file_path": "keras/keras/__init__.py",
"repo_id": "keras",
"token_count": 142
} | 144 |
from keras.backend.common import global_state
from keras.testing import test_case
from keras.utils.naming import auto_name
class GlobalStateTest(test_case.TestCase):
def test_clear_session(self):
name0 = auto_name("somename")
self.assertEqual(name0, "somename")
name1 = auto_name("somename")
self.assertEqual(name1, "somename_1")
global_state.clear_session()
name0 = auto_name("somename")
self.assertEqual(name0, "somename")
| keras/keras/backend/common/global_state_test.py/0 | {
"file_path": "keras/keras/backend/common/global_state_test.py",
"repo_id": "keras",
"token_count": 204
} | 145 |
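A small sketch of the behavior exercised by `GlobalStateTest`: `auto_name()` keeps per-name counters in global state, and `clear_session()` resets them. The name "dense" is an arbitrary example.

from keras.backend.common import global_state
from keras.utils.naming import auto_name

global_state.clear_session()
print(auto_name("dense"))   # "dense"
print(auto_name("dense"))   # "dense_1"
global_state.clear_session()
print(auto_name("dense"))   # "dense" again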
class JaxLayer:
pass
| keras/keras/backend/jax/layer.py/0 | {
"file_path": "keras/keras/backend/jax/layer.py",
"repo_id": "keras",
"token_count": 11
} | 146 |
import jax
import numpy as np
from jax import lax
from jax import numpy as jnp
from keras.backend import standardize_data_format
from keras.backend import standardize_dtype
from keras.backend.common.backend_utils import (
compute_conv_transpose_padding_args_for_jax,
)
from keras.backend.config import epsilon
from keras.backend.numpy.core import cast
from keras.backend.numpy.core import convert_to_tensor
from keras.backend.numpy.core import is_tensor
from keras.utils.module_utils import scipy
def relu(x):
x = convert_to_tensor(x)
return np.maximum(x, np.array(0.0, x.dtype))
def relu6(x):
x = convert_to_tensor(x)
    # np.clip incorrectly promotes bfloat16 to float32, so we replace it with
# np.minimum and np.maximum here
return np.minimum(
np.maximum(x, np.array(0.0, x.dtype)), np.array(6.0, x.dtype)
)
def sigmoid(x):
x = convert_to_tensor(x)
return np.array(1.0, x.dtype) / (np.array(1.0, x.dtype) + np.exp(-x))
def tanh(x):
return np.tanh(x)
def softplus(x):
x = convert_to_tensor(x)
return np.logaddexp(x, np.array(0.0, x.dtype))
def softsign(x):
x = convert_to_tensor(x)
return x / (np.array(1.0, x.dtype) + np.abs(x))
def silu(x):
x = convert_to_tensor(x)
return x * sigmoid(x)
def log_sigmoid(x):
x = convert_to_tensor(x)
return -softplus(-x)
def leaky_relu(x, negative_slope=0.2):
x = convert_to_tensor(x)
return np.maximum(x, np.array(negative_slope, x.dtype) * x)
def hard_sigmoid(x):
    # python numbers will be promoted to float64 by np, so it's necessary to
# first convert the python numbers to np scalars
x = x / np.array(6.0, x.dtype) + np.array(0.5, x.dtype)
return np.where(
x <= 0.0,
np.array(0.0, x.dtype),
np.where(x >= 1.0, np.array(1.0, x.dtype), x),
)
def hard_silu(x):
return x * hard_sigmoid(x)
def elu(x, alpha=1.0):
x = convert_to_tensor(x)
return np.where(
x >= np.array(0.0, x.dtype), x, np.array(alpha, x.dtype) * np.expm1(x)
)
def selu(
x,
alpha=1.6732632423543772848170429916717,
scale=1.0507009873554804934193349852946,
):
x = convert_to_tensor(x)
return np.array(scale, x.dtype) * elu(x, alpha)
def gelu(x, approximate=True):
x = convert_to_tensor(x)
    # follows JAX's implementation
if approximate:
sqrt_2_over_pi = np.sqrt(2 / np.pi).astype(x.dtype)
cdf = np.array(0.5, x.dtype) * (
np.array(1.0, x.dtype)
+ np.tanh(
sqrt_2_over_pi
* (x + np.array(0.044715, x.dtype) * (x**3).astype(x.dtype))
)
)
return x * cdf
else:
sqrt_2 = np.sqrt(2).astype(x.dtype)
return (
x
* (scipy.special.erf(x / sqrt_2) + 1).astype(x.dtype)
/ np.array(2, x.dtype)
)
def softmax(x, axis=None):
exp_x = np.exp(x - np.max(x, axis=axis, keepdims=True))
return exp_x / np.sum(exp_x, axis=axis, keepdims=True)
def log_softmax(x, axis=None):
max_x = np.max(x, axis=axis, keepdims=True)
logsumexp = np.log(np.exp(x - max_x).sum(axis=axis, keepdims=True))
return x - max_x - logsumexp
def _convert_to_spatial_operand(
x,
num_spatial_dims,
data_format="channels_last",
include_batch_and_channels=True,
):
# Helper function that converts an operand to a spatial operand.
x = (x,) * num_spatial_dims if isinstance(x, int) else x
if not include_batch_and_channels:
return x
if data_format == "channels_last":
x = (1,) + x + (1,)
else:
x = (1,) + (1,) + x
return x
def _pool(
inputs,
initial_value,
reduce_fn,
pool_size,
strides=None,
padding="valid",
):
"""Helper function to define pooling functions.
Args:
inputs: input data of shape `N+2`.
initial_value: the initial value for the reduction.
reduce_fn: a reduce function of the form `(T, T) -> T`.
pool_size: a sequence of `N` integers, representing the window size to
reduce over.
strides: a sequence of `N` integers, representing the inter-window
strides (default: `(1, ..., 1)`).
padding: either the string `same` or `valid`.
Returns:
The output of the reduction for each window slice.
"""
if padding not in ("same", "valid"):
raise ValueError(
f"Invalid padding '{padding}', must be 'same' or 'valid'."
)
padding = padding.upper()
return np.array(
lax.reduce_window(
inputs,
initial_value,
reduce_fn,
pool_size,
strides,
padding,
)
)
def max_pool(
inputs,
pool_size,
strides=None,
padding="valid",
data_format=None,
):
data_format = standardize_data_format(data_format)
num_spatial_dims = inputs.ndim - 2
pool_size = _convert_to_spatial_operand(
pool_size, num_spatial_dims, data_format
)
strides = pool_size if strides is None else strides
strides = _convert_to_spatial_operand(
strides, num_spatial_dims, data_format
)
return _pool(inputs, -jnp.inf, lax.max, pool_size, strides, padding)
def average_pool(
inputs,
pool_size,
strides,
padding,
data_format=None,
):
data_format = standardize_data_format(data_format)
num_spatial_dims = inputs.ndim - 2
pool_size = _convert_to_spatial_operand(
pool_size, num_spatial_dims, data_format
)
strides = pool_size if strides is None else strides
strides = _convert_to_spatial_operand(
strides, num_spatial_dims, data_format
)
pooled = _pool(inputs, 0.0, lax.add, pool_size, strides, padding)
if padding == "valid":
# Avoid the extra reduce_window.
return pooled / np.prod(pool_size)
else:
# Count the number of valid entries at each input point, then use that
# for computing average. Assumes that any two arrays of same shape will
# be padded the same. Avoid broadcasting on axis where pooling is
# skipped.
shape = [
(a if b != 1 else 1) for (a, b) in zip(inputs.shape, pool_size)
]
window_counts = _pool(
jnp.ones(shape, inputs.dtype),
0.0,
lax.add,
pool_size,
strides,
padding,
)
return pooled / window_counts
def _convert_to_lax_conv_dimension_numbers(
num_spatial_dims,
data_format="channels_last",
transpose=False,
):
"""Create a `lax.ConvDimensionNumbers` for the given inputs."""
num_dims = num_spatial_dims + 2
if data_format == "channels_last":
spatial_dims = tuple(range(1, num_dims - 1))
inputs_dn = (0, num_dims - 1) + spatial_dims
else:
spatial_dims = tuple(range(2, num_dims))
inputs_dn = (0, 1) + spatial_dims
if transpose:
kernel_dn = (num_dims - 2, num_dims - 1) + tuple(range(num_dims - 2))
else:
kernel_dn = (num_dims - 1, num_dims - 2) + tuple(range(num_dims - 2))
return lax.ConvDimensionNumbers(
lhs_spec=inputs_dn, rhs_spec=kernel_dn, out_spec=inputs_dn
)
def conv(
inputs,
kernel,
strides=1,
padding="valid",
data_format=None,
dilation_rate=1,
):
data_format = standardize_data_format(data_format)
num_spatial_dims = inputs.ndim - 2
dimension_numbers = _convert_to_lax_conv_dimension_numbers(
num_spatial_dims,
data_format,
transpose=False,
)
strides = _convert_to_spatial_operand(
strides,
num_spatial_dims,
data_format,
include_batch_and_channels=False,
)
dilation_rate = _convert_to_spatial_operand(
dilation_rate,
num_spatial_dims,
data_format,
include_batch_and_channels=False,
)
if data_format == "channels_last":
channels = inputs.shape[-1]
else:
channels = inputs.shape[1]
kernel_in_channels = kernel.shape[-2]
if channels % kernel_in_channels > 0:
raise ValueError(
"The number of input channels must be evenly divisible by "
f"kernel's in_channels. Received input channels {channels} and "
f"kernel in_channels {kernel_in_channels}. "
)
feature_group_count = channels // kernel_in_channels
return np.array(
jax.lax.conv_general_dilated(
inputs,
kernel if is_tensor(kernel) else kernel.numpy(),
strides,
padding,
rhs_dilation=dilation_rate,
dimension_numbers=dimension_numbers,
feature_group_count=feature_group_count,
)
)
def depthwise_conv(
inputs,
kernel,
strides=1,
padding="valid",
data_format=None,
dilation_rate=1,
):
data_format = standardize_data_format(data_format)
num_spatial_dims = inputs.ndim - 2
dimension_numbers = _convert_to_lax_conv_dimension_numbers(
num_spatial_dims,
data_format,
transpose=False,
)
strides = _convert_to_spatial_operand(
strides,
num_spatial_dims,
data_format,
include_batch_and_channels=False,
)
dilation_rate = _convert_to_spatial_operand(
dilation_rate,
num_spatial_dims,
data_format,
include_batch_and_channels=False,
)
feature_group_count = (
inputs.shape[-1] if data_format == "channels_last" else inputs.shape[1]
)
kernel = jnp.reshape(
kernel if is_tensor(kernel) else kernel.numpy(),
kernel.shape[:-2] + (1, feature_group_count * kernel.shape[-1]),
)
return np.array(
jax.lax.conv_general_dilated(
inputs,
kernel,
strides,
padding,
rhs_dilation=dilation_rate,
dimension_numbers=dimension_numbers,
feature_group_count=feature_group_count,
)
)
def separable_conv(
inputs,
depthwise_kernel,
pointwise_kernel,
strides=1,
padding="valid",
data_format=None,
dilation_rate=1,
):
data_format = standardize_data_format(data_format)
depthwise_conv_output = depthwise_conv(
inputs,
depthwise_kernel,
strides,
padding,
data_format,
dilation_rate,
)
return conv(
depthwise_conv_output,
pointwise_kernel,
strides=1,
padding="valid",
data_format=data_format,
dilation_rate=dilation_rate,
)
def conv_transpose(
inputs,
kernel,
strides=1,
padding="valid",
output_padding=None,
data_format=None,
dilation_rate=1,
):
data_format = standardize_data_format(data_format)
num_spatial_dims = inputs.ndim - 2
padding_values = compute_conv_transpose_padding_args_for_jax(
input_shape=inputs.shape,
kernel_shape=kernel.shape,
strides=strides,
padding=padding,
output_padding=output_padding,
dilation_rate=dilation_rate,
)
dimension_numbers = _convert_to_lax_conv_dimension_numbers(
num_spatial_dims,
data_format,
transpose=False,
)
strides = _convert_to_spatial_operand(
strides,
num_spatial_dims,
data_format,
include_batch_and_channels=False,
)
dilation_rate = _convert_to_spatial_operand(
dilation_rate,
num_spatial_dims,
data_format,
include_batch_and_channels=False,
)
return np.array(
jax.lax.conv_transpose(
inputs,
kernel if is_tensor(kernel) else kernel.numpy(),
strides,
padding=padding_values,
rhs_dilation=dilation_rate,
dimension_numbers=dimension_numbers,
transpose_kernel=True,
)
)
def one_hot(x, num_classes, axis=-1, dtype="float32"):
x = convert_to_tensor(x)
input_shape = x.shape
# Shrink the last dimension if the shape is (..., 1).
if input_shape and input_shape[-1] == 1 and len(input_shape) > 1:
input_shape = tuple(input_shape[:-1])
x = x.reshape(-1)
if not num_classes:
num_classes = np.max(x) + 1
batch_size = x.shape[0]
categorical = np.zeros((batch_size, num_classes), dtype=dtype)
valid_indices = x >= 0
categorical[np.arange(batch_size)[valid_indices], x[valid_indices]] = 1
# First, reshape the array with the extra dimension at the end
output_shape = input_shape + (num_classes,)
categorical = np.reshape(categorical, output_shape)
# Then, move this new dimension to the right place (according to axis)
if axis != -1:
categorical = np.moveaxis(categorical, -1, axis)
return categorical
def multi_hot(x, num_classes, axis=-1, dtype="float32"):
x = convert_to_tensor(x)
reduction_axis = 1 if len(x.shape) > 1 else 0
outputs = np.max(
one_hot(cast(x, "int32"), num_classes, axis=axis, dtype=dtype),
axis=reduction_axis,
)
return outputs
def categorical_crossentropy(target, output, from_logits=False, axis=-1):
target = np.array(target)
output = np.array(output)
if target.shape != output.shape:
raise ValueError(
"Arguments `target` and `output` must have the same shape. "
"Received: "
f"target.shape={target.shape}, output.shape={output.shape}"
)
if len(target.shape) < 1:
raise ValueError(
"Arguments `target` and `output` must be at least rank 1. "
"Received: "
f"target.shape={target.shape}, output.shape={output.shape}"
)
if from_logits:
log_prob = log_softmax(output, axis=axis)
else:
output = output / np.sum(output, axis, keepdims=True)
output = np.clip(output, epsilon(), 1.0 - epsilon())
log_prob = np.log(output)
return -np.sum(target * log_prob, axis=axis)
def sparse_categorical_crossentropy(target, output, from_logits=False, axis=-1):
target = np.array(target, dtype="int32")
output = np.array(output)
if len(target.shape) == len(output.shape) and target.shape[-1] == 1:
target = np.squeeze(target, axis=-1)
if len(output.shape) < 1:
raise ValueError(
"Argument `output` must be at least rank 1. "
"Received: "
f"output.shape={output.shape}"
)
if target.shape != output.shape[:-1]:
raise ValueError(
"Arguments `target` and `output` must have the same shape "
"up until the last dimension: "
f"target.shape={target.shape}, output.shape={output.shape}"
)
if from_logits:
log_prob = log_softmax(output, axis=axis)
else:
output = output / np.sum(output, axis, keepdims=True)
output = np.clip(output, epsilon(), 1.0 - epsilon())
log_prob = np.log(output)
target = one_hot(target, output.shape[axis], axis=axis)
return -np.sum(target * log_prob, axis=axis)
def binary_crossentropy(target, output, from_logits=False):
target = np.array(target)
output = np.array(output)
if target.shape != output.shape:
raise ValueError(
"Arguments `target` and `output` must have the same shape. "
"Received: "
f"target.shape={target.shape}, output.shape={output.shape}"
)
if from_logits:
output = sigmoid(output)
output = np.clip(output, epsilon(), 1.0 - epsilon())
bce = target * np.log(output)
bce += (1.0 - target) * np.log(1.0 - output)
return -bce
def moments(x, axes, keepdims=False, synchronized=False):
if synchronized:
raise NotImplementedError(
"Argument synchronized=True is not supported with NumPy."
)
axes = tuple(axes) if isinstance(axes, list) else axes
# The dynamic range of float16 is too limited for statistics. As a
# workaround, we simply perform the operations on float32 and convert back
# to float16
need_cast = False
ori_dtype = standardize_dtype(x.dtype)
if ori_dtype == "float16":
need_cast = True
x = cast(x, "float32")
mean = np.mean(x, axes, keepdims=True)
    # The variance is computed using $Var = E[|x|^2] - |E[x]|^2$. It is faster
# but less numerically stable.
variance = np.mean(np.square(x), axis=axes, keepdims=True) - np.square(mean)
if not keepdims:
mean = np.squeeze(mean, axes)
variance = np.squeeze(variance, axes)
if need_cast:
        # avoid overflow and underflow when casting from float32 back to float16
mean = np.clip(mean, np.finfo(np.float16).min, np.finfo(np.float16).max)
variance = np.clip(
variance, np.finfo(np.float16).min, np.finfo(np.float16).max
)
mean = cast(mean, ori_dtype)
variance = cast(variance, ori_dtype)
return mean, variance
def batch_normalization(
x, mean, variance, axis, offset=None, scale=None, epsilon=1e-3
):
shape = [1] * len(x.shape)
shape[axis] = mean.shape[0]
mean = np.reshape(mean, shape)
variance = np.reshape(variance, shape)
inv = 1.0 / np.sqrt(variance + epsilon)
if scale is not None:
scale = np.reshape(scale, shape)
inv = inv * scale
res = -mean * inv
if offset is not None:
offset = np.reshape(offset, shape)
res = res + offset
return x * inv + res
| keras/keras/backend/numpy/nn.py/0 | {
"file_path": "keras/keras/backend/numpy/nn.py",
"repo_id": "keras",
"token_count": 8042
} | 147 |
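A quick numerical check, in plain NumPy and independent of the backend module above, of the max-subtraction trick used by `softmax()` and `log_softmax()`: shifting by the per-axis maximum leaves the result unchanged but prevents `exp()` from overflowing on large logits. The logit values are arbitrary.

import numpy as np

logits = np.array([[1000.0, 1001.0, 1002.0]])
naive = np.exp(logits) / np.sum(np.exp(logits), axis=-1, keepdims=True)
shifted = logits - np.max(logits, axis=-1, keepdims=True)
stable = np.exp(shifted) / np.sum(np.exp(shifted), axis=-1, keepdims=True)
print(naive)    # [[nan nan nan]] -- exp() overflowed
print(stable)   # [[0.09003057 0.24472847 0.66524096]]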
import tensorflow as tf
from keras import backend
from keras.backend.common import KerasVariable
from keras.optimizers import base_optimizer
class TFOptimizer(base_optimizer.BaseOptimizer):
"""A class for Tensorflow specific optimizer logic.
The major behavior change for this class is for tf.distribute.
It will override methods from base Keras core Optimizer,
which provide distribute specific functionality, e.g. variable
creation, loss reduction, etc.
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._distribution_strategy = tf.distribute.get_strategy()
def add_variable_from_reference(
self, reference_variable, name=None, initializer="zeros"
):
if isinstance(reference_variable, backend.Variable):
colocate_var = reference_variable.value
else:
colocate_var = reference_variable
with self._distribution_strategy.extended.colocate_vars_with(
colocate_var
):
return super().add_variable_from_reference(
reference_variable, name=name, initializer=initializer
)
def stateless_apply(self, optimizer_variables, grads, trainable_variables):
# This is mainly due to the interaction with tf.distribute.Strategy,
# which requires tf.Variable as the inputs for most of its APIs.
raise ValueError(
"stateless_apply is not supported with the TensorFlow backend "
"(as it is incompatible with tf.distribute)."
)
def assign_add(self, variable, value):
if isinstance(variable, KerasVariable):
variable = variable.value
value = tf.cast(value, variable.dtype)
if isinstance(value, tf.IndexedSlices):
variable.scatter_add(value)
else:
variable.assign_add(value)
def assign_sub(self, variable, value):
if isinstance(variable, KerasVariable):
variable = variable.value
value = tf.cast(value, variable.dtype)
if isinstance(value, tf.IndexedSlices):
variable.scatter_sub(value)
else:
variable.assign_sub(value)
def _var_key(self, variable):
if isinstance(variable, backend.Variable):
variable = variable.value # Convert to tf.Variable
if hasattr(variable, "_distributed_container"):
variable = variable._distributed_container()
elif (
isinstance(variable, tf.__internal__.CompositeTensor)
and hasattr(variable, "handle")
and hasattr(variable.handle, "_distributed_container")
):
# For ResourceVariables, the _distributed_container attribute
# is added to their handle tensors.
variable = variable.handle._distributed_container()
return variable._unique_id
def _apply_weight_decay(self, variables):
if self.weight_decay is None:
return
def distributed_apply_weight_decay(distribution, variables, **kwargs):
def weight_decay_fn(variable):
if self._use_weight_decay(variable):
lr = tf.cast(self.learning_rate, variable.dtype)
wd = tf.cast(self.weight_decay, variable.dtype)
variable.assign_sub(variable * wd * lr)
for variable in variables:
if isinstance(variable, backend.Variable):
variable = variable.value # Convert to tf.Variable
distribution.extended.update(
variable, weight_decay_fn, group=False
)
tf.__internal__.distribute.interim.maybe_merge_call(
distributed_apply_weight_decay,
self._distribution_strategy,
variables,
)
def _backend_update_step(self, grads, trainable_variables, learning_rate):
trainable_variables = [
v.value if isinstance(v, backend.Variable) else v
for v in trainable_variables
]
tf.__internal__.distribute.interim.maybe_merge_call(
self._distributed_tf_update_step,
self._distribution_strategy,
list(zip(grads, trainable_variables)),
learning_rate,
)
def _distributed_tf_update_step(
self, distribution, grads_and_vars, learning_rate
):
def apply_grad_to_update_var(var, grad):
return self.update_step(grad, var, learning_rate)
for grad, var in grads_and_vars:
distribution.extended.update(
var, apply_grad_to_update_var, args=(grad,), group=False
)
def _overwrite_model_variables_with_average_value(
self, trainable_variables
):
"""Overwrite model variables with their moving average values.
This function overwrites variables on each device.
Args:
            trainable_variables: list of model variables.
"""
trainable_variables = [
v.value if isinstance(v, backend.Variable) else v
for v in trainable_variables
]
# Override model variable by the stored average value on all devices.
for var, average_var in zip(
trainable_variables, self._model_variables_moving_average
):
self._distribution_strategy.extended.update(
var, lambda a, b: a.assign(b), args=(average_var,)
)
def _backend_increment_gradient_accumulators(self, grads):
def update_accumulator(var, grad):
var.assign(var + grad)
accumulators = [v.value for v in self._accumulated_gradients]
def _distributed_tf_increment_grad_acc(
distribution, grads, accumulators
):
for grad, var in zip(grads, accumulators):
distribution.extended.update(
var, update_accumulator, args=(grad,), group=False
)
tf.__internal__.distribute.interim.maybe_merge_call(
_distributed_tf_increment_grad_acc,
self._distribution_strategy,
grads,
accumulators,
)
def _clip_by_norm(self, values, axes=None):
# We need to use TF-specific OP to support the case,
# when `values` are `tf.IndexedSlices`.
return tf.clip_by_norm(values, self.clipnorm, axes)
| keras/keras/backend/tensorflow/optimizer.py/0 | {
"file_path": "keras/keras/backend/tensorflow/optimizer.py",
"repo_id": "keras",
"token_count": 2803
} | 148 |
import numpy as np
from keras import backend
from keras import constraints
from keras import testing
def get_example_array():
np.random.seed(3537)
example_array = np.random.random((100, 100)) * 100.0 - 50.0
example_array[0, 0] = 0.0 # Possible edge case
return example_array
class ConstraintsTest(testing.TestCase):
def test_max_norm(self):
constraint_fn = constraints.MaxNorm(2.0)
x = np.array([[0, 0, 0], [1.0, 0, 0], [3, 0, 0], [3, 3, 3]]).T
target = np.array(
[
[0, 0, 0],
[1.0, 0, 0],
[2.0, 0, 0],
[2.0 / np.sqrt(3), 2.0 / np.sqrt(3), 2.0 / np.sqrt(3)],
]
).T
output = constraint_fn(x)
self.assertAllClose(target, output)
def test_non_neg(self):
constraint_fn = constraints.NonNeg()
output = constraint_fn(get_example_array())
output = backend.convert_to_numpy(output)
self.assertTrue((np.min(output, axis=1) >= 0.0).all())
def test_unit_norm(self):
constraint_fn = constraints.UnitNorm()
output = constraint_fn(get_example_array())
output = backend.convert_to_numpy(output)
l2 = np.sqrt(np.sum(np.square(output), axis=0))
self.assertAllClose(l2, 1.0)
def test_min_max_norm(self):
constraint_fn = constraints.MinMaxNorm(min_value=0.2, max_value=0.5)
output = constraint_fn(get_example_array())
output = backend.convert_to_numpy(output)
l2 = np.sqrt(np.sum(np.square(output), axis=0))
self.assertFalse(l2[l2 < 0.2])
self.assertFalse(l2[l2 > 0.5 + 1e-6])
def test_get_method(self):
obj = constraints.get("unit_norm")
self.assertTrue(obj, constraints.UnitNorm)
obj = constraints.get(None)
self.assertEqual(obj, None)
with self.assertRaises(ValueError):
constraints.get("typo")
def test_default_constraint_call(self):
constraint_fn = constraints.Constraint()
x = np.array([1.0, 2.0, 3.0])
output = constraint_fn(x)
self.assertAllClose(x, output)
def test_constraint_get_config(self):
constraint_fn = constraints.Constraint()
config = constraint_fn.get_config()
self.assertEqual(config, {})
def test_constraint_from_config(self):
constraint_fn = constraints.Constraint()
config = constraint_fn.get_config()
recreated_constraint_fn = constraints.Constraint.from_config(config)
self.assertIsInstance(recreated_constraint_fn, constraints.Constraint)
def test_max_norm_get_config(self):
constraint_fn = constraints.MaxNorm(max_value=3.0, axis=1)
config = constraint_fn.get_config()
expected_config = {"max_value": 3.0, "axis": 1}
self.assertEqual(config, expected_config)
def test_unit_norm_get_config(self):
constraint_fn = constraints.UnitNorm(axis=1)
config = constraint_fn.get_config()
expected_config = {"axis": 1}
self.assertEqual(config, expected_config)
def test_min_max_norm_get_config(self):
constraint_fn = constraints.MinMaxNorm(
min_value=0.5, max_value=2.0, rate=0.7, axis=1
)
config = constraint_fn.get_config()
expected_config = {
"min_value": 0.5,
"max_value": 2.0,
"rate": 0.7,
"axis": 1,
}
self.assertEqual(config, expected_config)
| keras/keras/constraints/constraints_test.py/0 | {
"file_path": "keras/keras/constraints/constraints_test.py",
"repo_id": "keras",
"token_count": 1623
} | 149 |
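A hedged usage sketch for the constraints exercised above: `MaxNorm` rescales each weight column to an L2 norm of at most `max_value` when applied. The weight values are arbitrary.

import numpy as np
from keras import constraints, ops

constraint_fn = constraints.MaxNorm(max_value=2.0, axis=0)
w = np.full((8, 4), 3.0)                               # each column has norm 3 * sqrt(8)
w_clipped = ops.convert_to_numpy(constraint_fn(w))
print(np.sqrt(np.sum(np.square(w_clipped), axis=0)))   # ~[2. 2. 2. 2.]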
from keras.dtype_policies.dtype_policy import DTypePolicy
from keras.dtype_policies.dtype_policy import dtype_policy
from keras.dtype_policies.dtype_policy import set_dtype_policy
from keras.testing import test_case
class DTypePolicyTest(test_case.TestCase):
def test_initialization_valid_name(self):
"""Test initialization with a valid name."""
policy = DTypePolicy("mixed_float16")
self.assertEqual(policy.compute_dtype, "float16")
self.assertEqual(policy.variable_dtype, "float32")
def test_initialization_invalid_name(self):
"""Test initialization with an invalid name."""
with self.assertRaisesRegex(ValueError, "Cannot convert"):
DTypePolicy("invalid_name")
def test_initialization_non_string_name(self):
"""Test initialization with a non-string name."""
with self.assertRaisesRegex(TypeError, "'name' must be a string"):
DTypePolicy(123)
def test_properties_mixed_float16(self):
"""Test properties for 'mixed_float16'."""
policy = DTypePolicy("mixed_float16")
self.assertEqual(policy.compute_dtype, "float16")
self.assertEqual(policy.variable_dtype, "float32")
def test_properties_mixed_bfloat16(self):
"""Test properties for 'mixed_bfloat16'."""
policy = DTypePolicy("mixed_bfloat16")
self.assertEqual(policy.compute_dtype, "bfloat16")
self.assertEqual(policy.variable_dtype, "float32")
def test_initialization_with_invalid_name_behaviour(self):
"""Test initialization behavior with an invalid name."""
with self.assertRaisesRegex(ValueError, "Cannot convert"):
DTypePolicy("invalid_name")
def test_properties(self):
"""Test variable_dtype, compute_dtype, and name properties."""
policy = DTypePolicy("mixed_float16")
self.assertEqual(policy.variable_dtype, "float32")
self.assertEqual(policy.compute_dtype, "float16")
self.assertEqual(policy.name, "mixed_float16")
def test_repr(self):
"""Test __repr__ method."""
policy = DTypePolicy("mixed_float16")
self.assertEqual(repr(policy), '<DTypePolicy "mixed_float16">')
def test_get_config_from_config(self):
"""Test get_config and from_config methods."""
policy = DTypePolicy("mixed_float16")
config = policy.get_config()
self.assertEqual(config, {"name": "mixed_float16"})
new_policy = DTypePolicy.from_config(config)
self.assertEqual(new_policy.name, "mixed_float16")
class DTypePolicyGlobalFunctionsTest(test_case.TestCase):
def setUp(self):
"""Reset the global dtype policy before each test."""
set_dtype_policy("float32")
def test_set_dtype_policy_valid_string(self):
"""Test set_dtype_policy with a valid string."""
set_dtype_policy("mixed_float16")
policy = dtype_policy()
self.assertEqual(policy.name, "mixed_float16")
def test_set_dtype_policy_valid_policy(self):
"""Test set_dtype_policy with a valid DTypePolicy object."""
policy_obj = DTypePolicy("mixed_float16")
set_dtype_policy(policy_obj)
policy = dtype_policy()
self.assertEqual(policy.name, "mixed_float16")
def test_set_dtype_policy_invalid(self):
"""Test set_dtype_policy with an invalid input."""
with self.assertRaisesRegex(ValueError, "Invalid `policy` argument"):
set_dtype_policy(12345)
def test_dtype_policy_default(self):
"""Test dtype_policy default value."""
policy = dtype_policy()
self.assertEqual(policy.name, "float32")
class DTypePolicyEdgeCasesTest(test_case.TestCase):
def test_empty_name(self):
"""Test initialization with an empty name."""
with self.assertRaisesRegex(ValueError, "Cannot convert"):
DTypePolicy("")
def test_special_character_name(self):
"""Test initialization with special characters in the name."""
with self.assertRaisesRegex(ValueError, "Cannot convert"):
DTypePolicy("@mixed_float16!")
def test_very_long_name(self):
"""Test initialization with a very long name."""
with self.assertRaisesRegex(ValueError, "Cannot convert"):
DTypePolicy("mixed_float16" * 100)
def test_almost_valid_name(self):
"""Test initialization with a name close to a valid one."""
with self.assertRaisesRegex(ValueError, "Cannot convert"):
DTypePolicy("mixed_float15")
class DTypePolicyGlobalFunctionsEdgeCasesTest(test_case.TestCase):
def setUp(self):
"""Reset the global dtype policy before each test."""
set_dtype_policy("float32")
def test_set_policy_multiple_times(self):
"""Test setting the policy multiple times in a row."""
set_dtype_policy("mixed_float16")
policy = dtype_policy()
self.assertEqual(policy.name, "mixed_float16")
set_dtype_policy("float32")
policy = dtype_policy()
self.assertEqual(policy.name, "float32")
def test_set_policy_none(self):
"""Test setting the policy to None."""
with self.assertRaisesRegex(ValueError, "Invalid `policy` argument"):
set_dtype_policy(None)
| keras/keras/dtype_policies/dtype_policy_test.py/0 | {
"file_path": "keras/keras/dtype_policies/dtype_policy_test.py",
"repo_id": "keras",
"token_count": 2132
} | 150 |
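A minimal sketch of the global-policy behavior tested above: setting the policy to "mixed_float16" makes newly created layers compute in float16 while keeping float32 variables. The Dense layer is just an example.

import keras

keras.config.set_dtype_policy("mixed_float16")
layer = keras.layers.Dense(4)
print(layer.compute_dtype)    # "float16"
print(layer.variable_dtype)   # "float32"
keras.config.set_dtype_policy("float32")   # restore the default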
import os
import numpy as np
import pytest
from absl.testing import parameterized
from keras import backend
from keras import constraints
from keras import initializers
from keras import layers
from keras import models
from keras import saving
from keras import testing
class MultiHeadAttentionTest(testing.TestCase, parameterized.TestCase):
def test_basics(self):
self.run_layer_test(
layers.MultiHeadAttention,
init_kwargs={
"num_heads": 2,
"key_dim": 2,
},
input_shape={"query_shape": (2, 8, 16), "value_shape": (2, 4, 16)},
expected_output_shape=(2, 8, 16),
expected_num_trainable_weights=8,
expected_num_non_trainable_weights=0,
expected_num_seed_generators=0,
expected_num_losses=0,
supports_masking=True,
run_training_check=False,
)
self.run_layer_test(
layers.MultiHeadAttention,
init_kwargs={
"num_heads": 2,
"key_dim": 2,
"value_dim": 4,
"use_bias": False,
"dropout": 0.5,
},
input_shape={"query_shape": (2, 8, 16), "value_shape": (2, 4, 16)},
expected_output_shape=(2, 8, 16),
expected_num_trainable_weights=4,
expected_num_non_trainable_weights=0,
expected_num_seed_generators=0,
expected_num_losses=0,
supports_masking=True,
run_training_check=False,
)
@parameterized.named_parameters(
("4d_inputs_1freebatch_mask2", (3, 4), (3, 2), (4, 2), (2,)),
("4d_inputs_1freebatch_mask3", (3, 4), (3, 2), (3, 4, 2), (2,)),
("4d_inputs_1freebatch_mask4", (3, 4), (3, 2), (3, 2, 4, 2), (2,)),
("4d_inputs_2d_attention", (3, 4), (3, 2), (3, 4, 3, 2), (1, 2)),
("5d_inputs_2d_attention", (5, 3, 4), (5, 3, 2), (3, 4, 3, 2), (2, 3)),
(
"5d_inputs_2d_attention_fullmask",
(5, 3, 4),
(5, 3, 2),
(5, 3, 4, 3, 2),
(2, 3),
),
)
def test_high_dim_attention(
self, q_dims, v_dims, mask_dims, attention_axes
):
batch_size, hidden_size = 3, 8
query_shape = (batch_size,) + q_dims + (hidden_size,)
value_shape = (batch_size,) + v_dims + (hidden_size,)
self.run_layer_test(
layers.MultiHeadAttention,
init_kwargs={
"num_heads": 2,
"key_dim": 2,
"attention_axes": attention_axes,
},
input_shape={
"query_shape": query_shape,
"value_shape": value_shape,
},
expected_output_shape=query_shape,
expected_num_trainable_weights=8,
expected_num_non_trainable_weights=0,
expected_num_seed_generators=0,
expected_num_losses=0,
supports_masking=True,
run_training_check=False,
)
@parameterized.named_parameters(
("without_key_same_proj", (4, 8), (2, 8), None, None),
("with_key_same_proj", (4, 8), (2, 8), (2, 3), None),
("wihtout_key_different_proj", (4, 8), (2, 8), None, (3, 4)),
("with_key_different_proj", (4, 8), (2, 8), (2, 3), (1, 5)),
("high_dim_same_proj", (4, 2, 3, 8), (1, 1, 5, 8), (1, 1, 5, 2), None),
(
"high_dim_different_proj",
(4, 2, 3, 8),
(1, 1, 5, 8),
(1, 1, 5, 2),
(3, 2),
),
)
def test_compute_output_shape(
self, query_dims, value_dims, key_dims, output_shape
):
"""Test computed shape is equal to the layer output's shape."""
layer = layers.MultiHeadAttention(
num_heads=2,
key_dim=2,
value_dim=2,
output_shape=output_shape,
)
batch_size = 7
query_shape = (batch_size,) + query_dims
value_shape = (batch_size,) + value_dims
key_shape = (batch_size,) + key_dims if key_dims else None
query = np.ones(query_shape)
value = np.ones(value_shape)
key = np.ones(key_shape) if key_shape else None
output = layer(query=query, value=value, key=key)
comp_output_shape = layer.compute_output_shape(
query_shape, value_shape, key_shape
)
self.assertEqual(output.shape, comp_output_shape)
@parameterized.named_parameters(
("query_value_dim_mismatch", (2, 4, 8), (2, 2, 7), 2),
("key_value_dim_mismatch", (2, 4, 8), (2, 2, 8), (2, 1, 7)),
(
"key_value_dim_mismatch_high_dim",
(2, 4, 2, 3, 8),
(2, 1, 1, 5, 8),
(2, 1, 15, 5, 2),
),
)
def test_shape_mismatch_error(self, query_shape, value_shape, key_shape):
"""Test dimension mismatches"""
layer = layers.MultiHeadAttention(
num_heads=4,
key_dim=2,
value_dim=2,
)
with self.assertRaisesRegex(ValueError, r"must be equal"):
layer.compute_output_shape(query_shape, value_shape, key_shape)
def test_initializer(self):
# Test with a specified initializer.
layer = layers.MultiHeadAttention(
num_heads=12,
key_dim=64,
kernel_initializer=initializers.TruncatedNormal(stddev=0.02),
)
layer.build((2, 4, 8), (2, 4, 8))
# Make sure the sub layers have different kernel init value.
self.assertNotAllClose(
layer._query_dense.kernel,
layer._key_dense.kernel,
)
self.assertNotAllClose(
layer._query_dense.kernel,
layer._value_dense.kernel,
)
self.assertNotAllClose(
layer._query_dense.kernel,
layer._output_dense.kernel,
)
@pytest.mark.skipif(
backend.backend() == "numpy",
reason="Numpy backend does not support masking.",
)
def test_query_mask_propagation(self):
"""Test automatic propagation of the query's mask."""
layer = layers.MultiHeadAttention(num_heads=2, key_dim=2)
self.assertTrue(layer.supports_masking)
query = np.array([[1, 2, 3, 0, 0], [3, 3, 1, 1, 2], [1, 0, 0, 0, 0]])
masked_query = layers.Embedding(4, 8, mask_zero=True)(query)
value = np.random.normal(size=(3, 3, 8))
output = layer(query=masked_query, value=value)
self.assertAllClose(masked_query._keras_mask, output._keras_mask)
@parameterized.named_parameters(("causal", True), ("not_causal", 0))
@pytest.mark.skipif(
backend.backend() == "numpy",
reason="Numpy backend does not support masking.",
)
def test_masking(self, use_causal_mask):
"""Test that the value and causal masks are taken into account."""
layer = layers.MultiHeadAttention(num_heads=2, key_dim=2)
query = np.array([[1, 2, 3, 0, 0], [3, 3, 1, 1, 2], [1, 0, 0, 0, 0]])
masked_query = layers.Embedding(4, 8, mask_zero=True)(query)
value = np.array([[5, 4, 0], [3, 0, 0], [2, 1, 1]])
masked_value = layers.Embedding(6, 8, mask_zero=True)(value)
output = layer(
query=masked_query,
value=masked_value,
use_causal_mask=use_causal_mask,
)
mask = np.array(
[[[1, 1, 0]] * 3 + [[0, 0, 0]] * 2]
+ [[[1, 0, 0]] * 5]
+ [[[1, 1, 1]] + [[0, 0, 0]] * 4]
).astype(bool)
if use_causal_mask:
mask = mask & np.array(
[[[1, 0, 0], [1, 1, 0]] + [[1, 1, 1]] * 3]
).astype(bool)
del masked_query._keras_mask
del masked_value._keras_mask
output_with_manual_mask = layer(
query=masked_query, value=masked_value, attention_mask=mask
)
self.assertAllClose(output, output_with_manual_mask)
def test_correctness(self):
query = np.array([[[1.0, 0.0], [0.0, 1.0]]])
key = np.array([[[0.0, 1.0], [1.0, 0.0]]])
value = np.array([[[1.0, 2.0], [3.0, 4.0]]])
# Setup layer.
num_heads = 2
key_dim = 2
layer = layers.MultiHeadAttention(
num_heads=num_heads,
key_dim=key_dim,
)
layer.build(query.shape, key.shape, value.shape)
# Set layer weights.
kernel = np.identity(key_dim)
# To get an identity kernel we need to add a head dim and repeat on it.
kernel = np.repeat(kernel[:, np.newaxis, :], num_heads, axis=1)
# Zeros for all biases.
bias = np.zeros((2, 2))
output_bias = np.zeros((2,))
layer.set_weights([kernel, bias] * 3 + [kernel, output_bias])
# Call layer and assert output.
output, scores = layer(
query=query,
value=value,
key=key,
return_attention_scores=True,
)
self.assertAllClose(output, [[[5.679, 5.679], [4.32, 4.32]]], atol=1e-3)
self.assertAllClose(
scores,
[[[[0.33, 0.67], [0.67, 0.33]], [[0.33, 0.67], [0.67, 0.33]]]],
atol=1e-3,
)
def test_mha_constraints(self):
query = np.array([[[1.0, 0.0], [0.0, 1.0]]])
key = np.array([[[0.0, 1.0], [1.0, 0.0]]])
value = np.array([[[1.0, 2.0], [3.0, 4.0]]])
num_heads = 2
key_dim = 2
layer = layers.MultiHeadAttention(
num_heads=num_heads,
key_dim=key_dim,
kernel_constraint="non_neg",
)
layer.build(query.shape, key.shape, value.shape)
self.assertIsInstance(
layer._query_dense.kernel.constraint, constraints.NonNeg
)
self.assertIsInstance(
layer._value_dense.kernel.constraint, constraints.NonNeg
)
self.assertIsInstance(
layer._key_dense.kernel.constraint, constraints.NonNeg
)
layer = layers.MultiHeadAttention(
num_heads=num_heads,
key_dim=key_dim,
bias_constraint="non_neg",
)
layer.build(query.shape, key.shape, value.shape)
self.assertIsInstance(
layer._query_dense.bias.constraint, constraints.NonNeg
)
self.assertIsInstance(
layer._value_dense.bias.constraint, constraints.NonNeg
)
self.assertIsInstance(
layer._key_dense.bias.constraint, constraints.NonNeg
)
@pytest.mark.requires_trainable_backend
def test_lora(self):
query = np.array([[[1.0, 0.0], [0.0, 1.0]]])
key = np.array([[[0.0, 1.0], [1.0, 0.0]]])
value = np.array([[[1.0, 2.0], [3.0, 4.0]]])
layer = layers.MultiHeadAttention(
num_heads=3,
key_dim=8,
use_bias=False,
)
layer.build(query.shape, key.shape, value.shape)
layer.query_dense.enable_lora(2)
layer.key_dense.enable_lora(2)
layer.value_dense.enable_lora(2)
self.assertLen(layer.trainable_variables, 7)
self.assertLen(layer.non_trainable_variables, 3)
# Try eager call
x = {
"query": query,
"key": key,
"value": value,
}
y = np.random.random((1, 2, 2))
_ = layer(**x)
# Try calling fit()
inputs = {
"query": layers.Input((2, 2)),
"key": layers.Input((2, 2)),
"value": layers.Input((2, 2)),
}
outputs = layer(inputs["query"], inputs["key"], inputs["value"])
model = models.Model(inputs, outputs)
model.compile(optimizer="sgd", loss="mse")
model.fit(x, y)
# Try saving and reloading the model
temp_filepath = os.path.join(self.get_temp_dir(), "lora_model.keras")
model.save(temp_filepath)
new_model = saving.load_model(temp_filepath)
self.assertAllClose(model.predict(x), new_model.predict(x))
# Try saving and reloading the model's weights only
temp_filepath = os.path.join(
self.get_temp_dir(), "lora_model.weights.h5"
)
model.save_weights(temp_filepath)
# Load the file into a fresh, non-lora model
inputs = {
"query": layers.Input((2, 2)),
"key": layers.Input((2, 2)),
"value": layers.Input((2, 2)),
}
outputs = layers.MultiHeadAttention(
num_heads=3,
key_dim=8,
use_bias=False,
)(inputs["query"], inputs["key"], inputs["value"])
new_model = models.Model(inputs, outputs)
new_model.load_weights(temp_filepath)
self.assertAllClose(model.predict(x), new_model.predict(x))
# Try loading a normal checkpoint into a lora model
new_model.save_weights(temp_filepath)
model.load_weights(temp_filepath)
self.assertAllClose(model.predict(x), new_model.predict(x))
| keras/keras/layers/attention/multi_head_attention_test.py/0 | {
"file_path": "keras/keras/layers/attention/multi_head_attention_test.py",
"repo_id": "keras",
"token_count": 6778
} | 151 |
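A hedged usage sketch for the layer tested above: self-attention over a batch of sequences, returning the attention scores for inspection. All shapes are illustrative.

import numpy as np
from keras import layers

mha = layers.MultiHeadAttention(num_heads=2, key_dim=16)
x = np.random.random((4, 10, 32)).astype("float32")       # (batch, seq_len, features)
out, scores = mha(query=x, value=x, return_attention_scores=True)
print(out.shape)      # (4, 10, 32)
print(scores.shape)   # (4, 2, 10, 10) -> (batch, heads, query_len, key_len)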
import numpy as np
from absl.testing import parameterized
from keras import backend
from keras import layers
from keras import testing
class ZeroPadding2DTest(testing.TestCase, parameterized.TestCase):
@parameterized.named_parameters(
("channels_first", "channels_first"), ("channels_last", "channels_last")
)
def test_zero_padding_2d(self, data_format):
inputs = np.random.rand(1, 2, 3, 4)
outputs = layers.ZeroPadding2D(
padding=((1, 2), (3, 4)), data_format=data_format
)(inputs)
if data_format == "channels_first":
for index in [0, -1, -2]:
self.assertAllClose(outputs[:, :, index, :], 0.0)
for index in [0, 1, 2, -1, -2, -3, -4]:
self.assertAllClose(outputs[:, :, :, index], 0.0)
self.assertAllClose(outputs[:, :, 1:-2, 3:-4], inputs)
else:
for index in [0, -1, -2]:
self.assertAllClose(outputs[:, index, :, :], 0.0)
for index in [0, 1, 2, -1, -2, -3, -4]:
self.assertAllClose(outputs[:, :, index, :], 0.0)
self.assertAllClose(outputs[:, 1:-2, 3:-4, :], inputs)
@parameterized.product(
(
{"padding": ((2, 2), (2, 2))}, # 2 tuples
{"padding": (2, 2)}, # 1 tuple
{"padding": 2}, # 1 int
),
(
{"data_format": "channels_first"},
{"data_format": "channels_last"},
),
)
def test_zero_padding_2d_with_same_padding(self, padding, data_format):
inputs = np.random.rand(1, 2, 3, 4)
outputs = layers.ZeroPadding2D(
padding=padding, data_format=data_format
)(inputs)
if data_format == "channels_first":
for index in [0, 1, -1, -2]:
self.assertAllClose(outputs[:, :, index, :], 0.0)
self.assertAllClose(outputs[:, :, :, index], 0.0)
self.assertAllClose(outputs[:, :, 2:-2, 2:-2], inputs)
else:
for index in [0, 1, -1, -2]:
self.assertAllClose(outputs[:, index, :, :], 0.0)
self.assertAllClose(outputs[:, :, index, :], 0.0)
self.assertAllClose(outputs[:, 2:-2, 2:-2, :], inputs)
def test_zero_padding_2d_with_dynamic_spatial_dim(self):
if backend.config.image_data_format() == "channels_last":
input_layer = layers.Input(batch_shape=(1, 2, None, 4))
else:
input_layer = layers.Input(batch_shape=(1, 4, 2, None))
padded = layers.ZeroPadding2D(((1, 2), (3, 4)))(input_layer)
if backend.config.image_data_format() == "channels_last":
self.assertEqual(padded.shape, (1, 5, None, 4))
else:
self.assertEqual(padded.shape, (1, 4, 5, None))
def test_zero_padding_2d_errors_if_padding_argument_invalid(self):
with self.assertRaises(ValueError):
layers.ZeroPadding2D(padding=(1,))
with self.assertRaises(ValueError):
layers.ZeroPadding2D(padding=(1, 2, 3))
with self.assertRaises(ValueError):
layers.ZeroPadding2D(padding="1")
with self.assertRaises(ValueError):
layers.ZeroPadding2D(padding=((1, 2), (3, 4, 5)))
with self.assertRaises(ValueError):
layers.ZeroPadding2D(padding=((1, 2), (3, -4)))
with self.assertRaises(ValueError):
layers.ZeroPadding2D(padding=((1, 2), "3"))
| keras/keras/layers/reshaping/zero_padding2d_test.py/0 | {
"file_path": "keras/keras/layers/reshaping/zero_padding2d_test.py",
"repo_id": "keras",
"token_count": 1711
} | 152 |
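A small sketch of the padding arithmetic exercised above, assuming the default channels_last data format: `((top, bottom), (left, right))` padding grows the two spatial dimensions accordingly.

import numpy as np
from keras import layers

x = np.ones((1, 2, 3, 4))                             # (batch, H, W, C)
y = layers.ZeroPadding2D(padding=((1, 2), (3, 4)))(x)
print(y.shape)   # (1, 2+1+2, 3+3+4, 4) == (1, 5, 10, 4)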
# from keras.ops.numpy import Matmul, matmul
# from keras.ops.numpy import Add, add
# from keras.ops.numpy import Multiply, multiply
from keras.backend import cast
from keras.backend import cond
from keras.backend import is_tensor
from keras.backend import name_scope
from keras.backend import random
from keras.ops import image
from keras.ops import operation_utils
from keras.ops.core import * # noqa: F403
from keras.ops.linalg import * # noqa: F403
from keras.ops.math import * # noqa: F403
from keras.ops.nn import * # noqa: F403
from keras.ops.numpy import * # noqa: F403
| keras/keras/ops/__init__.py/0 | {
"file_path": "keras/keras/ops/__init__.py",
"repo_id": "keras",
"token_count": 205
} | 153 |
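A hedged sketch of the backend-agnostic surface re-exported in this `__init__.py`: the same `keras.ops` calls dispatch to whichever backend is active. The input array is arbitrary.

import numpy as np
from keras import ops

x = np.array([[1.0, 2.0], [3.0, 4.0]])
print(ops.matmul(x, ops.transpose(x)))
print(ops.softmax(x, axis=-1))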
import contextlib
import functools
import itertools
import math
import numpy as np
import pytest
from absl.testing import parameterized
import keras
from keras import backend
from keras import testing
from keras.backend.common import standardize_dtype
from keras.backend.common.keras_tensor import KerasTensor
from keras.backend.common.variables import ALLOWED_DTYPES
from keras.ops import numpy as knp
from keras.testing.test_utils import named_product
class NumpyTwoInputOpsDynamicShapeTest(testing.TestCase):
def test_add(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.add(x, y).shape, (2, 3))
def test_subtract(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.subtract(x, y).shape, (2, 3))
def test_multiply(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.multiply(x, y).shape, (2, 3))
def test_matmul(self):
x = KerasTensor((None, 3, 4))
y = KerasTensor((3, None, 4, 5))
self.assertEqual(knp.matmul(x, y).shape, (3, None, 3, 5))
def test_power(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.power(x, y).shape, (2, 3))
def test_divide(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.divide(x, y).shape, (2, 3))
def test_divide_no_nan(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.divide_no_nan(x, y).shape, (2, 3))
def test_true_divide(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.true_divide(x, y).shape, (2, 3))
def test_append(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.append(x, y).shape, (None,))
def test_arctan2(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.arctan2(x, y).shape, (2, 3))
def test_cross(self):
x1 = KerasTensor((2, 3, 3))
x2 = KerasTensor((1, 3, 2))
y = KerasTensor((None, 1, 2))
self.assertEqual(knp.cross(x1, y).shape, (2, 3, 3))
self.assertEqual(knp.cross(x2, y).shape, (None, 3))
def test_einsum(self):
x = KerasTensor((None, 3))
y = KerasTensor((3, 4))
self.assertEqual(knp.einsum("ij,jk->ik", x, y).shape, (None, 4))
self.assertEqual(knp.einsum("ij,jk->ikj", x, y).shape, (None, 4, 3))
self.assertEqual(knp.einsum("ii", x).shape, ())
self.assertEqual(knp.einsum(",ij", 5, x).shape, (None, 3))
x = KerasTensor((None, 3, 4))
y = KerasTensor((None, 4, 5))
z = KerasTensor((1, 1, 1, 9))
self.assertEqual(knp.einsum("ijk,jkl->li", x, y).shape, (5, None))
self.assertEqual(knp.einsum("ijk,jkl->lij", x, y).shape, (5, None, 3))
self.assertEqual(
knp.einsum("...,...j->...j", x, y).shape, (None, 3, 4, 5)
)
self.assertEqual(
knp.einsum("i...,...j->i...j", x, y).shape, (None, 3, 4, 5)
)
self.assertEqual(knp.einsum("i...,...j", x, y).shape, (3, 4, None, 5))
self.assertEqual(
knp.einsum("i...,...j,...k", x, y, z).shape, (1, 3, 4, None, 5, 9)
)
self.assertEqual(
knp.einsum("mij,ijk,...", x, y, z).shape, (1, 1, 1, 9, 5, None)
)
with self.assertRaises(ValueError):
x = KerasTensor((None, 3))
y = KerasTensor((3, 4))
knp.einsum("ijk,jk->ik", x, y)
def test_full_like(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.full_like(x, KerasTensor((1, 3))).shape, (None, 3))
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.full_like(x, 2).shape, (None, 3, 3))
def test_greater(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.greater(x, y).shape, (2, 3))
def test_greater_equal(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.greater_equal(x, y).shape, (2, 3))
def test_isclose(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.isclose(x, y).shape, (2, 3))
def test_less(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.less(x, y).shape, (2, 3))
def test_less_equal(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.less_equal(x, y).shape, (2, 3))
def test_linspace(self):
start = KerasTensor((None, 3, 4))
stop = KerasTensor((2, 3, 4))
self.assertEqual(
knp.linspace(start, stop, 10, axis=1).shape, (2, 10, 3, 4)
)
start = KerasTensor((None, 3))
stop = 2
self.assertEqual(
knp.linspace(start, stop, 10, axis=1).shape, (None, 10, 3)
)
def test_logical_and(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.logical_and(x, y).shape, (2, 3))
def test_logical_or(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.logical_or(x, y).shape, (2, 3))
def test_logspace(self):
start = KerasTensor((None, 3, 4))
stop = KerasTensor((2, 3, 4))
self.assertEqual(
knp.logspace(start, stop, 10, axis=1).shape, (2, 10, 3, 4)
)
start = KerasTensor((None, 3))
stop = 2
self.assertEqual(
knp.logspace(start, stop, 10, axis=1).shape, (None, 10, 3)
)
def test_maximum(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.maximum(x, y).shape, (2, 3))
def test_minimum(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.minimum(x, y).shape, (2, 3))
def test_mod(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.mod(x, y).shape, (2, 3))
def test_not_equal(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.not_equal(x, y).shape, (2, 3))
def test_outer(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.outer(x, y).shape, (None, None))
def test_quantile(self):
x = KerasTensor((None, 3))
# q as scalar
q = KerasTensor(())
self.assertEqual(knp.quantile(x, q).shape, ())
# q as 1D tensor
q = KerasTensor((2,))
self.assertEqual(knp.quantile(x, q).shape, (2,))
self.assertEqual(knp.quantile(x, q, axis=1).shape, (2, None))
self.assertEqual(
knp.quantile(x, q, axis=1, keepdims=True).shape,
(2, None, 1),
)
def test_take(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.take(x, 1).shape, ())
self.assertEqual(knp.take(x, [1, 2]).shape, (2,))
self.assertEqual(
knp.take(x, [[1, 2], [1, 2]], axis=1).shape, (None, 2, 2)
)
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.take(x, 1, axis=1).shape, (None, 3))
self.assertEqual(knp.take(x, [1, 2]).shape, (2,))
self.assertEqual(
knp.take(x, [[1, 2], [1, 2]], axis=1).shape, (None, 2, 2, 3)
)
# test with negative axis
self.assertEqual(knp.take(x, 1, axis=-2).shape, (None, 3))
# test with multi-dimensional indices
x = KerasTensor((None, 3, None, 5))
indices = KerasTensor((6, 7))
self.assertEqual(knp.take(x, indices, axis=2).shape, (None, 3, 6, 7, 5))
def test_take_along_axis(self):
x = KerasTensor((None, 3))
indices = KerasTensor((1, 3))
self.assertEqual(knp.take_along_axis(x, indices, axis=0).shape, (1, 3))
self.assertEqual(
knp.take_along_axis(x, indices, axis=1).shape, (None, 3)
)
x = KerasTensor((None, 3, 3))
indices = KerasTensor((1, 3, None))
self.assertEqual(
knp.take_along_axis(x, indices, axis=1).shape, (None, 3, 3)
)
def test_tensordot(self):
x = KerasTensor((None, 3, 4))
y = KerasTensor((3, 4))
self.assertEqual(knp.tensordot(x, y, axes=1).shape, (None, 3, 4))
self.assertEqual(knp.tensordot(x, y, axes=[[0, 1], [1, 0]]).shape, (4,))
def test_vdot(self):
x = KerasTensor((None, 3))
y = KerasTensor((None, 3))
self.assertEqual(knp.vdot(x, y).shape, ())
x = KerasTensor((None, 3, 3))
y = KerasTensor((None, 3, 3))
self.assertEqual(knp.vdot(x, y).shape, ())
def test_where(self):
condition = KerasTensor((2, None, 1))
x = KerasTensor((None, 1))
y = KerasTensor((None, 3))
self.assertEqual(knp.where(condition, x, y).shape, (2, None, 3))
self.assertEqual(knp.where(condition).shape, (2, None, 1))
def test_floor_divide(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.floor_divide(x, y).shape, (2, 3))
def test_xor(self):
x = KerasTensor((None, 3))
y = KerasTensor((2, None))
self.assertEqual(knp.logical_xor(x, y).shape, (2, 3))
def test_shape_equal_basic_equality(self):
x = KerasTensor((3, 4)).shape
y = KerasTensor((3, 4)).shape
self.assertTrue(knp.shape_equal(x, y))
y = KerasTensor((3, 5)).shape
self.assertFalse(knp.shape_equal(x, y))
def test_shape_equal_allow_none(self):
x = KerasTensor((3, 4, None)).shape
y = KerasTensor((3, 4, 5)).shape
self.assertTrue(knp.shape_equal(x, y, allow_none=True))
self.assertFalse(knp.shape_equal(x, y, allow_none=False))
def test_shape_equal_different_shape_lengths(self):
x = KerasTensor((3, 4)).shape
y = KerasTensor((3, 4, 5)).shape
self.assertFalse(knp.shape_equal(x, y))
def test_shape_equal_ignore_axes(self):
x = KerasTensor((3, 4, 5)).shape
y = KerasTensor((3, 6, 5)).shape
self.assertTrue(knp.shape_equal(x, y, axis=1))
y = KerasTensor((3, 6, 7)).shape
self.assertTrue(knp.shape_equal(x, y, axis=(1, 2)))
self.assertFalse(knp.shape_equal(x, y, axis=1))
def test_shape_equal_only_none(self):
x = KerasTensor((None, None)).shape
y = KerasTensor((5, 6)).shape
self.assertTrue(knp.shape_equal(x, y, allow_none=True))
def test_shape_equal_axis_as_list(self):
x = KerasTensor((3, 4, 5)).shape
y = KerasTensor((3, 6, 5)).shape
self.assertTrue(knp.shape_equal(x, y, axis=[1]))
def test_shape_non_equal_with_negative_axis(self):
x = KerasTensor((3, 4, 5)).shape
y = KerasTensor((3, 4, 6)).shape
self.assertFalse(knp.shape_equal(x, y, axis=-2))
def test_shape_equal_with_negative_axis(self):
x = KerasTensor((3, 4, 5)).shape
y = KerasTensor((3, 4, 5)).shape
self.assertTrue(knp.shape_equal(x, y, axis=-1))
def test_shape_equal_zeros(self):
x = KerasTensor((0, 4)).shape
y = KerasTensor((0, 4)).shape
self.assertTrue(knp.shape_equal(x, y))
y = KerasTensor((0, 5)).shape
self.assertFalse(knp.shape_equal(x, y))
def test_broadcast_shapes_conversion_to_list(self):
shape1 = KerasTensor((1, 2)).shape
shape2 = KerasTensor((3, 1)).shape
expected_output = [3, 2]
self.assertEqual(knp.broadcast_shapes(shape1, shape2), expected_output)
def test_broadcast_shapes_shape1_longer_than_shape2(self):
shape1 = KerasTensor((5, 3, 2)).shape
shape2 = KerasTensor((1, 3)).shape
with self.assertRaisesRegex(ValueError, "Cannot broadcast shape"):
knp.broadcast_shapes(shape1, shape2)
def test_broadcast_shapes_shape2_longer_than_shape1(self):
shape1 = KerasTensor((5, 3)).shape
shape2 = KerasTensor((2, 5, 3)).shape
expected_output = [2, 5, 3]
self.assertEqual(knp.broadcast_shapes(shape1, shape2), expected_output)
def test_broadcast_shapes_broadcasting_shape1_is_1(self):
shape1 = KerasTensor((1, 3)).shape
shape2 = KerasTensor((5, 1)).shape
expected_output = [5, 3]
self.assertEqual(knp.broadcast_shapes(shape1, shape2), expected_output)
def test_broadcast_shapes_broadcasting_shape1_is_none(self):
shape1 = KerasTensor((None, 3)).shape
shape2 = KerasTensor((5, 1)).shape
expected_output = [5, 3]
self.assertEqual(knp.broadcast_shapes(shape1, shape2), expected_output)
shape1 = KerasTensor((None, 3)).shape
shape2 = KerasTensor((5, 3)).shape
expected_output = [5, 3]
self.assertEqual(knp.broadcast_shapes(shape1, shape2), expected_output)
def test_broadcast_shapes_broadcasting_shape2_conditions(self):
shape1 = KerasTensor((5, 3, 2)).shape
shape2 = KerasTensor((1, 3, 2)).shape
expected_output = [5, 3, 2]
self.assertEqual(knp.broadcast_shapes(shape1, shape2), expected_output)
shape1 = KerasTensor((5, 3, 2)).shape
shape2 = KerasTensor((1, None, 2)).shape
expected_output = [5, 3, 2]
self.assertEqual(knp.broadcast_shapes(shape1, shape2), expected_output)
class NumpyTwoInputOpsStaticShapeTest(testing.TestCase):
def test_add(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.add(x, y).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.add(x, y)
def test_subtract(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.subtract(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.subtract(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.subtract(x, y)
def test_multiply(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.multiply(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.multiply(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.multiply(x, y)
def test_matmul(self):
x = KerasTensor((2, 3))
y = KerasTensor((3, 2))
self.assertEqual(knp.matmul(x, y).shape, (2, 2))
with self.assertRaises(ValueError):
x = KerasTensor((3, 4))
y = KerasTensor((2, 3, 4))
knp.matmul(x, y)
def test_matmul_sparse(self):
x = KerasTensor((2, 3), sparse=True)
y = KerasTensor((3, 2))
result = knp.matmul(x, y)
self.assertEqual(result.shape, (2, 2))
x = KerasTensor((2, 3))
y = KerasTensor((3, 2), sparse=True)
result = knp.matmul(x, y)
self.assertEqual(result.shape, (2, 2))
x = KerasTensor((2, 3), sparse=True)
y = KerasTensor((3, 2), sparse=True)
result = knp.matmul(x, y)
self.assertEqual(result.shape, (2, 2))
self.assertTrue(result.sparse)
def test_power(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.power(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.power(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.power(x, y)
def test_divide(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.divide(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.divide(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.divide(x, y)
def test_divide_no_nan(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.divide_no_nan(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.divide_no_nan(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.divide_no_nan(x, y)
def test_true_divide(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.true_divide(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.true_divide(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.true_divide(x, y)
def test_append(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.append(x, y).shape, (12,))
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.append(x, y, axis=0).shape, (4, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.append(x, y, axis=2)
def test_arctan2(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.arctan2(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.arctan2(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.arctan2(x, y)
def test_cross(self):
x1 = KerasTensor((2, 3, 3))
x2 = KerasTensor((1, 3, 2))
y1 = KerasTensor((2, 3, 3))
y2 = KerasTensor((2, 3, 2))
self.assertEqual(knp.cross(x1, y1).shape, (2, 3, 3))
self.assertEqual(knp.cross(x2, y2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.cross(x, y)
with self.assertRaises(ValueError):
x = KerasTensor((4, 3, 3))
y = KerasTensor((2, 3, 3))
knp.cross(x, y)
def test_einsum(self):
x = KerasTensor((2, 3))
y = KerasTensor((3, 4))
self.assertEqual(knp.einsum("ij,jk->ik", x, y).shape, (2, 4))
self.assertEqual(knp.einsum("ij,jk->ikj", x, y).shape, (2, 4, 3))
self.assertEqual(knp.einsum("ii", x).shape, ())
self.assertEqual(knp.einsum(",ij", 5, x).shape, (2, 3))
x = KerasTensor((2, 3, 4))
y = KerasTensor((3, 4, 5))
z = KerasTensor((1, 1, 1, 9))
self.assertEqual(knp.einsum("ijk,jkl->li", x, y).shape, (5, 2))
self.assertEqual(knp.einsum("ijk,jkl->lij", x, y).shape, (5, 2, 3))
self.assertEqual(knp.einsum("...,...j->...j", x, y).shape, (2, 3, 4, 5))
self.assertEqual(
knp.einsum("i...,...j->i...j", x, y).shape, (2, 3, 4, 5)
)
self.assertEqual(knp.einsum("i...,...j", x, y).shape, (3, 4, 2, 5))
self.assertEqual(
knp.einsum("i...,...j,...k", x, y, z).shape, (1, 3, 4, 2, 5, 9)
)
self.assertEqual(
knp.einsum("mij,ijk,...", x, y, z).shape, (1, 1, 1, 9, 5, 2)
)
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((3, 4))
knp.einsum("ijk,jk->ik", x, y)
def test_full_like(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.full_like(x, 2).shape, (2, 3))
def test_greater(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.greater(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.greater(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.greater(x, y)
def test_greater_equal(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.greater_equal(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.greater_equal(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.greater_equal(x, y)
def test_isclose(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.isclose(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.isclose(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.isclose(x, y)
def test_less(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.less(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.less(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.less(x, y)
def test_less_equal(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.less_equal(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.less_equal(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.less_equal(x, y)
def test_linspace(self):
start = KerasTensor((2, 3, 4))
stop = KerasTensor((2, 3, 4))
self.assertEqual(knp.linspace(start, stop, 10).shape, (10, 2, 3, 4))
with self.assertRaises(ValueError):
start = KerasTensor((2, 3))
stop = KerasTensor((2, 3, 4))
knp.linspace(start, stop)
def test_logical_and(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.logical_and(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.logical_and(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.logical_and(x, y)
def test_logical_or(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.logical_or(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.logical_or(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.logical_or(x, y)
def test_logspace(self):
start = KerasTensor((2, 3, 4))
stop = KerasTensor((2, 3, 4))
self.assertEqual(knp.logspace(start, stop, 10).shape, (10, 2, 3, 4))
with self.assertRaises(ValueError):
start = KerasTensor((2, 3))
stop = KerasTensor((2, 3, 4))
knp.logspace(start, stop)
def test_maximum(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.maximum(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.maximum(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.maximum(x, y)
def test_minimum(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.minimum(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.minimum(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.minimum(x, y)
def test_mod(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.mod(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.mod(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.mod(x, y)
def test_not_equal(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.not_equal(x, y).shape, (2, 3))
x = KerasTensor((2, 3))
self.assertEqual(knp.not_equal(x, 2).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.not_equal(x, y)
def test_outer(self):
x = KerasTensor((3,))
y = KerasTensor((4,))
self.assertEqual(knp.outer(x, y).shape, (3, 4))
x = KerasTensor((2, 3))
y = KerasTensor((4, 5))
self.assertEqual(knp.outer(x, y).shape, (6, 20))
x = KerasTensor((2, 3))
self.assertEqual(knp.outer(x, 2).shape, (6, 1))
def test_quantile(self):
x = KerasTensor((3, 3))
# q as scalar
q = KerasTensor(())
self.assertEqual(knp.quantile(x, q).shape, ())
# q as 1D tensor
q = KerasTensor((2,))
self.assertEqual(knp.quantile(x, q).shape, (2,))
self.assertEqual(knp.quantile(x, q, axis=1).shape, (2, 3))
self.assertEqual(
knp.quantile(x, q, axis=1, keepdims=True).shape,
(2, 3, 1),
)
def test_take(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.take(x, 1).shape, ())
self.assertEqual(knp.take(x, [1, 2]).shape, (2,))
self.assertEqual(knp.take(x, [[1, 2], [1, 2]], axis=1).shape, (2, 2, 2))
# test with multi-dimensional indices
x = KerasTensor((2, 3, 4, 5))
indices = KerasTensor((6, 7))
self.assertEqual(knp.take(x, indices, axis=2).shape, (2, 3, 6, 7, 5))
def test_take_along_axis(self):
x = KerasTensor((2, 3))
indices = KerasTensor((1, 3))
self.assertEqual(knp.take_along_axis(x, indices, axis=0).shape, (1, 3))
self.assertEqual(knp.take_along_axis(x, indices, axis=1).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
indices = KerasTensor((1, 4))
knp.take_along_axis(x, indices, axis=0)
def test_tensordot(self):
x = KerasTensor((2, 3, 3))
y = KerasTensor((3, 3, 4))
self.assertEqual(knp.tensordot(x, y, axes=1).shape, (2, 3, 3, 4))
self.assertEqual(knp.tensordot(x, y, axes=2).shape, (2, 4))
self.assertEqual(
knp.tensordot(x, y, axes=[[1, 2], [0, 1]]).shape, (2, 4)
)
def test_vdot(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.vdot(x, y).shape, ())
def test_where(self):
condition = KerasTensor((2, 3))
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.where(condition, x, y).shape, (2, 3))
self.assertAllEqual(knp.where(condition).shape, (2, 3))
def test_floor_divide(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.floor_divide(x, y).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.floor_divide(x, y)
def test_xor(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.logical_xor(x, y).shape, (2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
knp.logical_xor(x, y)
def test_digitize(self):
x = KerasTensor((2, 3))
bins = KerasTensor((3,))
self.assertEqual(knp.digitize(x, bins).shape, (2, 3))
        self.assertEqual(knp.digitize(x, bins).dtype, "int32")
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
bins = KerasTensor((2, 3, 4))
knp.digitize(x, bins)
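# Dynamic shape inference for one-input ops: inputs carry `None` (unknown)
# dims, which must be propagated while statically known axes stay resolved.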
class NumpyOneInputOpsDynamicShapeTest(testing.TestCase):
def test_mean(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.mean(x).shape, ())
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.mean(x, axis=1).shape, (None, 3))
self.assertEqual(knp.mean(x, axis=1, keepdims=True).shape, (None, 1, 3))
def test_all(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.all(x).shape, ())
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.all(x, axis=1).shape, (None, 3))
self.assertEqual(knp.all(x, axis=1, keepdims=True).shape, (None, 1, 3))
def test_any(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.any(x).shape, ())
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.any(x, axis=1).shape, (None, 3))
self.assertEqual(knp.any(x, axis=1, keepdims=True).shape, (None, 1, 3))
def test_var(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.var(x).shape, ())
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.var(x, axis=1).shape, (None, 3))
self.assertEqual(knp.var(x, axis=1, keepdims=True).shape, (None, 1, 3))
def test_sum(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.sum(x).shape, ())
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.sum(x, axis=1).shape, (None, 3))
self.assertEqual(knp.sum(x, axis=1, keepdims=True).shape, (None, 1, 3))
def test_amax(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.amax(x).shape, ())
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.amax(x, axis=1).shape, (None, 3))
self.assertEqual(knp.amax(x, axis=1, keepdims=True).shape, (None, 1, 3))
def test_amin(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.amin(x).shape, ())
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.amin(x, axis=1).shape, (None, 3))
self.assertEqual(knp.amin(x, axis=1, keepdims=True).shape, (None, 1, 3))
def test_square(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.square(x).shape, (None, 3))
def test_negative(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.negative(x).shape, (None, 3))
def test_abs(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.abs(x).shape, (None, 3))
def test_absolute(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.absolute(x).shape, (None, 3))
def test_squeeze(self):
x = KerasTensor((None, 1))
self.assertEqual(knp.squeeze(x).shape, (None,))
self.assertEqual(knp.squeeze(x, axis=1).shape, (None,))
with self.assertRaises(ValueError):
x = KerasTensor((None, 1))
knp.squeeze(x, axis=0)
def test_transpose(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.transpose(x).shape, (3, None))
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.transpose(x, (2, 0, 1)).shape, (3, None, 3))
def test_arccos(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.arccos(x).shape, (None, 3))
def test_arccosh(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.arccosh(x).shape, (None, 3))
def test_arcsin(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.arcsin(x).shape, (None, 3))
def test_arcsinh(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.arcsinh(x).shape, (None, 3))
def test_arctan(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.arctan(x).shape, (None, 3))
def test_arctanh(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.arctanh(x).shape, (None, 3))
def test_argmax(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.argmax(x).shape, ())
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.argmax(x, axis=1).shape, (None, 3))
def test_argmin(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.argmin(x).shape, ())
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.argmin(x, axis=1).shape, (None, 3))
def test_argsort(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.argsort(x).shape, (None, 3))
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.argsort(x, axis=1).shape, (None, 3, 3))
def test_array(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.array(x).shape, (None, 3))
def test_average(self):
x = KerasTensor((None, 3))
weights = KerasTensor((None, 3))
self.assertEqual(knp.average(x, weights=weights).shape, ())
x = KerasTensor((None, 3))
weights = KerasTensor((3,))
self.assertEqual(knp.average(x, axis=1, weights=weights).shape, (None,))
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.average(x, axis=1).shape, (None, 3))
with self.assertRaises(ValueError):
x = KerasTensor((None, 3, 3))
weights = KerasTensor((None, 4))
knp.average(x, weights=weights)
def test_broadcast_to(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.broadcast_to(x, (2, 3, 3)).shape, (2, 3, 3))
with self.assertRaises(ValueError):
x = KerasTensor((3, 3))
knp.broadcast_to(x, (2, 2, 3))
def test_ceil(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.ceil(x).shape, (None, 3))
def test_clip(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.clip(x, 1, 2).shape, (None, 3))
def test_concatenate(self):
x = KerasTensor((None, 3))
y = KerasTensor((None, 3))
self.assertEqual(
knp.concatenate(
[x, y],
).shape,
(None, 3),
)
self.assertEqual(knp.concatenate([x, y], axis=1).shape, (None, 6))
with self.assertRaises(ValueError):
self.assertEqual(knp.concatenate([x, y], axis=None).shape, (None,))
with self.assertRaises(ValueError):
x = KerasTensor((None, 3, 5))
y = KerasTensor((None, 4, 6))
knp.concatenate([x, y], axis=1)
def test_concatenate_sparse(self):
x = KerasTensor((2, 3), sparse=True)
y = KerasTensor((2, 3))
result = knp.concatenate([x, y], axis=1)
self.assertEqual(result.shape, (2, 6))
self.assertFalse(result.sparse)
x = KerasTensor((2, 3))
y = KerasTensor((2, 3), sparse=True)
result = knp.concatenate([x, y], axis=1)
self.assertEqual(result.shape, (2, 6))
self.assertFalse(result.sparse)
x = KerasTensor((2, 3), sparse=True)
y = KerasTensor((2, 3), sparse=True)
result = knp.concatenate([x, y], axis=1)
self.assertEqual(result.shape, (2, 6))
self.assertTrue(result.sparse)
def test_conjugate(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.conjugate(x).shape, (None, 3))
def test_conj(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.conj(x).shape, (None, 3))
def test_copy(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.copy(x).shape, (None, 3))
def test_cos(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.cos(x).shape, (None, 3))
def test_cosh(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.cosh(x).shape, (None, 3))
def test_count_nonzero(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.count_nonzero(x).shape, ())
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.count_nonzero(x, axis=1).shape, (None, 3))
def test_cumprod(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.cumprod(x).shape, (None,))
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.cumprod(x, axis=1).shape, (None, 3, 3))
def test_cumsum(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.cumsum(x).shape, (None,))
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.cumsum(x, axis=1).shape, (None, 3, 3))
def test_diag(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.diag(x).shape, (None,))
self.assertEqual(knp.diag(x, k=3).shape, (None,))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3, 4))
knp.diag(x)
def test_diagonal(self):
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.diagonal(x).shape, (3, None))
def test_diff(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.diff(x).shape, (None, 2))
self.assertEqual(knp.diff(x, n=2).shape, (None, 1))
self.assertEqual(knp.diff(x, n=3).shape, (None, 0))
self.assertEqual(knp.diff(x, n=4).shape, (None, 0))
self.assertEqual(knp.diff(x, axis=0).shape, (None, 3))
self.assertEqual(knp.diff(x, n=2, axis=0).shape, (None, 3))
def test_dot(self):
x = KerasTensor((None, 3))
y = KerasTensor((3, 2))
z = KerasTensor((None, None, 2))
self.assertEqual(knp.dot(x, y).shape, (None, 2))
self.assertEqual(knp.dot(x, 2).shape, (None, 3))
self.assertEqual(knp.dot(x, z).shape, (None, None, 2))
x = KerasTensor((None,))
y = KerasTensor((5,))
self.assertEqual(knp.dot(x, y).shape, ())
def test_exp(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.exp(x).shape, (None, 3))
def test_expand_dims(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.expand_dims(x, -1).shape, (None, 3, 1))
self.assertEqual(knp.expand_dims(x, 0).shape, (1, None, 3))
self.assertEqual(knp.expand_dims(x, 1).shape, (None, 1, 3))
self.assertEqual(knp.expand_dims(x, -2).shape, (None, 1, 3))
def test_expm1(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.expm1(x).shape, (None, 3))
def test_flip(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.flip(x).shape, (None, 3))
def test_floor(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.floor(x).shape, (None, 3))
def test_get_item(self):
x = KerasTensor((None, 5, 16))
# Simple slice.
sliced = knp.get_item(x, 5)
self.assertEqual(sliced.shape, (5, 16))
# Ellipsis slice.
sliced = knp.get_item(x, np.s_[..., -1])
self.assertEqual(sliced.shape, (None, 5))
# `newaxis` slice.
sliced = knp.get_item(x, np.s_[:, np.newaxis, ...])
self.assertEqual(sliced.shape, (None, 1, 5, 16))
# Strided slice.
sliced = knp.get_item(x, np.s_[:5, 3:, 3:12:2])
self.assertEqual(sliced.shape, (None, 2, 5))
# Error states.
with self.assertRaises(ValueError):
sliced = knp.get_item(x, np.s_[:, 17, :])
with self.assertRaises(ValueError):
sliced = knp.get_item(x, np.s_[..., 5, ...])
with self.assertRaises(ValueError):
sliced = knp.get_item(x, np.s_[:, :, :, :])
def test_hstack(self):
x = KerasTensor((None, 3))
y = KerasTensor((None, 3))
self.assertEqual(knp.hstack([x, y]).shape, (None, 6))
x = KerasTensor((None, 3))
y = KerasTensor((None, None))
self.assertEqual(knp.hstack([x, y]).shape, (None, None))
def test_imag(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.imag(x).shape, (None, 3))
def test_isfinite(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.isfinite(x).shape, (None, 3))
def test_isinf(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.isinf(x).shape, (None, 3))
def test_isnan(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.isnan(x).shape, (None, 3))
def test_log(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.log(x).shape, (None, 3))
def test_log10(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.log10(x).shape, (None, 3))
def test_log1p(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.log1p(x).shape, (None, 3))
def test_log2(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.log2(x).shape, (None, 3))
def test_logaddexp(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.logaddexp(x, x).shape, (None, 3))
def test_logical_not(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.logical_not(x).shape, (None, 3))
def test_max(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.max(x).shape, ())
def test_median(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.median(x).shape, ())
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.median(x, axis=1).shape, (None, 3))
self.assertEqual(
knp.median(x, axis=1, keepdims=True).shape, (None, 1, 3)
)
def test_meshgrid(self):
x = KerasTensor((None, 3))
y = KerasTensor((None, 3))
self.assertEqual(knp.meshgrid(x, y)[0].shape, (None, None))
self.assertEqual(knp.meshgrid(x, y)[1].shape, (None, None))
with self.assertRaises(ValueError):
knp.meshgrid(x, y, indexing="kk")
def test_moveaxis(self):
x = KerasTensor((None, 3, 4, 5))
self.assertEqual(knp.moveaxis(x, 0, -1).shape, (3, 4, 5, None))
self.assertEqual(knp.moveaxis(x, -1, 0).shape, (5, None, 3, 4))
self.assertEqual(
knp.moveaxis(x, [0, 1], [-1, -2]).shape, (4, 5, 3, None)
)
self.assertEqual(knp.moveaxis(x, [0, 1], [1, 0]).shape, (3, None, 4, 5))
self.assertEqual(
knp.moveaxis(x, [0, 1], [-2, -1]).shape, (4, 5, None, 3)
)
def test_ndim(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.ndim(x).shape, (2,))
def test_ones_like(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.ones_like(x).shape, (None, 3))
self.assertEqual(knp.ones_like(x).dtype, x.dtype)
def test_zeros_like(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.zeros_like(x).shape, (None, 3))
self.assertEqual(knp.zeros_like(x).dtype, x.dtype)
def test_pad(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.pad(x, 1).shape, (None, 5))
self.assertEqual(knp.pad(x, (1, 2)).shape, (None, 6))
self.assertEqual(knp.pad(x, ((1, 2), (3, 4))).shape, (None, 10))
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.pad(x, 1).shape, (None, 5, 5))
self.assertEqual(knp.pad(x, (1, 2)).shape, (None, 6, 6))
self.assertEqual(
knp.pad(x, ((1, 2), (3, 4), (5, 6))).shape, (None, 10, 14)
)
def test_prod(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.prod(x).shape, ())
self.assertEqual(knp.prod(x, axis=0).shape, (3,))
self.assertEqual(knp.prod(x, axis=1, keepdims=True).shape, (None, 1))
def test_ravel(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.ravel(x).shape, (None,))
def test_real(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.real(x).shape, (None, 3))
def test_reciprocal(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.reciprocal(x).shape, (None, 3))
def test_repeat(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.repeat(x, 2).shape, (None,))
self.assertEqual(knp.repeat(x, 3, axis=1).shape, (None, 9))
self.assertEqual(knp.repeat(x, [1, 2], axis=0).shape, (3, 3))
self.assertEqual(knp.repeat(x, 2, axis=0).shape, (None, 3))
def test_reshape(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.reshape(x, (3, 2)).shape, (3, 2))
self.assertEqual(knp.reshape(x, (3, -1)).shape, (3, None))
def test_roll(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.roll(x, 1).shape, (None, 3))
self.assertEqual(knp.roll(x, 1, axis=1).shape, (None, 3))
self.assertEqual(knp.roll(x, 1, axis=0).shape, (None, 3))
def test_round(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.round(x).shape, (None, 3))
def test_sign(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.sign(x).shape, (None, 3))
def test_sin(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.sin(x).shape, (None, 3))
def test_sinh(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.sinh(x).shape, (None, 3))
def test_size(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.size(x).shape, ())
def test_sort(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.sort(x).shape, (None, 3))
self.assertEqual(knp.sort(x, axis=1).shape, (None, 3))
self.assertEqual(knp.sort(x, axis=0).shape, (None, 3))
def test_split(self):
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.split(x, 2)[0].shape, (None, 3, 3))
self.assertEqual(knp.split(x, 3, axis=1)[0].shape, (None, 1, 3))
self.assertEqual(len(knp.split(x, [1, 3], axis=1)), 3)
self.assertEqual(knp.split(x, [1, 3], axis=1)[0].shape, (None, 1, 3))
self.assertEqual(knp.split(x, [1, 3], axis=1)[1].shape, (None, 2, 3))
self.assertEqual(knp.split(x, [1, 3], axis=1)[2].shape, (None, 0, 3))
def test_sqrt(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.sqrt(x).shape, (None, 3))
def test_stack(self):
x = KerasTensor((None, 3))
y = KerasTensor((None, 3))
self.assertEqual(knp.stack([x, y]).shape, (2, None, 3))
self.assertEqual(knp.stack([x, y], axis=-1).shape, (None, 3, 2))
def test_std(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.std(x).shape, ())
x = KerasTensor((None, 3, 3))
self.assertEqual(knp.std(x, axis=1).shape, (None, 3))
self.assertEqual(knp.std(x, axis=1, keepdims=True).shape, (None, 1, 3))
def test_swapaxes(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.swapaxes(x, 0, 1).shape, (3, None))
def test_tan(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.tan(x).shape, (None, 3))
def test_tanh(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.tanh(x).shape, (None, 3))
def test_tile(self):
x = KerasTensor((None, 3))
self.assertEqual(knp.tile(x, [2]).shape, (None, 6))
self.assertEqual(knp.tile(x, [1, 2]).shape, (None, 6))
self.assertEqual(knp.tile(x, [2, 1, 2]).shape, (2, None, 6))
def test_trace(self):
x = KerasTensor((None, 3, None, 5))
self.assertEqual(knp.trace(x).shape, (None, 5))
self.assertEqual(knp.trace(x, axis1=2, axis2=3).shape, (None, 3))
def test_tril(self):
x = KerasTensor((None, 3, None, 5))
self.assertEqual(knp.tril(x).shape, (None, 3, None, 5))
self.assertEqual(knp.tril(x, k=1).shape, (None, 3, None, 5))
self.assertEqual(knp.tril(x, k=-1).shape, (None, 3, None, 5))
def test_triu(self):
x = KerasTensor((None, 3, None, 5))
self.assertEqual(knp.triu(x).shape, (None, 3, None, 5))
self.assertEqual(knp.triu(x, k=1).shape, (None, 3, None, 5))
self.assertEqual(knp.triu(x, k=-1).shape, (None, 3, None, 5))
def test_vstack(self):
x = KerasTensor((None, 3))
y = KerasTensor((None, 3))
self.assertEqual(knp.vstack([x, y]).shape, (None, 3))
x = KerasTensor((None, 3))
y = KerasTensor((None, None))
self.assertEqual(knp.vstack([x, y]).shape, (None, 3))
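# Static shape inference for one-input ops: same ops as above but with fully
# specified input shapes, so every output dim is concrete.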
class NumpyOneInputOpsStaticShapeTest(testing.TestCase):
def test_mean(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.mean(x).shape, ())
def test_all(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.all(x).shape, ())
def test_any(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.any(x).shape, ())
def test_var(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.var(x).shape, ())
def test_sum(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.sum(x).shape, ())
def test_amax(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.amax(x).shape, ())
def test_amin(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.amin(x).shape, ())
def test_square(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.square(x).shape, (2, 3))
def test_negative(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.negative(x).shape, (2, 3))
def test_abs(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.abs(x).shape, (2, 3))
def test_absolute(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.absolute(x).shape, (2, 3))
def test_squeeze(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.squeeze(x).shape, (2, 3))
x = KerasTensor((2, 1, 3))
self.assertEqual(knp.squeeze(x).shape, (2, 3))
self.assertEqual(knp.squeeze(x, axis=1).shape, (2, 3))
self.assertEqual(knp.squeeze(x, axis=-2).shape, (2, 3))
with self.assertRaises(ValueError):
knp.squeeze(x, axis=0)
def test_transpose(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.transpose(x).shape, (3, 2))
def test_arccos(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.arccos(x).shape, (2, 3))
def test_arccosh(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.arccosh(x).shape, (2, 3))
def test_arcsin(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.arcsin(x).shape, (2, 3))
def test_arcsinh(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.arcsinh(x).shape, (2, 3))
def test_arctan(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.arctan(x).shape, (2, 3))
def test_arctanh(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.arctanh(x).shape, (2, 3))
def test_argmax(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.argmax(x).shape, ())
def test_argmin(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.argmin(x).shape, ())
def test_argsort(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.argsort(x).shape, (2, 3))
self.assertEqual(knp.argsort(x, axis=None).shape, (6,))
def test_array(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.array(x).shape, (2, 3))
def test_average(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.average(x).shape, ())
def test_broadcast_to(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.broadcast_to(x, (2, 2, 3)).shape, (2, 2, 3))
with self.assertRaises(ValueError):
x = KerasTensor((3, 3))
knp.broadcast_to(x, (2, 2, 3))
def test_ceil(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.ceil(x).shape, (2, 3))
def test_clip(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.clip(x, 1, 2).shape, (2, 3))
def test_concatenate(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.concatenate([x, y]).shape, (4, 3))
self.assertEqual(knp.concatenate([x, y], axis=1).shape, (2, 6))
with self.assertRaises(ValueError):
self.assertEqual(knp.concatenate([x, y], axis=None).shape, (None,))
def test_conjugate(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.conjugate(x).shape, (2, 3))
def test_conj(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.conj(x).shape, (2, 3))
def test_copy(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.copy(x).shape, (2, 3))
def test_cos(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.cos(x).shape, (2, 3))
def test_cosh(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.cosh(x).shape, (2, 3))
def test_count_nonzero(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.count_nonzero(x).shape, ())
def test_cumprod(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.cumprod(x).shape, (6,))
def test_cumsum(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.cumsum(x).shape, (6,))
def test_diag(self):
x = KerasTensor((3,))
self.assertEqual(knp.diag(x).shape, (3, 3))
self.assertEqual(knp.diag(x, k=3).shape, (6, 6))
self.assertEqual(knp.diag(x, k=-2).shape, (5, 5))
x = KerasTensor((3, 5))
self.assertEqual(knp.diag(x).shape, (3,))
self.assertEqual(knp.diag(x, k=3).shape, (2,))
self.assertEqual(knp.diag(x, k=-2).shape, (1,))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3, 4))
knp.diag(x)
def test_diagonal(self):
x = KerasTensor((3, 3))
self.assertEqual(knp.diagonal(x).shape, (3,))
self.assertEqual(knp.diagonal(x, offset=1).shape, (2,))
x = KerasTensor((3, 5, 5))
self.assertEqual(knp.diagonal(x).shape, (5, 3))
with self.assertRaises(ValueError):
x = KerasTensor((3,))
knp.diagonal(x)
def test_diff(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.diff(x).shape, (2, 2))
self.assertEqual(knp.diff(x, n=2).shape, (2, 1))
self.assertEqual(knp.diff(x, n=3).shape, (2, 0))
self.assertEqual(knp.diff(x, n=4).shape, (2, 0))
self.assertEqual(knp.diff(x, axis=0).shape, (1, 3))
self.assertEqual(knp.diff(x, n=2, axis=0).shape, (0, 3))
self.assertEqual(knp.diff(x, n=3, axis=0).shape, (0, 3))
def test_dot(self):
x = KerasTensor((2, 3))
y = KerasTensor((3, 2))
z = KerasTensor((4, 3, 2))
self.assertEqual(knp.dot(x, y).shape, (2, 2))
self.assertEqual(knp.dot(x, 2).shape, (2, 3))
self.assertEqual(knp.dot(x, z).shape, (2, 4, 2))
x = KerasTensor((5,))
y = KerasTensor((5,))
self.assertEqual(knp.dot(x, y).shape, ())
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
knp.dot(x, y)
def test_exp(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.exp(x).shape, (2, 3))
def test_expand_dims(self):
x = KerasTensor((2, 3, 4))
self.assertEqual(knp.expand_dims(x, 0).shape, (1, 2, 3, 4))
self.assertEqual(knp.expand_dims(x, 1).shape, (2, 1, 3, 4))
self.assertEqual(knp.expand_dims(x, -2).shape, (2, 3, 1, 4))
def test_expm1(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.expm1(x).shape, (2, 3))
def test_flip(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.flip(x).shape, (2, 3))
def test_floor(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.floor(x).shape, (2, 3))
def test_get_item(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.get_item(x, 1).shape, (3,))
x = KerasTensor((5, 3, 2))
self.assertEqual(knp.get_item(x, 3).shape, (3, 2))
        x = KerasTensor((2,))
self.assertEqual(knp.get_item(x, 0).shape, ())
def test_hstack(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.hstack([x, y]).shape, (2, 6))
def test_imag(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.imag(x).shape, (2, 3))
def test_isfinite(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.isfinite(x).shape, (2, 3))
def test_isinf(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.isinf(x).shape, (2, 3))
def test_isnan(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.isnan(x).shape, (2, 3))
def test_log(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.log(x).shape, (2, 3))
def test_log10(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.log10(x).shape, (2, 3))
def test_log1p(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.log1p(x).shape, (2, 3))
def test_log2(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.log2(x).shape, (2, 3))
def test_logaddexp(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.logaddexp(x, x).shape, (2, 3))
def test_logical_not(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.logical_not(x).shape, (2, 3))
def test_max(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.max(x).shape, ())
def test_median(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.median(x).shape, ())
x = KerasTensor((2, 3, 3))
self.assertEqual(knp.median(x, axis=1).shape, (2, 3))
self.assertEqual(knp.median(x, axis=1, keepdims=True).shape, (2, 1, 3))
def test_meshgrid(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3, 4))
z = KerasTensor((2, 3, 4, 5))
self.assertEqual(knp.meshgrid(x, y)[0].shape, (24, 6))
self.assertEqual(knp.meshgrid(x, y)[1].shape, (24, 6))
self.assertEqual(knp.meshgrid(x, y, indexing="ij")[0].shape, (6, 24))
self.assertEqual(
knp.meshgrid(x, y, z, indexing="ij")[0].shape, (6, 24, 120)
)
with self.assertRaises(ValueError):
knp.meshgrid(x, y, indexing="kk")
def test_moveaxis(self):
x = KerasTensor((2, 3, 4, 5))
self.assertEqual(knp.moveaxis(x, 0, -1).shape, (3, 4, 5, 2))
self.assertEqual(knp.moveaxis(x, -1, 0).shape, (5, 2, 3, 4))
self.assertEqual(knp.moveaxis(x, [0, 1], [-1, -2]).shape, (4, 5, 3, 2))
self.assertEqual(knp.moveaxis(x, [0, 1], [1, 0]).shape, (3, 2, 4, 5))
self.assertEqual(knp.moveaxis(x, [0, 1], [-2, -1]).shape, (4, 5, 2, 3))
def test_ndim(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.ndim(x).shape, (2,))
def test_ones_like(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.ones_like(x).shape, (2, 3))
self.assertEqual(knp.ones_like(x).dtype, x.dtype)
def test_zeros_like(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.zeros_like(x).shape, (2, 3))
self.assertEqual(knp.zeros_like(x).dtype, x.dtype)
def test_pad(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.pad(x, 1).shape, (4, 5))
self.assertEqual(knp.pad(x, (1, 2)).shape, (5, 6))
self.assertEqual(knp.pad(x, ((1, 2), (3, 4))).shape, (5, 10))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
knp.pad(x, ((1, 2), (3, 4), (5, 6)))
def test_prod(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.prod(x).shape, ())
self.assertEqual(knp.prod(x, axis=0).shape, (3,))
self.assertEqual(knp.prod(x, axis=1).shape, (2,))
def test_ravel(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.ravel(x).shape, (6,))
def test_real(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.real(x).shape, (2, 3))
def test_reciprocal(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.reciprocal(x).shape, (2, 3))
def test_repeat(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.repeat(x, 2).shape, (12,))
self.assertEqual(knp.repeat(x, 3, axis=1).shape, (2, 9))
self.assertEqual(knp.repeat(x, [1, 2], axis=0).shape, (3, 3))
def test_reshape(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.reshape(x, (3, 2)).shape, (3, 2))
self.assertEqual(knp.reshape(x, (3, -1)).shape, (3, 2))
self.assertEqual(knp.reshape(x, (6,)).shape, (6,))
self.assertEqual(knp.reshape(x, (-1,)).shape, (6,))
def test_roll(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.roll(x, 1).shape, (2, 3))
self.assertEqual(knp.roll(x, 1, axis=1).shape, (2, 3))
self.assertEqual(knp.roll(x, 1, axis=0).shape, (2, 3))
def test_round(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.round(x).shape, (2, 3))
def test_sign(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.sign(x).shape, (2, 3))
def test_sin(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.sin(x).shape, (2, 3))
def test_sinh(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.sinh(x).shape, (2, 3))
def test_size(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.size(x).shape, ())
def test_sort(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.sort(x).shape, (2, 3))
self.assertEqual(knp.sort(x, axis=1).shape, (2, 3))
self.assertEqual(knp.sort(x, axis=0).shape, (2, 3))
def test_split(self):
x = KerasTensor((2, 3))
self.assertEqual(len(knp.split(x, 2)), 2)
self.assertEqual(knp.split(x, 2)[0].shape, (1, 3))
self.assertEqual(knp.split(x, 3, axis=1)[0].shape, (2, 1))
self.assertEqual(len(knp.split(x, [1, 3], axis=1)), 3)
self.assertEqual(knp.split(x, [1, 3], axis=1)[0].shape, (2, 1))
self.assertEqual(knp.split(x, [1, 3], axis=1)[1].shape, (2, 2))
self.assertEqual(knp.split(x, [1, 3], axis=1)[2].shape, (2, 0))
with self.assertRaises(ValueError):
knp.split(x, 2, axis=1)
def test_sqrt(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.sqrt(x).shape, (2, 3))
def test_stack(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.stack([x, y]).shape, (2, 2, 3))
self.assertEqual(knp.stack([x, y], axis=-1).shape, (2, 3, 2))
with self.assertRaises(ValueError):
x = KerasTensor((2, 3))
y = KerasTensor((3, 3))
knp.stack([x, y])
def test_std(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.std(x).shape, ())
def test_swapaxes(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.swapaxes(x, 0, 1).shape, (3, 2))
def test_tan(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.tan(x).shape, (2, 3))
def test_tanh(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.tanh(x).shape, (2, 3))
def test_tile(self):
x = KerasTensor((2, 3))
self.assertEqual(knp.tile(x, [2]).shape, (2, 6))
self.assertEqual(knp.tile(x, [1, 2]).shape, (2, 6))
self.assertEqual(knp.tile(x, [2, 1, 2]).shape, (2, 2, 6))
def test_trace(self):
x = KerasTensor((2, 3, 4, 5))
self.assertEqual(knp.trace(x).shape, (4, 5))
self.assertEqual(knp.trace(x, axis1=2, axis2=3).shape, (2, 3))
def test_tril(self):
x = KerasTensor((2, 3, 4, 5))
self.assertEqual(knp.tril(x).shape, (2, 3, 4, 5))
self.assertEqual(knp.tril(x, k=1).shape, (2, 3, 4, 5))
self.assertEqual(knp.tril(x, k=-1).shape, (2, 3, 4, 5))
def test_triu(self):
x = KerasTensor((2, 3, 4, 5))
self.assertEqual(knp.triu(x).shape, (2, 3, 4, 5))
self.assertEqual(knp.triu(x, k=1).shape, (2, 3, 4, 5))
self.assertEqual(knp.triu(x, k=-1).shape, (2, 3, 4, 5))
def test_vstack(self):
x = KerasTensor((2, 3))
y = KerasTensor((2, 3))
self.assertEqual(knp.vstack([x, y]).shape, (4, 3))
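# Numerical correctness for two-input ops: results from both the functional
# form (e.g. `knp.add`) and the op class form (e.g. `knp.Add()`) are checked
# against the NumPy reference implementation.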
class NumpyTwoInputOpsCorrectnessTest(testing.TestCase, parameterized.TestCase):
def test_add(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [3, 2, 1]])
z = np.array([[[1, 2, 3], [3, 2, 1]]])
self.assertAllClose(knp.add(x, y), np.add(x, y))
self.assertAllClose(knp.add(x, z), np.add(x, z))
self.assertAllClose(knp.Add()(x, y), np.add(x, y))
self.assertAllClose(knp.Add()(x, z), np.add(x, z))
def test_subtract(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [3, 2, 1]])
z = np.array([[[1, 2, 3], [3, 2, 1]]])
self.assertAllClose(knp.subtract(x, y), np.subtract(x, y))
self.assertAllClose(knp.subtract(x, z), np.subtract(x, z))
self.assertAllClose(knp.Subtract()(x, y), np.subtract(x, y))
self.assertAllClose(knp.Subtract()(x, z), np.subtract(x, z))
def test_multiply(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [3, 2, 1]])
z = np.array([[[1, 2, 3], [3, 2, 1]]])
self.assertAllClose(knp.multiply(x, y), np.multiply(x, y))
self.assertAllClose(knp.multiply(x, z), np.multiply(x, z))
self.assertAllClose(knp.Multiply()(x, y), np.multiply(x, y))
self.assertAllClose(knp.Multiply()(x, z), np.multiply(x, z))
def test_matmul(self):
x = np.ones([2, 3, 4, 5])
y = np.ones([2, 3, 5, 6])
z = np.ones([5, 6])
p = np.ones([4])
self.assertAllClose(knp.matmul(x, y), np.matmul(x, y))
self.assertAllClose(knp.matmul(x, z), np.matmul(x, z))
self.assertAllClose(knp.matmul(p, x), np.matmul(p, x))
self.assertAllClose(knp.Matmul()(x, y), np.matmul(x, y))
self.assertAllClose(knp.Matmul()(x, z), np.matmul(x, z))
self.assertAllClose(knp.Matmul()(p, x), np.matmul(p, x))
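    # Sparse matmul is parameterized over rank, dtype and dense/sparse operand
    # combinations, and skipped on backends without sparse tensor support.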
@parameterized.named_parameters(
named_product(
(
{
"testcase_name": "rank2",
"x_shape": (5, 3),
"y_shape": (3, 4),
},
{
"testcase_name": "rank3",
"x_shape": (2, 5, 3),
"y_shape": (2, 3, 4),
},
{
"testcase_name": "rank4",
"x_shape": (2, 2, 5, 3),
"y_shape": (2, 2, 3, 4),
},
),
dtype=["float16", "float32", "float64", "int32"],
x_sparse=[False, True],
y_sparse=[False, True],
)
)
@pytest.mark.skipif(
not backend.SUPPORTS_SPARSE_TENSORS,
reason="Backend does not support sparse tensors.",
)
def test_matmul_sparse(self, dtype, x_shape, y_shape, x_sparse, y_sparse):
if backend.backend() == "tensorflow":
import tensorflow as tf
if x_sparse and y_sparse and dtype in ("float16", "int32"):
pytest.skip(
f"Sparse sparse matmul unsupported for {dtype}"
" with TensorFlow backend"
)
dense_to_sparse = tf.sparse.from_dense
sparse_class = tf.SparseTensor
elif backend.backend() == "jax":
import jax.experimental.sparse as jax_sparse
dense_to_sparse = functools.partial(
jax_sparse.BCOO.fromdense, n_batch=len(x_shape) - 2
)
sparse_class = jax_sparse.JAXSparse
rng = np.random.default_rng(0)
x = x_np = (4 * rng.standard_normal(x_shape)).astype(dtype)
if x_sparse:
x_np = np.multiply(x_np, rng.random(x_shape) < 0.7)
x = dense_to_sparse(x_np)
y = y_np = (4 * rng.standard_normal(y_shape)).astype(dtype)
if y_sparse:
y_np = np.multiply(y_np, rng.random(y_shape) < 0.7)
y = dense_to_sparse(y_np)
atol = 0.1 if dtype == "float16" else 1e-4
self.assertAllClose(knp.matmul(x, y), np.matmul(x_np, y_np), atol=atol)
if x_sparse and y_sparse:
self.assertIsInstance(knp.matmul(x, y), sparse_class)
def test_power(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [3, 2, 1]])
z = np.array([[[1, 2, 3], [3, 2, 1]]])
self.assertAllClose(knp.power(x, y), np.power(x, y))
self.assertAllClose(knp.power(x, z), np.power(x, z))
self.assertAllClose(knp.Power()(x, y), np.power(x, y))
self.assertAllClose(knp.Power()(x, z), np.power(x, z))
def test_divide(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [3, 2, 1]])
z = np.array([[[1, 2, 3], [3, 2, 1]]])
self.assertAllClose(knp.divide(x, y), np.divide(x, y))
self.assertAllClose(knp.divide(x, z), np.divide(x, z))
self.assertAllClose(knp.Divide()(x, y), np.divide(x, y))
self.assertAllClose(knp.Divide()(x, z), np.divide(x, z))
def test_divide_no_nan(self):
x = np.array(
[[2, 1, 0], [np.inf, -np.inf, np.nan], [np.inf, -np.inf, np.nan]]
)
y = np.array([[2, 0, 0], [0, 0, 0], [3, 2, 1]])
expected_result = np.array(
[[1, 0, 0], [0, 0, 0], [np.inf, -np.inf, np.nan]]
)
self.assertAllClose(knp.divide_no_nan(x, y), expected_result)
self.assertAllClose(knp.DivideNoNan()(x, y), expected_result)
def test_true_divide(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [3, 2, 1]])
z = np.array([[[1, 2, 3], [3, 2, 1]]])
self.assertAllClose(knp.true_divide(x, y), np.true_divide(x, y))
self.assertAllClose(knp.true_divide(x, z), np.true_divide(x, z))
self.assertAllClose(knp.TrueDivide()(x, y), np.true_divide(x, y))
self.assertAllClose(knp.TrueDivide()(x, z), np.true_divide(x, z))
def test_append(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [3, 2, 1]])
z = np.array([[[1, 2, 3], [3, 2, 1]], [[4, 5, 6], [3, 2, 1]]])
self.assertAllClose(knp.append(x, y), np.append(x, y))
self.assertAllClose(knp.append(x, y, axis=1), np.append(x, y, axis=1))
self.assertAllClose(knp.append(x, z), np.append(x, z))
self.assertAllClose(knp.Append()(x, y), np.append(x, y))
self.assertAllClose(knp.Append(axis=1)(x, y), np.append(x, y, axis=1))
self.assertAllClose(knp.Append()(x, z), np.append(x, z))
def test_arctan2(self):
x = np.array([[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]])
y = np.array([[4.0, 5.0, 6.0], [3.0, 2.0, 1.0]])
self.assertAllClose(knp.arctan2(x, y), np.arctan2(x, y))
self.assertAllClose(knp.Arctan2()(x, y), np.arctan2(x, y))
def test_cross(self):
x1 = np.ones([2, 1, 4, 3])
x2 = np.ones([2, 1, 4, 2])
y1 = np.ones([2, 1, 4, 3])
y2 = np.ones([1, 5, 4, 3])
y3 = np.ones([1, 5, 4, 2])
self.assertAllClose(knp.cross(x1, y1), np.cross(x1, y1))
self.assertAllClose(knp.cross(x1, y2), np.cross(x1, y2))
if backend.backend() != "torch":
# API divergence between `torch.cross` and `np.cross`
# `torch.cross` only allows dim 3, `np.cross` allows dim 2 or 3
self.assertAllClose(knp.cross(x1, y3), np.cross(x1, y3))
self.assertAllClose(knp.cross(x2, y3), np.cross(x2, y3))
self.assertAllClose(knp.Cross()(x1, y1), np.cross(x1, y1))
self.assertAllClose(knp.Cross()(x1, y2), np.cross(x1, y2))
if backend.backend() != "torch":
# API divergence between `torch.cross` and `np.cross`
# `torch.cross` only allows dim 3, `np.cross` allows dim 2 or 3
self.assertAllClose(knp.Cross()(x1, y3), np.cross(x1, y3))
self.assertAllClose(knp.Cross()(x2, y3), np.cross(x2, y3))
def test_einsum(self):
x = np.arange(24).reshape([2, 3, 4]).astype("float32")
y = np.arange(24).reshape([2, 4, 3]).astype("float32")
self.assertAllClose(
knp.einsum("ijk,lkj->il", x, y),
np.einsum("ijk,lkj->il", x, y),
)
self.assertAllClose(
knp.einsum("ijk,ikj->i", x, y),
np.einsum("ijk,ikj->i", x, y),
)
self.assertAllClose(
knp.einsum("i...,j...k->...ijk", x, y),
np.einsum("i..., j...k->...ijk", x, y),
)
self.assertAllClose(knp.einsum(",ijk", 5, y), np.einsum(",ijk", 5, y))
self.assertAllClose(
knp.Einsum("ijk,lkj->il")(x, y),
np.einsum("ijk,lkj->il", x, y),
)
self.assertAllClose(
knp.Einsum("ijk,ikj->i")(x, y),
np.einsum("ijk,ikj->i", x, y),
)
self.assertAllClose(
knp.Einsum("i...,j...k->...ijk")(x, y),
np.einsum("i...,j...k->...ijk", x, y),
)
self.assertAllClose(knp.Einsum(",ijk")(5, y), np.einsum(",ijk", 5, y))
def test_full_like(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.full_like(x, 2), np.full_like(x, 2))
self.assertAllClose(
knp.full_like(x, 2, dtype="float32"),
np.full_like(x, 2, dtype="float32"),
)
self.assertAllClose(
knp.full_like(x, np.ones([2, 3])),
np.full_like(x, np.ones([2, 3])),
)
self.assertAllClose(knp.FullLike()(x, 2), np.full_like(x, 2))
self.assertAllClose(
knp.FullLike()(x, 2, dtype="float32"),
np.full_like(x, 2, dtype="float32"),
)
self.assertAllClose(
knp.FullLike()(x, np.ones([2, 3])),
np.full_like(x, np.ones([2, 3])),
)
def test_greater(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [3, 2, 1]])
self.assertAllClose(knp.greater(x, y), np.greater(x, y))
self.assertAllClose(knp.greater(x, 2), np.greater(x, 2))
self.assertAllClose(knp.greater(2, x), np.greater(2, x))
self.assertAllClose(knp.Greater()(x, y), np.greater(x, y))
self.assertAllClose(knp.Greater()(x, 2), np.greater(x, 2))
self.assertAllClose(knp.Greater()(2, x), np.greater(2, x))
def test_greater_equal(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [3, 2, 1]])
self.assertAllClose(
knp.greater_equal(x, y),
np.greater_equal(x, y),
)
self.assertAllClose(
knp.greater_equal(x, 2),
np.greater_equal(x, 2),
)
self.assertAllClose(
knp.greater_equal(2, x),
np.greater_equal(2, x),
)
self.assertAllClose(
knp.GreaterEqual()(x, y),
np.greater_equal(x, y),
)
self.assertAllClose(
knp.GreaterEqual()(x, 2),
np.greater_equal(x, 2),
)
self.assertAllClose(
knp.GreaterEqual()(2, x),
np.greater_equal(2, x),
)
def test_isclose(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [3, 2, 1]])
self.assertAllClose(knp.isclose(x, y), np.isclose(x, y))
self.assertAllClose(knp.isclose(x, 2), np.isclose(x, 2))
self.assertAllClose(knp.isclose(2, x), np.isclose(2, x))
self.assertAllClose(knp.Isclose()(x, y), np.isclose(x, y))
self.assertAllClose(knp.Isclose()(x, 2), np.isclose(x, 2))
self.assertAllClose(knp.Isclose()(2, x), np.isclose(2, x))
def test_less(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [3, 2, 1]])
self.assertAllClose(knp.less(x, y), np.less(x, y))
self.assertAllClose(knp.less(x, 2), np.less(x, 2))
self.assertAllClose(knp.less(2, x), np.less(2, x))
self.assertAllClose(knp.Less()(x, y), np.less(x, y))
self.assertAllClose(knp.Less()(x, 2), np.less(x, 2))
self.assertAllClose(knp.Less()(2, x), np.less(2, x))
def test_less_equal(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [3, 2, 1]])
self.assertAllClose(knp.less_equal(x, y), np.less_equal(x, y))
self.assertAllClose(knp.less_equal(x, 2), np.less_equal(x, 2))
self.assertAllClose(knp.less_equal(2, x), np.less_equal(2, x))
self.assertAllClose(knp.LessEqual()(x, y), np.less_equal(x, y))
self.assertAllClose(knp.LessEqual()(x, 2), np.less_equal(x, 2))
self.assertAllClose(knp.LessEqual()(2, x), np.less_equal(2, x))
def test_linspace(self):
self.assertAllClose(knp.linspace(0, 10, 5), np.linspace(0, 10, 5))
self.assertAllClose(
knp.linspace(0, 10, 5, endpoint=False),
np.linspace(0, 10, 5, endpoint=False),
)
self.assertAllClose(knp.Linspace(num=5)(0, 10), np.linspace(0, 10, 5))
self.assertAllClose(
knp.Linspace(num=5, endpoint=False)(0, 10),
np.linspace(0, 10, 5, endpoint=False),
)
start = np.zeros([2, 3, 4])
stop = np.ones([2, 3, 4])
self.assertAllClose(
knp.linspace(start, stop, 5, retstep=True)[0],
np.linspace(start, stop, 5, retstep=True)[0],
)
self.assertAllClose(
backend.convert_to_numpy(
knp.linspace(start, stop, 5, endpoint=False, retstep=True)[0]
),
np.linspace(start, stop, 5, endpoint=False, retstep=True)[0],
)
self.assertAllClose(
backend.convert_to_numpy(
knp.linspace(
start, stop, 5, endpoint=False, retstep=True, dtype="int32"
)[0]
),
np.linspace(
start, stop, 5, endpoint=False, retstep=True, dtype="int32"
)[0],
)
self.assertAllClose(
knp.Linspace(5, retstep=True)(start, stop)[0],
np.linspace(start, stop, 5, retstep=True)[0],
)
self.assertAllClose(
backend.convert_to_numpy(
knp.Linspace(5, endpoint=False, retstep=True)(start, stop)[0]
),
np.linspace(start, stop, 5, endpoint=False, retstep=True)[0],
)
self.assertAllClose(
backend.convert_to_numpy(
knp.Linspace(5, endpoint=False, retstep=True, dtype="int32")(
start, stop
)[0]
),
np.linspace(
start, stop, 5, endpoint=False, retstep=True, dtype="int32"
)[0],
)
def test_logical_and(self):
x = np.array([[True, False], [True, True]])
y = np.array([[False, False], [True, False]])
self.assertAllClose(knp.logical_and(x, y), np.logical_and(x, y))
self.assertAllClose(knp.logical_and(x, True), np.logical_and(x, True))
self.assertAllClose(knp.logical_and(True, x), np.logical_and(True, x))
self.assertAllClose(knp.LogicalAnd()(x, y), np.logical_and(x, y))
self.assertAllClose(knp.LogicalAnd()(x, True), np.logical_and(x, True))
self.assertAllClose(knp.LogicalAnd()(True, x), np.logical_and(True, x))
def test_logical_or(self):
x = np.array([[True, False], [True, True]])
y = np.array([[False, False], [True, False]])
self.assertAllClose(knp.logical_or(x, y), np.logical_or(x, y))
self.assertAllClose(knp.logical_or(x, True), np.logical_or(x, True))
self.assertAllClose(knp.logical_or(True, x), np.logical_or(True, x))
self.assertAllClose(knp.LogicalOr()(x, y), np.logical_or(x, y))
self.assertAllClose(knp.LogicalOr()(x, True), np.logical_or(x, True))
self.assertAllClose(knp.LogicalOr()(True, x), np.logical_or(True, x))
def test_logspace(self):
self.assertAllClose(knp.logspace(0, 10, 5), np.logspace(0, 10, 5))
self.assertAllClose(
knp.logspace(0, 10, 5, endpoint=False),
np.logspace(0, 10, 5, endpoint=False),
)
self.assertAllClose(knp.Logspace(num=5)(0, 10), np.logspace(0, 10, 5))
self.assertAllClose(
knp.Logspace(num=5, endpoint=False)(0, 10),
np.logspace(0, 10, 5, endpoint=False),
)
start = np.zeros([2, 3, 4])
stop = np.ones([2, 3, 4])
self.assertAllClose(
knp.logspace(start, stop, 5, base=10),
np.logspace(start, stop, 5, base=10),
)
self.assertAllClose(
knp.logspace(start, stop, 5, endpoint=False, base=10),
np.logspace(start, stop, 5, endpoint=False, base=10),
)
self.assertAllClose(
knp.Logspace(5, base=10)(start, stop),
np.logspace(start, stop, 5, base=10),
)
self.assertAllClose(
knp.Logspace(5, endpoint=False, base=10)(start, stop),
np.logspace(start, stop, 5, endpoint=False, base=10),
)
def test_maximum(self):
x = np.array([[1, 2], [3, 4]])
y = np.array([[5, 6], [7, 8]])
self.assertAllClose(knp.maximum(x, y), np.maximum(x, y))
self.assertAllClose(knp.maximum(x, 1), np.maximum(x, 1))
self.assertAllClose(knp.maximum(1, x), np.maximum(1, x))
self.assertAllClose(knp.Maximum()(x, y), np.maximum(x, y))
self.assertAllClose(knp.Maximum()(x, 1), np.maximum(x, 1))
self.assertAllClose(knp.Maximum()(1, x), np.maximum(1, x))
def test_minimum(self):
x = np.array([[1, 2], [3, 4]])
y = np.array([[5, 6], [7, 8]])
self.assertAllClose(knp.minimum(x, y), np.minimum(x, y))
self.assertAllClose(knp.minimum(x, 1), np.minimum(x, 1))
self.assertAllClose(knp.minimum(1, x), np.minimum(1, x))
self.assertAllClose(knp.Minimum()(x, y), np.minimum(x, y))
self.assertAllClose(knp.Minimum()(x, 1), np.minimum(x, 1))
self.assertAllClose(knp.Minimum()(1, x), np.minimum(1, x))
def test_mod(self):
x = np.array([[1, 2], [3, 4]])
y = np.array([[5, 6], [7, 8]])
self.assertAllClose(knp.mod(x, y), np.mod(x, y))
self.assertAllClose(knp.mod(x, 1), np.mod(x, 1))
self.assertAllClose(knp.mod(1, x), np.mod(1, x))
self.assertAllClose(knp.Mod()(x, y), np.mod(x, y))
self.assertAllClose(knp.Mod()(x, 1), np.mod(x, 1))
self.assertAllClose(knp.Mod()(1, x), np.mod(1, x))
def test_not_equal(self):
x = np.array([[1, 2], [3, 4]])
y = np.array([[5, 6], [7, 8]])
self.assertAllClose(knp.not_equal(x, y), np.not_equal(x, y))
self.assertAllClose(knp.not_equal(x, 1), np.not_equal(x, 1))
self.assertAllClose(knp.not_equal(1, x), np.not_equal(1, x))
self.assertAllClose(knp.NotEqual()(x, y), np.not_equal(x, y))
self.assertAllClose(knp.NotEqual()(x, 1), np.not_equal(x, 1))
self.assertAllClose(knp.NotEqual()(1, x), np.not_equal(1, x))
def test_outer(self):
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
self.assertAllClose(knp.outer(x, y), np.outer(x, y))
self.assertAllClose(knp.Outer()(x, y), np.outer(x, y))
x = np.ones([2, 3, 4])
y = np.ones([2, 3, 4, 5, 6])
self.assertAllClose(knp.outer(x, y), np.outer(x, y))
self.assertAllClose(knp.Outer()(x, y), np.outer(x, y))
def test_quantile(self):
x = np.arange(24).reshape([2, 3, 4]).astype("float32")
# q as scalar
q = np.array(0.5, dtype="float32")
self.assertAllClose(knp.quantile(x, q), np.quantile(x, q))
self.assertAllClose(
knp.quantile(x, q, keepdims=True), np.quantile(x, q, keepdims=True)
)
# q as 1D tensor
q = np.array([0.5, 1.0], dtype="float32")
self.assertAllClose(knp.quantile(x, q), np.quantile(x, q))
self.assertAllClose(
knp.quantile(x, q, keepdims=True), np.quantile(x, q, keepdims=True)
)
self.assertAllClose(
knp.quantile(x, q, axis=1), np.quantile(x, q, axis=1)
)
self.assertAllClose(
knp.quantile(x, q, axis=1, keepdims=True),
np.quantile(x, q, axis=1, keepdims=True),
)
# multiple axes
self.assertAllClose(
knp.quantile(x, q, axis=(1, 2)), np.quantile(x, q, axis=(1, 2))
)
# test all supported methods
q = np.array([0.501, 1.0], dtype="float32")
for method in ["linear", "lower", "higher", "midpoint", "nearest"]:
self.assertAllClose(
knp.quantile(x, q, method=method),
np.quantile(x, q, method=method),
)
self.assertAllClose(
knp.quantile(x, q, axis=1, method=method),
np.quantile(x, q, axis=1, method=method),
)
def test_take(self):
x = np.arange(24).reshape([1, 2, 3, 4])
indices = np.array([0, 1])
self.assertAllClose(knp.take(x, indices), np.take(x, indices))
self.assertAllClose(knp.take(x, 0), np.take(x, 0))
self.assertAllClose(knp.take(x, 0, axis=1), np.take(x, 0, axis=1))
self.assertAllClose(knp.Take()(x, indices), np.take(x, indices))
self.assertAllClose(knp.Take()(x, 0), np.take(x, 0))
self.assertAllClose(knp.Take(axis=1)(x, 0), np.take(x, 0, axis=1))
# test with multi-dimensional indices
rng = np.random.default_rng(0)
x = rng.standard_normal((2, 3, 4, 5))
indices = rng.integers(0, 4, (6, 7))
self.assertAllClose(
knp.take(x, indices, axis=2),
np.take(x, indices, axis=2),
)
# test with negative axis
self.assertAllClose(
knp.take(x, indices, axis=-2),
np.take(x, indices, axis=-2),
)
# test with axis=None & x.ndim=2
x = np.array(([1, 2], [3, 4]))
indices = np.array([2, 3])
self.assertAllClose(
knp.take(x, indices, axis=None), np.take(x, indices, axis=None)
)
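# `take` is also exercised with sparse `indices`; the NumPy reference
# densifies them via `backend.convert_to_numpy` (implicit entries become
# index 0) before comparing.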
@parameterized.named_parameters(
named_product(
[
{"testcase_name": "axis_none", "axis": None},
{"testcase_name": "axis_0", "axis": 0},
{"testcase_name": "axis_1", "axis": 1},
{"testcase_name": "axis_minus1", "axis": -1},
],
dtype=[
"float16",
"float32",
"float64",
"uint8",
"int8",
"int16",
"int32",
],
)
)
@pytest.mark.skipif(
not backend.SUPPORTS_SPARSE_TENSORS,
reason="Backend does not support sparse tensors.",
)
def test_take_sparse(self, dtype, axis):
rng = np.random.default_rng(0)
x = (4 * rng.standard_normal((3, 4, 5))).astype(dtype)
if backend.backend() == "tensorflow":
import tensorflow as tf
indices = tf.SparseTensor([[0, 0], [1, 2]], [1, 2], (2, 3))
elif backend.backend() == "jax":
import jax.experimental.sparse as jax_sparse
indices = jax_sparse.BCOO(([1, 2], [[0, 0], [1, 2]]), shape=(2, 3))
self.assertAllClose(
knp.take(x, indices, axis=axis),
np.take(x, backend.convert_to_numpy(indices), axis=axis),
)
def test_take_along_axis(self):
x = np.arange(24).reshape([1, 2, 3, 4])
indices = np.ones([1, 4, 1, 1], dtype=np.int32)
self.assertAllClose(
knp.take_along_axis(x, indices, axis=1),
np.take_along_axis(x, indices, axis=1),
)
self.assertAllClose(
knp.TakeAlongAxis(axis=1)(x, indices),
np.take_along_axis(x, indices, axis=1),
)
x = np.arange(12).reshape([1, 1, 3, 4])
indices = np.ones([1, 4, 1, 1], dtype=np.int32)
self.assertAllClose(
knp.take_along_axis(x, indices, axis=2),
np.take_along_axis(x, indices, axis=2),
)
self.assertAllClose(
knp.TakeAlongAxis(axis=2)(x, indices),
np.take_along_axis(x, indices, axis=2),
)
def test_tensordot(self):
x = np.arange(24).reshape([1, 2, 3, 4]).astype("float32")
y = np.arange(24).reshape([3, 4, 1, 2]).astype("float32")
self.assertAllClose(
knp.tensordot(x, y, axes=2), np.tensordot(x, y, axes=2)
)
self.assertAllClose(
knp.tensordot(x, y, axes=([0, 1], [2, 3])),
np.tensordot(x, y, axes=([0, 1], [2, 3])),
)
self.assertAllClose(
knp.Tensordot(axes=2)(x, y),
np.tensordot(x, y, axes=2),
)
self.assertAllClose(
knp.Tensordot(axes=([0, 1], [2, 3]))(x, y),
np.tensordot(x, y, axes=([0, 1], [2, 3])),
)
self.assertAllClose(
knp.Tensordot(axes=[0, 2])(x, y),
np.tensordot(x, y, axes=[0, 2]),
)
def test_vdot(self):
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
self.assertAllClose(knp.vdot(x, y), np.vdot(x, y))
self.assertAllClose(knp.Vdot()(x, y), np.vdot(x, y))
def test_where(self):
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
self.assertAllClose(knp.where(x > 1, x, y), np.where(x > 1, x, y))
self.assertAllClose(knp.Where()(x > 1, x, y), np.where(x > 1, x, y))
self.assertAllClose(knp.where(x > 1), np.where(x > 1))
self.assertAllClose(knp.Where()(x > 1), np.where(x > 1))
with self.assertRaisesRegexp(
ValueError, "`x1` and `x2` either both should be `None`"
):
knp.where(x > 1, x, None)
def test_digitize(self):
x = np.array([0.0, 1.0, 3.0, 1.6])
bins = np.array([0.0, 3.0, 4.5, 7.0])
self.assertAllClose(knp.digitize(x, bins), np.digitize(x, bins))
self.assertAllClose(knp.Digitize()(x, bins), np.digitize(x, bins))
self.assertTrue(
standardize_dtype(knp.digitize(x, bins).dtype) == "int32"
)
self.assertTrue(
standardize_dtype(knp.Digitize()(x, bins).dtype) == "int32"
)
x = np.array([0.2, 6.4, 3.0, 1.6])
bins = np.array([0.0, 1.0, 2.5, 4.0, 10.0])
self.assertAllClose(knp.digitize(x, bins), np.digitize(x, bins))
self.assertAllClose(knp.Digitize()(x, bins), np.digitize(x, bins))
self.assertTrue(
standardize_dtype(knp.digitize(x, bins).dtype) == "int32"
)
self.assertTrue(
standardize_dtype(knp.Digitize()(x, bins).dtype) == "int32"
)
x = np.array([1, 4, 10, 15])
bins = np.array([4, 10, 14, 15])
self.assertAllClose(knp.digitize(x, bins), np.digitize(x, bins))
self.assertAllClose(knp.Digitize()(x, bins), np.digitize(x, bins))
self.assertTrue(
standardize_dtype(knp.digitize(x, bins).dtype) == "int32"
)
self.assertTrue(
standardize_dtype(knp.Digitize()(x, bins).dtype) == "int32"
)
class NumpyOneInputOpsCorrectnessTest(testing.TestCase, parameterized.TestCase):
def test_mean(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.mean(x), np.mean(x))
self.assertAllClose(knp.mean(x, axis=()), np.mean(x, axis=()))
self.assertAllClose(knp.mean(x, axis=1), np.mean(x, axis=1))
self.assertAllClose(knp.mean(x, axis=(1,)), np.mean(x, axis=(1,)))
self.assertAllClose(
knp.mean(x, axis=1, keepdims=True),
np.mean(x, axis=1, keepdims=True),
)
self.assertAllClose(knp.Mean()(x), np.mean(x))
self.assertAllClose(knp.Mean(axis=1)(x), np.mean(x, axis=1))
self.assertAllClose(
knp.Mean(axis=1, keepdims=True)(x),
np.mean(x, axis=1, keepdims=True),
)
# test overflow
x = np.array([65504, 65504, 65504], dtype="float16")
self.assertAllClose(knp.mean(x), np.mean(x))
def test_all(self):
x = np.array([[True, False, True], [True, True, True]])
self.assertAllClose(knp.all(x), np.all(x))
self.assertAllClose(knp.all(x, axis=()), np.all(x, axis=()))
self.assertAllClose(knp.all(x, axis=1), np.all(x, axis=1))
self.assertAllClose(knp.all(x, axis=(1,)), np.all(x, axis=(1,)))
self.assertAllClose(
knp.all(x, axis=1, keepdims=True),
np.all(x, axis=1, keepdims=True),
)
self.assertAllClose(knp.All()(x), np.all(x))
self.assertAllClose(knp.All(axis=1)(x), np.all(x, axis=1))
self.assertAllClose(
knp.All(axis=1, keepdims=True)(x),
np.all(x, axis=1, keepdims=True),
)
def test_any(self):
x = np.array([[True, False, True], [True, True, True]])
self.assertAllClose(knp.any(x), np.any(x))
self.assertAllClose(knp.any(x, axis=()), np.any(x, axis=()))
self.assertAllClose(knp.any(x, axis=1), np.any(x, axis=1))
self.assertAllClose(knp.any(x, axis=(1,)), np.any(x, axis=(1,)))
self.assertAllClose(
knp.any(x, axis=1, keepdims=True),
np.any(x, axis=1, keepdims=True),
)
self.assertAllClose(knp.Any()(x), np.any(x))
self.assertAllClose(knp.Any(axis=1)(x), np.any(x, axis=1))
self.assertAllClose(
knp.Any(axis=1, keepdims=True)(x),
np.any(x, axis=1, keepdims=True),
)
def test_var(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.var(x), np.var(x))
self.assertAllClose(knp.var(x, axis=()), np.var(x, axis=()))
self.assertAllClose(knp.var(x, axis=1), np.var(x, axis=1))
self.assertAllClose(knp.var(x, axis=(1,)), np.var(x, axis=(1,)))
self.assertAllClose(
knp.var(x, axis=1, keepdims=True),
np.var(x, axis=1, keepdims=True),
)
self.assertAllClose(knp.Var()(x), np.var(x))
self.assertAllClose(knp.Var(axis=1)(x), np.var(x, axis=1))
self.assertAllClose(
knp.Var(axis=1, keepdims=True)(x),
np.var(x, axis=1, keepdims=True),
)
def test_sum(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.sum(x), np.sum(x))
self.assertAllClose(knp.sum(x, axis=()), np.sum(x, axis=()))
self.assertAllClose(knp.sum(x, axis=1), np.sum(x, axis=1))
self.assertAllClose(knp.sum(x, axis=(1,)), np.sum(x, axis=(1,)))
self.assertAllClose(
knp.sum(x, axis=1, keepdims=True),
np.sum(x, axis=1, keepdims=True),
)
self.assertAllClose(knp.Sum()(x), np.sum(x))
self.assertAllClose(knp.Sum(axis=1)(x), np.sum(x, axis=1))
self.assertAllClose(
knp.Sum(axis=1, keepdims=True)(x),
np.sum(x, axis=1, keepdims=True),
)
def test_amax(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.amax(x), np.amax(x))
self.assertAllClose(knp.amax(x, axis=()), np.amax(x, axis=()))
self.assertAllClose(knp.amax(x, axis=1), np.amax(x, axis=1))
self.assertAllClose(knp.amax(x, axis=(1,)), np.amax(x, axis=(1,)))
self.assertAllClose(
knp.amax(x, axis=1, keepdims=True),
np.amax(x, axis=1, keepdims=True),
)
self.assertAllClose(knp.Amax()(x), np.amax(x))
self.assertAllClose(knp.Amax(axis=1)(x), np.amax(x, axis=1))
self.assertAllClose(
knp.Amax(axis=1, keepdims=True)(x),
np.amax(x, axis=1, keepdims=True),
)
def test_amin(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.amin(x), np.amin(x))
self.assertAllClose(knp.amin(x, axis=()), np.amin(x, axis=()))
self.assertAllClose(knp.amin(x, axis=1), np.amin(x, axis=1))
self.assertAllClose(knp.amin(x, axis=(1,)), np.amin(x, axis=(1,)))
self.assertAllClose(
knp.amin(x, axis=1, keepdims=True),
np.amin(x, axis=1, keepdims=True),
)
self.assertAllClose(knp.Amin()(x), np.amin(x))
self.assertAllClose(knp.Amin(axis=1)(x), np.amin(x, axis=1))
self.assertAllClose(
knp.Amin(axis=1, keepdims=True)(x),
np.amin(x, axis=1, keepdims=True),
)
def test_square(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.square(x), np.square(x))
self.assertAllClose(knp.Square()(x), np.square(x))
def test_negative(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.negative(x), np.negative(x))
self.assertAllClose(knp.Negative()(x), np.negative(x))
def test_abs(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.abs(x), np.abs(x))
self.assertAllClose(knp.Abs()(x), np.abs(x))
def test_absolute(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.absolute(x), np.absolute(x))
self.assertAllClose(knp.Absolute()(x), np.absolute(x))
def test_squeeze(self):
x = np.ones([1, 3, 1, 5])
self.assertAllClose(knp.squeeze(x), np.squeeze(x))
self.assertAllClose(knp.squeeze(x, axis=0), np.squeeze(x, axis=0))
self.assertAllClose(knp.Squeeze()(x), np.squeeze(x))
self.assertAllClose(knp.Squeeze(axis=0)(x), np.squeeze(x, axis=0))
def test_transpose(self):
x = np.ones([1, 2, 3, 4, 5])
self.assertAllClose(knp.transpose(x), np.transpose(x))
self.assertAllClose(
knp.transpose(x, axes=(1, 0, 3, 2, 4)),
np.transpose(x, axes=(1, 0, 3, 2, 4)),
)
self.assertAllClose(knp.Transpose()(x), np.transpose(x))
self.assertAllClose(
knp.Transpose(axes=(1, 0, 3, 2, 4))(x),
np.transpose(x, axes=(1, 0, 3, 2, 4)),
)
def test_arccos(self):
x = np.array([[1, 0.5, -0.7], [0.9, 0.2, -1]])
self.assertAllClose(knp.arccos(x), np.arccos(x))
self.assertAllClose(knp.Arccos()(x), np.arccos(x))
def test_arccosh(self):
x = np.array([[1, 0.5, -0.7], [0.9, 0.2, -1]])
self.assertAllClose(knp.arccosh(x), np.arccosh(x))
self.assertAllClose(knp.Arccosh()(x), np.arccosh(x))
def test_arcsin(self):
x = np.array([[1, 0.5, -0.7], [0.9, 0.2, -1]])
self.assertAllClose(knp.arcsin(x), np.arcsin(x))
self.assertAllClose(knp.Arcsin()(x), np.arcsin(x))
def test_arcsinh(self):
x = np.array([[1, 0.5, -0.7], [0.9, 0.2, -1]])
self.assertAllClose(knp.arcsinh(x), np.arcsinh(x))
self.assertAllClose(knp.Arcsinh()(x), np.arcsinh(x))
def test_arctan(self):
x = np.array([[1, 0.5, -0.7], [0.9, 0.2, -1]])
self.assertAllClose(knp.arctan(x), np.arctan(x))
self.assertAllClose(knp.Arctan()(x), np.arctan(x))
def test_arctanh(self):
x = np.array([[1, 0.5, -0.7], [0.9, 0.2, -1]])
self.assertAllClose(knp.arctanh(x), np.arctanh(x))
self.assertAllClose(knp.Arctanh()(x), np.arctanh(x))
def test_argmax(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.argmax(x), np.argmax(x))
self.assertAllClose(knp.argmax(x, axis=1), np.argmax(x, axis=1))
self.assertAllClose(knp.Argmax()(x), np.argmax(x))
self.assertAllClose(knp.Argmax(axis=1)(x), np.argmax(x, axis=1))
def test_argmin(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.argmin(x), np.argmin(x))
self.assertAllClose(knp.argmin(x, axis=1), np.argmin(x, axis=1))
self.assertAllClose(knp.Argmin()(x), np.argmin(x))
self.assertAllClose(knp.Argmin(axis=1)(x), np.argmin(x, axis=1))
def test_argsort(self):
x = np.array([[1, 2, 3], [4, 5, 6]])
self.assertAllClose(knp.argsort(x), np.argsort(x))
self.assertAllClose(knp.argsort(x, axis=1), np.argsort(x, axis=1))
self.assertAllClose(knp.argsort(x, axis=None), np.argsort(x, axis=None))
self.assertAllClose(knp.Argsort()(x), np.argsort(x))
self.assertAllClose(knp.Argsort(axis=1)(x), np.argsort(x, axis=1))
self.assertAllClose(knp.Argsort(axis=None)(x), np.argsort(x, axis=None))
x = np.array(1) # rank == 0
self.assertAllClose(knp.argsort(x), np.argsort(x))
self.assertAllClose(knp.Argsort()(x), np.argsort(x))
def test_array(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.array(x), np.array(x))
self.assertAllClose(knp.Array()(x), np.array(x))
self.assertTrue(backend.is_tensor(knp.array(x)))
self.assertTrue(backend.is_tensor(knp.Array()(x)))
# Check dtype conversion.
x = [[1, 0, 1], [1, 1, 0]]
output = knp.array(x, dtype="int32")
self.assertEqual(standardize_dtype(output.dtype), "int32")
x = [[1, 0, 1], [1, 1, 0]]
output = knp.array(x, dtype="float32")
self.assertEqual(standardize_dtype(output.dtype), "float32")
x = [[1, 0, 1], [1, 1, 0]]
output = knp.array(x, dtype="bool")
self.assertEqual(standardize_dtype(output.dtype), "bool")
def test_average(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
weights = np.ones([2, 3])
weights_1d = np.ones([3])
self.assertAllClose(knp.average(x), np.average(x))
self.assertAllClose(knp.average(x, axis=()), np.average(x, axis=()))
self.assertAllClose(knp.average(x, axis=1), np.average(x, axis=1))
self.assertAllClose(knp.average(x, axis=(1,)), np.average(x, axis=(1,)))
self.assertAllClose(
knp.average(x, axis=1, weights=weights),
np.average(x, axis=1, weights=weights),
)
self.assertAllClose(
knp.average(x, axis=1, weights=weights_1d),
np.average(x, axis=1, weights=weights_1d),
)
self.assertAllClose(knp.Average()(x), np.average(x))
self.assertAllClose(knp.Average(axis=1)(x), np.average(x, axis=1))
self.assertAllClose(
knp.Average(axis=1)(x, weights=weights),
np.average(x, axis=1, weights=weights),
)
self.assertAllClose(
knp.Average(axis=1)(x, weights=weights_1d),
np.average(x, axis=1, weights=weights_1d),
)
def test_bincount(self):
if backend.backend() == "tensorflow":
import tensorflow as tf
if tf.config.list_physical_devices("GPU"):
self.skipTest("bincount does not work on GPU with the TensorFlow backend")
x = np.array([1, 1, 2, 3, 2, 4, 4, 5])
weights = np.array([0, 0, 3, 2, 1, 1, 4, 2])
minlength = 3
self.assertAllClose(
knp.bincount(x, weights=weights, minlength=minlength),
np.bincount(x, weights=weights, minlength=minlength),
)
self.assertAllClose(
knp.Bincount(weights=weights, minlength=minlength)(x),
np.bincount(x, weights=weights, minlength=minlength),
)
x = np.array([[1, 1, 2, 3, 2, 4, 4, 5]])
weights = np.array([[0, 0, 3, 2, 1, 1, 4, 2]])
expected_output = np.array([[0, 0, 4, 2, 5, 2]])
self.assertAllClose(
knp.bincount(x, weights=weights, minlength=minlength),
expected_output,
)
self.assertAllClose(
knp.Bincount(weights=weights, minlength=minlength)(x),
expected_output,
)
# test with weights=None
expected_output = np.array([[0, 2, 2, 1, 2, 1]])
self.assertAllClose(
knp.Bincount(weights=None, minlength=minlength)(x),
expected_output,
)
def test_broadcast_to(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(
knp.broadcast_to(x, [2, 2, 3]),
np.broadcast_to(x, [2, 2, 3]),
)
self.assertAllClose(
knp.BroadcastTo([2, 2, 3])(x),
np.broadcast_to(x, [2, 2, 3]),
)
def test_ceil(self):
x = np.array([[1.2, 2.1, -2.5], [2.4, -11.9, -5.5]])
self.assertAllClose(knp.ceil(x), np.ceil(x))
self.assertAllClose(knp.Ceil()(x), np.ceil(x))
def test_clip(self):
x = np.array([[1.2, 2.1, -2.5], [2.4, -11.9, -5.5]])
self.assertAllClose(knp.clip(x, -2, 2), np.clip(x, -2, 2))
self.assertAllClose(knp.clip(x, -2, 2), np.clip(x, -2, 2))
self.assertAllClose(knp.Clip(0, 1)(x), np.clip(x, 0, 1))
self.assertAllClose(knp.Clip(0, 1)(x), np.clip(x, 0, 1))
def test_concatenate(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [6, 5, 4]])
z = np.array([[7, 8, 9], [9, 8, 7]])
self.assertAllClose(
knp.concatenate([x, y], axis=0),
np.concatenate([x, y], axis=0),
)
self.assertAllClose(
knp.concatenate([x, y, z], axis=0),
np.concatenate([x, y, z], axis=0),
)
self.assertAllClose(
knp.concatenate([x, y], axis=1),
np.concatenate([x, y], axis=1),
)
self.assertAllClose(
knp.Concatenate(axis=0)([x, y]),
np.concatenate([x, y], axis=0),
)
self.assertAllClose(
knp.Concatenate(axis=0)([x, y, z]),
np.concatenate([x, y, z], axis=0),
)
self.assertAllClose(
knp.Concatenate(axis=1)([x, y]),
np.concatenate([x, y], axis=1),
)
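# Concatenating two sparse tensors must stay sparse (checked with
# assertIsInstance below); correctness is also verified when mixing
# sparse and dense operands.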
@parameterized.named_parameters(
[
{"testcase_name": "axis_0", "axis": 0},
{"testcase_name": "axis_1", "axis": 1},
]
)
@pytest.mark.skipif(
not backend.SUPPORTS_SPARSE_TENSORS,
reason="Backend does not support sparse tensors.",
)
def test_concatenate_sparse(self, axis):
if backend.backend() == "tensorflow":
import tensorflow as tf
x = tf.SparseTensor([[0, 0], [1, 2]], [1.0, 2.0], (2, 3))
y = tf.SparseTensor([[0, 0], [1, 1]], [4.0, 5.0], (2, 3))
sparse_class = tf.SparseTensor
elif backend.backend() == "jax":
import jax.experimental.sparse as jax_sparse
x = jax_sparse.BCOO(([1.0, 2.0], [[0, 0], [1, 2]]), shape=(2, 3))
y = jax_sparse.BCOO(([4.0, 5.0], [[0, 0], [1, 1]]), shape=(2, 3))
sparse_class = jax_sparse.JAXSparse
x_np = backend.convert_to_numpy(x)
y_np = backend.convert_to_numpy(y)
z = np.random.rand(2, 3).astype("float32")
self.assertAllClose(
knp.concatenate([x, z], axis=axis),
np.concatenate([x_np, z], axis=axis),
)
self.assertAllClose(
knp.concatenate([z, x], axis=axis),
np.concatenate([z, x_np], axis=axis),
)
self.assertAllClose(
knp.concatenate([x, y], axis=axis),
np.concatenate([x_np, y_np], axis=axis),
)
self.assertAllClose(
knp.Concatenate(axis=axis)([x, z]),
np.concatenate([x_np, z], axis=axis),
)
self.assertAllClose(
knp.Concatenate(axis=axis)([z, x]),
np.concatenate([z, x_np], axis=axis),
)
self.assertAllClose(
knp.Concatenate(axis=axis)([x, y]),
np.concatenate([x_np, y_np], axis=axis),
)
self.assertIsInstance(knp.concatenate([x, y], axis=axis), sparse_class)
self.assertIsInstance(knp.Concatenate(axis=axis)([x, y]), sparse_class)
def test_conjugate(self):
x = np.array([[1 + 2j, 2 + 3j], [3 + 4j, 4 + 5j]])
self.assertAllClose(knp.conjugate(x), np.conjugate(x))
self.assertAllClose(knp.Conjugate()(x), np.conjugate(x))
def test_conj(self):
x = np.array([[1 + 2j, 2 + 3j], [3 + 4j, 4 + 5j]])
self.assertAllClose(knp.conj(x), np.conj(x))
self.assertAllClose(knp.Conj()(x), np.conj(x))
def test_copy(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.copy(x), np.copy(x))
self.assertAllClose(knp.Copy()(x), np.copy(x))
def test_cos(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.cos(x), np.cos(x))
self.assertAllClose(knp.Cos()(x), np.cos(x))
def test_cosh(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.cosh(x), np.cosh(x))
self.assertAllClose(knp.Cosh()(x), np.cosh(x))
def test_count_nonzero(self):
x = np.array([[0, 2, 3], [3, 2, 0]])
self.assertAllClose(knp.count_nonzero(x), np.count_nonzero(x))
self.assertAllClose(
knp.count_nonzero(x, axis=()), np.count_nonzero(x, axis=())
)
self.assertAllClose(
knp.count_nonzero(x, axis=1),
np.count_nonzero(x, axis=1),
)
self.assertAllClose(
knp.count_nonzero(x, axis=(1,)),
np.count_nonzero(x, axis=(1,)),
)
self.assertAllClose(
knp.CountNonzero()(x),
np.count_nonzero(x),
)
self.assertAllClose(
knp.CountNonzero(axis=1)(x),
np.count_nonzero(x, axis=1),
)
@parameterized.product(
axis=[None, 0, 1, -1],
dtype=[None, "int32", "float32"],
)
def test_cumprod(self, axis, dtype):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(
knp.cumprod(x, axis=axis, dtype=dtype),
np.cumprod(x, axis=axis, dtype=dtype or x.dtype),
)
self.assertAllClose(
knp.Cumprod(axis=axis, dtype=dtype)(x),
np.cumprod(x, axis=axis, dtype=dtype or x.dtype),
)
@parameterized.product(
axis=[None, 0, 1, -1],
dtype=[None, "int32", "float32"],
)
def test_cumsum(self, axis, dtype):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(
knp.cumsum(x, axis=axis, dtype=dtype),
np.cumsum(x, axis=axis, dtype=dtype or x.dtype),
)
self.assertAllClose(
knp.Cumsum(axis=axis, dtype=dtype)(x),
np.cumsum(x, axis=axis, dtype=dtype or x.dtype),
)
def test_diag(self):
x = np.array([1, 2, 3])
self.assertAllClose(knp.diag(x), np.diag(x))
self.assertAllClose(knp.diag(x, k=1), np.diag(x, k=1))
self.assertAllClose(knp.diag(x, k=-1), np.diag(x, k=-1))
self.assertAllClose(knp.Diag()(x), np.diag(x))
self.assertAllClose(knp.Diag(k=1)(x), np.diag(x, k=1))
self.assertAllClose(knp.Diag(k=-1)(x), np.diag(x, k=-1))
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.diag(x), np.diag(x))
self.assertAllClose(knp.diag(x, k=1), np.diag(x, k=1))
self.assertAllClose(knp.diag(x, k=-1), np.diag(x, k=-1))
self.assertAllClose(knp.Diag()(x), np.diag(x))
self.assertAllClose(knp.Diag(k=1)(x), np.diag(x, k=1))
self.assertAllClose(knp.Diag(k=-1)(x), np.diag(x, k=-1))
def test_diagonal(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.diagonal(x), np.diagonal(x))
self.assertAllClose(
knp.diagonal(x, offset=1),
np.diagonal(x, offset=1),
)
self.assertAllClose(
knp.diagonal(x, offset=-1), np.diagonal(x, offset=-1)
)
self.assertAllClose(knp.Diagonal()(x), np.diagonal(x))
self.assertAllClose(knp.Diagonal(offset=1)(x), np.diagonal(x, offset=1))
self.assertAllClose(
knp.Diagonal(offset=-1)(x), np.diagonal(x, offset=-1)
)
x = np.ones([2, 3, 4, 5])
self.assertAllClose(knp.diagonal(x), np.diagonal(x))
self.assertAllClose(
knp.diagonal(x, offset=1, axis1=2, axis2=3),
np.diagonal(x, offset=1, axis1=2, axis2=3),
)
self.assertAllClose(
knp.diagonal(x, offset=-1, axis1=2, axis2=3),
np.diagonal(x, offset=-1, axis1=2, axis2=3),
)
def test_diff(self):
x = np.array([1, 2, 4, 7, 0])
self.assertAllClose(knp.diff(x), np.diff(x))
self.assertAllClose(knp.diff(x, n=2), np.diff(x, n=2))
self.assertAllClose(knp.diff(x, n=3), np.diff(x, n=3))
x = np.array([[1, 3, 6, 10], [0, 5, 6, 8]])
self.assertAllClose(knp.diff(x), np.diff(x))
self.assertAllClose(knp.diff(x, axis=0), np.diff(x, axis=0))
self.assertAllClose(knp.diff(x, n=2, axis=0), np.diff(x, n=2, axis=0))
self.assertAllClose(knp.diff(x, n=2, axis=1), np.diff(x, n=2, axis=1))
def test_dot(self):
x = np.arange(24).reshape([2, 3, 4]).astype("float32")
y = np.arange(12).reshape([4, 3]).astype("float32")
z = np.arange(4).astype("float32")
self.assertAllClose(knp.dot(x, y), np.dot(x, y))
self.assertAllClose(knp.dot(x, z), np.dot(x, z))
self.assertAllClose(knp.dot(x, 2), np.dot(x, 2))
self.assertAllClose(knp.Dot()(x, y), np.dot(x, y))
self.assertAllClose(knp.Dot()(x, z), np.dot(x, z))
self.assertAllClose(knp.Dot()(x, 2), np.dot(x, 2))
def test_exp(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.exp(x), np.exp(x))
self.assertAllClose(knp.Exp()(x), np.exp(x))
def test_expand_dims(self):
x = np.ones([2, 3, 4])
self.assertAllClose(knp.expand_dims(x, 0), np.expand_dims(x, 0))
self.assertAllClose(knp.expand_dims(x, 1), np.expand_dims(x, 1))
self.assertAllClose(knp.expand_dims(x, -2), np.expand_dims(x, -2))
self.assertAllClose(knp.ExpandDims(0)(x), np.expand_dims(x, 0))
self.assertAllClose(knp.ExpandDims(1)(x), np.expand_dims(x, 1))
self.assertAllClose(knp.ExpandDims(-2)(x), np.expand_dims(x, -2))
def test_expm1(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.expm1(x), np.expm1(x))
self.assertAllClose(knp.Expm1()(x), np.expm1(x))
def test_flip(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.flip(x), np.flip(x))
self.assertAllClose(knp.flip(x, 0), np.flip(x, 0))
self.assertAllClose(knp.flip(x, 1), np.flip(x, 1))
self.assertAllClose(knp.Flip()(x), np.flip(x))
self.assertAllClose(knp.Flip(0)(x), np.flip(x, 0))
self.assertAllClose(knp.Flip(1)(x), np.flip(x, 1))
def test_floor(self):
x = np.array([[1.1, 2.2, -3.3], [3.3, 2.2, -1.1]])
self.assertAllClose(knp.floor(x), np.floor(x))
self.assertAllClose(knp.Floor()(x), np.floor(x))
def test_hstack(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [6, 5, 4]])
self.assertAllClose(knp.hstack([x, y]), np.hstack([x, y]))
self.assertAllClose(knp.Hstack()([x, y]), np.hstack([x, y]))
x = np.ones([2, 3, 4])
y = np.ones([2, 5, 4])
self.assertAllClose(knp.hstack([x, y]), np.hstack([x, y]))
self.assertAllClose(knp.Hstack()([x, y]), np.hstack([x, y]))
def test_imag(self):
x = np.array([[1 + 1j, 2 + 2j, 3 + 3j], [3 + 3j, 2 + 2j, 1 + 1j]])
self.assertAllClose(knp.imag(x), np.imag(x))
self.assertAllClose(knp.Imag()(x), np.imag(x))
def test_isfinite(self):
x = np.array([[1, 2, np.inf], [np.nan, np.nan, np.nan]])
self.assertAllClose(knp.isfinite(x), np.isfinite(x))
self.assertAllClose(knp.Isfinite()(x), np.isfinite(x))
# TODO: fix and reenable
def DISABLED_test_isinf(self):
x = np.array([[1, 2, np.inf], [np.nan, np.nan, np.nan]])
self.assertAllClose(knp.isinf(x), np.isinf(x))
self.assertAllClose(knp.Isinf()(x), np.isinf(x))
def test_isnan(self):
x = np.array([[1, 2, np.inf], [np.nan, np.nan, np.nan]])
self.assertAllClose(knp.isnan(x), np.isnan(x))
self.assertAllClose(knp.Isnan()(x), np.isnan(x))
def test_log(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.log(x), np.log(x))
self.assertAllClose(knp.Log()(x), np.log(x))
def test_log10(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.log10(x), np.log10(x))
self.assertAllClose(knp.Log10()(x), np.log10(x))
def test_log1p(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.log1p(x), np.log1p(x))
self.assertAllClose(knp.Log1p()(x), np.log1p(x))
def test_log2(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.log2(x), np.log2(x))
self.assertAllClose(knp.Log2()(x), np.log2(x))
def test_logaddexp(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.logaddexp(x, y), np.logaddexp(x, y))
self.assertAllClose(knp.Logaddexp()(x, y), np.logaddexp(x, y))
def test_logical_not(self):
x = np.array([[True, False], [False, True]])
self.assertAllClose(knp.logical_not(x), np.logical_not(x))
self.assertAllClose(knp.LogicalNot()(x), np.logical_not(x))
def test_max(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.max(x), np.max(x))
self.assertAllClose(knp.Max()(x), np.max(x))
self.assertAllClose(knp.max(x, 0), np.max(x, 0))
self.assertAllClose(knp.Max(0)(x), np.max(x, 0))
self.assertAllClose(knp.max(x, 1), np.max(x, 1))
self.assertAllClose(knp.Max(1)(x), np.max(x, 1))
# test max with initial
self.assertAllClose(knp.max(x, initial=4), 4)
# test empty tensor
x = np.array([[]])
self.assertAllClose(knp.max(x, initial=1), np.max(x, initial=1))
self.assertAllClose(
knp.max(x, initial=1, keepdims=True),
np.max(x, initial=1, keepdims=True),
)
def test_min(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.min(x), np.min(x))
self.assertAllClose(knp.Min()(x), np.min(x))
self.assertAllClose(knp.min(x, axis=(0, 1)), np.min(x, (0, 1)))
self.assertAllClose(knp.Min((0, 1))(x), np.min(x, (0, 1)))
self.assertAllClose(knp.min(x, axis=()), np.min(x, axis=()))
self.assertAllClose(knp.Min(())(x), np.min(x, axis=()))
self.assertAllClose(knp.min(x, 0), np.min(x, 0))
self.assertAllClose(knp.Min(0)(x), np.min(x, 0))
self.assertAllClose(knp.min(x, 1), np.min(x, 1))
self.assertAllClose(knp.Min(1)(x), np.min(x, 1))
# test min with initial
self.assertAllClose(knp.min(x, initial=0), 0)
# test empty tensor
x = np.array([[]])
self.assertAllClose(knp.min(x, initial=1), np.min(x, initial=1))
self.assertAllClose(
knp.min(x, initial=1, keepdims=True),
np.min(x, initial=1, keepdims=True),
)
def test_median(self):
x = np.array([[1, 2, 3], [3, 2, 1]]).astype("float32")
self.assertAllClose(knp.median(x), np.median(x))
self.assertAllClose(
knp.median(x, keepdims=True), np.median(x, keepdims=True)
)
self.assertAllClose(knp.median(x, axis=1), np.median(x, axis=1))
self.assertAllClose(knp.median(x, axis=(1,)), np.median(x, axis=(1,)))
self.assertAllClose(
knp.median(x, axis=1, keepdims=True),
np.median(x, axis=1, keepdims=True),
)
self.assertAllClose(knp.Median()(x), np.median(x))
self.assertAllClose(knp.Median(axis=1)(x), np.median(x, axis=1))
self.assertAllClose(
knp.Median(axis=1, keepdims=True)(x),
np.median(x, axis=1, keepdims=True),
)
def test_meshgrid(self):
x = np.array([1, 2, 3])
y = np.array([4, 5, 6])
z = np.array([7, 8, 9])
self.assertAllClose(knp.meshgrid(x, y), np.meshgrid(x, y))
self.assertAllClose(knp.meshgrid(x, z), np.meshgrid(x, z))
self.assertAllClose(
knp.meshgrid(x, y, z, indexing="ij"),
np.meshgrid(x, y, z, indexing="ij"),
)
self.assertAllClose(knp.Meshgrid()(x, y), np.meshgrid(x, y))
self.assertAllClose(knp.Meshgrid()(x, z), np.meshgrid(x, z))
self.assertAllClose(
knp.Meshgrid(indexing="ij")(x, y, z),
np.meshgrid(x, y, z, indexing="ij"),
)
if backend.backend() == "tensorflow":
# `jax.numpy.meshgrid` now requires 1D arguments, so multi-dimensional
# inputs are only exercised on the TensorFlow backend.
x = np.ones([1, 2, 3])
y = np.ones([4, 5, 6, 6])
z = np.ones([7, 8])
self.assertAllClose(knp.meshgrid(x, y), np.meshgrid(x, y))
self.assertAllClose(knp.meshgrid(x, z), np.meshgrid(x, z))
self.assertAllClose(
knp.meshgrid(x, y, z, indexing="ij"),
np.meshgrid(x, y, z, indexing="ij"),
)
self.assertAllClose(knp.Meshgrid()(x, y), np.meshgrid(x, y))
self.assertAllClose(knp.Meshgrid()(x, z), np.meshgrid(x, z))
self.assertAllClose(
knp.Meshgrid(indexing="ij")(x, y, z),
np.meshgrid(x, y, z, indexing="ij"),
)
def test_moveaxis(self):
x = np.array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]])
self.assertAllClose(knp.moveaxis(x, 0, -1), np.moveaxis(x, 0, -1))
self.assertAllClose(knp.moveaxis(x, -1, 0), np.moveaxis(x, -1, 0))
self.assertAllClose(
knp.moveaxis(x, (0, 1), (1, 0)),
np.moveaxis(x, (0, 1), (1, 0)),
)
self.assertAllClose(
knp.moveaxis(x, [0, 1, 2], [2, 0, 1]),
np.moveaxis(x, [0, 1, 2], [2, 0, 1]),
)
self.assertAllClose(knp.Moveaxis(-1, 0)(x), np.moveaxis(x, -1, 0))
self.assertAllClose(
knp.Moveaxis((0, 1), (1, 0))(x),
np.moveaxis(x, (0, 1), (1, 0)),
)
self.assertAllClose(
knp.Moveaxis([0, 1, 2], [2, 0, 1])(x),
np.moveaxis(x, [0, 1, 2], [2, 0, 1]),
)
def test_ndim(self):
x = np.array([1, 2, 3])
self.assertEqual(knp.ndim(x), np.ndim(x))
self.assertEqual(knp.Ndim()(x), np.ndim(x))
def test_nonzero(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.nonzero(x), np.nonzero(x))
self.assertAllClose(knp.Nonzero()(x), np.nonzero(x))
def test_ones_like(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.ones_like(x), np.ones_like(x))
self.assertAllClose(knp.OnesLike()(x), np.ones_like(x))
@parameterized.named_parameters(
named_product(
dtype=[
"float16",
"float32",
"float64",
"uint8",
"int8",
"int16",
"int32",
],
mode=["constant", "reflect", "symmetric"],
constant_values=[None, 0, 2],
)
)
def test_pad(self, dtype, mode, constant_values):
# 2D
x = np.ones([2, 3], dtype=dtype)
pad_width = ((1, 1), (1, 1))
if mode != "constant":
if constant_values is not None:
with self.assertRaisesRegex(
ValueError,
"Argument `constant_values` can only be "
"provided when `mode == 'constant'`",
):
knp.pad(
x, pad_width, mode=mode, constant_values=constant_values
)
return
# constant_values is None
kwargs = {}
else:
# mode is constant
kwargs = {"constant_values": constant_values or 0}
self.assertAllClose(
knp.pad(x, pad_width, mode=mode, constant_values=constant_values),
np.pad(x, pad_width, mode=mode, **kwargs),
)
self.assertAllClose(
knp.Pad(pad_width, mode=mode)(x, constant_values=constant_values),
np.pad(x, pad_width, mode=mode, **kwargs),
)
# 5D (pad last 3D)
x = np.ones([2, 3, 4, 5, 6], dtype=dtype)
pad_width = ((0, 0), (0, 0), (2, 3), (1, 1), (1, 1))
self.assertAllClose(
knp.pad(x, pad_width, mode=mode, constant_values=constant_values),
np.pad(x, pad_width, mode=mode, **kwargs),
)
self.assertAllClose(
knp.Pad(pad_width, mode=mode)(x, constant_values=constant_values),
np.pad(x, pad_width, mode=mode, **kwargs),
)
# 5D (pad arbitrary dimensions)
if backend.backend() == "torch" and mode != "constant":
self.skipTest(
"reflect and symmetric padding for arbitary dimensions are not "
"supported by torch"
)
x = np.ones([2, 3, 4, 5, 6], dtype=dtype)
pad_width = ((1, 1), (2, 1), (3, 2), (4, 3), (5, 4))
self.assertAllClose(
knp.pad(x, pad_width, mode=mode, constant_values=constant_values),
np.pad(x, pad_width, mode=mode, **kwargs),
)
self.assertAllClose(
knp.Pad(pad_width, mode=mode)(x, constant_values=constant_values),
np.pad(x, pad_width, mode=mode, **kwargs),
)
def test_prod(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.prod(x), np.prod(x))
self.assertAllClose(knp.prod(x, axis=()), np.prod(x, axis=()))
self.assertAllClose(knp.prod(x, axis=1), np.prod(x, axis=1))
self.assertAllClose(knp.prod(x, axis=(1,)), np.prod(x, axis=(1,)))
self.assertAllClose(
knp.prod(x, axis=1, keepdims=True),
np.prod(x, axis=1, keepdims=True),
)
self.assertAllClose(knp.Prod()(x), np.prod(x))
self.assertAllClose(knp.Prod(axis=1)(x), np.prod(x, axis=1))
self.assertAllClose(
knp.Prod(axis=1, keepdims=True)(x),
np.prod(x, axis=1, keepdims=True),
)
def test_ravel(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.ravel(x), np.ravel(x))
self.assertAllClose(knp.Ravel()(x), np.ravel(x))
def test_real(self):
x = np.array([[1, 2, 3 - 3j], [3, 2, 1 + 5j]])
self.assertAllClose(knp.real(x), np.real(x))
self.assertAllClose(knp.Real()(x), np.real(x))
def test_reciprocal(self):
x = np.array([[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]])
self.assertAllClose(knp.reciprocal(x), np.reciprocal(x))
self.assertAllClose(knp.Reciprocal()(x), np.reciprocal(x))
def test_repeat(self):
x = np.array([[1, 2], [3, 4]])
self.assertAllClose(knp.repeat(x, 2), np.repeat(x, 2))
self.assertAllClose(knp.repeat(x, 3, axis=1), np.repeat(x, 3, axis=1))
self.assertAllClose(
knp.repeat(x, np.array([1, 2]), axis=-1),
np.repeat(x, np.array([1, 2]), axis=-1),
)
self.assertAllClose(knp.Repeat(2)(x), np.repeat(x, 2))
self.assertAllClose(knp.Repeat(3, axis=1)(x), np.repeat(x, 3, axis=1))
self.assertAllClose(
knp.Repeat(np.array([1, 2]), axis=0)(x),
np.repeat(x, np.array([1, 2]), axis=0),
)
def test_reshape(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.reshape(x, [3, 2]), np.reshape(x, [3, 2]))
self.assertAllClose(knp.Reshape([3, 2])(x), np.reshape(x, [3, 2]))
self.assertAllClose(knp.Reshape(-1)(x), np.reshape(x, -1))
def test_roll(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.roll(x, 1), np.roll(x, 1))
self.assertAllClose(knp.roll(x, 1, axis=1), np.roll(x, 1, axis=1))
self.assertAllClose(knp.roll(x, -1, axis=0), np.roll(x, -1, axis=0))
self.assertAllClose(knp.Roll(1)(x), np.roll(x, 1))
self.assertAllClose(knp.Roll(1, axis=1)(x), np.roll(x, 1, axis=1))
self.assertAllClose(knp.Roll(-1, axis=0)(x), np.roll(x, -1, axis=0))
def test_round(self):
x = np.array([[1.1, 2.5, 3.9], [3.2, 2.3, 1.8]])
self.assertAllClose(knp.round(x), np.round(x))
self.assertAllClose(knp.Round()(x), np.round(x))
def test_sign(self):
x = np.array([[1, -2, 3], [-3, 2, -1]])
self.assertAllClose(knp.sign(x), np.sign(x))
self.assertAllClose(knp.Sign()(x), np.sign(x))
def test_sin(self):
x = np.array([[1, -2, 3], [-3, 2, -1]])
self.assertAllClose(knp.sin(x), np.sin(x))
self.assertAllClose(knp.Sin()(x), np.sin(x))
def test_sinh(self):
x = np.array([[1, -2, 3], [-3, 2, -1]])
self.assertAllClose(knp.sinh(x), np.sinh(x))
self.assertAllClose(knp.Sinh()(x), np.sinh(x))
def test_size(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.size(x), np.size(x))
self.assertAllClose(knp.Size()(x), np.size(x))
def test_sort(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.sort(x), np.sort(x))
self.assertAllClose(knp.Sort()(x), np.sort(x))
self.assertAllClose(knp.sort(x, axis=0), np.sort(x, axis=0))
self.assertAllClose(knp.Sort(axis=0)(x), np.sort(x, axis=0))
def test_split(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.split(x, 2), np.split(x, 2))
self.assertAllClose(knp.Split(2)(x), np.split(x, 2))
self.assertAllClose(
knp.split(x, [1, 2], axis=1),
np.split(x, [1, 2], axis=1),
)
self.assertAllClose(
knp.Split([1, 2], axis=1)(x),
np.split(x, [1, 2], axis=1),
)
# test invalid indices_or_sections
with self.assertRaises(Exception):
knp.split(x, 3)
# test splitting a zero-size (empty) tensor
x = np.ones(shape=(0,))
self.assertEqual(len(knp.split(x, 2)), 2)
self.assertEqual(len(knp.Split(2)(x)), 2)
# test indices_or_sections as tensor
x = knp.array([[1, 2, 3], [3, 2, 1]])
indices_or_sections = knp.array([1, 2])
x_np = np.array([[1, 2, 3], [3, 2, 1]])
indices_or_sections_np = np.array([1, 2])
self.assertAllClose(
knp.split(x, indices_or_sections, axis=1),
np.split(x_np, indices_or_sections_np, axis=1),
)
@pytest.mark.skipif(
backend.backend() != "tensorflow",
reason="Only test tensorflow backend",
)
def test_split_with_jit_in_tf(self):
import tensorflow as tf
x = knp.array([[1, 2, 3], [3, 2, 1]])
indices = knp.array([1, 2])
x_np = np.array([[1, 2, 3], [3, 2, 1]])
indices_np = np.array([1, 2])
@tf.function(jit_compile=True)
def fn(x, indices, axis):
return knp.split(x, indices, axis=axis)
self.assertAllClose(
fn(x, indices, axis=1),
np.split(x_np, indices_np, axis=1),
)
def test_sqrt(self):
x = np.array([[1, 4, 9], [16, 25, 36]], dtype="float32")
ref_y = np.sqrt(x)
y = knp.sqrt(x)
self.assertEqual(standardize_dtype(y.dtype), "float32")
self.assertAllClose(y, ref_y)
y = knp.Sqrt()(x)
self.assertEqual(standardize_dtype(y.dtype), "float32")
self.assertAllClose(y, ref_y)
@pytest.mark.skipif(
backend.backend() == "jax", reason="JAX does not support float64."
)
def test_sqrt_float64(self):
x = np.array([[1, 4, 9], [16, 25, 36]], dtype="float64")
ref_y = np.sqrt(x)
y = knp.sqrt(x)
self.assertEqual(standardize_dtype(y.dtype), "float64")
self.assertAllClose(y, ref_y)
y = knp.Sqrt()(x)
self.assertEqual(standardize_dtype(y.dtype), "float64")
self.assertAllClose(y, ref_y)
def test_sqrt_int32(self):
x = np.array([[1, 4, 9], [16, 25, 36]], dtype="int32")
ref_y = np.sqrt(x)
y = knp.sqrt(x)
self.assertEqual(standardize_dtype(y.dtype), "float32")
self.assertAllClose(y, ref_y)
y = knp.Sqrt()(x)
self.assertEqual(standardize_dtype(y.dtype), "float32")
self.assertAllClose(y, ref_y)
def test_stack(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [6, 5, 4]])
self.assertAllClose(knp.stack([x, y]), np.stack([x, y]))
self.assertAllClose(knp.stack([x, y], axis=1), np.stack([x, y], axis=1))
self.assertAllClose(knp.Stack()([x, y]), np.stack([x, y]))
self.assertAllClose(knp.Stack(axis=1)([x, y]), np.stack([x, y], axis=1))
def test_std(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.std(x), np.std(x))
self.assertAllClose(knp.std(x, axis=1), np.std(x, axis=1))
self.assertAllClose(
knp.std(x, axis=1, keepdims=True),
np.std(x, axis=1, keepdims=True),
)
self.assertAllClose(knp.Std()(x), np.std(x))
self.assertAllClose(knp.Std(axis=1)(x), np.std(x, axis=1))
self.assertAllClose(
knp.Std(axis=1, keepdims=True)(x),
np.std(x, axis=1, keepdims=True),
)
def test_swapaxes(self):
x = np.arange(24).reshape([1, 2, 3, 4])
self.assertAllClose(
knp.swapaxes(x, 0, 1),
np.swapaxes(x, 0, 1),
)
self.assertAllClose(
knp.Swapaxes(0, 1)(x),
np.swapaxes(x, 0, 1),
)
def test_tan(self):
x = np.array([[1, -2, 3], [-3, 2, -1]])
self.assertAllClose(knp.tan(x), np.tan(x))
self.assertAllClose(knp.Tan()(x), np.tan(x))
def test_tanh(self):
x = np.array([[1, -2, 3], [-3, 2, -1]])
self.assertAllClose(knp.tanh(x), np.tanh(x))
self.assertAllClose(knp.Tanh()(x), np.tanh(x))
def test_tile(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
self.assertAllClose(knp.tile(x, [2, 3]), np.tile(x, [2, 3]))
self.assertAllClose(knp.Tile([2, 3])(x), np.tile(x, [2, 3]))
def test_trace(self):
x = np.arange(24).reshape([1, 2, 3, 4])
self.assertAllClose(knp.trace(x), np.trace(x))
self.assertAllClose(
knp.trace(x, axis1=2, axis2=3),
np.trace(x, axis1=2, axis2=3),
)
self.assertAllClose(
knp.Trace(axis1=2, axis2=3)(x),
np.trace(x, axis1=2, axis2=3),
)
def test_tril(self):
x = np.arange(24).reshape([1, 2, 3, 4])
self.assertAllClose(knp.tril(x), np.tril(x))
self.assertAllClose(knp.tril(x, -1), np.tril(x, -1))
self.assertAllClose(knp.Tril(-1)(x), np.tril(x, -1))
x = np.ones([5, 5])
self.assertAllClose(knp.tril(x), np.tril(x))
self.assertAllClose(knp.tril(x, -1), np.tril(x, -1))
self.assertAllClose(knp.Tril(-1)(x), np.tril(x, -1))
def test_tril_in_layer(self):
# https://github.com/keras-team/keras/issues/18890
x = keras.Input((None, 3))
y1 = keras.layers.Lambda(
lambda x: keras.ops.tril(
keras.ops.ones((keras.ops.shape(x)[1], keras.ops.shape(x)[1]))
)
)(x)
y2 = keras.layers.Lambda(
lambda x: keras.ops.tril(
keras.ops.ones((keras.ops.shape(x)[1], keras.ops.shape(x)[1])),
k=-1,
)
)(x)
model = keras.Model(x, [y1, y2])
result = model(np.ones((1, 2, 3), "float32"))
self.assertAllClose(
result, [np.tril(np.ones((2, 2))), np.tril(np.ones((2, 2)), k=-1)]
)
def test_triu(self):
x = np.arange(24).reshape([1, 2, 3, 4])
self.assertAllClose(knp.triu(x), np.triu(x))
self.assertAllClose(knp.triu(x, -1), np.triu(x, -1))
self.assertAllClose(knp.Triu(-1)(x), np.triu(x, -1))
x = np.ones([5, 5])
self.assertAllClose(knp.triu(x), np.triu(x))
self.assertAllClose(knp.triu(x, -1), np.triu(x, -1))
self.assertAllClose(knp.Triu(-1)(x), np.triu(x, -1))
def test_triu_in_layer(self):
# https://github.com/keras-team/keras/issues/18890
x = keras.Input((None, 3))
y1 = keras.layers.Lambda(
lambda x: keras.ops.triu(
keras.ops.ones((keras.ops.shape(x)[1], keras.ops.shape(x)[1]))
)
)(x)
y2 = keras.layers.Lambda(
lambda x: keras.ops.triu(
keras.ops.ones((keras.ops.shape(x)[1], keras.ops.shape(x)[1])),
k=-1,
)
)(x)
model = keras.Model(x, [y1, y2])
result = model(np.ones((1, 2, 3), "float32"))
self.assertAllClose(
result, [np.triu(np.ones((2, 2))), np.triu(np.ones((2, 2)), k=-1)]
)
def test_vstack(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [6, 5, 4]])
self.assertAllClose(knp.vstack([x, y]), np.vstack([x, y]))
self.assertAllClose(knp.Vstack()([x, y]), np.vstack([x, y]))
def test_floor_divide(self):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [3, 2, 1]])
z = np.array([[[1, 2, 3], [3, 2, 1]]])
self.assertAllClose(knp.floor_divide(x, y), np.floor_divide(x, y))
self.assertAllClose(knp.floor_divide(x, z), np.floor_divide(x, z))
self.assertAllClose(knp.FloorDivide()(x, y), np.floor_divide(x, y))
self.assertAllClose(knp.FloorDivide()(x, z), np.floor_divide(x, z))
def test_xor(self):
x = np.array([[True, False], [True, True]])
y = np.array([[False, False], [True, False]])
self.assertAllClose(knp.logical_xor(x, y), np.logical_xor(x, y))
self.assertAllClose(knp.logical_xor(x, True), np.logical_xor(x, True))
self.assertAllClose(knp.logical_xor(True, x), np.logical_xor(True, x))
self.assertAllClose(knp.LogicalXor()(x, y), np.logical_xor(x, y))
self.assertAllClose(knp.LogicalXor()(x, True), np.logical_xor(x, True))
self.assertAllClose(knp.LogicalXor()(True, x), np.logical_xor(True, x))
class NumpyArrayCreateOpsCorrectnessTest(testing.TestCase):
def test_ones(self):
self.assertAllClose(knp.ones([2, 3]), np.ones([2, 3]))
self.assertAllClose(knp.Ones()([2, 3]), np.ones([2, 3]))
def test_zeros(self):
self.assertAllClose(knp.zeros([2, 3]), np.zeros([2, 3]))
self.assertAllClose(knp.Zeros()([2, 3]), np.zeros([2, 3]))
def test_eye(self):
self.assertAllClose(knp.eye(3), np.eye(3))
self.assertAllClose(knp.eye(3, 4), np.eye(3, 4))
self.assertAllClose(knp.eye(3, 4, 1), np.eye(3, 4, 1))
self.assertAllClose(knp.Eye()(3), np.eye(3))
self.assertAllClose(knp.Eye()(3, 4), np.eye(3, 4))
self.assertAllClose(knp.Eye()(3, 4, 1), np.eye(3, 4, 1))
def test_arange(self):
self.assertAllClose(knp.arange(3), np.arange(3))
self.assertAllClose(knp.arange(3, 7), np.arange(3, 7))
self.assertAllClose(knp.arange(3, 7, 2), np.arange(3, 7, 2))
self.assertAllClose(knp.Arange()(3), np.arange(3))
self.assertAllClose(knp.Arange()(3, 7), np.arange(3, 7))
self.assertAllClose(knp.Arange()(3, 7, 2), np.arange(3, 7, 2))
self.assertEqual(standardize_dtype(knp.arange(3).dtype), "int32")
with pytest.warns(None) as record:
knp.arange(3, dtype="int")
self.assertEqual(len(record), 0)
def test_full(self):
self.assertAllClose(knp.full([2, 3], 0), np.full([2, 3], 0))
self.assertAllClose(knp.full([2, 3], 0.1), np.full([2, 3], 0.1))
self.assertAllClose(
knp.full([2, 3], np.array([1, 4, 5])),
np.full([2, 3], np.array([1, 4, 5])),
)
self.assertAllClose(knp.Full()([2, 3], 0), np.full([2, 3], 0))
self.assertAllClose(knp.Full()([2, 3], 0.1), np.full([2, 3], 0.1))
self.assertAllClose(
knp.Full()([2, 3], np.array([1, 4, 5])),
np.full([2, 3], np.array([1, 4, 5])),
)
def test_identity(self):
self.assertAllClose(knp.identity(3), np.identity(3))
self.assertAllClose(knp.Identity()(3), np.identity(3))
def test_tri(self):
self.assertAllClose(knp.tri(3), np.tri(3))
self.assertAllClose(knp.tri(3, 4), np.tri(3, 4))
self.assertAllClose(knp.tri(3, 4, 1), np.tri(3, 4, 1))
self.assertAllClose(knp.Tri()(3), np.tri(3))
self.assertAllClose(knp.Tri()(3, 4), np.tri(3, 4))
self.assertAllClose(knp.Tri()(3, 4, 1), np.tri(3, 4, 1))
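# Helpers below build backend-native sparse representations from dense NumPy
# arrays so the sparse code paths of the ops can be exercised.
# `create_sparse_tensor` keeps every `delta`-th element of `x` (in flat,
# row-major order, starting at `start`), or reuses the index pattern of
# `indices_from` when given, and returns a `tf.SparseTensor` or a JAX `BCOO`
# depending on the backend.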
def create_sparse_tensor(x, indices_from=None, start=0, delta=2):
if indices_from is not None:
indices = indices_from.indices
else:
size = math.prod(x.shape)
flat_indices = np.arange(start, size, delta)
indices = np.stack(np.where(np.ones_like(x)), axis=1)[flat_indices]
if backend.backend() == "tensorflow":
import tensorflow as tf
return tf.SparseTensor(indices, tf.gather_nd(x, indices), x.shape)
elif backend.backend() == "jax":
import jax
import jax.experimental.sparse as jax_sparse
values = x[tuple(jax.numpy.moveaxis(indices, -1, 0))]
return jax_sparse.BCOO((values, indices), shape=x.shape)
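# `create_indexed_slices` selects every `delta`-th row of `x` (starting at
# `start`, or the rows of `indices_from` when given) and returns a
# `tf.IndexedSlices` or, for JAX, a `BCOO` with dense trailing dimensions,
# which `sparseness()` reports as "slices".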
def create_indexed_slices(x, indices_from=None, start=0, delta=2):
indices = np.arange(start, x.shape[0], delta)
if backend.backend() == "tensorflow":
import tensorflow as tf
if indices_from is not None:
indices = indices_from.indices
return tf.IndexedSlices(tf.gather(x, indices), indices, x.shape)
elif backend.backend() == "jax":
import jax
import jax.experimental.sparse as jax_sparse
if indices_from is not None:
indices = indices_from.indices
else:
indices = jax.numpy.expand_dims(indices, axis=1)
values = jax.numpy.take(x, jax.numpy.squeeze(indices, axis=1), axis=0)
return jax_sparse.BCOO((values, indices), shape=x.shape)
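# `get_sparseness_combinations` enumerates named test cases pairing dense,
# sparse and scalar operands, including sparse operands whose index patterns
# are identical, disjoint, a superset or a subset of one another.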
def get_sparseness_combinations(dense_to_sparse_fn):
x = np.array([[1, 2, 3], [3, 2, 1]])
y = np.array([[4, 5, 6], [3, 2, 1]])
scalar = backend.convert_to_tensor(2)
x_sp = dense_to_sparse_fn(x)
y_sp = dense_to_sparse_fn(y, indices_from=x_sp)
x_sp_sup = dense_to_sparse_fn(x, start=0, delta=1)
y_sp_dis = dense_to_sparse_fn(y, start=1)
y_sp_sup = dense_to_sparse_fn(y, start=0, delta=1)
x = backend.convert_to_tensor(x)
y = backend.convert_to_tensor(y)
return [
{"testcase_name": "sparse_dense", "x": x_sp, "y": y},
{"testcase_name": "dense_sparse", "x": x, "y": y_sp},
{"testcase_name": "sparse_scalar", "x": x_sp, "y": scalar},
{"testcase_name": "scalar_sparse", "x": scalar, "y": y_sp},
{"testcase_name": "sparse_sparse_same", "x": x_sp, "y": y_sp},
{"testcase_name": "sparse_sparse_disjoint", "x": x_sp, "y": y_sp_dis},
{"testcase_name": "sparse_sparse_superset", "x": x_sp, "y": y_sp_sup},
{"testcase_name": "sparse_sparse_subset", "x": x_sp_sup, "y": y_sp},
]
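# `sparseness` classifies a value as "scalar", "dense", "sparse"
# (SparseTensor or fully sparse BCOO) or "slices" (IndexedSlices or BCOO with
# dense trailing dimensions); KerasTensors are classified by their `sparse`
# attribute.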
def sparseness(x):
if isinstance(x, KerasTensor):
return "sparse" if x.sparse else "dense"
elif x.__class__.__name__ == "BCOO":
if x.n_dense > 0:
return "slices"
else:
return "sparse"
elif x.__class__.__name__ == "SparseTensor":
return "sparse"
elif x.__class__.__name__ == "IndexedSlices":
return "slices"
elif not hasattr(x, "shape") or not x.shape:
return "scalar"
else:
return "dense"
def union_sparseness(x1, x2):
x1_sparseness = sparseness(x1)
x2_sparseness = sparseness(x2)
if any(s in ("scalar", "dense") for s in (x1_sparseness, x2_sparseness)):
return "dense"
if x1_sparseness != x2_sparseness:
raise ValueError(f"Illegal combination of operands: {x1} {x2}")
return x1_sparseness
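# Expected result sparseness for multiplication, whose output pattern is the
# intersection of the operand patterns: the result follows the sparser
# operand, since implicit zeros in either input zero out the product.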
def intersection_sparseness(x1, x2):
x1_sparseness = sparseness(x1)
x2_sparseness = sparseness(x2)
if x1_sparseness == "scalar":
return x2_sparseness
if x2_sparseness in ("scalar", "dense"):
return x1_sparseness
if x1_sparseness == "dense":
return x2_sparseness
if x1_sparseness != x2_sparseness:
raise ValueError(f"Illegal combination of operands: {x1} {x2}")
return x1_sparseness
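# Expected result sparseness for division: dividing by a sparse or "slices"
# tensor densifies the result (its implicit zeros produce inf/nan), and a
# scalar numerator also yields a dense result.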
def division_sparseness(x1, x2):
x1_sparseness = sparseness(x1)
x2_sparseness = sparseness(x2)
if x2_sparseness in ("sparse", "slices"):
return "dense"
return "dense" if x1_sparseness == "scalar" else x1_sparseness
def snake_to_pascal_case(name):
return "".join(w.capitalize() for w in name.split("_"))
@pytest.mark.skipif(
not backend.SUPPORTS_SPARSE_TENSORS,
reason="Backend does not support sparse tensors.",
)
class SparseTest(testing.TestCase, parameterized.TestCase):
DTYPES = ["int32", "float32"]
DENSIFYING_UNARY_OPS = [
"arccos",
"arccosh",
"cos",
"cosh",
"exp",
"isfinite",
"log",
"log10",
"log2",
"reciprocal",
]
DENSIFYING_UNARY_OPS_TESTS = [
{
"testcase_name": op,
"op_function": getattr(knp, op),
"op_class": getattr(knp, op.capitalize()),
"np_op": getattr(np, op),
}
for op in DENSIFYING_UNARY_OPS
]
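# Unary ops that preserve sparseness because f(0) == 0 for these functions.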
ELEMENTWISE_UNARY_OPS = [
"abs",
"absolute",
"arcsin",
"arcsinh",
"arctan",
"arctanh",
"ceil",
"conj",
"conjugate",
"copy",
"expm1",
"floor",
"imag",
"log1p",
"negative",
"real",
"round",
"sign",
"sin",
"sinh",
"sqrt",
"square",
"tan",
"tanh",
]
ELEMENTWISE_UNARY_OPS_TESTS = [
{
"testcase_name": op,
"op_function": getattr(knp, op),
"op_class": getattr(knp, snake_to_pascal_case(op)),
"np_op": getattr(np, op),
}
for op in ELEMENTWISE_UNARY_OPS
]
OTHER_UNARY_OPS_TESTS = [
{
"testcase_name": "_".join([op, testcase_name]),
"op_function": getattr(knp, op),
"op_class": getattr(knp, snake_to_pascal_case(op)),
"np_op": getattr(np, op),
"op_kwargs": op_kwargs,
"input_shape": input_shape,
}
for op, testcase_name, op_kwargs, input_shape in [
("mean", "none", {"axis": None}, (4, 2, 3)),
("mean", "none_k", {"axis": None, "keepdims": True}, (4, 2, 3)),
("mean", "empty", {"axis": ()}, (4, 2, 3)),
("mean", "empty_k", {"axis": (), "keepdims": True}, (4, 2, 3)),
("mean", "0", {"axis": 0}, (4, 2, 3)),
("mean", "0_k", {"axis": 0, "keepdims": True}, (4, 2, 3)),
("mean", "1", {"axis": 1}, (4, 2, 3)),
("mean", "1_k", {"axis": 1, "keepdims": True}, (4, 2, 3)),
("mean", "01", {"axis": (0, 1)}, (4, 2, 3)),
("mean", "01_k", {"axis": (0, 1), "keepdims": True}, (4, 2, 3)),
("mean", "02", {"axis": (1, 2)}, (4, 2, 3)),
("mean", "02_k", {"axis": (1, 2), "keepdims": True}, (4, 2, 3)),
("mean", "all", {"axis": (0, 1, 2)}, (4, 2, 3)),
("mean", "all_k", {"axis": (0, 1, 2), "keepdims": True}, (4, 2, 3)),
("expand_dims", "zero", {"axis": 0}, (2, 3)),
("expand_dims", "one", {"axis": 1}, (2, 3)),
("expand_dims", "minus_two", {"axis": -2}, (2, 3)),
("reshape", "basic", {"newshape": (4, 3, 2)}, (4, 2, 3)),
("reshape", "minus_one", {"newshape": (4, 3, -1)}, (4, 2, 3)),
("reshape", "fewer_dims", {"newshape": (4, 6)}, (4, 2, 3)),
("squeeze", "no_axis_no_op", {}, (2, 3)),
("squeeze", "one", {"axis": 1}, (2, 1, 3)),
("squeeze", "minus_two", {"axis": -2}, (2, 1, 3)),
("squeeze", "no_axis", {}, (2, 1, 3)),
("transpose", "no_axes", {}, (1, 2, 3, 4)),
("transpose", "axes", {"axes": (0, 3, 2, 1)}, (1, 2, 3, 4)),
]
]
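# Each binary op is paired with the function that computes the expected
# sparseness of its result.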
BINARY_OPS = [
("add", union_sparseness),
("subtract", union_sparseness),
("maximum", union_sparseness),
("minimum", union_sparseness),
("multiply", intersection_sparseness),
("divide", division_sparseness),
("true_divide", division_sparseness),
]
BINARY_OPS_TESTS = [
{
"testcase_name": op,
"op_function": getattr(knp, op),
"op_class": getattr(knp, snake_to_pascal_case(op)),
"np_op": getattr(np, op),
"op_sparseness": op_sparseness,
}
for op, op_sparseness in BINARY_OPS
]
def assertSameSparseness(self, x, y):
self.assertEqual(sparseness(x), sparseness(y))
def assertSparseness(self, x, expected_sparseness):
self.assertEqual(sparseness(x), expected_sparseness)
@parameterized.named_parameters(ELEMENTWISE_UNARY_OPS_TESTS)
def test_elementwise_unary_symbolic_static_shape(
self, op_function, op_class, np_op
):
x = KerasTensor([2, 3], sparse=True)
self.assertEqual(op_function(x).shape, (2, 3))
self.assertTrue(op_function(x).sparse)
self.assertEqual(op_class()(x).shape, (2, 3))
self.assertTrue(op_class()(x).sparse)
@parameterized.named_parameters(ELEMENTWISE_UNARY_OPS_TESTS)
def test_elementwise_unary_symbolic_dynamic_shape(
self, op_function, op_class, np_op
):
x = KerasTensor([None, 3], sparse=True)
self.assertEqual(op_function(x).shape, (None, 3))
self.assertTrue(op_function(x).sparse)
self.assertEqual(op_class()(x).shape, (None, 3))
self.assertTrue(op_class()(x).sparse)
@parameterized.named_parameters(OTHER_UNARY_OPS_TESTS)
def test_other_unary_symbolic_static_shape(
self, op_function, op_class, np_op, op_kwargs, input_shape
):
expected_shape = op_function(
KerasTensor(input_shape), **op_kwargs
).shape
x = KerasTensor(input_shape, sparse=True)
self.assertEqual(op_function(x, **op_kwargs).shape, expected_shape)
self.assertTrue(op_function(x, **op_kwargs).sparse)
self.assertEqual(op_class(**op_kwargs)(x).shape, expected_shape)
self.assertTrue(op_class(**op_kwargs)(x).sparse)
@parameterized.named_parameters(OTHER_UNARY_OPS_TESTS)
def test_other_unary_symbolic_dynamic_shape(
self, op_function, op_class, np_op, op_kwargs, input_shape
):
input_shape = (None,) + input_shape[1:]
expected_shape = op_function(
KerasTensor(input_shape), **op_kwargs
).shape
x = KerasTensor(input_shape, sparse=True)
self.assertEqual(op_function(x, **op_kwargs).shape, expected_shape)
self.assertTrue(op_function(x, **op_kwargs).sparse)
self.assertEqual(op_class(**op_kwargs)(x).shape, expected_shape)
self.assertTrue(op_class(**op_kwargs)(x).sparse)
@parameterized.named_parameters(DENSIFYING_UNARY_OPS_TESTS)
def test_densifying_unary_sparse_correctness(
self, op_function, op_class, np_op
):
x = np.array([[1, 0.5, -0.7], [0.9, 0.2, -1]])
x = create_sparse_tensor(x)
x_np = backend.convert_to_numpy(x)
self.assertAllClose(op_function(x), np_op(x_np))
self.assertAllClose(op_class()(x), np_op(x_np))
@parameterized.named_parameters(DENSIFYING_UNARY_OPS_TESTS)
def test_densifying_unary_indexed_slices_correctness(
self, op_function, op_class, np_op
):
x = np.array([[1, 0.5, -0.7], [0.9, 0.2, -1]])
x = create_indexed_slices(x)
x_np = backend.convert_to_numpy(x)
self.assertAllClose(op_function(x), np_op(x_np))
self.assertAllClose(op_class()(x), np_op(x_np))
@parameterized.named_parameters(ELEMENTWISE_UNARY_OPS_TESTS)
def test_elementwise_unary_sparse_correctness(
self, op_function, op_class, np_op
):
if op_function.__name__ in ("conj", "conjugate", "imag", "real"):
x = np.array([[1 + 1j, 2 + 2j, 3 + 3j], [3 + 3j, 2 + 2j, 1 + 1j]])
else:
x = np.array([[1, 0.5, -0.7], [0.9, 0.2, -1]])
x = create_sparse_tensor(x)
x_np = backend.convert_to_numpy(x)
self.assertAllClose(op_function(x), np_op(x_np))
self.assertSameSparseness(op_function(x), x)
self.assertAllClose(op_class()(x), np_op(x_np))
self.assertSameSparseness(op_class()(x), x)
@parameterized.named_parameters(ELEMENTWISE_UNARY_OPS_TESTS)
def test_elementwise_unary_indexed_slices_correctness(
self, op_function, op_class, np_op
):
if op_function.__name__ in ("conj", "conjugate", "imag", "real"):
x = np.array([[1 + 1j, 2 + 2j, 3 + 3j], [3 + 3j, 2 + 2j, 1 + 1j]])
else:
x = np.array([[1, 0.5, -0.7], [0.9, 0.2, -1]])
x = create_indexed_slices(x)
x_np = backend.convert_to_numpy(x)
self.assertAllClose(op_function(x), np_op(x_np))
self.assertSameSparseness(op_function(x), x)
self.assertAllClose(op_class()(x), np_op(x_np))
self.assertSameSparseness(op_class()(x), x)
@parameterized.named_parameters(OTHER_UNARY_OPS_TESTS)
def test_other_unary_symbolic_sparse_correctness(
self, op_function, op_class, np_op, op_kwargs, input_shape
):
x = np.random.random(input_shape)
if op_function is knp.mean:
x = create_indexed_slices(x)
else:
x = create_sparse_tensor(x)
x_np = backend.convert_to_numpy(x)
self.assertAllClose(
op_function(x, **op_kwargs), np_op(x_np, **op_kwargs)
)
self.assertAllClose(op_class(**op_kwargs)(x), np_op(x_np, **op_kwargs))
# Reduction operations have complex, backend-dependent rules about
# when the result is sparse and when it is dense.
if op_function is not knp.mean:
self.assertSameSparseness(op_function(x, **op_kwargs), x)
self.assertSameSparseness(op_class(**op_kwargs)(x), x)
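    # The binary tests check every sparse/dense combination of the two
    # operands; `op_sparseness` encodes the expected sparseness of the
    # result for each combination (for instance, multiplying by a dense
    # operand can keep the result sparse, while adding a dense operand
    # generally densifies it).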
@parameterized.named_parameters(
named_product(
BINARY_OPS_TESTS, x_sparse=[True, False], y_sparse=[True, False]
)
)
def test_binary_symbolic_static_shape(
self, x_sparse, y_sparse, op_function, op_class, np_op, op_sparseness
):
x = KerasTensor([2, 3], sparse=x_sparse)
y = KerasTensor([2, 3], sparse=y_sparse)
self.assertEqual(op_function(x, y).shape, (2, 3))
self.assertSparseness(op_function(x, y), op_sparseness(x, y))
self.assertEqual(op_class()(x, y).shape, (2, 3))
self.assertSparseness(op_class()(x, y), op_sparseness(x, y))
@parameterized.named_parameters(
named_product(
BINARY_OPS_TESTS, x_sparse=[True, False], y_sparse=[True, False]
)
)
def test_binary_symbolic_dynamic_shape(
self, x_sparse, y_sparse, op_function, op_class, np_op, op_sparseness
):
x = KerasTensor([None, 3], sparse=x_sparse)
y = KerasTensor([2, None], sparse=y_sparse)
self.assertEqual(op_function(x, y).shape, (2, 3))
self.assertSparseness(op_function(x, y), op_sparseness(x, y))
self.assertEqual(op_class()(x, y).shape, (2, 3))
self.assertSparseness(op_class()(x, y), op_sparseness(x, y))
@parameterized.named_parameters(
named_product(
BINARY_OPS_TESTS,
get_sparseness_combinations(create_sparse_tensor),
dtype=DTYPES,
)
)
def test_binary_correctness_sparse_tensor(
self, x, y, op_function, op_class, np_op, op_sparseness, dtype
):
x = backend.cast(x, dtype)
y = backend.cast(y, dtype)
expected_result = np_op(
backend.convert_to_numpy(x), backend.convert_to_numpy(y)
)
self.assertAllClose(op_function(x, y), expected_result)
self.assertSparseness(op_function(x, y), op_sparseness(x, y))
self.assertAllClose(op_class()(x, y), expected_result)
self.assertSparseness(op_class()(x, y), op_sparseness(x, y))
@parameterized.named_parameters(
named_product(
BINARY_OPS_TESTS,
get_sparseness_combinations(create_indexed_slices),
dtype=DTYPES,
)
)
def test_binary_correctness_indexed_slices(
self, x, y, op_function, op_class, np_op, op_sparseness, dtype
):
x = backend.cast(x, dtype)
y = backend.cast(y, dtype)
expected_result = np_op(
backend.convert_to_numpy(x), backend.convert_to_numpy(y)
)
self.assertAllClose(op_function(x, y), expected_result)
self.assertSparseness(op_function(x, y), op_sparseness(x, y))
self.assertAllClose(op_class()(x, y), expected_result)
self.assertSparseness(op_class()(x, y), op_sparseness(x, y))
@parameterized.named_parameters(
named_product(
sparse_type=["sparse_tensor", "indexed_slices"],
dtype=["int32", "float32"],
)
)
def test_divide_with_zeros_nans(self, sparse_type, dtype):
x = backend.convert_to_tensor([[0, 2, 3], [3, 2, 1]], dtype=dtype)
if sparse_type == "indexed_slices":
x = create_indexed_slices(x, start=0, delta=2)
else:
x = create_sparse_tensor(x, start=0, delta=2)
if dtype.startswith("int"):
y = [[0, 0, 3], [0, 0, 1]]
else:
y = [[np.nan, np.nan, 3], [0, 0, 1]]
y = backend.convert_to_tensor(y, dtype=dtype)
expected_result = np.divide(
backend.convert_to_numpy(x), backend.convert_to_numpy(y)
)
self.assertAllClose(knp.divide(x, y), expected_result)
self.assertAllClose(knp.Divide()(x, y), expected_result)
class NumpyDtypeTest(testing.TestCase, parameterized.TestCase):
"""Test the dtype to verify that the behavior matches JAX."""
    # TODO: Using uint64 leads to weak type promotion (`float`), resulting
    # in different behavior between JAX and Keras. Currently, we are
    # skipping the tests for uint64.
ALL_DTYPES = [
x for x in ALLOWED_DTYPES if x not in ["string", "uint64"]
] + [None]
INT_DTYPES = [x for x in ALLOWED_DTYPES if "int" in x and x != "uint64"]
FLOAT_DTYPES = [x for x in ALLOWED_DTYPES if "float" in x]
if backend.backend() == "torch":
# TODO: torch doesn't support uint16, uint32 and uint64
ALL_DTYPES = [
x for x in ALL_DTYPES if x not in ["uint16", "uint32", "uint64"]
]
INT_DTYPES = [
x for x in INT_DTYPES if x not in ["uint16", "uint32", "uint64"]
]
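    # `enable_x64` is entered in `setUp` so that the `jax.numpy` reference
    # computations can produce 64-bit dtypes where the promotion rules call
    # for them; tests that depend on the 32-bit defaults re-enter
    # `disable_x64` locally.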
def setUp(self):
from jax.experimental import enable_x64
self.jax_enable_x64 = enable_x64()
self.jax_enable_x64.__enter__()
return super().setUp()
def tearDown(self) -> None:
self.jax_enable_x64.__exit__(None, None, None)
return super().tearDown()
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_add(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((1,), dtype=dtype1)
x2 = knp.ones((1,), dtype=dtype2)
x1_jax = jnp.ones((1,), dtype=dtype1)
x2_jax = jnp.ones((1,), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.add(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.add(x1, x2).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Add().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_add_python_types(self, dtype):
import jax.experimental
import jax.numpy as jnp
# We have to disable x64 for jax since jnp.add doesn't respect
# JAX_DEFAULT_DTYPE_BITS=32 in `./conftest.py`. We also need to downcast
# the expected dtype from 64 bit to 32 bit when using jax backend.
with jax.experimental.disable_x64():
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
# python int
expected_dtype = standardize_dtype(jnp.add(x_jax, 1).dtype)
if dtype == "float64":
expected_dtype = "float64"
elif dtype == "int64":
expected_dtype = "int64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.add(x, 1).dtype), expected_dtype
)
self.assertEqual(
knp.Add().symbolic_call(x, 1).dtype, expected_dtype
)
# python float
expected_dtype = standardize_dtype(jnp.add(x_jax, 1.0).dtype)
if dtype == "float64":
expected_dtype = "float64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.add(x, 1.0).dtype), expected_dtype
)
self.assertEqual(
knp.Add().symbolic_call(x, 1.0).dtype, expected_dtype
)
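    # The `*_python_types` tests rely on weak typing of Python scalars: a
    # plain Python int or float generally does not widen the tensor dtype
    # (e.g. float16 + 1 stays float16), unlike tensor/tensor promotion.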
@parameterized.named_parameters(named_product(dtype=INT_DTYPES))
def test_bincount(self, dtype):
import jax.numpy as jnp
if backend.backend() == "tensorflow":
import tensorflow as tf
            if tf.config.list_physical_devices("GPU"):
                self.skipTest("bincount does not work on TensorFlow GPU")
x = np.array([1, 1, 2, 3, 2, 4, 4, 5], dtype=dtype)
weights = np.array([0, 0, 3, 2, 1, 1, 4, 2], dtype=dtype)
minlength = 3
self.assertEqual(
standardize_dtype(
knp.bincount(x, weights=weights, minlength=minlength).dtype
),
standardize_dtype(
jnp.bincount(x, weights=weights, minlength=minlength).dtype
),
)
self.assertEqual(
knp.Bincount(weights=weights, minlength=minlength)
.symbolic_call(x)
.dtype,
standardize_dtype(
jnp.bincount(x, weights=weights, minlength=minlength).dtype
),
)
# test float32 weights
weights = np.array([0, 0, 3, 2, 1, 1, 4, 2], dtype="float32")
self.assertEqual(
standardize_dtype(knp.bincount(x, weights=weights).dtype),
standardize_dtype(jnp.bincount(x, weights=weights).dtype),
)
self.assertEqual(
knp.Bincount(weights=weights).symbolic_call(x).dtype,
standardize_dtype(jnp.bincount(x, weights=weights).dtype),
)
# test float16 weights
weights = np.array([0, 0, 3, 2, 1, 1, 4, 2], dtype="float16")
self.assertEqual(
standardize_dtype(knp.bincount(x, weights=weights).dtype),
standardize_dtype(jnp.bincount(x, weights=weights).dtype),
)
self.assertEqual(
knp.Bincount(weights=weights).symbolic_call(x).dtype,
standardize_dtype(jnp.bincount(x, weights=weights).dtype),
)
# test weights=None
self.assertEqual(
standardize_dtype(knp.bincount(x).dtype),
standardize_dtype(jnp.bincount(x).dtype),
)
self.assertEqual(
knp.Bincount().symbolic_call(x).dtype,
standardize_dtype(jnp.bincount(x).dtype),
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_subtract(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
if dtype1 == "bool" and dtype2 == "bool":
self.skipTest("subtract does not support bool")
x1 = knp.ones((1,), dtype=dtype1)
x2 = knp.ones((1,), dtype=dtype2)
x1_jax = jnp.ones((1,), dtype=dtype1)
x2_jax = jnp.ones((1,), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.subtract(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.subtract(x1, x2).dtype), expected_dtype
)
self.assertEqual(
knp.Subtract().symbolic_call(x1, x2).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_subtract_python_types(self, dtype):
import jax.experimental
import jax.numpy as jnp
# We have to disable x64 for jax since jnp.subtract doesn't respect
# JAX_DEFAULT_DTYPE_BITS=32 in `./conftest.py`. We also need to downcast
# the expected dtype from 64 bit to 32 bit when using jax backend.
with jax.experimental.disable_x64():
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
# python int
expected_dtype = standardize_dtype(jnp.subtract(x_jax, 1).dtype)
if dtype == "float64":
expected_dtype = "float64"
elif dtype == "int64":
expected_dtype = "int64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.subtract(x, 1).dtype), expected_dtype
)
self.assertEqual(
knp.Subtract().symbolic_call(x, 1).dtype, expected_dtype
)
# python float
expected_dtype = standardize_dtype(jnp.subtract(x_jax, 1.0).dtype)
if dtype == "float64":
expected_dtype = "float64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.subtract(x, 1.0).dtype), expected_dtype
)
self.assertEqual(
knp.Subtract().symbolic_call(x, 1.0).dtype, expected_dtype
)
@parameterized.named_parameters(
named_product(
dtypes=list(itertools.combinations(ALL_DTYPES, 2))
+ [("int8", "int8")]
)
)
def test_matmul(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
        # The matrix shapes need to meet the requirements of torch._int_mm
        # in order to exercise hardware-accelerated int8 matmul.
x1 = knp.ones((17, 16), dtype=dtype1)
x2 = knp.ones((16, 8), dtype=dtype2)
x1_jax = jnp.ones((17, 16), dtype=dtype1)
x2_jax = jnp.ones((16, 8), dtype=dtype2)
if dtype1 == "int8" and dtype2 == "int8":
preferred_element_type = "int32"
else:
preferred_element_type = None
expected_dtype = standardize_dtype(
jnp.matmul(
x1_jax, x2_jax, preferred_element_type=preferred_element_type
).dtype
)
self.assertEqual(
standardize_dtype(knp.matmul(x1, x2).dtype), expected_dtype
)
self.assertEqual(
knp.Matmul().symbolic_call(x1, x2).dtype, expected_dtype
)
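    # For the int8 x int8 case, the expected dtype is computed with
    # `preferred_element_type="int32"` so that both Keras and JAX accumulate
    # into int32, mirroring hardware int8 GEMM kernels such as
    # `torch._int_mm`.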
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_multiply(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((1,), dtype=dtype1)
x2 = knp.ones((1,), dtype=dtype2)
x1_jax = jnp.ones((1,), dtype=dtype1)
x2_jax = jnp.ones((1,), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.multiply(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.multiply(x1, x2).dtype), expected_dtype
)
self.assertEqual(
knp.Multiply().symbolic_call(x1, x2).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_multiply_python_types(self, dtype):
import jax.experimental
import jax.numpy as jnp
# We have to disable x64 for jax since jnp.multiply doesn't respect
# JAX_DEFAULT_DTYPE_BITS=32 in `./conftest.py`. We also need to downcast
# the expected dtype from 64 bit to 32 bit when using jax backend.
with jax.experimental.disable_x64():
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
# python int
expected_dtype = standardize_dtype(jnp.multiply(x_jax, 1).dtype)
if dtype == "float64":
expected_dtype = "float64"
elif dtype == "int64":
expected_dtype = "int64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.multiply(x, 1).dtype), expected_dtype
)
self.assertEqual(
knp.Multiply().symbolic_call(x, 1).dtype, expected_dtype
)
# python float
expected_dtype = standardize_dtype(jnp.multiply(x_jax, 1.0).dtype)
if dtype == "float64":
expected_dtype = "float64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.multiply(x, 1.0).dtype), expected_dtype
)
self.assertEqual(
knp.Multiply().symbolic_call(x, 1.0).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_mean(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.mean(x_jax).dtype)
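        # The expectation is pinned to float32 for int64 inputs: under x64
        # jnp promotes the mean to float64, which Keras does not do.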
if dtype == "int64":
expected_dtype = "float32"
self.assertEqual(standardize_dtype(knp.mean(x).dtype), expected_dtype)
self.assertEqual(knp.Mean().symbolic_call(x).dtype, expected_dtype)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_max(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.max(x_jax).dtype)
self.assertEqual(standardize_dtype(knp.max(x).dtype), expected_dtype)
self.assertEqual(knp.Max().symbolic_call(x).dtype, expected_dtype)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_ones(self, dtype):
import jax.numpy as jnp
expected_dtype = standardize_dtype(jnp.ones([2, 3], dtype=dtype).dtype)
self.assertEqual(
standardize_dtype(knp.ones([2, 3], dtype=dtype).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(
knp.Ones().symbolic_call([2, 3], dtype=dtype).dtype
),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_zeros(self, dtype):
import jax.numpy as jnp
expected_dtype = standardize_dtype(jnp.zeros([2, 3], dtype=dtype).dtype)
self.assertEqual(
standardize_dtype(knp.zeros([2, 3], dtype=dtype).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(
knp.Zeros().symbolic_call([2, 3], dtype=dtype).dtype
),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_absolute(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.absolute(x_jax).dtype)
self.assertEqual(
standardize_dtype(knp.absolute(x).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Absolute().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_all(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.all(x_jax).dtype)
self.assertEqual(standardize_dtype(knp.all(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.All().symbolic_call(x).dtype), expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_amax(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.amax(x_jax).dtype)
self.assertEqual(standardize_dtype(knp.amax(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Amax().symbolic_call(x).dtype), expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_amin(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.amin(x_jax).dtype)
self.assertEqual(standardize_dtype(knp.amin(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Amin().symbolic_call(x).dtype), expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_any(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.any(x_jax).dtype)
self.assertEqual(standardize_dtype(knp.any(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Any().symbolic_call(x).dtype), expected_dtype
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_append(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((1,), dtype=dtype1)
x2 = knp.ones((1,), dtype=dtype2)
x1_jax = jnp.ones((1,), dtype=dtype1)
x2_jax = jnp.ones((1,), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.append(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.append(x1, x2).dtype), expected_dtype
)
self.assertEqual(
knp.Append().symbolic_call(x1, x2).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_argmax(self, dtype):
import jax.numpy as jnp
if dtype == "bool":
value = [[True, False, True], [False, True, False]]
else:
value = [[1, 2, 3], [3, 2, 1]]
x = knp.array(value, dtype=dtype)
x_jax = jnp.array(value, dtype=dtype)
expected_dtype = standardize_dtype(jnp.argmax(x_jax).dtype)
self.assertEqual(standardize_dtype(knp.argmax(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Argmax().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_argmin(self, dtype):
import jax.numpy as jnp
if dtype == "bool":
value = [[True, False, True], [False, True, False]]
else:
value = [[1, 2, 3], [3, 2, 1]]
x = knp.array(value, dtype=dtype)
x_jax = jnp.array(value, dtype=dtype)
expected_dtype = standardize_dtype(jnp.argmin(x_jax).dtype)
self.assertEqual(standardize_dtype(knp.argmin(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Argmin().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_argsort(self, dtype):
import jax.numpy as jnp
if dtype == "bool":
value = [[True, False, True], [False, True, False]]
else:
value = [[1, 2, 3], [4, 5, 6]]
x = knp.array(value, dtype=dtype)
x_jax = jnp.array(value, dtype=dtype)
expected_dtype = standardize_dtype(jnp.argsort(x_jax).dtype)
self.assertEqual(
standardize_dtype(knp.argsort(x).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Argsort().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.parameters(
(10, None, 1, None),
(0, 10, 1, None),
(0, 10, 0.5, None),
(10.0, None, 1, None),
(0, 10.0, 1, None),
(0.0, 10, 1, None),
(10, None, 1, "float32"),
(10, None, 1, "int32"),
(10, None, 1, "int16"),
(10, None, 1, "float16"),
)
def test_arange(self, start, stop, step, dtype):
import jax.numpy as jnp
expected_dtype = standardize_dtype(
jnp.arange(start, stop, step, dtype).dtype
)
self.assertEqual(
standardize_dtype(knp.arange(start, stop, step, dtype).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(
knp.Arange().symbolic_call(start, stop, step, dtype).dtype
),
expected_dtype,
)
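    # With dtype=None, the result dtype of `arange` is inferred from
    # start/stop/step: all-integer arguments yield an integer dtype, and
    # any float argument yields a float dtype, matching `jnp.arange`.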
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_arccos(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.arccos(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.arccos(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Arccos().symbolic_call(x).dtype),
expected_dtype,
)
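    # Many of the float-returning unary ops below share the same override:
    # under x64, jnp promotes int64 inputs to float64, while Keras
    # standardizes such results to `backend.floatx()`.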
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_arccosh(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.arccosh(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(
standardize_dtype(knp.arccosh(x).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Arccosh().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_arcsin(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.arcsin(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.arcsin(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Arcsin().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_arcsinh(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.arcsinh(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(
standardize_dtype(knp.arcsinh(x).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Arcsinh().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_arctan(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.arctan(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.arctan(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Arctan().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_arctan2(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((1,), dtype=dtype1)
x2 = knp.ones((1,), dtype=dtype2)
x1_jax = jnp.ones((1,), dtype=dtype1)
x2_jax = jnp.ones((1,), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.arctan2(x1_jax, x2_jax).dtype)
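        # Like `logaddexp` below, jnp promotes "int64"/"uint32" integer
        # combinations to "float64"; force the expected dtype to
        # `backend.floatx()` instead.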
if dtype1 is not None and "float" not in dtype1:
if dtype2 is not None and "float" not in dtype2:
if "int64" in (dtype1, dtype2) or "uint32" in (dtype1, dtype2):
expected_dtype = backend.floatx()
self.assertEqual(
standardize_dtype(knp.arctan2(x1, x2).dtype), expected_dtype
)
self.assertEqual(
knp.Arctan2().symbolic_call(x1, x2).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_arctanh(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.arctanh(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(
standardize_dtype(knp.arctanh(x).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Arctanh().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.parameters(
(bool(0), "bool"),
(int(0), "int32"),
(float(0), backend.floatx()),
([False, True, False], "bool"),
([1, 2, 3], "int32"),
([1.0, 2.0, 3.0], backend.floatx()),
([1, 2.0, 3], backend.floatx()),
([[False], [True], [False]], "bool"),
([[1], [2], [3]], "int32"),
([[1], [2.0], [3]], backend.floatx()),
*[
(np.array(0, dtype=dtype), dtype)
for dtype in ALL_DTYPES
if dtype is not None
],
)
def test_array(self, x, expected_dtype):
# We have to disable x64 for jax backend since jnp.array doesn't respect
# JAX_DEFAULT_DTYPE_BITS=32 in `./conftest.py`. We also need to downcast
# the expected dtype from 64 bit to 32 bit.
if backend.backend() == "jax":
import jax.experimental
jax_disable_x64 = jax.experimental.disable_x64()
expected_dtype = expected_dtype.replace("64", "32")
else:
jax_disable_x64 = contextlib.nullcontext()
with jax_disable_x64:
self.assertEqual(
standardize_dtype(knp.array(x).dtype), expected_dtype
)
        # TODO: support asserting the dtype of the `knp.Array` operation
        # class.
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_average(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((1,), dtype=dtype1)
x2 = knp.ones((1,), dtype=dtype2)
x1_jax = jnp.ones((1,), dtype=dtype1)
x2_jax = jnp.ones((1,), dtype=dtype2)
expected_dtype = standardize_dtype(
jnp.average(x1_jax, weights=x2_jax).dtype
)
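        # jnp promotes "int64"/"uint32" integer combinations to "float64";
        # force the expected dtype to `backend.floatx()` instead.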
if dtype1 is not None and "float" not in dtype1:
if dtype2 is not None and "float" not in dtype2:
if "int64" in (dtype1, dtype2) or "uint32" in (dtype1, dtype2):
expected_dtype = backend.floatx()
self.assertEqual(
standardize_dtype(knp.average(x1, weights=x2).dtype), expected_dtype
)
self.assertEqual(
knp.Average().symbolic_call(x1, weights=x2).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_broadcast_to(self, dtype):
import jax.numpy as jnp
x = knp.ones((3,), dtype=dtype)
x_jax = jnp.ones((3,), dtype=dtype)
expected_dtype = standardize_dtype(
jnp.broadcast_to(x_jax, (3, 3)).dtype
)
self.assertEqual(
standardize_dtype(knp.broadcast_to(x, (3, 3)).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.BroadcastTo((3, 3)).symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_ceil(self, dtype):
import jax.numpy as jnp
if dtype is None:
dtype = backend.floatx()
if dtype == "bool":
value = [[True, False, True], [True, False, True]]
elif "int" in dtype:
value = [[1, 2, 2], [2, 11, 5]]
else:
value = [[1.2, 2.1, 2.5], [2.4, 11.9, 5.5]]
x = knp.array(value, dtype=dtype)
x_jax = jnp.array(value, dtype=dtype)
expected_dtype = standardize_dtype(jnp.ceil(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.ceil(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Ceil().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_clip(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.clip(x_jax, -2, 2).dtype)
if dtype == "bool":
expected_dtype = "int32"
self.assertEqual(
standardize_dtype(knp.clip(x, -2, 2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Clip(-2, 2).symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_concatenate(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((1,), dtype=dtype1)
x2 = knp.ones((1,), dtype=dtype2)
x1_jax = jnp.ones((1,), dtype=dtype1)
x2_jax = jnp.ones((1,), dtype=dtype2)
expected_dtype = standardize_dtype(
jnp.concatenate([x1_jax, x2_jax]).dtype
)
self.assertEqual(
standardize_dtype(knp.concatenate([x1, x2]).dtype), expected_dtype
)
self.assertEqual(
knp.Concatenate().symbolic_call([x1, x2]).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_cos(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.cos(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.cos(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Cos().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_cosh(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.cosh(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.cosh(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Cosh().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_copy(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.copy(x_jax).dtype)
self.assertEqual(standardize_dtype(knp.copy(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Copy().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_count_nonzero(self, dtype):
x = knp.ones((1,), dtype=dtype)
expected_dtype = "int32"
self.assertEqual(
standardize_dtype(knp.count_nonzero(x).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.CountNonzero().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_cross(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((1, 1, 3), dtype=dtype1)
x2 = knp.ones((1, 1, 3), dtype=dtype2)
x1_jax = jnp.ones((1, 1, 3), dtype=dtype1)
x2_jax = jnp.ones((1, 1, 3), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.cross(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.cross(x1, x2).dtype), expected_dtype
)
self.assertEqual(
knp.Cross().symbolic_call(x1, x2).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_cumprod(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.cumprod(x_jax).dtype)
self.assertEqual(
standardize_dtype(knp.cumprod(x).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Cumprod().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_cumsum(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.cumsum(x_jax).dtype)
self.assertEqual(standardize_dtype(knp.cumsum(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Cumsum().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_diag(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.diag(x_jax).dtype)
self.assertEqual(standardize_dtype(knp.diag(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Diag().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_diagonal(self, dtype):
import jax.numpy as jnp
x = knp.ones((1, 1, 1), dtype=dtype)
x_jax = jnp.ones((1, 1, 1), dtype=dtype)
expected_dtype = standardize_dtype(jnp.diagonal(x_jax).dtype)
self.assertEqual(
standardize_dtype(knp.diagonal(x).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Diagonal().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_diff(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.diff(x_jax).dtype)
self.assertEqual(standardize_dtype(knp.diff(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Diff().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_digitize(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
bins = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
x_bins = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.digitize(x_jax, x_bins).dtype)
self.assertEqual(
standardize_dtype(knp.digitize(x, bins).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Digitize().symbolic_call(x, bins).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_divide(self, dtypes):
import jax.experimental
import jax.numpy as jnp
# We have to disable x64 for jax since jnp.divide doesn't respect
# JAX_DEFAULT_DTYPE_BITS=32 in `./conftest.py`. We also need to downcast
# the expected dtype from 64 bit to 32 bit when using jax backend.
with jax.experimental.disable_x64():
dtype1, dtype2 = dtypes
x1 = knp.ones((1,), dtype=dtype1)
x2 = knp.ones((1,), dtype=dtype2)
x1_jax = jnp.ones((1,), dtype=dtype1)
x2_jax = jnp.ones((1,), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.divide(x1_jax, x2_jax).dtype)
if "float64" in (dtype1, dtype2):
expected_dtype = "float64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.divide(x1, x2).dtype), expected_dtype
)
self.assertEqual(
knp.Divide().symbolic_call(x1, x2).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_divide_python_types(self, dtype):
import jax.experimental
import jax.numpy as jnp
# We have to disable x64 for jax since jnp.divide doesn't respect
# JAX_DEFAULT_DTYPE_BITS=32 in `./conftest.py`. We also need to downcast
# the expected dtype from 64 bit to 32 bit when using jax backend.
with jax.experimental.disable_x64():
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
# python int
expected_dtype = standardize_dtype(jnp.divide(x_jax, 1).dtype)
if dtype == "float64":
expected_dtype = "float64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.divide(x, 1).dtype), expected_dtype
)
self.assertEqual(
knp.Divide().symbolic_call(x, 1).dtype, expected_dtype
)
# python float
expected_dtype = standardize_dtype(jnp.divide(x_jax, 1.0).dtype)
if dtype == "float64":
expected_dtype = "float64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.divide(x, 1.0).dtype), expected_dtype
)
self.assertEqual(
knp.Divide().symbolic_call(x, 1.0).dtype, expected_dtype
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_dot(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((2, 3, 4), dtype=dtype1)
x2 = knp.ones((4, 3), dtype=dtype2)
x1_jax = jnp.ones((2, 3, 4), dtype=dtype1)
x2_jax = jnp.ones((4, 3), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.dot(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.dot(x1, x2).dtype), expected_dtype
)
self.assertEqual(knp.Dot().symbolic_call(x1, x2).dtype, expected_dtype)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_einsum(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((1, 1, 1), dtype=dtype1)
x2 = knp.ones((1, 1, 1), dtype=dtype2)
x1_jax = jnp.ones((1, 1, 1), dtype=dtype1)
x2_jax = jnp.ones((1, 1, 1), dtype=dtype2)
subscripts = "ijk,lkj->il"
expected_dtype = standardize_dtype(
jnp.einsum(subscripts, x1_jax, x2_jax).dtype
)
self.assertEqual(
standardize_dtype(knp.einsum(subscripts, x1, x2).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(
knp.Einsum(subscripts).symbolic_call(x1, x2).dtype
),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_empty(self, dtype):
import jax.numpy as jnp
expected_dtype = standardize_dtype(jnp.empty([2, 3], dtype=dtype).dtype)
self.assertEqual(
standardize_dtype(knp.empty([2, 3], dtype=dtype).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(
knp.Empty().symbolic_call([2, 3], dtype=dtype).dtype
),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_equal(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((), dtype=dtype1)
x2 = knp.ones((), dtype=dtype2)
x1_jax = jnp.ones((), dtype=dtype1)
x2_jax = jnp.ones((), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.equal(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.equal(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Equal().symbolic_call(x1, x2).dtype),
expected_dtype,
)
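    # Comparison ops (`equal`, `greater`, `less`, ...) always return bool
    # regardless of the operand dtypes, so no dtype overrides are needed in
    # these tests.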
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_exp(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.exp(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.exp(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Exp().symbolic_call(x).dtype), expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_expand_dims(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.expand_dims(x_jax, -1).dtype)
self.assertEqual(
standardize_dtype(knp.expand_dims(x, -1).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.ExpandDims(-1).symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_expm1(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.expm1(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.expm1(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Expm1().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_eye(self, dtype):
import jax.numpy as jnp
expected_dtype = standardize_dtype(jnp.eye(3, dtype=dtype).dtype)
self.assertEqual(
standardize_dtype(knp.eye(3, dtype=dtype).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Eye().symbolic_call(3, dtype=dtype).dtype),
expected_dtype,
)
expected_dtype = standardize_dtype(jnp.eye(3, 4, 1, dtype=dtype).dtype)
self.assertEqual(
standardize_dtype(knp.eye(3, 4, 1, dtype=dtype).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(
knp.Eye().symbolic_call(3, 4, 1, dtype=dtype).dtype
),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_flip(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.flip(x_jax, -1).dtype)
self.assertEqual(
standardize_dtype(knp.flip(x, -1).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Flip(-1).symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_floor(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.floor(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.floor(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Floor().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_floor_divide(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((), dtype=dtype1)
x2 = knp.ones((), dtype=dtype2)
x1_jax = jnp.ones((), dtype=dtype1)
x2_jax = jnp.ones((), dtype=dtype2)
expected_dtype = standardize_dtype(
jnp.floor_divide(x1_jax, x2_jax).dtype
)
self.assertEqual(
standardize_dtype(knp.floor_divide(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.FloorDivide().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_floor_divide_python_types(self, dtype):
import jax.experimental
import jax.numpy as jnp
# We have to disable x64 for jax since jnp.floor_divide doesn't respect
# JAX_DEFAULT_DTYPE_BITS=32 in `./conftest.py`. We also need to downcast
# the expected dtype from 64 bit to 32 bit when using jax backend.
with jax.experimental.disable_x64():
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
# python int
expected_dtype = standardize_dtype(jnp.floor_divide(x_jax, 1).dtype)
if dtype == "float64":
expected_dtype = "float64"
elif dtype == "int64":
expected_dtype = "int64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.floor_divide(x, 1).dtype), expected_dtype
)
self.assertEqual(
knp.FloorDivide().symbolic_call(x, 1).dtype, expected_dtype
)
# python float
expected_dtype = standardize_dtype(
jnp.floor_divide(x_jax, 1.0).dtype
)
if dtype == "float64":
expected_dtype = "float64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.floor_divide(x, 1.0).dtype),
expected_dtype,
)
self.assertEqual(
knp.FloorDivide().symbolic_call(x, 1.0).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_full(self, dtype):
import jax.numpy as jnp
expected_dtype = standardize_dtype(jnp.full((), 0, dtype=dtype).dtype)
if dtype is None:
expected_dtype = backend.floatx()
self.assertEqual(
standardize_dtype(knp.full((), 0, dtype=dtype).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(
knp.Full().symbolic_call((), 0, dtype=dtype).dtype
),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_full_like(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.full_like(x_jax, 0).dtype)
self.assertEqual(
standardize_dtype(knp.full_like(x, 0).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.FullLike().symbolic_call(x, 0).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_greater(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((), dtype=dtype1)
x2 = knp.ones((), dtype=dtype2)
x1_jax = jnp.ones((), dtype=dtype1)
x2_jax = jnp.ones((), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.greater(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.greater(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Greater().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_greater_equal(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((), dtype=dtype1)
x2 = knp.ones((), dtype=dtype2)
x1_jax = jnp.ones((), dtype=dtype1)
x2_jax = jnp.ones((), dtype=dtype2)
expected_dtype = standardize_dtype(
jnp.greater_equal(x1_jax, x2_jax).dtype
)
self.assertEqual(
standardize_dtype(knp.greater_equal(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.GreaterEqual().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_hstack(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((1, 1), dtype=dtype1)
x2 = knp.ones((1, 1), dtype=dtype2)
x1_jax = jnp.ones((1, 1), dtype=dtype1)
x2_jax = jnp.ones((1, 1), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.hstack([x1_jax, x2_jax]).dtype)
self.assertEqual(
standardize_dtype(knp.hstack([x1, x2]).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Hstack().symbolic_call([x1, x2]).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_identity(self, dtype):
import jax.numpy as jnp
expected_dtype = standardize_dtype(jnp.identity(3, dtype=dtype).dtype)
self.assertEqual(
standardize_dtype(knp.identity(3, dtype=dtype).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(
knp.Identity().symbolic_call(3, dtype=dtype).dtype
),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_isclose(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((), dtype=dtype1)
x2 = knp.ones((), dtype=dtype2)
x1_jax = jnp.ones((), dtype=dtype1)
x2_jax = jnp.ones((), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.isclose(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.isclose(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Isclose().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_isfinite(self, dtype):
import jax.numpy as jnp
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
expected_dtype = standardize_dtype(jnp.isfinite(x_jax).dtype)
self.assertEqual(
standardize_dtype(knp.isfinite(x).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Isfinite().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_isinf(self, dtype):
import jax.numpy as jnp
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
expected_dtype = standardize_dtype(jnp.isinf(x_jax).dtype)
self.assertEqual(standardize_dtype(knp.isinf(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Isinf().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_isnan(self, dtype):
import jax.numpy as jnp
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
expected_dtype = standardize_dtype(jnp.isnan(x_jax).dtype)
self.assertEqual(standardize_dtype(knp.isnan(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Isnan().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_less(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((), dtype=dtype1)
x2 = knp.ones((), dtype=dtype2)
x1_jax = jnp.ones((), dtype=dtype1)
x2_jax = jnp.ones((), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.less(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.less(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Less().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_less_equal(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((), dtype=dtype1)
x2 = knp.ones((), dtype=dtype2)
x1_jax = jnp.ones((), dtype=dtype1)
x2_jax = jnp.ones((), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.less_equal(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.less_equal(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.LessEqual().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(
start_and_stop=[
[0, 10],
[0.5, 10.5],
[np.array([0, 1], "int32"), np.array([10, 20], "int32")],
[np.array([0, 1], "float32"), np.array([10, 20], "float32")],
],
num=[0, 1, 5],
dtype=FLOAT_DTYPES + [None],
)
)
def test_linspace(self, start_and_stop, num, dtype):
import jax.numpy as jnp
start, stop = start_and_stop
expected_dtype = standardize_dtype(
jnp.linspace(start, stop, num, dtype=dtype).dtype
)
self.assertEqual(
standardize_dtype(
knp.linspace(start, stop, num, dtype=dtype).dtype
),
expected_dtype,
)
self.assertEqual(
standardize_dtype(
knp.Linspace(num, dtype=dtype).symbolic_call(start, stop).dtype
),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_log(self, dtype):
import jax.numpy as jnp
x = knp.ones((3, 3), dtype=dtype)
x_jax = jnp.ones((3, 3), dtype=dtype)
expected_dtype = standardize_dtype(jnp.log(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.log(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Log().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_log10(self, dtype):
import jax.numpy as jnp
x = knp.ones((3, 3), dtype=dtype)
x_jax = jnp.ones((3, 3), dtype=dtype)
expected_dtype = standardize_dtype(jnp.log10(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.log10(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Log10().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_log1p(self, dtype):
import jax.numpy as jnp
x = knp.ones((3, 3), dtype=dtype)
x_jax = jnp.ones((3, 3), dtype=dtype)
expected_dtype = standardize_dtype(jnp.log1p(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.log1p(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Log1p().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_log2(self, dtype):
import jax.numpy as jnp
x = knp.ones((3, 3), dtype=dtype)
x_jax = jnp.ones((3, 3), dtype=dtype)
expected_dtype = standardize_dtype(jnp.log2(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.log2(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Log2().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_logaddexp(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((3, 3), dtype=dtype1)
x2 = knp.ones((3, 3), dtype=dtype2)
x1_jax = jnp.ones((3, 3), dtype=dtype1)
x2_jax = jnp.ones((3, 3), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.logaddexp(x1_jax, x2_jax).dtype)
# jnp.logaddexp will promote "int64" and "uint32" to "float64"
# force the promotion to `backend.floatx()`
if dtype1 is not None and "float" not in dtype1:
if dtype2 is not None and "float" not in dtype2:
if "int64" in (dtype1, dtype2) or "uint32" in (dtype1, dtype2):
expected_dtype = backend.floatx()
self.assertEqual(
standardize_dtype(knp.logaddexp(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Logaddexp().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(
start_and_stop=[
[0, 10],
[0.5, 10.5],
[np.array([0, 1], "int32"), np.array([10, 20], "int32")],
[np.array([0, 1], "float32"), np.array([10, 20], "float32")],
],
num=[0, 1, 5],
dtype=FLOAT_DTYPES + [None],
)
)
def test_logspace(self, start_and_stop, num, dtype):
import jax.numpy as jnp
start, stop = start_and_stop
expected_dtype = standardize_dtype(
jnp.logspace(start, stop, num, dtype=dtype).dtype
)
self.assertEqual(
standardize_dtype(
knp.logspace(start, stop, num, dtype=dtype).dtype
),
expected_dtype,
)
self.assertEqual(
standardize_dtype(
knp.Logspace(num, dtype=dtype).symbolic_call(start, stop).dtype
),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_logical_and(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((), dtype=dtype1)
x2 = knp.ones((), dtype=dtype2)
x1_jax = jnp.ones((), dtype=dtype1)
x2_jax = jnp.ones((), dtype=dtype2)
expected_dtype = standardize_dtype(
jnp.logical_and(x1_jax, x2_jax).dtype
)
self.assertEqual(
standardize_dtype(knp.logical_and(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.LogicalAnd().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_logical_not(self, dtype):
import jax.numpy as jnp
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
expected_dtype = standardize_dtype(jnp.logical_not(x_jax).dtype)
self.assertEqual(
standardize_dtype(knp.logical_not(x).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.LogicalNot().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_logical_or(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((), dtype=dtype1)
x2 = knp.ones((), dtype=dtype2)
x1_jax = jnp.ones((), dtype=dtype1)
x2_jax = jnp.ones((), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.logical_or(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.logical_or(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.LogicalOr().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_logical_xor(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((), dtype=dtype1)
x2 = knp.ones((), dtype=dtype2)
x1_jax = jnp.ones((), dtype=dtype1)
x2_jax = jnp.ones((), dtype=dtype2)
expected_dtype = standardize_dtype(
jnp.logical_xor(x1_jax, x2_jax).dtype
)
self.assertEqual(
standardize_dtype(knp.logical_xor(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.LogicalXor().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_maximum(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((), dtype=dtype1)
x2 = knp.ones((), dtype=dtype2)
x1_jax = jnp.ones((), dtype=dtype1)
x2_jax = jnp.ones((), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.maximum(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.maximum(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Maximum().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_maximum_python_types(self, dtype):
import jax.experimental
import jax.numpy as jnp
# We have to disable x64 for jax since jnp.maximum doesn't respect
# JAX_DEFAULT_DTYPE_BITS=32 in `./conftest.py`.
with jax.experimental.disable_x64():
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
# python int
expected_dtype = standardize_dtype(jnp.maximum(x_jax, 1).dtype)
if dtype == "float64":
expected_dtype = "float64"
elif dtype == "int64":
expected_dtype = "int64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.maximum(x, 1).dtype), expected_dtype
)
self.assertEqual(
knp.Maximum().symbolic_call(x, 1).dtype, expected_dtype
)
# python float
expected_dtype = standardize_dtype(jnp.maximum(x_jax, 1.0).dtype)
if dtype == "float64":
expected_dtype = "float64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.maximum(x, 1.0).dtype), expected_dtype
)
self.assertEqual(
knp.Maximum().symbolic_call(x, 1.0).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_median(self, dtype):
import jax.numpy as jnp
x = knp.ones((3, 3), dtype=dtype)
x_jax = jnp.ones((3, 3), dtype=dtype)
expected_dtype = standardize_dtype(jnp.median(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.median(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Median().symbolic_call(x).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.median(x, axis=1).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Median(axis=1).symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_meshgrid(self, dtype):
import jax.numpy as jnp
if dtype == "bool":
self.skipTest("meshgrid doesn't support bool dtype")
elif dtype is None:
dtype = backend.floatx()
x = knp.array([1, 2, 3], dtype=dtype)
y = knp.array([4, 5, 6], dtype=dtype)
x_jax = jnp.array([1, 2, 3], dtype=dtype)
y_jax = jnp.array([4, 5, 6], dtype=dtype)
expected_dtype = standardize_dtype(jnp.meshgrid(x_jax, y_jax)[0].dtype)
self.assertEqual(
standardize_dtype(knp.meshgrid(x, y)[0].dtype), expected_dtype
)
self.assertEqual(
knp.Meshgrid().symbolic_call(x, y)[0].dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_min(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.min(x_jax).dtype)
self.assertEqual(standardize_dtype(knp.min(x).dtype), expected_dtype)
self.assertEqual(knp.Min().symbolic_call(x).dtype, expected_dtype)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_minimum(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((), dtype=dtype1)
x2 = knp.ones((), dtype=dtype2)
x1_jax = jnp.ones((), dtype=dtype1)
x2_jax = jnp.ones((), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.minimum(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.minimum(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Minimum().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_minimum_python_types(self, dtype):
import jax.experimental
import jax.numpy as jnp
# We have to disable x64 for jax since jnp.minimum doesn't respect
# JAX_DEFAULT_DTYPE_BITS=32 in `./conftest.py`.
with jax.experimental.disable_x64():
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
# python int
expected_dtype = standardize_dtype(jnp.minimum(x_jax, 1).dtype)
if dtype == "float64":
expected_dtype = "float64"
elif dtype == "int64":
expected_dtype = "int64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.minimum(x, 1).dtype), expected_dtype
)
self.assertEqual(
knp.Minimum().symbolic_call(x, 1).dtype, expected_dtype
)
# python float
expected_dtype = standardize_dtype(jnp.minimum(x_jax, 1.0).dtype)
if dtype == "float64":
expected_dtype = "float64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.minimum(x, 1.0).dtype), expected_dtype
)
self.assertEqual(
knp.Minimum().symbolic_call(x, 1.0).dtype, expected_dtype
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_mod(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((), dtype=dtype1)
x2 = knp.ones((), dtype=dtype2)
x1_jax = jnp.ones((), dtype=dtype1)
x2_jax = jnp.ones((), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.mod(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.mod(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Mod().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_moveaxis(self, dtype):
import jax.numpy as jnp
x = knp.ones((1, 1, 1), dtype=dtype)
x_jax = jnp.ones((1, 1, 1), dtype=dtype)
expected_dtype = standardize_dtype(jnp.moveaxis(x_jax, -2, -1).dtype)
self.assertEqual(
standardize_dtype(knp.moveaxis(x, -2, -1).dtype), expected_dtype
)
self.assertEqual(
knp.Moveaxis(-2, -1).symbolic_call(x).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_nan_to_num(self, dtype):
import jax.numpy as jnp
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
expected_dtype = standardize_dtype(jnp.nan_to_num(x_jax).dtype)
self.assertEqual(
standardize_dtype(knp.nan_to_num(x).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.NanToNum().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_nonzero(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.nonzero(x_jax)[0].dtype)
self.assertEqual(
standardize_dtype(knp.nonzero(x)[0].dtype), expected_dtype
)
# TODO: verify Nonzero
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_not_equal(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((), dtype=dtype1)
x2 = knp.ones((), dtype=dtype2)
x1_jax = jnp.ones((), dtype=dtype1)
x2_jax = jnp.ones((), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.not_equal(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.not_equal(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.NotEqual().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_ones_like(self, dtype):
import jax.numpy as jnp
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
expected_dtype = standardize_dtype(jnp.ones_like(x_jax).dtype)
self.assertEqual(
standardize_dtype(knp.ones_like(x).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.OnesLike().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_outer(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((1, 2), dtype=dtype1)
x2 = knp.ones((3, 4), dtype=dtype2)
x1_jax = jnp.ones((1, 2), dtype=dtype1)
x2_jax = jnp.ones((3, 4), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.outer(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.outer(x1, x2).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Outer().symbolic_call(x1, x2).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_pad(self, dtype):
import jax.numpy as jnp
x = knp.ones((2, 2, 2, 2), dtype=dtype)
x_jax = jnp.ones((2, 2, 2, 2), dtype=dtype)
pad_width = ((0, 0), (1, 1), (1, 1), (1, 1))
for mode in ("constant", "symmetric", "reflect"):
expected_dtype = standardize_dtype(
jnp.pad(x_jax, pad_width, mode).dtype
)
self.assertEqual(
standardize_dtype(knp.pad(x, pad_width, mode).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(
knp.Pad(pad_width, mode).symbolic_call(x).dtype
),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_power(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x = knp.ones((1,), dtype=dtype1)
power = knp.ones((1,), dtype2)
x_jax = jnp.ones((1,), dtype=dtype1)
power_jax = jnp.ones((1,), dtype2)
expected_dtype = standardize_dtype(jnp.power(x_jax, power_jax).dtype)
self.assertEqual(
standardize_dtype(knp.power(x, power).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Power().symbolic_call(x, power).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_power_python_types(self, dtype):
import jax.experimental
import jax.numpy as jnp
# We have to disable x64 for jax since jnp.power doesn't respect
# JAX_DEFAULT_DTYPE_BITS=32 in `./conftest.py`. We also need to downcast
# the expected dtype from 64 bit to 32 bit when using jax backend.
with jax.experimental.disable_x64():
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
# python int
expected_dtype = standardize_dtype(jnp.power(x_jax, 1).dtype)
if dtype == "float64":
expected_dtype = "float64"
elif dtype == "int64":
expected_dtype = "int64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.power(x, 1).dtype), expected_dtype
)
self.assertEqual(
knp.Power().symbolic_call(x, 1).dtype, expected_dtype
)
# python float
expected_dtype = standardize_dtype(jnp.power(x_jax, 1.0).dtype)
if dtype == "float64":
expected_dtype = "float64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.power(x, 1.0).dtype), expected_dtype
)
self.assertEqual(
knp.Power().symbolic_call(x, 1.0).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_prod(self, dtype):
import jax.numpy as jnp
x = knp.ones((1, 1, 1), dtype=dtype)
x_jax = jnp.ones((1, 1, 1), dtype=dtype)
expected_dtype = standardize_dtype(jnp.prod(x_jax).dtype)
# TODO: torch doesn't support uint32
if backend.backend() == "torch" and expected_dtype == "uint32":
expected_dtype = "int32"
self.assertEqual(
standardize_dtype(knp.prod(x).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Prod().symbolic_call(x).dtype), expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_quantile(self, dtype):
import jax.numpy as jnp
x = knp.ones((3,), dtype=dtype)
x_jax = jnp.ones((3,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.quantile(x_jax, 0.5).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(
standardize_dtype(knp.quantile(x, 0.5).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Quantile().symbolic_call(x, 0.5).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_ravel(self, dtype):
import jax.numpy as jnp
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
expected_dtype = standardize_dtype(jnp.ravel(x_jax).dtype)
self.assertEqual(
standardize_dtype(knp.ravel(x).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Ravel().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_repeat(self, dtype):
import jax.numpy as jnp
x = knp.ones((1, 1), dtype=dtype)
x_jax = jnp.ones((1, 1), dtype=dtype)
expected_dtype = standardize_dtype(jnp.repeat(x_jax, 2).dtype)
self.assertEqual(
standardize_dtype(knp.repeat(x, 2).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Repeat(2).symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_reshape(self, dtype):
import jax.numpy as jnp
x = knp.ones((1, 1), dtype=dtype)
x_jax = jnp.ones((1, 1), dtype=dtype)
expected_dtype = standardize_dtype(jnp.reshape(x_jax, [1]).dtype)
self.assertEqual(
standardize_dtype(knp.reshape(x, [1]).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Reshape([1]).symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_roll(self, dtype):
import jax.numpy as jnp
x = knp.ones((5,), dtype=dtype)
x_jax = jnp.ones((5,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.roll(x_jax, 2).dtype)
self.assertEqual(
standardize_dtype(knp.roll(x, 2).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Roll(2).symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_round(self, dtype):
import jax.numpy as jnp
if dtype == "bool":
self.skipTest("round doesn't support bool dtype")
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
expected_dtype = standardize_dtype(jnp.round(x_jax).dtype)
self.assertEqual(
standardize_dtype(knp.round(x).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Round().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_sign(self, dtype):
import jax.numpy as jnp
if dtype == "bool":
self.skipTest("sign doesn't support bool dtype")
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
expected_dtype = standardize_dtype(jnp.sign(x_jax).dtype)
self.assertEqual(
standardize_dtype(knp.sign(x).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Sign().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_sin(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.sin(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.sin(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Sin().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_sinh(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.sinh(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.sinh(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Sinh().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_sort(self, dtype):
import jax.numpy as jnp
x = knp.ones((2,), dtype=dtype)
x_jax = jnp.ones((2,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.sort(x_jax).dtype)
self.assertEqual(
standardize_dtype(knp.sort(x).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Sort().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_split(self, dtype):
import jax.numpy as jnp
x = knp.ones((1, 2), dtype=dtype)
x_jax = jnp.ones((1, 2), dtype=dtype)
expected_dtype = standardize_dtype(jnp.split(x_jax, 2, -1)[0].dtype)
self.assertEqual(
standardize_dtype(knp.split(x, 2, -1)[0].dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Split(2, -1).symbolic_call(x)[0].dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_sqrt(self, dtype):
import jax.numpy as jnp
x1 = knp.ones((1,), dtype=dtype)
x1_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.sqrt(x1_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.sqrt(x1).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Sqrt().symbolic_call(x1).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_square(self, dtype):
import jax.numpy as jnp
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
expected_dtype = standardize_dtype(jnp.square(x_jax).dtype)
self.assertEqual(
standardize_dtype(knp.square(x).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Square().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_squeeze(self, dtype):
import jax.numpy as jnp
x = knp.ones((1, 1), dtype=dtype)
x_jax = jnp.ones((1, 1), dtype=dtype)
expected_dtype = standardize_dtype(jnp.squeeze(x_jax).dtype)
self.assertEqual(
standardize_dtype(knp.squeeze(x).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Squeeze().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_stack(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((1,), dtype=dtype1)
x2 = knp.ones((1,), dtype=dtype2)
x1_jax = jnp.ones((1,), dtype=dtype1)
x2_jax = jnp.ones((1,), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.stack([x1_jax, x2_jax]).dtype)
self.assertEqual(
standardize_dtype(knp.stack([x1, x2]).dtype), expected_dtype
)
self.assertEqual(
knp.Stack().symbolic_call([x1, x2]).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_std(self, dtype):
import jax.numpy as jnp
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
expected_dtype = standardize_dtype(jnp.std(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(
standardize_dtype(knp.std(x).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Std().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_sum(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.sum(x_jax).dtype)
# TODO: torch doesn't support uint32
if backend.backend() == "torch" and expected_dtype == "uint32":
expected_dtype = "int32"
self.assertEqual(standardize_dtype(knp.sum(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Sum().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_swapaxes(self, dtype):
import jax.numpy as jnp
x = knp.ones((1, 1), dtype=dtype)
x_jax = jnp.ones((1, 1), dtype=dtype)
expected_dtype = standardize_dtype(jnp.swapaxes(x_jax, -1, -2).dtype)
self.assertEqual(
standardize_dtype(knp.swapaxes(x, -1, -2).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Swapaxes(-1, -2).symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_take(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.take(x_jax, 0).dtype)
self.assertEqual(
standardize_dtype(knp.take(x, 0).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Take().symbolic_call(x, 0).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_take_along_axis(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
indices = knp.zeros((1,), dtype="int32")
x_jax = jnp.ones((1,), dtype=dtype)
indices_jax = jnp.zeros((1,), dtype="int32")
expected_dtype = standardize_dtype(
jnp.take_along_axis(x_jax, indices_jax, 0).dtype
)
self.assertEqual(
standardize_dtype(knp.take_along_axis(x, indices, 0).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(
knp.TakeAlongAxis(0).symbolic_call(x, indices).dtype
),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_tan(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.tan(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.tan(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Tan().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_tanh(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.tanh(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.tanh(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Tanh().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_tensordot(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((1, 1), dtype=dtype1)
x2 = knp.ones((1, 1), dtype=dtype2)
x1_jax = jnp.ones((1, 1), dtype=dtype1)
x2_jax = jnp.ones((1, 1), dtype=dtype2)
expected_dtype = standardize_dtype(
jnp.tensordot(x1_jax, x2_jax, 2).dtype
)
self.assertEqual(
standardize_dtype(knp.tensordot(x1, x2, 2).dtype), expected_dtype
)
self.assertEqual(
knp.Tensordot(2).symbolic_call(x1, x2).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_tile(self, dtype):
import jax.numpy as jnp
x = knp.ones((1,), dtype=dtype)
x_jax = jnp.ones((1,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.tile(x_jax, [1]).dtype)
self.assertEqual(
standardize_dtype(knp.tile(x, [1]).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Tile([1]).symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_trace(self, dtype):
import jax.experimental
import jax.numpy as jnp
        # We have to disable x64 for jax since jnp.trace doesn't respect
# JAX_DEFAULT_DTYPE_BITS=32 in `./conftest.py`. We also need to downcast
# the expected dtype from 64 bit to 32 bit when using jax backend.
with jax.experimental.disable_x64():
x = knp.ones((1, 1, 1), dtype=dtype)
x_jax = jnp.ones((1, 1, 1), dtype=dtype)
expected_dtype = standardize_dtype(jnp.trace(x_jax).dtype)
# jnp.trace is buggy with bool. We set the expected_dtype to int32
# for bool inputs
if dtype == "bool":
expected_dtype = "int32"
elif dtype == "float64":
expected_dtype = "float64"
elif dtype == "int64":
expected_dtype = "int64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.trace(x).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Trace().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_transpose(self, dtype):
import jax.numpy as jnp
x = knp.ones((1, 1), dtype=dtype)
x_jax = jnp.ones((1, 1), dtype=dtype)
expected_dtype = standardize_dtype(jnp.transpose(x_jax, [1, 0]).dtype)
self.assertEqual(
standardize_dtype(knp.transpose(x, [1, 0]).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Transpose([1, 0]).symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_tri(self, dtype):
import jax.numpy as jnp
expected_dtype = standardize_dtype(jnp.tri(3, dtype=dtype).dtype)
self.assertEqual(
standardize_dtype(knp.tri(3, dtype=dtype).dtype),
expected_dtype,
)
self.assertEqual(
standardize_dtype(knp.Tri().symbolic_call(3, dtype=dtype).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_tril(self, dtype):
import jax.numpy as jnp
x = knp.ones((1, 1), dtype=dtype)
x_jax = jnp.ones((1, 1), dtype=dtype)
expected_dtype = standardize_dtype(jnp.tril(x_jax, 0).dtype)
self.assertEqual(
standardize_dtype(knp.tril(x, 0).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Tril(0).symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_triu(self, dtype):
import jax.numpy as jnp
x = knp.ones((1, 1), dtype=dtype)
x_jax = jnp.ones((1, 1), dtype=dtype)
expected_dtype = standardize_dtype(jnp.triu(x_jax, 0).dtype)
self.assertEqual(
standardize_dtype(knp.triu(x, 0).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.Triu(0).symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_true_divide(self, dtypes):
import jax.experimental
import jax.numpy as jnp
# We have to disable x64 for jax since jnp.true_divide doesn't respect
# JAX_DEFAULT_DTYPE_BITS=32 in `./conftest.py`. We also need to downcast
# the expected dtype from 64 bit to 32 bit when using jax backend.
with jax.experimental.disable_x64():
dtype1, dtype2 = dtypes
x1 = knp.ones((1,), dtype=dtype1)
x2 = knp.ones((1,), dtype=dtype2)
x1_jax = jnp.ones((1,), dtype=dtype1)
x2_jax = jnp.ones((1,), dtype=dtype2)
expected_dtype = standardize_dtype(
jnp.true_divide(x1_jax, x2_jax).dtype
)
if "float64" in (dtype1, dtype2):
expected_dtype = "float64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.true_divide(x1, x2).dtype), expected_dtype
)
self.assertEqual(
knp.TrueDivide().symbolic_call(x1, x2).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_var(self, dtype):
import jax.numpy as jnp
x = knp.ones((2,), dtype=dtype)
x_jax = jnp.ones((2,), dtype=dtype)
expected_dtype = standardize_dtype(jnp.var(x_jax).dtype)
if dtype == "int64":
expected_dtype = backend.floatx()
self.assertEqual(standardize_dtype(knp.var(x).dtype), expected_dtype)
self.assertEqual(
standardize_dtype(knp.Var().symbolic_call(x).dtype),
expected_dtype,
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_vdot(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((1,), dtype=dtype1)
x2 = knp.ones((1,), dtype=dtype2)
x1_jax = jnp.ones((1,), dtype=dtype1)
x2_jax = jnp.ones((1,), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.vdot(x1_jax, x2_jax).dtype)
self.assertEqual(
standardize_dtype(knp.vdot(x1, x2).dtype), expected_dtype
)
self.assertEqual(knp.Vdot().symbolic_call(x1, x2).dtype, expected_dtype)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_vstack(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
x1 = knp.ones((1,), dtype=dtype1)
x2 = knp.ones((1,), dtype=dtype2)
x1_jax = jnp.ones((1,), dtype=dtype1)
x2_jax = jnp.ones((1,), dtype=dtype2)
expected_dtype = standardize_dtype(jnp.vstack([x1_jax, x2_jax]).dtype)
self.assertEqual(
standardize_dtype(knp.vstack([x1, x2]).dtype), expected_dtype
)
self.assertEqual(
knp.Vstack().symbolic_call([x1, x2]).dtype, expected_dtype
)
@parameterized.named_parameters(
named_product(dtypes=itertools.combinations(ALL_DTYPES, 2))
)
def test_where(self, dtypes):
import jax.numpy as jnp
dtype1, dtype2 = dtypes
condition = knp.ones((10,), dtype="bool")
x1 = knp.ones((10,), dtype=dtype1)
x2 = knp.ones((10,), dtype=dtype2)
condition_jax = jnp.ones((10,), dtype="bool")
x1_jax = jnp.ones((10,), dtype=dtype1)
x2_jax = jnp.ones((10,), dtype=dtype2)
expected_dtype = standardize_dtype(
jnp.where(condition_jax, x1_jax, x2_jax).dtype
)
self.assertEqual(
standardize_dtype(knp.where(condition, x1, x2).dtype),
expected_dtype,
)
self.assertEqual(
knp.Where().symbolic_call(condition, x1, x2).dtype, expected_dtype
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_where_python_types(self, dtype):
import jax.experimental
import jax.numpy as jnp
        # We have to disable x64 for jax since jnp.where doesn't respect
# JAX_DEFAULT_DTYPE_BITS=32 in `./conftest.py`. We also need to downcast
# the expected dtype from 64 bit to 32 bit when using jax backend.
with jax.experimental.disable_x64():
condition = knp.ones((10,), dtype="bool")
x = knp.ones((10,), dtype=dtype)
condition_jax = jnp.ones((10,), dtype="bool")
x_jax = jnp.ones((10,), dtype=dtype)
# python int
expected_dtype = standardize_dtype(
jnp.where(condition_jax, x_jax, 1).dtype
)
if dtype == "float64":
expected_dtype = "float64"
elif dtype == "int64":
expected_dtype = "int64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.where(condition, x, 1).dtype),
expected_dtype,
)
self.assertEqual(
knp.Where().symbolic_call(condition, x, 1).dtype, expected_dtype
)
# python float
expected_dtype = standardize_dtype(
jnp.where(condition_jax, x_jax, 1.0).dtype
)
if dtype == "float64":
expected_dtype = "float64"
if backend.backend() == "jax":
expected_dtype = expected_dtype.replace("64", "32")
self.assertEqual(
standardize_dtype(knp.where(condition, x, 1.0).dtype),
expected_dtype,
)
self.assertEqual(
knp.Where().symbolic_call(condition, x, 1.0).dtype,
expected_dtype,
)
@parameterized.named_parameters(named_product(dtype=ALL_DTYPES))
def test_zeros_like(self, dtype):
import jax.numpy as jnp
x = knp.ones((), dtype=dtype)
x_jax = jnp.ones((), dtype=dtype)
        expected_dtype = standardize_dtype(jnp.zeros_like(x_jax).dtype)
self.assertEqual(
standardize_dtype(knp.zeros_like(x).dtype), expected_dtype
)
self.assertEqual(
standardize_dtype(knp.ZerosLike().symbolic_call(x).dtype),
expected_dtype,
)
| keras/keras/ops/numpy_test.py/0 | {
"file_path": "keras/keras/ops/numpy_test.py",
"repo_id": "keras",
"token_count": 138768
} | 154 |
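# --- Illustrative sketch; not part of keras/keras/ops/numpy_test.py above ---
# The dtype tests above all follow one pattern: run an op through keras.ops,
# run the equivalent jax.numpy op on matching inputs, and assert that the
# standardized result dtypes agree. The import paths below mirror the
# repository layout shown above but are assumptions, not verified against a
# specific release.
import jax.numpy as jnp
from keras import backend
from keras import ops
x = ops.ones((1,), dtype="bfloat16")
x_ref = jnp.ones((1,), dtype="bfloat16")
expected = backend.standardize_dtype(jnp.add(x_ref, x_ref).dtype)
actual = backend.standardize_dtype(ops.add(x, x).dtype)
assert actual == expected  # both backends should report "bfloat16"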
from keras.saving.object_registration import CustomObjectScope
from keras.saving.object_registration import custom_object_scope
from keras.saving.object_registration import get_custom_objects
from keras.saving.object_registration import get_registered_name
from keras.saving.object_registration import get_registered_object
from keras.saving.object_registration import register_keras_serializable
from keras.saving.saving_api import load_model
from keras.saving.serialization_lib import deserialize_keras_object
from keras.saving.serialization_lib import serialize_keras_object
| keras/keras/saving/__init__.py/0 | {
"file_path": "keras/keras/saving/__init__.py",
"repo_id": "keras",
"token_count": 155
} | 155 |
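# --- Illustrative sketch; not part of keras/keras/saving/__init__.py above ---
# Typical use of the re-exported symbols: register a custom function so that
# load_model can resolve it without a `custom_objects` argument. The function
# name, package name and file path below are hypothetical.
import keras
from keras.saving import load_model
from keras.saving import register_keras_serializable
@register_keras_serializable(package="my_package")
def double_relu(x):
    return keras.ops.relu(x) * 2
model = keras.Sequential([keras.layers.Dense(4, activation=double_relu)])
model.build((None, 8))
model.save("double_relu_model.keras")
restored = load_model("double_relu_model.keras")  # resolves "my_package>double_relu"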
import types
from keras.distribution import distribution_lib
from keras.trainers.data_adapters import array_data_adapter
from keras.trainers.data_adapters import py_dataset_adapter
from keras.trainers.data_adapters.array_data_adapter import ArrayDataAdapter
from keras.trainers.data_adapters.generator_data_adapter import (
GeneratorDataAdapter,
)
from keras.trainers.data_adapters.py_dataset_adapter import PyDatasetAdapter
from keras.trainers.data_adapters.tf_dataset_adapter import TFDatasetAdapter
from keras.trainers.data_adapters.torch_data_loader_adapter import (
TorchDataLoaderAdapter,
)
def get_data_adapter(
x,
y=None,
sample_weight=None,
batch_size=None,
steps_per_epoch=None,
shuffle=False,
class_weight=None,
):
    # Check for multi-process/worker distribution. Since only tf.data.Dataset
    # is supported at the moment, we raise an error if the inputs fail
    # the type check.
distribution = distribution_lib.distribution()
if getattr(distribution, "_is_multi_process", False) and not is_tf_dataset(
x
):
raise ValueError(
"When using multi-worker distribution, the data must be provided "
f"as a `tf.data.Dataset` instance. Received: type(x)={type(x)}."
)
if array_data_adapter.can_convert_arrays((x, y, sample_weight)):
return ArrayDataAdapter(
x,
y,
sample_weight=sample_weight,
class_weight=class_weight,
shuffle=shuffle,
batch_size=batch_size,
steps=steps_per_epoch,
)
elif is_tf_dataset(x):
# Unsupported args: y, sample_weight, shuffle
if y is not None:
raise_unsupported_arg("y", "the targets", "tf.data.Dataset")
if sample_weight is not None:
raise_unsupported_arg(
"sample_weights", "the sample weights", "tf.data.Dataset"
)
return TFDatasetAdapter(
x, class_weight=class_weight, distribution=distribution
)
# TODO: should we warn or not?
# warnings.warn(
# "`shuffle=True` was passed, but will be ignored since the "
# "data `x` was provided as a tf.data.Dataset. The Dataset is "
# "expected to already be shuffled "
# "(via `.shuffle(tf.data.AUTOTUNE)`)"
# )
elif isinstance(x, py_dataset_adapter.PyDataset):
if y is not None:
raise_unsupported_arg("y", "the targets", "PyDataset")
if sample_weight is not None:
raise_unsupported_arg(
"sample_weights", "the sample weights", "PyDataset"
)
return PyDatasetAdapter(x, class_weight=class_weight, shuffle=shuffle)
elif is_torch_dataloader(x):
if y is not None:
raise_unsupported_arg("y", "the targets", "torch DataLoader")
if sample_weight is not None:
raise_unsupported_arg(
"sample_weights", "the sample weights", "torch DataLoader"
)
if class_weight is not None:
raise ValueError(
"Argument `class_weight` is not supported for torch "
f"DataLoader inputs. Received: class_weight={class_weight}"
)
return TorchDataLoaderAdapter(x)
# TODO: should we warn or not?
# warnings.warn(
# "`shuffle=True` was passed, but will be ignored since the "
# "data `x` was provided as a torch DataLoader. The DataLoader "
# "is expected to already be shuffled."
# )
elif isinstance(x, types.GeneratorType):
if y is not None:
raise_unsupported_arg("y", "the targets", "PyDataset")
if sample_weight is not None:
raise_unsupported_arg(
"sample_weights", "the sample weights", "PyDataset"
)
if class_weight is not None:
raise ValueError(
"Argument `class_weight` is not supported for Python "
f"generator inputs. Received: class_weight={class_weight}"
)
return GeneratorDataAdapter(x)
# TODO: should we warn or not?
# warnings.warn(
# "`shuffle=True` was passed, but will be ignored since the "
# "data `x` was provided as a generator. The generator "
# "is expected to yield already-shuffled data."
# )
else:
raise ValueError(f"Unrecognized data type: x={x} (of type {type(x)})")
def raise_unsupported_arg(arg_name, arg_description, input_type):
raise ValueError(
f"When providing `x` as a {input_type}, `{arg_name}` "
f"should not be passed. Instead, {arg_description} should "
f"be included as part of the {input_type}."
)
def is_tf_dataset(x):
if hasattr(x, "__class__"):
for parent in x.__class__.__mro__:
if parent.__name__ in (
"DatasetV2",
"DistributedDataset",
) and "tensorflow.python." in str(parent.__module__):
return True
return False
def is_torch_dataloader(x):
if hasattr(x, "__class__"):
for parent in x.__class__.__mro__:
if parent.__name__ == "DataLoader" and "torch.utils.data" in str(
parent.__module__
):
return True
return False
| keras/keras/trainers/data_adapters/__init__.py/0 | {
"file_path": "keras/keras/trainers/data_adapters/__init__.py",
"repo_id": "keras",
"token_count": 2443
} | 156 |
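# --- Illustrative sketch; not part of data_adapters/__init__.py above ---
# `get_data_adapter` dispatches on the type of `x`: plain NumPy arrays are
# wrapped in an ArrayDataAdapter, while passing a separate `y` together with a
# tf.data.Dataset raises, because the dataset is expected to carry its own
# targets. This sketch only exercises the array path.
import numpy as np
from keras.trainers.data_adapters import ArrayDataAdapter
from keras.trainers.data_adapters import get_data_adapter
x = np.ones((8, 4), dtype="float32")
y = np.zeros((8, 1), dtype="float32")
adapter = get_data_adapter(x, y, batch_size=4, shuffle=True)
assert isinstance(adapter, ArrayDataAdapter)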
from unittest import mock
import numpy as np
import pytest
from absl.testing import parameterized
import keras
from keras import backend
from keras import initializers
from keras import layers
from keras import losses
from keras import metrics
from keras import models
from keras import ops
from keras import optimizers
from keras import testing
from keras.callbacks.callback import Callback
from keras.optimizers.rmsprop import RMSprop
from keras.testing.test_utils import named_product
if backend.backend() == "jax":
from keras.backend.jax.trainer import JAXTrainer as Trainer
elif backend.backend() == "torch":
from keras.backend.torch.trainer import TorchTrainer as Trainer
elif backend.backend() == "tensorflow":
from keras.backend.tensorflow.trainer import TensorFlowTrainer as Trainer
elif backend.backend() == "numpy":
from keras.backend.numpy.trainer import NumpyTrainer as Trainer
else:
raise ImportError(f"Invalid backend: {backend.backend()}")
# A model is just a layer mixed in with a Trainer.
class ExampleModel(Trainer, layers.Dense):
def __init__(self, units):
layers.Dense.__init__(
self,
units=units,
use_bias=False,
kernel_initializer=initializers.Ones(),
)
Trainer.__init__(self)
class StructModel(Trainer, layers.Layer):
def __init__(self, units):
layers.Layer.__init__(self)
Trainer.__init__(self)
self.dense_1 = layers.Dense(
units,
use_bias=False,
kernel_initializer=initializers.Ones(),
)
self.dense_2 = layers.Dense(
units,
use_bias=False,
kernel_initializer=initializers.Ones(),
)
def call(self, x):
return {
"y_one": self.dense_1(x["x_one"]),
"y_two": self.dense_2(x["x_two"]),
}
class ListModel(Trainer, layers.Layer):
def __init__(self, units):
layers.Layer.__init__(self)
Trainer.__init__(self)
self.dense_1 = layers.Dense(
units,
use_bias=False,
kernel_initializer=initializers.Ones(),
)
self.dense_2 = layers.Dense(
units,
use_bias=False,
kernel_initializer=initializers.Ones(),
)
def call(self, x):
assert isinstance(x, (list, tuple))
return self.dense_1(x[0]) + self.dense_2(x[1])
class TrainingTestingLayer(Trainer, layers.Layer):
def __init__(self, **kwargs):
layers.Layer.__init__(self, **kwargs)
Trainer.__init__(self)
def call(self, x, training=False):
if training:
return x
return x * 0
def sparse_generator(generator_type):
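    # Yields four (x, y) batches where x is a 2x4 sparse matrix in the format
    # native to the requested generator type ("scipy", "tf" or "jax") and y is
    # a dense 2x3 float32 target.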
if generator_type == "scipy":
import scipy
for _ in range(4):
x = scipy.sparse.random(2, 4, density=0.25, dtype="float32")
y = np.random.rand(2, 3).astype("float32")
yield x, y
elif generator_type == "tf":
import tensorflow as tf
for _ in range(4):
x = tf.random.uniform((2, 4), dtype="float32")
x = tf.sparse.from_dense(tf.nn.dropout(x, 0.25))
y = tf.random.uniform((2, 3), dtype="float32")
yield x, y
elif generator_type == "jax":
import jax
import jax.experimental.sparse as jax_sparse
for _ in range(4):
seed = jax.random.PRNGKey(0)
x = jax_sparse.random_bcoo(seed, (2, 4), dtype="float32", nse=0.25)
y = jax.random.uniform(seed, (2, 3), dtype="float32")
yield x, y
else:
raise ValueError(f"Invalid generator type {generator_type}")
class TestTrainer(testing.TestCase, parameterized.TestCase):
@pytest.mark.requires_trainable_backend
def test_metric_tracking(self):
class ModelWithMetric(Trainer, layers.Dense):
def __init__(self, units):
layers.Dense.__init__(
self,
units=units,
use_bias=False,
kernel_initializer=initializers.Ones(),
)
Trainer.__init__(self)
self.my_metric = metrics.MeanSquaredError(name="my_metric")
model = ModelWithMetric(units=3)
model.compile(
optimizer=optimizers.SGD(),
loss=losses.MeanSquaredError(),
metrics=[metrics.MeanSquaredError()],
)
x = np.ones((2, 4))
y = np.zeros((2, 3))
# Fit the model to make sure compile_metrics are built
model.fit(x, y, batch_size=2, epochs=1)
# The model should have 3 metrics: loss_tracker, compile_metrics,
# my_metric.
self.assertEqual(len(model.metrics), 3)
self.assertEqual(model.metrics[0], model._loss_tracker)
self.assertEqual(model.metrics[1], model.my_metric)
self.assertEqual(model.metrics[2], model._compile_metrics)
# All metrics should have their weights created
self.assertEqual(len(model._loss_tracker.variables), 2)
self.assertEqual(len(model._compile_metrics.variables), 2)
self.assertEqual(len(model.my_metric.variables), 2)
# And those weights are tracked at the model level
self.assertEqual(len(model.metrics_variables), 6)
self.assertLen(model.non_trainable_variables, 0)
# Models with only weighted_metrics should have the same 3 metrics
model_weighted = ModelWithMetric(units=3)
model_weighted.compile(
optimizer=optimizers.SGD(),
loss=losses.MeanSquaredError(),
weighted_metrics=[metrics.MeanSquaredError()],
)
model_weighted.fit(
x,
y,
batch_size=2,
epochs=1,
sample_weight=np.ones(2),
)
self.assertEqual(len(model_weighted.metrics), 3)
@pytest.mark.skipif(
backend.backend() != "torch",
reason="torch backend runs in eager mode for jit_compile='auto'",
)
def test_compile_eager_vs_jit_torch(self):
model = ExampleModel(units=3)
model.compile(jit_compile="auto")
# torch trainer en/disables torch.compile only based on the value of
# model.jit_compile (not model.run_eagerly)
self.assertFalse(model.run_eagerly)
self.assertFalse(model.jit_compile)
@parameterized.named_parameters(
[
("eager", True, False, False),
("graph_fn", False, False, False),
("jit", False, True, False),
("steps_per_epoch_eager", True, False, True),
("steps_per_epoch_graph_fn", False, False, True),
("steps_per_epoch_jit", False, True, True),
]
)
@pytest.mark.requires_trainable_backend
def test_fit_flow(self, run_eagerly, jit_compile, use_steps_per_epoch):
if not run_eagerly and not jit_compile and use_steps_per_epoch:
if backend.backend() == "tensorflow":
self.skipTest(
"TODO: Graph mode without XLA in TF backend leads to "
"unexpected logs, need further checks."
)
model = ExampleModel(units=3)
epochs = 3
batch_size = 20
steps_per_epoch = 7
dataset_size = batch_size * (steps_per_epoch - 2)
x = np.ones((dataset_size, 4))
y = np.zeros((dataset_size, 3))
model.compile(
optimizer=optimizers.SGD(),
loss=losses.MeanSquaredError(),
metrics=[metrics.MeanSquaredError()],
run_eagerly=run_eagerly,
jit_compile=jit_compile,
)
history = model.fit(
x,
y,
batch_size=batch_size,
steps_per_epoch=steps_per_epoch if use_steps_per_epoch else None,
epochs=epochs,
)
history = history.history
self.assertIn("loss", history)
self.assertIn("mean_squared_error", history)
self.assertAllClose(
history["mean_squared_error"],
[14.402393, 10.991339, 8.388159],
atol=6.1051628e-1,
)
@parameterized.named_parameters(
[
("eager", True, False, False),
("graph_fn", False, False, False),
("jit", False, True, False),
("steps_per_epoch_eager", True, False, True),
("steps_per_epoch_graph_fn", False, False, True),
("steps_per_epoch_jit", False, True, True),
]
)
@pytest.mark.requires_trainable_backend
def test_fit_with_val_split(
self, run_eagerly, jit_compile, use_steps_per_epoch
):
if not run_eagerly and not jit_compile and use_steps_per_epoch:
if backend.backend() == "tensorflow":
self.skipTest(
"TODO: Graph mode without XLA in TF backend leads to "
"unexpected logs, need further checks."
)
model = ExampleModel(units=3)
epochs = 3
batch_size = 20
steps_per_epoch = 7
dataset_size = batch_size * (steps_per_epoch - 2)
x = np.ones((dataset_size, 4))
y = np.zeros((dataset_size, 3))
model.compile(
optimizer=optimizers.SGD(),
loss=losses.MeanSquaredError(),
metrics=[metrics.MeanSquaredError()],
run_eagerly=run_eagerly,
jit_compile=jit_compile,
)
history = model.fit(
x,
y,
batch_size=batch_size,
steps_per_epoch=steps_per_epoch if use_steps_per_epoch else None,
epochs=epochs,
validation_split=0.2,
)
history = history.history
self.assertIn("loss", history)
self.assertIn("val_loss", history)
# Test with backend-native tensors.
x = ops.ones((dataset_size, 4))
y = ops.zeros((dataset_size, 3))
history = model.fit(
x,
y,
batch_size=batch_size,
steps_per_epoch=steps_per_epoch if use_steps_per_epoch else None,
epochs=epochs,
validation_split=0.2,
)
history = history.history
self.assertIn("loss", history)
self.assertIn("val_loss", history)
@parameterized.named_parameters(
named_product(
generator_type=["tf", "jax", "scipy"], mode=["eager", "graph"]
)
)
@pytest.mark.skipif(
not backend.SUPPORTS_SPARSE_TENSORS,
reason="Backend does not support sparse tensors.",
)
def test_fit_sparse(self, generator_type, mode):
model = ExampleModel(units=3)
optimizer = optimizers.Adagrad()
model.compile(
optimizer=optimizer,
loss=losses.MeanSquaredError(),
metrics=[metrics.MeanSquaredError()],
run_eagerly=(mode == "eager"),
jit_compile=False,
)
dataset = sparse_generator(generator_type)
sparse_variable_updates = False
def mock_optimizer_assign(variable, value):
nonlocal sparse_variable_updates
if value.__class__.__name__ == "IndexedSlices":
sparse_variable_updates = True
with mock.patch.object(
optimizer, "assign_sub", autospec=True
) as optimizer_assign_sub:
optimizer_assign_sub.side_effect = mock_optimizer_assign
model.fit(dataset)
# JAX does not produce sparse gradients the way we use it.
if backend.backend() != "jax":
# Verify tensors did not get densified along the way.
self.assertTrue(sparse_variable_updates)
@parameterized.named_parameters(
[
("eager", True, False),
("graph_fn", False, False),
("jit", False, True),
]
)
def test_evaluate_flow(self, run_eagerly, jit_compile):
model = ExampleModel(units=3)
x = np.ones((100, 4))
y = np.zeros((100, 3))
batch_size = 16
model.compile(
optimizer=optimizers.SGD(),
loss=losses.MeanSquaredError(),
metrics=[metrics.MeanSquaredError()],
run_eagerly=run_eagerly,
jit_compile=jit_compile,
)
output = model.evaluate(x, y, batch_size=batch_size)
self.assertAllClose(output, [16.0, 16.0])
output = model.evaluate(x, y, batch_size=batch_size, return_dict=True)
self.assertIsInstance(output, dict)
self.assertIn("loss", output)
self.assertIn("mean_squared_error", output)
self.assertAllClose(output["mean_squared_error"], 16.0)
@parameterized.named_parameters(
named_product(
generator_type=["tf", "jax", "scipy"], mode=["eager", "graph"]
)
)
@pytest.mark.skipif(
not backend.SUPPORTS_SPARSE_TENSORS,
reason="Backend does not support sparse tensors.",
)
def test_evaluate_sparse(self, generator_type, mode):
model = ExampleModel(units=3)
model.compile(
optimizer=optimizers.Adagrad(),
loss=losses.MeanSquaredError(),
metrics=[metrics.MeanSquaredError()],
run_eagerly=(mode == "eager"),
jit_compile=False,
)
dataset = sparse_generator(generator_type)
model.evaluate(dataset)
@parameterized.named_parameters(
[
("eager", True, False),
("graph_fn", False, False),
("jit", False, True),
]
)
def test_predict_flow(self, run_eagerly, jit_compile):
# Test basic example
model = ExampleModel(units=3)
model.run_eagerly = run_eagerly
model.jit_compile = jit_compile
x = np.ones((100, 4))
batch_size = 16
outputs = model.predict(x, batch_size=batch_size)
self.assertAllClose(outputs, 4 * np.ones((100, 3)))
@parameterized.named_parameters(
[
("eager", True, False),
("graph_fn", False, False),
("jit", False, True),
]
)
def test_predict_flow_struct(self, run_eagerly, jit_compile):
# Test with input/output structs
model = StructModel(units=3)
model.run_eagerly = run_eagerly
model.jit_compile = jit_compile
x = {
"x_one": np.ones((100, 4)),
"x_two": np.ones((100, 4)),
}
batch_size = 16
outputs = model.predict(x, batch_size=batch_size)
self.assertIsInstance(outputs, dict)
self.assertEqual(len(outputs), 2)
self.assertAllClose(outputs["y_one"], 4 * np.ones((100, 3)))
self.assertAllClose(outputs["y_two"], 4 * np.ones((100, 3)))
@parameterized.named_parameters(
named_product(
generator_type=["tf", "jax", "scipy"], mode=["eager", "graph"]
)
)
@pytest.mark.skipif(
not backend.SUPPORTS_SPARSE_TENSORS,
reason="Backend does not support sparse tensors.",
)
def test_predict_sparse(self, generator_type, mode):
model = ExampleModel(units=3)
model.compile(
optimizer=optimizers.Adagrad(),
loss=losses.MeanSquaredError(),
metrics=[metrics.MeanSquaredError()],
run_eagerly=(mode == "eager"),
jit_compile=False,
)
dataset = sparse_generator(generator_type)
model.predict(dataset)
@pytest.mark.skipif(
backend.backend() != "jax",
reason="Memory optimization is only implemented in JAX",
)
def test_fit_eval_flow_for_jax_model_weights(self):
model = ExampleModel(units=3)
epochs = 3
batch_size = 20
steps_per_epoch = 7
dataset_size = batch_size * (steps_per_epoch - 2)
x = np.ones((dataset_size, 4))
y = np.zeros((dataset_size, 3))
class ModelWeightCheck(Callback):
def __init__(self):
super().__init__()
# Note that we access model via self._model since self.model
# will trigger a sync of the jax training state back to the model.
def on_train_batch_end(self, batch, logs=None):
for v in self._model.trainable_variables:
assert v._value is None
for v in self._model.non_trainable_variables:
assert v._value is None
for v in self._model.optimizer.variables:
assert v._value is None
for v in self._model.metrics_variables:
assert v._value is None
def on_test_batch_end(self, batch, logs=None):
for v in self._model.non_trainable_variables:
assert v._value is None
for v in self._model.metrics_variables:
assert v._value is None
model.compile(
optimizer=optimizers.SGD(),
loss=losses.MeanSquaredError(),
metrics=[metrics.MeanSquaredError()],
)
model.fit(
x,
y,
batch_size=batch_size,
steps_per_epoch=steps_per_epoch,
epochs=epochs,
callbacks=[ModelWeightCheck()],
)
model.evaluate(
x,
y,
batch_size=batch_size,
callbacks=[ModelWeightCheck()],
)
@pytest.mark.requires_trainable_backend
@pytest.mark.skipif(
backend.backend() == "torch",
reason="`steps_per_execution` not implemented for torch yet",
)
def test_steps_per_execution_steps_count(self):
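        # With `steps_per_execution=3` and ceil(100 / 16) = 7 batches, the
        # callback should only see batch indices 0, 3 and 6, and the resulting
        # weights should match those of a model trained with
        # `steps_per_execution=1`.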
class StepCount(Callback):
def __init__(self):
super().__init__()
self.count = 0
self.batches = [0, 3, 6]
def on_batch_begin(self, batch, logs=None):
assert batch == self.batches[self.count]
self.count += 1
x = np.ones((100, 4))
y = np.ones((100, 1))
batch_size = 16
model = ExampleModel(units=1)
model.compile(
loss="mse",
optimizer="adam",
steps_per_execution=3,
jit_compile=True, # TODO: fails in eager?
)
step_count = StepCount()
model.fit(x=x, y=y, batch_size=16, callbacks=[step_count], verbose=0)
self.assertEqual(step_count.count, 3)
model_2 = ExampleModel(units=1)
model_2.compile(loss="mse", optimizer="adam", steps_per_execution=1)
model_2.fit(x=x, y=y, batch_size=batch_size, verbose=0)
self.assertAllClose(model.get_weights(), model_2.get_weights())
self.assertAllClose(
model.predict(x, batch_size=batch_size),
model_2.predict(x, batch_size=batch_size),
)
self.assertAllClose(model.evaluate(x, y), model_2.evaluate(x, y))
@pytest.mark.skipif(
backend.backend() == "torch",
reason="`steps_per_execution` not implemented for torch yet",
)
def test_steps_per_execution_steps_count_without_training(self):
class StepCount(Callback):
def __init__(self):
super().__init__()
self.test_count = 0
self.predict_count = 0
self.batches = [0, 3, 6]
def on_test_batch_begin(self, batch, logs=None):
assert batch == self.batches[self.test_count]
self.test_count += 1
def on_predict_batch_begin(self, batch, logs=None):
assert batch == self.batches[self.predict_count]
self.predict_count += 1
x = np.ones((100, 4))
y = np.ones((100, 1))
batch_size = 16
model = ExampleModel(units=1)
model.compile(loss="mse", steps_per_execution=3)
step_count = StepCount()
model.predict(x, batch_size=batch_size, callbacks=[step_count])
self.assertEqual(step_count.predict_count, 3)
model.evaluate(x, y, batch_size=batch_size, callbacks=[step_count])
self.assertEqual(step_count.test_count, 3)
@pytest.mark.requires_trainable_backend
def test_adds_loss_scaling_optimizer(self):
model = TrainingTestingLayer(dtype="mixed_float16")
model.compile(optimizer="rmsprop", loss="mse")
x = np.ones((128, 1))
y = np.zeros((128, 1))
model.fit(x, y, batch_size=32)
self.assertIsInstance(model.optimizer, optimizers.LossScaleOptimizer)
model = TrainingTestingLayer(dtype="mixed_float16")
model.compile(optimizer="rmsprop", loss="mse", auto_scale_loss=False)
x = np.ones((128, 1))
y = np.zeros((128, 1))
model.fit(x, y, batch_size=32)
self.assertIsInstance(model.optimizer, RMSprop)
model = TrainingTestingLayer(dtype="mixed_bfloat16")
model.compile(optimizer="rmsprop", loss="mse")
x = np.ones((128, 1))
y = np.zeros((128, 1))
model.fit(x, y, batch_size=32)
self.assertIsInstance(model.optimizer, RMSprop)
@pytest.mark.requires_trainable_backend
@pytest.mark.skipif(
backend.backend() == "torch",
reason="half precision unsupported on torch CPU.",
)
def test_loss_scaling_prevents_underflow(self):
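        # Builds a deep stack of sigmoid Dense layers in mixed_float16 so the
        # float16 gradients underflow to zero. Without auto loss scaling the
        # first kernel never updates; with scaling enabled it does.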
class DeepModel(Trainer, layers.Layer):
def __init__(self):
layers.Layer.__init__(self, dtype="mixed_float16")
Trainer.__init__(self)
self.layers = []
for _ in range(15):
# Sigmoid has a small gradient, will eventually underflow.
self.layers.append(
layers.Dense(
1,
use_bias=False,
kernel_initializer="ones",
activation="sigmoid",
dtype="mixed_float16",
)
)
def call(self, x):
for layer in self.layers:
x = layer(x)
return x
loss = losses.MeanSquaredError()
# Blow up any gradient updates, so underflow is obvious.
optimizer = optimizers.SGD(learning_rate=1e9)
model = DeepModel()
model.compile(optimizer, loss=loss, auto_scale_loss=False)
model.fit(np.ones((1, 1)), np.ones((1, 1)), batch_size=1)
first_kernel = model.layers[0].kernel
# Without autoscaling, the first dense will not update.
self.assertEqual(first_kernel, np.ones_like(first_kernel))
# Blow up any gradient updates, so underflow is obvious.
optimizer = optimizers.SGD(learning_rate=1e9)
model = DeepModel()
model.compile(optimizer, loss=loss, auto_scale_loss=True)
model.fit(np.ones((1, 1)), np.ones((1, 1)), batch_size=1)
first_kernel = model.layers[0].kernel
# With autoscaling, the first dense will update.
self.assertNotEqual(first_kernel, np.ones_like(first_kernel))
@pytest.mark.requires_trainable_backend
def test_training_arg(self):
model = TrainingTestingLayer()
model.compile(optimizer="rmsprop", loss="mse")
x = np.ones((128, 1))
y = np.zeros((128, 1))
history = model.fit(x, y, batch_size=32)
self.assertAllClose(history.history["loss"], [1.0])
val_loss = model.evaluate(x, y, batch_size=32)
self.assertAllClose(val_loss, 0.0)
preds = model.predict(x)
self.assertAllClose(preds, np.zeros((128, 1)))
@parameterized.named_parameters(
[
("eager", True, False),
("graph_fn", False, False),
("jit", False, True),
]
)
@pytest.mark.requires_trainable_backend
def test_on_batch_methods(self, run_eagerly, jit_compile):
model = ExampleModel(units=3)
x = np.ones((100, 4))
y = np.zeros((100, 3))
sw = np.arange(100).reshape((100,)).astype("float32") / 50.0
model.compile(
optimizer=optimizers.SGD(),
loss=losses.MeanSquaredError(),
metrics=[metrics.MeanSquaredError()],
run_eagerly=run_eagerly,
jit_compile=jit_compile,
)
logs = model.train_on_batch(x, y)
self.assertIsInstance(logs, list)
self.assertEqual(len(logs), 2)
self.assertAlmostEqual(logs[0], 16.0)
logs = model.train_on_batch(x, y, return_dict=True)
self.assertIsInstance(logs, dict)
self.assertEqual(len(logs), 2)
self.assertAlmostEqual(logs["loss"], 15.579)
logs = model.test_on_batch(x, y)
self.assertIsInstance(logs, list)
self.assertEqual(len(logs), 2)
self.assertAlmostEqual(logs[0], 15.173)
logs = model.test_on_batch(x, y, return_dict=True)
self.assertIsInstance(logs, dict)
self.assertEqual(len(logs), 2)
self.assertAlmostEqual(logs["loss"], 14.97)
output = model.predict_on_batch(x)
self.assertIsInstance(output, np.ndarray)
self.assertAllClose(output[0], np.array([3.789511, 3.789511, 3.789511]))
# With sample weights
logs = model.train_on_batch(x, y, sw)
self.assertAlmostEqual(logs[0], 14.819)
logs = model.test_on_batch(x, y, sw)
self.assertAlmostEqual(logs[0], 14.595)
output = model.predict_on_batch(x)
self.assertAllClose(output[0], np.array([3.689468, 3.689468, 3.689468]))
# With class weights
logs = model.train_on_batch(x, y, class_weight={1: 0.3, 0: 0.2})
self.assertAlmostEqual(logs[0], 12.899)
@parameterized.named_parameters(
[
("eager", True, False),
("graph_fn", False, False),
("jit", False, True),
]
)
def test_on_batch_methods_without_training(self, run_eagerly, jit_compile):
model = ExampleModel(units=3)
x = np.ones((100, 4))
y = np.zeros((100, 3))
model.compile(
loss=losses.MeanSquaredError(),
metrics=[metrics.MeanSquaredError()],
run_eagerly=run_eagerly,
jit_compile=jit_compile,
)
output = model.predict_on_batch(x)
self.assertIsInstance(output, np.ndarray)
self.assertAllClose(output[0], np.array([4.0, 4.0, 4.0]))
logs = model.test_on_batch(x, y)
self.assertIsInstance(logs, list)
self.assertEqual(len(logs), 2)
self.assertAlmostEqual(logs[0], 16.0)
logs = model.test_on_batch(x, y, return_dict=True)
self.assertIsInstance(logs, dict)
self.assertEqual(len(logs), 2)
self.assertAlmostEqual(logs["loss"], 16.0)
def test_nested_input_predict(self):
# https://github.com/keras-team/keras/issues/325
class TupleInputModel(keras.Model):
def call(self, inputs):
a, b = inputs
return a + b
model = TupleInputModel()
x1, x2 = np.random.rand(2, 3, 4)
out = model.predict((x1, x2))
self.assertEqual(out.shape, (3, 4))
class DictInputModel(keras.Model):
def call(self, inputs):
return inputs["a"] + inputs["b"]
model = DictInputModel()
x1, x2 = np.random.rand(2, 3, 4)
out = model.predict({"a": x1, "b": x2})
self.assertEqual(out.shape, (3, 4))
@pytest.mark.requires_trainable_backend
def test_for_eval_epoch_iterator(self):
model = ExampleModel(units=3)
model.compile(
optimizer="adam", loss="mse", metrics=["mean_absolute_error"]
)
x = np.ones((16, 4))
y = np.zeros((16, 3))
x_test = np.ones((16, 4))
y_test = np.zeros((16, 3))
model.fit(
x,
y,
batch_size=4,
validation_data=(x_test, y_test),
)
assert getattr(model, "_eval_epoch_iterator", None) is None
# Try model.fit with reshaped validation_data
# This will throw an exception which is intended
try:
model.fit(
x,
y,
batch_size=4,
validation_data=(
x_test.reshape((-1, 16, 4)),
y_test.reshape((-1, 16, 3)),
),
)
        except Exception:
pass
        # Try model.fit with correct validation_data; this should work.
        # After successful training, `_eval_epoch_iterator` should be None.
model.fit(
x,
y,
batch_size=4,
validation_data=(x_test, y_test),
)
assert getattr(model, "_eval_epoch_iterator", None) is None
@pytest.mark.requires_trainable_backend
def test_callback_methods_keys(self):
class CustomCallback(Callback):
def on_train_begin(self, logs=None):
keys = sorted(list(logs.keys()))
assert keys == []
def on_train_end(self, logs=None):
keys = sorted(list(logs.keys()))
assert keys == [
"loss",
"mean_absolute_error",
"val_loss",
"val_mean_absolute_error",
]
def on_epoch_begin(self, epoch, logs=None):
keys = sorted(list(logs.keys()))
assert keys == []
def on_epoch_end(self, epoch, logs=None):
keys = sorted(list(logs.keys()))
assert keys == [
"loss",
"mean_absolute_error",
"val_loss",
"val_mean_absolute_error",
]
def on_test_begin(self, logs=None):
keys = sorted(list(logs.keys()))
assert keys == []
def on_test_end(self, logs=None):
keys = sorted(list(logs.keys()))
assert keys == ["loss", "mean_absolute_error"]
def on_predict_begin(self, logs=None):
keys = sorted(list(logs.keys()))
assert keys == []
def on_predict_end(self, logs=None):
keys = sorted(list(logs.keys()))
assert keys == []
def on_train_batch_begin(self, batch, logs=None):
keys = sorted(list(logs.keys()))
assert keys == []
def on_train_batch_end(self, batch, logs=None):
keys = sorted(list(logs.keys()))
assert keys == ["loss", "mean_absolute_error"]
def on_test_batch_begin(self, batch, logs=None):
keys = sorted(list(logs.keys()))
assert keys == []
def on_test_batch_end(self, batch, logs=None):
keys = sorted(list(logs.keys()))
assert keys == ["loss", "mean_absolute_error"]
def on_predict_batch_begin(self, batch, logs=None):
keys = sorted(list(logs.keys()))
assert keys == []
def on_predict_batch_end(self, batch, logs=None):
keys = sorted(list(logs.keys()))
assert keys == ["outputs"]
model = ExampleModel(units=3)
model.compile(
optimizer="adam", loss="mse", metrics=["mean_absolute_error"]
)
x = np.ones((16, 4))
y = np.zeros((16, 3))
x_test = np.ones((16, 4))
y_test = np.zeros((16, 3))
model.fit(
x,
y,
callbacks=[CustomCallback()],
batch_size=4,
validation_data=(x_test, y_test),
)
model.evaluate(x_test, y_test, batch_size=4)
model.predict(x_test, batch_size=4)
@pytest.mark.requires_trainable_backend
def test_internal_only_loss(self):
class LossLayer(layers.Layer):
def call(self, x):
self.add_loss(ops.sum(x))
return x
model = keras.Sequential(
[
layers.Dense(2),
LossLayer(),
layers.Dense(1),
]
)
model.compile(optimizer="adam")
x = np.ones((16, 2))
y = np.zeros((16, 1))
model.fit(x, y, batch_size=4)
def get_layer(self):
class ExampleLayer(keras.Layer):
def call(self, x):
return x * 2
return ExampleLayer
def get_model(self):
class ExampleModel(keras.Model):
def call(self, x):
return x * 2
return ExampleModel
def get_functional(self):
ExampleLayer = self.get_layer()
class ExampleFunctional(keras.Functional):
def __init__(self, input_shape=(None,)):
inputs = keras.Input(input_shape)
outputs = ExampleLayer()(inputs)
super().__init__(inputs=inputs, outputs=outputs)
return ExampleFunctional
@parameterized.named_parameters(
[
{
"testcase_name": "model",
"model_class": "get_model",
},
{
"testcase_name": "layer",
"model_class": "get_layer",
},
{
"testcase_name": "functional",
"model_class": "get_functional",
},
]
)
@pytest.mark.requires_trainable_backend
@pytest.mark.skipif(
keras.backend.backend() != "tensorflow",
reason="Only tensorflow supports raggeds",
)
def test_trainer_with_raggeds(self, model_class):
from keras.utils.module_utils import tensorflow as tf
def loss_fn(y, y_pred, sample_weight=None):
return 0
model = getattr(self, model_class)()()
x = tf.ragged.constant([[1], [2, 3]])
# test forward pass
y = model(x)
self.assertEqual(type(y), tf.RaggedTensor)
# test training
if model_class in ["get_model", "get_functional"]:
model.compile(optimizer="adam", loss=loss_fn)
model.fit(x, x)
y = model.predict(x)
self.assertEqual(type(y), tf.RaggedTensor)
# test if everything works with the sequential model
model = keras.Sequential([model])
model.compile(optimizer="adam", loss=loss_fn)
model.fit(x, x)
y = model.predict(x)
self.assertEqual(type(y), tf.RaggedTensor)
def test_predict_dropout(self):
# Test that `predict` with a dropout op
# has nondeterministic behavior across batches.
inputs = layers.Input((20,))
outputs = layers.Dropout(0.5, seed=1337)(inputs, training=True)
model = keras.Model(inputs, outputs)
out1 = model.predict(np.ones((4, 20)), batch_size=2)
self.assertGreater(5, np.sum(np.abs(out1[:2, :] - out1[2:4, :])))
out2 = model.predict_on_batch(np.ones((2, 20)))
out3 = model.predict_on_batch(np.ones((2, 20)))
self.assertGreater(5, np.sum(np.abs(out2 - out3)))
@pytest.mark.requires_trainable_backend
def test_recompile(self):
model = ExampleModel(units=3)
model.compile(
optimizer="sgd", loss="mse", metrics=["mean_squared_error"]
)
history_1 = model.fit(np.ones((3, 2)), np.ones((3, 3))).history
eval_out_1 = model.evaluate(
np.ones((3, 2)), np.ones((3, 3)), return_dict=True
)
model.compile(
optimizer="sgd", loss="mse", metrics=["mean_absolute_error"]
)
history_2 = model.fit(np.ones((3, 2)), np.ones((3, 3))).history
eval_out_2 = model.evaluate(
np.ones((3, 2)), np.ones((3, 3)), return_dict=True
)
self.assertEqual(
sorted(list(history_1.keys())), ["loss", "mean_squared_error"]
)
self.assertEqual(
sorted(list(eval_out_1.keys())), ["loss", "mean_squared_error"]
)
self.assertEqual(
sorted(list(history_2.keys())), ["loss", "mean_absolute_error"]
)
self.assertEqual(
sorted(list(eval_out_2.keys())), ["loss", "mean_absolute_error"]
)
@pytest.mark.requires_trainable_backend
def test_nested_inputs(self):
model = ListModel(units=2)
out = model([np.ones((3, 2)), np.ones((3, 3))])
self.assertEqual(tuple(out.shape), (3, 2))
model.compile(optimizer="sgd", loss="mse", metrics=["mse"])
history = model.fit(
[np.ones((3, 2)), np.ones((3, 3))], np.ones((3, 2))
).history
self.assertAllClose(history["loss"], 16.0)
train_out = model.train_on_batch(
[np.ones((3, 2)), np.ones((3, 3))], np.ones((3, 2))
)
self.assertAllClose(train_out[0], 15.2200)
eval_out = model.evaluate(
[np.ones((3, 2)), np.ones((3, 3))], np.ones((3, 2))
)
self.assertAllClose(eval_out[0], 13.0321)
eval_out = model.test_on_batch(
[np.ones((3, 2)), np.ones((3, 3))], np.ones((3, 2))
)
self.assertAllClose(eval_out[0], 13.0321)
predict_out = model.predict([np.ones((3, 2)), np.ones((3, 3))])
self.assertEqual(predict_out.shape, (3, 2))
predict_out = model.predict_on_batch([np.ones((3, 2)), np.ones((3, 3))])
self.assertEqual(predict_out.shape, (3, 2))
@pytest.mark.requires_trainable_backend
def test_validation_data_infinite_generator(self):
# Test that you can pass an infinite generator to `validation_data`
# arg of fit() as well as a `validation_steps` argument and that
# validation only runs for the correct number of steps.
model = ExampleModel(units=3)
model.compile(optimizer="sgd", loss="mse", metrics=["mse"])
class Recorder(keras.callbacks.Callback):
def __init__(self):
self.train_counter = 0
self.val_counter = 0
def on_train_batch_end(self, *args, **kwargs):
self.train_counter += 1
def on_test_batch_end(self, *args, **kwargs):
self.val_counter += 1
def infinite_gen():
while True:
yield np.ones((2, 2)), np.ones((2, 3))
recorder = Recorder()
model.fit(
infinite_gen(),
validation_data=infinite_gen(),
steps_per_epoch=3,
validation_steps=4,
epochs=1,
shuffle=False,
callbacks=[recorder],
)
self.assertEqual(recorder.train_counter, 3)
self.assertEqual(recorder.val_counter, 4)
@parameterized.named_parameters(
[
("fit", "fit", "training", "train"),
("evaluate", "evaluate", "evaluating", "test"),
("predict", "predict", "predicting", "predict"),
]
)
@pytest.mark.requires_trainable_backend
def test_stop_loop(self, method, method_gerund, on_end_name):
model = ExampleModel(units=3)
model.compile(optimizer="sgd", loss="mse", metrics=["mse"])
class Stopper(keras.callbacks.Callback):
def __init__(self, stop_count):
self.stop_count = stop_count
self.counter = 0
setattr(self, f"on_{on_end_name}_batch_end", self.batch_end)
def batch_end(self, *args, **kwargs):
self.counter += 1
if self.counter == self.stop_count:
setattr(self.model, f"stop_{method_gerund}", True)
def infinite_gen():
while True:
x = np.ones((2, 2))
y = np.ones((2, 3))
yield (x,) if method == "predict" else (x, y)
stop_count = 5
stopper = Stopper(stop_count)
getattr(model, method)(
infinite_gen(),
callbacks=[stopper],
)
self.assertEqual(stopper.counter, stop_count)
@pytest.mark.requires_trainable_backend
def test_constraints_are_applied(self):
model = models.Sequential(
[layers.Dense(2, kernel_constraint="non_neg")]
)
x = np.ones((2, 3))
y = np.ones((2, 2))
model.compile(optimizer="rmsprop", loss="mse")
model.fit(x, y)
self.assertGreaterEqual(
np.min(backend.convert_to_numpy(model.layers[0].kernel)), 0.0
)
@pytest.mark.requires_trainable_backend
def test_rng_updated_during_predict(self):
class TestTimeDropout(layers.Layer):
def __init__(self):
super().__init__()
self.random_generator = keras.random.SeedGenerator()
def call(self, x):
return keras.random.dropout(
x, rate=0.5, seed=self.random_generator
)
inputs = layers.Input((20,))
outputs = TestTimeDropout()(inputs)
model = keras.Model(inputs, outputs)
model.compile(optimizer="rmsprop", loss="mse")
x = np.ones((32, 20))
out_1 = model.predict(x)
out_2 = model.predict(x)
self.assertGreater(np.mean(np.abs(out_1 - out_2)), 0.01)
@pytest.mark.requires_trainable_backend
def test_callbacks_can_update_state_at_batch_boundary(self):
class CounterModel(keras.Model):
def __init__(self):
super().__init__()
self.train_counter = self.add_weight(
shape=(),
initializer="zeros",
)
self.test_counter = self.add_weight(
shape=(),
initializer="zeros",
)
self.predict_counter = self.add_weight(
shape=(),
initializer="zeros",
)
self.dense = layers.Dense(3)
def call(self, x):
return self.dense(x)
class CounterCallback(keras.callbacks.Callback):
def __init__(self):
self.eager_call_counter_train = 0
self.eager_call_counter_test = 0
self.eager_call_counter_predict = 0
def on_train_batch_end(self, *args, **kwargs):
self.model.train_counter.assign_add(1)
self.eager_call_counter_train += 1
def on_test_batch_end(self, *args, **kwargs):
self.model.test_counter.assign_add(1)
self.eager_call_counter_test += 1
def on_predict_batch_end(self, *args, **kwargs):
self.model.predict_counter.assign_add(1)
self.eager_call_counter_predict += 1
model = CounterModel()
model.compile(
optimizer="sgd", loss="mse", metrics=["mse"], run_eagerly=True
)
cbk = CounterCallback()
model.fit(
np.ones((4, 3)),
np.ones((4, 3)),
callbacks=[cbk],
epochs=3,
batch_size=1,
verbose=0,
validation_data=(np.ones((2, 3)), np.ones((2, 3))),
)
self.assertAlmostEqual(cbk.eager_call_counter_train, 12)
self.assertAlmostEqual(model.train_counter.numpy(), 12)
self.assertAlmostEqual(cbk.eager_call_counter_test, 6)
self.assertAlmostEqual(model.test_counter.numpy(), 6)
model.predict(
np.ones((4, 3)),
callbacks=[cbk],
batch_size=1,
)
self.assertAlmostEqual(cbk.eager_call_counter_predict, 4)
self.assertAlmostEqual(model.predict_counter.numpy(), 4)
| keras/keras/trainers/trainer_test.py/0 | {
"file_path": "keras/keras/trainers/trainer_test.py",
"repo_id": "keras",
"token_count": 22208
} | 157 |
"""Script to create (and optionally install) a `.whl` archive for Keras 3.
Usage:
1. Create a `.whl` file in `dist/`:
```
python3 pip_build.py
```
2. Also install the new package immediately after:
```
python3 pip_build.py --install
```
"""
import argparse
import datetime
import glob
import os
import pathlib
import shutil
import namex
# Needed because importing torch after TF causes the runtime to crash
import torch # noqa: F401
package = "keras"
build_directory = "tmp_build_dir"
dist_directory = "dist"
to_copy = ["setup.py", "README.md"]
def ignore_files(_, filenames):
return [f for f in filenames if f.endswith("_test.py")]
def copy_source_to_build_directory(root_path):
# Copy sources (`keras/` directory and setup files) to build
# directory
os.chdir(root_path)
os.mkdir(build_directory)
shutil.copytree(
package, os.path.join(build_directory, package), ignore=ignore_files
)
for fname in to_copy:
shutil.copy(fname, os.path.join(f"{build_directory}", fname))
os.chdir(build_directory)
def run_namex_conversion():
# Restructure the codebase so that source files live in `keras/src`
namex.convert_codebase(package, code_directory="src")
# Generate API __init__.py files in `keras/`
namex.generate_api_files(package, code_directory="src", verbose=True)
def create_legacy_directory():
# Make keras/_tf_keras/ by copying keras/
tf_keras_dirpath_parent = os.path.join(package, "_tf_keras")
tf_keras_dirpath = os.path.join(tf_keras_dirpath_parent, "keras")
os.makedirs(tf_keras_dirpath)
with open(os.path.join(tf_keras_dirpath_parent, "__init__.py"), "w") as f:
f.write("from keras._tf_keras import keras\n")
with open(os.path.join(package, "__init__.py")) as f:
init_file = f.read()
init_file = init_file.replace(
"from keras import _legacy",
"from keras import _tf_keras",
)
with open(os.path.join(package, "__init__.py"), "w") as f:
f.write(init_file)
with open(os.path.join(tf_keras_dirpath, "__init__.py"), "w") as f:
f.write(init_file)
for dirname in os.listdir(package):
dirpath = os.path.join(package, dirname)
if os.path.isdir(dirpath) and dirname not in (
"_legacy",
"_tf_keras",
"src",
):
shutil.copytree(
dirpath,
os.path.join(tf_keras_dirpath, dirname),
ignore=ignore_files,
)
# Copy keras/_legacy/ file contents to keras/_tf_keras/keras
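    # (Illustrative example with a hypothetical file: keras/_legacy/losses.py
    # would be copied to keras/_tf_keras/keras/losses.py, with its
    # "keras._legacy" imports rewritten to "keras._tf_keras.keras" by the
    # loop below.)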
legacy_submodules = [
path[:-3]
for path in os.listdir(os.path.join(package, "src", "legacy"))
if path.endswith(".py")
]
legacy_submodules += [
path
for path in os.listdir(os.path.join(package, "src", "legacy"))
if os.path.isdir(os.path.join(package, "src", "legacy", path))
]
for root, _, fnames in os.walk(os.path.join(package, "_legacy")):
for fname in fnames:
if fname.endswith(".py"):
legacy_fpath = os.path.join(root, fname)
tf_keras_root = root.replace("/_legacy", "/_tf_keras/keras")
core_api_fpath = os.path.join(
root.replace("/_legacy", ""), fname
)
if not os.path.exists(tf_keras_root):
os.makedirs(tf_keras_root)
tf_keras_fpath = os.path.join(tf_keras_root, fname)
with open(legacy_fpath) as f:
legacy_contents = f.read()
legacy_contents = legacy_contents.replace(
"keras._legacy", "keras._tf_keras.keras"
)
if os.path.exists(core_api_fpath):
with open(core_api_fpath) as f:
core_api_contents = f.read()
core_api_contents = core_api_contents.replace(
"from keras import _tf_keras\n", ""
)
for legacy_submodule in legacy_submodules:
core_api_contents = core_api_contents.replace(
f"from keras import {legacy_submodule}\n",
"",
)
core_api_contents = core_api_contents.replace(
f"keras.{legacy_submodule}",
f"keras._tf_keras.keras.{legacy_submodule}",
)
legacy_contents = core_api_contents + "\n" + legacy_contents
with open(tf_keras_fpath, "w") as f:
f.write(legacy_contents)
# Delete keras/_legacy/
shutil.rmtree(os.path.join(package, "_legacy"))
def export_version_string(version, is_nightly=False, rc_index=None):
"""Export Version and Package Name."""
if is_nightly:
date = datetime.datetime.now()
version += f".dev{date.strftime('%Y%m%d%H')}"
# Replaces `name="keras"` string in `setup.py` with `keras-nightly`
with open("setup.py") as f:
setup_contents = f.read()
with open("setup.py", "w") as f:
setup_contents = setup_contents.replace(
'name="keras"', 'name="keras-nightly"'
)
f.write(setup_contents)
elif rc_index is not None:
version += "rc" + str(rc_index)
# Make sure to export the __version__ string
with open(os.path.join(package, "__init__.py")) as f:
init_contents = f.read()
with open(os.path.join(package, "__init__.py"), "w") as f:
f.write(init_contents + "\n\n" + f'__version__ = "{version}"\n')
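# Illustrative examples (hypothetical values): with a base version of "3.0.0",
# `is_nightly=True` yields something like "3.0.0.dev2024010112"
# (".dev" + YYYYMMDDHH), while `rc_index=0` yields "3.0.0rc0".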
def build_and_save_output(root_path, __version__):
# Build the package
os.system("python3 -m build")
# Save the dist files generated by the build process
os.chdir(root_path)
if not os.path.exists(dist_directory):
os.mkdir(dist_directory)
for fpath in glob.glob(
os.path.join(build_directory, dist_directory, "*.*")
):
shutil.copy(fpath, dist_directory)
# Find the .whl file path
whl_path = None
for fname in os.listdir(dist_directory):
if __version__ in fname and fname.endswith(".whl"):
whl_path = os.path.abspath(os.path.join(dist_directory, fname))
if whl_path:
print(f"Build successful. Wheel file available at {whl_path}")
else:
print("Build failed.")
return whl_path
def build(root_path, is_nightly=False, rc_index=None):
if os.path.exists(build_directory):
raise ValueError(f"Directory already exists: {build_directory}")
try:
copy_source_to_build_directory(root_path)
run_namex_conversion()
create_legacy_directory()
from keras.src.version import __version__ # noqa: E402
export_version_string(__version__, is_nightly, rc_index)
return build_and_save_output(root_path, __version__)
finally:
# Clean up: remove the build directory (no longer needed)
shutil.rmtree(build_directory)
def install_whl(whl_fpath):
print(f"Installing wheel file: {whl_fpath}")
os.system(f"pip3 install {whl_fpath} --force-reinstall --no-dependencies")
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--install",
action="store_true",
help="Whether to install the generated wheel file.",
)
parser.add_argument(
"--nightly",
action="store_true",
help="Whether to generate nightly wheel file.",
)
parser.add_argument(
"--rc",
type=int,
help="Specify `[0-9] when generating RC wheels.",
)
args = parser.parse_args()
root_path = pathlib.Path(__file__).parent.resolve()
whl_path = build(root_path, args.nightly, args.rc)
if whl_path and args.install:
install_whl(whl_path)
| keras/pip_build.py/0 | {
"file_path": "keras/pip_build.py",
"repo_id": "keras",
"token_count": 3788
} | 158 |
"""Build the TF-Keras pip package.
The steps are as follows:
0. Run `bazel build` in the TF-Keras root directory to obtain the protobuf Python files.
1. Create a temporary build directory (e.g. `/tmp/keras_build`)
2. Copy the TF-Keras codebase to it (to `/tmp/keras_build/tf_keras/src`)
and rewrite internal imports so that they refer to `tf_keras.src` rather than
just `tf_keras`.
3. Also copy `setup.py` to the build directory.
4. List and import every file in the codebase (in `/tmp/keras_build/tf_keras/src`),
so we can inspect the symbols the codebase contains.
5. Use the annotations left by the `keras_export` decorator to filter the
symbols that should be exported, as well as their export path (default one
and v1 one).
6. Use this information to generate `__init__.py` files in
`/tmp/keras_build/tf_keras/`.
7. Run the setup script to write out build artifacts to `/tmp/keras_build/dist`.
8. Copy the artifacts out. This is what should be uploaded to PyPI.
This script borrows heavily from Namex (https://github.com/fchollet/namex).
Notes:
* This script should be run on the TF-Keras codebase as obtained from GitHub
(OSS-facing), not the Google-internal one. The files are expected to already be
converted to their public form.
* This script only targets Linux x86_64. It could be adapted to macOS
relatively easily by changing requirements.txt and the bazel build script.
* This script should be run from an environment that has all TF-Keras
dependencies installed. Note that their specific version is not important; the
only thing that matters is that we should be able to import the TF-Keras
codebase in its current state (so we can perform step 4). If you install the
dependencies used by the latest TF-nightly you should be good.
"""
import argparse
import datetime
import glob
import importlib
import inspect
import os
import pathlib
import shutil
import subprocess
import sys
import tempfile
PACKAGE_NAME = "tf_keras"
DIST_DIRNAME = "dist"
SRC_DIRNAME = "src"
TMP_BUILD_DIRNAME = "keras_build"
TMP_TEST_DIRNAME = "keras_test"
VERBOSE = True
INIT_FILE_HEADER = """AUTOGENERATED. DO NOT EDIT."""
# These are symbols that have export issues and that we skip for now.
SYMBOLS_TO_SKIP = ["layer_test"]
def copy_keras_codebase(source_dir, target_dir):
disallowed = [
"tools",
"integration_test",
]
def ignore(path, names):
to_ignore = []
for name in names:
if name.endswith("_test.py"):
to_ignore.append(name)
elif name in disallowed:
to_ignore.append(name)
return to_ignore
shutil.copytree(source_dir, target_dir, ignore=ignore)
def convert_keras_imports(src_directory):
def _convert_line(line):
if (
"import tf_keras.protobuf" in line
or "from tf_keras.protobuf" in line
):
return line
# Imports starting from `root_name`.
if line.strip() == f"import {PACKAGE_NAME}":
line = line.replace(
f"import {PACKAGE_NAME}",
f"import {PACKAGE_NAME}.{SRC_DIRNAME} as {PACKAGE_NAME}",
)
return line
line = line.replace(
f"import {PACKAGE_NAME}.",
f"import {PACKAGE_NAME}.{SRC_DIRNAME}.",
)
line = line.replace(
f"from {PACKAGE_NAME}.",
f"from {PACKAGE_NAME}.{SRC_DIRNAME}.",
)
line = line.replace(
f"from {PACKAGE_NAME} import",
f"from {PACKAGE_NAME}.{SRC_DIRNAME} import",
)
# Convert `import tf_keras as keras` into `import tf_keras.src as keras`
line = line.replace(
f"import {PACKAGE_NAME} as ",
f"import {PACKAGE_NAME}.{SRC_DIRNAME} as ",
)
# A way to catch LazyLoader calls. Hacky.
line = line.replace(
'globals(), "tf_keras.', 'globals(), "tf_keras.src.'
)
return line
for root, _, files in os.walk(src_directory):
for fname in files:
if fname.endswith(".py") and not fname.endswith("_pb2.py"):
fpath = os.path.join(root, fname)
if VERBOSE:
print(f"...processing {fpath}")
with open(fpath) as f:
contents = f.read()
lines = contents.split("\n")
in_string = False
new_lines = []
for line in lines:
if line.strip().startswith('"""') or line.strip().endswith(
'"""'
):
if line.count('"') % 2 == 1:
in_string = not in_string
else:
line = _convert_line(line)
new_lines.append(line)
with open(fpath, "w") as f:
f.write("\n".join(new_lines) + "\n")
def generate_keras_api_files(package_directory, src_directory):
if VERBOSE:
print("# Compiling codebase entry points.")
codebase_walk_entry_points = []
for root, _, files in os.walk(src_directory):
for fname in files:
parts = root.split("/")
parts = parts[parts.index("tf_keras") :]
base_entry_point = ".".join(parts)
if fname == "__init__.py":
codebase_walk_entry_points.append(base_entry_point)
elif fname.endswith(".py") and not fname.endswith("_test.py"):
module_name = fname[:-3]
codebase_walk_entry_points.append(
base_entry_point + "." + module_name
)
# Import all Python modules found in the code directory.
modules = []
sys.path.insert(0, os.getcwd())
for entry_point in codebase_walk_entry_points:
if VERBOSE:
print(f"Load entry point: {entry_point}")
mod = importlib.import_module(entry_point, package=".")
modules.append(mod)
if VERBOSE:
print("# Compiling list of symbols to export.")
# Populate list of all symbols to register.
all_symbols = set()
processed = set()
from tensorflow.python.util import tf_decorator
for module in modules:
for name in dir(module):
if name in SYMBOLS_TO_SKIP:
continue
symbol = getattr(module, name)
# Get the real symbol behind any TF decorator
try:
_, symbol = tf_decorator.unwrap(symbol)
except ModuleNotFoundError:
# unwrap will not work on a ModuleSpec (which can't be
# an API symbol anyway)
continue
# Skip if already seen
if id(symbol) in processed:
continue
processed.add(id(symbol))
try:
if not hasattr(symbol, "_keras_api_names"):
continue
except: # noqa: E722
if VERBOSE:
print(
f"[!] Could not inspect symbol '{name}' from {module}."
)
continue
# If the symbol is a non-registered subclass of
# a registered symbol, skip it.
skip = False
def has_same_metadata(a, b):
if (
hasattr(a, "_keras_api_names")
and hasattr(b, "_keras_api_names")
and a._keras_api_names == b._keras_api_names
and a._keras_api_names_v1 == b._keras_api_names_v1
):
return True
return False
try:
classes = inspect.getmro(symbol)
if len(classes) >= 2:
parents = classes[1:]
for p in parents:
if has_same_metadata(p, symbol):
skip = True
except AttributeError:
# getmro will error out on a non-class
# (in which case there can be no subclassing issues).
pass
if not skip:
all_symbols.add(symbol)
# Generate __init__ files content.
if VERBOSE:
print("# Processing export path data for each symbol.")
init_files_content = grab_symbol_metadata(all_symbols, is_v1=False)
init_files_content_v1 = grab_symbol_metadata(all_symbols, is_v1=True)
if VERBOSE:
print("# Writing out API files.")
write_out_api_files(
init_files_content,
target_dir=pathlib.Path(package_directory).parent.resolve(),
)
v1_path = os.path.join(package_directory, "api", "_v1")
v2_path = os.path.join(package_directory, "api", "_v2")
write_out_api_files(
init_files_content,
target_dir=v2_path,
root_offset=["api", "_v2", "keras"],
)
write_out_api_files(
init_files_content_v1,
target_dir=v1_path,
root_offset=["api", "_v1", "keras"],
)
# Add missing __init__ files in api dirs.
with open(os.path.join(package_directory, "api", "__init__.py"), "w"):
pass
with open(os.path.join(v1_path, "__init__.py"), "w"):
pass
with open(os.path.join(v2_path, "__init__.py"), "w"):
pass
def grab_symbol_metadata(all_symbols, is_v1=False):
# init_files_content is a dict mapping a directory path to a list of
# symbol metadata entries to populate the __init__ file for the directory.
# Each entry is a dict with keys 'symbol' and 'export_name'.
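    # (Illustrative shape, using a hypothetical symbol: an export path of
    # "keras.layers.Dense" appends {"symbol": <Dense>, "export_name": "Dense"}
    # to init_files_content["keras/layers"], plus a
    # {"module": "layers", "location": "keras"} entry under
    # init_files_content["keras"] for the parent directory.)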
init_files_content = {}
for symbol in all_symbols:
if VERBOSE:
print(f"...processing symbol '{symbol.__name__}'")
if is_v1:
api_names = symbol._keras_api_names_v1
else:
api_names = symbol._keras_api_names
for export_path in api_names:
export_modules = export_path.split(".")
export_name = export_modules[-1]
parent_path = os.path.join(*export_modules[:-1])
if parent_path not in init_files_content:
init_files_content[parent_path] = []
init_files_content[parent_path].append(
{"symbol": symbol, "export_name": export_name}
)
for i in range(1, len(export_modules[:-1])):
intermediate_path = os.path.join(*export_modules[:i])
if intermediate_path not in init_files_content:
init_files_content[intermediate_path] = []
init_files_content[intermediate_path].append(
{
"module": export_modules[i],
"location": ".".join(export_modules[:i]),
}
)
return init_files_content
def write_out_api_files(init_files_content, target_dir, root_offset=None):
# Go over init_files_content, make dirs,
# create __init__.py file, populate file with public symbol imports.
root_offset = root_offset or []
for path, contents in init_files_content.items():
        # Use `tf_keras.<module>` format.
module_path = path
if path.startswith("keras"):
module_path = "tf_" + module_path
# Change pathnames from keras/layers -> tf_keras/layers unless
        # root_offset is explicitly provided for API generation.
if path.startswith("keras") and not root_offset:
path = "tf_" + path
os.makedirs(os.path.join(target_dir, path), exist_ok=True)
init_file_lines = []
modules_included = set()
for symbol_metadata in contents:
if "symbol" in symbol_metadata:
symbol = symbol_metadata["symbol"]
name = symbol_metadata["export_name"]
if name == symbol.__name__:
init_file_lines.append(
f"from {symbol.__module__} import {symbol.__name__}"
)
else:
init_file_lines.append(
f"from {symbol.__module__} "
f"import {symbol.__name__} as {name}"
)
elif "module" in symbol_metadata:
if symbol_metadata["module"] not in modules_included:
parts = module_path.split("/")
parts = [parts[0]] + root_offset + parts[1:]
module_location = ".".join(parts)
init_file_lines.append(
f"from {module_location} "
f"import {symbol_metadata['module']}"
)
modules_included.add(symbol_metadata["module"])
init_path = os.path.join(target_dir, path, "__init__.py")
if VERBOSE:
print(f"...writing {init_path}")
init_file_lines = sorted(init_file_lines)
with open(init_path, "w") as f:
contents = (
f'"""{INIT_FILE_HEADER}"""\n\n'
+ "\n".join(init_file_lines)
+ "\n"
)
f.write(contents)
def build_pip_package(
keras_root_directory,
build_directory,
package_directory,
src_directory,
dist_directory,
is_nightly=False,
rc=None,
):
# Build TF-Keras with Bazel to get the protobuf .py files
os.chdir(keras_root_directory)
os.system(f"sh {os.path.join('tf_keras', 'tools', 'bazel_build.sh')}")
os.chdir(build_directory)
# Copy sources (`keras/` directory and setup files) to build directory
copy_keras_codebase(
os.path.join(keras_root_directory, "tf_keras"), src_directory
)
shutil.copy(
os.path.join(keras_root_directory, "oss_setup.py"),
os.path.join(build_directory, "setup.py"),
)
# Add blank __init__.py file at package root
# to make the package directory importable.
with open(os.path.join(package_directory, "__init__.py"), "w") as f:
pass
# Move protobuf .py files to package root.
shutil.rmtree(os.path.join(src_directory, "protobuf"))
shutil.move(
os.path.join(keras_root_directory, "bazel-bin", "tf_keras", "protobuf"),
package_directory,
)
# Add blank __init__.py file in protobuf dir.
with open(
os.path.join(package_directory, "protobuf", "__init__.py"), "w"
) as f:
pass
# Convert imports from `tf_keras.xyz` to `tf_keras.src.xyz`.
convert_keras_imports(src_directory)
# Generate API __init__.py files in `tf_keras/`
generate_keras_api_files(package_directory, src_directory)
# Make sure to export the __version__ string
version = getattr(
importlib.import_module("tf_keras.src", package="."), "__version__"
)
if is_nightly:
date = datetime.datetime.now()
version += f".dev{date.strftime('%Y%m%d%H')}"
elif rc:
version += rc
with open(os.path.join(package_directory, "__init__.py")) as f:
init_contents = f.read()
with open(os.path.join(package_directory, "__init__.py"), "w") as f:
f.write(init_contents + "\n\n" + f'__version__ = "{version}"\n')
# Insert {{PACKAGE}} and {{VERSION}} strings in setup.py
if is_nightly:
package = PACKAGE_NAME + "-nightly"
else:
package = PACKAGE_NAME
with open(os.path.join(build_directory, "setup.py")) as f:
setup_contents = f.read()
with open(os.path.join(build_directory, "setup.py"), "w") as f:
setup_contents = setup_contents.replace("{{VERSION}}", version)
setup_contents = setup_contents.replace("{{PACKAGE}}", package)
f.write(setup_contents)
# Build the package
os.system("python3 -m build")
# Save the dist files generated by the build process
saved_filenames = []
for filename in glob.glob(os.path.join(build_directory, "dist", "*.*")):
if VERBOSE:
print(f"Saving build artifact {filename}")
shutil.copy(filename, dist_directory)
saved_filenames.append(filename)
if VERBOSE:
print(f"Saved artifacts to {dist_directory}")
return saved_filenames, version
def test_wheel(wheel_path, expected_version, requirements_path):
test_directory = os.path.join(tempfile.gettempdir(), TMP_TEST_DIRNAME)
os.mkdir(test_directory)
os.chdir(test_directory)
symbols_to_check = [
"tf_keras.layers",
"tf_keras.Input",
"tf_keras.__internal__",
"tf_keras.experimental",
]
checks = ";".join(symbols_to_check)
    # Uninstall `keras-nightly` after installing the requirements; otherwise
    # both packages will register `experimentalOptimizer` and the test will
    # fail. Skip installing deps for `tf_keras`, since TensorFlow is already
    # installed from the requirements file.
script = (
"#!/bin/bash\n"
"virtualenv kenv\n"
f"source {os.path.join('kenv', 'bin', 'activate')}\n"
f"pip3 install -r {requirements_path}\n"
"pip3 uninstall -y keras-nightly\n"
f"pip3 install {wheel_path} --force-reinstall --no-deps\n"
f"python3 -c 'import tf_keras;{checks};print(tf_keras.__version__)'\n"
)
try:
# Check version is correct
output = subprocess.check_output(script.encode(), shell=True)
output = output.decode().rstrip().split("\n")[-1].strip()
if not output == expected_version:
raise ValueError(
"Incorrect version; expected "
f"{expected_version} but received {output}"
)
finally:
shutil.rmtree(test_directory)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--nightly",
action="store_true",
help="Whether this is for the `keras-nightly` package.",
)
parser.add_argument(
"--RC",
type=str,
help="Whether this is for the release candidate.",
)
args = parser.parse_args()
is_nightly = args.nightly
rc = args.RC
build_directory = os.path.join(tempfile.gettempdir(), TMP_BUILD_DIRNAME)
keras_root_directory = pathlib.Path(__file__).parent.resolve()
dist_directory = os.path.join(keras_root_directory, DIST_DIRNAME)
package_directory = os.path.join(build_directory, PACKAGE_NAME)
src_directory = os.path.join(build_directory, PACKAGE_NAME, SRC_DIRNAME)
if VERBOSE:
print(
"Using:\n"
f"build_directory={build_directory}\n"
f"keras_root_directory={keras_root_directory}\n"
f"dist_directory={dist_directory}\n"
f"package_directory={package_directory}\n"
f"src_directory={src_directory}\n"
f"is_nightly={is_nightly}\n"
f"rc={rc}"
)
if os.path.exists(build_directory):
raise ValueError(f"Directory already exists: {build_directory}")
os.mkdir(build_directory)
os.mkdir(package_directory)
if not os.path.exists(dist_directory):
os.mkdir(dist_directory)
try:
saved_filenames, version = build_pip_package(
keras_root_directory,
build_directory,
package_directory,
src_directory,
dist_directory,
is_nightly,
rc,
)
wheel_filename = [f for f in saved_filenames if f.endswith(".whl")][0]
if VERBOSE:
print("Testing wheel artifact.")
test_wheel(
wheel_path=os.path.join(dist_directory, wheel_filename),
expected_version=version,
requirements_path=os.path.join(
keras_root_directory, "requirements.txt"
),
)
if VERBOSE:
print("Test successful.")
finally:
# Clean up: remove the build directory (no longer needed)
if VERBOSE:
print(f"Deleting temp build directory at {build_directory}...")
shutil.rmtree(build_directory)
| tf-keras/pip_build.py/0 | {
"file_path": "tf-keras/pip_build.py",
"repo_id": "tf-keras",
"token_count": 9392
} | 159 |
# Copyright 2021 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# ==============================================================================
"""Keras API compatibility tests.
This test ensures all changes to the public API of TF-Keras are intended.
If this test fails, it means a change has been made to the public API. Backwards
incompatible changes are not allowed. You can run the test with
"--update_goldens" flag set to "True" to update goldens when making changes to
the public TF-Keras python API.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import argparse
import os
import re
import sys
import six
import tensorflow as tf
# isort: off
from google.protobuf import message
from google.protobuf import text_format
from tensorflow.python.lib.io import file_io
from tensorflow.tools.api.lib import api_objects_pb2
from tensorflow.tools.api.lib import (
python_object_to_proto_visitor,
)
from tensorflow.tools.common import public_api
from tensorflow.tools.common import traverse
# FLAGS defined at the bottom:
FLAGS = None
# DEFINE_boolean, update_goldens, default False:
_UPDATE_GOLDENS_HELP = """
Update stored golden files if API is updated. WARNING: All API changes
have to be authorized by TensorFlow leads.
"""
# DEFINE_boolean, verbose_diffs, default True:
_VERBOSE_DIFFS_HELP = """
If set to true, print line by line diffs on all libraries. If set to
false, only print which libraries have differences.
"""
# Initialized with _InitPathConstants function below.
_API_GOLDEN_FOLDER_V1 = None
_API_GOLDEN_FOLDER_V2 = None
def _InitPathConstants():
global _API_GOLDEN_FOLDER_V1
global _API_GOLDEN_FOLDER_V2
root_golden_path_v2 = os.path.join(
tf.compat.v1.resource_loader.get_data_files_path(),
"..",
"golden",
"v2",
"tensorflow.keras.pbtxt",
)
if FLAGS.update_goldens:
root_golden_path_v2 = os.path.realpath(root_golden_path_v2)
# Get API directories based on the root golden file. This way
# we make sure to resolve symbolic links before creating new files.
_API_GOLDEN_FOLDER_V2 = os.path.dirname(root_golden_path_v2)
_API_GOLDEN_FOLDER_V1 = os.path.normpath(
os.path.join(_API_GOLDEN_FOLDER_V2, "..", "v1")
)
_TEST_README_FILE = os.path.join(
tf.compat.v1.resource_loader.get_data_files_path(), "README.txt"
)
_UPDATE_WARNING_FILE = os.path.join(
tf.compat.v1.resource_loader.get_data_files_path(), "API_UPDATE_WARNING.txt"
)
def _KeyToFilePath(key, api_version):
"""From a given key, construct a filepath.
Filepath will be inside golden folder for api_version.
Args:
key: a string used to determine the file path
api_version: a number indicating the tensorflow API version, e.g. 1 or 2.
Returns:
A string of file path to the pbtxt file which describes the public API
"""
def _ReplaceCapsWithDash(matchobj):
match = matchobj.group(0)
return f"-{match.lower()}"
case_insensitive_key = re.sub(
"([A-Z]{1})", _ReplaceCapsWithDash, six.ensure_str(key)
)
api_folder = (
_API_GOLDEN_FOLDER_V2 if api_version == 2 else _API_GOLDEN_FOLDER_V1
)
return os.path.join(api_folder, f"{case_insensitive_key}.pbtxt")
def _FileNameToKey(filename):
"""From a given filename, construct a key we use for api objects."""
def _ReplaceDashWithCaps(matchobj):
match = matchobj.group(0)
return match[1].upper()
base_filename = os.path.basename(filename)
base_filename_without_ext = os.path.splitext(base_filename)[0]
api_object_key = re.sub(
"((-[a-z]){1})",
_ReplaceDashWithCaps,
six.ensure_str(base_filename_without_ext),
)
return api_object_key
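# Illustrative round trip (hypothetical key): _KeyToFilePath("tensorflow.keras.Model", 2)
# produces "<golden_v2_dir>/tensorflow.keras.-model.pbtxt", and _FileNameToKey()
# maps that filename back to the key "tensorflow.keras.Model".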
def _VerifyNoSubclassOfMessageVisitor(path, parent, unused_children):
"""A Visitor that crashes on subclasses of generated proto classes."""
# If the traversed object is a proto Message class
if not (isinstance(parent, type) and issubclass(parent, message.Message)):
return
if parent is message.Message:
return
# Check that it is a direct subclass of Message.
if message.Message not in parent.__bases__:
raise NotImplementedError(
"Object tf.%s is a subclass of a generated proto Message. "
"They are not yet supported by the API tools." % path
)
def _FilterGoldenProtoDict(golden_proto_dict, omit_golden_symbols_map):
"""Filter out golden proto dict symbols that should be omitted."""
if not omit_golden_symbols_map:
return golden_proto_dict
filtered_proto_dict = dict(golden_proto_dict)
for key, symbol_list in six.iteritems(omit_golden_symbols_map):
api_object = api_objects_pb2.TFAPIObject()
api_object.CopyFrom(filtered_proto_dict[key])
filtered_proto_dict[key] = api_object
module_or_class = None
if api_object.HasField("tf_module"):
module_or_class = api_object.tf_module
elif api_object.HasField("tf_class"):
module_or_class = api_object.tf_class
if module_or_class is not None:
for members in (
module_or_class.member,
module_or_class.member_method,
):
filtered_members = [
m for m in members if m.name not in symbol_list
]
# Two steps because protobuf repeated fields disallow slice
# assignment.
del members[:]
members.extend(filtered_members)
return filtered_proto_dict
class ApiCompatibilityTest(tf.test.TestCase):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self._update_golden_warning = file_io.read_file_to_string(
_UPDATE_WARNING_FILE
)
self._test_readme_message = file_io.read_file_to_string(
_TEST_README_FILE
)
def _AssertProtoDictEquals(
self,
expected_dict,
actual_dict,
verbose=False,
update_goldens=False,
additional_missing_object_message="",
api_version=2,
):
"""Diff given dicts of protobufs and report differences a readable way.
Args:
expected_dict: a dict of TFAPIObject protos constructed from golden
files.
          actual_dict: a dict of TFAPIObject protos constructed by reading from
the TF package linked to the test.
verbose: Whether to log the full diffs, or simply report which files
were different.
update_goldens: Whether to update goldens when there are diffs found.
additional_missing_object_message: Message to print when a symbol is
missing.
api_version: TensorFlow API version to test.
"""
diffs = []
verbose_diffs = []
expected_keys = set(expected_dict.keys())
actual_keys = set(actual_dict.keys())
only_in_expected = expected_keys - actual_keys
only_in_actual = actual_keys - expected_keys
all_keys = expected_keys | actual_keys
# This will be populated below.
updated_keys = []
for key in all_keys:
diff_message = ""
verbose_diff_message = ""
# First check if the key is not found in one or the other.
if key in only_in_expected:
diff_message = (
"Object %s expected but not found (removed). %s"
% (key, additional_missing_object_message)
)
verbose_diff_message = diff_message
elif key in only_in_actual:
diff_message = f"New object {key} found (added)."
verbose_diff_message = diff_message
else:
# Do not truncate diff
self.maxDiff = None
# Now we can run an actual proto diff.
try:
self.assertProtoEquals(expected_dict[key], actual_dict[key])
except AssertionError as e:
updated_keys.append(key)
diff_message = f"Change detected in python object: {key}."
verbose_diff_message = str(e)
# All difference cases covered above. If any difference found, add
# to the list.
if diff_message:
diffs.append(diff_message)
verbose_diffs.append(verbose_diff_message)
# If diffs are found, handle them based on flags.
if diffs:
diff_count = len(diffs)
tf.compat.v1.logging.error(self._test_readme_message)
tf.compat.v1.logging.error(
"%d differences found between API and golden.", diff_count
)
if update_goldens:
# Write files if requested.
tf.compat.v1.logging.warning(self._update_golden_warning)
# If the keys are only in expected, some objects are deleted.
# Remove files.
for key in only_in_expected:
filepath = _KeyToFilePath(key, api_version)
tf.io.gfile.remove(filepath)
# If the files are only in actual (current library), these are
# new modules. Write them to files. Also record all updates in
# files.
for key in only_in_actual | set(updated_keys):
filepath = _KeyToFilePath(key, api_version)
file_io.write_string_to_file(
filepath, text_format.MessageToString(actual_dict[key])
)
else:
# Include the actual differences to help debugging.
for d, verbose_d in zip(diffs, verbose_diffs):
tf.compat.v1.logging.error(" %s", d)
tf.compat.v1.logging.error(" %s", verbose_d)
# Fail if we cannot fix the test by updating goldens.
self.fail(
"%d differences found between API and golden." % diff_count
)
else:
tf.compat.v1.logging.info(
"No differences found between API and golden."
)
def _checkBackwardsCompatibility(
self,
root,
golden_file_patterns,
api_version,
additional_private_map=None,
omit_golden_symbols_map=None,
):
# Extract all API stuff.
visitor = python_object_to_proto_visitor.PythonObjectToProtoVisitor(
default_path="tensorflow.keras"
)
public_api_visitor = public_api.PublicAPIVisitor(visitor)
if additional_private_map:
public_api_visitor.private_map.update(additional_private_map)
public_api_visitor.set_root_name("tf.keras")
traverse.traverse(root, public_api_visitor)
proto_dict = visitor.GetProtos()
# Read all golden files.
golden_file_list = tf.compat.v1.gfile.Glob(golden_file_patterns)
def _ReadFileToProto(filename):
"""Read a filename, create a protobuf from its contents."""
ret_val = api_objects_pb2.TFAPIObject()
text_format.Merge(file_io.read_file_to_string(filename), ret_val)
return ret_val
golden_proto_dict = {
_FileNameToKey(filename): _ReadFileToProto(filename)
for filename in golden_file_list
}
golden_proto_dict = _FilterGoldenProtoDict(
golden_proto_dict, omit_golden_symbols_map
)
# Diff them. Do not fail if called with update.
# If the test is run to update goldens, only report diffs but do not
# fail.
self._AssertProtoDictEquals(
golden_proto_dict,
proto_dict,
verbose=FLAGS.verbose_diffs,
update_goldens=FLAGS.update_goldens,
api_version=api_version,
)
def testAPIBackwardsCompatibility(self):
api_version = 1
if hasattr(tf, "_major_api_version") and tf._major_api_version == 2:
api_version = 2
golden_file_patterns = [
os.path.join(
tf.compat.v1.resource_loader.get_root_dir_with_all_resources(),
_KeyToFilePath("*", api_version),
)
]
self._checkBackwardsCompatibility(
tf.keras,
golden_file_patterns,
api_version,
# Skip compat.v1 and compat.v2 since they are validated
# in separate tests.
additional_private_map={"tf.compat": ["v1", "v2"]},
omit_golden_symbols_map={},
)
def testAPIBackwardsCompatibilityV1(self):
api_version = 1
golden_file_patterns = os.path.join(
tf.compat.v1.resource_loader.get_root_dir_with_all_resources(),
_KeyToFilePath("*", api_version),
)
self._checkBackwardsCompatibility(
tf.compat.v1.keras,
golden_file_patterns,
api_version,
additional_private_map={
"tf": ["pywrap_tensorflow"],
"tf.compat": ["v1", "v2"],
},
omit_golden_symbols_map={},
)
def testAPIBackwardsCompatibilityV2(self):
api_version = 2
golden_file_patterns = [
os.path.join(
tf.compat.v1.resource_loader.get_root_dir_with_all_resources(),
_KeyToFilePath("*", api_version),
)
]
self._checkBackwardsCompatibility(
tf.compat.v2.keras,
golden_file_patterns,
api_version,
additional_private_map={"tf.compat": ["v1", "v2"]},
omit_golden_symbols_map={},
)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"--update_goldens", type=bool, default=False, help=_UPDATE_GOLDENS_HELP
)
parser.add_argument(
"--verbose_diffs", type=bool, default=True, help=_VERBOSE_DIFFS_HELP
)
FLAGS, unparsed = parser.parse_known_args()
_InitPathConstants()
# Now update argv, so that unittest library does not get confused.
sys.argv = [sys.argv[0]] + unparsed
tf.test.main()
| tf-keras/tf_keras/api/tests/api_compatibility_test.py/0 | {
"file_path": "tf-keras/tf_keras/api/tests/api_compatibility_test.py",
"repo_id": "tf-keras",
"token_count": 6723
} | 160 |
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for benchmark utitilies."""
import tensorflow.compat.v2 as tf
import tf_keras as keras
from tf_keras.benchmarks import benchmark_util
class BenchmarkUtilTest(tf.test.TestCase):
def test_get_benchmark_name(self):
name = "benchmark_layer_call__Conv2D_small_shape"
expected = ["Conv2D", "small", "shape"]
out = benchmark_util.get_benchmark_name(name)
self.assertAllEqual(out, expected)
def test_generate_benchmark_params_cpu_gpu(self):
adam_opt = keras.optimizers.Adam()
sgd_opt = keras.optimizers.SGD()
params = [
("Adam", adam_opt, 10),
("SGD", sgd_opt, 10),
]
expected = [
("Adam_CPU", adam_opt, 10),
("SGD_CPU", sgd_opt, 10),
("Adam_GPU", adam_opt, 10),
("SGD_GPU", sgd_opt, 10),
]
out = benchmark_util.generate_benchmark_params_cpu_gpu(params)
self.assertAllEqual(out, expected)
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/benchmarks/benchmark_util_test.py/0 | {
"file_path": "tf-keras/tf_keras/benchmarks/benchmark_util_test.py",
"repo_id": "tf-keras",
"token_count": 644
} | 161 |
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Benchmarks on TF-Keras layers."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
import numpy as np
import tensorflow.compat.v2 as tf
import tf_keras as keras
from tf_keras.benchmarks import benchmark_util
from tf_keras.benchmarks.layer_benchmarks import layer_benchmarks_test_base
def _get_metadata(name):
return {
"model_name": "ideal_layers",
"parameters": name[1] + "_shape",
}
def _get_layer_args(layer_cls, layer_args):
# To make benchmark parameters compatible with GPU platform.
if layer_cls is keras.layers.Bidirectional:
return {"layer": keras.layers.LSTM(1)}
return layer_args
def _get_input_data(inputs):
if "input_shape" in inputs:
return tf.ones(inputs["input_shape"])
elif "input" in inputs:
return inputs["input"]
else:
raise ValueError(
"Please specify either `input_shape` or `input`"
"for the benchmark test"
)
def _layer_call_backward(layer, x):
with tf.GradientTape() as tape:
y = layer(x)
loss = tf.reduce_mean(y**2)
_ = tape.gradient(loss, layer.trainable_variables)
CORE_LAYERS = [
(
"Dense_small_shape",
keras.layers.Dense,
{"units": 32, "activation": "relu"},
{"input_shape": (1, 16)},
100,
),
(
"Activation_small_shape",
keras.layers.Activation,
{"activation": "relu"},
{"input_shape": (1, 4)},
100,
),
(
"Embedding_small_shape",
keras.layers.Embedding,
{"input_dim": 1, "output_dim": 1, "input_length": 1},
{"input": np.random.randint(1, size=(1, 1))},
100,
),
(
"Embedding_normal_shape",
keras.layers.Embedding,
{"input_dim": 1000, "output_dim": 64, "input_length": 10},
{"input": np.random.randint(1000, size=(32, 10))},
100,
),
(
"Masking_small_shape",
keras.layers.Masking,
{"mask_value": 1},
{"input_shape": (1, 1)},
100,
),
(
"Lambda_small_shape",
keras.layers.Lambda,
{"function": lambda x: x**2},
{"input_shape": (1, 1)},
100,
),
(
"Flatten_small_shape",
keras.layers.Flatten,
{},
{"input_shape": (1, 1)},
100,
),
]
CONV_LAYERS = [
(
"Conv1D_small_shape",
keras.layers.Conv1D,
{"filters": 1, "kernel_size": 1, "activation": "relu"},
{"input_shape": (1, 1, 1)},
100,
),
(
"Conv2D_small_shape",
keras.layers.Conv2D,
{"filters": 1, "kernel_size": 1, "activation": "relu"},
{"input_shape": (1, 1, 1, 1)},
100,
),
(
"Conv2D_normal_shape",
keras.layers.Conv2D,
{"filters": 1, "kernel_size": 1, "activation": "relu"},
{"input_shape": (64, 28, 28, 3)},
100,
),
(
"Conv3D_small_shape",
keras.layers.Conv3D,
{"filters": 1, "kernel_size": 1, "activation": "relu"},
{"input_shape": (1, 1, 1, 1, 1)},
100,
),
(
"Conv1DTranspose_small_shape",
keras.layers.Conv1DTranspose,
{"filters": 1, "kernel_size": 1, "activation": "relu"},
{"input_shape": (1, 1, 1)},
100,
),
(
"Conv2DTranspose_small_shape",
keras.layers.Conv2DTranspose,
{"filters": 1, "kernel_size": 1, "activation": "relu"},
{"input_shape": (1, 1, 1, 1)},
100,
),
(
"Conv3DTranspose_small_shape",
keras.layers.Conv3DTranspose,
{"filters": 1, "kernel_size": 1, "activation": "relu"},
{"input_shape": (1, 1, 1, 1, 1)},
100,
),
(
"SeparableConv1D_small_shape",
keras.layers.SeparableConv1D,
{"filters": 1, "kernel_size": 1, "activation": "relu"},
{"input_shape": (1, 1, 1)},
100,
),
(
"SeparableConv2D_small_shape",
keras.layers.SeparableConv2D,
{"filters": 1, "kernel_size": 1, "activation": "relu"},
{"input_shape": (1, 1, 1, 1)},
100,
),
(
"DepthwiseConv2D_small_shape",
keras.layers.DepthwiseConv2D,
{"kernel_size": 1, "activation": "relu"},
{"input_shape": (1, 1, 1, 1)},
100,
),
]
RECURRENT_LAYERS = [
(
"LSTM_small_shape",
keras.layers.LSTM,
{"units": 1},
{"input_shape": (1, 1, 1)},
100,
),
(
"LSTM_normal_shape",
keras.layers.LSTM,
{"units": 4},
{"input_shape": (32, 10, 8)},
100,
),
(
"GRU_small_shape",
keras.layers.GRU,
{"units": 1},
{"input_shape": (1, 1, 1)},
100,
),
(
"SimpleRNN_small_shape",
keras.layers.SimpleRNN,
{"units": 1},
{"input_shape": (1, 1, 1)},
100,
),
(
"TimeDistributed_small_shape",
keras.layers.TimeDistributed,
{"layer": keras.layers.Conv2D(1, 1)},
{"input_shape": (1, 1, 1, 1, 1)},
100,
),
(
"Bidirectional_small_shape",
keras.layers.Bidirectional,
{},
{"input_shape": (1, 1, 1)},
100,
),
(
"ConvLSTM2D_small_shape",
keras.layers.ConvLSTM2D,
{"filters": 1, "kernel_size": 1, "activation": "relu"},
{"input_shape": (1, 1, 1, 1, 1)},
100,
),
(
"RNN_small_shape",
keras.layers.RNN,
{"cell": keras.layers.LSTMCell(1)},
{"input_shape": (1, 1, 1)},
100,
),
]
NORMALIZATION_LAYERS = [
(
"BatchNormalization_small_shape",
keras.layers.BatchNormalization,
{"axis": -1},
{"input_shape": (1, 1, 1)},
100,
),
(
"LayerNormalization_small_shape",
keras.layers.LayerNormalization,
{"axis": -1},
{"input_shape": (1, 1, 1)},
100,
),
]
REGULARIZATION_LAYERS = [
(
"Dropout_small_shape",
keras.layers.Dropout,
{"rate": 0.2},
{"input_shape": (1, 1, 1)},
100,
),
(
"SpatialDropout1D_small_shape",
keras.layers.SpatialDropout1D,
{"rate": 0.2},
{"input_shape": (1, 1, 1)},
100,
),
(
"SpatialDropout2D_small_shape",
keras.layers.SpatialDropout2D,
{"rate": 0.2},
{"input_shape": (1, 1, 1, 1)},
100,
),
(
"SpatialDropout3D_small_shape",
keras.layers.SpatialDropout3D,
{"rate": 0.2},
{"input_shape": (1, 1, 1, 1, 1)},
100,
),
(
"GaussianDropout_small_shape",
keras.layers.GaussianDropout,
{"rate": 0.2},
{"input_shape": (1, 1, 1)},
100,
),
(
"GaussianNoise_small_shape",
keras.layers.GaussianNoise,
{"stddev": 0.1},
{"input_shape": (1, 1, 1)},
100,
),
(
"ActivityRegularization_small_shape",
keras.layers.ActivityRegularization,
{"l1": 0.3},
{"input_shape": (1, 1, 1)},
100,
),
(
"AlphaDropout_small_shape",
keras.layers.AlphaDropout,
{"rate": 0.2},
{"input_shape": (1, 1, 1)},
100,
),
]
ATTENTION_LAYERS = [
(
"Attention_small_shape",
keras.layers.Attention,
{"use_scale": False},
{"input": [np.ones((1, 1, 1)), np.ones((1, 1, 1))]},
100,
),
(
"AdditiveAttention_small_shape",
keras.layers.AdditiveAttention,
{"use_scale": True},
{"input": [np.ones((1, 1, 1)), np.ones((1, 1, 1))]},
100,
),
]
POOLING_LAYERS = [
(
"MaxPooling1D_small_shape",
keras.layers.MaxPooling1D,
{"pool_size": 1, "strides": 1},
{"input_shape": (1, 1, 1)},
100,
),
(
"MaxPooling2D_small_shape",
keras.layers.MaxPooling2D,
{"pool_size": 1, "strides": 1},
{"input_shape": (1, 1, 1, 1)},
100,
),
(
"MaxPooling3D_small_shape",
keras.layers.MaxPooling3D,
{"pool_size": 1, "strides": 1},
{"input_shape": (1, 1, 1, 1, 1)},
100,
),
(
"AveragePooling1D_small_shape",
keras.layers.AveragePooling1D,
{"pool_size": 1, "strides": 1},
{"input_shape": (1, 1, 1)},
100,
),
(
"AveragePooling2D_small_shape",
keras.layers.AveragePooling2D,
{"pool_size": 1, "strides": 1},
{"input_shape": (1, 1, 1, 1)},
100,
),
(
"AveragePooling3D_small_shape",
keras.layers.AveragePooling3D,
{"pool_size": 1, "strides": 1},
{"input_shape": (1, 1, 1, 1, 1)},
100,
),
(
"GlobalMaxPooling1D_small_shape",
keras.layers.GlobalMaxPooling1D,
{},
{"input_shape": (1, 1, 1)},
100,
),
(
"GlobalMaxPooling2D_small_shape",
keras.layers.GlobalMaxPooling2D,
{},
{"input_shape": (1, 1, 1, 1)},
100,
),
(
"GlobalMaxPooling3D_small_shape",
keras.layers.GlobalMaxPooling3D,
{},
{"input_shape": (1, 1, 1, 1, 1)},
100,
),
(
"GlobalAveragePooling1D_small_shape",
keras.layers.GlobalAveragePooling1D,
{},
{"input_shape": (1, 1, 1)},
100,
),
(
"GlobalAveragePooling2D_small_shape",
keras.layers.GlobalAveragePooling2D,
{},
{"input_shape": (1, 1, 1, 1)},
100,
),
(
"GlobalAveragePooling3D_small_shape",
keras.layers.GlobalAveragePooling3D,
{},
{"input_shape": (1, 1, 1, 1, 1)},
100,
),
]
class KerasLayerBenchmarks(
layer_benchmarks_test_base.LayerBenchmarksBase,
metaclass=tf.__internal__.test.ParameterizedBenchmark,
):
# The parameter of each layer benchmark is a tuple, and the first one is
# the benchmark name. It must follow the convention of
# "{layer_name}_{small|normal|large}_shape" to make it compatible with
# `self.report_benchmark()` method.
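    # For illustration, the "Dense_small_shape" entry from CORE_LAYERS above is
    # expanded by `generate_benchmark_params_cpu_gpu` into both a
    # "Dense_small_shape_CPU" and a "Dense_small_shape_GPU" parameter set.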
_benchmark_parameters = benchmark_util.generate_benchmark_params_cpu_gpu(
CORE_LAYERS
+ CONV_LAYERS
+ RECURRENT_LAYERS
+ NORMALIZATION_LAYERS
+ REGULARIZATION_LAYERS
        + ATTENTION_LAYERS
+ POOLING_LAYERS
)
def benchmark_layer_call(self, layer_cls, layer_args, inputs, num_iters):
layer = layer_cls(**_get_layer_args(layer_cls, layer_args))
x = _get_input_data(inputs)
fn = functools.partial(layer, x)
name = benchmark_util.get_benchmark_name(self._get_name())
metadata = {"implementation": name[0] + ".layer.call"}
metadata.update(_get_metadata(name))
self.run_report(fn, num_iters, metadata)
def benchmark_layer_call_with_function(
self, layer_cls, layer_args, inputs, num_iters
):
layer = layer_cls(**_get_layer_args(layer_cls, layer_args))
x = _get_input_data(inputs)
layer.call = tf.function(layer.call)
fn = functools.partial(layer, x)
name = benchmark_util.get_benchmark_name(self._get_name())
metadata = {"implementation": name[0] + ".layer.call.function"}
metadata.update(_get_metadata(name))
self.run_report(fn, num_iters, metadata)
def benchmark_layer_call_with_xla(
self, layer_cls, layer_args, inputs, num_iters
):
name = benchmark_util.get_benchmark_name(self._get_name())
# TODO(b/173461426)
if layer_cls is keras.layers.Embedding and name[-1] == "GPU":
return
layer = layer_cls(**_get_layer_args(layer_cls, layer_args))
x = _get_input_data(inputs)
layer.call = tf.function(layer.call, jit_compile=True)
fn = functools.partial(layer, x)
metadata = {"implementation": name[0] + ".layer.call.xla"}
metadata.update(_get_metadata(name))
self.run_report(fn, num_iters, metadata)
def benchmark_layer_call_backward(
self, layer_cls, layer_args, inputs, num_iters
):
layer = layer_cls(**_get_layer_args(layer_cls, layer_args))
x = _get_input_data(inputs)
fn = functools.partial(_layer_call_backward, layer, x)
name = benchmark_util.get_benchmark_name(self._get_name())
metadata = {"implementation": name[0] + ".layer.call.backward"}
metadata.update(_get_metadata(name))
self.run_report(fn, num_iters, metadata)
def benchmark_layer_call_backward_with_function(
self, layer_cls, layer_args, inputs, num_iters
):
layer = layer_cls(**_get_layer_args(layer_cls, layer_args))
x = _get_input_data(inputs)
layer.call = tf.function(layer.call)
fn = functools.partial(_layer_call_backward, layer, x)
name = benchmark_util.get_benchmark_name(self._get_name())
metadata = {"implementation": name[0] + ".layer.call.backward.function"}
metadata.update(_get_metadata(name))
self.run_report(fn, num_iters, metadata)
def benchmark_layer_call_backward_with_xla(
self, layer_cls, layer_args, inputs, num_iters
):
name = benchmark_util.get_benchmark_name(self._get_name())
# TODO(b/153480400)
if layer_cls in [
keras.layers.LSTM,
keras.layers.Bidirectional,
keras.layers.ConvLSTM2D,
keras.layers.GRU,
keras.layers.RNN,
keras.layers.SimpleRNN,
]:
return
# TODO(b/173461426)
if layer_cls is keras.layers.Embedding and name[-1] == "GPU":
return
layer = layer_cls(**_get_layer_args(layer_cls, layer_args))
x = _get_input_data(inputs)
layer.call = tf.function(layer.call, jit_compile=True)
fn = functools.partial(_layer_call_backward, layer, x)
metadata = {"implementation": name[0] + ".layer.call.backward.xla"}
metadata.update(_get_metadata(name))
self.run_report(fn, num_iters, metadata)
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/benchmarks/layer_benchmarks/layer_benchmarks_test.py/0 | {
"file_path": "tf-keras/tf_keras/benchmarks/layer_benchmarks/layer_benchmarks_test.py",
"repo_id": "tf-keras",
"token_count": 7585
} | 162 |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for distributed_file_utils."""
import os
import tensorflow.compat.v2 as tf
from tf_keras.distribute import distributed_file_utils
class DistributedFileUtilsTest(tf.test.TestCase):
class MockedExtended:
pass
class MockedChiefStrategy:
def __init__(self):
self.extended = DistributedFileUtilsTest.MockedExtended()
self.extended._in_multi_worker_mode = lambda: True
self.extended.should_checkpoint = True
class MockedWorkerStrategy:
def __init__(self):
self.extended = DistributedFileUtilsTest.MockedExtended()
self.extended._in_multi_worker_mode = lambda: True
self.extended.should_checkpoint = False
self.extended._task_id = 3
class MockedSingleWorkerStrategy:
def __init__(self):
self.extended = DistributedFileUtilsTest.MockedExtended()
self.extended._in_multi_worker_mode = lambda: False
def _write_dummy_file(self, file_to_write):
with open(file_to_write, "w") as f:
f.write("foo bar")
def testChiefWriteDirAndFilePath(self):
dirpath = self.get_temp_dir()
filepath = os.path.join(dirpath, "foo.bar")
strategy = DistributedFileUtilsTest.MockedChiefStrategy()
self.assertEqual(
distributed_file_utils.write_filepath(filepath, strategy), filepath
)
self.assertEqual(
distributed_file_utils.write_dirpath(dirpath, strategy), dirpath
)
def testWorkerWriteDirAndFilePath(self):
dirpath = self.get_temp_dir()
filepath = os.path.join(dirpath, "foo.bar")
strategy = DistributedFileUtilsTest.MockedWorkerStrategy()
self.assertEqual(
distributed_file_utils.write_filepath(filepath, strategy),
os.path.join(dirpath, "workertemp_3", "foo.bar"),
)
self.assertEqual(
distributed_file_utils.write_dirpath(dirpath, strategy),
os.path.join(dirpath, "workertemp_3"),
)
def testChiefDoesNotRemoveDirAndFilePath(self):
temp_dir = self.get_temp_dir()
strategy = DistributedFileUtilsTest.MockedChiefStrategy()
dir_to_write = distributed_file_utils.write_dirpath(temp_dir, strategy)
file_to_write = os.path.join(dir_to_write, "tmp")
self.assertFalse(os.path.exists(file_to_write))
self._write_dummy_file(file_to_write)
self.assertTrue(os.path.exists(file_to_write))
distributed_file_utils.remove_temp_dir_with_filepath(
file_to_write, strategy
)
self.assertTrue(os.path.exists(file_to_write))
def testWorkerDoesRemoveFilePath(self):
temp_dir = self.get_temp_dir()
strategy = DistributedFileUtilsTest.MockedWorkerStrategy()
dir_to_write = distributed_file_utils.write_dirpath(temp_dir, strategy)
file_to_write = os.path.join(dir_to_write, "tmp")
self.assertFalse(os.path.exists(file_to_write))
self._write_dummy_file(file_to_write)
self.assertTrue(os.path.exists(file_to_write))
distributed_file_utils.remove_temp_dir_with_filepath(
file_to_write, strategy
)
self.assertFalse(os.path.exists(file_to_write))
def testWorkerDoesRemoveDirPath(self):
temp_dir = self.get_temp_dir()
strategy = DistributedFileUtilsTest.MockedWorkerStrategy()
dir_to_write = distributed_file_utils.write_dirpath(temp_dir, strategy)
file_to_write = os.path.join(dir_to_write, "tmp")
self.assertFalse(os.path.exists(file_to_write))
self._write_dummy_file(file_to_write)
self.assertTrue(os.path.exists(file_to_write))
distributed_file_utils.remove_temp_dirpath(temp_dir, strategy)
self.assertFalse(os.path.exists(file_to_write))
self.assertFalse(os.path.exists(os.path.dirname(file_to_write)))
def testMultipleRemoveOrigDirPathIsFine(self):
temp_dir = self.get_temp_dir()
strategy = DistributedFileUtilsTest.MockedWorkerStrategy()
dir_to_write = distributed_file_utils.write_dirpath(temp_dir, strategy)
file_to_write = os.path.join(dir_to_write, "tmp")
self._write_dummy_file(file_to_write)
distributed_file_utils.remove_temp_dirpath(temp_dir, strategy)
distributed_file_utils.remove_temp_dirpath(temp_dir, strategy)
distributed_file_utils.remove_temp_dirpath(temp_dir, strategy)
def testMultipleRemoveDirToWritePathIsFine(self):
temp_dir = self.get_temp_dir()
strategy = DistributedFileUtilsTest.MockedWorkerStrategy()
dir_to_write = distributed_file_utils.write_dirpath(temp_dir, strategy)
file_to_write = os.path.join(dir_to_write, "tmp")
self._write_dummy_file(file_to_write)
distributed_file_utils.remove_temp_dirpath(dir_to_write, strategy)
distributed_file_utils.remove_temp_dirpath(dir_to_write, strategy)
distributed_file_utils.remove_temp_dirpath(dir_to_write, strategy)
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/distribute/distributed_file_utils_test.py/0 | {
"file_path": "tf-keras/tf_keras/distribute/distributed_file_utils_test.py",
"repo_id": "tf-keras",
"token_count": 2365
} | 163 |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for running legacy optimizer code with DistributionStrategy."""
import numpy
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
from tf_keras.distribute import optimizer_combinations
from tf_keras.distribute.test_example import batchnorm_example
from tf_keras.distribute.test_example import minimize_loss_example
from tf_keras.layers import core
from tf_keras.optimizers.legacy import optimizer_v2
VAR_MAP_V1 = {
"GradientDescent": ("dense/kernel", "dense/bias"),
"Adagrad": (
"dense/kernel/Adagrad",
"dense/kernel",
"dense/bias/Adagrad",
"dense/bias",
),
"Ftrl": (
"dense/kernel/Ftrl",
"dense/kernel",
"dense/bias/Ftrl",
"dense/bias",
"dense/kernel/Ftrl_1",
"dense/bias/Ftrl_1",
),
"RMSProp": (
"dense/kernel",
"dense/bias/RMSProp",
"dense/bias/RMSProp_1",
"dense/bias",
"dense/kernel/RMSProp_1",
"dense/kernel/RMSProp",
),
}
VAR_MAP_V2 = {
"SGD": (
"dense/bias",
"SGD/learning_rate",
"SGD/decay",
"SGD/iter",
"dense/kernel",
"SGD/momentum",
),
"Adagrad": (
"Adagrad/iter",
"dense/bias",
"dense/kernel",
"Adagrad/learning_rate",
"Adagrad/decay",
"Adagrad/dense/kernel/accumulator",
"Adagrad/dense/bias/accumulator",
),
}
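# The maps above list the variables each optimizer is expected to create for a
# single Dense layer: the layer's own kernel/bias, any per-variable slots
# (e.g. Adagrad accumulators, RMSProp moving averages) and, for the v2
# optimizers, scalar hyperparameter/iteration variables.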
class MinimizeLossStepTest(tf.test.TestCase, parameterized.TestCase):
def _get_iterator(self, strategy, input_fn):
iterator = strategy.make_input_fn_iterator(lambda _: input_fn())
self.evaluate(iterator.initializer)
return iterator
@tf.__internal__.distribute.combinations.generate(
tf.__internal__.test.combinations.times(
optimizer_combinations.distributions_and_v1_optimizers(),
tf.__internal__.test.combinations.combine(
mode=["graph"], use_callable_loss=[True, False]
)
+ tf.__internal__.test.combinations.combine(
mode=["eager"], use_callable_loss=[True]
),
)
+ tf.__internal__.test.combinations.times(
optimizer_combinations.distributions_and_v2_optimizers(),
tf.__internal__.test.combinations.combine(
mode=["graph", "eager"], use_callable_loss=[True]
),
)
+ tf.__internal__.test.combinations.combine(
distribution=[tf.__internal__.distribute.combinations.tpu_strategy],
optimizer_fn=optimizer_combinations.optimizers_v2,
mode=["graph"],
use_callable_loss=[True],
)
+ tf.__internal__.test.combinations.combine(
distribution=[tf.__internal__.distribute.combinations.tpu_strategy],
optimizer_fn=optimizer_combinations.optimizers_v1,
mode=["graph"],
use_callable_loss=[True, False],
)
)
def testTrainNetwork(self, distribution, optimizer_fn, use_callable_loss):
with distribution.scope():
optimizer = optimizer_fn()
model_fn, dataset_fn, layer = minimize_loss_example(
optimizer, use_bias=True, use_callable_loss=use_callable_loss
)
def step_fn(ctx, inputs):
del ctx # Unused
return distribution.group(
distribution.extended.call_for_each_replica(
model_fn, args=(inputs,)
)
)
iterator = self._get_iterator(distribution, dataset_fn)
def run_step():
return distribution.extended.experimental_run_steps_on_iterator(
step_fn, iterator, iterations=2
).run_op
if not tf.executing_eagerly():
with self.cached_session() as sess:
run_step = sess.make_callable(run_step())
self.evaluate(tf.compat.v1.global_variables_initializer())
weights, biases = [], []
for _ in range(5):
run_step()
weights.append(self.evaluate(layer.kernel))
biases.append(self.evaluate(layer.bias))
error = abs(
numpy.add(numpy.squeeze(weights), numpy.squeeze(biases)) - 1
)
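            # minimize_loss_example presumably fits a target whose optimum
            # satisfies kernel + bias == 1, so |w + b - 1| should shrink (or
            # at least not grow) as training progresses.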
is_not_increasing = all(y <= x for x, y in zip(error, error[1:]))
self.assertTrue(is_not_increasing)
@tf.__internal__.distribute.combinations.generate(
tf.__internal__.test.combinations.times(
optimizer_combinations.distributions_and_v1_optimizers(),
tf.__internal__.test.combinations.combine(
mode=["graph"], use_callable_loss=[True, False]
)
+ tf.__internal__.test.combinations.combine(
mode=["eager"], use_callable_loss=[True]
),
)
+ tf.__internal__.test.combinations.times(
optimizer_combinations.distributions_and_v2_optimizers(),
tf.__internal__.test.combinations.combine(
mode=["graph", "eager"], use_callable_loss=[True]
),
)
)
def testTrainNetworkByCallForEachReplica(
self, distribution, optimizer_fn, use_callable_loss
):
with distribution.scope():
optimizer = optimizer_fn()
model_fn, dataset_fn, layer = minimize_loss_example(
optimizer, use_bias=True, use_callable_loss=use_callable_loss
)
iterator = self._get_iterator(distribution, dataset_fn)
def run_step():
return distribution.group(
distribution.extended.call_for_each_replica(
model_fn, args=(iterator.get_next(),)
)
)
if not tf.executing_eagerly():
with self.cached_session() as sess:
run_step = sess.make_callable(run_step())
self.evaluate(tf.compat.v1.global_variables_initializer())
weights, biases = [], []
for _ in range(10):
run_step()
weights.append(self.evaluate(layer.kernel))
biases.append(self.evaluate(layer.bias))
error = abs(
numpy.add(numpy.squeeze(weights), numpy.squeeze(biases)) - 1
)
is_not_increasing = all(y <= x for x, y in zip(error, error[1:]))
self.assertTrue(is_not_increasing)
@tf.__internal__.distribute.combinations.generate(
tf.__internal__.test.combinations.times(
optimizer_combinations.distributions_and_v1_and_v2_optimizers(),
tf.__internal__.test.combinations.combine(mode=["graph", "eager"]),
)
+ tf.__internal__.test.combinations.combine(
distribution=[tf.__internal__.distribute.combinations.tpu_strategy],
optimizer_fn=optimizer_combinations.optimizers_v1_and_v2,
mode=["graph"],
)
)
def testOptimizerInsideModelFn(self, distribution, optimizer_fn):
if (
not tf.executing_eagerly()
and tf.compat.v1.control_flow_v2_enabled()
):
self.skipTest("b/138751864")
created_variables = []
trainable_variables = []
def appending_creator(next_creator, **kwargs):
v = next_creator(**kwargs)
# Skip the StateVar created in the tf.random.Generator, which is
# used by keras initializers.
if "StateVar" in v.name:
return v
created_variables.append(v.name)
if "trainable" in kwargs and kwargs["trainable"]:
trainable_variables.append(v.name)
return v
# Creator scope needs to be set before it's used inside
# `distribution.scope`.
with tf.variable_creator_scope(appending_creator), distribution.scope():
optimizer = optimizer_fn()
model_fn, dataset_fn, _ = minimize_loss_example(
optimizer, use_bias=True, use_callable_loss=True
)
def step_fn(ctx, inputs):
del ctx # Unused
return distribution.group(
distribution.extended.call_for_each_replica(
model_fn, args=(inputs,)
)
)
iterator = self._get_iterator(distribution, dataset_fn)
def run_step():
return distribution.extended.experimental_run_steps_on_iterator(
step_fn, iterator, iterations=1
).run_op
if not tf.executing_eagerly():
with self.cached_session() as sess:
run_step = sess.make_callable(run_step())
self.evaluate(tf.compat.v1.global_variables_initializer())
run_step()
def get_expected_variables(num_parameter_devices):
name = optimizer._name
if isinstance(optimizer, optimizer_v2.OptimizerV2):
variables = VAR_MAP_V2[name]
else:
variables = VAR_MAP_V1[name]
extended_variables = [
v + f"/replica_{replica}"
for v in variables
for replica in range(1, num_parameter_devices)
]
variables = list(variables) + extended_variables
return set(v + ":0" for v in variables)
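        # For example, with the v2 SGD optimizer and two parameter devices
        # this returns {"dense/kernel:0", "dense/kernel/replica_1:0",
        # "SGD/iter:0", "SGD/iter/replica_1:0", ...}.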
self.assertEqual(
get_expected_variables(
len(distribution.extended.parameter_devices)
),
set(created_variables),
)
@tf.__internal__.distribute.combinations.generate(
tf.__internal__.test.combinations.times(
tf.__internal__.test.combinations.combine(
momentum=[0.8, 0.9, 0.99], renorm=[False, True]
),
tf.__internal__.test.combinations.times(
optimizer_combinations.distributions_and_v1_and_v2_optimizers(),
tf.__internal__.test.combinations.combine(
mode=["graph", "eager"],
# TODO(isaprykin): Allow False here. Currently subsequent
# replicas will re-execute UPDATE_OPS of previous replicas.
update_ops_in_cross_replica_mode=[True],
),
)
+ tf.__internal__.test.combinations.combine(
distribution=[
tf.__internal__.distribute.combinations.tpu_strategy
],
optimizer_fn=optimizer_combinations.optimizers_v1_and_v2,
mode=["graph"],
update_ops_in_cross_replica_mode=[False],
),
)
)
def testTrainNetworkWithBatchNorm(
self,
distribution,
optimizer_fn,
momentum,
renorm,
update_ops_in_cross_replica_mode,
):
"""Verifies that moving mean updates are reduced across replicas."""
with distribution.scope():
num_replicas = distribution.num_replicas_in_sync
model_fn, dataset_fn, batchnorm = batchnorm_example(
optimizer_fn,
batch_per_epoch=num_replicas,
momentum=momentum,
renorm=renorm,
update_ops_in_replica_mode=not update_ops_in_cross_replica_mode,
)
def step_fn(ctx, inputs):
del ctx # Unused
fetches = distribution.experimental_local_results(
distribution.extended.call_for_each_replica(
model_fn, args=(inputs,)
)
)
if update_ops_in_cross_replica_mode:
fetches += tuple(
tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.UPDATE_OPS
)
)
return tf.group(fetches)
iterator = self._get_iterator(distribution, dataset_fn)
def run_step():
return distribution.extended.experimental_run_steps_on_iterator(
step_fn, iterator, iterations=1
).run_op
if not tf.executing_eagerly():
with self.cached_session() as sess:
run_step = sess.make_callable(run_step())
self.evaluate(tf.compat.v1.global_variables_initializer())
expected_moving_means = [0.0] * 8
def averaged_batch_mean(i):
                # Each batch has shape [16, 8], where the ith element of the
                # jth row is (8 * j + i + replica_id * 100), so the batch mean
                # of feature i on each replica is (60 + i + replica_id * 100).
                # Averaging over all replicas gives:
return 60.0 + i + (num_replicas - 1.0) / 2.0 * 100.0
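            # E.g. with a single replica this is just the local batch mean
            # 60 + i; with two replicas in sync it becomes 60 + i + 50.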
for _ in range(10):
run_step()
moving_means = self.evaluate(batchnorm.moving_mean)
# We make sure that the moving_mean is updated as if the sample
# mean is calculated over all replicas.
for i, expected_moving_mean in enumerate(expected_moving_means):
expected_moving_means[i] -= (
expected_moving_mean - averaged_batch_mean(i)
) * (1.0 - momentum)
self.assertNear(
expected_moving_means[i], moving_means[i], 0.0001
)
@tf.__internal__.distribute.combinations.generate(
tf.__internal__.test.combinations.times(
tf.__internal__.test.combinations.combine(
loss_reduction=[
tf.compat.v1.losses.Reduction.SUM,
tf.compat.v1.losses.Reduction.MEAN,
tf.compat.v1.losses.Reduction.SUM_OVER_BATCH_SIZE,
tf.compat.v1.losses.Reduction.SUM_OVER_NONZERO_WEIGHTS,
]
),
tf.__internal__.test.combinations.times(
tf.__internal__.test.combinations.combine(
distribution=[
tf.__internal__.distribute.combinations.one_device_strategy, # noqa: E501
tf.__internal__.distribute.combinations.mirrored_strategy_with_gpu_and_cpu, # noqa: E501
tf.__internal__.distribute.combinations.mirrored_strategy_with_two_gpus, # noqa: E501
tf.__internal__.distribute.combinations.mirrored_strategy_with_two_gpus_no_merge_call, # noqa: E501
]
),
tf.__internal__.test.combinations.times(
tf.__internal__.test.combinations.combine(
optimizer_fn=optimizer_combinations.gradient_descent_optimizer_v1_fn # noqa: E501
),
tf.__internal__.test.combinations.combine(
mode=["graph"], use_callable_loss=[True, False]
)
+ tf.__internal__.test.combinations.combine(
mode=["eager"], use_callable_loss=[True]
),
)
+ tf.__internal__.test.combinations.times(
tf.__internal__.test.combinations.combine(
optimizer_fn=optimizer_combinations.gradient_descent_optimizer_keras_v2_fn # noqa: E501
),
tf.__internal__.test.combinations.combine(
mode=["graph", "eager"], use_callable_loss=[True]
),
),
)
+ tf.__internal__.test.combinations.combine(
distribution=[
tf.__internal__.distribute.combinations.tpu_strategy
],
optimizer_fn=optimizer_combinations.gradient_descent_optimizer_v1_fn, # noqa: E501
mode=["graph"],
use_callable_loss=[True, False],
)
+ tf.__internal__.test.combinations.combine(
distribution=[
tf.__internal__.distribute.combinations.tpu_strategy
],
optimizer_fn=optimizer_combinations.gradient_descent_optimizer_keras_v2_fn, # noqa: E501
mode=["graph"],
use_callable_loss=[True],
),
)
)
def testMeanVsSum(
self, distribution, optimizer_fn, loss_reduction, use_callable_loss
):
with distribution.scope():
all_vars = []
def model_fn(inputs):
x, y = inputs
w = tf.compat.v1.get_variable("w", initializer=[[2.0]])
all_vars.append(w)
def loss_fn():
# Use fixed initialization to make the steps deterministic.
predict = tf.matmul(x, w)
loss = tf.compat.v1.losses.mean_squared_error(
y, predict, reduction=loss_reduction
)
if loss_reduction == tf.compat.v1.losses.Reduction.SUM:
return loss
return loss / distribution.num_replicas_in_sync
optimizer = (
optimizer_fn()
                )  # GradientDescent with 0.001 learning rate
if isinstance(optimizer, optimizer_v2.OptimizerV2):
return optimizer.minimize(loss_fn, [w])
else:
if use_callable_loss:
return optimizer.minimize(loss_fn)
else:
return optimizer.minimize(loss_fn())
def dataset_fn():
features = tf.data.Dataset.from_tensors([[2.0], [7.0]])
labels = tf.data.Dataset.from_tensors([[6.0], [21.0]])
return tf.data.Dataset.zip((features, labels)).repeat()
def step_fn(ctx, inputs):
del ctx # Unused
return distribution.group(
distribution.extended.call_for_each_replica(
model_fn, args=(inputs,)
)
)
iterator = self._get_iterator(distribution, dataset_fn)
def run_step():
return distribution.extended.experimental_run_steps_on_iterator(
step_fn, iterator, iterations=1
).run_op
if not tf.executing_eagerly():
with self.cached_session() as sess:
run_step = sess.make_callable(run_step())
self.evaluate(tf.compat.v1.global_variables_initializer())
run_step()
v = all_vars[0]
self.assertTrue(all(v is vi for vi in all_vars[1:]))
weight = numpy.squeeze(self.evaluate(v))
# Our model is:
# predict = x * w
# loss = (predict - y)^2
# dloss/dpredict = 2*(predict - y)
# dloss/dw = 2 * x^T @ (predict - y)
# For our batch size of 2, assuming sum loss reduction:
# x = [2, 7]
# y = [6, 21]
# w_initial = 2
# predict = [4, 14]
# predict - y = [-2, -7]
# dloss/dw = 2 <[2, 7], [-2, -7]> = - 2(4 + 49) = -106
            # So unreplicated, the update to w with lr=0.001 is
            # -0.001 * -106 = 0.106 with sum loss reduction, or 0.053 with
            # mean.
if loss_reduction == tf.compat.v1.losses.Reduction.SUM:
# Note that the "distribution.num_replicas_in_sync" factor will
# go away once we split the input across replicas, instead of
# pulling a complete batch of input per replica.
self.assertNear(
weight,
2 + 0.106 * distribution.num_replicas_in_sync,
0.0001,
)
else:
# One of the mean loss reductions.
self.assertNear(weight, 2 + 0.053, 0.0001)
@tf.__internal__.distribute.combinations.generate(
tf.__internal__.test.combinations.times(
optimizer_combinations.distributions_and_v1_and_v2_optimizers(),
tf.__internal__.test.combinations.combine(mode=["graph", "eager"]),
tf.__internal__.test.combinations.combine(is_tpu=[False]),
)
+ tf.__internal__.test.combinations.combine(
distribution=[tf.__internal__.distribute.combinations.tpu_strategy],
optimizer_fn=optimizer_combinations.optimizers_v1_and_v2,
mode=["graph"],
is_tpu=[True],
)
)
def testRunStepsWithOutputContext(self, distribution, optimizer_fn, is_tpu):
with distribution.scope():
def dataset_fn():
dataset = tf.data.Dataset.from_tensors([[1.0]]).repeat()
# TODO(priyag): batch with drop_remainder=True causes shapes to
# be fully defined for TPU. Remove this when XLA supports
# dynamic shapes.
return dataset.batch(batch_size=1, drop_remainder=True)
optimizer = optimizer_fn()
layer = core.Dense(1, use_bias=True)
key1 = "foo"
value1 = "bar"
def model_fn(output_context, x):
"""A very simple model written by the user."""
def loss_fn():
y = tf.reshape(layer(x), []) - tf.constant(1.0)
return y * y
if isinstance(optimizer, optimizer_v2.OptimizerV2):
train_op = optimizer.minimize(
loss_fn, lambda: layer.trainable_variables
)
else:
train_op = optimizer.minimize(loss_fn)
loss = loss_fn()
output_context.set_last_step_output(
name="replica_loss_reduced",
output=loss,
reduce_op=tf.distribute.ReduceOp.MEAN,
)
output_context.set_non_tensor_output(key1, value1)
return (train_op, loss)
def step_fn(output_context, inputs):
(train_op, loss) = distribution.extended.call_for_each_replica(
model_fn, args=(output_context, inputs)
)
output_context.set_last_step_output(
name="cross_replica_loss_reduced",
output=loss,
reduce_op=tf.distribute.ReduceOp.MEAN,
)
output_context.set_last_step_output(
name="cross_replica_loss_not_reduced", output=loss
)
return distribution.group(train_op)
iterator = self._get_iterator(distribution, dataset_fn)
def run_step():
initial_loss = lambda: tf.constant(1e7)
                # Initial values corresponding to reduced losses are single
                # tensors, but for non-reduced losses the initial values must
                # have the same structure as the losses themselves: a list of
                # per-replica losses in MirroredStrategy, a single tensor in
                # TPUStrategy. Using `call_for_each_replica` followed by
                # `experimental_local_results` gives us the desired initial
                # value structure.
not_reduced = distribution.experimental_local_results(
distribution.extended.call_for_each_replica(initial_loss)
)
initial_loop_values = {
"replica_loss_reduced": initial_loss(),
"cross_replica_loss_reduced": initial_loss(),
"cross_replica_loss_not_reduced": not_reduced,
}
ctx = distribution.extended.experimental_run_steps_on_iterator(
step_fn,
iterator,
iterations=2,
initial_loop_values=initial_loop_values,
)
self.assertEqual({key1: (value1,)}, ctx.non_tensor_outputs)
self._verify_loss_output(
initial_loss(),
loss_output=ctx.last_step_outputs["replica_loss_reduced"],
reduced=True,
distribution=distribution,
)
self._verify_loss_output(
initial_loss(),
loss_output=ctx.last_step_outputs[
"cross_replica_loss_reduced"
],
reduced=True,
distribution=distribution,
)
self._verify_loss_output(
initial_loss(),
loss_output=ctx.last_step_outputs[
"cross_replica_loss_not_reduced"
],
reduced=False,
distribution=distribution,
)
return (
ctx.run_op,
ctx.last_step_outputs["replica_loss_reduced"],
)
if not tf.executing_eagerly():
with self.cached_session() as sess:
run_step = sess.make_callable(run_step())
self.evaluate(tf.compat.v1.global_variables_initializer())
weights, biases = [], []
for _ in range(5):
run_step()
weights.append(self.evaluate(layer.kernel))
biases.append(self.evaluate(layer.bias))
error = abs(
numpy.add(numpy.squeeze(weights), numpy.squeeze(biases)) - 1
)
error_is_not_increasing = all(
y <= x for x, y in zip(error, error[1:])
)
self.assertTrue(error_is_not_increasing)
def _verify_loss_output(
self, initial_loss, loss_output, reduced, distribution
):
if not reduced:
self.assertLen(
distribution.experimental_local_results(loss_output),
distribution.num_replicas_in_sync,
)
loss_tensor = distribution.reduce(
tf.distribute.ReduceOp.MEAN, loss_output, axis=None
)
else:
unwrapped_output = distribution.experimental_local_results(
loss_output
)
self.assertLen(unwrapped_output, 1)
loss_tensor = unwrapped_output[0]
self.assertEqual(initial_loss.dtype, loss_tensor.dtype)
self.assertEqual(initial_loss.shape, loss_tensor.shape)
@tf.__internal__.distribute.combinations.generate(
optimizer_combinations.distributions_and_v2_optimizers()
)
def test_empty_var_list(self, distribution, optimizer_fn):
opt = optimizer_fn()
with distribution.scope():
def run_fn():
opt.minimize(lambda: tf.constant(1.0), [])
opt.apply_gradients([])
distribution.run(run_fn)
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/distribute/minimize_loss_test.py/0 | {
"file_path": "tf-keras/tf_keras/distribute/minimize_loss_test.py",
"repo_id": "tf-keras",
"token_count": 14890
} | 164 |
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Simple models (functional, sequential, subclass, tf.Module) and inputs."""
import numpy as np
import tensorflow.compat.v2 as tf
import tf_keras as keras
from tf_keras.distribute import model_collection_base
from tf_keras.optimizers.legacy import gradient_descent
_BATCH_SIZE = 10
def _get_data_for_simple_models():
x_train = tf.constant(np.random.rand(1000, 3), dtype=tf.float32)
y_train = tf.constant(np.random.rand(1000, 5), dtype=tf.float32)
x_predict = tf.constant(np.random.rand(1000, 3), dtype=tf.float32)
return x_train, y_train, x_predict
class SimpleFunctionalModel(model_collection_base.ModelAndInput):
"""A simple functional model and its inputs."""
def get_model(self, **kwargs):
output_name = "output_1"
x = keras.layers.Input(shape=(3,), dtype=tf.float32)
y = keras.layers.Dense(5, dtype=tf.float32, name=output_name)(x)
model = keras.Model(inputs=x, outputs=y)
optimizer = gradient_descent.SGD(learning_rate=0.001)
model.compile(loss="mse", metrics=["mae"], optimizer=optimizer)
return model
def get_data(self):
return _get_data_for_simple_models()
def get_batch_size(self):
return _BATCH_SIZE
class SimpleSequentialModel(model_collection_base.ModelAndInput):
"""A simple sequential model and its inputs."""
def get_model(self, **kwargs):
output_name = "output_1"
model = keras.Sequential()
y = keras.layers.Dense(
5, dtype=tf.float32, name=output_name, input_dim=3
)
model.add(y)
optimizer = gradient_descent.SGD(learning_rate=0.001)
model.compile(loss="mse", metrics=["mae"], optimizer=optimizer)
return model
def get_data(self):
return _get_data_for_simple_models()
def get_batch_size(self):
return _BATCH_SIZE
class _SimpleModel(keras.Model):
def __init__(self):
super().__init__()
self._dense_layer = keras.layers.Dense(5, dtype=tf.float32)
def call(self, inputs):
return self._dense_layer(inputs)
class SimpleSubclassModel(model_collection_base.ModelAndInput):
"""A simple subclass model and its data."""
def get_model(self, **kwargs):
model = _SimpleModel()
optimizer = gradient_descent.SGD(learning_rate=0.001)
model.compile(
loss="mse", metrics=["mae"], cloning=False, optimizer=optimizer
)
return model
def get_data(self):
return _get_data_for_simple_models()
def get_batch_size(self):
return _BATCH_SIZE
class _SimpleModule(tf.Module):
def __init__(self):
self.v = tf.Variable(3.0)
@tf.function
def __call__(self, x):
return self.v * x
class SimpleTFModuleModel(model_collection_base.ModelAndInput):
"""A simple model based on tf.Module and its data."""
def get_model(self, **kwargs):
model = _SimpleModule()
return model
def get_data(self):
return _get_data_for_simple_models()
def get_batch_size(self):
return _BATCH_SIZE
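# A minimal usage sketch (illustrative only, not exercised here): a
# ModelAndInput subclass is consumed roughly as
#
#   model_and_input = SimpleFunctionalModel()
#   model = model_and_input.get_model()
#   x_train, y_train, _ = model_and_input.get_data()
#   model.fit(x_train, y_train, batch_size=model_and_input.get_batch_size())
#
# typically under a tf.distribute strategy scope in the distribute tests.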
| tf-keras/tf_keras/distribute/simple_models.py/0 | {
"file_path": "tf-keras/tf_keras/distribute/simple_models.py",
"repo_id": "tf-keras",
"token_count": 1452
} | 165 |
# Copyright 2022 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for DTensor-enabled TF-Keras optimizers."""
import os
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
from tf_keras import backend
from tf_keras import layers
from tf_keras import losses
from tf_keras import models
from tf_keras.dtensor import dtensor_api as dtensor
from tf_keras.dtensor import layout_map
from tf_keras.dtensor import test_util
from tf_keras.optimizers import adadelta
from tf_keras.optimizers import adagrad
from tf_keras.optimizers import adam
from tf_keras.optimizers import adamw
from tf_keras.optimizers import rmsprop
from tf_keras.optimizers import sgd
class OptimizersTest(test_util.DTensorBaseTest):
def setUp(self):
super().setUp()
global_ids = test_util.create_device_ids_array((2, 2))
local_device_ids = np.ravel(global_ids).tolist()
mesh_dict = {
"CPU": dtensor.Mesh(
["X", "Y"],
global_ids,
local_device_ids,
test_util.create_device_list((2, 2), "CPU"),
)
}
self.mesh = self.configTestMesh(mesh_dict)
def test_add_variable_from_reference(self):
optimizer = adam.Adam(mesh=self.mesh)
variable_init_value = tf.ones([4, 4], dtype=tf.float32)
variable_init_value = dtensor.copy_to_mesh(
variable_init_value,
layout=dtensor.Layout.replicated(self.mesh, rank=2),
)
model_variable = dtensor.DVariable(
variable_init_value, trainable=True, name="tmp"
)
state_variable = optimizer.add_variable_from_reference(
model_variable, "test"
)
self.assertEqual(state_variable._shared_name, "test/tmp")
self.assertAllClose(self.evaluate(state_variable), tf.zeros([4, 4]))
# Make sure the variable contains the correct layout info
self.assertEqual(state_variable.layout, model_variable.layout)
def test_build_index_dict(self):
optimizer = adam.Adam(mesh=self.mesh)
variable_init_value = tf.ones(shape=(), dtype=tf.float32)
variable_init_value = dtensor.copy_to_mesh(
variable_init_value,
layout=dtensor.Layout.replicated(self.mesh, rank=0),
)
var_list = [
dtensor.DVariable(variable_init_value, name=f"var{i}")
for i in range(10)
]
optimizer._build_index_dict(var_list)
self.assertEqual(
optimizer._index_dict[optimizer._var_key(var_list[7])], 7
)
def test_aggregate_gradients_noop(self):
optimizer = adam.Adam(mesh=self.mesh)
variable_init_value = tf.ones(shape=(), dtype=tf.float32)
model_variable = dtensor.DVariable(
variable_init_value,
trainable=True,
layout=dtensor.Layout.replicated(self.mesh, rank=0),
)
grads = tf.ones_like(variable_init_value)
grad_and_var = zip([grads], [model_variable])
result = optimizer.aggregate_gradients(grad_and_var)
self.assertEqual(result, grad_and_var)
@parameterized.named_parameters(
(
"Adadelta",
adadelta.Adadelta,
{},
[
"Adadelta/accumulated_grad/Variable",
"Adadelta/accumulated_delta_var/Variable",
"iteration",
],
),
(
"Adam",
adam.Adam,
{"amsgrad": True},
[
"Adam/m/Variable",
"Adam/v/Variable",
"Adam/vhat/Variable",
"iteration",
],
),
(
"AdamW",
adamw.AdamW,
{"amsgrad": True},
[
"AdamW/m/Variable",
"AdamW/v/Variable",
"AdamW/vhat/Variable",
"iteration",
],
),
(
"Adagrad",
adagrad.Adagrad,
{},
["Adagrad/accumulator/Variable", "iteration"],
),
(
"RMSprop",
rmsprop.RMSprop,
{"momentum": 0.1, "centered": True},
[
"RMSprop/velocity/Variable",
"RMSprop/momentum/Variable",
"RMSprop/average_gradient/Variable",
"iteration",
],
),
(
"SGD",
sgd.SGD,
{"momentum": 0.1},
["SGD/m/Variable", "iteration"],
),
)
def test_apply_gradients(
self, optimizer_cls, init_args, expect_variable_names
):
optimizer = optimizer_cls(mesh=self.mesh, **init_args)
self.assertEqual(self.evaluate(optimizer.iterations), 0)
self.assertEqual(
optimizer.iterations.layout,
dtensor.Layout.replicated(self.mesh, rank=0),
)
variable_init_value = tf.ones([4, 4], dtype=tf.float32)
variable_init_value = dtensor.copy_to_mesh(
variable_init_value,
layout=dtensor.Layout.replicated(self.mesh, rank=2),
)
model_variable = dtensor.DVariable(variable_init_value, trainable=True)
grads = tf.ones_like(variable_init_value)
optimizer.apply_gradients(zip([grads], [model_variable]))
optimizer_variables = optimizer.variables
self.assertEqual(self.evaluate(optimizer.iterations), 1)
all_names = [var._shared_name for var in optimizer_variables]
self.assertCountEqual(all_names, expect_variable_names)
def test_embedding_lookup_backward_path(self):
# See b/265441685 for more context.
backend.enable_tf_random_generator()
os.environ[
"DTENSOR_ENABLE_REPLICATED_SPMD_AS_DEFAULT_TF.RESOURCESCATTERADD"
] = "1"
        # Build a small functional model with an embedding layer; it contains
        # tf.gather ops, which trigger the _deduplicate_sparse_grad() code
        # path, where the tf.unique op has a shape mismatch issue under
        # DTensor.
batch_size = 16
seq_length = 10
vocab_size = 100
output_size = 8
def produce_data():
inputs = tf.random.uniform(
maxval=vocab_size,
shape=(batch_size, seq_length),
dtype=tf.int32,
)
label = tf.random.uniform(
maxval=output_size, shape=(batch_size,), dtype=tf.int32
)
inputs = dtensor.copy_to_mesh(
inputs, layout=dtensor.Layout.replicated(self.mesh, rank=2)
)
inputs = dtensor.relayout(
inputs, dtensor.Layout.batch_sharded(self.mesh, "X", 2)
)
label = dtensor.copy_to_mesh(
label, layout=dtensor.Layout.replicated(self.mesh, rank=1)
)
label = dtensor.relayout(
label, dtensor.Layout.batch_sharded(self.mesh, "X", 1)
)
return inputs, label
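        # With the 2x2 mesh from setUp, batch_sharded(self.mesh, "X", rank)
        # splits the leading (batch) dimension across the two devices on the
        # "X" mesh axis, so each shard holds 8 of the 16 examples while the
        # remaining dimensions stay replicated.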
with layout_map.LayoutMap(self.mesh).scope():
inputs = layers.Input(shape=(seq_length,))
x = layers.Embedding(vocab_size, 64)(inputs)
x = layers.GlobalAveragePooling1D()(x)
preds = layers.Dense(output_size, activation="softmax")(x)
model = models.Model(inputs, preds)
optimizer = adam.Adam(mesh=self.mesh)
@tf.function
def train_func(model, inputs, label, optimizer):
with tf.GradientTape() as tape:
output = model(inputs)
loss = losses.sparse_categorical_crossentropy(label, output)
optimizer.minimize(loss, model.variables, tape)
return loss
        # The error only happens across batches, where the values returned by
        # tf.unique differ.
input1, label1 = produce_data()
train_func(model, input1, label1, optimizer)
input2, label2 = produce_data()
train_func(model, input2, label2, optimizer)
        # No assertions here; we only expect train_func to run properly with
        # different inputs.
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/dtensor/optimizers_test.py/0 | {
"file_path": "tf-keras/tf_keras/dtensor/optimizers_test.py",
"repo_id": "tf-keras",
"token_count": 4255
} | 166 |
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for compile utilities."""
import tensorflow.compat.v2 as tf
from tf_keras import backend
from tf_keras import losses as losses_mod
from tf_keras import metrics as metrics_mod
from tf_keras.engine import compile_utils
from tf_keras.testing_infra import test_combinations
class LossesContainerTest(test_combinations.TestCase):
def test_single_loss(self):
loss_container = compile_utils.LossesContainer("mse")
y_t, y_p = tf.ones((10, 5)), tf.zeros((10, 5))
total_loss = loss_container(y_t, y_p)
self.assertTrue(loss_container._built)
self.assertLen(loss_container._losses, 1)
self.assertIsInstance(total_loss, tf.Tensor)
self.assertEqual(total_loss.numpy(), 1.0)
self.assertLen(loss_container.metrics, 1)
loss_metric = loss_container.metrics[0]
self.assertEqual(loss_metric.name, "loss")
self.assertEqual(loss_metric.result().numpy(), 1.0)
loss_container.reset_state()
self.assertEqual(loss_metric.result().numpy(), 0.0)
def test_loss_list(self):
loss_container = compile_utils.LossesContainer(["mse", "mae"], [1, 0.5])
y_t = [tf.ones((10, 1)), tf.zeros((10, 1))]
y_p = [tf.ones((10, 1)), tf.ones((10, 1))]
sw = tf.convert_to_tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
total_loss = loss_container(y_t, y_p, sample_weight=sw)
self.assertEqual(loss_container._output_names, ["output_1", "output_2"])
self.assertLen(loss_container._losses, 2)
self.assertEqual(total_loss.numpy(), 0.25)
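        # output_1: mse(ones, ones) = 0. output_2: mae(zeros, ones) = 1 per
        # sample, and only the last five samples carry weight 1, so the
        # weighted mean is 0.5. Total = 1 * 0 + 0.5 * 0.5 = 0.25.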
loss_metric = loss_container.metrics[0]
self.assertEqual(loss_metric.name, "loss")
self.assertEqual(loss_metric.result().numpy(), 0.25)
output_1_metric = loss_container.metrics[1]
self.assertEqual(output_1_metric.name, "output_1_loss")
self.assertEqual(output_1_metric.result().numpy(), 0)
output_2_metric = loss_container.metrics[2]
self.assertEqual(output_2_metric.name, "output_2_loss")
self.assertEqual(output_2_metric.result().numpy(), 0.5)
loss_container.reset_state()
self.assertEqual(loss_metric.result().numpy(), 0)
self.assertEqual(output_1_metric.result().numpy(), 0)
self.assertEqual(output_2_metric.result().numpy(), 0)
def test_loss_dict(self):
loss_container = compile_utils.LossesContainer(
{"out1": "mse", "out2": "mae"}, {"out1": 1, "out2": 0.5}
)
y_t = {"out1": tf.ones((10, 1)), "out2": tf.zeros((10, 1))}
y_p = {"out1": tf.ones((10, 1)), "out2": tf.ones((10, 1))}
sw = tf.convert_to_tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
total_loss = loss_container(y_t, y_p, sample_weight=sw)
self.assertLen(loss_container._losses, 2)
self.assertIsInstance(total_loss, tf.Tensor)
self.assertEqual(total_loss.numpy(), 0.25)
self.assertLen(loss_container.metrics, 3)
loss_metric = loss_container.metrics[0]
self.assertEqual(loss_metric.name, "loss")
self.assertEqual(loss_metric.result().numpy(), 0.25)
out1_metric = loss_container.metrics[1]
self.assertEqual(out1_metric.name, "out1_loss")
self.assertEqual(out1_metric.result().numpy(), 0)
out2_metric = loss_container.metrics[2]
self.assertEqual(out2_metric.name, "out2_loss")
self.assertEqual(out2_metric.result().numpy(), 0.5)
loss_container.reset_state()
self.assertEqual(loss_metric.result().numpy(), 0)
self.assertEqual(out1_metric.result().numpy(), 0)
self.assertEqual(out2_metric.result().numpy(), 0)
def test_loss_partial_dict_with_output_names(self):
loss_container = compile_utils.LossesContainer(
{"out2": "mae"}, {"out2": 1.0}, output_names=["out1", "out2"]
)
y_t = [tf.ones((10, 1)), tf.zeros((10, 1))]
y_p = [tf.ones((10, 1)), tf.ones((10, 1))]
sw = tf.convert_to_tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
total_loss = loss_container(y_t, y_p, sample_weight=sw)
self.assertEqual(total_loss.numpy(), 0.5)
self.assertLen(loss_container.metrics, 2)
loss_metric = loss_container.metrics[0]
self.assertEqual(loss_metric.name, "loss")
self.assertEqual(loss_metric.result().numpy(), 0.5)
out2_metric = loss_container.metrics[1]
self.assertEqual(out2_metric.name, "out2_loss")
self.assertEqual(out2_metric.result().numpy(), 0.5)
def test_loss_dict_with_nones(self):
loss_container = compile_utils.LossesContainer(
{"out1": None, "out2": "mae"}
)
y_t = {"out1": tf.ones((10, 1)), "out2": tf.zeros((10, 1))}
y_p = {"out1": tf.ones((10, 1)), "out2": tf.ones((10, 1))}
sw = tf.convert_to_tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
total_loss = loss_container(y_t, y_p, sample_weight=sw)
self.assertIsInstance(total_loss, tf.Tensor)
self.assertEqual(total_loss.numpy(), 0.5)
self.assertLen(loss_container.metrics, 2)
loss_metric = loss_container.metrics[0]
self.assertEqual(loss_metric.name, "loss")
self.assertEqual(loss_metric.result().numpy(), 0.5)
out2_metric = loss_container.metrics[1]
self.assertEqual(out2_metric.name, "out2_loss")
self.assertEqual(out2_metric.result().numpy(), 0.5)
def test_nested_structure(self):
loss_container = compile_utils.LossesContainer(
{"b": ["mse", None], "a": "mae"},
loss_weights={"b": [0.5, 0], "a": 1},
)
y_t = {
"b": [tf.ones((10, 1)), tf.zeros((10, 1))],
"a": tf.zeros((10, 1)),
}
y_p = {
"b": [tf.zeros((10, 1)), tf.zeros((10, 1))],
"a": tf.ones((10, 1)),
}
sw = tf.convert_to_tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
total_loss = loss_container(y_t, y_p, sample_weight=sw)
self.assertIsInstance(total_loss, tf.Tensor)
self.assertEqual(total_loss.numpy(), 0.75)
self.assertLen(loss_container.metrics, 3)
loss_metric = loss_container.metrics[0]
self.assertEqual(loss_metric.name, "loss")
self.assertEqual(loss_metric.result().numpy(), 0.75)
a_metric = loss_container.metrics[1]
self.assertEqual(a_metric.name, "a_loss")
self.assertEqual(a_metric.result().numpy(), 0.5)
b_1_metric = loss_container.metrics[2]
self.assertEqual(b_1_metric.name, "b_1_loss")
self.assertEqual(b_1_metric.result().numpy(), 0.5)
def test_no_input_mutation(self):
loss = {"a": "mae"}
loss_container = compile_utils.LossesContainer(loss)
y_t = {"a": tf.zeros((10, 1))}
y_p = {"a": tf.ones((10, 1)), "b": tf.zeros((10, 1))}
sw = tf.convert_to_tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
total_loss = loss_container(y_t, y_p, sample_weight=sw)
self.assertIsInstance(total_loss, tf.Tensor)
self.assertEqual(total_loss.numpy(), 0.5)
self.assertLen(loss, 1)
def test_broadcast_single_loss(self):
loss_container = compile_utils.LossesContainer("mse")
y_t = [tf.ones((10, 1)), tf.zeros((10, 1))]
y_p = [tf.ones((10, 1)), tf.ones((10, 1))]
sw = tf.convert_to_tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
total_loss = loss_container(y_t, y_p, sample_weight=sw)
self.assertEqual(total_loss.numpy(), 0.5)
self.assertLen(loss_container.metrics, 3)
loss_metric = loss_container.metrics[0]
self.assertEqual(loss_metric.name, "loss")
self.assertEqual(loss_metric.result().numpy(), 0.5)
output_1_metric = loss_container.metrics[1]
self.assertEqual(output_1_metric.name, "output_1_loss")
self.assertEqual(output_1_metric.result().numpy(), 0.0)
output_2_metric = loss_container.metrics[2]
self.assertEqual(output_2_metric.name, "output_2_loss")
self.assertEqual(output_2_metric.result().numpy(), 0.5)
def test_missing_label_with_no_loss(self):
# It's ok to exclude a label if that label has no
# losses or metrics associated with it.
loss_container = compile_utils.LossesContainer(
{"output1": "mse", "output3": "mae"}
)
y_p = {
"output1": tf.convert_to_tensor([[0], [1], [2]]),
"output2": tf.convert_to_tensor([[3], [4], [5]]),
"output3": tf.convert_to_tensor([[6], [7], [8]]),
}
y_t = {
"output1": tf.convert_to_tensor([[1], [2], [3]]),
"output3": tf.convert_to_tensor([[4], [5], [6]]),
}
total_loss = loss_container(y_t, y_p)
self.assertEqual(total_loss.numpy(), 3.0)
self.assertLen(loss_container.metrics, 3)
loss_metric = loss_container.metrics[0]
self.assertEqual(loss_metric.name, "loss")
self.assertEqual(loss_metric.result().numpy(), 3.0)
output_1_metric = loss_container.metrics[1]
self.assertEqual(output_1_metric.name, "output1_loss")
self.assertEqual(output_1_metric.result().numpy(), 1.0)
output_3_metric = loss_container.metrics[2]
self.assertEqual(output_3_metric.name, "output3_loss")
self.assertEqual(output_3_metric.result().numpy(), 2.0)
def test_mismatched_dtypes(self):
y_t = tf.constant([1, 9, 2, -5], shape=(2, 2))
y_p = tf.constant([4, 8, 12, 8], shape=(2, 2), dtype=tf.float32)
def my_mae(labels, preds):
self.assertEqual(labels.dtype, tf.int32)
self.assertEqual(preds.dtype, tf.float32)
labels = tf.cast(labels, preds.dtype)
return backend.mean(tf.abs(preds - labels), axis=-1)
loss_container = compile_utils.LossesContainer(my_mae)
total_loss = loss_container(y_t, y_p)
self.assertEqual(total_loss.dtype, tf.float32)
def test_integer_dtypes(self):
y_t = tf.constant([1, 9, 2, -5], shape=(2, 2))
y_p = tf.constant([4, 8, 12, 8], shape=(2, 2), dtype=tf.int64)
def my_mae(labels, preds):
self.assertEqual(labels.dtype, tf.int64)
self.assertEqual(preds.dtype, tf.int64)
return backend.mean(tf.abs(preds - labels), axis=-1)
loss_container = compile_utils.LossesContainer(my_mae)
total_loss = loss_container(y_t, y_p)
self.assertEqual(total_loss.dtype, tf.int64)
def test_float_dtypes(self):
y_t = tf.constant([1, 9, 2, -5], shape=(2, 2), dtype=tf.float32)
y_p = tf.constant([4, 8, 12, 8], shape=(2, 2), dtype=tf.float64)
def my_mae(labels, preds):
self.assertEqual(labels.dtype, tf.float64)
self.assertEqual(preds.dtype, tf.float64)
return backend.mean(tf.abs(preds - labels), axis=-1)
loss_container = compile_utils.LossesContainer(my_mae)
total_loss = loss_container(y_t, y_p)
self.assertIsInstance(total_loss, tf.Tensor)
self.assertEqual(total_loss.dtype, tf.float64)
@test_combinations.generate(
test_combinations.combine(
input_type=["dense", "masked", "ragged"],
reduction=["auto", "sum"],
use_sample_weights=[True, False],
),
)
def test_loss_consistency(self, input_type, reduction, use_sample_weights):
y_p = tf.ragged.constant(
[[[1], [1], [1]], [[1], [1]]], dtype=tf.float32
)
y_t = tf.ragged.constant(
[[[1], [0], [0]], [[1], [1]]], dtype=tf.float32
)
if input_type == "masked":
mask = tf.ones_like(y_p).to_tensor()
y_p = y_p.to_tensor()
y_t = y_t.to_tensor()
y_p._keras_mask = mask
elif input_type == "dense":
y_p = y_p.to_tensor()
y_t = y_t.to_tensor()
if input_type == "dense":
count = 6
else:
count = 5
if use_sample_weights:
wrong = 4
maybe_sample_weight = {
"sample_weight": tf.constant([[2], [1]], dtype=tf.float32)
}
else:
wrong = 2
maybe_sample_weight = {}
expected = wrong
if reduction != "sum":
expected /= count
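        # "wrong" is the (possibly sample-weighted) sum of absolute errors and
        # "count" the number of valid elements (6 dense positions vs. 5 for
        # the ragged/masked inputs); mean-style reductions divide by count.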
loss_obj = losses_mod.MeanAbsoluteError(reduction=reduction)
result = loss_obj(y_t, y_p, **maybe_sample_weight)
self.assertAlmostEqual(result.numpy(), expected)
container = compile_utils.LossesContainer(loss_obj)
container_result = container(y_t, y_p, **maybe_sample_weight)
self.assertAlmostEqual(container_result.numpy(), expected)
def test_loss_masking(self):
loss_container = compile_utils.LossesContainer("mae")
y_p = tf.constant([[[1], [1]], [[0], [0]]], dtype=tf.float32)
y_t = tf.constant([[[1], [1]], [[1], [1]]], dtype=tf.float32)
        # The reduction is "sum_over_batch_size", where "batch size" means not
        # the literal batch size but the number of elements being summed,
        # i.e. the number of valid (unmasked) elements. Since the mask keeps
        # two valid items, the divisor is 2.
y_p._keras_mask = tf.constant([[1, 0], [1, 0]], dtype=tf.float32)
total_loss = loss_container(y_t, y_p)
self.assertAlmostEqual(total_loss.numpy(), 0.5) # sum over num valid
self.assertLen(loss_container.metrics, 1)
loss_metric = loss_container.metrics[0]
self.assertEqual(loss_metric.name, "loss")
self.assertAlmostEqual(loss_metric.result().numpy(), 0.5)
def test_loss_sample_weight(self):
loss_container = compile_utils.LossesContainer("mae")
y_p = tf.constant([[[1], [1]], [[0], [0]]], dtype=tf.float32)
y_t = tf.constant([[[1], [1]], [[1], [1]]], dtype=tf.float32)
sw = tf.constant([[0.2, 0.3], [0.5, 0]], dtype=tf.float32)
total_loss = loss_container(y_t, y_p, sample_weight=sw)
# (0 * .2 + 0 * .3 + 1 * .5 + 1 * 0) / 4
self.assertAlmostEqual(total_loss.numpy(), 0.125)
self.assertLen(loss_container.metrics, 1)
loss_metric = loss_container.metrics[0]
self.assertEqual(loss_metric.name, "loss")
self.assertAlmostEqual(loss_metric.result().numpy(), 0.125)
def test_loss_masking_sample_weight(self):
loss_container = compile_utils.LossesContainer("mae")
y_p = tf.constant([[[1], [1]], [[0], [0]]], dtype=tf.float32)
y_t = tf.constant([[[1], [1]], [[1], [1]]], dtype=tf.float32)
sw = tf.constant([[0.2, 0.3], [0.5, 0]], dtype=tf.float32)
y_p._keras_mask = tf.constant([[1, 0], [1, 0]], dtype=tf.float32)
total_loss = loss_container(y_t, y_p, sample_weight=sw)
# (0 * .2 + 1 * .5) / 2
self.assertAlmostEqual(total_loss.numpy(), 0.25) # sum over num valid
self.assertLen(loss_container.metrics, 1)
loss_metric = loss_container.metrics[0]
self.assertEqual(loss_metric.name, "loss")
self.assertAlmostEqual(loss_metric.result().numpy(), 0.25)
def test_custom_loss_callables(self):
def custom_loss_fn(y_true, y_pred):
return tf.reduce_sum(y_true - y_pred)
class CustomLossClass:
def __call__(self, y_true, y_pred):
return tf.reduce_sum(y_true - y_pred)
loss_container = compile_utils.LossesContainer(
[custom_loss_fn, CustomLossClass()]
)
y_t, y_p = tf.ones((10, 5)), tf.zeros((10, 5))
loss_container(y_t, y_p)
self.assertEqual(loss_container._losses[0].name, "custom_loss_fn")
self.assertEqual(loss_container._losses[1].name, "custom_loss_class")
def test_ragged_tensor_output(self):
"""Ensure ragged tensors can be passed as targets and predictions."""
def custom_loss_fn(y_true, y_pred):
"""MSE supports RaggedTensors directly."""
return losses_mod.mse(y_true, y_pred)
class CustomLossClass(losses_mod.Loss):
"""User defined loss func must implement RaggedTensor support."""
def call(self, y_true, y_pred):
losses = tf.ragged.map_flat_values(
tf.math.squared_difference, y_true, y_pred
)
return tf.reduce_mean(losses)
loss_container = compile_utils.LossesContainer(
[custom_loss_fn, CustomLossClass()]
)
v_t = tf.constant([[3.0, 4.0], [1.0, 2.0], [3.0, 5.0]])
v_p = tf.constant([[3.1, 4.0], [1.0, 2.0], [3.0, 5.0]])
y_t = tf.expand_dims(tf.RaggedTensor.from_row_splits(v_t, [0, 2, 3]), 0)
y_p = tf.expand_dims(tf.RaggedTensor.from_row_splits(v_p, [0, 2, 3]), 0)
total_loss = loss_container(y_t, y_p)
self.assertIsInstance(total_loss, tf.Tensor)
self.assertEqual(loss_container._losses[0].name, "custom_loss_fn")
class MetricsContainerTest(test_combinations.TestCase):
def test_single_metric(self):
metric_container = compile_utils.MetricsContainer("mse")
y_t, y_p = tf.ones((10, 5)), tf.zeros((10, 5))
metric_container.update_state(y_t, y_p)
self.assertLen(metric_container.metrics, 1)
metric = metric_container.metrics[0]
self.assertEqual(metric.name, "mse")
self.assertEqual(metric.result().numpy(), 1.0)
metric_container.reset_state()
self.assertEqual(metric.result().numpy(), 0.0)
def test_list_of_metrics_one_output(self):
metric_container = compile_utils.MetricsContainer(["mse", "mae"])
y_t, y_p = 2 * tf.ones((10, 5)), tf.zeros((10, 5))
metric_container.update_state(y_t, y_p)
self.assertLen(metric_container.metrics, 2)
mse_metric = metric_container.metrics[0]
self.assertEqual(mse_metric.name, "mse")
self.assertEqual(mse_metric.result().numpy(), 4.0)
mae_metric = metric_container.metrics[1]
self.assertEqual(mae_metric.name, "mae")
self.assertEqual(mae_metric.result().numpy(), 2.0)
metric_container.reset_state()
self.assertEqual(mse_metric.result().numpy(), 0.0)
self.assertEqual(mae_metric.result().numpy(), 0.0)
def test_list_of_metrics_list_of_outputs(self):
metric_container = compile_utils.MetricsContainer(
metrics=["mse", "mae"], # Should broadcast to both outputs.
weighted_metrics=["accuracy"],
) # Should broadcast to both outputs.
y_t = [tf.ones((10, 1)), tf.zeros((10, 1))]
y_p = [tf.ones((10, 1)), 2 * tf.ones((10, 1))]
sw = tf.convert_to_tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
metric_container.update_state(y_t, y_p, sample_weight=sw)
self.assertLen(metric_container.metrics, 6)
mse_metric = metric_container.metrics[0]
self.assertEqual(mse_metric.name, "output_1_mse")
self.assertEqual(mse_metric.result().numpy(), 0.0)
        mae_metric = metric_container.metrics[1]
        self.assertEqual(mae_metric.name, "output_1_mae")
        self.assertEqual(mae_metric.result().numpy(), 0.0)
acc_metric_1 = metric_container.metrics[2]
self.assertEqual(acc_metric_1.name, "output_1_accuracy")
self.assertEqual(acc_metric_1.result().numpy(), 1.0)
self.assertEqual(acc_metric_1._fn, metrics_mod.binary_accuracy)
        mse_metric = metric_container.metrics[3]
        self.assertEqual(mse_metric.name, "output_2_mse")
        self.assertEqual(mse_metric.result().numpy(), 4.0)
mae_metric = metric_container.metrics[4]
self.assertEqual(mae_metric.name, "output_2_mae")
self.assertEqual(mae_metric.result().numpy(), 2.0)
acc_metric_2 = metric_container.metrics[5]
self.assertEqual(acc_metric_2.name, "output_2_accuracy")
self.assertEqual(acc_metric_2.result().numpy(), 0.0)
self.assertEqual(acc_metric_2._fn, metrics_mod.binary_accuracy)
weighted_metrics = metric_container.weighted_metrics
self.assertLen(weighted_metrics, 2)
self.assertEqual(weighted_metrics[0].name, "output_1_accuracy")
self.assertEqual(weighted_metrics[1].name, "output_2_accuracy")
unweighted_metrics = metric_container.unweighted_metrics
self.assertLen(unweighted_metrics, 4)
self.assertEqual(unweighted_metrics[0].name, "output_1_mse")
self.assertEqual(unweighted_metrics[1].name, "output_1_mae")
self.assertEqual(unweighted_metrics[2].name, "output_2_mse")
self.assertEqual(unweighted_metrics[3].name, "output_2_mae")
def test_metric_dict(self):
metric_container = compile_utils.MetricsContainer(
metrics={"out1": "mse", "out2": "mae"},
weighted_metrics={"out1": "mse", "out2": "mae"},
)
y_t = {"out1": tf.ones((10, 1)), "out2": tf.zeros((10, 1))}
y_p = {"out1": tf.ones((10, 1)), "out2": 2 * tf.ones((10, 1))}
sw = tf.convert_to_tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
metric_container.update_state(y_t, y_p, sample_weight=sw)
mse_metric = metric_container.metrics[0]
self.assertEqual(mse_metric.name, "out1_mse")
self.assertEqual(mse_metric.result().numpy(), 0.0)
weighted_mse_metric = metric_container.metrics[1]
self.assertEqual(weighted_mse_metric.name, "out1_weighted_mse")
self.assertEqual(weighted_mse_metric.result().numpy(), 0.0)
mae_metric = metric_container.metrics[2]
self.assertEqual(mae_metric.name, "out2_mae")
self.assertEqual(mae_metric.result().numpy(), 2.0)
weighted_mae_metric = metric_container.metrics[3]
self.assertEqual(weighted_mae_metric.name, "out2_weighted_mae")
self.assertEqual(weighted_mae_metric.result().numpy(), 2.0)
metric_container.reset_state()
self.assertEqual(mse_metric.result().numpy(), 0.0)
self.assertEqual(weighted_mse_metric.result().numpy(), 0.0)
self.assertEqual(mae_metric.result().numpy(), 0.0)
self.assertEqual(weighted_mae_metric.result().numpy(), 0.0)
def test_metric_partial_dict_with_output_names(self):
metric_container = compile_utils.MetricsContainer(
{"out2": "mae"}, output_names=["out1", "out2"]
)
y_t = [tf.ones((10, 1)), tf.zeros((10, 1))]
y_p = [tf.ones((10, 1)), tf.ones((10, 1))]
sw = tf.convert_to_tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
metric_container.update_state(y_t, y_p, sample_weight=sw)
self.assertLen(metric_container.metrics, 1)
mae_metric = metric_container.metrics[0]
self.assertEqual(mae_metric.name, "out2_mae")
self.assertEqual(mae_metric.result().numpy(), 1.0)
def test_metric_partial_dict_with_nones(self):
metric_container = compile_utils.MetricsContainer(
{"out1": None, "out2": "mae"}
)
y_t = {"out1": tf.ones((10, 1)), "out2": tf.zeros((10, 1))}
y_p = {"out1": tf.ones((10, 1)), "out2": tf.ones((10, 1))}
sw = tf.convert_to_tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
metric_container.update_state(y_t, y_p, sample_weight=sw)
self.assertLen(metric_container.metrics, 1)
mae_metric = metric_container.metrics[0]
self.assertEqual(mae_metric.name, "out2_mae")
self.assertEqual(mae_metric.result().numpy(), 1.0)
def test_nested_structure(self):
metric_container = compile_utils.MetricsContainer(
metrics={"b": ["mse", None], "a": "mae"},
weighted_metrics={"b": [None, None], "a": "mse"},
)
y_t = {
"b": [2 * tf.ones((10, 1)), tf.zeros((10, 1))],
"a": tf.zeros((10, 1)),
}
y_p = {
"b": [tf.zeros((10, 1)), tf.zeros((10, 1))],
"a": tf.ones((10, 1)),
}
sw = tf.convert_to_tensor([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
metric_container.update_state(y_t, y_p, sample_weight=sw)
self.assertLen(metric_container.metrics, 3)
a_mae_metric = metric_container.metrics[0]
self.assertEqual(a_mae_metric.name, "a_mae")
self.assertEqual(a_mae_metric.result().numpy(), 1.0)
        weighted_a_mse_metric = metric_container.metrics[1]
        self.assertEqual(weighted_a_mse_metric.name, "a_mse")
        self.assertEqual(weighted_a_mse_metric.result().numpy(), 1.0)
b_1_mse_metric = metric_container.metrics[2]
self.assertEqual(b_1_mse_metric.name, "b_1_mse")
self.assertEqual(b_1_mse_metric.result().numpy(), 4.0)
def test_no_input_mutation(self):
metric = {"a": "mae"}
metric_container = compile_utils.MetricsContainer(metric)
y_t = {"a": tf.zeros((10, 1))}
y_p = {"a": tf.ones((10, 1)), "b": tf.zeros((10, 1))}
metric_container.update_state(y_t, y_p)
self.assertLen(metric, 1)
mae_metric = metric_container.metrics[0]
self.assertEqual(mae_metric.result().numpy(), 1.0)
def test_crossentropy(self):
metric_container = compile_utils.MetricsContainer("crossentropy")
y_t, y_p = tf.ones((10, 1)), tf.ones((10, 1))
metric_container.update_state(y_t, y_p)
self.assertEqual(
metric_container.metrics[0]._fn, metrics_mod.binary_crossentropy
)
metric_container = compile_utils.MetricsContainer("crossentropy")
y_t, y_p = tf.ones((10, 1)), tf.ones((10, 20))
self.assertEqual(y_p.shape.as_list()[-1], 20)
metric_container.update_state(y_t, y_p)
self.assertEqual(
metric_container.metrics[0]._fn,
metrics_mod.sparse_categorical_crossentropy,
)
metric_container = compile_utils.MetricsContainer("crossentropy")
y_t, y_p = tf.ones((10, 20)), tf.ones((10, 20))
metric_container.update_state(y_t, y_p)
self.assertEqual(
metric_container.metrics[0]._fn,
metrics_mod.categorical_crossentropy,
)
def test_accuracy(self):
metric_container = compile_utils.MetricsContainer("accuracy")
y_t, y_p = tf.ones((10, 1)), tf.ones((10, 1))
metric_container.update_state(y_t, y_p)
self.assertEqual(
metric_container.metrics[0]._fn, metrics_mod.binary_accuracy
)
metric_container = compile_utils.MetricsContainer("Accuracy")
y_t, y_p = tf.ones((10, 1)), tf.ones((10, 1))
metric_container.update_state(y_t, y_p)
self.assertEqual(
metric_container.metrics[0]._fn, metrics_mod.binary_accuracy
)
metric_container = compile_utils.MetricsContainer("accuracy")
y_t, y_p = tf.ones((10, 1)), tf.ones((10, 20))
self.assertEqual(y_p.shape.as_list()[-1], 20)
metric_container.update_state(y_t, y_p)
self.assertEqual(
metric_container.metrics[0]._fn,
metrics_mod.sparse_categorical_accuracy,
)
metric_container = compile_utils.MetricsContainer("accuracy")
y_t, y_p = tf.ones((10, 20)), tf.ones((10, 20))
metric_container.update_state(y_t, y_p)
self.assertEqual(
metric_container.metrics[0]._fn, metrics_mod.categorical_accuracy
)
def test_metric_weighting(self):
metric_container = compile_utils.MetricsContainer(
metrics=["mae"], weighted_metrics=["mae"]
)
y_t = tf.convert_to_tensor([[0], [3], [0]])
y_p = tf.convert_to_tensor([[0], [0], [0]])
sw = tf.convert_to_tensor([[1], [0], [1]])
metric_container.update_state(y_t, y_p, sample_weight=sw)
self.assertLen(metric_container.metrics, 2)
mae_metric = metric_container.metrics[0]
self.assertEqual(mae_metric.name, "mae")
self.assertEqual(mae_metric.result().numpy(), 1.0)
weighted_mae_metric = metric_container.metrics[1]
self.assertEqual(weighted_mae_metric.name, "weighted_mae")
self.assertEqual(weighted_mae_metric.result().numpy(), 0.0)
def test_broadcast_metrics_to_dict(self):
metric_container = compile_utils.MetricsContainer(metrics=["mae"])
y_p = {"output": tf.convert_to_tensor([[0], [1], [2]])}
y_t = {"output": tf.convert_to_tensor([[1], [2], [3]])}
metric_container.update_state(y_t, y_p)
mae_metric = metric_container.metrics[0]
self.assertEqual(mae_metric.name, "mae")
self.assertEqual(mae_metric.result().numpy(), 1.0)
def test_broadcast_metrics_to_dict_with_output_names(self):
metric_container = compile_utils.MetricsContainer(
metrics=["mae"], output_names=["output"]
)
y_p = tf.convert_to_tensor([[0], [1], [2]])
y_t = {"output": tf.convert_to_tensor([[1], [2], [3]])}
metric_container.update_state(y_t, y_p)
mae_metric = metric_container.metrics[0]
self.assertEqual(mae_metric.name, "mae")
self.assertEqual(mae_metric.result().numpy(), 1.0)
def test_missing_label_with_no_metrics(self):
# It's ok to exclude a label if that label has no
# losses or metrics associated with it.
metric_container = compile_utils.MetricsContainer(
metrics={"output1": "mae", "output3": "mse"}
)
y_p = {
"output1": tf.convert_to_tensor([[0], [1], [2]]),
"output2": tf.convert_to_tensor([[3], [4], [5]]),
"output3": tf.convert_to_tensor([[6], [7], [8]]),
}
y_t = {
"output1": tf.convert_to_tensor([[1], [2], [3]]),
"output3": tf.convert_to_tensor([[4], [5], [6]]),
}
metric_container.update_state(y_t, y_p)
self.assertLen(metric_container.metrics, 2)
mae_metric = metric_container.metrics[0]
self.assertEqual(mae_metric.name, "output1_mae")
self.assertEqual(mae_metric.result().numpy(), 1.0)
mse_metric = metric_container.metrics[1]
self.assertEqual(mse_metric.name, "output3_mse")
self.assertEqual(mse_metric.result().numpy(), 4.0)
def test_metrics_masking(self):
metrics_container = compile_utils.MetricsContainer(
metrics=["mae"], weighted_metrics=["mse"]
)
y_p = tf.constant([[[1], [1]], [[0], [0]]], dtype=tf.float32)
y_t = tf.constant([[[1], [1]], [[1], [1]]], dtype=tf.float32)
y_p._keras_mask = tf.constant([[1, 1], [0, 0]], dtype=tf.float32)
metrics_container.update_state(y_t, y_p)
self.assertLen(metrics_container.metrics, 2)
mae_metric = metrics_container.metrics[0]
self.assertEqual(mae_metric.name, "mae")
self.assertAlmostEqual(mae_metric.result().numpy(), 0)
weighted_mae_metric = metrics_container.metrics[1]
self.assertEqual(weighted_mae_metric.name, "mse")
self.assertAlmostEqual(weighted_mae_metric.result().numpy(), 0)
def test_metrics_sample_weight(self):
metrics_container = compile_utils.MetricsContainer(
metrics=["mae"], weighted_metrics=["mse"]
)
y_p = tf.constant([[[1], [1]], [[0], [1]]], dtype=tf.float32)
y_t = tf.constant([[[1], [1]], [[1], [1]]], dtype=tf.float32)
sw = tf.constant([[0.2, 0.3], [0.5, 0]], dtype=tf.float32)
metrics_container.update_state(y_t, y_p, sample_weight=sw)
self.assertLen(metrics_container.metrics, 2)
mae_metric = metrics_container.metrics[0]
self.assertEqual(mae_metric.name, "mae")
self.assertAlmostEqual(mae_metric.result().numpy(), 0.25) # 1 / 4
weighted_mae_metric = metrics_container.metrics[1]
self.assertEqual(weighted_mae_metric.name, "mse")
self.assertAlmostEqual(
weighted_mae_metric.result().numpy(), 0.5
) # .5 / 1
def test_metrics_masking_sample_weight(self):
metrics_container = compile_utils.MetricsContainer(
metrics=["mae"], weighted_metrics=["mse"]
)
y_p = tf.constant([[[1], [1]], [[0], [1]]], dtype=tf.float32)
y_t = tf.constant([[[1], [1]], [[1], [1]]], dtype=tf.float32)
sw = tf.constant([[0.3, 0.2], [0.2, 0.3]], dtype=tf.float32)
y_p._keras_mask = tf.constant([[1, 0], [1, 0]], dtype=tf.float32)
metrics_container.update_state(y_t, y_p, sample_weight=sw)
self.assertLen(metrics_container.metrics, 2)
mae_metric = metrics_container.metrics[0]
self.assertEqual(mae_metric.name, "mae")
        self.assertAlmostEqual(mae_metric.result().numpy(), 0.5)  # 1 / 2
weighted_mae_metric = metrics_container.metrics[1]
self.assertEqual(weighted_mae_metric.name, "mse")
self.assertAlmostEqual(weighted_mae_metric.result().numpy(), 0.2 / 0.5)
def test_loss_class_as_metric_with_distribution(self):
distribution = tf.distribute.OneDeviceStrategy("/device:CPU:0")
with distribution.scope():
metric_container = compile_utils.MetricsContainer(
losses_mod.MeanSquaredError()
)
y_t, y_p = tf.ones((10, 5)), tf.zeros((10, 5))
metric_container.update_state(y_t, y_p)
self.assertLen(metric_container.metrics, 1)
metric = metric_container.metrics[0]
self.assertEqual(metric.name, "mean_squared_error")
self.assertEqual(metric.result().numpy(), 1.0)
def test_custom_metric_callables(self):
def custom_metric_fn(y_true, y_pred):
return tf.reduce_sum(y_true - y_pred)
class CustomMetricClass:
def __call__(self, y_true, y_pred):
return tf.reduce_sum(y_true - y_pred)
metric_container = compile_utils.MetricsContainer(
[custom_metric_fn, CustomMetricClass()]
)
y_t, y_p = tf.ones((10, 5)), tf.zeros((10, 5))
metric_container.update_state(y_t, y_p)
self.assertEqual(metric_container.metrics[0].name, "custom_metric_fn")
self.assertEqual(
metric_container.metrics[1].name, "custom_metric_class"
)
def test_reset_state_existing_metric_before_built(self):
metric = metrics_mod.Mean()
metric.update_state([2.0, 4.0])
self.assertEqual(metric.result().numpy(), 3.0)
metric_container = compile_utils.MetricsContainer(metric)
metric_container.reset_state()
self.assertEqual(metric.result().numpy(), 0.0)
def test_duplicated_metric_instance(self):
mean_obj = metrics_mod.Mean()
metric = mean_obj
with self.assertRaisesRegex(ValueError, "Found duplicated metrics"):
compile_utils.MetricsContainer(
metrics=metric, weighted_metrics=metric
)
        # A duplicated string is fine.
metric = "acc"
compile_utils.MetricsContainer(metrics=metric, weighted_metrics=metric)
        # A more complicated structure.
metric = [mean_obj, "acc"]
weighted_metric = {"output1": mean_obj, "output2": "acc"}
with self.assertRaisesRegex(ValueError, "Found duplicated metrics"):
compile_utils.MetricsContainer(
metrics=metric, weighted_metrics=weighted_metric
)
if __name__ == "__main__":
tf.compat.v1.enable_eager_execution()
tf.test.main()
| tf-keras/tf_keras/engine/compile_utils_test.py/0 | {
"file_path": "tf-keras/tf_keras/engine/compile_utils_test.py",
"repo_id": "tf-keras",
"token_count": 17385
} | 167 |
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""InputSpec tests."""
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
from tf_keras import layers
from tf_keras.engine import keras_tensor
from tf_keras.engine import training
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
class CustomTypeSpec(tf.TypeSpec):
"""Stubbed-out custom type spec, for testing."""
def __init__(self, shape, dtype):
self.shape = tf.TensorShape(shape)
self.dtype = tf.dtypes.as_dtype(dtype)
# Stub implementations for all the TypeSpec methods:
value_type = None
_to_components = lambda self, value: None
_from_components = lambda self, components: None
_component_specs = property(lambda self: None)
_serialize = lambda self: (self.shape, self.dtype)
class CustomTypeSpec2(CustomTypeSpec):
"""Adds a with_shape method to CustomTypeSpec."""
def with_shape(self, new_shape):
return CustomTypeSpec2(new_shape, self.dtype)
@test_utils.run_v2_only
class KerasTensorTest(test_combinations.TestCase):
def test_repr_and_string(self):
kt = keras_tensor.KerasTensor(
type_spec=tf.TensorSpec(shape=(1, 2, 3), dtype=tf.float32)
)
expected_str = (
"KerasTensor(type_spec=TensorSpec(shape=(1, 2, 3), "
"dtype=tf.float32, name=None))"
)
expected_repr = "<KerasTensor: shape=(1, 2, 3) dtype=float32>"
self.assertEqual(expected_str, str(kt))
self.assertEqual(expected_repr, repr(kt))
kt = keras_tensor.KerasTensor(
type_spec=tf.TensorSpec(shape=(2,), dtype=tf.int32),
inferred_value=[2, 3],
)
expected_str = (
"KerasTensor(type_spec=TensorSpec(shape=(2,), "
"dtype=tf.int32, name=None), inferred_value=[2, 3])"
)
expected_repr = (
"<KerasTensor: shape=(2,) dtype=int32 inferred_value=[2, 3]>"
)
self.assertEqual(expected_str, str(kt))
self.assertEqual(expected_repr, repr(kt))
kt = keras_tensor.KerasTensor(
type_spec=tf.SparseTensorSpec(shape=(1, 2, 3), dtype=tf.float32)
)
expected_str = (
"KerasTensor(type_spec=SparseTensorSpec("
"TensorShape([1, 2, 3]), tf.float32))"
)
expected_repr = (
"<KerasTensor: type_spec=SparseTensorSpec("
"TensorShape([1, 2, 3]), tf.float32)>"
)
self.assertEqual(expected_str, str(kt))
self.assertEqual(expected_repr, repr(kt))
inp = layers.Input(shape=(3, 5))
kt = layers.Dense(10)(inp)
expected_str = (
"KerasTensor(type_spec=TensorSpec(shape=(None, 3, 10), "
"dtype=tf.float32, name=None), name='dense/BiasAdd:0', "
"description=\"created by layer 'dense'\")"
)
expected_repr = (
"<KerasTensor: shape=(None, 3, 10) dtype=float32 (created "
"by layer 'dense')>"
)
self.assertEqual(expected_str, str(kt))
self.assertEqual(expected_repr, repr(kt))
kt = tf.reshape(kt, shape=(3, 5, 2))
expected_str = (
"KerasTensor(type_spec=TensorSpec(shape=(3, 5, 2), "
"dtype=tf.float32, name=None), name='tf.reshape/Reshape:0', "
"description=\"created by layer 'tf.reshape'\")"
)
expected_repr = (
"<KerasTensor: shape=(3, 5, 2) dtype=float32 (created "
"by layer 'tf.reshape')>"
)
self.assertEqual(expected_str, str(kt))
self.assertEqual(expected_repr, repr(kt))
kts = tf.unstack(kt)
for i in range(3):
expected_str = (
"KerasTensor(type_spec=TensorSpec(shape=(5, 2), "
"dtype=tf.float32, name=None), name='tf.unstack/unstack:%s', "
"description=\"created by layer 'tf.unstack'\")" % (i,)
)
expected_repr = (
"<KerasTensor: shape=(5, 2) dtype=float32 "
"(created by layer 'tf.unstack')>"
)
self.assertEqual(expected_str, str(kts[i]))
self.assertEqual(expected_repr, repr(kts[i]))
@parameterized.parameters(
{"property_name": "values"},
{"property_name": "indices"},
{"property_name": "dense_shape"},
)
def test_sparse_instance_property(self, property_name):
inp = layers.Input(shape=[3], sparse=True)
out = getattr(inp, property_name)
model = training.Model(inp, out)
x = tf.SparseTensor(
[[0, 0], [0, 1], [1, 1], [1, 2]], [1, 2, 3, 4], [2, 3]
)
expected_property = getattr(x, property_name)
self.assertAllEqual(model(x), expected_property)
# Test that it works with serialization and deserialization as well
model_config = model.get_config()
model2 = training.Model.from_config(model_config)
self.assertAllEqual(model2(x), expected_property)
@parameterized.parameters(
[
(tf.TensorSpec([2, 3], tf.int32), [2, 3]),
(tf.RaggedTensorSpec([2, None]), [2, None]),
(tf.SparseTensorSpec([8]), [8]),
(CustomTypeSpec([3, 8], tf.int32), [3, 8]),
]
)
def test_shape(self, spec, expected_shape):
kt = keras_tensor.KerasTensor(spec)
self.assertEqual(kt.shape.as_list(), expected_shape)
@parameterized.parameters(
[
(tf.TensorSpec([8, 3], tf.int32), [8, 3], [8, 3]),
(tf.TensorSpec([None, 3], tf.int32), [8, 3], [8, 3]),
(tf.TensorSpec([8, 3], tf.int32), [None, 3], [8, 3]),
(tf.TensorSpec(None, tf.int32), [8, 3], [8, 3]),
(tf.TensorSpec(None, tf.int32), [8, None], [8, None]),
(tf.TensorSpec(None, tf.int32), None, None),
(tf.RaggedTensorSpec([2, None, None]), [2, None, 5], [2, None, 5]),
(tf.SparseTensorSpec([8]), [8], [8]),
(CustomTypeSpec2([3, None], tf.int32), [3, 8], [3, 8]),
]
)
def test_set_shape(self, spec, new_shape, expected_shape):
kt = keras_tensor.KerasTensor(spec)
kt.set_shape(new_shape)
if expected_shape is None:
self.assertIsNone(kt.type_spec.shape.rank)
else:
self.assertEqual(kt.type_spec.shape.as_list(), expected_shape)
self.assertTrue(kt.type_spec.is_compatible_with(spec))
@parameterized.parameters(
[
(layers.Input(shape=[3, 4], batch_size=7), tf.reshape),
(layers.Input(shape=[3, 4], ragged=True, batch_size=7), tf.reshape),
(
layers.Input(shape=[3, 4], sparse=True, batch_size=7),
tf.sparse.reshape,
),
]
)
def test_reshape(self, inp, reshape_op):
out = reshape_op(inp, shape=[7, 4, 3])
self.assertEqual(out.type_spec.shape.as_list(), [7, 4, 3])
def test_set_shape_error(self):
spec = CustomTypeSpec([3, None], tf.int32)
kt = keras_tensor.KerasTensor(spec)
with self.assertRaisesRegex(
ValueError, "Keras requires TypeSpec to have a `with_shape` method"
):
kt.set_shape([3, 3])
def test_set_shape_equals_expected_shape(self):
# Tests b/203201161: DenseSpec has both a _shape and a _shape_tuple
# field, and we need to be sure both get updated.
kt = keras_tensor.KerasTensor(tf.TensorSpec([8, None], tf.int32))
kt.set_shape([8, 3])
self.assertEqual(kt.type_spec, tf.TensorSpec([8, 3], tf.int32))
def test_type_spec_with_shape_equals_expected_shape(self):
# Tests b/203201161: DenseSpec has both a _shape and a _shape_tuple
# field, and we need to be sure both get updated.
spec1 = tf.TensorSpec([8, None], tf.int32)
spec2 = keras_tensor.type_spec_with_shape(spec1, [8, 3])
expected = tf.TensorSpec([8, 3], tf.int32)
self.assertEqual(spec2, expected)
def test_missing_shape_error(self):
spec = CustomTypeSpec(None, tf.int32)
del spec.shape
with self.assertRaisesRegex(
ValueError,
"KerasTensor only supports TypeSpecs that have a shape field; .*",
):
keras_tensor.KerasTensor(spec)
def test_wrong_shape_type_error(self):
spec = CustomTypeSpec(None, tf.int32)
spec.shape = "foo"
with self.assertRaisesRegex(
TypeError,
"KerasTensor requires that wrapped TypeSpec's shape is a "
"TensorShape; .*",
):
keras_tensor.KerasTensor(spec)
def test_missing_dtype_error(self):
spec = CustomTypeSpec(None, tf.int32)
del spec.dtype
kt = keras_tensor.KerasTensor(spec)
with self.assertRaisesRegex(
AttributeError,
"KerasTensor wraps TypeSpec .* which does not have a dtype.",
):
kt.dtype
def test_wrong_dtype_type_error(self):
spec = CustomTypeSpec(None, tf.int32)
spec.dtype = "foo"
kt = keras_tensor.KerasTensor(spec)
with self.assertRaisesRegex(
TypeError,
"KerasTensor requires that wrapped TypeSpec's dtype is a DType; .*",
):
kt.dtype
def test_from_tensor_mask_tensor_is_none(self):
tensor = tf.constant([1.0])
kt = keras_tensor.keras_tensor_from_tensor(tensor)
self.assertIsNone(getattr(kt, "_keras_mask", None))
def test_from_tensor_mask_tensor_is_not_none(self):
tensor = tf.constant([1.0])
tensor._keras_mask = tf.constant([1.0])
kt = keras_tensor.keras_tensor_from_tensor(tensor)
self.assertIsInstance(kt._keras_mask, keras_tensor.KerasTensor)
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/engine/keras_tensor_test.py/0 | {
"file_path": "tf-keras/tf_keras/engine/keras_tensor_test.py",
"repo_id": "tf-keras",
"token_count": 5025
} | 168 |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Part of TF-Keras training engine related to Python generators of array data.
"""
import functools
import math
import numpy as np
import tensorflow.compat.v2 as tf
from tf_keras import backend
from tf_keras import callbacks as cbks
from tf_keras.engine import training_utils
from tf_keras.engine import training_utils_v1
from tf_keras.utils import data_utils
from tf_keras.utils import generic_utils
from tf_keras.utils.mode_keys import ModeKeys
# isort: off
from tensorflow.python.platform import tf_logging as logging
def model_iteration(
model,
data,
steps_per_epoch=None,
epochs=1,
verbose=1,
callbacks=None,
validation_data=None,
validation_steps=None,
validation_freq=1,
class_weight=None,
max_queue_size=10,
workers=1,
use_multiprocessing=False,
shuffle=False,
initial_epoch=0,
mode=ModeKeys.TRAIN,
batch_size=None,
steps_name="steps",
**kwargs,
):
"""Loop function for arrays of data with modes TRAIN/TEST/PREDICT.
Args:
model: TF-Keras Model instance.
data: Either a tuple of NumPy/Tensor inputs (i.e. `(x,)` or `(x, y)` or
`(x, y, sample_weights)`) or a generator or
`keras.utils.data_utils.Sequence` object or Eager Iterator or Dataset.
steps_per_epoch: Total number of steps (batches of samples) before
declaring one epoch finished and starting the next epoch. Ignored with
the default value of `None`.
epochs: Number of times to iterate over the data.
verbose: 0, 1, or 2. Verbosity mode.
0 = silent, 1 = progress bar, 2 = one line per epoch.
Note that the progress bar is not particularly useful when
logged to a file, so verbose=2 is recommended when not running
          interactively (e.g., in a production environment).
callbacks: List of callbacks to be called during training.
validation_data: Either a tuple of NumPy/Tensor inputs (i.e. `(x,)` or
`(x, y)` or `(x, y, sample_weights)`) or a generator or
`keras.utils.data_utils.Sequence` object or Eager Iterator or Dataset.
validation_steps: Total number of steps (batches of samples) before
declaring validation finished.
validation_freq: Only relevant if validation data is provided. Integer
or `collections.abc.Container` instance (e.g. list, tuple, etc.). If
an integer, specifies how many training epochs to run before a new
validation run is performed, e.g. `validation_freq=2` runs validation
every 2 epochs. If a Container, specifies the epochs on which to run
validation, e.g. `validation_freq=[1, 2, 10]` runs validation at the
end of the 1st, 2nd, and 10th epochs.
class_weight: Dictionary mapping class indices to a weight for the
class.
max_queue_size: Integer. Maximum size for the generator queue. If
unspecified, `max_queue_size` will default to 10.
workers: Integer. Maximum number of processes to spin up when using
process-based threading. If unspecified, `workers` will default to 1.
If 0, will execute the generator on the main thread.
use_multiprocessing: Boolean. If `True`, use process-based threading. If
unspecified, `use_multiprocessing` will default to `False`. Note that
because this implementation relies on multiprocessing, you should not
pass non-pickleable arguments to the generator as they can't be passed
easily to children processes.
shuffle: Boolean. Whether to shuffle the order of the batches at the
beginning of each epoch. Only used with instances of `Sequence`
(`keras.utils.Sequence`). Has no effect when `steps_per_epoch` is not
`None`.
initial_epoch: Epoch at which to start training (useful for resuming a
previous training run).
mode: One of ModeKeys.TRAIN/ModeKeys.TEST/ModeKeys.PREDICT.
batch_size: Integer batch size or None if unknown. Will only be used if
`data` is in NumPy/Tensor format.
steps_name: The string name of the steps argument, either `steps`,
`validation_steps`, or `steps_per_epoch`. Only used for error message
formatting.
**kwargs: Additional arguments for backwards compatibility. `steps` is
accepted as an alias for `steps_per_epoch`.
Returns:
- In TRAIN mode: `History` object.
- In TEST mode: Evaluation metrics.
- In PREDICT mode: Outputs of the Model called on inputs.
Raises:
ValueError: in case of invalid arguments.
"""
if "steps" in kwargs:
steps_per_epoch = kwargs["steps"]
# Determine the number of steps per epoch and whether we should reset the
# dataset at the end of each epoch.
reset_dataset_after_each_epoch = False
original_dataset = None
is_dataset = isinstance(data, (tf.data.Dataset, tf.compat.v1.data.Dataset))
if is_dataset:
original_dataset = data
if steps_per_epoch is None:
reset_dataset_after_each_epoch = True
steps_per_epoch = training_utils_v1.infer_steps_for_dataset(
model,
data,
steps_per_epoch,
epochs=epochs,
steps_name=steps_name,
)
# Convert to a format that supports `next(generator)`.
generator, steps_per_epoch = convert_to_generator_like(
data,
steps_per_epoch=steps_per_epoch,
batch_size=batch_size,
epochs=epochs - initial_epoch,
shuffle=shuffle,
)
do_validation = validation_data is not None
is_sequence = isinstance(generator, data_utils.Sequence)
_validate_arguments(
is_sequence,
is_dataset,
use_multiprocessing,
workers,
steps_per_epoch,
validation_data,
validation_steps,
mode,
kwargs,
)
batch_function = _make_execution_function(
model, mode, class_weight=class_weight
)
# Create the queue for the generator.
enqueuer = None
if not is_dataset:
generator, enqueuer = _make_enqueued_generator(
generator,
workers=workers,
use_multiprocessing=use_multiprocessing,
max_queue_size=max_queue_size,
shuffle=shuffle,
)
num_samples_or_steps, use_steps = _get_num_samples_or_steps(
data, steps_per_epoch
)
count_mode = "steps" if use_steps else "samples"
callbacks = cbks.configure_callbacks(
callbacks,
model,
do_validation=do_validation,
epochs=epochs,
steps_per_epoch=steps_per_epoch,
batch_size=batch_size,
samples=num_samples_or_steps,
count_mode=count_mode,
verbose=verbose,
mode=mode,
)
if mode == ModeKeys.PREDICT:
aggregator = training_utils_v1.OutputsAggregator(
True, steps=steps_per_epoch
)
else:
aggregator = training_utils_v1.MetricsAggregator(
True, steps=steps_per_epoch
)
should_set_learning_phase = tf.executing_eagerly() and model.run_eagerly
if should_set_learning_phase:
learning_phase_scope = backend.eager_learning_phase_scope(
1 if mode == ModeKeys.TRAIN else 0
)
learning_phase_scope.__enter__()
callbacks.model.stop_training = False
callbacks._call_begin_hook(mode)
initial_epoch = model._maybe_load_initial_epoch_from_ckpt(
initial_epoch, mode
)
for epoch in range(initial_epoch, epochs):
if callbacks.model.stop_training:
break
# Setup work for each epoch.
model.reset_metrics()
epoch_logs = {}
if mode == ModeKeys.TRAIN:
callbacks.on_epoch_begin(epoch, epoch_logs)
if steps_per_epoch is None:
# Loop over dataset until `OutOfRangeError` is raised.
target_steps = np.inf
else:
# Loop over dataset for the specified number of steps.
target_steps = steps_per_epoch
step = 0
while step < target_steps:
batch_data = _get_next_batch(generator)
if batch_data is None:
if is_dataset:
# The dataset passed by the user ran out of batches. Now we
# know the cardinality of the dataset. If steps_per_epoch
# was specified, then running out of data is unexpected, so
# we stop training and inform the user.
if steps_per_epoch:
callbacks.model.stop_training = True
logging.warning(
"Your dataset ran out of data; interrupting "
"training. Make sure that your dataset can "
"generate at least `%s * epochs` batches (in "
"this case, %d batches). You may need to use "
"the repeat() function when building your dataset."
% (steps_name, steps_per_epoch * epochs)
)
elif step > 0:
steps_per_epoch = step
aggregator.steps = steps_per_epoch
else:
# We ran out of batches while the user passed an iterator
# (legacy).
callbacks.model.stop_training = True
logging.warning(
"Your dataset iterator ran out of data; "
"interrupting training. Make sure that your iterator "
"can generate at least `%s * epochs` "
"batches (in this case, %d batches). You may need to"
"use the repeat() function when building your "
"dataset." % (steps_name, steps_per_epoch * epochs)
)
break
# `batch_size` used for validation data if validation
# data is NumPy/EagerTensors.
batch_size = int(tf.nest.flatten(batch_data)[0].shape[0])
# Callbacks batch begin.
batch_logs = {"batch": step, "size": batch_size}
callbacks._call_batch_hook(mode, "begin", step, batch_logs)
is_deferred = not model._is_compiled
batch_outs = batch_function(*batch_data)
if not isinstance(batch_outs, list):
batch_outs = [batch_outs]
if step == 0:
aggregator.create(batch_outs)
if is_deferred:
# Set callbacks params. We do this here when model is
# compiled only in the first iteration of this loop
# (deferred build scenario).
cbks.set_callback_parameters(
callbacks,
model,
do_validation=do_validation,
batch_size=batch_size,
epochs=epochs,
steps_per_epoch=steps_per_epoch,
samples=num_samples_or_steps,
verbose=verbose,
mode=mode,
)
# Aggregate results.
aggregator.aggregate(batch_outs)
# Callbacks batch end.
batch_logs = callbacks.make_logs(
model, batch_logs, batch_outs, mode
)
callbacks._call_batch_hook(mode, "end", step, batch_logs)
step += 1
if callbacks.model.stop_training:
break
aggregator.finalize()
results = aggregator.results
epoch_logs = callbacks.make_logs(model, epoch_logs, results, mode)
if len(results) == 1:
results = results[0]
# Run the test loop every epoch during training.
if (
do_validation
and training_utils_v1.should_run_validation(validation_freq, epoch)
and not callbacks.model.stop_training
):
val_results = model_iteration(
model,
validation_data,
steps_per_epoch=validation_steps,
batch_size=batch_size,
class_weight=class_weight,
workers=workers,
use_multiprocessing=use_multiprocessing,
max_queue_size=max_queue_size,
callbacks=callbacks,
verbose=verbose,
mode=ModeKeys.TEST,
steps_name="validation_steps",
)
if not isinstance(val_results, list):
val_results = [val_results]
epoch_logs = callbacks.make_logs(
model, epoch_logs, val_results, mode, prefix="val_"
)
if mode == ModeKeys.TRAIN:
# Epochs only apply to `fit`.
callbacks.on_epoch_end(epoch, epoch_logs)
# Recreate dataset iterator for the next epoch.
if reset_dataset_after_each_epoch and epoch < epochs - 1:
generator = tf.compat.v1.data.make_one_shot_iterator(
original_dataset
)
model._successful_loop_finish = True
callbacks._call_end_hook(mode)
if enqueuer is not None:
enqueuer.stop()
if should_set_learning_phase:
learning_phase_scope.__exit__(None, None, None)
if mode == ModeKeys.TRAIN:
return model.history
return results
# Maintain compatibility with the existing names.
fit_generator = functools.partial(model_iteration, mode=ModeKeys.TRAIN)
evaluate_generator = functools.partial(
model_iteration, mode=ModeKeys.TEST, shuffle=False
)
predict_generator = functools.partial(
model_iteration, mode=ModeKeys.PREDICT, shuffle=False
)
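# A minimal, hedged usage sketch for these aliases (the model, generators and
# step counts below are assumed placeholders, not values from this module):
#
#   history = fit_generator(
#       model, train_gen, steps_per_epoch=100, epochs=5, verbose=2)
#   eval_results = evaluate_generator(model, val_gen, steps=20)
#   predictions = predict_generator(model, test_gen, steps=20)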
def _get_next_batch(generator):
"""Retrieves the next batch of input data."""
try:
generator_output = next(generator)
except (StopIteration, tf.errors.OutOfRangeError):
return None
if not isinstance(generator_output, tuple):
# Always wrap in a tuple.
generator_output = (generator_output,)
if len(generator_output) not in [1, 2, 3]:
raise ValueError(
"Output of generator should be a tuple of 1 or 2 or 3 "
"elements: (input,) or (input, target) or "
"(input, target, sample_weights). Received {}".format(
generator_output
)
)
return generator_output
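# For reference, a hedged sketch of the generator output forms accepted above
# (the arrays are illustrative placeholders):
#
#   import numpy as np
#   def toy_generator():
#       x = np.zeros((8, 4), dtype="float32")
#       y = np.zeros((8, 1), dtype="float32")
#       sw = np.ones((8,), dtype="float32")
#       yield (x,)        # inputs only (e.g. PREDICT mode)
#       yield (x, y)      # inputs and targets
#       yield (x, y, sw)  # inputs, targets and sample weights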
def _validate_arguments(
is_sequence,
is_dataset,
use_multiprocessing,
workers,
steps_per_epoch,
validation_data,
validation_steps,
mode,
kwargs,
):
"""Raises errors if arguments are invalid.
Args:
is_sequence: Boolean, whether data is a `keras.utils.data_utils.Sequence`
instance.
is_dataset: Boolean, whether data is a dataset instance.
use_multiprocessing: Boolean. If `True`, use process-based threading. If
unspecified, `use_multiprocessing` will default to `False`. Note that
because this implementation relies on multiprocessing, you should not
pass non-pickleable arguments to the generator as they can't be passed
easily to children processes.
workers: Integer. Maximum number of processes to spin up when using
process-based threading. If unspecified, `workers` will default to 1. If
0, will execute the generator on the main thread.
steps_per_epoch: Total number of steps (batches of samples) before
declaring one epoch finished and starting the next epoch. Ignored with
the default value of `None`.
validation_data: Either a tuple of NumPy/Tensor inputs (i.e. `(x,)` or
`(x, y)` or `(x, y, sample_weights)`) or a generator or
`keras.utils.data_utils.Sequence` object or Eager Iterator or Dataset.
validation_steps: Total number of steps (batches of samples) before
declaring validation finished.
mode: One of ModeKeys.TRAIN/ModeKeys.TEST/ModeKeys.PREDICT.
kwargs: Additional arguments for backwards compatibility.
Raises:
ValueError: If `steps_per_epoch` or `validation_steps` are not passed
for data types that require them, or if unrecognized keyword
arguments are passed.
"""
if not is_sequence and use_multiprocessing and workers > 1:
logging.warning(
UserWarning(
"Using a generator with `use_multiprocessing=True`"
" and multiple workers may duplicate your data."
" Please consider using the `keras.utils.Sequence`"
" class."
)
)
if steps_per_epoch is None and not is_dataset:
arg_name = "steps_per_epoch" if mode == ModeKeys.TRAIN else "steps"
raise ValueError(
f"Please specify the number of steps via the `{arg_name}` argument."
)
val_gen = data_utils.is_generator_or_sequence(
validation_data
) or isinstance(validation_data, tf.data.Iterator)
if (
val_gen
and not isinstance(validation_data, data_utils.Sequence)
and not validation_steps
):
raise ValueError("Please specify the `validation_steps` argument.")
if any(k != "steps" for k in kwargs):
raise ValueError(
f"Invalid arguments passed: {[k for k in kwargs if k != 'steps']}"
)
def convert_to_generator_like(
data, batch_size=None, steps_per_epoch=None, epochs=1, shuffle=False
):
"""Make a generator out of NumPy or EagerTensor inputs.
Args:
data: Either a generator or `keras.utils.data_utils.Sequence` object or
`Dataset`, `Iterator`, or a {1,2,3}-tuple of NumPy arrays or
EagerTensors. If a tuple, the elements represent `(x, y,
sample_weights)` and may be `None` or `[None]`.
batch_size: Used when creating a generator out of tuples of NumPy arrays
or EagerTensors.
steps_per_epoch: Steps of the generator to run each epoch. If `None` the
number of steps will be read from the data (for
`keras.utils.data_utils.Sequence` types).
epochs: Total number of epochs to run.
shuffle: Whether the data should be shuffled.
Returns:
- Generator, `keras.utils.data_utils.Sequence`, or `Iterator`.
Raises:
- ValueError: If `batch_size` is not provided for NumPy or EagerTensor
inputs.
"""
if isinstance(data, tuple):
# Scrub `Nones` that might have been passed for `targets`,
# `sample_weights`.
data = tuple(
ele
for ele in data
if not all(e is None for e in tf.nest.flatten(ele))
)
if data_utils.is_generator_or_sequence(data) or isinstance(
data, tf.data.Iterator
):
if isinstance(data, data_utils.Sequence):
if steps_per_epoch is None:
steps_per_epoch = len(data)
return data, steps_per_epoch
if isinstance(data, tf.data.Dataset):
return tf.compat.v1.data.make_one_shot_iterator(data), steps_per_epoch
# Create generator from NumPy or EagerTensor Input.
num_samples = int(tf.nest.flatten(data)[0].shape[0])
if batch_size is None:
raise ValueError(
"When passing input data as arrays, do not specify "
"`steps_per_epoch`/`steps` argument. "
"Please use `batch_size` instead."
)
steps_per_epoch = int(math.ceil(num_samples / batch_size))
def _gen(data):
"""Makes a generator out of a structure of NumPy/EagerTensors."""
index_array = np.arange(num_samples)
for _ in range(epochs):
if shuffle:
np.random.shuffle(index_array)
batches = generic_utils.make_batches(num_samples, batch_size)
for batch_start, batch_end in batches:
batch_ids = index_array[batch_start:batch_end]
flat_batch_data = training_utils.slice_arrays(
tf.nest.flatten(data), batch_ids, contiguous=(not shuffle)
)
yield tf.nest.pack_sequence_as(data, flat_batch_data)
return _gen(data), steps_per_epoch
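# A small worked example of the NumPy path above, assuming 10 samples and
# batch_size=3 (values chosen for illustration only): the helper yields
# ceil(10 / 3) = 4 batches per epoch, the last one holding a single sample.
#
#   import numpy as np
#   x = np.arange(10, dtype="float32").reshape(10, 1)
#   y = np.zeros((10, 1), dtype="float32")
#   gen, steps = convert_to_generator_like((x, y), batch_size=3, epochs=1)
#   # steps == 4; next(gen) yields a tuple of (3, 1)-shaped arrays.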
def _make_enqueued_generator(
generator,
workers=1,
use_multiprocessing=False,
max_queue_size=10,
shuffle=False,
):
"""Create a buffered queue of next elements of the generator."""
is_sequence = isinstance(generator, data_utils.Sequence)
enqueuer = None
if workers > 0:
if is_sequence:
enqueuer = data_utils.OrderedEnqueuer(
generator,
use_multiprocessing=use_multiprocessing,
shuffle=shuffle,
)
else:
enqueuer = data_utils.GeneratorEnqueuer(
generator, use_multiprocessing=use_multiprocessing
)
enqueuer.start(workers=workers, max_queue_size=max_queue_size)
output_generator = enqueuer.get()
else:
if is_sequence:
output_generator = data_utils.iter_sequence_infinite(generator)
else:
output_generator = generator
return output_generator, enqueuer
def _make_execution_function(model, mode, class_weight=None):
"""Makes function to run one step of model execution."""
if mode == ModeKeys.TRAIN:
f = functools.partial(model.train_on_batch, class_weight=class_weight)
elif mode == ModeKeys.TEST:
f = model.test_on_batch
else:
# Match signature of other modes to allow
# 1, 2, or 3-tuples from generator
def predict_on_batch(x, y=None, sample_weights=None):
return model.predict_on_batch(x)
f = predict_on_batch
# Maintain stateful metrics across batch-level calls.
if mode != ModeKeys.PREDICT:
f = functools.partial(f, reset_metrics=False)
return f
def _get_num_samples_or_steps(data, steps_per_epoch):
"""Returns number of samples or steps, and whether to use steps count
mode."""
flat_inputs = tf.nest.flatten(data)
if hasattr(flat_inputs[0], "shape"):
return int(flat_inputs[0].shape[0]), False
return steps_per_epoch, True
class GeneratorOrSequenceTrainingLoop(training_utils_v1.TrainingLoop):
"""Generator-like.
    Input is a Python generator or a Sequence object.
The difference between this class and `GeneratorLikeTrainingFunction` is
    that this class only handles inputs with x, y and sample_weight fused
into one param.
"""
def fit(
self,
model,
x=None,
y=None,
batch_size=None,
epochs=1,
verbose=1,
callbacks=None,
validation_split=0.0,
validation_data=None,
shuffle=True,
class_weight=None,
sample_weight=None,
initial_epoch=0,
steps_per_epoch=None,
validation_steps=None,
validation_freq=1,
max_queue_size=10,
workers=1,
use_multiprocessing=False,
):
model._validate_or_infer_batch_size(batch_size, steps_per_epoch, x)
training_utils_v1.check_generator_arguments(
y, sample_weight, validation_split=validation_split
)
return fit_generator(
model,
x,
steps_per_epoch=steps_per_epoch,
epochs=epochs,
verbose=verbose,
callbacks=callbacks,
validation_data=validation_data,
validation_steps=validation_steps,
validation_freq=validation_freq,
class_weight=class_weight,
max_queue_size=max_queue_size,
workers=workers,
use_multiprocessing=use_multiprocessing,
shuffle=shuffle,
initial_epoch=initial_epoch,
steps_name="steps_per_epoch",
)
def evaluate(
self,
model,
x=None,
y=None,
batch_size=None,
verbose=1,
sample_weight=None,
steps=None,
callbacks=None,
max_queue_size=10,
workers=1,
use_multiprocessing=False,
):
model._validate_or_infer_batch_size(batch_size, steps, x)
training_utils_v1.check_generator_arguments(y, sample_weight)
return evaluate_generator(
model,
x,
steps=steps,
verbose=verbose,
callbacks=callbacks,
max_queue_size=max_queue_size,
workers=workers,
use_multiprocessing=use_multiprocessing,
)
def predict(
self,
model,
x,
batch_size=None,
verbose=0,
steps=None,
callbacks=None,
max_queue_size=10,
workers=1,
use_multiprocessing=False,
):
model._validate_or_infer_batch_size(batch_size, steps, x)
return predict_generator(
model,
x,
steps=steps,
verbose=verbose,
callbacks=callbacks,
max_queue_size=max_queue_size,
workers=workers,
use_multiprocessing=use_multiprocessing,
)
class EagerDatasetOrIteratorTrainingLoop(training_utils_v1.TrainingLoop):
"""A non-distributed Dataset or iterator in eager execution."""
def fit(
self,
model,
x=None,
y=None,
batch_size=None,
epochs=1,
verbose=1,
callbacks=None,
validation_split=0.0,
validation_data=None,
shuffle=True,
class_weight=None,
sample_weight=None,
initial_epoch=0,
steps_per_epoch=None,
validation_steps=None,
validation_freq=1,
**kwargs,
):
model._validate_or_infer_batch_size(batch_size, steps_per_epoch, x)
# Make sure that y, sample_weights, validation_split are not passed.
training_utils_v1.validate_dataset_input(
x, y, sample_weight, validation_split
)
if (
isinstance(x, (tf.compat.v1.data.Dataset, tf.data.Dataset))
and shuffle
):
training_utils_v1.verify_dataset_shuffled(x)
return fit_generator(
model,
x,
steps_per_epoch=steps_per_epoch,
epochs=epochs,
verbose=verbose,
callbacks=callbacks,
validation_data=validation_data,
validation_steps=validation_steps,
validation_freq=validation_freq,
class_weight=class_weight,
workers=0,
shuffle=shuffle,
initial_epoch=initial_epoch,
steps_name="steps_per_epoch",
)
def evaluate(
self,
model,
x=None,
y=None,
batch_size=None,
verbose=1,
sample_weight=None,
steps=None,
callbacks=None,
**kwargs,
):
model._validate_or_infer_batch_size(batch_size, steps, x)
# Make sure that y, sample_weights, validation_split are not passed.
training_utils_v1.validate_dataset_input(x, y, sample_weight)
return evaluate_generator(
model,
x,
steps=steps,
verbose=verbose,
workers=0,
callbacks=callbacks,
)
def predict(
self,
model,
x,
batch_size=None,
verbose=0,
steps=None,
callbacks=None,
**kwargs,
):
model._validate_or_infer_batch_size(batch_size, steps, x)
return predict_generator(
model,
x,
steps=steps,
verbose=verbose,
workers=0,
callbacks=callbacks,
)
class GeneratorLikeTrainingLoop(training_utils_v1.TrainingLoop):
"""TrainingLoop that handle inputs like python generator.
This is the default handler for most of the input data types, includes
symbolic tensors or Numpy array-like, Datasets and iterators in graph mode
(since they generate symbolic tensors). This Function is used to handle
model with `run_eagerly` = True.
"""
def fit(
self,
model,
x=None,
y=None,
batch_size=None,
epochs=1,
verbose=1,
callbacks=None,
validation_split=0.0,
validation_data=None,
shuffle=True,
class_weight=None,
sample_weight=None,
initial_epoch=0,
steps_per_epoch=None,
validation_steps=None,
validation_freq=1,
**kwargs,
):
batch_size = model._validate_or_infer_batch_size(
batch_size, steps_per_epoch, x
)
x, y, sample_weights = model._standardize_user_data(
x,
y,
sample_weight=sample_weight,
class_weight=class_weight,
batch_size=batch_size,
check_steps=True,
steps_name="steps_per_epoch",
steps=steps_per_epoch,
validation_split=validation_split,
shuffle=shuffle,
)
if validation_data:
validation_data = model._prepare_validation_data(
validation_data, batch_size, validation_steps
)
elif validation_split and 0.0 < validation_split < 1.0:
(
x,
y,
sample_weights,
val_x,
val_y,
val_sample_weights,
) = training_utils_v1.split_training_and_validation_data(
x, y, sample_weights, validation_split
)
validation_data = (val_x, val_y, val_sample_weights)
else:
if validation_steps:
raise ValueError(
"`validation_steps` should not be specified if "
"`validation_data` is None."
)
return fit_generator(
model,
(x, y, sample_weights),
steps_per_epoch=steps_per_epoch,
batch_size=batch_size,
epochs=epochs,
verbose=verbose,
callbacks=callbacks,
validation_data=validation_data,
validation_steps=validation_steps,
validation_freq=validation_freq,
workers=0,
shuffle=shuffle,
initial_epoch=initial_epoch,
steps_name="steps_per_epoch",
)
def evaluate(
self,
model,
x=None,
y=None,
batch_size=None,
verbose=1,
sample_weight=None,
steps=None,
callbacks=None,
**kwargs,
):
batch_size = model._validate_or_infer_batch_size(batch_size, steps, x)
x, y, sample_weights = model._standardize_user_data(
x,
y,
sample_weight=sample_weight,
batch_size=batch_size,
check_steps=True,
steps_name="steps",
steps=steps,
)
return evaluate_generator(
model,
(x, y, sample_weights),
steps=steps,
batch_size=batch_size,
verbose=verbose,
workers=0,
callbacks=callbacks,
)
def predict(
self,
model,
x,
batch_size=None,
verbose=0,
steps=None,
callbacks=None,
**kwargs,
):
batch_size = model._validate_or_infer_batch_size(batch_size, steps, x)
x, _, _ = model._standardize_user_data(
x, check_steps=True, steps_name="steps", steps=steps
)
return predict_generator(
model,
x,
steps=steps,
batch_size=batch_size,
verbose=verbose,
workers=0,
callbacks=callbacks,
)
| tf-keras/tf_keras/engine/training_generator_v1.py/0 | {
"file_path": "tf-keras/tf_keras/engine/training_generator_v1.py",
"repo_id": "tf-keras",
"token_count": 15423
} | 169 |
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""This API defines FeatureColumn abstraction."""
# This file was originally under tf/python/feature_column, and was moved to
# TF-Keras package to remove the reverse dependency from TF to TF-Keras.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import re
import tensorflow.compat.v2 as tf
from tf_keras.engine.base_layer import Layer
from tf_keras.saving import serialization_lib
class _BaseFeaturesLayer(Layer):
"""Base class for DenseFeatures and SequenceFeatures.
Defines common methods and helpers.
Args:
feature_columns: An iterable containing the FeatureColumns to use as
inputs to your model.
expected_column_type: Expected class for provided feature columns.
trainable: Boolean, whether the layer's variables will be updated via
gradient descent during training.
name: Name to give to the DenseFeatures.
**kwargs: Keyword arguments to construct a layer.
Raises:
ValueError: if an item in `feature_columns` doesn't match
`expected_column_type`.
"""
def __init__(
self,
feature_columns,
expected_column_type,
trainable,
name,
partitioner=None,
**kwargs
):
super().__init__(name=name, trainable=trainable, **kwargs)
self._feature_columns = _normalize_feature_columns(feature_columns)
self._state_manager = tf.__internal__.feature_column.StateManager(
self, self.trainable
)
self._partitioner = partitioner
for column in self._feature_columns:
if not isinstance(column, expected_column_type):
raise ValueError(
"Items of feature_columns must be a {}. "
"You can wrap a categorical column with an "
"embedding_column or indicator_column. Given: {}".format(
expected_column_type, column
)
)
def build(self, _):
for column in self._feature_columns:
with tf.compat.v1.variable_scope(
self.name, partitioner=self._partitioner
):
with tf.compat.v1.variable_scope(
_sanitize_column_name_for_variable_scope(column.name)
):
column.create_state(self._state_manager)
super().build(None)
def _output_shape(self, input_shape, num_elements):
"""Computes expected output shape of the dense tensor of the layer.
Args:
input_shape: Tensor or array with batch shape.
num_elements: Size of the last dimension of the output.
Returns:
Tuple with output shape.
"""
raise NotImplementedError("Calling an abstract method.")
def compute_output_shape(self, input_shape):
total_elements = 0
for column in self._feature_columns:
total_elements += column.variable_shape.num_elements()
return self._target_shape(input_shape, total_elements)
def _process_dense_tensor(self, column, tensor):
"""Reshapes the dense tensor output of a column based on expected shape.
Args:
column: A DenseColumn or SequenceDenseColumn object.
tensor: A dense tensor obtained from the same column.
Returns:
Reshaped dense tensor.
"""
num_elements = column.variable_shape.num_elements()
target_shape = self._target_shape(tf.shape(tensor), num_elements)
return tf.reshape(tensor, shape=target_shape)
def _verify_and_concat_tensors(self, output_tensors):
"""Verifies and concatenates the dense output of several columns."""
_verify_static_batch_size_equality(
output_tensors, self._feature_columns
)
return tf.concat(output_tensors, -1)
def get_config(self):
column_configs = [
tf.__internal__.feature_column.serialize_feature_column(fc)
for fc in self._feature_columns
]
config = {"feature_columns": column_configs}
config["partitioner"] = serialization_lib.serialize_keras_object(
self._partitioner
)
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
@classmethod
def from_config(cls, config, custom_objects=None):
config_cp = config.copy()
columns_by_name = {}
config_cp["feature_columns"] = [
tf.__internal__.feature_column.deserialize_feature_column(
c, custom_objects, columns_by_name
)
for c in config["feature_columns"]
]
config_cp["partitioner"] = serialization_lib.deserialize_keras_object(
config["partitioner"], custom_objects
)
return cls(**config_cp)
def _sanitize_column_name_for_variable_scope(name):
"""Sanitizes user-provided feature names for use as variable scopes."""
invalid_char = re.compile("[^A-Za-z0-9_.\\-]")
return invalid_char.sub("_", name)
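# A quick illustrative example of the sanitization above (the column name is an
# assumed placeholder): characters outside [A-Za-z0-9_.-] become underscores.
#
#   _sanitize_column_name_for_variable_scope("user:age bucket#1")
#   # -> "user_age_bucket_1"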
def _verify_static_batch_size_equality(tensors, columns):
"""Verify equality between static batch sizes.
Args:
tensors: iterable of input tensors.
columns: Corresponding feature columns.
Raises:
ValueError: in case of mismatched batch sizes.
"""
expected_batch_size = None
for i in range(0, len(tensors)):
        # batch_size is a Dimension object.
batch_size = tf.compat.v1.Dimension(
tf.compat.dimension_value(tensors[i].shape[0])
)
if batch_size.value is not None:
if expected_batch_size is None:
bath_size_column_index = i
expected_batch_size = batch_size
elif not expected_batch_size.is_compatible_with(batch_size):
raise ValueError(
"Batch size (first dimension) of each feature must be "
"same. Batch size of columns ({}, {}): ({}, {})".format(
columns[bath_size_column_index].name,
columns[i].name,
expected_batch_size,
batch_size,
)
)
def _normalize_feature_columns(feature_columns):
"""Normalizes the `feature_columns` input.
    This method converts the `feature_columns` input to a list as best as it
    can. In addition, it verifies the type and other properties of
    `feature_columns` that are required by the downstream library.
Args:
feature_columns: The raw feature columns, usually passed by users.
Returns:
The normalized feature column list.
Raises:
ValueError: for any invalid inputs, such as empty, duplicated names, etc.
"""
if isinstance(
feature_columns, tf.__internal__.feature_column.FeatureColumn
):
feature_columns = [feature_columns]
if isinstance(feature_columns, collections.abc.Iterator):
feature_columns = list(feature_columns)
if isinstance(feature_columns, dict):
raise ValueError("Expected feature_columns to be iterable, found dict.")
for column in feature_columns:
if not isinstance(column, tf.__internal__.feature_column.FeatureColumn):
raise ValueError(
"Items of feature_columns must be a FeatureColumn. "
"Given (type {}): {}.".format(type(column), column)
)
if not feature_columns:
raise ValueError("feature_columns must not be empty.")
name_to_column = {}
for column in feature_columns:
if column.name in name_to_column:
raise ValueError(
"Duplicate feature column name found for columns: {} "
"and {}. This usually means that these columns refer to "
"same base feature. Either one must be discarded or a "
"duplicated but renamed item must be inserted in "
"features dict.".format(column, name_to_column[column.name])
)
name_to_column[column.name] = column
return sorted(feature_columns, key=lambda x: x.name)
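# A hedged sketch of what normalization does, assuming two numeric columns with
# illustrative names: a lone column would be wrapped in a list, and the result
# is returned sorted by column name.
#
#   cols = [
#       tf.feature_column.numeric_column("b"),
#       tf.feature_column.numeric_column("a"),
#   ]
#   normalized = _normalize_feature_columns(cols)
#   # -> [numeric_column("a"), numeric_column("b")]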
| tf-keras/tf_keras/feature_column/base_feature_layer.py/0 | {
"file_path": "tf-keras/tf_keras/feature_column/base_feature_layer.py",
"repo_id": "tf-keras",
"token_count": 3644
} | 170 |
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests that Custom Training Loop docs match actual behavior.
The tutorial at https://www.tensorflow.org/tutorials/distribute/custom_training,
defined at
https://github.com/tensorflow/docs/blob/master/site/en/tutorials/distribute/custom_training.ipynb
makes several statements about
* ways to reduce loss terms to the actual training loss, and
* how they compare to the built-in behavior of TF-Keras Model.fit().
This test verifies that these statements match the actual behavior,
under a variety of distribution strategies.
"""
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
from tf_keras.distribute import strategy_combinations
def make_compute_loss_fn(variant, loss_object, GLOBAL_BATCH_SIZE):
"""Returns the `compute_loss()` function as defined in the tutorial."""
if variant == "basic":
# The basic form of the loss function, shown verbatim in the tutorial.
def compute_loss(labels, predictions, model_losses):
per_example_loss = loss_object(labels, predictions)
loss = tf.nn.compute_average_loss(per_example_loss)
if model_losses:
loss += tf.nn.scale_regularization_loss(tf.add_n(model_losses))
return loss
elif variant == "fixed_batch_size":
# The variant that adds a fixed `global_batch_size=` arg
# (described but not shown verbatim).
def compute_loss(labels, predictions, model_losses):
per_example_loss = loss_object(labels, predictions)
loss = tf.nn.compute_average_loss(
per_example_loss, global_batch_size=GLOBAL_BATCH_SIZE
)
if model_losses:
loss += tf.nn.scale_regularization_loss(tf.add_n(model_losses))
return loss
elif variant == "balanced":
# The variant that scales the loss to balance out varying batch sizes
# (described but not shown verbatim).
def compute_loss(labels, predictions, model_losses):
per_example_loss = loss_object(labels, predictions)
loss = tf.nn.compute_average_loss(per_example_loss)
if model_losses:
loss += tf.nn.scale_regularization_loss(tf.add_n(model_losses))
observed_global_batch_size = (
tf.distribute.get_strategy().num_replicas_in_sync
* tf.shape(per_example_loss)[0]
)
loss *= tf.math.divide(
tf.cast(observed_global_batch_size, tf.float32),
tf.cast(GLOBAL_BATCH_SIZE, tf.float32),
)
return loss
else:
raise ValueError(f"Unknown {variant=}")
return compute_loss
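# A small numeric illustration of the difference exercised below, assuming a
# single replica, GLOBAL_BATCH_SIZE = 4, and a trailing batch holding one
# example with per-example loss 1.0 (values are illustrative only):
#
#   per_example_loss = tf.constant([1.0])
#   # "basic": divides by the observed batch size (1) -> contributes 1.0
#   tf.nn.compute_average_loss(per_example_loss)
#   # "fixed_batch_size": divides by the fixed global batch size -> 0.25
#   tf.nn.compute_average_loss(per_example_loss, global_batch_size=4)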
def create_dataset(global_batch_size):
"""Creates the dataset for ImpliedExampleWeightsTest.
It contains two batches: the first has full size, the second just 1 element.
The i-th element `(x,y)` has model input `x = onehot(i)` and label `y = 0`.
"""
n = global_batch_size + 1
ds = tf.data.Dataset.from_tensor_slices((tf.eye(n), tf.zeros([n, 1])))
ds = ds.batch(global_batch_size)
return ds
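# For example, with global_batch_size=2 this yields n=3 one-hot rows split into
# two batches: input shapes (2, 3) and (1, 3), label shapes (2, 1) and (1, 1).
# The trailing singleton batch is what the weighting checks below rely on.
#
#   ds = create_dataset(2)
#   [x.shape.as_list() for x, _ in ds]  # -> [[2, 3], [1, 3]]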
def create_model(n):
"""Creates the model for ImpliedExampleWeightsTest.
The model has three trainable weights of interest, all initialized to 1.0:
* "predicting/kernel:0" of shape [n, 1] maps a one-hot encoded input to
the model output. When used with the MeanAbsoluteError loss, an input
onehot(i) produces a gradient onehot(i) for this weight, subject to
the training loop's loss reduction across examples.
* "activity_regularized/kernel:0" of shape [n, 1] has an activity
regularizer loss in the model so that input onehot(i) produces a
gradient of 1/batch_size * onehot(i) for this weight.
* "weight_regularized:0" of shape [1] has a weight regularizer loss in
the model that produces a gradient of 1 for this weight, independent
of batch size.
"""
inputs = tf.keras.Input(shape=(n,), name="inputs")
predicting = tf.keras.layers.Dense(
1, use_bias=False, kernel_initializer="ones", name="predicting"
)
activity_regularized = tf.keras.layers.Dense(
1,
use_bias=False,
kernel_initializer="ones",
activity_regularizer=tf.keras.regularizers.L1(l1=1.0),
name="activity_regularized",
)
weight_regularized = tf.keras.layers.Dense(
1,
kernel_initializer="zeros",
bias_initializer="ones",
bias_regularizer=tf.keras.regularizers.L1(l1=1.0),
name="weight_regularized",
)
# Make outputs = predicting(inputs), depending on the other Layers as well.
add = tf.keras.layers.Add(name="add")
multiply = tf.keras.layers.Multiply(name="multiply")
outputs = add(
[
predicting(inputs),
multiply(
[np.array([[0.0]], np.float32), activity_regularized(inputs)]
),
multiply(
[np.array([[0.0]], np.float32), weight_regularized(inputs)]
),
]
)
model = tf.keras.Model(inputs, outputs)
return model
def create_loss(**kwargs):
"""Returns the loss to be used with the model from create_model()."""
return tf.keras.losses.MeanAbsoluteError(**kwargs)
def create_optimizer(learning_rate):
"""Returns the optimizer that applies gradients in the most obvious way."""
return tf.keras.optimizers.SGD(learning_rate)
def get_expected_example_weights(
ctl_variant, *, local_batch_size, num_replicas_in_sync
):
"""Returns the weights that examples have in the gradient updates seen."""
global_batch_size = local_batch_size * num_replicas_in_sync
n = global_batch_size + 1
num_batches = 2
expected = dict(
# Examples in a full batch receive the expected gradient weight,
# independent of the CTL variant.
example_prediction_fullbatch=1.0,
example_activity_fullbatch=1.0,
)
if ctl_variant == "basic":
# In the basic variant of the CTL, when a batch of size 1 hits a
# replica, the singleton example receives the weight that is
# normally spread evenly across the local_batch_size.
expected["example_prediction_singleton"] = local_batch_size
expected["example_activity_singleton"] = local_batch_size
# Weight regularization applies equally in each batch,
# irrespective of its size.
expected["total_weight_regularization"] = num_batches
elif ctl_variant == "fixed_batch_size":
# In the CTL variant that fixes GLOBAL_BATCH_SIZE for the reduction
# of prediction losses, the weight of a singleton example is
# reverted to normal for prediction, but activity and weight
# regularization behaves as in the "basic" variant.
expected["example_prediction_singleton"] = 1.0
expected["example_activity_singleton"] = local_batch_size
expected["total_weight_regularization"] = num_batches
elif ctl_variant == "balanced":
# The CTL variant that corrects both prediction and regularization
# losses for the batch size achieves equal weights of examples
# both for the prediction and for an activity regularizer
expected["example_prediction_singleton"] = 1.0
expected["example_activity_singleton"] = 1.0
# Weight regularization, in sync with the other loss terms,
# applies proportional to the number of examples.
expected["total_weight_regularization"] = n / global_batch_size
return expected
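# A worked example of the bookkeeping above, assuming local_batch_size=2 on a
# single replica (so global_batch_size=2, n=3, num_batches=2):
#
#   get_expected_example_weights(
#       "basic", local_batch_size=2, num_replicas_in_sync=1)
#   # -> {"example_prediction_fullbatch": 1.0,
#   #     "example_activity_fullbatch": 1.0,
#   #     "example_prediction_singleton": 2,
#   #     "example_activity_singleton": 2,
#   #     "total_weight_regularization": 2}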
class MaybeStrategyScope:
"""Provides a context allowing no distribution strategy."""
def __init__(self, strategy):
self._strategy = strategy
self._scope = None
def __enter__(self):
if self._strategy:
self._scope = self._strategy.scope()
self._scope.__enter__()
def __exit__(self, exc_type, value, traceback):
if self._strategy:
self._scope.__exit__(exc_type, value, traceback)
self._scope = None
class ImpliedExampleWeightsTest(tf.test.TestCase, parameterized.TestCase):
"""Tests weights of loss terms depending on batch size and training loop."""
@tf.__internal__.distribute.combinations.generate(
tf.__internal__.test.combinations.combine(
strategy=strategy_combinations.all_strategies
+ strategy_combinations.multiworker_strategies
+ [None],
ctl_variant=["basic", "fixed_batch_size", "balanced"],
)
)
def test_ctl(self, strategy, ctl_variant):
"""Tests a variant of the CTL under a distribution strategy."""
if strategy is None:
num_replicas_in_sync = 1
else:
num_replicas_in_sync = strategy.num_replicas_in_sync
local_batch_size = 2 # For a full batch; greater than 1.
global_batch_size = local_batch_size * num_replicas_in_sync
ds = create_dataset(global_batch_size)
if strategy is not None:
ds = strategy.experimental_distribute_dataset(ds)
n = global_batch_size + 1
learning_rate = 0.01
with MaybeStrategyScope(strategy):
model = create_model(n)
loss_object = create_loss(reduction=tf.keras.losses.Reduction.NONE)
compute_loss = make_compute_loss_fn(
ctl_variant, loss_object, global_batch_size
)
optimizer = create_optimizer(learning_rate)
def train_step(inputs):
x, labels = inputs
with tf.GradientTape() as tape:
predictions = model(x, training=True)
loss = compute_loss(labels, predictions, model.losses)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(
zip(gradients, model.trainable_variables)
)
return loss
@tf.function
def wrapped_train_step(inputs):
if strategy is None:
return train_step(inputs)
else:
per_replica_losses = strategy.run(
train_step, args=(inputs,)
)
return strategy.reduce(
tf.distribute.ReduceOp.SUM,
per_replica_losses,
axis=None,
)
num_epochs = 1
num_batches = 0
for epoch in range(num_epochs):
total_loss = 0.0
for x in ds:
total_loss += wrapped_train_step(x)
num_batches += 1
train_loss = total_loss / num_batches
self.assertTrue(tf.math.is_finite(train_loss).numpy())
self.assertEqual(num_batches, 2)
expected = get_expected_example_weights(
ctl_variant,
local_batch_size=local_batch_size,
num_replicas_in_sync=num_replicas_in_sync,
)
self.assert_implied_example_weights(
model,
**expected,
rtol=1e-6 if strategy is None else 1e-4,
learning_rate=learning_rate,
global_batch_size=global_batch_size,
)
@tf.__internal__.distribute.combinations.generate(
tf.__internal__.test.combinations.combine(
strategy=strategy_combinations.all_strategies
+ strategy_combinations.multiworker_strategies
+ [None],
)
)
def test_fit(self, strategy):
"""Tests Model.fit()."""
if strategy is None:
num_replicas_in_sync = 1
else:
num_replicas_in_sync = strategy.num_replicas_in_sync
local_batch_size = 2 # For a full batch; greater than 1.
global_batch_size = local_batch_size * num_replicas_in_sync
ds = create_dataset(global_batch_size)
n = global_batch_size + 1
learning_rate = 0.01
with MaybeStrategyScope(strategy):
model = create_model(n)
model.compile(
optimizer=create_optimizer(learning_rate), loss=create_loss()
)
epochs = 1
steps_per_epoch = 2
model.fit(ds, epochs=epochs, steps_per_epoch=steps_per_epoch)
expected = get_expected_example_weights(
ctl_variant="basic", # The tutorial claims this consistency!
local_batch_size=local_batch_size,
num_replicas_in_sync=num_replicas_in_sync,
)
self.assert_implied_example_weights(
model,
**expected,
rtol=1e-6 if strategy is None else 1e-4,
learning_rate=learning_rate,
global_batch_size=global_batch_size,
)
def assert_implied_example_weights(
self,
model,
*,
learning_rate,
global_batch_size,
rtol,
example_prediction_fullbatch,
example_prediction_singleton,
example_activity_fullbatch,
example_activity_singleton,
total_weight_regularization,
):
"""Checks model.weights for the expected effects of training."""
model_weights = {
v.name: self._get_var_value(v).numpy()
for v in model.trainable_variables
}
# The total weight received by each one-hot example in the prediction
# loss is the change of its corresponding weight from the initial
# value 1, adjusted for the expected averaging by global_batch_size and
# scaling by SGD's learning_rate.
predicting_kernel = model_weights["predicting/kernel:0"]
example_prediction_weights = (
(1.0 - predicting_kernel) / learning_rate * global_batch_size
)
# There was one full batch of examples, followed by a singleton.
self.assertEqual(predicting_kernel.shape, (global_batch_size + 1, 1))
# Check the examples in the full batch.
actual_example_prediction_fullbatch = self.reduce_assert_equal(
example_prediction_weights[:-1, 0]
)
self.assertAllClose(
example_prediction_fullbatch,
actual_example_prediction_fullbatch,
rtol=rtol,
)
# Check the singleton example after the full batch.
actual_example_prediction_singleton = example_prediction_weights[-1, 0]
self.assertAllClose(
example_prediction_singleton,
actual_example_prediction_singleton,
rtol=rtol,
)
        # Analogously to predictions, check weights for activity regularization.
activity_regularized_kernel = model_weights[
"activity_regularized/kernel:0"
]
example_activity_weights = (
(1.0 - activity_regularized_kernel)
/ learning_rate
* global_batch_size
)
self.assertEqual(
activity_regularized_kernel.shape, (global_batch_size + 1, 1)
)
actual_example_activity_fullbatch = self.reduce_assert_equal(
example_activity_weights[:-1, 0]
)
self.assertAllClose(
example_activity_fullbatch,
actual_example_activity_fullbatch,
rtol=rtol,
)
actual_example_activity_singleton = example_activity_weights[-1, 0]
self.assertAllClose(
example_activity_singleton,
actual_example_activity_singleton,
rtol=rtol,
)
# The total weight of weight regularization is the change of this
# (otherwise unused) bias term from its initial value 1,
# adjusted for the expected scaling by SGD's learning_rate.
        actual_total_weight_regularization = (
            1.0 - model_weights["weight_regularized/bias:0"][0]
        ) / learning_rate
        self.assertAllClose(
            total_weight_regularization,
            actual_total_weight_regularization,
rtol=rtol,
)
def reduce_assert_equal(self, x):
"""Returns first element of x and asserts all others are equal."""
result = x[0]
for i, value in enumerate(x[1:]):
self.assertAllEqual(result, value, msg=f"at position {i=}")
return result
def _get_var_value(self, var):
"""Returns the (unique) value of a (possibly distributed) Variable."""
if hasattr(var, "values"): # Distributed.
result = self.reduce_assert_equal([v.value() for v in var.values])
else:
result = var.value()
return result
if __name__ == "__main__":
tf.__internal__.distribute.multi_process_runner.test_main()
| tf-keras/tf_keras/integration_test/ctl_tutorial_test.py/0 | {
"file_path": "tf-keras/tf_keras/integration_test/ctl_tutorial_test.py",
"repo_id": "tf-keras",
"token_count": 7533
} | 171 |
"""Model that incorporates a set of edge case development patterns.
"""
import tensorflow as tf
from tensorflow import keras
from tf_keras.integration_test.models.input_spec import InputSpec
INPUT_DIM = 32
NUM_CLASSES = 5
def get_data_spec(batch_size):
return (
InputSpec((batch_size, INPUT_DIM)),
InputSpec((batch_size, NUM_CLASSES)),
)
def get_input_preprocessor():
return None
class LinearA(keras.layers.Layer):
"""Standard custom layer with 2 call() inputs."""
def __init__(self, units=32, input_dim=32):
super().__init__()
self.w = self.add_weight(
shape=(input_dim, units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(units,), initializer="zeros", trainable=True
)
def call(self, inputs_1, inputs_2):
return (
tf.matmul(inputs_1, self.w) + tf.matmul(inputs_2, self.w) + self.b
)
class LinearB(keras.layers.Layer):
"""Layer that tracks weights in a dict attribute that gets updated later."""
def __init__(self, units=32, input_dim=32, **kwargs):
super().__init__(**kwargs)
w_init = tf.random_normal_initializer()
b_init = tf.zeros_initializer()
self.state = {
"kernel": tf.Variable(
initial_value=w_init(shape=(input_dim, units), dtype="float32"),
trainable=True,
name="kernel",
)
}
self.state["bias"] = tf.Variable(
initial_value=b_init(shape=(units,), dtype="float32"),
trainable=True,
name="bias",
)
def call(self, inputs):
return tf.matmul(inputs, self.state["kernel"]) + self.state["bias"]
class LinearC(keras.layers.Layer):
"""Layer that creates weights in call()."""
def __init__(self, units=32, input_dim=32, **kwargs):
super().__init__(**kwargs)
self._custom_built = False
self.units = units
self.input_dim = input_dim
def call(self, inputs):
if not self._custom_built:
self.w = self.add_weight(
shape=(self.input_dim, self.units),
initializer="random_normal",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,), initializer="zeros", trainable=True
)
self._custom_built = True
return tf.matmul(inputs, self.w) + self.b
class BatchNorm(keras.layers.Layer):
"""Layer with different training/test behavior and non-trainable updates."""
def __init__(
self, scale=True, center=True, epsilon=1e-6, momentum=0.9, **kwargs
):
super().__init__(**kwargs)
self.scale = scale
self.center = center
self.epsilon = epsilon
self.momentum = momentum
def build(self, input_shape):
self.var = self.add_weight(
shape=[input_shape[1]], initializer="ones", trainable=False
)
self.mean = self.add_weight(
shape=[input_shape[1]], initializer="zeros", trainable=False
)
self.gamma = self.add_weight(shape=[input_shape[1]], initializer="ones")
self.beta = self.add_weight(shape=[input_shape[1]], initializer="zeros")
def call(self, inputs, training=False):
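        # In training mode, normalize with the current batch statistics and
        # update the running mean/variance in place; the update factor 0.1 is
        # hard-coded to match the default momentum of 0.9. In inference mode,
        # use the accumulated running statistics instead.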
if training:
mean, var = tf.nn.moments(inputs, axes=[0])
outputs = (inputs - mean) / (var + self.epsilon)
self.var.assign(self.var * self.momentum + var * 0.1)
self.mean.assign(self.mean * self.momentum + mean * 0.1)
else:
outputs = (inputs - self.mean) / (self.var + self.epsilon)
if self.scale:
outputs *= self.gamma
if self.center:
outputs += self.beta
return outputs
class FunctionalSubclassModel(keras.Model):
def __init__(self, **kwargs):
inputs = keras.Input((INPUT_DIM,))
x = inputs
x = LinearA(32, INPUT_DIM)(x, x)
x = LinearB(32, 32)(x)
x = LinearC(32, 32)(x)
x = BatchNorm()(x)
outputs = keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)
super().__init__(inputs, outputs, **kwargs)
def get_model(
build=False, compile=False, jit_compile=False, include_preprocessing=True
):
model = FunctionalSubclassModel()
if compile:
model.compile("rmsprop", "mse", jit_compile=jit_compile)
return model
def get_custom_objects():
return {
"LinearA": LinearA,
"LinearB": LinearB,
"LinearC": LinearC,
"BatchNorm": BatchNorm,
}
| tf-keras/tf_keras/integration_test/models/edge_case_model.py/0 | {
"file_path": "tf-keras/tf_keras/integration_test/models/edge_case_model.py",
"repo_id": "tf-keras",
"token_count": 2183
} | 172 |
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for ClusterCoordinator and TF-Keras models."""
import multiprocessing
import os
import random
import tempfile
import numpy as np
import portpicker
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
from tf_keras.testing_infra import test_utils
# These vocabularies usually come from TFT or a Beam pipeline.
FEATURE_VOCAB = [
"avenger",
"ironman",
"batman",
"hulk",
"spiderman",
"kingkong",
"wonder_woman",
]
LABEL_VOCAB = ["yes", "no"]
def create_in_process_cluster(num_workers, num_ps):
"""Creates and starts local servers and returns the cluster_resolver."""
worker_ports = [portpicker.pick_unused_port() for _ in range(num_workers)]
ps_ports = [portpicker.pick_unused_port() for _ in range(num_ps)]
cluster_dict = {}
cluster_dict["worker"] = [f"localhost:{port}" for port in worker_ports]
if num_ps > 0:
cluster_dict["ps"] = [f"localhost:{port}" for port in ps_ports]
cluster_spec = tf.train.ClusterSpec(cluster_dict)
# Workers need some inter_ops threads to work properly.
worker_config = tf.compat.v1.ConfigProto()
if multiprocessing.cpu_count() < num_workers + 1:
worker_config.inter_op_parallelism_threads = num_workers + 1
for i in range(num_workers):
tf.distribute.Server(
cluster_spec,
job_name="worker",
task_index=i,
config=worker_config,
protocol="grpc",
)
for i in range(num_ps):
tf.distribute.Server(
cluster_spec, job_name="ps", task_index=i, protocol="grpc"
)
return cluster_spec
@test_utils.run_v2_only
class KPLTest(tf.test.TestCase, parameterized.TestCase):
def setUp(self):
super().setUp()
cluster_spec = create_in_process_cluster(num_workers=3, num_ps=2)
cluster_resolver = tf.distribute.cluster_resolver.SimpleClusterResolver(
cluster_spec, rpc_layer="grpc"
)
self.strategy = tf.distribute.experimental.ParameterServerStrategy(
cluster_resolver
)
self.coordinator = (
tf.distribute.experimental.coordinator.ClusterCoordinator(
self.strategy
)
)
def define_kpls_for_training(self, use_adapt):
        # Define KPLs under the strategy's scope. Right now, if they have lookup
# tables, they will be created on the client. Their variables will be
# created on PS. Ideally they should be cached on each worker since they
# will not be changed in a training step.
if use_adapt:
feature_lookup_layer = tf.keras.layers.StringLookup(
num_oov_indices=1
)
feature_lookup_layer.adapt(FEATURE_VOCAB)
label_lookup_layer = tf.keras.layers.StringLookup(
num_oov_indices=0, mask_token=None
)
label_lookup_layer.adapt(LABEL_VOCAB)
else:
# Do vocab shuffling.
shuffled_vocab = FEATURE_VOCAB.copy()
random.shuffle(shuffled_vocab)
feature_lookup_layer = tf.keras.layers.StringLookup(
vocabulary=shuffled_vocab, num_oov_indices=1
)
label_lookup_layer = tf.keras.layers.StringLookup(
vocabulary=LABEL_VOCAB, num_oov_indices=0, mask_token=None
)
raw_feature_input = tf.keras.Input(
shape=(3,), dtype=tf.string, name="feature", ragged=True
)
feature_id_input = feature_lookup_layer(raw_feature_input)
# Model creates variables as well.
feature_ps = tf.keras.Model(
{"features": raw_feature_input}, feature_id_input
)
raw_label_input = tf.keras.Input(
shape=(1,), dtype=tf.string, name="label"
)
label_id_input = label_lookup_layer(raw_label_input)
label_ps = tf.keras.Model({"label": raw_label_input}, label_id_input)
return feature_ps, label_ps
def define_reverse_lookup_layer(self):
# Only needed for serving.
label_inverse_lookup_layer = tf.keras.layers.StringLookup(
num_oov_indices=0,
mask_token=None,
vocabulary=LABEL_VOCAB,
invert=True,
)
return label_inverse_lookup_layer
@tf.__internal__.distribute.combinations.generate(
tf.__internal__.test.combinations.combine(
mode=["eager"],
use_adapt=[True, False],
test_training_with_loaded=[True, False],
# TODO(b/1949359300): `load_for_serving_under_strategy=True` flakily
# times out.
load_for_serving_under_strategy=[False],
)
)
def testTrainAndLoadAndServe(
self,
use_adapt,
test_training_with_loaded,
load_for_serving_under_strategy,
):
# test_training_with_loaded=False tests distributed training with newly
# constructed KPL, while test_training_with_loaded=True tests
# distributed training with a loaded KPL which was created under
# strategy scope as well.
#
        # load_for_serving_under_strategy tests serving with a model loaded
        # either under a distribution strategy or without one.
with self.coordinator.strategy.scope():
feature_ps, label_ps = self.define_kpls_for_training(use_adapt)
if test_training_with_loaded:
saved_kpl_dir = tempfile.mkdtemp(dir=self.get_temp_dir())
feature_ps_dir = os.path.join(saved_kpl_dir, "feature")
label_ps_dir = os.path.join(saved_kpl_dir, "label")
feature_ps.save(feature_ps_dir)
label_ps.save(label_ps_dir)
del feature_ps, label_ps
feature_ps = tf.keras.models.load_model(feature_ps_dir)
label_ps = tf.keras.models.load_model(label_ps_dir)
def dataset_fn():
def feature_and_label_gen():
while True:
features = random.sample(FEATURE_VOCAB, 3)
label = ["yes"] if "avenger" in features else ["no"]
yield {"features": features, "label": label}
# The dataset will be created on the coordinator.
raw_dataset = (
tf.data.Dataset.from_generator(
feature_and_label_gen,
output_signature={
"features": tf.TensorSpec([3], tf.string),
"label": tf.TensorSpec([1], tf.string),
},
)
.shuffle(100)
.batch(32)
)
train_dataset = raw_dataset.map(
lambda x: (
{"features": feature_ps(x["features"])},
label_ps(x["label"]),
)
)
return train_dataset
# Create the model. The input needs to be compatible with KPLs.
model_input = tf.keras.Input(
shape=(3,), dtype=tf.int64, name="model_input"
)
# input_dim includes a mask token and an oov token.
emb_output = tf.keras.layers.Embedding(
input_dim=len(FEATURE_VOCAB) + 2, output_dim=20
)(model_input)
emb_output = tf.reduce_mean(emb_output, axis=1)
dense_output = tf.keras.layers.Dense(units=1, activation="sigmoid")(
emb_output
)
model = tf.keras.Model({"features": model_input}, dense_output)
optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.1)
accuracy = tf.keras.metrics.Accuracy()
@tf.function
def worker_fn(iterator):
def replica_fn(iterator):
batch_data, labels = next(iterator)
with tf.GradientTape() as tape:
pred = model(batch_data, training=True)
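                    # Average per-example losses over the global batch so that
                    # gradients summed across replicas have the right scale.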
loss = tf.nn.compute_average_loss(
tf.keras.losses.BinaryCrossentropy(
reduction=tf.keras.losses.Reduction.NONE
)(labels, pred)
)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(
zip(gradients, model.trainable_variables)
)
actual_pred = tf.cast(tf.greater(pred, 0.5), tf.int64)
accuracy.update_state(labels, actual_pred)
self.coordinator.strategy.run(replica_fn, args=(iterator,))
distributed_dataset = self.coordinator.create_per_worker_dataset(
dataset_fn
)
distributed_iterator = iter(distributed_dataset)
for _ in range(4):
accuracy.reset_state()
for _ in range(7):
self.coordinator.schedule(
worker_fn, args=(distributed_iterator,)
)
self.coordinator.join()
self.assertGreater(accuracy.result().numpy(), 0.5)
# Create a saved model.
model.feature_ps = feature_ps
model.label_ps = label_ps
model.label_inverse_lookup_layer = self.define_reverse_lookup_layer()
def create_serving_signature(model):
@tf.function
def serve_fn(raw_features):
raw_features = tf.expand_dims(raw_features, axis=0)
transformed_features = model.feature_ps(raw_features)
outputs = model(transformed_features)
outputs = tf.squeeze(outputs, axis=0)
outputs = tf.cast(tf.greater(outputs, 0.5), tf.int64)
decoded_outputs = model.label_inverse_lookup_layer(outputs)
return tf.squeeze(decoded_outputs, axis=0)
            # The serving signature does NOT have a batch dimension.
return serve_fn.get_concrete_function(
tf.TensorSpec(shape=(3), dtype=tf.string, name="example")
)
serving_fn = create_serving_signature(model)
saved_model_dir = tempfile.mkdtemp(dir=self.get_temp_dir())
model.save(saved_model_dir, signatures={"serving_default": serving_fn})
if load_for_serving_under_strategy:
with self.coordinator.strategy.scope():
loaded_serving_fn = tf.keras.models.load_model(
saved_model_dir
).signatures["serving_default"]
outputs = []
for _ in range(7):
outputs.append(
self.coordinator.schedule(
loaded_serving_fn,
args=(tf.constant(["avenger", "ironman", "avenger"]),),
)
)
self.coordinator.join()
for prediction0 in outputs:
self.assertIn(
prediction0._get_values()["output_0"], ("yes", "no")
)
else:
loaded_serving_fn = tf.keras.models.load_model(
saved_model_dir
).signatures["serving_default"]
# check the result w/ and w/o avenger.
prediction0 = loaded_serving_fn(
tf.constant(["avenger", "ironman", "avenger"])
)["output_0"]
self.assertIn(prediction0, ("yes", "no"))
prediction1 = loaded_serving_fn(
tf.constant(["ironman", "ironman", "unknown"])
)["output_0"]
self.assertIn(prediction1, ("yes", "no"))
@test_utils.run_v2_only
class KPLCreatedInDatasetsFromFunctionTest(
tf.test.TestCase, parameterized.TestCase
):
def setUp(self):
super().setUp()
cluster_spec = create_in_process_cluster(num_workers=3, num_ps=2)
cluster_resolver = tf.distribute.cluster_resolver.SimpleClusterResolver(
cluster_spec, rpc_layer="grpc"
)
self.strategy = tf.distribute.experimental.ParameterServerStrategy(
cluster_resolver
)
self.coordinator = (
tf.distribute.experimental.coordinator.ClusterCoordinator(
self.strategy
)
)
def testKPLCreatedInDatasetsFromFunction(self):
filepath = os.path.join(self.get_temp_dir(), "vocab")
with open(filepath, "w") as f:
f.write("\n".join(["earth", "wind", "and", "fire"]))
def per_worker_dataset_fn():
def dataset_fn(input_context):
del input_context
lookup_layer = tf.keras.layers.StringLookup(
num_oov_indices=1, vocabulary=filepath
)
x = np.array(
[
["earth", "wind", "and", "fire"],
["fire", "and", "earth", "michigan"],
]
)
y = np.array([0, 1])
map_fn = lambda x, y: (lookup_layer(x), y)
return (
tf.data.Dataset.from_tensor_slices((x, y))
.shuffle(10)
.repeat()
.batch(2)
.map(map_fn)
)
return self.coordinator.strategy.distribute_datasets_from_function(
dataset_fn
)
per_worker_distribute_dataset = (
self.coordinator.create_per_worker_dataset(per_worker_dataset_fn)
)
per_worker_iter = iter(per_worker_distribute_dataset)
@tf.function
def worker_fn(iterator):
def replica_fn(data):
return data
return self.coordinator.strategy.run(
replica_fn, args=(next(iterator),)
)
result = []
for _ in range(10):
result.append(
self.coordinator.schedule(worker_fn, args=(per_worker_iter,))
)
self.coordinator.join()
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/integration_test/parameter_server_keras_preprocessing_test.py/0 | {
"file_path": "tf-keras/tf_keras/integration_test/parameter_server_keras_preprocessing_test.py",
"repo_id": "tf-keras",
"token_count": 7376
} | 173 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Thresholded Rectified Linear Unit activation layer."""
import tensorflow.compat.v2 as tf
from tf_keras import backend
from tf_keras.engine.base_layer import Layer
from tf_keras.utils import tf_utils
# isort: off
from tensorflow.python.util.tf_export import keras_export
@keras_export("keras.layers.ThresholdedReLU")
class ThresholdedReLU(Layer):
"""Thresholded Rectified Linear Unit.
It follows:
```
f(x) = x for x > theta
    f(x) = 0 otherwise
```
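    Example (a small illustrative call; the input values are arbitrary):
    >>> layer = tf.keras.layers.ThresholdedReLU(theta=1.0)
    >>> layer(tf.constant([0.5, 1.0, 1.5, 2.0])).numpy().tolist()
    [0.0, 0.0, 1.5, 2.0]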
Input shape:
Arbitrary. Use the keyword argument `input_shape`
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.
Output shape:
Same shape as the input.
Args:
theta: Float >= 0. Threshold location of activation.
"""
def __init__(self, theta=1.0, **kwargs):
super().__init__(**kwargs)
if theta is None:
raise ValueError(
"Theta of a Thresholded ReLU layer cannot be None, expecting a "
f"float. Received: {theta}"
)
if theta < 0:
raise ValueError(
"The theta value of a Thresholded ReLU layer "
f"should be >=0. Received: {theta}"
)
self.supports_masking = True
self.theta = backend.cast_to_floatx(theta)
def call(self, inputs):
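        # Build a {0, 1} mask from the strict comparison `inputs > theta` and
        # multiply it in, so values at or below theta are zeroed out.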
dtype = self.compute_dtype
return inputs * tf.cast(tf.greater(inputs, self.theta), dtype)
def get_config(self):
config = {"theta": float(self.theta)}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
@tf_utils.shape_type_conversion
def compute_output_shape(self, input_shape):
return input_shape
| tf-keras/tf_keras/layers/activation/thresholded_relu.py/0 | {
"file_path": "tf-keras/tf_keras/layers/activation/thresholded_relu.py",
"repo_id": "tf-keras",
"token_count": 935
} | 174 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Keras abstract base layer for separable nD convolution."""
import tensorflow.compat.v2 as tf
from tf_keras import activations
from tf_keras import constraints
from tf_keras import initializers
from tf_keras import regularizers
from tf_keras.engine.input_spec import InputSpec
from tf_keras.layers.convolutional.base_conv import Conv
class SeparableConv(Conv):
"""Abstract base layer for separable nD convolution.
This layer performs a depthwise convolution that acts separately on
channels, followed by a pointwise convolution that mixes channels.
    If `use_bias` is True, a bias vector is created and added to the output.
It then optionally applies an activation function to produce the final
output.
Args:
rank: An integer, the rank of the convolution, e.g. "2" for 2D
convolution.
filters: Integer, the dimensionality of the output space (i.e. the number
of filters in the convolution).
kernel_size: A tuple or list of integers specifying the spatial
dimensions of the filters. Can be a single integer to specify the same
value for all spatial dimensions.
strides: A tuple or list of integers specifying the strides
of the convolution. Can be a single integer to specify the same value
for all spatial dimensions.
Specifying any `stride` value != 1 is incompatible with specifying
any `dilation_rate` value != 1.
padding: One of `"valid"` or `"same"` (case-insensitive).
`"valid"` means no padding. `"same"` results in padding with zeros
evenly to the left/right or up/down of the input such that output has
the same height/width dimension as the input.
data_format: A string, one of `channels_last` (default) or
`channels_first`. The ordering of the dimensions in the inputs.
`channels_last` corresponds to inputs with shape
`(batch_size, ..., channels)` while `channels_first` corresponds to
inputs with shape `(batch_size, channels, ...)`.
dilation_rate: An integer or tuple/list of 2 integers, specifying
the dilation rate to use for dilated convolution.
Can be a single integer to specify the same value for
all spatial dimensions.
Currently, specifying any `dilation_rate` value != 1 is
incompatible with specifying any stride value != 1.
depth_multiplier: The number of depthwise convolution output channels for
each input channel. The total number of depthwise convolution output
channels will be equal to `num_filters_in * depth_multiplier`.
activation: Activation function to use.
If you don't specify anything, no activation is applied
(see `keras.activations`).
use_bias: Boolean, whether the layer uses a bias.
depthwise_initializer: An initializer for the depthwise convolution kernel
(see `keras.initializers`). If None, then the default initializer
('glorot_uniform') will be used.
pointwise_initializer: An initializer for the pointwise convolution kernel
(see `keras.initializers`). If None, then the default initializer
('glorot_uniform') will be used.
bias_initializer: An initializer for the bias vector. If None, the default
initializer ('zeros') will be used (see `keras.initializers`).
depthwise_regularizer: Optional regularizer for the depthwise
convolution kernel.
pointwise_regularizer: Optional regularizer for the pointwise
convolution kernel.
bias_regularizer: Optional regularizer for the bias vector.
activity_regularizer: Optional regularizer function for the output.
depthwise_constraint: Optional projection function to be applied to the
depthwise kernel after being updated by an `Optimizer` (e.g. used for
norm constraints or value constraints for layer weights). The function
must take as input the unprojected variable and must return the
projected variable (which must have the same shape). Constraints are
not safe to use when doing asynchronous distributed training.
pointwise_constraint: Optional projection function to be applied to the
pointwise kernel after being updated by an `Optimizer`.
bias_constraint: Optional projection function to be applied to the
bias after being updated by an `Optimizer`.
trainable: Boolean, if `True` the weights of this layer will be marked as
trainable (and listed in `layer.trainable_weights`).
"""
def __init__(
self,
rank,
filters,
kernel_size,
strides=1,
padding="valid",
data_format=None,
dilation_rate=1,
depth_multiplier=1,
activation=None,
use_bias=True,
depthwise_initializer="glorot_uniform",
pointwise_initializer="glorot_uniform",
bias_initializer="zeros",
depthwise_regularizer=None,
pointwise_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
depthwise_constraint=None,
pointwise_constraint=None,
bias_constraint=None,
trainable=True,
name=None,
**kwargs,
):
super().__init__(
rank=rank,
filters=filters,
kernel_size=kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
activation=activations.get(activation),
use_bias=use_bias,
bias_initializer=initializers.get(bias_initializer),
bias_regularizer=regularizers.get(bias_regularizer),
activity_regularizer=regularizers.get(activity_regularizer),
bias_constraint=bias_constraint,
trainable=trainable,
name=name,
**kwargs,
)
self.depth_multiplier = depth_multiplier
self.depthwise_initializer = initializers.get(depthwise_initializer)
self.pointwise_initializer = initializers.get(pointwise_initializer)
self.depthwise_regularizer = regularizers.get(depthwise_regularizer)
self.pointwise_regularizer = regularizers.get(pointwise_regularizer)
self.depthwise_constraint = constraints.get(depthwise_constraint)
self.pointwise_constraint = constraints.get(pointwise_constraint)
def build(self, input_shape):
input_shape = tf.TensorShape(input_shape)
channel_axis = self._get_channel_axis()
if input_shape.dims[channel_axis].value is None:
raise ValueError(
"The channel dimension of the inputs should be defined. "
f"The input_shape received is {input_shape}, "
f"where axis {channel_axis} (0-based) "
"is the channel dimension, which found to be `None`."
)
input_dim = int(input_shape[channel_axis])
self.input_spec = InputSpec(
ndim=self.rank + 2, axes={channel_axis: input_dim}
)
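        # The depthwise kernel applies one spatial filter per input channel
        # (repeated `depth_multiplier` times); the pointwise kernel is a
        # 1x...x1 convolution that mixes those outputs into `filters` channels.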
depthwise_kernel_shape = self.kernel_size + (
input_dim,
self.depth_multiplier,
)
pointwise_kernel_shape = (1,) * self.rank + (
self.depth_multiplier * input_dim,
self.filters,
)
self.depthwise_kernel = self.add_weight(
name="depthwise_kernel",
shape=depthwise_kernel_shape,
initializer=self.depthwise_initializer,
regularizer=self.depthwise_regularizer,
constraint=self.depthwise_constraint,
trainable=True,
dtype=self.dtype,
)
self.pointwise_kernel = self.add_weight(
name="pointwise_kernel",
shape=pointwise_kernel_shape,
initializer=self.pointwise_initializer,
regularizer=self.pointwise_regularizer,
constraint=self.pointwise_constraint,
trainable=True,
dtype=self.dtype,
)
if self.use_bias:
self.bias = self.add_weight(
name="bias",
shape=(self.filters,),
initializer=self.bias_initializer,
regularizer=self.bias_regularizer,
constraint=self.bias_constraint,
trainable=True,
dtype=self.dtype,
)
else:
self.bias = None
self.built = True
def call(self, inputs):
raise NotImplementedError
def get_config(self):
config = {
"filters": self.filters,
"kernel_size": self.kernel_size,
"strides": self.strides,
"padding": self.padding,
"data_format": self.data_format,
"depth_multiplier": self.depth_multiplier,
"dilation_rate": self.dilation_rate,
"activation": activations.serialize(self.activation),
"use_bias": self.use_bias,
"depthwise_initializer": initializers.serialize(
self.depthwise_initializer
),
"pointwise_initializer": initializers.serialize(
self.pointwise_initializer
),
"bias_initializer": initializers.serialize(self.bias_initializer),
"depthwise_regularizer": regularizers.serialize(
self.depthwise_regularizer
),
"pointwise_regularizer": regularizers.serialize(
self.pointwise_regularizer
),
"bias_regularizer": regularizers.serialize(self.bias_regularizer),
"activity_regularizer": regularizers.serialize(
self.activity_regularizer
),
"depthwise_constraint": constraints.serialize(
self.depthwise_constraint
),
"pointwise_constraint": constraints.serialize(
self.pointwise_constraint
),
"bias_constraint": constraints.serialize(self.bias_constraint),
}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
| tf-keras/tf_keras/layers/convolutional/base_separable_conv.py/0 | {
"file_path": "tf-keras/tf_keras/layers/convolutional/base_separable_conv.py",
"repo_id": "tf-keras",
"token_count": 4417
} | 175 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Core TF-Keras layers."""
from tf_keras.layers.core.activation import Activation
from tf_keras.layers.core.dense import Dense
from tf_keras.layers.core.einsum_dense import EinsumDense
from tf_keras.layers.core.embedding import Embedding
from tf_keras.layers.core.identity import Identity
from tf_keras.layers.core.lambda_layer import Lambda
from tf_keras.layers.core.masking import Masking
# Required by third_party/py/tensorflow_gnn/tf_keras/keras_tensors.py
from tf_keras.layers.core.tf_op_layer import ClassMethod
from tf_keras.layers.core.tf_op_layer import InstanceMethod
from tf_keras.layers.core.tf_op_layer import InstanceProperty
from tf_keras.layers.core.tf_op_layer import SlicingOpLambda
from tf_keras.layers.core.tf_op_layer import TFOpLambda
from tf_keras.layers.core.tf_op_layer import _delegate_method
from tf_keras.layers.core.tf_op_layer import _delegate_property
# Regularization layers imported for backwards namespace compatibility
from tf_keras.layers.regularization.activity_regularization import (
ActivityRegularization,
)
from tf_keras.layers.regularization.dropout import Dropout
from tf_keras.layers.regularization.spatial_dropout1d import SpatialDropout1D
from tf_keras.layers.regularization.spatial_dropout2d import SpatialDropout2D
from tf_keras.layers.regularization.spatial_dropout3d import SpatialDropout3D
# Reshaping layers imported for backwards namespace compatibility
from tf_keras.layers.reshaping.flatten import Flatten
from tf_keras.layers.reshaping.permute import Permute
from tf_keras.layers.reshaping.repeat_vector import RepeatVector
from tf_keras.layers.reshaping.reshape import Reshape
| tf-keras/tf_keras/layers/core/__init__.py/0 | {
"file_path": "tf-keras/tf_keras/layers/core/__init__.py",
"repo_id": "tf-keras",
"token_count": 713
} | 176 |
"""Test DynamicEmbeddingLayer."""
import numpy as np
import tensorflow as tf
from tf_keras import layers
from tf_keras import models
from tf_keras.callbacks import UpdateEmbeddingCallback
from tf_keras.layers.experimental import dynamic_embedding
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
@test_utils.run_v2_only
class DynamicEmbeddingTest(test_combinations.TestCase):
def test_dynamic_embedding_layer(self):
input_ = np.array([["a", "j", "c", "d", "e"]])
vocab = tf.constant(["a", "b", "c", "d", "e"])
eviction_policy = "LFU"
# Define the layer
layer = dynamic_embedding.DynamicEmbedding(
input_dim=5,
output_dim=2,
input_length=5,
eviction_policy=eviction_policy,
initial_vocabulary=vocab,
)
output = layer(input_)
self.assertTrue(
tf.reduce_all(tf.equal(tf.shape(output), tf.constant([1, 5, 2])))
)
        self.assertTrue(layer.built)
        self.assertTrue(layer.dynamic_lookup_layer.built)
        self.assertTrue(layer.embedding_layer.built)
def test_model_save_load(self):
train_data = np.array(
[
["a", "j", "c", "d", "e"],
["a", "h", "i", "j", "b"],
["i", "h", "c", "j", "e"],
]
)
train_labels = np.array([0, 1, 2])
vocab = tf.constant(["a", "b", "c", "d", "e"])
eviction_policy = "LFU"
# Define the model
model = models.Sequential(
[
dynamic_embedding.DynamicEmbedding(
input_dim=5,
output_dim=2,
input_length=5,
eviction_policy=eviction_policy,
initial_vocabulary=vocab,
name="dynamic_embedding",
),
layers.Flatten(),
layers.Dense(3, activation="softmax"),
]
)
# Compile the model
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
model.fit(
train_data,
train_labels,
epochs=10,
batch_size=1,
)
# Save the model to a temporary file
filepath = self.create_tempdir()
model.save(filepath)
# Load the model from the temporary file
reloaded_model = models.load_model(filepath)
self.assertTrue(
tf.reduce_all(
tf.equal(
model.get_layer(
"dynamic_embedding"
).dynamic_lookup_layer.vocabulary.numpy(),
reloaded_model.get_layer(
"dynamic_embedding"
).dynamic_lookup_layer.vocabulary.numpy(),
)
)
)
def test_dynamic_embedding_layer_with_callback(self):
self.skipTest("copybara failing , b/306414657")
# Generate dummy data
train_data = np.array(
[
["a", "j", "c", "d", "e"],
["a", "h", "i", "j", "b"],
["i", "h", "c", "j", "e"],
]
)
train_labels = np.array([0, 1, 2])
vocab = tf.constant(["a", "b", "c", "d", "e"])
eviction_policy = "LFU"
# Define the model
model = models.Sequential(
[
dynamic_embedding.DynamicEmbedding(
input_dim=5,
output_dim=2,
input_length=5,
eviction_policy=eviction_policy,
initial_vocabulary=vocab,
),
layers.Flatten(),
layers.Dense(3, activation="softmax"),
]
)
# Compile the model
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
update_embedding_callback = UpdateEmbeddingCallback(
model.layers[0],
interval=2,
)
with update_embedding_callback:
result = model.fit(
train_data,
train_labels,
epochs=100,
batch_size=1,
callbacks=[update_embedding_callback],
)
# Assert model trains
        self.assertGreater(result.history["loss"][0], 0)
# assert vocab is updated in DynamicLookup
self.assertTrue(
tf.reduce_all(
tf.not_equal(
model.layers[0].dynamic_lookup_layer.vocabulary, vocab
)
)
)
# assert embedding matrix size
self.assertTrue(
tf.reduce_all(
tf.equal(
tf.shape(model.layers[0].embedding_layer.embeddings),
tf.constant([6, 2]),
)
)
)
def test_embedding_matrix_update(self):
# Generate dummy data
train_data = np.array(
[
["a", "j", "c", "d", "e"],
["a", "h", "i", "j", "b"],
["i", "h", "c", "j", "e"],
]
)
train_labels = np.array([0, 1, 2])
vocab = tf.constant(["a", "b", "c", "d", "e"])
eviction_policy = "LFU"
# Define the model
model = models.Sequential(
[
dynamic_embedding.DynamicEmbedding(
input_dim=5,
output_dim=2,
input_length=5,
eviction_policy=eviction_policy,
initial_vocabulary=vocab,
),
layers.Flatten(),
layers.Dense(3, activation="softmax"),
]
)
# Compile the model
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
# freeze training of all layers
for layer in model.layers:
layer.trainable = False
# define update_embedding_callback to update embedding matrix and
# vocabulary
update_embedding_callback = UpdateEmbeddingCallback(
model.layers[0],
interval=5,
)
embedding_matrix_before = model.layers[0].embedding_layer.get_weights()
with update_embedding_callback:
model.fit(
train_data,
train_labels,
epochs=100,
batch_size=1,
callbacks=[update_embedding_callback],
)
# assert the UpdateEmbeddingCallback did modify the embedding matrix
self.assertNotEqual(
model.layers[0].embedding_layer.get_weights(),
embedding_matrix_before,
)
def test_get_vocabulary(self):
# Generate dummy data
train_data = np.array(
[
["a", "j", "c", "d", "e"],
["a", "h", "i", "j", "b"],
["i", "h", "c", "j", "e"],
]
)
train_labels = np.array([0, 1, 2])
vocab = tf.constant(["a", "b", "c", "d", "e"])
eviction_policy = "LFU"
# Define the model
model = models.Sequential(
[
dynamic_embedding.DynamicEmbedding(
input_dim=5,
output_dim=2,
input_length=5,
eviction_policy=eviction_policy,
initial_vocabulary=vocab,
),
layers.Flatten(),
layers.Dense(3, activation="softmax"),
]
)
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
model.fit(
train_data,
train_labels,
epochs=100,
batch_size=1,
)
vocabulary_output = model.layers[0].get_vocabulary()
self.assertTrue(
tf.reduce_all(
tf.equal(
vocabulary_output,
vocab,
)
)
)
def test_default_initial_vocabulary(self):
train_data = np.array(
[
["a", "j", "c", "d", "e"],
["a", "h", "i", "j", "b"],
["i", "h", "c", "j", "e"],
]
)
train_labels = np.array([0, 1, 2])
eviction_policy = "LFU"
# Define the model
model = models.Sequential(
[
dynamic_embedding.DynamicEmbedding(
input_dim=5,
output_dim=2,
input_length=5,
eviction_policy=eviction_policy,
initial_vocabulary=tf.string,
name="dynamic_embedding",
),
layers.Flatten(),
layers.Dense(3, activation="softmax"),
]
)
# Compile the model
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
model.fit(
train_data,
train_labels,
epochs=10,
batch_size=1,
)
vocabulary_output = model.layers[0].get_vocabulary()
self.assertEqual(vocabulary_output.dtype, tf.string)
self.assertEqual(tf.shape(vocabulary_output)[0], 5)
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/layers/experimental/dynamic_embedding_test.py/0 | {
"file_path": "tf-keras/tf_keras/layers/experimental/dynamic_embedding_test.py",
"repo_id": "tf-keras",
"token_count": 5578
} | 177 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Layer that averages several inputs."""
from tf_keras.layers.merging.base_merge import _Merge
# isort: off
from tensorflow.python.util.tf_export import keras_export
@keras_export("keras.layers.Average")
class Average(_Merge):
"""Layer that averages a list of inputs element-wise.
It takes as input a list of tensors, all of the same shape, and returns
a single tensor (also of the same shape).
Example:
>>> x1 = np.ones((2, 2))
>>> x2 = np.zeros((2, 2))
>>> y = tf.keras.layers.Average()([x1, x2])
>>> y.numpy().tolist()
[[0.5, 0.5], [0.5, 0.5]]
Usage in a functional model:
>>> input1 = tf.keras.layers.Input(shape=(16,))
>>> x1 = tf.keras.layers.Dense(8, activation='relu')(input1)
>>> input2 = tf.keras.layers.Input(shape=(32,))
>>> x2 = tf.keras.layers.Dense(8, activation='relu')(input2)
>>> avg = tf.keras.layers.Average()([x1, x2])
>>> out = tf.keras.layers.Dense(4)(avg)
>>> model = tf.keras.models.Model(inputs=[input1, input2], outputs=out)
Raises:
ValueError: If there is a shape mismatch between the inputs and the shapes
cannot be broadcasted to match.
"""
def _merge_function(self, inputs):
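        # Element-wise sum of all inputs, divided by the number of inputs.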
output = inputs[0]
for i in range(1, len(inputs)):
output += inputs[i]
return output / len(inputs)
@keras_export("keras.layers.average")
def average(inputs, **kwargs):
"""Functional interface to the `tf.keras.layers.Average` layer.
Example:
>>> x1 = np.ones((2, 2))
>>> x2 = np.zeros((2, 2))
    >>> y = tf.keras.layers.average([x1, x2])
>>> y.numpy().tolist()
[[0.5, 0.5], [0.5, 0.5]]
Usage in a functional model:
>>> input1 = tf.keras.layers.Input(shape=(16,))
>>> x1 = tf.keras.layers.Dense(8, activation='relu')(input1)
>>> input2 = tf.keras.layers.Input(shape=(32,))
>>> x2 = tf.keras.layers.Dense(8, activation='relu')(input2)
    >>> avg = tf.keras.layers.average([x1, x2])
>>> out = tf.keras.layers.Dense(4)(avg)
>>> model = tf.keras.models.Model(inputs=[input1, input2], outputs=out)
Args:
inputs: A list of input tensors.
**kwargs: Standard layer keyword arguments.
Returns:
A tensor, the average of the inputs.
Raises:
ValueError: If there is a shape mismatch between the inputs and the shapes
cannot be broadcasted to match.
"""
return Average(**kwargs)(inputs)
| tf-keras/tf_keras/layers/merging/average.py/0 | {
"file_path": "tf-keras/tf_keras/layers/merging/average.py",
"repo_id": "tf-keras",
"token_count": 1184
} | 178 |
# Copyright 2022 The TF-Keras Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Group normalization layer"""
import tensorflow.compat.v2 as tf
from tf_keras import backend
from tf_keras import constraints
from tf_keras import initializers
from tf_keras import regularizers
from tf_keras.layers import InputSpec
from tf_keras.layers import Layer
from tf_keras.utils import tf_utils
# isort: off
from tensorflow.python.util.tf_export import keras_export
@keras_export("keras.layers.GroupNormalization", v1=[])
class GroupNormalization(Layer):
"""Group normalization layer.
Group Normalization divides the channels into groups and computes
within each group the mean and variance for normalization.
Empirically, its accuracy is more stable than batch norm in a wide
range of small batch sizes, if learning rate is adjusted linearly
with batch sizes.
Relation to Layer Normalization:
If the number of groups is set to 1, then this operation becomes nearly
identical to Layer Normalization (see Layer Normalization docs for details).
Relation to Instance Normalization:
If the number of groups is set to the input dimension (number of groups is
equal to number of channels), then this operation becomes identical to
Instance Normalization.
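    Example (a minimal sketch; the shapes and group count below are only
    illustrative):
    >>> x = tf.random.normal((2, 8, 16))
    >>> layer = tf.keras.layers.GroupNormalization(groups=4)
    >>> layer(x).shape
    TensorShape([2, 8, 16])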
Args:
groups: Integer, the number of groups for Group Normalization. Can be in
the range [1, N] where N is the input dimension. The input dimension
must be divisible by the number of groups. Defaults to `32`.
axis: Integer or List/Tuple. The axis or axes to normalize across.
Typically, this is the features axis/axes. The left-out axes are
typically the batch axis/axes. `-1` is the last dimension in the
input. Defaults to `-1`.
epsilon: Small float added to variance to avoid dividing by zero. Defaults
        to 1e-3.
center: If True, add offset of `beta` to normalized tensor. If False,
`beta` is ignored. Defaults to `True`.
scale: If True, multiply by `gamma`. If False, `gamma` is not used.
When the next layer is linear (also e.g. `nn.relu`), this can be
disabled since the scaling will be done by the next layer.
Defaults to `True`.
beta_initializer: Initializer for the beta weight. Defaults to zeros.
gamma_initializer: Initializer for the gamma weight. Defaults to ones.
beta_regularizer: Optional regularizer for the beta weight. None by
default.
gamma_regularizer: Optional regularizer for the gamma weight. None by
default.
beta_constraint: Optional constraint for the beta weight. None by default.
gamma_constraint: Optional constraint for the gamma weight. None by
        default.
    Input shape:
      Arbitrary. Use the keyword argument `input_shape` (tuple of integers,
      does not include the samples axis) when using this layer as the first
      layer in a model.
    Output shape:
      Same shape as input.
Call arguments:
inputs: Input tensor (of any rank).
mask: The mask parameter is a tensor that indicates the weight for each
position in the input tensor when computing the mean and variance.
    Reference:
      - [Yuxin Wu & Kaiming He, 2018](https://arxiv.org/abs/1803.08494)
"""
def __init__(
self,
groups=32,
axis=-1,
epsilon=1e-3,
center=True,
scale=True,
beta_initializer="zeros",
gamma_initializer="ones",
beta_regularizer=None,
gamma_regularizer=None,
beta_constraint=None,
gamma_constraint=None,
**kwargs,
):
super().__init__(**kwargs)
self.supports_masking = True
self.groups = groups
self.axis = axis
self.epsilon = epsilon
self.center = center
self.scale = scale
self.beta_initializer = initializers.get(beta_initializer)
self.gamma_initializer = initializers.get(gamma_initializer)
self.beta_regularizer = regularizers.get(beta_regularizer)
self.gamma_regularizer = regularizers.get(gamma_regularizer)
self.beta_constraint = constraints.get(beta_constraint)
self.gamma_constraint = constraints.get(gamma_constraint)
def build(self, input_shape):
tf_utils.validate_axis(self.axis, input_shape)
dim = input_shape[self.axis]
if dim is None:
raise ValueError(
f"Axis {self.axis} of input tensor should have a defined "
"dimension but the layer received an input with shape "
f"{input_shape}."
)
if self.groups == -1:
self.groups = dim
if dim < self.groups:
raise ValueError(
f"Number of groups ({self.groups}) cannot be more than the "
f"number of channels ({dim})."
)
if dim % self.groups != 0:
raise ValueError(
f"Number of groups ({self.groups}) must be a multiple "
f"of the number of channels ({dim})."
)
self.input_spec = InputSpec(
ndim=len(input_shape), axes={self.axis: dim}
)
if self.scale:
self.gamma = self.add_weight(
shape=(dim,),
name="gamma",
initializer=self.gamma_initializer,
regularizer=self.gamma_regularizer,
constraint=self.gamma_constraint,
)
else:
self.gamma = None
if self.center:
self.beta = self.add_weight(
shape=(dim,),
name="beta",
initializer=self.beta_initializer,
regularizer=self.beta_regularizer,
constraint=self.beta_constraint,
)
else:
self.beta = None
super().build(input_shape)
def call(self, inputs, mask=None):
input_shape = tf.shape(inputs)
if mask is None:
mask = tf.ones_like(inputs)
else:
# We broadcast before we group in case the mask does not have the
# same shape as the input.
mask = tf.broadcast_to(mask, input_shape)
reshaped_inputs = self._reshape_into_groups(inputs)
reshaped_mask = self._reshape_into_groups(mask)
normalized_inputs = self._apply_normalization(
reshaped_inputs=reshaped_inputs,
input_shape=input_shape,
reshaped_mask=reshaped_mask,
)
return tf.reshape(normalized_inputs, input_shape)
def _reshape_into_groups(self, inputs):
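        # Split the channel axis into (groups, channels // groups), e.g. a
        # channels-last input of shape (N, H, W, C) becomes
        # (N, H, W, groups, C // groups), so that per-group moments can be
        # computed in `_apply_normalization`.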
input_shape = tf.shape(inputs)
group_shape = [input_shape[i] for i in range(inputs.shape.rank)]
group_shape[self.axis] = input_shape[self.axis] // self.groups
group_shape.insert(self.axis, self.groups)
group_shape = tf.stack(group_shape)
reshaped_inputs = tf.reshape(inputs, group_shape)
return reshaped_inputs
def _apply_normalization(
self,
*,
reshaped_inputs,
reshaped_mask,
input_shape,
):
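        # Reduce over every axis except the batch axis and the inserted group
        # axis, so each group keeps its own mean and variance.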
group_reduction_axes = list(range(1, reshaped_inputs.shape.rank))
axis = self.axis - 1
group_reduction_axes.pop(axis)
mask_weights = tf.cast(reshaped_mask, reshaped_inputs.dtype)
mean, variance = tf.nn.weighted_moments(
reshaped_inputs,
axes=group_reduction_axes,
frequency_weights=mask_weights,
keepdims=True,
)
gamma, beta = self._get_reshaped_weights(input_shape)
normalized_inputs = tf.nn.batch_normalization(
reshaped_inputs,
mean=mean,
variance=variance,
scale=gamma,
offset=beta,
variance_epsilon=self.epsilon,
)
return normalized_inputs
def _get_reshaped_weights(self, input_shape):
broadcast_shape = self._create_broadcast_shape(input_shape)
gamma = None
beta = None
if self.scale:
gamma = tf.reshape(self.gamma, broadcast_shape)
if self.center:
beta = tf.reshape(self.beta, broadcast_shape)
return gamma, beta
def _create_broadcast_shape(self, input_shape):
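        # `input_shape` here is a 1-D shape tensor, so its static length is
        # the input rank; start from a broadcast shape of all ones and expand
        # the channel axis into (groups, channels // groups).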
broadcast_shape = [1] * backend.int_shape(input_shape)[0]
broadcast_shape[self.axis] = input_shape[self.axis] // self.groups
broadcast_shape.insert(self.axis, self.groups)
return broadcast_shape
def compute_output_shape(self, input_shape):
return input_shape
def get_config(self):
config = {
"groups": self.groups,
"axis": self.axis,
"epsilon": self.epsilon,
"center": self.center,
"scale": self.scale,
"beta_initializer": initializers.serialize(self.beta_initializer),
"gamma_initializer": initializers.serialize(self.gamma_initializer),
"beta_regularizer": regularizers.serialize(self.beta_regularizer),
"gamma_regularizer": regularizers.serialize(self.gamma_regularizer),
"beta_constraint": constraints.serialize(self.beta_constraint),
"gamma_constraint": constraints.serialize(self.gamma_constraint),
}
base_config = super().get_config()
return {**base_config, **config}
| tf-keras/tf_keras/layers/normalization/group_normalization.py/0 | {
"file_path": "tf-keras/tf_keras/layers/normalization/group_normalization.py",
"repo_id": "tf-keras",
"token_count": 4133
} | 179 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Private base class for global pooling 3D layers."""
import tensorflow.compat.v2 as tf
from tf_keras.engine.base_layer import Layer
from tf_keras.engine.input_spec import InputSpec
from tf_keras.utils import conv_utils
class GlobalPooling3D(Layer):
"""Abstract class for different global pooling 3D layers."""
def __init__(self, data_format=None, keepdims=False, **kwargs):
super().__init__(**kwargs)
self.data_format = conv_utils.normalize_data_format(data_format)
self.input_spec = InputSpec(ndim=5)
self.keepdims = keepdims
def _validate_reduction_axis(self, input_shape, axes):
for axis in axes:
if input_shape[axis] == 0:
raise ValueError(
f"Incorrect input shape {input_shape} "
f"with dimension 0 at reduction axis {axis}."
)
def build(self, input_shape):
input_shape = tf.TensorShape(input_shape).as_list()
if self.data_format == "channels_last":
self._validate_reduction_axis(input_shape, [1, 2, 3])
else:
self._validate_reduction_axis(input_shape, [2, 3, 4])
def compute_output_shape(self, input_shape):
input_shape = tf.TensorShape(input_shape).as_list()
if self.data_format == "channels_last":
if self.keepdims:
return tf.TensorShape([input_shape[0], 1, 1, 1, input_shape[4]])
else:
return tf.TensorShape([input_shape[0], input_shape[4]])
else:
if self.keepdims:
return tf.TensorShape([input_shape[0], input_shape[1], 1, 1, 1])
else:
return tf.TensorShape([input_shape[0], input_shape[1]])
def call(self, inputs):
raise NotImplementedError
def get_config(self):
config = {"data_format": self.data_format, "keepdims": self.keepdims}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
| tf-keras/tf_keras/layers/pooling/base_global_pooling3d.py/0 | {
"file_path": "tf-keras/tf_keras/layers/pooling/base_global_pooling3d.py",
"repo_id": "tf-keras",
"token_count": 1074
} | 180 |
# Description:
# Contains the TF-Keras preprocess layers (internal TensorFlow version).
# Placeholder: load unaliased py_library
load("@org_keras//tf_keras:tf_keras.bzl", "tf_py_test")
# buildifier: disable=same-origin-load
load("@org_keras//tf_keras:tf_keras.bzl", "cuda_py_test")
load("@org_keras//tf_keras:tf_keras.bzl", "distribute_py_test")
package(
# copybara:uncomment default_applicable_licenses = ["//tf_keras:license"],
default_visibility = [
"//tf_keras:friends",
"//third_party/tensorflow/tools/pip_package:__pkg__",
],
licenses = ["notice"],
)
py_library(
name = "preprocessing",
srcs = [
"__init__.py",
],
srcs_version = "PY3",
deps = [
":discretization",
":hashed_crossing",
":hashing",
":image_preprocessing",
":integer_lookup",
":normalization",
":preprocessing_stage",
":preprocessing_test_utils",
":string_lookup",
":text_vectorization",
],
)
py_library(
name = "discretization",
srcs = [
"discretization.py",
],
srcs_version = "PY3",
deps = [
":preprocessing_utils",
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras/engine",
"//tf_keras/utils:tf_utils",
],
)
py_library(
name = "hashing",
srcs = [
"hashing.py",
],
srcs_version = "PY3",
deps = [
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras/engine",
],
)
py_library(
name = "hashed_crossing",
srcs = [
"hashed_crossing.py",
],
srcs_version = "PY3",
deps = [
":preprocessing_utils",
"//:expect_tensorflow_installed",
"//tf_keras:backend",
"//tf_keras/engine:base_layer",
"//tf_keras/engine:base_preprocessing_layer",
"//tf_keras/utils:layer_utils",
],
)
py_library(
name = "image_preprocessing",
srcs = [
"image_preprocessing.py",
],
srcs_version = "PY3",
deps = [
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras:backend",
"//tf_keras/engine",
"//tf_keras/engine:input_spec",
"//tf_keras/preprocessing:image",
"//tf_keras/utils:image_utils",
"//tf_keras/utils:tf_utils",
],
)
py_library(
name = "index_lookup",
srcs = [
"index_lookup.py",
],
srcs_version = "PY3",
deps = [
":category_encoding",
":preprocessing_utils",
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras:backend",
"//tf_keras/engine",
],
)
py_library(
name = "normalization",
srcs = [
"normalization.py",
],
srcs_version = "PY3",
deps = [
":preprocessing_utils",
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras:backend",
"//tf_keras/engine",
],
)
py_library(
name = "integer_lookup",
srcs = [
"integer_lookup.py",
],
srcs_version = "PY3",
deps = [
":index_lookup",
"//:expect_tensorflow_installed",
"//tf_keras/engine",
],
)
py_library(
name = "text_vectorization",
srcs = [
"text_vectorization.py",
],
srcs_version = "PY3",
deps = [
":category_encoding",
":preprocessing_utils",
":string_lookup",
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras:backend",
"//tf_keras/engine",
"//tf_keras/utils:layer_utils",
"//tf_keras/utils:tf_utils",
],
)
py_library(
name = "category_encoding",
srcs = [
"category_encoding.py",
],
srcs_version = "PY3",
deps = [
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras:backend",
"//tf_keras/engine",
"//tf_keras/engine:input_spec",
"//tf_keras/utils:layer_utils",
],
)
py_library(
name = "string_lookup",
srcs = [
"string_lookup.py",
],
srcs_version = "PY3",
deps = [
":index_lookup",
"//:expect_tensorflow_installed",
"//tf_keras/engine",
],
)
py_library(
name = "preprocessing_stage",
srcs = [
"preprocessing_stage.py",
],
srcs_version = "PY3",
deps = [
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras/engine",
"//tf_keras/utils:tf_utils",
],
)
py_library(
name = "preprocessing_test_utils",
srcs = ["preprocessing_test_utils.py"],
srcs_version = "PY3",
deps = [
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
],
)
py_library(
name = "preprocessing_utils",
srcs = ["preprocessing_utils.py"],
srcs_version = "PY3",
deps = [
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
],
)
tf_py_test(
name = "preprocessing_utils_test",
srcs = ["preprocessing_utils_test.py"],
python_version = "PY3",
deps = [
":preprocessing_utils",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/utils:generic_utils",
],
)
tf_py_test(
name = "category_encoding_test",
srcs = ["category_encoding_test.py"],
python_version = "PY3",
deps = [
":category_encoding",
":preprocessing_test_utils",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/utils:generic_utils",
],
)
distribute_py_test(
name = "category_encoding_distribution_test",
srcs = ["category_encoding_distribution_test.py"],
disable_mlir_bridge = False,
env = {
"CUDA_MODULE_LOADING": "LAZY",
},
main = "category_encoding_distribution_test.py",
python_version = "PY3",
shard_count = 4,
tags = [
"multi_and_single_gpu",
"no_oss", # b/189866692
"noguitar", # b/190034522
"nomultivm", # TODO(b/170502145)
],
tpu_tags = [
"no_oss", # b/155502591
],
deps = [
":category_encoding",
":preprocessing_test_utils",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras:backend",
"//tf_keras/distribute:strategy_combinations",
"//tf_keras/testing_infra:test_combinations",
],
)
distribute_py_test(
name = "image_preprocessing_distribution_test",
srcs = ["image_preprocessing_distribution_test.py"],
env = {
"CUDA_MODULE_LOADING": "LAZY",
},
main = "image_preprocessing_distribution_test.py",
python_version = "PY3",
shard_count = 4,
tags = [
"multi_and_single_gpu",
"nomultivm", # TODO(b/170502145)
"notpu", # TODO(b/210148622)
],
tpu_tags = [
"no_oss",
"noguitar", # TODO(b/183957207)
],
deps = [
":image_preprocessing",
":preprocessing_test_utils",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/distribute:strategy_combinations",
"//tf_keras/testing_infra:test_combinations",
],
)
tf_py_test(
name = "discretization_test",
srcs = ["discretization_test.py"],
python_version = "PY3",
shard_count = 4,
tags = ["no_rocm"],
deps = [
":discretization",
":preprocessing_test_utils",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
],
)
distribute_py_test(
name = "discretization_distribution_test",
srcs = ["discretization_distribution_test.py"],
env = {
"CUDA_MODULE_LOADING": "LAZY",
},
main = "discretization_distribution_test.py",
python_version = "PY3",
shard_count = 4,
tags = [
"multi_and_single_gpu",
"no_oss", # TODO(b/189956080)
"noguitar", # b/190034522
"nomultivm", # TODO(b/170502145)
],
deps = [
":discretization",
":preprocessing_test_utils",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/distribute:strategy_combinations",
"//tf_keras/testing_infra:test_combinations",
],
)
cuda_py_test(
name = "hashing_test",
srcs = ["hashing_test.py"],
python_version = "PY3",
shard_count = 4,
deps = [
":hashing",
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/engine",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/testing_infra:test_utils",
],
)
distribute_py_test(
name = "hashing_distribution_test",
srcs = ["hashing_distribution_test.py"],
disable_mlir_bridge = False,
env = {
"CUDA_MODULE_LOADING": "LAZY",
},
main = "hashing_distribution_test.py",
python_version = "PY3",
shard_count = 4,
tags = [
"multi_and_single_gpu",
"nomultivm", # TODO(b/170502145)
],
deps = [
":hashing",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/distribute:strategy_combinations",
"//tf_keras/testing_infra:test_combinations",
],
)
tf_py_test(
name = "hashed_crossing_test",
srcs = ["hashed_crossing_test.py"],
python_version = "PY3",
shard_count = 4,
deps = [
":hashing",
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/engine",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/testing_infra:test_utils",
],
)
tf_py_test(
name = "index_lookup_test",
srcs = ["index_lookup_test.py"],
python_version = "PY3",
shard_count = 4,
tags = ["noasan"], # TODO(b/183961255)
deps = [
":index_lookup",
":preprocessing_test_utils",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/utils:generic_utils",
],
)
distribute_py_test(
name = "index_lookup_distribution_test",
srcs = ["index_lookup_distribution_test.py"],
disable_mlir_bridge = False,
env = {
"CUDA_MODULE_LOADING": "LAZY",
},
main = "index_lookup_distribution_test.py",
python_version = "PY3",
shard_count = 4,
tags = [
"multi_and_single_gpu",
"nomultivm", # TODO(b/170502145)
],
tpu_tags = ["no_oss"],
deps = [
":index_lookup",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/distribute:strategy_combinations",
"//tf_keras/testing_infra:test_combinations",
],
)
cuda_py_test(
name = "image_preprocessing_test",
srcs = ["image_preprocessing_test.py"],
python_version = "PY3",
shard_count = 4,
tags = [
"no_windows", # TODO(b/184424727): Re-enable this.
],
deps = [
":image_preprocessing",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/engine",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/testing_infra:test_utils",
"//tf_keras/utils:generic_utils",
],
)
tf_py_test(
name = "normalization_test",
srcs = ["normalization_test.py"],
python_version = "PY3",
shard_count = 4,
tags = [
"noasan", # TODO(b/337374867) fails with -fsanitize=null
],
deps = [
":normalization",
":preprocessing_test_utils",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
],
)
tf_py_test(
name = "integer_lookup_test",
srcs = ["integer_lookup_test.py"],
python_version = "PY3",
tags = ["noasan"], # TODO(b/183961255)
deps = [
":integer_lookup",
":preprocessing_test_utils",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/utils:generic_utils",
],
)
distribute_py_test(
name = "normalization_distribution_test",
srcs = ["normalization_distribution_test.py"],
env = {
"CUDA_MODULE_LOADING": "LAZY",
},
main = "normalization_distribution_test.py",
python_version = "PY3",
shard_count = 8,
tags = [
"no_oss",
"nomultivm", # TODO(b/170502145)
],
deps = [
":normalization",
":preprocessing_test_utils",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/distribute:strategy_combinations",
"//tf_keras/testing_infra:test_combinations",
],
)
tf_py_test(
name = "text_vectorization_test",
srcs = ["text_vectorization_test.py"],
python_version = "PY3",
shard_count = 4,
deps = [
":preprocessing_test_utils",
":text_vectorization",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/utils:generic_utils",
],
)
distribute_py_test(
name = "text_vectorization_distribution_test",
srcs = ["text_vectorization_distribution_test.py"],
disable_mlir_bridge = False,
env = {
"CUDA_MODULE_LOADING": "LAZY",
},
main = "text_vectorization_distribution_test.py",
python_version = "PY3",
shard_count = 8,
tags = [
"multi_and_single_gpu",
"nomultivm", # TODO(b/170502145)
],
tpu_tags = [
"no_oss", # b/155502591
],
deps = [
":preprocessing_test_utils",
":text_vectorization",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/distribute:strategy_combinations",
"//tf_keras/testing_infra:test_combinations",
],
)
tf_py_test(
name = "string_lookup_test",
srcs = ["string_lookup_test.py"],
python_version = "PY3",
tags = [
"notsan", #b/168758821
],
deps = [
":preprocessing_test_utils",
":string_lookup",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/utils:generic_utils",
],
)
tf_py_test(
name = "preprocessing_stage_test",
srcs = ["preprocessing_stage_test.py"],
python_version = "PY3",
tags = ["no_windows"], # TODO(b/152991402)
deps = [
":preprocessing_stage",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
],
)
| tf-keras/tf_keras/layers/preprocessing/BUILD/0 | {
"file_path": "tf-keras/tf_keras/layers/preprocessing/BUILD",
"repo_id": "tf-keras",
"token_count": 7727
} | 181 |
# Copyright 2021 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for hashed crossing layer."""
import os
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
import tf_keras as keras
from tf_keras.layers.preprocessing import hashed_crossing
from tf_keras.layers.preprocessing import preprocessing_test_utils
from tf_keras.testing_infra import test_combinations
@test_combinations.run_all_keras_modes(always_skip_v1=True)
class HashedCrossingTest(test_combinations.TestCase):
@parameterized.named_parameters(
("python_value", lambda x: x),
("dense", tf.constant),
)
def test_cross_scalars(self, data_fn):
layer = hashed_crossing.HashedCrossing(num_bins=10)
feat1 = data_fn("A")
feat2 = data_fn(101)
outputs = layer((feat1, feat2))
self.assertAllClose(outputs, 1)
self.assertAllEqual(outputs.shape.as_list(), [])
@parameterized.named_parameters(
("tuple", tuple),
("list", list),
("numpy", np.array),
("array_like", preprocessing_test_utils.ArrayLike),
("dense", tf.constant),
)
def test_cross_batch_of_scalars_1d(self, data_fn):
layer = hashed_crossing.HashedCrossing(num_bins=10)
feat1 = data_fn(["A", "B", "A", "B", "A"])
feat2 = data_fn([101, 101, 101, 102, 102])
outputs = layer((feat1, feat2))
self.assertAllClose(outputs, [1, 4, 1, 6, 3])
self.assertAllEqual(outputs.shape.as_list(), [5])
@parameterized.named_parameters(
("tuple", tuple),
("list", list),
("numpy", np.array),
("array_like", preprocessing_test_utils.ArrayLike),
("dense", tf.constant),
)
def test_cross_batch_of_scalars_2d(self, data_fn):
layer = hashed_crossing.HashedCrossing(num_bins=10)
feat1 = data_fn([["A"], ["B"], ["A"], ["B"], ["A"]])
feat2 = data_fn([[101], [101], [101], [102], [102]])
outputs = layer((feat1, feat2))
self.assertAllClose(outputs, [[1], [4], [1], [6], [3]])
self.assertAllEqual(outputs.shape.as_list(), [5, 1])
@parameterized.named_parameters(
("sparse", True),
("dense", False),
)
def test_cross_one_hot_output(self, sparse):
layer = hashed_crossing.HashedCrossing(
num_bins=5, output_mode="one_hot", sparse=sparse
)
feat1 = tf.constant([["A"], ["B"], ["A"], ["B"], ["A"]])
feat2 = tf.constant([[101], [101], [101], [102], [102]])
outputs = layer((feat1, feat2))
if sparse:
outputs = tf.sparse.to_dense(outputs)
self.assertAllClose(
outputs,
[
[0, 1, 0, 0, 0],
[0, 0, 0, 0, 1],
[0, 1, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 0, 1, 0],
],
)
self.assertAllEqual(outputs.shape.as_list(), [5, 5])
def test_cross_output_dtype(self):
layer = hashed_crossing.HashedCrossing(num_bins=2)
self.assertAllEqual(layer(([1], [1])).dtype, tf.int64)
layer = hashed_crossing.HashedCrossing(num_bins=2, dtype=tf.int32)
self.assertAllEqual(layer(([1], [1])).dtype, tf.int32)
layer = hashed_crossing.HashedCrossing(
num_bins=2, output_mode="one_hot"
)
self.assertAllEqual(layer(([1], [1])).dtype, tf.float32)
layer = hashed_crossing.HashedCrossing(
num_bins=2, output_mode="one_hot", dtype=tf.float64
)
self.assertAllEqual(layer(([1], [1])).dtype, tf.float64)
def test_non_list_input_fails(self):
with self.assertRaisesRegex(ValueError, "should be called on a list"):
hashed_crossing.HashedCrossing(num_bins=10)(tf.constant(1))
def test_single_input_fails(self):
with self.assertRaisesRegex(ValueError, "at least two inputs"):
hashed_crossing.HashedCrossing(num_bins=10)([tf.constant(1)])
def test_sparse_input_fails(self):
with self.assertRaisesRegex(
ValueError, "inputs should be dense tensors"
):
sparse_in = tf.sparse.from_dense(tf.constant([1]))
hashed_crossing.HashedCrossing(num_bins=10)((sparse_in, sparse_in))
def test_float_input_fails(self):
with self.assertRaisesRegex(
ValueError, "should have an integer or string"
):
hashed_crossing.HashedCrossing(num_bins=10)(
(tf.constant([1.0]), tf.constant([1.0]))
)
    def test_unsupported_shape_input_fails(self):
with self.assertRaisesRegex(ValueError, "inputs should have shape"):
hashed_crossing.HashedCrossing(num_bins=10)(
(tf.constant([[[1.0]]]), tf.constant([[[1.0]]]))
)
def test_from_config(self):
layer = hashed_crossing.HashedCrossing(
num_bins=5, output_mode="one_hot", sparse=True
)
cloned_layer = hashed_crossing.HashedCrossing.from_config(
layer.get_config()
)
feat1 = tf.constant([["A"], ["B"], ["A"], ["B"], ["A"]])
feat2 = tf.constant([[101], [101], [101], [102], [102]])
original_outputs = layer((feat1, feat2))
cloned_outputs = cloned_layer((feat1, feat2))
self.assertAllEqual(
tf.sparse.to_dense(cloned_outputs),
tf.sparse.to_dense(original_outputs),
)
def test_saving_keras(self):
string_in = keras.Input(shape=(1,), dtype=tf.string)
int_in = keras.Input(shape=(1,), dtype=tf.int64)
out = hashed_crossing.HashedCrossing(num_bins=10)((string_in, int_in))
model = keras.Model(inputs=(string_in, int_in), outputs=out)
string_data = tf.constant([["A"], ["B"], ["A"], ["B"], ["A"]])
int_data = tf.constant([[101], [101], [101], [102], [102]])
expected_output = [[1], [4], [1], [6], [3]]
output_data = model((string_data, int_data))
self.assertAllClose(output_data, expected_output)
with self.subTest("savedmodel"):
# Save the model to disk.
output_path = os.path.join(self.get_temp_dir(), "saved_model")
model.save(output_path, save_format="tf")
loaded_model = keras.models.load_model(
output_path,
custom_objects={
"HashedCrossing": hashed_crossing.HashedCrossing
},
)
# Validate correctness of the new model.
new_output_data = loaded_model((string_data, int_data))
self.assertAllClose(new_output_data, expected_output)
with self.subTest("keras_v3"):
if not tf.__internal__.tf2.enabled():
self.skipTest(
"TF2 must be enabled to use the new `.keras` saving."
)
# Save the model to disk.
output_path = os.path.join(self.get_temp_dir(), "model.keras")
model.save(output_path, save_format="keras_v3")
loaded_model = keras.models.load_model(
output_path,
custom_objects={
"HashedCrossing": hashed_crossing.HashedCrossing
},
)
# Validate correctness of the new model.
new_output_data = loaded_model((string_data, int_data))
self.assertAllClose(new_output_data, expected_output)
if __name__ == "__main__":
tf.test.main()
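# --- Illustrative sketch; not part of the original file. ---
# A minimal end-to-end use of the layer exercised by the tests above,
# assuming the same public behaviour (integer bin indices by default,
# optional one-hot output). Wrapped in a helper so it is not executed at
# import time; the exact bin values depend on the hash and are not asserted.
def _hashed_crossing_usage_sketch():
    layer = hashed_crossing.HashedCrossing(num_bins=10)
    feat1 = tf.constant([["A"], ["B"], ["A"]])
    feat2 = tf.constant([[101], [101], [102]])
    # Each (string, int) pair is crossed and hashed into one of `num_bins`
    # buckets, giving an int64 tensor of shape (3, 1).
    return layer((feat1, feat2))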
| tf-keras/tf_keras/layers/preprocessing/hashed_crossing_test.py/0 | {
"file_path": "tf-keras/tf_keras/layers/preprocessing/hashed_crossing_test.py",
"repo_id": "tf-keras",
"token_count": 3820
} | 182 |
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Functional preprocessing stage tests."""
import time
import numpy as np
import tensorflow.compat.v2 as tf
from tf_keras.engine import base_preprocessing_layer
from tf_keras.engine.input_layer import Input
from tf_keras.layers import convolutional
from tf_keras.layers import core
from tf_keras.layers import merging
from tf_keras.layers.preprocessing import image_preprocessing
from tf_keras.layers.preprocessing import normalization
from tf_keras.layers.preprocessing import preprocessing_stage
from tf_keras.layers.preprocessing import preprocessing_test_utils
from tf_keras.testing_infra import test_combinations
class PL(base_preprocessing_layer.PreprocessingLayer):
def __init__(self, **kwargs):
self.adapt_time = None
self.adapt_count = 0
super().__init__(**kwargs)
def adapt(self, data, reset_state=True):
self.adapt_time = time.time()
self.adapt_count += 1
def call(self, inputs):
return inputs + 1
class PLMerge(PL):
def call(self, inputs):
return inputs[0] + inputs[1]
class PLSplit(PL):
def call(self, inputs):
return inputs + 1, inputs - 1
@test_combinations.run_all_keras_modes(always_skip_v1=True)
class PreprocessingStageTest(
test_combinations.TestCase, preprocessing_test_utils.PreprocessingLayerTest
):
def test_adapt_preprocessing_stage_with_single_input_output(self):
x = Input(shape=(3,))
l0 = PL()
y = l0(x)
l1 = PL()
z = l1(y)
stage = preprocessing_stage.FunctionalPreprocessingStage(x, z)
stage.compile()
# Test with NumPy array
one_array = np.ones((4, 3), dtype="float32")
stage.adapt(one_array)
self.assertEqual(l0.adapt_count, 1)
self.assertEqual(l1.adapt_count, 1)
self.assertLessEqual(l0.adapt_time, l1.adapt_time)
# Check call
z = stage(tf.ones((4, 3), dtype="float32"))
self.assertAllClose(z, np.ones((4, 3), dtype="float32") + 2.0)
# Test with dataset
adapt_data = tf.data.Dataset.from_tensor_slices(one_array)
adapt_data = adapt_data.batch(2) # 5 batches of 2 samples
stage.adapt(adapt_data)
self.assertEqual(l0.adapt_count, 2)
self.assertEqual(l1.adapt_count, 2)
self.assertLessEqual(l0.adapt_time, l1.adapt_time)
# Test error with bad data
with self.assertRaisesRegex(ValueError, "requires a "):
stage.adapt(None)
# Disallow calling fit
with self.assertRaisesRegex(ValueError, "Preprocessing stage"):
stage.fit(None)
def test_adapt_preprocessing_stage_with_list_input(self):
x0 = Input(shape=(3,))
x1 = Input(shape=(3,))
x2 = Input(shape=(3,))
l0 = PLMerge()
y = l0([x0, x1])
l1 = PLMerge()
y = l1([y, x2])
l2 = PLSplit()
z, y = l2(y)
stage = preprocessing_stage.FunctionalPreprocessingStage(
[x0, x1, x2], [y, z]
)
stage.compile()
# Test with NumPy array
one_array = np.ones((4, 3), dtype="float32")
stage.adapt([one_array, one_array, one_array])
self.assertEqual(l0.adapt_count, 1)
self.assertEqual(l1.adapt_count, 1)
self.assertEqual(l2.adapt_count, 1)
self.assertLessEqual(l0.adapt_time, l1.adapt_time)
self.assertLessEqual(l1.adapt_time, l2.adapt_time)
# Check call
y, z = stage(
[
tf.ones((4, 3), dtype="float32"),
tf.ones((4, 3), dtype="float32"),
tf.ones((4, 3), dtype="float32"),
]
)
self.assertAllClose(y, np.ones((4, 3), dtype="float32") + 1.0)
self.assertAllClose(z, np.ones((4, 3), dtype="float32") + 3.0)
# Test with dataset
adapt_data = tf.data.Dataset.from_tensor_slices(
(one_array, one_array, one_array)
)
adapt_data = adapt_data.batch(2) # 5 batches of 2 samples
stage.adapt(adapt_data)
self.assertEqual(l0.adapt_count, 2)
self.assertEqual(l1.adapt_count, 2)
self.assertEqual(l2.adapt_count, 2)
self.assertLessEqual(l0.adapt_time, l1.adapt_time)
self.assertLessEqual(l1.adapt_time, l2.adapt_time)
# Test error with bad data
with self.assertRaisesRegex(ValueError, "requires a "):
stage.adapt(None)
def test_adapt_preprocessing_stage_with_dict_input(self):
x0 = Input(shape=(3,), name="x0")
x1 = Input(shape=(4,), name="x1")
x2 = Input(shape=(3, 5), name="x2")
        # Dimensions will mismatch if x1 is incorrectly placed.
x1_sum = core.Lambda(
lambda x: tf.reduce_sum(x, axis=-1, keepdims=True)
)(x1)
x2_sum = core.Lambda(lambda x: tf.reduce_sum(x, axis=-1))(x2)
l0 = PLMerge()
y = l0([x0, x1_sum])
l1 = PLMerge()
y = l1([y, x2_sum])
l2 = PLSplit()
z, y = l2(y)
stage = preprocessing_stage.FunctionalPreprocessingStage(
{"x2": x2, "x0": x0, "x1": x1}, [y, z]
)
stage.compile()
# Test with dict of NumPy array
one_array0 = np.ones((4, 3), dtype="float32")
one_array1 = np.ones((4, 4), dtype="float32")
one_array2 = np.ones((4, 3, 5), dtype="float32")
adapt_data = {"x1": one_array1, "x0": one_array0, "x2": one_array2}
stage.adapt(adapt_data)
self.assertEqual(l0.adapt_count, 1)
self.assertEqual(l1.adapt_count, 1)
self.assertEqual(l2.adapt_count, 1)
self.assertLessEqual(l0.adapt_time, l1.adapt_time)
self.assertLessEqual(l1.adapt_time, l2.adapt_time)
# Check call
y, z = stage(
{
"x1": tf.constant(one_array1),
"x2": tf.constant(one_array2),
"x0": tf.constant(one_array0),
}
)
self.assertAllClose(y, np.zeros((4, 3), dtype="float32") + 9.0)
self.assertAllClose(z, np.zeros((4, 3), dtype="float32") + 11.0)
# Test with list of NumPy array
adapt_data = [one_array0, one_array1, one_array2]
stage.adapt(adapt_data)
self.assertEqual(l0.adapt_count, 2)
self.assertEqual(l1.adapt_count, 2)
self.assertEqual(l2.adapt_count, 2)
self.assertLessEqual(l0.adapt_time, l1.adapt_time)
self.assertLessEqual(l1.adapt_time, l2.adapt_time)
# Test with flattened dataset
adapt_data = tf.data.Dataset.from_tensor_slices(
(one_array0, one_array1, one_array2)
)
adapt_data = adapt_data.batch(2) # 5 batches of 2 samples
stage.adapt(adapt_data)
self.assertEqual(l0.adapt_count, 3)
self.assertEqual(l1.adapt_count, 3)
self.assertEqual(l2.adapt_count, 3)
self.assertLessEqual(l0.adapt_time, l1.adapt_time)
self.assertLessEqual(l1.adapt_time, l2.adapt_time)
# Test with dataset in dict shape
adapt_data = tf.data.Dataset.from_tensor_slices(
{"x0": one_array0, "x2": one_array2, "x1": one_array1}
)
adapt_data = adapt_data.batch(2) # 5 batches of 2 samples
stage.adapt(adapt_data)
self.assertEqual(l0.adapt_count, 4)
self.assertEqual(l1.adapt_count, 4)
self.assertEqual(l2.adapt_count, 4)
self.assertLessEqual(l0.adapt_time, l1.adapt_time)
self.assertLessEqual(l1.adapt_time, l2.adapt_time)
# Test error with bad data
with self.assertRaisesRegex(ValueError, "requires a "):
stage.adapt(None)
def test_adapt_preprocessing_stage_with_dict_output(self):
x = Input(shape=(3,), name="x")
l0 = PLSplit()
y0, y1 = l0(x)
l1 = PLSplit()
z0, z1 = l1(y0)
stage = preprocessing_stage.FunctionalPreprocessingStage(
{"x": x}, {"y1": y1, "z1": z1, "y0": y0, "z0": z0}
)
stage.compile()
# Test with NumPy array
one_array = np.ones((4, 3), dtype="float32")
adapt_data = {"x": one_array}
stage.adapt(adapt_data)
self.assertEqual(l0.adapt_count, 1)
self.assertEqual(l1.adapt_count, 1)
self.assertLessEqual(l0.adapt_time, l1.adapt_time)
# Check call
outputs = stage({"x": tf.constant(one_array)})
self.assertEqual(set(outputs.keys()), {"y0", "y1", "z0", "z1"})
self.assertAllClose(
outputs["y0"], np.ones((4, 3), dtype="float32") + 1.0
)
self.assertAllClose(
outputs["y1"], np.ones((4, 3), dtype="float32") - 1.0
)
self.assertAllClose(
outputs["z0"], np.ones((4, 3), dtype="float32") + 2.0
)
self.assertAllClose(outputs["z1"], np.ones((4, 3), dtype="float32"))
def test_preprocessing_stage_with_nested_input(self):
# Test with NumPy array
x0 = Input(shape=(3,))
x1 = Input(shape=(3,))
x2 = Input(shape=(3,))
l0 = PLMerge()
y = l0([x0, x1])
l1 = PLMerge()
y = l1([y, x2])
l2 = PLSplit()
z, y = l2(y)
stage = preprocessing_stage.FunctionalPreprocessingStage(
[x0, [x1, x2]], [y, z]
)
stage.compile()
one_array = np.ones((4, 3), dtype="float32")
stage.adapt([one_array, [one_array, one_array]])
self.assertEqual(l0.adapt_count, 1)
self.assertEqual(l1.adapt_count, 1)
self.assertEqual(l2.adapt_count, 1)
self.assertLessEqual(l0.adapt_time, l1.adapt_time)
self.assertLessEqual(l1.adapt_time, l2.adapt_time)
# Check call
y, z = stage(
[
tf.ones((4, 3), dtype="float32"),
[
tf.ones((4, 3), dtype="float32"),
tf.ones((4, 3), dtype="float32"),
],
]
)
self.assertAllClose(y, np.ones((4, 3), dtype="float32") + 1.0)
self.assertAllClose(z, np.ones((4, 3), dtype="float32") + 3.0)
# Test with dataset
adapt_data = tf.data.Dataset.from_tensor_slices(
(one_array, (one_array, one_array))
)
adapt_data = adapt_data.batch(2) # 5 batches of 2 samples
stage.adapt(adapt_data)
self.assertEqual(l0.adapt_count, 2)
self.assertEqual(l1.adapt_count, 2)
self.assertEqual(l2.adapt_count, 2)
self.assertLessEqual(l0.adapt_time, l1.adapt_time)
self.assertLessEqual(l1.adapt_time, l2.adapt_time)
# Test error with bad data
with self.assertRaisesRegex(ValueError, "requires a "):
stage.adapt(None)
def test_include_layers_with_dict_input(self):
class PLMergeDict(PLMerge):
def call(self, inputs):
return inputs["a"] + inputs["b"]
x0 = Input(shape=(3,))
x1 = Input(shape=(3,))
l0 = PLMergeDict()
y = l0({"a": x0, "b": x1})
l1 = PLSplit()
z, y = l1(y)
stage = preprocessing_stage.FunctionalPreprocessingStage(
[x0, x1], [y, z]
)
stage.compile()
one_array = np.ones((4, 3), dtype="float32")
adapt_data = tf.data.Dataset.from_tensor_slices((one_array, one_array))
stage.adapt(adapt_data)
self.assertEqual(l0.adapt_count, 1)
self.assertEqual(l1.adapt_count, 1)
self.assertLessEqual(l0.adapt_time, l1.adapt_time)
# Check call
y, z = stage(
[tf.ones((4, 3), dtype="float32"), tf.ones((4, 3), dtype="float32")]
)
self.assertAllClose(y, np.ones((4, 3), dtype="float32"))
self.assertAllClose(z, np.ones((4, 3), dtype="float32") + 2.0)
def test_include_layers_with_nested_input(self):
class PLMergeNest(PLMerge):
def call(self, inputs):
a = inputs[0]
b = inputs[1][0]
c = inputs[1][1]
return a + b + c
x0 = Input(shape=(3,))
x1 = Input(shape=(3,))
x2 = Input(shape=(3,))
l0 = PLMergeNest()
y = l0([x0, [x1, x2]])
stage = preprocessing_stage.FunctionalPreprocessingStage(
[x0, x1, x2], y
)
stage.compile()
one_array = np.ones((4, 3), dtype="float32")
adapt_data = tf.data.Dataset.from_tensor_slices((one_array,) * 3)
stage.adapt(adapt_data)
self.assertEqual(l0.adapt_count, 1)
# Check call
y = stage(
[
tf.ones((4, 3), dtype="float32"),
tf.ones((4, 3), dtype="float32"),
tf.ones((4, 3), dtype="float32"),
]
)
self.assertAllClose(y, np.ones((4, 3), dtype="float32") + 2.0)
def test_mixing_preprocessing_and_regular_layers(self):
x0 = Input(shape=(10, 10, 3))
x1 = Input(shape=(10, 10, 3))
x2 = Input(shape=(10, 10, 3))
y0 = merging.Add()([x0, x1])
y1 = image_preprocessing.CenterCrop(8, 8)(x2)
y1 = convolutional.ZeroPadding2D(padding=1)(y1)
z = merging.Add()([y0, y1])
z = normalization.Normalization()(z)
z = convolutional.Conv2D(4, 3)(z)
stage = preprocessing_stage.FunctionalPreprocessingStage(
[x0, x1, x2], z
)
data = [
np.ones((12, 10, 10, 3), dtype="float32"),
np.ones((12, 10, 10, 3), dtype="float32"),
np.ones((12, 10, 10, 3), dtype="float32"),
]
stage.adapt(data)
_ = stage(data)
stage.compile("rmsprop", "mse")
with self.assertRaisesRegex(ValueError, "Preprocessing stage"):
stage.fit(data, np.ones((12, 8, 8, 4)))
ds_x0 = tf.data.Dataset.from_tensor_slices(np.ones((12, 10, 10, 3)))
ds_x1 = tf.data.Dataset.from_tensor_slices(np.ones((12, 10, 10, 3)))
ds_x2 = tf.data.Dataset.from_tensor_slices(np.ones((12, 10, 10, 3)))
ds_x = tf.data.Dataset.zip((ds_x0, ds_x1, ds_x2))
ds_y = tf.data.Dataset.from_tensor_slices(np.ones((12, 8, 8, 4)))
dataset = tf.data.Dataset.zip((ds_x, ds_y)).batch(4)
with self.assertRaisesRegex(ValueError, "Preprocessing stage"):
stage.fit(dataset)
_ = stage.evaluate(data, np.ones((12, 8, 8, 4)))
_ = stage.predict(data)
if __name__ == "__main__":
tf.test.main()
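# --- Illustrative sketch; not part of the original file. ---
# A hedged, minimal version of the single-input test case above: build a
# one-layer functional preprocessing stage from the `PL` helper, adapt it on
# a NumPy array, then call it. Reuses only names defined or imported in this
# module.
def _functional_stage_usage_sketch():
    x = Input(shape=(3,))
    stage = preprocessing_stage.FunctionalPreprocessingStage(x, PL()(x))
    stage.compile()
    stage.adapt(np.ones((4, 3), dtype="float32"))
    # `PL.call` adds 1, so this returns a (4, 3) tensor of 2.0s.
    return stage(tf.ones((4, 3), dtype="float32"))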
| tf-keras/tf_keras/layers/preprocessing/preprocessing_stage_functional_test.py/0 | {
"file_path": "tf-keras/tf_keras/layers/preprocessing/preprocessing_stage_functional_test.py",
"repo_id": "tf-keras",
"token_count": 7507
} | 183 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the Dropout layer."""
import numbers
import tensorflow.compat.v2 as tf
from tf_keras import backend
from tf_keras.engine import base_layer
from tf_keras.utils import control_flow_util
# isort: off
from tensorflow.python.util.tf_export import keras_export
@keras_export("keras.layers.Dropout")
class Dropout(base_layer.BaseRandomLayer):
"""Applies Dropout to the input.
The Dropout layer randomly sets input units to 0 with a frequency of `rate`
at each step during training time, which helps prevent overfitting.
    Inputs not set to 0 are scaled up by 1/(1 - rate) so that the expected
    sum over all inputs is unchanged.
Note that the Dropout layer only applies when `training` is set to True
such that no values are dropped during inference. When using `model.fit`,
`training` will be appropriately set to True automatically, and in other
contexts, you can set the kwarg explicitly to True when calling the layer.
(This is in contrast to setting `trainable=False` for a Dropout layer.
`trainable` does not affect the layer's behavior, as Dropout does
not have any variables/weights that can be frozen during training.)
>>> tf.random.set_seed(0)
>>> layer = tf.keras.layers.Dropout(.2, input_shape=(2,))
>>> data = np.arange(10).reshape(5, 2).astype(np.float32)
>>> print(data)
[[0. 1.]
[2. 3.]
[4. 5.]
[6. 7.]
[8. 9.]]
>>> outputs = layer(data, training=True)
>>> print(outputs)
tf.Tensor(
[[ 0. 1.25]
[ 2.5 3.75]
[ 5. 6.25]
[ 7.5 8.75]
[10. 0. ]], shape=(5, 2), dtype=float32)
Args:
rate: Float between 0 and 1. Fraction of the input units to drop.
noise_shape: 1D integer tensor representing the shape of the
binary dropout mask that will be multiplied with the input.
For instance, if your inputs have shape
`(batch_size, timesteps, features)` and
you want the dropout mask to be the same for all timesteps,
you can use `noise_shape=(batch_size, 1, features)`.
seed: A Python integer to use as random seed.
Call arguments:
inputs: Input tensor (of any rank).
training: Python boolean indicating whether the layer should behave in
training mode (adding dropout) or in inference mode (doing nothing).
"""
def __init__(self, rate, noise_shape=None, seed=None, **kwargs):
super().__init__(seed=seed, **kwargs)
if isinstance(rate, (int, float)) and not 0 <= rate <= 1:
raise ValueError(
f"Invalid value {rate} received for "
"`rate`, expected a value between 0 and 1."
)
self.rate = rate
self.noise_shape = noise_shape
self.seed = seed
self.supports_masking = True
def _get_noise_shape(self, inputs):
# Subclasses of `Dropout` may implement `_get_noise_shape(self,
# inputs)`, which will override `self.noise_shape`, and allows for
# custom noise shapes with dynamically sized inputs.
if self.noise_shape is None:
return None
concrete_inputs_shape = tf.shape(inputs)
noise_shape = []
for i, value in enumerate(self.noise_shape):
noise_shape.append(
concrete_inputs_shape[i] if value is None else value
)
return tf.convert_to_tensor(noise_shape)
def call(self, inputs, training=None):
if isinstance(self.rate, numbers.Real) and self.rate == 0:
return tf.identity(inputs)
if training is None:
training = backend.learning_phase()
def dropped_inputs():
return self._random_generator.dropout(
inputs, self.rate, noise_shape=self._get_noise_shape(inputs)
)
output = control_flow_util.smart_cond(
training, dropped_inputs, lambda: tf.identity(inputs)
)
return output
def compute_output_shape(self, input_shape):
return input_shape
def get_config(self):
config = {
"rate": self.rate,
"noise_shape": self.noise_shape,
"seed": self.seed,
}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
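# --- Illustrative sketch; not part of the original file. ---
# A hedged example of the `noise_shape` behaviour described in the docstring
# above: with a noise shape of (batch_size, 1, features), the same dropout
# mask is broadcast across every timestep, so a feature is either kept or
# dropped for the whole sequence. The helper below is illustrative only and
# assumes a statically known feature dimension.
def _shared_timestep_dropout_sketch(inputs, rate=0.2):
    """Applies Dropout with one mask per (sample, feature) pair."""
    # inputs: a float tensor of shape (batch_size, timesteps, features).
    layer = Dropout(rate, noise_shape=(None, 1, inputs.shape[-1]))
    return layer(inputs, training=True)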
| tf-keras/tf_keras/layers/regularization/dropout.py/0 | {
"file_path": "tf-keras/tf_keras/layers/regularization/dropout.py",
"repo_id": "tf-keras",
"token_count": 1903
} | 184 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the flatten layer."""
import functools
import operator
import numpy as np
import tensorflow.compat.v2 as tf
from tf_keras.engine.base_layer import Layer
from tf_keras.engine.input_spec import InputSpec
from tf_keras.utils import conv_utils
# isort: off
from tensorflow.python.util.tf_export import keras_export
@keras_export("keras.layers.Flatten")
class Flatten(Layer):
"""Flattens the input. Does not affect the batch size.
Note: If inputs are shaped `(batch,)` without a feature axis, then
flattening adds an extra channel dimension and output shape is `(batch, 1)`.
Args:
data_format: A string,
one of `channels_last` (default) or `channels_first`.
The ordering of the dimensions in the inputs.
`channels_last` corresponds to inputs with shape
`(batch, ..., channels)` while `channels_first` corresponds to
inputs with shape `(batch, channels, ...)`.
        When unspecified, it defaults to the `image_data_format` value
        found in your TF-Keras config file at `~/.keras/keras.json`
        (if it exists), and 'channels_last' otherwise.
Example:
>>> model = tf.keras.Sequential()
>>> model.add(tf.keras.layers.Conv2D(64, 3, 3, input_shape=(3, 32, 32)))
>>> model.output_shape
(None, 1, 10, 64)
>>> model.add(Flatten())
>>> model.output_shape
(None, 640)
"""
def __init__(self, data_format=None, **kwargs):
super().__init__(**kwargs)
self.data_format = conv_utils.normalize_data_format(data_format)
self.input_spec = InputSpec(min_ndim=1)
self._channels_first = self.data_format == "channels_first"
def call(self, inputs):
if self._channels_first:
rank = inputs.shape.rank
if rank and rank > 1:
# Switch to channels-last format.
permutation = [0]
permutation.extend(range(2, rank))
permutation.append(1)
inputs = tf.transpose(inputs, perm=permutation)
if tf.executing_eagerly():
# Full static shape is guaranteed to be available.
# Performance: Using `constant_op` is much faster than passing a
# list.
flattened_shape = tf.constant([inputs.shape[0], -1])
return tf.reshape(inputs, flattened_shape)
else:
input_shape = inputs.shape
rank = input_shape.rank
if rank == 1:
return tf.expand_dims(inputs, axis=1)
else:
batch_dim = tf.compat.dimension_value(input_shape[0])
non_batch_dims = input_shape[1:]
# Reshape in a way that preserves as much shape info as
# possible.
if non_batch_dims.is_fully_defined():
last_dim = int(
functools.reduce(operator.mul, non_batch_dims)
)
flattened_shape = tf.constant([-1, last_dim])
elif batch_dim is not None:
flattened_shape = tf.constant([int(batch_dim), -1])
else:
flattened_shape = [tf.shape(inputs)[0], -1]
return tf.reshape(inputs, flattened_shape)
def compute_output_shape(self, input_shape):
input_shape = tf.TensorShape(input_shape).as_list()
if not input_shape:
output_shape = tf.TensorShape([1])
else:
output_shape = [input_shape[0]]
if np.all(input_shape[1:]):
output_shape += [np.prod(input_shape[1:], dtype=int)]
else:
output_shape += [None]
return tf.TensorShape(output_shape)
def get_config(self):
config = super().get_config()
config.update({"data_format": self.data_format})
return config
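# --- Illustrative sketch; not part of the original file. ---
# A hedged example of the channels_first path in `call` above: the layer
# first transposes to channels_last and then flattens, so both data formats
# flatten to the same size but enumerate the features in a different order.
def _flatten_channels_first_sketch():
    x = tf.reshape(tf.range(2 * 3 * 4, dtype=tf.float32), (2, 3, 4))
    # (batch, channels, steps) -> transpose -> (batch, steps, channels)
    # -> reshape -> (batch, steps * channels) == (2, 12).
    return Flatten(data_format="channels_first")(x)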
| tf-keras/tf_keras/layers/reshaping/flatten.py/0 | {
"file_path": "tf-keras/tf_keras/layers/reshaping/flatten.py",
"repo_id": "tf-keras",
"token_count": 1939
} | 185 |
# Description:
# Contains the TF-Keras recurrent layers.
# Placeholder: load unaliased py_library
load("@org_keras//tf_keras:tf_keras.bzl", "cuda_py_test")
# buildifier: disable=same-origin-load
load("@org_keras//tf_keras:tf_keras.bzl", "tf_py_test")
package(
# copybara:uncomment default_applicable_licenses = ["//tf_keras:license"],
default_visibility = [
"//tf_keras:friends",
"//third_party/tensorflow_models/official/projects/residual_mobilenet/modeling/backbones:__pkg__",
],
licenses = ["notice"],
)
py_library(
name = "rnn",
srcs = ["__init__.py"],
srcs_version = "PY3",
deps = [
":abstract_rnn_cell",
":base_rnn",
":base_wrapper",
":bidirectional",
":cell_wrappers",
":conv_lstm1d",
":conv_lstm2d",
":conv_lstm3d",
":cudnn_gru",
":cudnn_lstm",
":gru",
":gru_v1",
":lstm",
":lstm_v1",
":simple_rnn",
":stacked_rnn_cells",
":time_distributed",
"//:expect_tensorflow_installed",
],
)
py_library(
name = "rnn_utils",
srcs = ["rnn_utils.py"],
srcs_version = "PY3",
deps = [
"//:expect_tensorflow_installed",
"//tf_keras/utils:control_flow_util",
],
)
py_library(
name = "abstract_rnn_cell",
srcs = ["abstract_rnn_cell.py"],
srcs_version = "PY3",
deps = [
":rnn_utils",
"//tf_keras/engine:base_layer",
],
)
py_library(
name = "dropout_rnn_cell_mixin",
srcs = ["dropout_rnn_cell_mixin.py"],
srcs_version = "PY3",
deps = [
"//:expect_tensorflow_installed",
"//tf_keras:backend",
],
)
py_library(
name = "gru_lstm_utils",
srcs = ["gru_lstm_utils.py"],
srcs_version = "PY3",
deps = [
"//:expect_tensorflow_installed",
],
)
py_library(
name = "gru",
srcs = ["gru.py"],
srcs_version = "PY3",
deps = [
":base_rnn",
":dropout_rnn_cell_mixin",
":gru_lstm_utils",
":rnn_utils",
"//:expect_tensorflow_installed",
"//tf_keras:activations",
"//tf_keras:backend",
"//tf_keras:constraints",
"//tf_keras:regularizers",
"//tf_keras/engine:base_layer",
"//tf_keras/engine:input_spec",
"//tf_keras/initializers",
"//tf_keras/utils:tf_utils",
],
)
py_library(
name = "gru_v1",
srcs = ["gru_v1.py"],
srcs_version = "PY3",
deps = [
":base_rnn",
":gru",
":rnn_utils",
"//tf_keras:activations",
"//tf_keras:constraints",
"//tf_keras:regularizers",
"//tf_keras/engine:input_spec",
"//tf_keras/initializers",
],
)
py_library(
name = "lstm",
srcs = ["lstm.py"],
srcs_version = "PY3",
deps = [
":base_rnn",
":dropout_rnn_cell_mixin",
":gru_lstm_utils",
":rnn_utils",
"//:expect_tensorflow_installed",
"//tf_keras:activations",
"//tf_keras:backend",
"//tf_keras:constraints",
"//tf_keras:regularizers",
"//tf_keras/engine:base_layer",
"//tf_keras/engine:input_spec",
"//tf_keras/initializers",
"//tf_keras/utils:tf_utils",
],
)
py_library(
name = "lstm_v1",
srcs = ["lstm_v1.py"],
srcs_version = "PY3",
deps = [
":base_rnn",
":lstm",
":rnn_utils",
"//tf_keras:activations",
"//tf_keras:constraints",
"//tf_keras:regularizers",
"//tf_keras/engine:input_spec",
"//tf_keras/initializers",
],
)
py_library(
name = "stacked_rnn_cells",
srcs = ["stacked_rnn_cells.py"],
srcs_version = "PY3",
deps = [
":rnn_utils",
"//:expect_tensorflow_installed",
"//tf_keras:backend",
"//tf_keras/engine:base_layer",
"//tf_keras/utils:generic_utils",
"//tf_keras/utils:tf_utils",
],
)
py_library(
name = "base_rnn",
srcs = ["base_rnn.py"],
srcs_version = "PY3",
deps = [
":dropout_rnn_cell_mixin",
":rnn_utils",
":stacked_rnn_cells",
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras:backend",
"//tf_keras/engine:base_layer",
"//tf_keras/engine:input_spec",
"//tf_keras/saving/legacy/saved_model",
"//tf_keras/utils:generic_utils",
],
)
py_library(
name = "simple_rnn",
srcs = ["simple_rnn.py"],
srcs_version = "PY3",
deps = [
":base_rnn",
":dropout_rnn_cell_mixin",
":rnn_utils",
"//:expect_tensorflow_installed",
"//tf_keras:activations",
"//tf_keras:backend",
"//tf_keras:constraints",
"//tf_keras:regularizers",
"//tf_keras/engine:base_layer",
"//tf_keras/engine:input_spec",
"//tf_keras/initializers",
"//tf_keras/utils:tf_utils",
],
)
py_library(
name = "base_conv_rnn",
srcs = ["base_conv_rnn.py"],
srcs_version = "PY3",
deps = [
":base_rnn",
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras:backend",
"//tf_keras:base_layer",
"//tf_keras/engine:input_spec",
"//tf_keras/utils:engine_utils",
"//tf_keras/utils:generic_utils",
"//tf_keras/utils:tf_utils",
],
)
py_library(
name = "base_conv_lstm",
srcs = ["base_conv_lstm.py"],
srcs_version = "PY3",
deps = [
":base_conv_rnn",
":dropout_rnn_cell_mixin",
"//:expect_tensorflow_installed",
"//tf_keras:activations",
"//tf_keras:backend",
"//tf_keras:base_layer",
"//tf_keras:constraints",
"//tf_keras:regularizers",
"//tf_keras/initializers",
"//tf_keras/utils:engine_utils",
],
)
py_library(
name = "conv_lstm1d",
srcs = ["conv_lstm1d.py"],
srcs_version = "PY3",
deps = [
":base_conv_lstm",
],
)
py_library(
name = "conv_lstm2d",
srcs = ["conv_lstm2d.py"],
srcs_version = "PY3",
deps = [
":base_conv_lstm",
],
)
py_library(
name = "conv_lstm3d",
srcs = ["conv_lstm3d.py"],
srcs_version = "PY3",
deps = [
":base_conv_lstm",
],
)
py_library(
name = "base_cudnn_rnn",
srcs = ["base_cudnn_rnn.py"],
srcs_version = "PY3",
deps = [
":base_rnn",
"//:expect_tensorflow_installed",
"//tf_keras:backend",
"//tf_keras/engine:input_spec",
],
)
py_library(
name = "cudnn_lstm",
srcs = ["cudnn_lstm.py"],
srcs_version = "PY3",
deps = [
":base_cudnn_rnn",
":gru_lstm_utils",
"//:expect_tensorflow_installed",
"//tf_keras:constraints",
"//tf_keras:regularizers",
"//tf_keras/initializers",
],
)
py_library(
name = "cudnn_gru",
srcs = ["cudnn_gru.py"],
srcs_version = "PY3",
deps = [
":base_cudnn_rnn",
":gru_lstm_utils",
"//:expect_tensorflow_installed",
"//tf_keras:constraints",
"//tf_keras:regularizers",
"//tf_keras/initializers",
],
)
py_library(
name = "cell_wrappers",
srcs = ["cell_wrappers.py"],
srcs_version = "PY3",
deps = [
":abstract_rnn_cell",
":lstm",
"//:expect_tensorflow_installed",
"//tf_keras/utils:generic_utils",
"//tf_keras/utils:tf_inspect",
],
)
py_library(
name = "legacy_cell_wrappers",
srcs = ["legacy_cell_wrappers.py"],
srcs_version = "PY3",
deps = [
":cell_wrappers",
":legacy_cells",
"//:expect_tensorflow_installed",
],
)
py_library(
name = "legacy_cells",
srcs = ["legacy_cells.py"],
srcs_version = "PY3",
deps = [
"//:expect_tensorflow_installed",
"//tf_keras:activations",
"//tf_keras:backend",
"//tf_keras/engine:base_layer_utils",
"//tf_keras/engine:input_spec",
"//tf_keras/initializers",
"//tf_keras/legacy_tf_layers:layers_base",
"//tf_keras/utils:tf_utils",
],
)
py_library(
name = "base_wrapper",
srcs = ["base_wrapper.py"],
srcs_version = "PY3",
deps = [
"//tf_keras/engine:base_layer",
"//tf_keras/utils:generic_utils",
],
)
py_library(
name = "bidirectional",
srcs = ["bidirectional.py"],
srcs_version = "PY3",
deps = [
":base_wrapper",
":rnn_utils",
"//:expect_tensorflow_installed",
"//tf_keras:backend",
"//tf_keras/engine:base_layer",
"//tf_keras/engine:input_spec",
"//tf_keras/utils:generic_utils",
"//tf_keras/utils:tf_inspect",
"//tf_keras/utils:tf_utils",
],
)
py_library(
name = "time_distributed",
srcs = ["time_distributed.py"],
srcs_version = "PY3",
deps = [
":base_wrapper",
"//:expect_tensorflow_installed",
"//tf_keras:backend",
"//tf_keras/engine:base_layer",
"//tf_keras/engine:input_spec",
"//tf_keras/utils:generic_utils",
"//tf_keras/utils:layer_utils",
"//tf_keras/utils:tf_utils",
],
)
cuda_py_test(
name = "gru_lstm_test",
size = "medium",
srcs = ["gru_lstm_test.py"],
python_version = "PY3",
shard_count = 2,
tags = [
"no_oss", # TODO(b/277925387)
],
deps = [
":gru",
":lstm",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/testing_infra:test_utils",
],
)
cuda_py_test(
name = "gru_test",
size = "medium",
srcs = ["gru_test.py"],
python_version = "PY3",
shard_count = 12,
tags = [
"no_oss", # TODO(b/277925387)
],
deps = [
":gru_lstm_utils",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/testing_infra:test_utils",
"//tf_keras/utils:np_utils",
],
)
tf_py_test(
name = "gru_v1_test",
size = "medium",
srcs = ["gru_v1_test.py"],
python_version = "PY3",
shard_count = 4,
tags = [
"notsan", # http://b/62136390
],
deps = [
":gru",
":gru_v1",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/testing_infra:test_utils",
"//tf_keras/utils:np_utils",
],
)
cuda_py_test(
name = "lstm_test",
size = "medium",
srcs = ["lstm_test.py"],
python_version = "PY3",
shard_count = 12,
tags = [
"no_oss",
"notsan", # TODO(b/170954246)
],
deps = [
":gru_lstm_utils",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/testing_infra:test_utils",
"//tf_keras/utils:np_utils",
],
)
tf_py_test(
name = "lstm_v1_test",
size = "medium",
srcs = ["lstm_v1_test.py"],
python_version = "PY3",
shard_count = 4,
tags = [
"noasan", # times out b/63678675
"notsan", # http://b/62189182
],
deps = [
":lstm",
":lstm_v1",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/testing_infra:test_utils",
"//tf_keras/utils:np_utils",
],
)
tf_py_test(
name = "base_rnn_test",
size = "medium",
srcs = ["base_rnn_test.py"],
python_version = "PY3",
shard_count = 12,
tags = [
"notsan", # TODO(b/170870794)
],
deps = [
":gru",
":gru_v1",
":legacy_cells",
":lstm",
":lstm_v1",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/engine:base_layer_utils",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/testing_infra:test_utils",
"//tf_keras/utils:generic_utils",
],
)
tf_py_test(
name = "simple_rnn_test",
size = "medium",
srcs = ["simple_rnn_test.py"],
python_version = "PY3",
shard_count = 4,
tags = ["notsan"],
deps = [
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/testing_infra:test_utils",
],
)
tf_py_test(
name = "conv_lstm_test",
size = "medium",
srcs = ["conv_lstm_test.py"],
python_version = "PY3",
shard_count = 8,
deps = [
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/testing_infra:test_utils",
],
)
cuda_py_test(
name = "cudnn_test",
size = "medium",
srcs = ["cudnn_test.py"],
python_version = "PY3",
shard_count = 4,
tags = [
"no_oss", # TODO(b/277925387)
"no_windows_gpu",
],
deps = [
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/optimizers/legacy:optimizers",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/testing_infra:test_utils",
],
)
tf_py_test(
name = "cell_wrappers_test",
size = "medium",
srcs = ["cell_wrappers_test.py"],
python_version = "PY3",
shard_count = 4,
tags = [
"notsan",
],
deps = [
":cell_wrappers",
":legacy_cells",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras/layers",
"//tf_keras/legacy_tf_layers:layers_base",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/utils:generic_utils",
],
)
tf_py_test(
name = "legacy_cell_wrappers_test",
size = "small",
srcs = ["legacy_cell_wrappers_test.py"],
python_version = "PY3",
shard_count = 4,
deps = [
":legacy_cell_wrappers",
":legacy_cells",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_tensorflow_installed",
"//tf_keras/testing_infra:test_combinations",
],
)
tf_py_test(
name = "base_wrapper_test",
size = "small",
srcs = ["base_wrapper_test.py"],
python_version = "PY3",
shard_count = 4,
deps = [
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_tensorflow_installed",
"//tf_keras",
],
)
tf_py_test(
name = "bidirectional_test",
size = "medium",
srcs = ["bidirectional_test.py"],
python_version = "PY3",
shard_count = 12,
tags = [
"noasan", # http://b/78599823
"notsan",
],
deps = [
":cell_wrappers",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/engine:base_layer_utils",
"//tf_keras/layers/core",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/testing_infra:test_utils",
"//tf_keras/utils:generic_utils",
],
)
tf_py_test(
name = "time_distributed_test",
size = "medium",
srcs = ["time_distributed_test.py"],
python_version = "PY3",
shard_count = 12,
tags = [
"noasan", # http://b/78599823
"notsan",
],
deps = [
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/testing_infra:test_utils",
],
)
| tf-keras/tf_keras/layers/rnn/BUILD/0 | {
"file_path": "tf-keras/tf_keras/layers/rnn/BUILD",
"repo_id": "tf-keras",
"token_count": 8913
} | 186 |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for LSTM layer."""
import copy
import os
import shutil
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
import tf_keras as keras
from tf_keras.layers.rnn import gru_lstm_utils
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
from tf_keras.utils import np_utils
# isort: off
from tensorflow.core.protobuf import rewriter_config_pb2
from tensorflow.python.framework import (
test_util as tf_test_util,
)
# Global config for grappler setting that is used for graph mode test.
_rewrites = rewriter_config_pb2.RewriterConfig()
_rewrites.implementation_selector = rewriter_config_pb2.RewriterConfig.ON
_rewrites.min_graph_nodes = -1
_graph_options = tf.compat.v1.GraphOptions(rewrite_options=_rewrites)
_config = tf.compat.v1.ConfigProto(graph_options=_graph_options)
@test_combinations.run_all_keras_modes(config=_config)
class LSTMGraphRewriteTest(test_combinations.TestCase):
input_shape = 10
output_shape = 8
rnn_state_size = 8
timestep = 4
batch = 100
epoch = 1
@parameterized.named_parameters(
("non_tan_activation", "relu", "sigmoid", 0, False, True),
("non_sigmoid_recur_activation", "tanh", "relu", 0, False, True),
("use_recurrent_dropout", "tanh", "sigmoid", 0.1, False, True),
("unroll", "tanh", "sigmoid", 0, True, True),
("not_use_bias", "tanh", "sigmoid", 0, False, False),
)
@test_utils.run_v2_only
def test_could_use_defun_backend(
self,
activation,
recurrent_activation,
recurrent_dropout,
unroll,
use_bias,
):
layer = keras.layers.LSTM(
1,
activation=activation,
recurrent_activation=recurrent_activation,
recurrent_dropout=recurrent_dropout,
unroll=unroll,
use_bias=use_bias,
)
self.assertFalse(layer._could_use_gpu_kernel)
@test_utils.run_v2_only
def test_use_on_default_activation_with_gpu_kernel(self):
layer = keras.layers.LSTM(1, activation=tf.tanh)
self.assertTrue(layer._could_use_gpu_kernel)
layer = keras.layers.LSTM(1, recurrent_activation=tf.sigmoid)
self.assertTrue(layer._could_use_gpu_kernel)
def test_static_shape_inference_LSTM(self):
# GitHub issue: 15165
timesteps = 3
embedding_dim = 4
units = 2
model = keras.models.Sequential()
inputs = keras.layers.Dense(
embedding_dim, input_shape=(timesteps, embedding_dim)
)
model.add(inputs)
layer = keras.layers.LSTM(units, return_sequences=True)
model.add(layer)
outputs = model.layers[-1].output
self.assertEqual(outputs.shape.as_list(), [None, timesteps, units])
def test_dynamic_behavior_LSTM(self):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
layer = keras.layers.LSTM(units, input_shape=(None, embedding_dim))
model = keras.models.Sequential()
model.add(layer)
model.compile(tf.compat.v1.train.GradientDescentOptimizer(0.001), "mse")
x = np.random.random((num_samples, timesteps, embedding_dim))
y = np.random.random((num_samples, units))
model.train_on_batch(x, y)
def test_stacking_LSTM(self):
inputs = np.random.random((2, 3, 4))
targets = np.abs(np.random.random((2, 3, 5)))
targets /= targets.sum(axis=-1, keepdims=True)
model = keras.models.Sequential()
model.add(keras.layers.LSTM(10, return_sequences=True, unroll=False))
model.add(keras.layers.LSTM(5, return_sequences=True, unroll=False))
model.compile(
loss="categorical_crossentropy",
optimizer=tf.compat.v1.train.GradientDescentOptimizer(0.01),
)
model.fit(inputs, targets, epochs=1, batch_size=2, verbose=1)
def test_from_config_LSTM(self):
layer_class = keras.layers.LSTM
for stateful in (False, True):
l1 = layer_class(units=1, stateful=stateful)
l2 = layer_class.from_config(l1.get_config())
assert l1.get_config() == l2.get_config()
def test_specify_initial_state_keras_tensor(self):
num_states = 2
timesteps = 3
embedding_dim = 4
units = 3
num_samples = 2
# Test with TF-Keras tensor
inputs = keras.Input((timesteps, embedding_dim))
initial_state = [keras.Input((units,)) for _ in range(num_states)]
layer = keras.layers.LSTM(units)
if len(initial_state) == 1:
output = layer(inputs, initial_state=initial_state[0])
else:
output = layer(inputs, initial_state=initial_state)
self.assertTrue(
any(
initial_state[0] is t
for t in layer._inbound_nodes[0].input_tensors
)
)
model = keras.models.Model([inputs] + initial_state, output)
model.compile(
loss="categorical_crossentropy",
optimizer=tf.compat.v1.train.GradientDescentOptimizer(0.01),
)
inputs = np.random.random((num_samples, timesteps, embedding_dim))
initial_state = [
np.random.random((num_samples, units)) for _ in range(num_states)
]
targets = np.random.random((num_samples, units))
model.train_on_batch([inputs] + initial_state, targets)
def test_specify_initial_state_non_keras_tensor(self):
num_states = 2
timesteps = 3
embedding_dim = 4
units = 3
num_samples = 2
# Test with non-Keras tensor
inputs = keras.Input((timesteps, embedding_dim))
initial_state = [
keras.backend.random_normal_variable((num_samples, units), 0, 1)
for _ in range(num_states)
]
layer = keras.layers.LSTM(units)
output = layer(inputs, initial_state=initial_state)
model = keras.models.Model(inputs, output)
model.compile(
loss="categorical_crossentropy",
optimizer=tf.compat.v1.train.GradientDescentOptimizer(0.01),
)
inputs = np.random.random((num_samples, timesteps, embedding_dim))
targets = np.random.random((num_samples, units))
model.train_on_batch(inputs, targets)
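    # The stateful test below exercises reset_states() with its default
    # zero values and with explicitly supplied arrays, rejects mismatched
    # values with a ValueError, and verifies that assigning Variables to
    # `states` never adds them to `layer.weights`.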
def test_reset_states_with_values(self):
num_states = 2
timesteps = 3
embedding_dim = 4
units = 3
num_samples = 2
layer = keras.layers.LSTM(units, stateful=True)
layer.build((num_samples, timesteps, embedding_dim))
initial_weight_count = len(layer.weights)
layer.reset_states()
assert len(layer.states) == num_states
assert layer.states[0] is not None
self.assertAllClose(
keras.backend.eval(layer.states[0]),
np.zeros(keras.backend.int_shape(layer.states[0])),
atol=1e-4,
)
state_shapes = [
keras.backend.int_shape(state) for state in layer.states
]
values = [np.ones(shape) for shape in state_shapes]
if len(values) == 1:
values = values[0]
layer.reset_states(values)
self.assertAllClose(
keras.backend.eval(layer.states[0]),
np.ones(keras.backend.int_shape(layer.states[0])),
atol=1e-4,
)
# Test with invalid data
with self.assertRaises(ValueError):
layer.reset_states([1] * (len(layer.states) + 1))
self.assertEqual(initial_weight_count, len(layer.weights))
# Variables in "states" shouldn't show up in .weights
layer.states = tf.nest.map_structure(tf.Variable, values)
layer.reset_states()
self.assertEqual(initial_weight_count, len(layer.weights))
def test_specify_state_with_masking(self):
num_states = 2
timesteps = 3
embedding_dim = 4
units = 3
num_samples = 2
inputs = keras.Input((timesteps, embedding_dim))
_ = keras.layers.Masking()(inputs)
initial_state = [keras.Input((units,)) for _ in range(num_states)]
output = keras.layers.LSTM(units)(inputs, initial_state=initial_state)
model = keras.models.Model([inputs] + initial_state, output)
model.compile(
loss="categorical_crossentropy",
optimizer=tf.compat.v1.train.GradientDescentOptimizer(0.01),
)
inputs = np.random.random((num_samples, timesteps, embedding_dim))
initial_state = [
np.random.random((num_samples, units)) for _ in range(num_states)
]
targets = np.random.random((num_samples, units))
model.train_on_batch([inputs] + initial_state, targets)
@tf.test.disable_with_predicate(
pred=tf.test.is_built_with_rocm,
skip_message=(
"Skipping as ROCm MIOpen does not support padded input yet."
),
)
def test_return_state(self):
num_states = 2
timesteps = 3
embedding_dim = 4
units = 3
num_samples = 2
inputs = keras.Input(
batch_shape=(num_samples, timesteps, embedding_dim)
)
masked = keras.layers.Masking()(inputs)
layer = keras.layers.LSTM(units, return_state=True, stateful=True)
outputs = layer(masked)
state = outputs[1:]
assert len(state) == num_states
model = keras.models.Model(inputs, state[0])
inputs = np.random.random((num_samples, timesteps, embedding_dim))
state = model.predict(inputs)
self.assertAllClose(
keras.backend.eval(layer.states[0]), state, atol=1e-4
)
def test_state_reuse(self):
timesteps = 3
embedding_dim = 4
units = 3
num_samples = 2
inputs = keras.Input(
batch_shape=(num_samples, timesteps, embedding_dim)
)
layer = keras.layers.LSTM(
units, return_state=True, return_sequences=True
)
outputs = layer(inputs)
output, state = outputs[0], outputs[1:]
output = keras.layers.LSTM(units)(output, initial_state=state)
model = keras.models.Model(inputs, output)
inputs = np.random.random((num_samples, timesteps, embedding_dim))
model.predict(inputs)
def test_initial_states_as_other_inputs(self):
timesteps = 3
embedding_dim = 4
units = 3
num_samples = 2
num_states = 2
layer_class = keras.layers.LSTM
# Test with TF-Keras tensor
main_inputs = keras.Input((timesteps, embedding_dim))
initial_state = [keras.Input((units,)) for _ in range(num_states)]
inputs = [main_inputs] + initial_state
layer = layer_class(units)
output = layer(inputs)
self.assertTrue(
any(
initial_state[0] is t
for t in layer._inbound_nodes[0].input_tensors
)
)
model = keras.models.Model(inputs, output)
model.compile(
loss="categorical_crossentropy",
optimizer=tf.compat.v1.train.GradientDescentOptimizer(0.01),
)
main_inputs = np.random.random((num_samples, timesteps, embedding_dim))
initial_state = [
np.random.random((num_samples, units)) for _ in range(num_states)
]
targets = np.random.random((num_samples, units))
model.train_on_batch([main_inputs] + initial_state, targets)
@parameterized.named_parameters(("v0", 0), ("v1", 1), ("v2", 2))
@tf.test.disable_with_predicate(
pred=tf.test.is_built_with_rocm,
skip_message=(
"Skipping as ROCm MIOpen does not support padded input yet."
),
)
def test_implementation_mode_LSTM(self, implementation_mode):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
test_utils.layer_test(
keras.layers.LSTM,
kwargs={"units": units, "implementation": implementation_mode},
input_shape=(num_samples, timesteps, embedding_dim),
)
layer_class = keras.layers.LSTM
k_constraint = keras.constraints.max_norm(0.01)
r_constraint = keras.constraints.max_norm(0.01)
b_constraint = keras.constraints.max_norm(0.01)
layer = layer_class(
5,
return_sequences=False,
weights=None,
input_shape=(None, embedding_dim),
kernel_constraint=k_constraint,
recurrent_constraint=r_constraint,
bias_constraint=b_constraint,
)
layer.build((None, None, embedding_dim))
self.assertEqual(layer.cell.kernel.constraint, k_constraint)
self.assertEqual(layer.cell.recurrent_kernel.constraint, r_constraint)
self.assertEqual(layer.cell.bias.constraint, b_constraint)
layer_class = keras.layers.LSTM
inputs = np.random.random((2, 3, 4))
targets = np.abs(np.random.random((2, 3, 5)))
targets /= targets.sum(axis=-1, keepdims=True)
model = keras.models.Sequential()
model.add(keras.layers.Masking(input_shape=(3, 4)))
model.add(layer_class(units=5, return_sequences=True, unroll=False))
model.compile(
loss="categorical_crossentropy",
optimizer=tf.compat.v1.train.GradientDescentOptimizer(0.01),
)
model.fit(inputs, targets, epochs=1, batch_size=2, verbose=1)
@tf.test.disable_with_predicate(
pred=tf.test.is_built_with_rocm,
skip_message=(
"Skipping as ROCm MIOpen does not support padded input yet."
),
)
def test_masking_with_stacking_LSTM(self):
inputs = np.random.random((2, 3, 4))
targets = np.abs(np.random.random((2, 3, 5)))
targets /= targets.sum(axis=-1, keepdims=True)
model = keras.models.Sequential()
model.add(keras.layers.Masking(input_shape=(3, 4)))
model.add(keras.layers.LSTM(10, return_sequences=True, unroll=False))
model.add(keras.layers.LSTM(5, return_sequences=True, unroll=False))
model.compile(
loss="categorical_crossentropy",
optimizer=tf.compat.v1.train.GradientDescentOptimizer(0.01),
)
model.fit(inputs, targets, epochs=1, batch_size=2, verbose=1)
@parameterized.named_parameters(
        # test_name, use_bias, bias_initializer
("normal", True, "zeros"),
("no_bias", False, "zeros"),
("random_bias", True, "random_uniform"),
)
def test_lstm_model_save_load(self, use_bias, bias_initializer):
temp_dir = self.get_temp_dir()
self.addCleanup(shutil.rmtree, temp_dir)
h5_path = os.path.join(temp_dir, "test.h5")
batch = 10
timestep = 3
input_dim = 5
units = 2
x = np.random.random((batch, timestep, input_dim))
def build_model():
inputs = keras.layers.Input(
shape=[timestep, input_dim], dtype=tf.float32
)
layer = keras.layers.LSTM(
units, use_bias=use_bias, bias_initializer=bias_initializer
)
output = layer(inputs)
return keras.models.Model(inputs, output), layer
model, layer = build_model()
y_ref = model.predict(x)
model.save_weights(h5_path)
cloned_model, new_layer = build_model()
cloned_model.load_weights(h5_path)
y = cloned_model.predict(x)
self.assertAllClose(y, y_ref)
self.assertAllClose(layer.get_weights(), new_layer.get_weights())
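    # The next test builds the same LSTM on CPU and on GPU with shared
    # weights and expects numerically close predictions, i.e. the standard
    # kernel and the fused cuDNN kernel should be interchangeable.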
def test_lstm_output_on_multiple_kernel(self):
x_train = np.random.random(
(self.batch, self.timestep, self.input_shape)
)
inputs = keras.layers.Input(
shape=[self.timestep, self.input_shape], dtype=tf.float32
)
with test_utils.device(should_use_gpu=False):
layer = keras.layers.LSTM(self.rnn_state_size)
output = layer(inputs)
cpu_model = keras.models.Model(inputs, output)
weights = cpu_model.get_weights()
y_1 = cpu_model.predict(x_train)
with test_utils.device(should_use_gpu=True):
layer = keras.layers.LSTM(self.rnn_state_size)
output = layer(inputs)
gpu_model = keras.models.Model(inputs, output)
gpu_model.set_weights(weights)
y_2 = gpu_model.predict(x_train)
self.assertAllClose(y_1, y_2)
def test_return_sequences_LSTM(self):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
test_utils.layer_test(
keras.layers.LSTM,
kwargs={"units": units, "return_sequences": True},
input_shape=(num_samples, timesteps, embedding_dim),
)
@tf.test.disable_with_predicate(
pred=tf.test.is_built_with_rocm,
skip_message="Skipping as ROCm MIOpen does not support float64 yet.",
)
@test_utils.run_v2_only
def test_float64_LSTM(self):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
test_utils.layer_test(
keras.layers.LSTM,
kwargs={
"units": units,
"return_sequences": True,
"dtype": "float64",
},
input_shape=(num_samples, timesteps, embedding_dim),
input_dtype="float64",
)
def test_regularizers_LSTM(self):
embedding_dim = 4
layer_class = keras.layers.LSTM
layer = layer_class(
5,
return_sequences=False,
weights=None,
input_shape=(None, embedding_dim),
kernel_regularizer=keras.regularizers.l1(0.01),
recurrent_regularizer=keras.regularizers.l1(0.01),
bias_regularizer="l2",
activity_regularizer="l1",
)
layer.build((None, None, 2))
self.assertEqual(len(layer.losses), 3)
x = keras.backend.variable(np.ones((2, 3, 2)))
layer(x)
if tf.executing_eagerly():
self.assertEqual(len(layer.losses), 4)
else:
self.assertEqual(len(layer.get_losses_for(x)), 1)
@tf.test.disable_with_predicate(
pred=tf.test.is_built_with_rocm,
skip_message=(
"Skipping as ROCm MIOpen does not support padded input yet."
),
)
def test_statefulness_LSTM(self):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
layer_class = keras.layers.LSTM
model = keras.models.Sequential()
model.add(
keras.layers.Embedding(
4,
embedding_dim,
mask_zero=True,
input_length=timesteps,
batch_input_shape=(num_samples, timesteps),
)
)
layer = layer_class(
units, return_sequences=False, stateful=True, weights=None
)
model.add(layer)
model.compile(
optimizer=tf.compat.v1.train.GradientDescentOptimizer(0.01),
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
out1 = model.predict(np.ones((num_samples, timesteps)))
self.assertEqual(out1.shape, (num_samples, units))
# train once so that the states change
model.train_on_batch(
np.ones((num_samples, timesteps)), np.ones((num_samples, units))
)
out2 = model.predict(np.ones((num_samples, timesteps)))
# if the state is not reset, output should be different
self.assertNotEqual(out1.max(), out2.max())
# check that output changes after states are reset
# (even though the model itself didn't change)
layer.reset_states()
out3 = model.predict(np.ones((num_samples, timesteps)))
self.assertNotEqual(out2.max(), out3.max())
# check that container-level reset_states() works
model.reset_states()
out4 = model.predict(np.ones((num_samples, timesteps)))
self.assertAllClose(out3, out4, atol=1e-5)
# check that the call to `predict` updated the states
out5 = model.predict(np.ones((num_samples, timesteps)))
self.assertNotEqual(out4.max(), out5.max())
# Check masking
layer.reset_states()
left_padded_input = np.ones((num_samples, timesteps))
left_padded_input[0, :1] = 0
left_padded_input[1, :2] = 0
out6 = model.predict(left_padded_input)
layer.reset_states()
right_padded_input = np.ones((num_samples, timesteps))
right_padded_input[0, -1:] = 0
right_padded_input[1, -2:] = 0
out7 = model.predict(right_padded_input)
layer.reset_states()
mix_padded_input = np.ones((num_samples, timesteps))
mix_padded_input[0, 1] = 0
mix_padded_input[1, 0] = 0
mix_padded_input[1, 2] = 0
out8 = model.predict(mix_padded_input)
self.assertAllClose(out7, out6, atol=1e-5)
self.assertAllClose(out8, out7, atol=1e-5)
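    # Regression test below: fitting a stateful LSTM stacked on an
    # Embedding layer should run end to end without error (see the bug
    # reference inside the test).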
def test_stateful_LSTM_training(self):
# See b/123587692 for more context.
vocab_size = 20
embedding_dim = 10
batch_size = 8
timestep = 12
units = 5
x = np.random.randint(0, vocab_size, size=(batch_size, timestep))
y = np.random.randint(0, vocab_size, size=(batch_size, timestep))
model = keras.Sequential(
[
keras.layers.Embedding(
vocab_size,
embedding_dim,
batch_input_shape=[batch_size, timestep],
),
keras.layers.LSTM(units, return_sequences=True, stateful=True),
keras.layers.Dense(vocab_size),
]
)
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
run_eagerly=test_utils.should_run_eagerly(),
)
model.fit(x, y, epochs=1, shuffle=False)
def test_dropout_LSTM(self):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
test_utils.layer_test(
keras.layers.LSTM,
kwargs={"units": units, "dropout": 0.1, "recurrent_dropout": 0.1},
input_shape=(num_samples, timesteps, embedding_dim),
)
def test_bidirectional(self):
batch = 128
timestep = 20
vocab_size = 1000
model = keras.Sequential(
[
keras.layers.Embedding(vocab_size, 64),
keras.layers.Bidirectional(
keras.layers.LSTM(64, return_sequences=True)
),
keras.layers.Bidirectional(keras.layers.LSTM(32)),
keras.layers.Dense(64, activation="relu"),
keras.layers.Dense(1, activation="sigmoid"),
]
)
model.compile(
loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"]
)
x = np.random.randint(0, vocab_size, size=(batch, timestep))
y = np.random.randint(0, 1, size=(batch))
model.fit(x, y, epochs=1, shuffle=False)
model.evaluate(x, y)
model.predict(x)
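    # The next test checks that, with go_backwards=True and a strictly
    # right-padded mask, the masked outputs match those obtained by simply
    # trimming the padded timesteps off the input.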
@tf.test.disable_with_predicate(
pred=tf.test.is_built_with_rocm,
skip_message=(
"Skipping as ROCm MIOpen does not support padded input yet."
),
)
@test_utils.run_v2_only
def test_explicit_device_with_go_backward_and_mask(self):
batch_size = 8
timestep = 7
masksteps = 5
units = 4
inputs = np.random.randn(batch_size, timestep, units).astype(np.float32)
mask = np.ones((batch_size, timestep)).astype(bool)
mask[:, masksteps:] = 0
lstm_layer = keras.layers.LSTM(
units, return_sequences=True, go_backwards=True
)
with test_utils.device(should_use_gpu=True):
outputs_masked = lstm_layer(inputs, mask=tf.constant(mask))
outputs_trimmed = lstm_layer(inputs[:, :masksteps])
self.assertAllClose(outputs_masked[:, -masksteps:], outputs_trimmed)
@tf_test_util.enable_output_all_intermediates
def test_v1_session_behavior(self):
with tf.compat.v1.get_default_graph().as_default():
# See b/139132348 for more details.
x = np.random.uniform(size=(100, 4, 8))
y = np.random.uniform(size=(100, 1))
dataset = (
tf.data.Dataset.from_tensor_slices((x, y))
.shuffle(100)
.batch(32)
)
inp = keras.layers.Input(shape=(4, 8))
layer = keras.layers.LSTM(1)(inp)
layer = keras.layers.Dense(1)(layer)
model = keras.models.Model(inp, layer)
model.compile(loss="mse", optimizer="sgd")
model.fit(dataset)
def test_with_fully_masked_inputs(self):
num_samples = 8
timestep = 5
embedding_dim = 4
vocab_size = 20
units = 2
inputs = np.random.randint(0, vocab_size, size=(num_samples, timestep))
        # Zero out the first sample so it is fully masked (mask_zero=True).
inputs[0, :] = 0.0
model = keras.models.Sequential()
model.add(
keras.layers.Embedding(
vocab_size,
embedding_dim,
mask_zero=True,
input_length=timestep,
batch_input_shape=(num_samples, timestep),
)
)
layer = keras.layers.LSTM(units)
model.add(layer)
model.compile(
optimizer=tf.compat.v1.train.GradientDescentOptimizer(0.01),
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
# Make sure it doesn't crash with cudnn kernel.
model.predict(inputs)
# TODO (b/169895267): test with xla_gpu is disabled.
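    # deepcopy should work both before the layer is built (the copies then
    # initialize their weights independently, so outputs differ) and after
    # it is built (weights are copied by value, so outputs match).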
def test_deepcopy(self):
if not tf.executing_eagerly():
self.skipTest("v2-only test")
original_layer = keras.layers.LSTM(5)
copied_layer = copy.deepcopy(original_layer)
self.assertEqual(copied_layer.units, 5)
self.assertEqual(
            original_layer.get_config(), copied_layer.get_config()
)
# Copy layer before layer call on inputs without weight initialization.
inputs = np.random.normal(size=[32, 10, 8]).astype(np.float32)
original_layer = keras.layers.LSTM(4)
copied_layer = copy.deepcopy(original_layer)
outputs = original_layer(inputs)
copied_outputs = copied_layer(inputs)
self.assertNotAllClose(
self.evaluate(outputs), self.evaluate(copied_outputs)
)
# Copy layer after layer call on inputs with weight initialization.
original_layer = keras.layers.LSTM(4)
outputs = original_layer(inputs)
copied_layer = copy.deepcopy(original_layer)
copied_outputs = copied_layer(inputs)
self.assertAllClose(
self.evaluate(outputs), self.evaluate(copied_outputs)
)
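    # Helper for the runtime tests below: the model's second output reports
    # which kernel the layer actually ran, and it is compared against
    # RUNTIME_GPU or RUNTIME_CPU depending on GPU availability.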
def _test_runtime_with_model(self, model):
(x_train, y_train), _ = test_utils.get_test_data(
train_samples=self.batch,
test_samples=0,
input_shape=(self.timestep, self.input_shape),
num_classes=self.output_shape,
)
y_train = np_utils.to_categorical(y_train, self.output_shape)
model.compile(
optimizer="sgd",
loss=["categorical_crossentropy", None],
run_eagerly=test_utils.should_run_eagerly(),
)
existing_loss = 0
for _ in range(self.epoch):
history = model.fit(x_train, y_train)
loss_value = history.history["loss"][0]
self.assertNotEqual(existing_loss, loss_value)
existing_loss = loss_value
_, runtime_value = model.predict(x_train)
if not tf.sysconfig.get_build_info()["is_rocm_build"]:
if tf.test.is_gpu_available():
self.assertEqual(runtime_value[0], gru_lstm_utils.RUNTIME_GPU)
else:
self.assertEqual(runtime_value[0], gru_lstm_utils.RUNTIME_CPU)
@test_utils.run_v2_only
def test_LSTM_runtime(self):
layer = keras.layers.LSTM(self.rnn_state_size, return_runtime=True)
inputs = keras.layers.Input(
shape=[self.timestep, self.input_shape], dtype=tf.float32
)
outputs, runtime = layer(inputs)
# Expand the runtime so that it is a 1D tensor instead of scalar.
        # TF model does not work with scalar model output, especially during
# aggregation.
runtime = keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1))(
runtime
)
model = keras.models.Model(inputs=inputs, outputs=[outputs, runtime])
self._test_runtime_with_model(model)
@tf.test.disable_with_predicate(
pred=tf.test.is_built_with_rocm,
skip_message=(
"Skipping as ROCm MIOpen does not support padded input yet."
),
)
@test_utils.run_v2_only
def test_LSTM_runtime_with_mask(self):
# Masking will affect which backend is selected based on whether the
# mask is strictly right padded.
layer = keras.layers.LSTM(self.rnn_state_size, return_runtime=True)
inputs = keras.layers.Input(
shape=[self.timestep, self.input_shape], dtype=tf.float32
)
masked_inputs = keras.layers.Masking()(inputs)
outputs, runtime = layer(masked_inputs)
# Expand the runtime so that it is a 1D tensor instead of scalar.
        # TF model does not work with scalar model output, especially during
# aggregation.
runtime = keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1))(
runtime
)
model = keras.models.Model(inputs=inputs, outputs=[outputs, runtime])
(x_train, y_train), _ = test_utils.get_test_data(
train_samples=self.batch,
test_samples=0,
input_shape=(self.timestep, self.input_shape),
num_classes=self.output_shape,
)
y_train = np_utils.to_categorical(y_train, self.output_shape)
model.compile(
optimizer="sgd",
loss=["categorical_crossentropy", None],
run_eagerly=test_utils.should_run_eagerly(),
)
model.fit(x_train, y_train)
# Verify unpadded data.
_, runtime_value = model.predict(x_train)
if tf.test.is_gpu_available():
self.assertEqual(runtime_value[0], gru_lstm_utils.RUNTIME_GPU)
else:
self.assertEqual(runtime_value[0], gru_lstm_utils.RUNTIME_CPU)
# Update x/y to be right padded by setting the last timestep to 0
x_train[:, -1, :] = 0
y_train[:, -1] = 0
_, runtime_value = model.predict(x_train)
if tf.test.is_gpu_available():
self.assertEqual(runtime_value[0], gru_lstm_utils.RUNTIME_GPU)
else:
self.assertEqual(runtime_value[0], gru_lstm_utils.RUNTIME_CPU)
# Further update x/y to be mix padded (masks in the middle), and verify
# only cpu kernel can be selected.
x_train[:, -3, :] = 0
y_train[:, -3] = 0
_, runtime_value = model.predict(x_train)
self.assertEqual(runtime_value[0], gru_lstm_utils.RUNTIME_CPU)
@test_utils.run_v2_only
def test_LSTM_runtime_with_cond(self):
# This test is to demonstrate the graph rewrite of grappler plugin under
# the condition that the function returns different number of internal
# states.
layer = keras.layers.LSTM(self.rnn_state_size, return_runtime=True)
inputs = keras.layers.Input(
shape=[self.timestep, self.input_shape], dtype=tf.float32
)
zeros = tf.zeros([self.batch, self.output_shape])
dummy_runtime = gru_lstm_utils.runtime(gru_lstm_utils.RUNTIME_UNKNOWN)
a = tf.constant(0)
b = tf.constant(1)
# Will always run the lstm layer.
outputs, runtime = tf.cond(
tf.less(a, b), lambda: layer(inputs), lambda: (zeros, dummy_runtime)
)
# Expand the runtime so that it is a 1D tensor instead of scalar.
        # TF model does not work with scalar model output, especially during
# aggregation.
runtime = keras.layers.Lambda(lambda x: tf.expand_dims(x, axis=-1))(
runtime
)
model = keras.models.Model(inputs=inputs, outputs=[outputs, runtime])
self._test_runtime_with_model(model)
@test_combinations.run_all_keras_modes
class LSTMLayerTest(test_combinations.TestCase):
def test_return_sequences_LSTM(self):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
test_utils.layer_test(
keras.layers.LSTM,
kwargs={"units": units, "return_sequences": True},
input_shape=(num_samples, timesteps, embedding_dim),
)
@tf.test.disable_with_predicate(
pred=tf.test.is_built_with_rocm,
skip_message="Double type is yet not supported in ROCm",
)
@test_utils.run_v2_only
def test_float64_LSTM(self):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
test_utils.layer_test(
keras.layers.LSTM,
kwargs={
"units": units,
"return_sequences": True,
"dtype": "float64",
},
input_shape=(num_samples, timesteps, embedding_dim),
input_dtype="float64",
)
def test_static_shape_inference_LSTM(self):
# GitHub issue: 15165
timesteps = 3
embedding_dim = 4
units = 2
model = keras.models.Sequential()
inputs = keras.layers.Dense(
embedding_dim, input_shape=(timesteps, embedding_dim)
)
model.add(inputs)
layer = keras.layers.LSTM(units, return_sequences=True)
model.add(layer)
outputs = model.layers[-1].output
self.assertEqual(outputs.shape.as_list(), [None, timesteps, units])
def test_dynamic_behavior_LSTM(self):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
layer = keras.layers.LSTM(units, input_shape=(None, embedding_dim))
model = keras.models.Sequential()
model.add(layer)
model.compile(
"rmsprop", "mse", run_eagerly=test_utils.should_run_eagerly()
)
x = np.random.random((num_samples, timesteps, embedding_dim))
y = np.random.random((num_samples, units))
model.train_on_batch(x, y)
def test_dropout_LSTM(self):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
test_utils.layer_test(
keras.layers.LSTM,
kwargs={"units": units, "dropout": 0.1, "recurrent_dropout": 0.1},
input_shape=(num_samples, timesteps, embedding_dim),
)
def test_recurrent_dropout_with_implementation_restriction(self):
layer = keras.layers.LSTM(2, recurrent_dropout=0.1, implementation=2)
        # The implementation is forced to 1 due to the limitation of
        # recurrent_dropout.
self.assertEqual(layer.implementation, 1)
@test_utils.run_v2_only
def test_dropout_variable_name(self):
layer = keras.layers.RNN(
keras.layers.LSTMCell(2, dropout=0.1, force_generator=True)
)
layer(np.random.random((2, 3, 4)))
self.assertEqual(
layer.cell._random_generator._generator._state_var.name,
"rnn/lstm_cell/StateVar:0",
)
layer = keras.layers.LSTM(2, dropout=0.1, force_generator=True)
layer(np.random.random((2, 3, 4)))
self.assertEqual(
layer._random_generator._generator._state_var.name,
"lstm/StateVar:0",
)
@parameterized.parameters([0, 1, 2])
def test_implementation_mode_LSTM(self, implementation_mode):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
test_utils.layer_test(
keras.layers.LSTM,
kwargs={"units": units, "implementation": implementation_mode},
input_shape=(num_samples, timesteps, embedding_dim),
)
def test_constraints_LSTM(self):
embedding_dim = 4
layer_class = keras.layers.LSTM
k_constraint = keras.constraints.max_norm(0.01)
r_constraint = keras.constraints.max_norm(0.01)
b_constraint = keras.constraints.max_norm(0.01)
layer = layer_class(
5,
return_sequences=False,
weights=None,
input_shape=(None, embedding_dim),
kernel_constraint=k_constraint,
recurrent_constraint=r_constraint,
bias_constraint=b_constraint,
)
layer.build((None, None, embedding_dim))
self.assertEqual(layer.cell.kernel.constraint, k_constraint)
self.assertEqual(layer.cell.recurrent_kernel.constraint, r_constraint)
self.assertEqual(layer.cell.bias.constraint, b_constraint)
@parameterized.parameters([True, False])
@tf.test.disable_with_predicate(
pred=tf.test.is_built_with_rocm,
skip_message="Skipping as ROCm MIOpen does not support padded input.",
)
def test_with_masking_layer_LSTM(self, unroll):
layer_class = keras.layers.LSTM
inputs = np.random.random((2, 3, 4))
targets = np.abs(np.random.random((2, 3, 5)))
targets /= targets.sum(axis=-1, keepdims=True)
model = keras.models.Sequential()
model.add(keras.layers.Masking(input_shape=(3, 4)))
model.add(layer_class(units=5, return_sequences=True, unroll=unroll))
model.compile(
loss="categorical_crossentropy",
optimizer="rmsprop",
run_eagerly=test_utils.should_run_eagerly(),
)
model.fit(inputs, targets, epochs=1, batch_size=2, verbose=1)
@parameterized.parameters([True, False])
def test_masking_with_stacking_LSTM(self, unroll):
inputs = np.random.random((2, 3, 4))
targets = np.abs(np.random.random((2, 3, 5)))
targets /= targets.sum(axis=-1, keepdims=True)
model = keras.models.Sequential()
model.add(keras.layers.Masking(input_shape=(3, 4)))
lstm_cells = [keras.layers.LSTMCell(10), keras.layers.LSTMCell(5)]
model.add(
keras.layers.RNN(lstm_cells, return_sequences=True, unroll=unroll)
)
model.compile(
loss="categorical_crossentropy",
optimizer="rmsprop",
run_eagerly=test_utils.should_run_eagerly(),
)
model.fit(inputs, targets, epochs=1, batch_size=2, verbose=1)
def test_from_config_LSTM(self):
layer_class = keras.layers.LSTM
for stateful in (False, True):
l1 = layer_class(units=1, stateful=stateful)
l2 = layer_class.from_config(l1.get_config())
assert l1.get_config() == l2.get_config()
def test_deep_copy_LSTM(self):
cell = keras.layers.LSTMCell(5)
copied_cell = copy.deepcopy(cell)
self.assertEqual(copied_cell.units, 5)
self.assertEqual(cell.get_config(), copied_cell.get_config())
def test_specify_initial_state_keras_tensor(self):
num_states = 2
timesteps = 3
embedding_dim = 4
units = 3
num_samples = 2
# Test with TF-Keras tensor
inputs = keras.Input((timesteps, embedding_dim))
initial_state = [keras.Input((units,)) for _ in range(num_states)]
layer = keras.layers.LSTM(units)
if len(initial_state) == 1:
output = layer(inputs, initial_state=initial_state[0])
else:
output = layer(inputs, initial_state=initial_state)
self.assertTrue(
any(
initial_state[0] is t
for t in layer._inbound_nodes[0].input_tensors
)
)
model = keras.models.Model([inputs] + initial_state, output)
model.compile(
loss="categorical_crossentropy",
optimizer=tf.compat.v1.train.AdamOptimizer(),
run_eagerly=test_utils.should_run_eagerly(),
)
inputs = np.random.random((num_samples, timesteps, embedding_dim))
initial_state = [
np.random.random((num_samples, units)) for _ in range(num_states)
]
targets = np.random.random((num_samples, units))
model.train_on_batch([inputs] + initial_state, targets)
def test_specify_initial_state_non_keras_tensor(self):
num_states = 2
timesteps = 3
embedding_dim = 4
units = 3
num_samples = 2
# Test with non-Keras tensor
inputs = keras.Input((timesteps, embedding_dim))
initial_state = [
keras.backend.random_normal_variable((num_samples, units), 0, 1)
for _ in range(num_states)
]
layer = keras.layers.LSTM(units)
output = layer(inputs, initial_state=initial_state)
model = keras.models.Model(inputs, output)
model.compile(
loss="categorical_crossentropy",
optimizer=tf.compat.v1.train.AdamOptimizer(),
run_eagerly=test_utils.should_run_eagerly(),
)
inputs = np.random.random((num_samples, timesteps, embedding_dim))
targets = np.random.random((num_samples, units))
model.train_on_batch(inputs, targets)
def test_reset_states_with_values(self):
num_states = 2
timesteps = 3
embedding_dim = 4
units = 3
num_samples = 2
layer = keras.layers.LSTM(units, stateful=True)
layer.build((num_samples, timesteps, embedding_dim))
layer.reset_states()
assert len(layer.states) == num_states
assert layer.states[0] is not None
self.assertAllClose(
keras.backend.eval(layer.states[0]),
np.zeros(keras.backend.int_shape(layer.states[0])),
atol=1e-4,
)
state_shapes = [
keras.backend.int_shape(state) for state in layer.states
]
values = [np.ones(shape) for shape in state_shapes]
if len(values) == 1:
values = values[0]
layer.reset_states(values)
self.assertAllClose(
keras.backend.eval(layer.states[0]),
np.ones(keras.backend.int_shape(layer.states[0])),
atol=1e-4,
)
# Test with invalid data
with self.assertRaises(ValueError):
layer.reset_states([1] * (len(layer.states) + 1))
def test_specify_state_with_masking(self):
num_states = 2
timesteps = 3
embedding_dim = 4
units = 3
num_samples = 2
inputs = keras.Input((timesteps, embedding_dim))
_ = keras.layers.Masking()(inputs)
initial_state = [keras.Input((units,)) for _ in range(num_states)]
output = keras.layers.LSTM(units)(inputs, initial_state=initial_state)
model = keras.models.Model([inputs] + initial_state, output)
model.compile(
loss="categorical_crossentropy",
optimizer="rmsprop",
run_eagerly=test_utils.should_run_eagerly(),
)
inputs = np.random.random((num_samples, timesteps, embedding_dim))
initial_state = [
np.random.random((num_samples, units)) for _ in range(num_states)
]
targets = np.random.random((num_samples, units))
model.train_on_batch([inputs] + initial_state, targets)
def test_return_state(self):
num_states = 2
timesteps = 3
embedding_dim = 4
units = 3
num_samples = 2
inputs = keras.Input(
batch_shape=(num_samples, timesteps, embedding_dim)
)
layer = keras.layers.LSTM(units, return_state=True, stateful=True)
outputs = layer(inputs)
state = outputs[1:]
assert len(state) == num_states
model = keras.models.Model(inputs, state[0])
inputs = np.random.random((num_samples, timesteps, embedding_dim))
state = model.predict(inputs)
self.assertAllClose(
keras.backend.eval(layer.states[0]), state, atol=1e-4
)
def test_state_reuse(self):
timesteps = 3
embedding_dim = 4
units = 3
num_samples = 2
inputs = keras.Input(
batch_shape=(num_samples, timesteps, embedding_dim)
)
layer = keras.layers.LSTM(
units, return_state=True, return_sequences=True
)
outputs = layer(inputs)
output, state = outputs[0], outputs[1:]
output = keras.layers.LSTM(units)(output, initial_state=state)
model = keras.models.Model(inputs, output)
inputs = np.random.random((num_samples, timesteps, embedding_dim))
outputs = model.predict(inputs)
def test_initial_states_as_other_inputs(self):
timesteps = 3
embedding_dim = 4
units = 3
num_samples = 2
num_states = 2
layer_class = keras.layers.LSTM
# Test with TF-Keras tensor
main_inputs = keras.Input((timesteps, embedding_dim))
initial_state = [keras.Input((units,)) for _ in range(num_states)]
inputs = [main_inputs] + initial_state
layer = layer_class(units)
output = layer(inputs)
self.assertTrue(
any(
initial_state[0] is t
for t in layer._inbound_nodes[0].input_tensors
)
)
model = keras.models.Model(inputs, output)
model.compile(
loss="categorical_crossentropy",
optimizer=tf.compat.v1.train.AdamOptimizer(),
run_eagerly=test_utils.should_run_eagerly(),
)
main_inputs = np.random.random((num_samples, timesteps, embedding_dim))
initial_state = [
np.random.random((num_samples, units)) for _ in range(num_states)
]
targets = np.random.random((num_samples, units))
model.train_on_batch([main_inputs] + initial_state, targets)
def test_regularizers_LSTM(self):
embedding_dim = 4
layer_class = keras.layers.LSTM
layer = layer_class(
5,
return_sequences=False,
weights=None,
input_shape=(None, embedding_dim),
kernel_regularizer=keras.regularizers.l1(0.01),
recurrent_regularizer=keras.regularizers.l1(0.01),
bias_regularizer="l2",
activity_regularizer="l1",
)
layer.build((None, None, 2))
self.assertEqual(len(layer.losses), 3)
x = keras.backend.variable(np.ones((2, 3, 2)))
layer(x)
if tf.executing_eagerly():
self.assertEqual(len(layer.losses), 4)
else:
self.assertEqual(len(layer.get_losses_for(x)), 1)
@tf.test.disable_with_predicate(
pred=tf.test.is_built_with_rocm,
skip_message="Skipping as ROCm MIOpen does not support padded input.",
)
def test_statefulness_LSTM(self):
num_samples = 2
timesteps = 3
embedding_dim = 4
units = 2
layer_class = keras.layers.LSTM
model = keras.models.Sequential()
model.add(
keras.layers.Embedding(
4,
embedding_dim,
mask_zero=True,
input_length=timesteps,
batch_input_shape=(num_samples, timesteps),
)
)
layer = layer_class(
units, return_sequences=False, stateful=True, weights=None
)
model.add(layer)
model.compile(
optimizer=tf.compat.v1.train.GradientDescentOptimizer(0.01),
loss="mse",
run_eagerly=test_utils.should_run_eagerly(),
)
out1 = model.predict(np.ones((num_samples, timesteps)))
self.assertEqual(out1.shape, (num_samples, units))
# train once so that the states change
model.train_on_batch(
np.ones((num_samples, timesteps)), np.ones((num_samples, units))
)
out2 = model.predict(np.ones((num_samples, timesteps)))
# if the state is not reset, output should be different
self.assertNotEqual(out1.max(), out2.max())
# check that output changes after states are reset
# (even though the model itself didn't change)
layer.reset_states()
out3 = model.predict(np.ones((num_samples, timesteps)))
self.assertNotEqual(out2.max(), out3.max())
# check that container-level reset_states() works
model.reset_states()
out4 = model.predict(np.ones((num_samples, timesteps)))
self.assertAllClose(out3, out4, atol=1e-5)
# check that the call to `predict` updated the states
out5 = model.predict(np.ones((num_samples, timesteps)))
self.assertNotEqual(out4.max(), out5.max())
# Check masking
layer.reset_states()
left_padded_input = np.ones((num_samples, timesteps))
left_padded_input[0, :1] = 0
left_padded_input[1, :2] = 0
out6 = model.predict(left_padded_input)
layer.reset_states()
right_padded_input = np.ones((num_samples, timesteps))
right_padded_input[0, -1:] = 0
right_padded_input[1, -2:] = 0
out7 = model.predict(right_padded_input)
self.assertAllClose(out7, out6, atol=1e-5)
@test_utils.run_v2_only
def test_cloned_weight_names(self):
inp = keras.Input([None, 3])
rnn = keras.layers.LSTM(units=3)
model = keras.Model(inp, rnn(inp))
clone = keras.models.clone_model(model)
model_names = [x.name for x in model.weights]
clone_names = [x.name for x in clone.weights]
self.assertEqual(model_names, clone_names)
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/layers/rnn/lstm_test.py/0 | {
"file_path": "tf-keras/tf_keras/layers/rnn/lstm_test.py",
"repo_id": "tf-keras",
"token_count": 24910
} | 187 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for tf.layers.base."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import copy
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
from tf_keras import backend
from tf_keras.engine import base_layer as keras_base_layer
from tf_keras.engine import input_spec
from tf_keras.legacy_tf_layers import base as base_tf_layers
from tf_keras.legacy_tf_layers import core as core_tf_layers
from tf_keras.testing_infra import test_combinations
class BaseLayerTest(tf.test.TestCase, parameterized.TestCase):
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def testLayerProperties(self):
layer = base_tf_layers.Layer(name="my_layer")
self.assertEqual(layer.variables, [])
self.assertEqual(layer.trainable_variables, [])
self.assertEqual(layer.non_trainable_variables, [])
if not tf.executing_eagerly():
# updates, losses only supported in GRAPH mode
self.assertEqual(layer.updates, [])
self.assertEqual(layer.losses, [])
self.assertEqual(layer.built, False)
layer = base_tf_layers.Layer(name="my_layer", trainable=False)
self.assertEqual(layer.trainable, False)
# Assert that the layer was not instrumented as a TF-Keras layer
self.assertFalse(layer._instrumented_keras_api)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def testInt64Layer(self):
layer = base_tf_layers.Layer(name="my_layer", dtype="int64")
layer.add_weight("my_var", [2, 2])
self.assertEqual(layer.name, "my_layer")
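    # The next test contrasts weight naming: Keras-style layers scope
    # weights under the active name scope, while legacy tf.layers use the
    # layer's own name; keras_style_scope() makes a legacy layer follow the
    # Keras behavior.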
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def testKerasStyleAddWeight(self):
keras_layer = keras_base_layer.Layer(name="keras_layer")
with backend.name_scope("foo"):
keras_variable = keras_layer.add_weight(
"my_var", [2, 2], initializer=tf.compat.v1.zeros_initializer()
)
self.assertEqual(keras_variable.name, "foo/my_var:0")
with backend.name_scope("baz"):
old_style_layer = base_tf_layers.Layer(name="my_layer")
# Test basic variable creation.
variable = old_style_layer.add_weight(
"my_var", [2, 2], initializer=tf.compat.v1.zeros_initializer()
)
self.assertEqual(variable.name, "my_layer/my_var:0")
with base_tf_layers.keras_style_scope():
layer = base_tf_layers.Layer(name="my_layer")
# Assert that the layer was not instrumented as a TF-Keras layer
self.assertFalse(layer._instrumented_keras_api)
# Test basic variable creation.
with backend.name_scope("bar"):
variable = layer.add_weight(
"my_var", [2, 2], initializer=tf.compat.v1.zeros_initializer()
)
self.assertEqual(variable.name, "bar/my_var:0")
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def testAddWeight(self):
layer = base_tf_layers.Layer(name="my_layer")
# Test basic variable creation.
variable = layer.add_weight(
"my_var", [2, 2], initializer=tf.compat.v1.zeros_initializer()
)
self.assertEqual(variable.name, "my_layer/my_var:0")
self.assertEqual(layer.variables, [variable])
self.assertEqual(layer.trainable_variables, [variable])
self.assertEqual(layer.non_trainable_variables, [])
if not tf.executing_eagerly():
self.assertEqual(
layer.variables,
tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.TRAINABLE_VARIABLES
),
)
# Test non-trainable variable creation.
# layer.add_variable should work even outside `build` and `call`.
variable_2 = layer.add_weight(
"non_trainable_var",
[2, 2],
initializer=tf.compat.v1.zeros_initializer(),
trainable=False,
)
self.assertEqual(layer.variables, [variable, variable_2])
self.assertEqual(layer.trainable_variables, [variable])
self.assertEqual(layer.non_trainable_variables, [variable_2])
if not tf.executing_eagerly():
self.assertEqual(
len(
tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.TRAINABLE_VARIABLES
)
),
1,
)
regularizer = lambda x: tf.reduce_sum(x) * 1e-3
_ = layer.add_weight(
"reg_var",
[2, 2],
initializer=tf.compat.v1.zeros_initializer(),
regularizer=regularizer,
)
self.assertEqual(len(layer.losses), 1)
added_variable = [False]
# Test that sync `ON_READ` variables are defaulted to be non-trainable.
variable_3 = layer.add_weight(
"sync_on_read_var",
[2, 2],
initializer=tf.compat.v1.zeros_initializer(),
synchronization=tf.VariableSynchronization.ON_READ,
aggregation=tf.compat.v1.VariableAggregation.SUM,
)
self.assertEqual(
layer.non_trainable_variables, [variable_2, variable_3]
)
@tf.function
def function_adds_weight():
if not added_variable[0]:
layer.add_weight(
"reg_var_from_function",
[2, 2],
initializer=tf.compat.v1.zeros_initializer(),
regularizer=regularizer,
)
added_variable[0] = True
function_adds_weight()
self.assertEqual(len(layer.losses), 2)
def testInvalidTrainableSynchronizationCombination(self):
layer = base_tf_layers.Layer(name="my_layer")
with self.assertRaisesRegex(
ValueError,
"Synchronization value can be set to "
"VariableSynchronization.ON_READ only for non-trainable variables. "
"You have specified trainable=True and "
"synchronization=VariableSynchronization.ON_READ.",
):
_ = layer.add_weight(
"v",
[2, 2],
initializer=tf.compat.v1.zeros_initializer(),
synchronization=tf.VariableSynchronization.ON_READ,
trainable=True,
)
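    # In the partitioned-variable test below, fixed_size_partitioner(3)
    # splits the weight into 3 partitions, apparently yielding one
    # regularization loss per partition, and the reuse=True pass must not
    # add any more.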
def testReusePartitionedVariablesAndRegularizers(self):
with tf.Graph().as_default():
regularizer = lambda x: tf.reduce_sum(x) * 1e-3
partitioner = tf.compat.v1.fixed_size_partitioner(3)
for reuse in [False, True]:
with tf.compat.v1.variable_scope(
tf.compat.v1.get_variable_scope(),
partitioner=partitioner,
reuse=reuse,
):
layer = base_tf_layers.Layer(name="my_layer")
_ = layer.add_weight(
"reg_part_var",
[4, 4],
initializer=tf.compat.v1.zeros_initializer(),
regularizer=regularizer,
)
self.assertEqual(
len(
tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.REGULARIZATION_LOSSES
)
),
3,
)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def testCall(self):
class MyLayer(base_tf_layers.Layer):
def call(self, inputs):
return tf.square(inputs)
layer = MyLayer(name="my_layer")
inputs = tf.random.uniform((5,), seed=1)
outputs = layer(inputs)
self.assertEqual(layer.built, True)
if not tf.executing_eagerly():
# op is only supported in GRAPH mode
self.assertEqual(outputs.op.name, "my_layer/Square")
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def testDeepCopy(self):
class MyLayer(base_tf_layers.Layer):
def call(self, inputs):
return tf.square(inputs)
layer = MyLayer(name="my_layer")
layer._private_tensor = tf.random.uniform(())
inputs = tf.random.uniform((5,), seed=1)
outputs = layer(inputs)
self.assertEqual(layer.built, True)
if not tf.executing_eagerly():
# op only supported in GRAPH mode.
self.assertEqual(outputs.op.name, "my_layer/Square")
layer_copy = copy.deepcopy(layer)
self.assertEqual(layer_copy.name, layer.name)
self.assertEqual(layer_copy._scope.name, layer._scope.name)
self.assertEqual(layer_copy._private_tensor, layer._private_tensor)
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def testScopeNaming(self):
class PrivateLayer(base_tf_layers.Layer):
def call(self, inputs):
return inputs
inputs = tf.random.uniform((5,))
default_layer = PrivateLayer()
_ = default_layer(inputs)
self.assertEqual(default_layer._scope.name, "private_layer")
default_layer1 = PrivateLayer()
default_layer1(inputs)
self.assertEqual(default_layer1._scope.name, "private_layer_1")
my_layer = PrivateLayer(name="my_layer")
my_layer(inputs)
self.assertEqual(my_layer._scope.name, "my_layer")
my_layer1 = PrivateLayer(name="my_layer")
my_layer1(inputs)
self.assertEqual(my_layer1._scope.name, "my_layer_1")
my_layer2 = PrivateLayer(name="my_layer")
my_layer2(inputs)
self.assertEqual(my_layer2._scope.name, "my_layer_2")
# Name scope shouldn't affect names.
with backend.name_scope("some_name_scope"):
default_layer2 = PrivateLayer()
default_layer2(inputs)
self.assertEqual(default_layer2._scope.name, "private_layer_2")
my_layer3 = PrivateLayer(name="my_layer")
my_layer3(inputs)
self.assertEqual(my_layer3._scope.name, "my_layer_3")
other_layer = PrivateLayer(name="other_layer")
other_layer(inputs)
self.assertEqual(other_layer._scope.name, "other_layer")
# Variable scope gets added to scope names.
with tf.compat.v1.variable_scope("var_scope"):
default_layer_scoped = PrivateLayer()
default_layer_scoped(inputs)
self.assertEqual(
default_layer_scoped._scope.name, "var_scope/private_layer"
)
my_layer_scoped = PrivateLayer(name="my_layer")
my_layer_scoped(inputs)
self.assertEqual(my_layer_scoped._scope.name, "var_scope/my_layer")
my_layer_scoped1 = PrivateLayer(name="my_layer")
my_layer_scoped1(inputs)
self.assertEqual(
my_layer_scoped1._scope.name, "var_scope/my_layer_1"
)
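    # The InputSpec tests that follow share one pattern: the layer declares
    # a constraint (ndim, min_ndim, max_ndim, dtype, axes, or shape), an
    # incompatible input raises ValueError, and a freshly created layer
    # accepts compatible inputs (specs are only checked on the first call
    # in eager mode).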
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def testInputSpecNdimCheck(self):
class CustomerLayer(base_tf_layers.Layer):
def __init__(self):
super().__init__()
self.input_spec = input_spec.InputSpec(ndim=2)
def call(self, inputs):
return inputs
layer = CustomerLayer()
with self.assertRaisesRegex(ValueError, r"expected ndim=2"):
layer(tf.constant([1]))
# Note that we re-create the layer since in Eager mode, input spec
# checks only happen on first call.
# Works
layer = CustomerLayer()
layer(tf.constant([[1], [2]]))
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def testInputSpecMinNdimCheck(self):
class CustomLayer(base_tf_layers.Layer):
def __init__(self):
super().__init__()
self.input_spec = input_spec.InputSpec(min_ndim=2)
def call(self, inputs):
return inputs
layer = CustomLayer()
with self.assertRaisesRegex(ValueError, r"expected min_ndim=2"):
layer(tf.constant([1]))
# Works
layer = CustomLayer()
layer(tf.constant([[1], [2]]))
layer = CustomLayer()
layer(tf.constant([[[1], [2]]]))
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def testInputSpecMaxNdimCheck(self):
class CustomerLayer(base_tf_layers.Layer):
def __init__(self):
super().__init__()
self.input_spec = input_spec.InputSpec(max_ndim=2)
def call(self, inputs):
return inputs
layer = CustomerLayer()
with self.assertRaisesRegex(ValueError, r"expected max_ndim=2"):
layer(tf.constant([[[1], [2]]]))
# Works
layer = CustomerLayer()
layer(tf.constant([1]))
layer = CustomerLayer()
layer(tf.constant([[1], [2]]))
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def testInputSpecDtypeCheck(self):
class CustomerLayer(base_tf_layers.Layer):
def __init__(self):
super().__init__()
self.input_spec = input_spec.InputSpec(dtype="float32")
def call(self, inputs):
return inputs
layer = CustomerLayer()
with self.assertRaisesRegex(ValueError, r"expected dtype=float32"):
layer(tf.constant(1, dtype=tf.int32))
# Works
layer = CustomerLayer()
layer(tf.constant(1.0, dtype=tf.float32))
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def testInputSpecAxesCheck(self):
class CustomerLayer(base_tf_layers.Layer):
def __init__(self):
super().__init__()
self.input_spec = input_spec.InputSpec(axes={-1: 2})
def call(self, inputs):
return inputs
layer = CustomerLayer()
with self.assertRaisesRegex(ValueError, r"expected axis"):
layer(tf.constant([1, 2, 3]))
# Works
layer = CustomerLayer()
layer(tf.constant([1, 2]))
layer = CustomerLayer()
layer(tf.constant([[1, 2], [3, 4], [5, 6]]))
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def testInputSpecShapeCheck(self):
class CustomerLayer(base_tf_layers.Layer):
def __init__(self):
super().__init__()
self.input_spec = input_spec.InputSpec(shape=(None, 3))
def call(self, inputs):
return inputs
layer = CustomerLayer()
with self.assertRaisesRegex(ValueError, r"expected shape"):
layer(tf.constant([[1, 2]]))
# Works
layer = CustomerLayer()
layer(tf.constant([[1, 2, 3], [4, 5, 6]]))
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def testNoInputSpec(self):
class CustomerLayer(base_tf_layers.Layer):
def __init__(self):
super().__init__()
self.input_spec = None
def call(self, inputs):
return inputs
layer = CustomerLayer()
layer(tf.constant(1))
# Works
if not tf.executing_eagerly():
layer(tf.compat.v1.placeholder("int32"))
layer(tf.compat.v1.placeholder("int32", shape=(2, 3)))
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_count_params(self):
dense = core_tf_layers.Dense(16)
dense.build((None, 4))
self.assertEqual(dense.count_params(), 16 * 4 + 16)
dense = core_tf_layers.Dense(16)
with self.assertRaises(ValueError):
dense.count_params()
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def testDictInputOutput(self):
class DictLayer(base_tf_layers.Layer):
def call(self, inputs):
return {"l" + key: inputs[key] for key in inputs}
layer = DictLayer()
if tf.executing_eagerly():
i1 = tf.constant(3)
i2 = tf.constant(4.0)
result = layer({"abel": i1, "ogits": i2})
self.assertTrue(isinstance(result, dict))
self.assertEqual(set(["label", "logits"]), set(result.keys()))
self.assertEqual(3, result["label"].numpy())
self.assertEqual(4.0, result["logits"].numpy())
else:
i1 = tf.compat.v1.placeholder("int32")
i2 = tf.compat.v1.placeholder("float32")
result = layer({"abel": i1, "ogits": i2})
self.assertTrue(isinstance(result, dict))
self.assertEqual(set(["label", "logits"]), set(result.keys()))
def testActivityRegularizer(self):
with tf.Graph().as_default():
regularizer = tf.reduce_sum
layer = base_tf_layers.Layer(activity_regularizer=regularizer)
x = tf.compat.v1.placeholder("int32")
layer(x)
self.assertEqual(len(layer.get_losses_for(x)), 1)
def testNameScopeIsConsistentWithVariableScope(self):
# GitHub issue 13429.
class MyLayer(base_tf_layers.Layer):
def build(self, input_shape):
self.my_var = self.add_weight("my_var", (), tf.float32)
self.built = True
def call(self, inputs):
return tf.multiply(inputs, self.my_var, name="my_op")
def _gen_layer(x, name=None):
layer = MyLayer(name=name)
out = layer(x)
return layer, out
# unnamed layer
with tf.Graph().as_default():
x = tf.compat.v1.placeholder(tf.float32, (), "x")
layer, op = _gen_layer(x)
layer1, op1 = _gen_layer(op)
layer2, op2 = _gen_layer(op1)
self.assertEqual(layer.my_var.name, "my_layer/my_var:0")
self.assertEqual(op.name, "my_layer/my_op:0")
self.assertEqual(layer1.my_var.name, "my_layer_1/my_var:0")
self.assertEqual(op1.name, "my_layer_1/my_op:0")
self.assertEqual(layer2.my_var.name, "my_layer_2/my_var:0")
self.assertEqual(op2.name, "my_layer_2/my_op:0")
# name starts from zero
with tf.Graph().as_default():
x = tf.compat.v1.placeholder(tf.float32, (), "x")
layer, op = _gen_layer(x, name="name")
layer1, op1 = _gen_layer(op, name="name_1")
layer2, op2 = _gen_layer(op1, name="name_2")
self.assertEqual(layer.my_var.name, "name/my_var:0")
self.assertEqual(op.name, "name/my_op:0")
self.assertEqual(layer1.my_var.name, "name_1/my_var:0")
self.assertEqual(op1.name, "name_1/my_op:0")
self.assertEqual(layer2.my_var.name, "name_2/my_var:0")
self.assertEqual(op2.name, "name_2/my_op:0")
# name starts from one
with tf.Graph().as_default():
x = tf.compat.v1.placeholder(tf.float32, (), "x")
layer, op = _gen_layer(x, name="name_1")
layer1, op1 = _gen_layer(op, name="name_2")
layer2, op2 = _gen_layer(op1, name="name_3")
self.assertEqual(layer.my_var.name, "name_1/my_var:0")
self.assertEqual(op.name, "name_1/my_op:0")
self.assertEqual(layer1.my_var.name, "name_2/my_var:0")
self.assertEqual(op1.name, "name_2/my_op:0")
self.assertEqual(layer2.my_var.name, "name_3/my_var:0")
self.assertEqual(op2.name, "name_3/my_op:0")
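    # The next test simulates a function-building graph by setting
    # _building_function on a nested graph and checks that variables
    # created there are lifted into the outer graph while still being
    # tracked by the layer.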
def testVariablesAreLiftedFromFunctionBuildingGraphs(self):
class MyLayer(base_tf_layers.Layer):
def build(self, input_shape):
self.my_var = self.add_weight("my_var", (), tf.float32)
self.built = True
def call(self, inputs):
return inputs
outer_graph = tf.compat.v1.get_default_graph()
function_building_graph = tf.Graph()
function_building_graph._building_function = True
with outer_graph.as_default():
with function_building_graph.as_default():
layer = MyLayer()
# Create a variable by invoking build through __call__ and
# assert that it is both tracked and lifted into the outer
# graph.
inputs = tf.compat.v1.placeholder(tf.float32, (), "inputs")
layer(inputs)
self.assertEqual(len(layer.variables), 1)
self.assertEqual(len(layer.trainable_variables), 1)
self.assertEqual(layer.variables[0].graph, outer_graph)
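    # The two tests below distinguish unconditional updates/losses (added
    # in build() and retrieved with get_updates_for(None) or
    # get_losses_for(None)) from conditional ones tied to the tensors of a
    # particular call.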
def testGetUpdateFor(self):
class MyLayer(base_tf_layers.Layer):
def build(self, input_shape):
self.a = self.add_weight("a", (), tf.float32, trainable=False)
self.b = self.add_weight("b", (), tf.float32, trainable=False)
self.add_update(
tf.compat.v1.assign_add(self.a, 1.0, name="b_update")
)
self.built = True
def call(self, inputs):
self.add_update(
tf.compat.v1.assign_add(self.a, inputs, name="a_update")
)
return inputs + 1
with tf.Graph().as_default():
layer = MyLayer()
inputs = tf.compat.v1.placeholder(tf.float32, (), "inputs")
intermediate_inputs = inputs + 1
outputs = layer(intermediate_inputs)
self.assertEqual(len(layer.updates), 2)
self.assertEqual(len(layer.get_updates_for(None)), 1)
self.assertEqual(len(layer.get_updates_for([inputs])), 1)
self.assertEqual(
len(layer.get_updates_for([intermediate_inputs])), 1
)
self.assertEqual(len(layer.get_updates_for([outputs])), 0)
# Call same layer on new input, creating one more conditional update
inputs = tf.compat.v1.placeholder(tf.float32, (), "inputs")
intermediate_inputs = inputs + 1
outputs = layer(intermediate_inputs)
self.assertEqual(len(layer.updates), 3)
self.assertEqual(len(layer.get_updates_for(None)), 1)
# Check that we are successfully filtering out irrelevant updates
self.assertEqual(len(layer.get_updates_for([inputs])), 1)
self.assertEqual(
len(layer.get_updates_for([intermediate_inputs])), 1
)
self.assertEqual(len(layer.get_updates_for([outputs])), 0)
def testGetLossesFor(self):
class MyLayer(base_tf_layers.Layer):
def build(self, input_shape):
self.a = self.add_weight("a", (), tf.float32, trainable=False)
self.b = self.add_weight("b", (), tf.float32, trainable=False)
self.add_loss(self.a)
self.built = True
def call(self, inputs):
self.add_loss(inputs, inputs=True)
return inputs + 1
with tf.Graph().as_default():
layer = MyLayer()
inputs = tf.compat.v1.placeholder(tf.float32, (), "inputs")
intermediate_inputs = inputs + 1
outputs = layer(intermediate_inputs)
self.assertEqual(len(layer.losses), 2)
self.assertEqual(len(layer.get_losses_for(None)), 1)
self.assertEqual(len(layer.get_losses_for([inputs])), 1)
self.assertEqual(
len(layer.get_losses_for([intermediate_inputs])), 1
)
self.assertEqual(len(layer.get_losses_for([outputs])), 0)
# Call same layer on new input, creating one more conditional loss
inputs = tf.compat.v1.placeholder(tf.float32, (), "inputs")
intermediate_inputs = inputs + 1
outputs = layer(intermediate_inputs)
self.assertEqual(len(layer.losses), 3)
self.assertEqual(len(layer.get_losses_for(None)), 1)
# Check that we are successfully filtering out irrelevant losses
self.assertEqual(len(layer.get_losses_for([inputs])), 1)
self.assertEqual(
len(layer.get_losses_for([intermediate_inputs])), 1
)
self.assertEqual(len(layer.get_losses_for([outputs])), 0)
class IdentityLayer(base_tf_layers.Layer):
"""A layer returns the identity of it's input."""
def call(self, inputs):
return inputs
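# The DTypeTest cases below rely on IdentityLayer having no explicit dtype:
# the layer's dtype is inferred from the first input it sees and then stays
# fixed, unless a dtype is passed to the constructor; inputs are never cast
# by these legacy layers.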
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class DTypeTest(tf.test.TestCase, parameterized.TestCase):
def _const(self, dtype):
return tf.constant(1, dtype=dtype)
def test_dtype_inferred_from_input(self):
# Test with Tensor input
layer = IdentityLayer()
self.assertIsNone(layer.dtype)
layer(self._const("float64"))
self.assertEqual(layer.dtype, "float64")
# Test with Numpy input
layer = IdentityLayer()
self.assertIsNone(layer.dtype)
layer(np.array(1.0, dtype="float64"))
self.assertEqual(layer.dtype, "float64")
# Test with integer input
layer = IdentityLayer()
self.assertIsNone(layer.dtype)
layer(self._const("int32"))
self.assertEqual(layer.dtype, "int32")
# Test layer dtype doesn't change when passed a new dtype
layer = IdentityLayer()
self.assertIsNone(layer.dtype)
layer(self._const("float64"))
self.assertEqual(layer.dtype, "float64")
layer(self._const("float16"))
self.assertEqual(layer.dtype, "float64")
# Test layer dtype inferred from first input
layer = IdentityLayer()
layer([self._const("float32"), self._const("float64")])
self.assertEqual(layer.dtype, "float32")
def test_passing_dtype_to_constructor(self):
layer = IdentityLayer(dtype="float64")
layer(self._const("float32"))
self.assertEqual(layer.dtype, "float64")
layer = IdentityLayer(dtype="int32")
layer(self._const("float32"))
self.assertEqual(layer.dtype, "int32")
layer = IdentityLayer(dtype=tf.float64)
layer(self._const("float32"))
self.assertEqual(layer.dtype, "float64")
def test_inputs_not_casted(self):
layer = IdentityLayer(dtype="float32")
self.assertEqual(layer(self._const("float64")).dtype, "float64")
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/legacy_tf_layers/base_test.py/0 | {
"file_path": "tf-keras/tf_keras/legacy_tf_layers/base_test.py",
"repo_id": "tf-keras",
"token_count": 13533
} | 188 |