# Keras layers API
Layers are the basic building blocks of neural networks in Keras.
A layer consists of a tensor-in tensor-out computation function (the layer's `call` method)
and some state, held in TensorFlow variables (the layer's *weights*).
A Layer instance is callable, much like a function:
```python
import keras
from keras import layers
layer = layers.Dense(32, activation='relu')
inputs = keras.random.uniform(shape=(10, 20))
outputs = layer(inputs)
```
Unlike a function, though, layers maintain a state, updated when the layer receives data
during training, and stored in `layer.weights`:
```python
>>> layer.weights
[<KerasVariable shape=(20, 32), dtype=float32, path=dense/kernel>,
<KerasVariable shape=(32,), dtype=float32, path=dense/bias>]
```
---
## Creating custom layers
While Keras offers a wide range of built-in layers, they don't cover
every possible use case. Creating custom layers is very common, and very easy.
See the guide
[Making new layers and models via subclassing](/guides/making_new_layers_and_models_via_subclassing)
for an extensive overview, and refer to the documentation for [the base `Layer` class](base_layer).
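Here is a minimal sketch of what a custom layer can look like (the
`SimpleDense` name and sizes are illustrative):

```python
import keras
from keras import layers
from keras import ops


class SimpleDense(layers.Layer):
    def __init__(self, units=32, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # State (weights) is created lazily, the first time the layer sees an input.
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="glorot_uniform",
            trainable=True,
        )
        self.b = self.add_weight(
            shape=(self.units,), initializer="zeros", trainable=True
        )

    def call(self, inputs):
        # The tensor-in, tensor-out computation.
        return ops.matmul(inputs, self.w) + self.b


layer = SimpleDense(4)
outputs = layer(keras.random.uniform(shape=(2, 3)))  # Weights are built here.
```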
---
## Layers API overview
{{toc}}
--- file: keras-io/templates/api/layers/index.md (repo: keras-io) ---
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="Keras documentation">
<meta name="author" content="Keras Team">
<link rel="shortcut icon" href="https://keras.io/img/favicon.ico">
{% if relative_url %}
<link rel="canonical" href="https://keras.io{{relative_url}}" />
{% endif %}
<!-- Social -->
<meta property="og:title" content="Keras documentation: {{title}}">
<meta property="og:image" content="https://keras.io/img/logo-k-keras-wb.png">
<meta name="twitter:title" content="Keras documentation: {{title}}">
<meta name="twitter:image" content="https://keras.io/img/k-keras-social.png">
<meta name="twitter:card" content="summary">
<title>{{title}}</title>
<!-- Bootstrap core CSS -->
<link href="{{base_url}}css/bootstrap.min.css" rel="stylesheet">
<!-- Custom fonts for this template -->
<link href="https://fonts.googleapis.com/css2?family=Open+Sans:wght@400;600;700;800&display=swap" rel="stylesheet">
<!-- Custom styles for this template -->
<link href="{{base_url}}css/docs.css" rel="stylesheet">
<link href="{{base_url}}css/monokai.css" rel="stylesheet">
<!-- Google Tag Manager -->
<script>(function(w,d,s,l,i){w[l]=w[l]||[];w[l].push({'gtm.start':
new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],
j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src=
'https://www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);
})(window,document,'script','dataLayer','GTM-5DNGF4N');
</script>
<script>
(function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
(i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
})(window,document,'script','https://www.google-analytics.com/analytics.js','ga');
ga('create', 'UA-175165319-128', 'auto');
ga('send', 'pageview');
</script>
<!-- End Google Tag Manager -->
<script async defer src="https://buttons.github.io/buttons.js"></script>
</head>
<body>
<!-- Google Tag Manager (noscript) -->
<noscript><iframe src="https://www.googletagmanager.com/ns.html?id=GTM-5DNGF4N"
height="0" width="0" style="display:none;visibility:hidden"></iframe></noscript>
<!-- End Google Tag Manager (noscript) -->
<div class='k-page'>
<div class="k-nav" id="nav-menu">
<a href='{{base_url}}'><img src='{{base_url}}img/logo-small.png' class='logo-small' /></a>
<div class="nav flex-column nav-pills" role="tablist" aria-orientation="vertical">
{% for item in nav %}
<a class="nav-link{% if item.active %} active{% endif %}" href="{{item.url}}" role="tab" aria-selected="{{item.selected}}">{{item.title}}</a>
{% if item.active %}
{% for child in item.children %}
<a class="nav-sublink{% if child.active %} active{% endif %}" href="{{child.url}}">{{child.title}}</a>
{% if child.active %}
{% for grandchild in child.children %}
<a class="nav-sublink2{% if grandchild.active %} active{% endif %}" href="{{grandchild.url}}">{{grandchild.title}}</a>
{% endfor %}
{% endif %}
{% endfor %}
{% endif %}
{% endfor %}
</div>
</div>
<div class='k-main'>
{{main}}
</div>
</div>
<footer style="float: left; width: 100%; padding: 1em; border-top: solid 1px #bbb;">
<a href="https://policies.google.com/terms">Terms</a> | <a href="https://policies.google.com/privacy">Privacy</a>
</footer>
</body>
</html>
--- file: keras-io/theme/base.html (repo: keras-io) ---
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import inspect
import time
import tensorflow as tf
import tensorflow_datasets as tfds
from absl import app
from absl import flags
from tensorflow import keras
import keras_nlp
FLAGS = flags.FLAGS
flags.DEFINE_string(
"model",
None,
"The name of the classifier such as BertClassifier.",
)
flags.DEFINE_string(
"preset",
None,
"The name of a preset, e.g. bert_base_multi.",
)
flags.DEFINE_string(
"mixed_precision_policy",
"mixed_float16",
"The global precision policy to use. E.g. 'mixed_float16' or 'float32'.",
)
flags.DEFINE_float("learning_rate", 5e-5, "The learning rate.")
flags.DEFINE_integer("num_epochs", 1, "The number of epochs.")
flags.DEFINE_integer("batch_size", 16, "The batch size.")
tfds.disable_progress_bar()
BUFFER_SIZE = 10000
def create_imdb_dataset():
dataset, info = tfds.load(
"imdb_reviews", as_supervised=True, with_info=True
)
train_dataset, test_dataset = dataset["train"], dataset["test"]
train_dataset = (
train_dataset.shuffle(BUFFER_SIZE)
.batch(FLAGS.batch_size)
.prefetch(tf.data.AUTOTUNE)
)
# We split the test data evenly into validation and test sets.
test_dataset_size = info.splits["test"].num_examples // 2
val_dataset = (
test_dataset.take(test_dataset_size)
.batch(FLAGS.batch_size)
.prefetch(tf.data.AUTOTUNE)
)
test_dataset = (
test_dataset.skip(test_dataset_size)
.batch(FLAGS.batch_size)
.prefetch(tf.data.AUTOTUNE)
)
return train_dataset, val_dataset, test_dataset
def create_model():
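    # Scan the keras_nlp.models namespace for a classifier class matching the
    # `--model` flag and instantiate it from the matching `--preset`.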
for name, symbol in keras_nlp.models.__dict__.items():
if inspect.isclass(symbol) and issubclass(symbol, keras.Model):
if FLAGS.model and name != FLAGS.model:
continue
if not hasattr(symbol, "from_preset"):
continue
for preset in symbol.presets:
if FLAGS.preset and preset != FLAGS.preset:
continue
model = symbol.from_preset(preset)
print(f"Using model {name} with preset {preset}")
return model
raise ValueError(f"Model {FLAGS.model} or preset {FLAGS.preset} not found.")
def train_model(
model: keras.Model,
train_dataset: tf.data.Dataset,
validation_dataset: tf.data.Dataset,
):
model.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        optimizer=keras.optimizers.Adam(FLAGS.learning_rate),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
jit_compile=True,
)
model.fit(
train_dataset,
epochs=FLAGS.num_epochs,
validation_data=validation_dataset,
verbose=2,
)
return model
def evaluate_model(model: keras.Model, test_dataset: tf.data.Dataset):
loss, accuracy = model.evaluate(test_dataset)
print(f"Test loss: {loss}")
print(f"Test accuracy: {accuracy}")
def main(_):
keras.mixed_precision.set_global_policy(FLAGS.mixed_precision_policy)
# Start time
start_time = time.time()
train_dataset, validation_dataset, test_dataset = create_imdb_dataset()
model = create_model()
model = train_model(model, train_dataset, validation_dataset)
evaluate_model(model, test_dataset)
# End time
end_time = time.time()
print(f"Total wall time: {end_time - start_time}")
if __name__ == "__main__":
flags.mark_flag_as_required("model")
app.run(main)
--- file: keras-nlp/benchmarks/sentiment_analysis.py (repo: keras-nlp) ---
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import tensorflow as tf
from absl import app
from absl import flags
from absl import logging
from tensorflow import keras
# Import the data module to register its custom serializable objects, which
# are required for loading the saved tokenizer.
import examples.machine_translation.data # noqa: F401.
FLAGS = flags.FLAGS
flags.DEFINE_integer(
"sequence_length",
20,
"Input and output sequence length.",
)
flags.DEFINE_string(
"saved_model_path",
"saved_models/machine_translation_model",
"The path to saved model",
)
flags.DEFINE_string("inputs", None, "The inputs to run machine translation on.")
EXAMPLES = [
(
"Tom doesn't listen to anyone.",
"[start] Tomás no escucha a nadie. [end]",
),
("I got soaked to the skin.", "[start] Estoy chorreando. [end]"),
("I imagined that.", "[start] Me imaginé eso. [end]"),
("The baby is crying.", "[start] El bebé está llorando. [end]"),
(
"I've never felt so exhilarated.",
"[start] Nunca me he sentido tan animado. [end]",
),
(
"Please forgive me for not having written sooner.",
"[start] Perdóname por no haberte escrito antes, por favor. [end]",
),
("I expected more from you.", "[start] Esperaba más de vos. [end]"),
("I have a computer.", "[start] Tengo un computador. [end]"),
("Dinner's ready!", "[start] ¡La cena está lista! [end]"),
("Let me finish.", "[start] Déjame terminar. [end]"),
]
def decode_sequence(input_sentence, model, max_sequence_length, lookup_table):
encoder_tokenizer = model.encoder_tokenizer
decoder_tokenizer = model.decoder_tokenizer
tokenized_input = encoder_tokenizer([input_sentence])
start_token = decoder_tokenizer("[start]")[0].numpy()
end_token = decoder_tokenizer("[end]")[0].numpy()
decoded_sentence = [start_token]
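    # Greedy decoding: repeatedly run the model on the tokens decoded so far
    # (right-padded to `max_sequence_length`) and append the argmax token,
    # stopping once the end token is produced.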
for i in range(max_sequence_length):
decoder_inputs = tf.convert_to_tensor(
[decoded_sentence],
dtype="int64",
)
decoder_inputs = tf.concat(
[
decoder_inputs,
tf.zeros(
[1, max_sequence_length - i - 1],
dtype="int64",
),
],
axis=1,
)
input = {
"encoder_inputs": tokenized_input,
"decoder_inputs": decoder_inputs,
}
predictions = model(input)
predicted_token = np.argmax(predictions[0, i, :])
decoded_sentence.append(predicted_token)
if predicted_token == end_token:
break
detokenized_output = []
for token in decoded_sentence:
detokenized_output.append(lookup_table[token])
return " ".join(detokenized_output)
def main(_):
loaded_model = keras.models.load_model(FLAGS.saved_model_path)
decoder_tokenizer = loaded_model.decoder_tokenizer
vocab = decoder_tokenizer.get_vocabulary()
index_lookup_table = dict(zip(range(len(vocab)), vocab))
if FLAGS.inputs is not None:
# Run inference on user-specified sentence.
translated = decode_sequence(
FLAGS.inputs,
loaded_model,
FLAGS.sequence_length,
index_lookup_table,
)
logging.info(f"Translated results: {translated}")
else:
translated = []
for example in EXAMPLES:
translated.append(
decode_sequence(
example[0],
loaded_model,
FLAGS.sequence_length,
index_lookup_table,
)
)
for i in range(len(EXAMPLES)):
print("ENGLISH SENTENCE: ", EXAMPLES[i][0])
print("MACHINE TRANSLATED RESULT: ", translated[i])
print("GOLDEN: ", EXAMPLES[i][1])
if __name__ == "__main__":
app.run(main)
--- file: keras-nlp/examples/machine_translation/inference.py (repo: keras-nlp) ---
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
from keras_nlp.backend import keras
from keras_nlp.backend import ops
from keras_nlp.backend import random
from keras_nlp.layers.modeling.position_embedding import PositionEmbedding
from keras_nlp.tests.test_case import TestCase
def custom_init(shape, dtype=None):
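    # Deterministic initializer: fill the weight with 0, 1, 2, ... so tests
    # can check exact output values.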
count = 1
for length in shape:
count *= length
return ops.reshape(ops.arange(count, dtype=dtype), shape)
class PositionEmbeddingTest(TestCase):
def test_layer_behaviors(self):
self.run_layer_test(
cls=PositionEmbedding,
init_kwargs={
"sequence_length": 21,
},
input_data=random.uniform(shape=(4, 21, 30)),
expected_output_shape=(4, 21, 30),
expected_num_trainable_weights=1,
)
def test_layer_behaviors_4d(self):
self.run_layer_test(
cls=PositionEmbedding,
init_kwargs={
"sequence_length": 21,
},
input_data=random.uniform(shape=(4, 5, 21, 30)),
expected_output_shape=(4, 5, 21, 30),
expected_num_trainable_weights=1,
)
def test_float16_dtype(self):
# Create a 3-dimensional input (the first dimension is implicit).
sequence_length = 21
feature_size = 30
test_layer = PositionEmbedding(
sequence_length=sequence_length, dtype="float16"
)
input_tensor = keras.Input(shape=(sequence_length, feature_size))
output_tensor = test_layer(input_tensor)
# When using static position embedding shapes, the output is expected
# to be the same as the input shape in all dimensions save batch.
expected_output_shape = (None, sequence_length, feature_size)
self.assertEqual(expected_output_shape, output_tensor.shape)
        # The output dtype should match the "float16" dtype passed to the
        # layer constructor.
self.assertEqual("float16", output_tensor.dtype)
def test_dynamic_layer_output_shape(self):
max_sequence_length = 40
feature_size = 30
test_layer = PositionEmbedding(sequence_length=max_sequence_length)
# Create a 3-dimensional input (the first dimension is implicit).
input_tensor = keras.Input(shape=(None, feature_size))
output_tensor = test_layer(input_tensor)
# When using dynamic position embedding shapes, the output is expected
# to be the same as the input shape in all dimensions - but may be None
# if the input shape is None there.
expected_output_shape = (None, None, feature_size)
self.assertEqual(expected_output_shape, output_tensor.shape)
def test_more_than_3_dimensions_dynamic(self):
max_sequence_length = 60
feature_size = 30
test_layer = PositionEmbedding(sequence_length=max_sequence_length)
# Create a 4-dimensional input (the first dimension is implicit).
input_tensor = keras.Input(shape=(None, None, feature_size))
output_tensor = test_layer(input_tensor)
# When using dynamic position embedding shapes, the output is expected
# to be the same as the input shape in all dimensions - but may be None
# if the input shape is None there.
expected_output_shape = (None, None, None, feature_size)
self.assertEqual(expected_output_shape, output_tensor.shape)
def test_dynamic_layer_slicing(self):
max_sequence_length = 40
feature_size = 30
test_layer = PositionEmbedding(sequence_length=max_sequence_length)
# Create a 3-dimensional input (the first dimension is implicit).
input_tensor = keras.Input(shape=(None, feature_size))
output_tensor = test_layer(input_tensor)
model = keras.Model(input_tensor, output_tensor)
# Create input data that is shorter than max_sequence_length, which
# should trigger a down-slice.
input_length = 17
# Note: This test explicitly uses a batch size of 1. This is to get
# around Keras' restriction on Model invocations: inputs are expected to
# have the same batch cardinality as outputs. In practice, this layer
# should be used inside a model, where it can be projected when added to
# another tensor.
input_data = np.ones(shape=[1, input_length, feature_size])
output_data = model.predict(input_data)
self.assertAllEqual([1, input_length, feature_size], output_data.shape)
def test_callable_initializer(self):
max_sequence_length = 4
feature_size = 3
test_layer = PositionEmbedding(
sequence_length=max_sequence_length,
initializer=custom_init,
)
inputs = keras.Input(shape=(max_sequence_length, feature_size))
outputs = test_layer(inputs)
model = keras.Model(inputs=inputs, outputs=outputs)
batch_size = 2
data = np.zeros(shape=[batch_size, max_sequence_length, feature_size])
model(data)
model_output = model.predict(data)
expected_output = np.broadcast_to(
np.reshape(
np.arange(max_sequence_length * feature_size),
[max_sequence_length, feature_size],
),
[batch_size, max_sequence_length, feature_size],
)
self.assertAllClose(model_output, expected_output)
def test_start_index(self):
batch_size, seq_length, feature_size = 2, 3, 4
layer = PositionEmbedding(seq_length)
data = random.uniform(shape=(batch_size, seq_length, feature_size))
full_output = layer(data)
sequential_output = ops.zeros((batch_size, seq_length, feature_size))
for i in range(seq_length):
            partial_output = layer(data[:, i : i + 1, :], start_index=i)
            sequential_output = ops.slice_update(
                sequential_output, (0, i, 0), partial_output
            )
self.assertAllClose(full_output, sequential_output)
--- file: keras-nlp/keras_nlp/layers/modeling/position_embedding_test.py (repo: keras-nlp) ---
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import tensorflow as tf
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.layers.preprocessing.preprocessing_layer import (
PreprocessingLayer,
)
from keras_nlp.utils.tensor_utils import assert_tf_text_installed
from keras_nlp.utils.tensor_utils import convert_to_ragged_batch
try:
import tensorflow_text as tf_text
except ImportError:
tf_text = None
@keras_nlp_export("keras_nlp.layers.MaskedLMMaskGenerator")
class MaskedLMMaskGenerator(PreprocessingLayer):
"""Layer that applies language model masking.
This layer is useful for preparing inputs for masked language modeling
(MaskedLM) tasks. It follows the masking strategy described in the
[original BERT paper](https://arxiv.org/abs/1810.04805). Given tokenized
    text, it randomly selects a certain number of tokens for masking. Each
    selected token then has a configurable chance of being replaced by the
    mask token, replaced by a random token, or left unchanged.
Input data should be passed as tensors, `tf.RaggedTensor`s, or lists. For
batched input, inputs should be a list of lists or a rank two tensor. For
unbatched inputs, each element should be a list or a rank one tensor.
This layer can be used with `tf.data` to generate dynamic masks on the fly
during training.
Args:
vocabulary_size: int, the size of the vocabulary.
        mask_selection_rate: float, the probability that a token is selected
            for masking.
mask_token_id: int. The id of mask token.
mask_selection_length: int. Maximum number of tokens
selected for masking in each sequence. If set, the output
`mask_positions`, `mask_ids` and `mask_weights` will be padded
to dense tensors of length `mask_selection_length`, otherwise
the output will be a RaggedTensor. Defaults to `None`.
unselectable_token_ids: A list of tokens id that should not be
considered eligible for masking. By default, we assume `0`
corresponds to a padding token and ignore it. Defaults to `[0]`.
mask_token_rate: float. `mask_token_rate` must be
between 0 and 1 which indicates how often the mask_token is
substituted for tokens selected for masking. Defaults to `0.8`.
random_token_rate: float. `random_token_rate` must be
between 0 and 1 which indicates how often a random token is
substituted for tokens selected for masking.
            Note: `mask_token_rate + random_token_rate` must be at most 1, and
            with probability `1 - mask_token_rate - random_token_rate` the
            selected token is left unchanged. Defaults to `0.1`.
Returns:
A Dict with 4 keys:
token_ids: Tensor or RaggedTensor, has the same type and shape of
input. Sequence after getting masked.
mask_positions: Tensor, or RaggedTensor if `mask_selection_length`
is None. The positions of token_ids getting masked.
mask_ids: Tensor, or RaggedTensor if `mask_selection_length` is
None. The original token ids at masked positions.
mask_weights: Tensor, or RaggedTensor if `mask_selection_length` is
None. `mask_weights` has the same shape as `mask_positions` and
`mask_ids`. Each element in `mask_weights` should be 0 or 1,
1 means the corresponding position in `mask_positions` is an
actual mask, 0 means it is a pad.
Examples:
Basic usage.
```python
masker = keras_nlp.layers.MaskedLMMaskGenerator(
vocabulary_size=10,
mask_selection_rate=0.2,
mask_token_id=0,
mask_selection_length=5
)
# Dense input.
masker([1, 2, 3, 4, 5])
# Ragged input.
masker([[1, 2], [1, 2, 3, 4]])
```
Masking a batch that contains special tokens.
```python
pad_id, cls_id, sep_id, mask_id = 0, 1, 2, 3
batch = [
[cls_id, 4, 5, 6, sep_id, 7, 8, sep_id, pad_id, pad_id],
[cls_id, 4, 5, sep_id, 6, 7, 8, 9, sep_id, pad_id],
]
masker = keras_nlp.layers.MaskedLMMaskGenerator(
vocabulary_size = 10,
mask_selection_rate = 0.2,
mask_selection_length = 5,
mask_token_id = mask_id,
unselectable_token_ids = [
cls_id,
sep_id,
pad_id,
]
)
masker(batch)
```
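    Using the layer with `tf.data` (a minimal sketch; the token ids are
    illustrative):
    ```python
    masker = keras_nlp.layers.MaskedLMMaskGenerator(
        vocabulary_size=10,
        mask_selection_rate=0.5,
        mask_token_id=0,
        mask_selection_length=5,
    )
    ds = tf.data.Dataset.from_tensor_slices([[2, 3, 4, 5], [6, 7, 8, 9]])
    ds = ds.batch(2).map(masker)
    ```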
"""
def __init__(
self,
vocabulary_size,
mask_selection_rate,
mask_token_id,
mask_selection_length=None,
unselectable_token_ids=[0],
mask_token_rate=0.8,
random_token_rate=0.1,
**kwargs,
):
assert_tf_text_installed(self.__class__.__name__)
super().__init__(**kwargs)
self.vocabulary_size = vocabulary_size
self.unselectable_token_ids = unselectable_token_ids
self.mask_selection_rate = mask_selection_rate
self.mask_selection_length = mask_selection_length
self.mask_token_rate = mask_token_rate
self.random_token_rate = random_token_rate
if mask_token_id >= vocabulary_size:
raise ValueError(
f"Mask token id should be in range [0, vocabulary_size - 1], "
f"but received mask_token_id={mask_token_id}."
)
self.mask_token_id = mask_token_id
max_selections = self.mask_selection_length
if max_selections is None:
# Set a large number to remove the `max_selections_per_batch` cap.
max_selections = 2**31 - 1
self._random_selector = tf_text.RandomItemSelector(
max_selections_per_batch=max_selections,
selection_rate=self.mask_selection_rate,
unselectable_ids=self.unselectable_token_ids,
)
self._mask_values_chooser = tf_text.MaskValuesChooser(
self.vocabulary_size,
self.mask_token_id,
mask_token_rate=self.mask_token_rate,
random_token_rate=self.random_token_rate,
)
def call(self, inputs):
inputs, unbatched, rectangular = convert_to_ragged_batch(inputs)
(
token_ids,
mask_positions,
mask_ids,
) = tf_text.mask_language_model(
inputs,
item_selector=self._random_selector,
mask_values_chooser=self._mask_values_chooser,
)
if rectangular:
# If we converted the input from dense to ragged, convert back.
token_ids = token_ids.to_tensor()
mask_weights = tf.ones_like(mask_positions, self.compute_dtype)
# If `mask_selection_length` is set, convert to dense.
if self.mask_selection_length:
target_shape = tf.cast([-1, self.mask_selection_length], "int64")
mask_positions = mask_positions.to_tensor(shape=target_shape)
mask_ids = mask_ids.to_tensor(shape=target_shape)
mask_weights = mask_weights.to_tensor(shape=target_shape)
if unbatched:
# If inputs is 1D, we format the output to be 1D as well.
token_ids = tf.squeeze(token_ids, axis=0)
mask_positions = tf.squeeze(mask_positions, axis=0)
mask_ids = tf.squeeze(mask_ids, axis=0)
mask_weights = tf.squeeze(mask_weights, axis=0)
return {
"token_ids": token_ids,
"mask_positions": mask_positions,
"mask_ids": mask_ids,
"mask_weights": mask_weights,
}
def get_config(self):
config = super().get_config()
config.update(
{
"vocabulary_size": self.vocabulary_size,
"mask_selection_rate": self.mask_selection_rate,
"mask_selection_length": self.mask_selection_length,
"unselectable_token_ids": self.unselectable_token_ids,
"mask_token_id": self.mask_token_id,
"mask_token_rate": self.mask_token_rate,
"random_token_rate": self.random_token_rate,
}
)
return config
--- file: keras-nlp/keras_nlp/layers/preprocessing/masked_lm_mask_generator.py (repo: keras-nlp) ---
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.backend import ops
from keras_nlp.utils.tensor_utils import is_float_dtype
@keras_nlp_export("keras_nlp.metrics.Perplexity")
class Perplexity(keras.metrics.Metric):
"""Perplexity metric.
    This class implements the perplexity metric. In short, it computes the
    cross entropy loss over the target tokens and reports its exponent, i.e.
    `perplexity = exp(average per-token cross entropy)`.
Note: This implementation is not suitable for fixed-size windows.
Args:
from_logits: bool. If True, `y_pred` (input to `update_state()`) should
be the logits as returned by the model. Otherwise, `y_pred` is a
tensor of probabilities.
mask_token_id: int. ID of the token to be masked. If provided, the mask
is computed for this class. Note that if this field is provided, and
if the `sample_weight` field in `update_state()` is also provided,
we will compute the final `sample_weight` as the element-wise
product of the mask and the `sample_weight`.
dtype: string or tf.dtypes.Dtype. Precision of metric computation. If
not specified, it defaults to `"float32"`.
name: string. Name of the metric instance.
**kwargs: Other keyword arguments.
Examples:
1. Calculate perplexity by calling update_state() and result().
    1.1. `sample_weight` and `mask_token_id` are not provided.
>>> np.random.seed(42)
>>> perplexity = keras_nlp.metrics.Perplexity(name="perplexity")
>>> target = np.random.randint(10, size=[2, 5])
>>> logits = np.random.uniform(size=(2, 5, 10))
>>> perplexity.update_state(target, logits)
>>> perplexity.result()
<tf.Tensor: shape=(), dtype=float32, numpy=14.352535>
1.2. `sample_weight` specified (masking token with ID 0).
>>> np.random.seed(42)
>>> perplexity = keras_nlp.metrics.Perplexity(name="perplexity")
>>> target = np.random.randint(10, size=[2, 5])
>>> logits = np.random.uniform(size=(2, 5, 10))
>>> sample_weight = (target != 0).astype("float32")
>>> perplexity.update_state(target, logits, sample_weight)
>>> perplexity.result()
<tf.Tensor: shape=(), dtype=float32, numpy=14.352535>
2. Call perplexity directly.
>>> np.random.seed(42)
>>> perplexity = keras_nlp.metrics.Perplexity(name="perplexity")
>>> target = np.random.randint(10, size=[2, 5])
>>> logits = np.random.uniform(size=(2, 5, 10))
>>> perplexity(target, logits)
<tf.Tensor: shape=(), dtype=float32, numpy=14.352535>
3. Provide the padding token ID and let the class compute the mask on its
own.
>>> np.random.seed(42)
>>> perplexity = keras_nlp.metrics.Perplexity(mask_token_id=0)
>>> target = np.random.randint(10, size=[2, 5])
>>> logits = np.random.uniform(size=(2, 5, 10))
>>> perplexity(target, logits)
<tf.Tensor: shape=(), dtype=float32, numpy=14.352535>
"""
def __init__(
self,
from_logits=False,
mask_token_id=None,
dtype="float32",
name="perplexity",
**kwargs,
):
if not is_float_dtype(dtype):
raise ValueError(
"`dtype` must be a floating point type. "
f"Received: dtype={dtype}"
)
super().__init__(name=name, dtype=dtype, **kwargs)
self._crossentropy = keras.losses.SparseCategoricalCrossentropy(
from_logits=from_logits, reduction="sum"
)
self.from_logits = from_logits
self.mask_token_id = mask_token_id
self._aggregate_crossentropy = self.add_weight(
shape=(),
initializer="zeros",
dtype=self.dtype,
name="aggregate_crossentropy",
)
self._number_of_samples = self.add_weight(
shape=(),
initializer="zeros",
dtype=self.dtype,
name="number_of_samples",
)
def update_state(self, y_true, y_pred, sample_weight=None):
# y_true shape: (batch_size, seq_len)
# y_pred shape: (batch_size, seq_len, vocab_size)
y_true = ops.cast(y_true, self.dtype)
y_pred = ops.cast(y_pred, self.dtype)
if sample_weight is not None:
sample_weight = ops.cast(sample_weight, self.dtype)
batch_size = ops.cast(ops.shape(y_true)[0], self.dtype)
if self.mask_token_id is not None:
mask = ops.cast(
ops.logical_not(ops.equal(y_true, self.mask_token_id)),
self.dtype,
)
if sample_weight is None:
sample_weight = mask
else:
sample_weight = ops.multiply(mask, sample_weight)
# Calculate the Cross Entropy Loss.
crossentropy_value = ops.cast(
self._crossentropy(y_true, y_pred, sample_weight=sample_weight),
self.dtype,
) # scalar
# Divide the loss by the number of non-masked tokens
if sample_weight is not None:
crossentropy_value = crossentropy_value / ops.sum(
sample_weight
) # scalar
else:
crossentropy_value = crossentropy_value / (
ops.cast(ops.shape(y_true)[0], self.dtype)
* ops.cast(ops.shape(y_true)[1], self.dtype)
) # scalar
self._aggregate_crossentropy.assign_add(batch_size * crossentropy_value)
self._number_of_samples.assign_add(batch_size)
def result(self):
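        # Report 0 if no samples have been seen yet, to avoid dividing by
        # zero.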
perplexity_score = ops.where(
ops.equal(ops.convert_to_tensor(self._number_of_samples), 0),
0,
ops.exp(self._aggregate_crossentropy / self._number_of_samples),
)
return perplexity_score
def reset_state(self):
self._aggregate_crossentropy.assign(0.0)
self._number_of_samples.assign(0.0)
def get_config(self):
config = super().get_config()
config.update(
{
"from_logits": self.from_logits,
"mask_token_id": self.mask_token_id,
}
)
return config
--- file: keras-nlp/keras_nlp/metrics/perplexity.py (repo: keras-nlp) ---
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pytest
from keras_nlp.models.albert.albert_backbone import AlbertBackbone
from keras_nlp.models.albert.albert_masked_lm import AlbertMaskedLM
from keras_nlp.models.albert.albert_masked_lm_preprocessor import (
AlbertMaskedLMPreprocessor,
)
from keras_nlp.models.albert.albert_tokenizer import AlbertTokenizer
from keras_nlp.tests.test_case import TestCase
class AlbertMaskedLMTest(TestCase):
def setUp(self):
# Setup model.
self.preprocessor = AlbertMaskedLMPreprocessor(
AlbertTokenizer(
# Generated using create_albert_test_proto.py
proto=os.path.join(
self.get_test_data_dir(), "albert_test_vocab.spm"
),
sequence_length=5,
),
# Simplify our testing by masking every available token.
mask_selection_rate=1.0,
mask_token_rate=1.0,
random_token_rate=0.0,
mask_selection_length=5,
sequence_length=5,
)
self.backbone = AlbertBackbone(
vocabulary_size=self.preprocessor.tokenizer.vocabulary_size(),
num_layers=2,
num_heads=2,
hidden_dim=2,
embedding_dim=2,
intermediate_dim=4,
max_sequence_length=self.preprocessor.sequence_length,
)
self.init_kwargs = {
"preprocessor": self.preprocessor,
"backbone": self.backbone,
}
self.train_data = (
["the quick brown fox.", "the slow brown fox."], # Features.
)
self.input_data = self.preprocessor(*self.train_data)[0]
def test_masked_lm_basics(self):
self.run_task_test(
cls=AlbertMaskedLM,
init_kwargs=self.init_kwargs,
train_data=self.train_data,
expected_output_shape=(2, 5, 12),
)
@pytest.mark.large
def test_saved_model(self):
self.run_model_saving_test(
cls=AlbertMaskedLM,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
)
@pytest.mark.extra_large
def test_all_presets(self):
for preset in AlbertMaskedLM.presets:
self.run_preset_test(
cls=AlbertMaskedLM,
preset=preset,
input_data=self.input_data,
)
--- file: keras-nlp/keras_nlp/models/albert/albert_masked_lm_test.py (repo: keras-nlp) ---
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from unittest.mock import patch
import pytest
from keras_nlp.backend import ops
from keras_nlp.models.bart.bart_backbone import BartBackbone
from keras_nlp.models.bart.bart_seq_2_seq_lm import BartSeq2SeqLM
from keras_nlp.models.bart.bart_seq_2_seq_lm_preprocessor import (
BartSeq2SeqLMPreprocessor,
)
from keras_nlp.models.bart.bart_tokenizer import BartTokenizer
from keras_nlp.tests.test_case import TestCase
class BartSeq2SeqLMTest(TestCase):
def setUp(self):
self.vocab = ["<s>", "<pad>", "</s>", "air", "Ġair", "plane", "Ġat"]
self.vocab += ["port", "<mask>"]
self.vocab = dict([(token, i) for i, token in enumerate(self.vocab)])
self.merges = ["Ġ a", "Ġ t", "Ġ i", "Ġ b", "a i", "p l", "n e"]
self.merges += ["Ġa t", "p o", "r t", "Ġt h", "ai r", "pl a", "po rt"]
self.merges += ["Ġai r", "Ġa i", "pla ne"]
self.preprocessor = BartSeq2SeqLMPreprocessor(
BartTokenizer(vocabulary=self.vocab, merges=self.merges),
encoder_sequence_length=12,
decoder_sequence_length=10,
)
self.backbone = BartBackbone(
vocabulary_size=self.preprocessor.tokenizer.vocabulary_size(),
num_layers=2,
num_heads=2,
hidden_dim=4,
intermediate_dim=8,
max_sequence_length=12,
)
self.init_kwargs = {
"preprocessor": self.preprocessor,
"backbone": self.backbone,
}
self.train_data = (
{
"encoder_text": [
" airplane at airport",
" airplane at airport",
],
"decoder_text": [" airplane airport", " airplane airport"],
},
)
self.input_data = self.preprocessor(*self.train_data)[0]
def test_causal_lm_basics(self):
self.run_task_test(
cls=BartSeq2SeqLM,
init_kwargs=self.init_kwargs,
train_data=self.train_data,
expected_output_shape=(2, 10, 9),
)
def test_generate(self):
# String input.
inputs = {
"encoder_text": " airplane at airport",
"decoder_text": " airplane at",
}
seq_2_seq_lm = BartSeq2SeqLM(**self.init_kwargs)
output = seq_2_seq_lm.generate(inputs)
self.assertTrue(" airplane at" in output)
# String tensor input.
self.assertIsInstance(
seq_2_seq_lm.generate(" airplane at airport"), str
)
# Int tensor input.
seq_2_seq_lm.preprocessor = None
preprocessed_batch = self.preprocessor.generate_preprocess(inputs)
outputs = seq_2_seq_lm.generate(preprocessed_batch)
# Assert prompt is in output in token id space.
self.assertAllEqual(
outputs["decoder_token_ids"][:, :5],
preprocessed_batch["decoder_token_ids"][:, :5],
)
self.assertAllEqual(
outputs["decoder_padding_mask"][:, :5],
preprocessed_batch["decoder_padding_mask"][:, :5],
)
def test_early_stopping(self):
seq_2_seq_lm = BartSeq2SeqLM(**self.init_kwargs)
call_decoder_with_cache = seq_2_seq_lm.call_decoder_with_cache
def wrapper(*args, **kwargs):
"""Modify output logits to always favor end_token_id"""
(
logits,
hidden_states,
self_attention_cache,
cross_attention_cache,
) = call_decoder_with_cache(*args, **kwargs)
index = self.preprocessor.tokenizer.end_token_id
update = ops.ones_like(logits)[:, :, index] * 1.0e9
update = ops.expand_dims(update, axis=-1)
logits = ops.slice_update(logits, (0, 0, index), update)
return (
logits,
hidden_states,
self_attention_cache,
cross_attention_cache,
)
with patch.object(
seq_2_seq_lm, "call_decoder_with_cache", wraps=wrapper
):
inputs = {
"encoder_text": [
" airplane at airport",
" airplane at airport",
],
"decoder_text": [" airplane at", " airplane"],
}
output = seq_2_seq_lm.generate(inputs)
# We should immediately abort and output the prompt.
self.assertAllEqual(inputs["decoder_text"], output)
def test_generate_compilation(self):
seq_2_seq_lm = BartSeq2SeqLM(**self.init_kwargs)
# Assert we do not recompile with successive calls.
seq_2_seq_lm.generate(" airplane at airport")
first_fn = seq_2_seq_lm.generate_function
seq_2_seq_lm.generate(" airplane at airport")
second_fn = seq_2_seq_lm.generate_function
self.assertEqual(first_fn, second_fn)
# Assert we do recompile after compile is called.
seq_2_seq_lm.compile(sampler="greedy")
self.assertIsNone(seq_2_seq_lm.generate_function)
def test_beam_search(self):
seq_2_seq_lm = BartSeq2SeqLM(
backbone=self.backbone,
preprocessor=self.preprocessor,
)
seq_2_seq_lm.compile(sampler="beam")
seq_2_seq_lm.generate(" airplane at airport")
@pytest.mark.large
def test_saved_model(self):
self.run_model_saving_test(
cls=BartSeq2SeqLM,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
)
@pytest.mark.extra_large
def test_all_presets(self):
for preset in BartSeq2SeqLM.presets:
self.run_preset_test(
cls=BartSeq2SeqLM,
preset=preset,
input_data=self.input_data,
)
--- file: keras-nlp/keras_nlp/models/bart/bart_seq_2_seq_lm_test.py (repo: keras-nlp) ---
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import pytest
from keras_nlp.models.bert.bert_tokenizer import BertTokenizer
from keras_nlp.tests.test_case import TestCase
class BertTokenizerTest(TestCase):
def setUp(self):
self.vocab = ["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"]
self.vocab += ["THE", "QUICK", "BROWN", "FOX"]
self.vocab += ["the", "quick", "brown", "fox"]
self.init_kwargs = {"vocabulary": self.vocab}
self.input_data = ["THE QUICK BROWN FOX", "THE FOX"]
def test_tokenizer_basics(self):
self.run_preprocessing_layer_test(
cls=BertTokenizer,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
expected_output=[[5, 6, 7, 8], [5, 8]],
)
def test_lowercase(self):
tokenizer = BertTokenizer(vocabulary=self.vocab, lowercase=True)
output = tokenizer(self.input_data)
self.assertAllEqual(output, [[9, 10, 11, 12], [9, 12]])
def test_errors_missing_special_tokens(self):
with self.assertRaises(ValueError):
BertTokenizer(vocabulary=["a", "b", "c"])
@pytest.mark.large
def test_smallest_preset(self):
self.run_preset_test(
cls=BertTokenizer,
preset="bert_tiny_en_uncased",
input_data=["The quick brown fox."],
expected_output=[[1996, 4248, 2829, 4419, 1012]],
)
@pytest.mark.extra_large
def test_all_presets(self):
for preset in BertTokenizer.presets:
self.run_preset_test(
cls=BertTokenizer,
preset=preset,
input_data=self.input_data,
)
--- file: keras-nlp/keras_nlp/models/bert/bert_tokenizer_test.py (repo: keras-nlp) ---
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.models.deberta_v3.deberta_v3_backbone import DebertaV3Backbone
from keras_nlp.models.deberta_v3.deberta_v3_backbone import (
deberta_kernel_initializer,
)
from keras_nlp.models.deberta_v3.deberta_v3_preprocessor import (
DebertaV3Preprocessor,
)
from keras_nlp.models.deberta_v3.deberta_v3_presets import backbone_presets
from keras_nlp.models.task import Task
from keras_nlp.utils.python_utils import classproperty
@keras_nlp_export("keras_nlp.models.DebertaV3Classifier")
class DebertaV3Classifier(Task):
"""An end-to-end DeBERTa model for classification tasks.
This model attaches a classification head to a
`keras_nlp.model.DebertaV3Backbone` model, mapping from the backbone
outputs to logit output suitable for a classification task. For usage of
this model with pre-trained weights, see the `from_preset()` method.
This model can optionally be configured with a `preprocessor` layer, in
which case it will automatically apply preprocessing to raw inputs during
`fit()`, `predict()`, and `evaluate()`. This is done by default when
creating the model with `from_preset()`.
Note: `DebertaV3Backbone` has a performance issue on TPUs, and we recommend
other models for TPU training and inference.
Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind. The underlying model is provided by a
third party and subject to a separate license, available
[here](https://github.com/microsoft/DeBERTa).
Args:
backbone: A `keras_nlp.models.DebertaV3` instance.
num_classes: int. Number of classes to predict.
preprocessor: A `keras_nlp.models.DebertaV3Preprocessor` or `None`. If
`None`, this model will not apply preprocessing, and inputs should
be preprocessed before calling the model.
activation: Optional `str` or callable. The
activation function to use on the model outputs. Set
`activation="softmax"` to return output probabilities.
Defaults to `None`.
hidden_dim: int. The size of the pooler layer.
dropout: float. Dropout probability applied to the pooled output. For
the second dropout layer, `backbone.dropout` is used.
Examples:
Raw string data.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]
# Pretrained classifier.
classifier = keras_nlp.models.DebertaV3Classifier.from_preset(
"deberta_v3_base_en",
num_classes=4,
)
classifier.fit(x=features, y=labels, batch_size=2)
classifier.predict(x=features, batch_size=2)
# Re-compile (e.g., with a new learning rate).
classifier.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.Adam(5e-5),
jit_compile=True,
)
# Access backbone programmatically (e.g., to change `trainable`).
classifier.backbone.trainable = False
# Fit again.
classifier.fit(x=features, y=labels, batch_size=2)
```
Preprocessed integer data.
```python
features = {
"token_ids": np.ones(shape=(2, 12), dtype="int32"),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
labels = [0, 3]
# Pretrained classifier without preprocessing.
classifier = keras_nlp.models.DebertaV3Classifier.from_preset(
"deberta_v3_base_en",
num_classes=4,
preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
```
Custom backbone and vocabulary.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]
bytes_io = io.BytesIO()
ds = tf.data.Dataset.from_tensor_slices(features)
sentencepiece.SentencePieceTrainer.train(
sentence_iterator=ds.as_numpy_iterator(),
model_writer=bytes_io,
vocab_size=10,
model_type="WORD",
pad_id=0,
bos_id=1,
eos_id=2,
unk_id=3,
pad_piece="[PAD]",
bos_piece="[CLS]",
eos_piece="[SEP]",
unk_piece="[UNK]",
)
tokenizer = keras_nlp.models.DebertaV3Tokenizer(
proto=bytes_io.getvalue(),
)
preprocessor = keras_nlp.models.DebertaV3Preprocessor(
tokenizer=tokenizer,
sequence_length=128,
)
backbone = keras_nlp.models.DebertaV3Backbone(
vocabulary_size=30552,
num_layers=4,
num_heads=4,
hidden_dim=256,
intermediate_dim=512,
max_sequence_length=128,
)
classifier = keras_nlp.models.DebertaV3Classifier(
backbone=backbone,
preprocessor=preprocessor,
num_classes=4,
)
classifier.fit(x=features, y=labels, batch_size=2)
```
"""
def __init__(
self,
backbone,
num_classes,
preprocessor=None,
activation=None,
hidden_dim=None,
dropout=0.0,
**kwargs,
):
# === Layers ===
self.backbone = backbone
self.preprocessor = preprocessor
self.pooled_dropout = keras.layers.Dropout(
dropout,
dtype=backbone.dtype_policy,
name="pooled_dropout",
)
hidden_dim = hidden_dim or backbone.hidden_dim
self.pooled_dense = keras.layers.Dense(
hidden_dim,
activation=keras.activations.gelu,
dtype=backbone.dtype_policy,
name="pooled_dense",
)
self.output_dropout = keras.layers.Dropout(
backbone.dropout,
dtype=backbone.dtype_policy,
name="classifier_dropout",
)
self.output_dense = keras.layers.Dense(
num_classes,
kernel_initializer=deberta_kernel_initializer(),
activation=activation,
dtype=backbone.dtype_policy,
name="logits",
)
# === Functional Model ===
inputs = backbone.input
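        # Pool by taking the hidden state at the start token ([CLS]) index.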
x = backbone(inputs)[:, backbone.start_token_index, :]
x = self.pooled_dropout(x)
x = self.pooled_dense(x)
x = self.output_dropout(x)
outputs = self.output_dense(x)
super().__init__(
inputs=inputs,
outputs=outputs,
**kwargs,
)
# === Config ===
self.backbone = backbone
self.preprocessor = preprocessor
self.num_classes = num_classes
self.activation = keras.activations.get(activation)
self.hidden_dim = hidden_dim
self.dropout = dropout
# === Default compilation ===
logit_output = self.activation == keras.activations.linear
self.compile(
loss=keras.losses.SparseCategoricalCrossentropy(
from_logits=logit_output
),
optimizer=keras.optimizers.Adam(5e-5),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
jit_compile=True,
)
def get_config(self):
config = super().get_config()
config.update(
{
"num_classes": self.num_classes,
"activation": keras.activations.serialize(self.activation),
"hidden_dim": self.hidden_dim,
"dropout": self.dropout,
}
)
return config
@classproperty
def backbone_cls(cls):
return DebertaV3Backbone
@classproperty
def preprocessor_cls(cls):
return DebertaV3Preprocessor
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
--- file: keras-nlp/keras_nlp/models/deberta_v3/deberta_v3_classifier.py (repo: keras-nlp) ---
# Copyright 2024 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
from keras_nlp.backend import keras
from keras_nlp.backend import ops
from keras_nlp.utils.keras_utils import clone_initializer
class CachedGemmaAttention(keras.layers.Layer):
"""A cached grouped query attention layer."""
def __init__(
self,
head_dim,
num_query_heads,
num_key_value_heads,
kernel_initializer="glorot_uniform",
dropout=0,
**kwargs,
):
super().__init__(**kwargs)
self.num_query_heads = num_query_heads
self.num_key_value_heads = num_key_value_heads
self.head_dim = head_dim
self.dropout = dropout
self._kernel_initializer = keras.initializers.get(
clone_initializer(kernel_initializer)
)
self.num_key_value_groups = num_query_heads // num_key_value_heads
def build(self, inputs_shape):
self.hidden_dim = inputs_shape[-1]
self.query_dense = keras.layers.EinsumDense(
"btd,ndh->btnh",
output_shape=(None, self.num_query_heads, self.head_dim),
kernel_initializer=self._kernel_initializer,
dtype=self.dtype_policy,
name="query",
)
self.query_dense.build(inputs_shape)
self.key_dense = keras.layers.EinsumDense(
"bsd,kdh->bskh",
output_shape=(None, self.num_key_value_heads, self.head_dim),
kernel_initializer=self._kernel_initializer,
dtype=self.dtype_policy,
name="key",
)
self.key_dense.build(inputs_shape)
self.value_dense = keras.layers.EinsumDense(
"bsd,kdh->bskh",
output_shape=(None, self.num_key_value_heads, self.head_dim),
kernel_initializer=self._kernel_initializer,
dtype=self.dtype_policy,
name="value",
)
self.value_dense.build(inputs_shape)
self.dropout_layer = keras.layers.Dropout(
rate=self.dropout,
dtype=self.dtype_policy,
)
self.output_dense = keras.layers.EinsumDense(
equation="btnh,nhd->btd",
output_shape=(None, self.hidden_dim),
kernel_initializer=self._kernel_initializer,
dtype=self.dtype_policy,
name="attention_output",
)
self.output_dense.build(
(None, None, self.num_query_heads, self.head_dim)
)
self.softmax = keras.layers.Softmax(dtype="float32")
self.built = True
def _apply_rope(self, x, positions):
"""Rope rotate q or k."""
# TODO: refactor to use RotaryEmbedding layer?
max_wavelength = 10000
x_shape = ops.shape(x)
freq_exponents = (2.0 / x_shape[-1]) * ops.cast(
ops.arange(x_shape[-1] // 2, dtype="float32"), self.compute_dtype
)
timescale = max_wavelength**freq_exponents
radians = positions[..., None] / timescale[None, None, :]
radians = radians[..., None, :]
sin, cos = ops.sin(radians), ops.cos(radians)
x1, x2 = ops.split(x, 2, axis=-1)
        # Avoid `ops.concatenate` for now, to avoid an obscure bug with XLA
# compilation on jax. We should be able to remove this once the
# following PR is in all jax releases we care about:
# https://github.com/openxla/xla/pull/7875
output = ops.stack([x1 * cos - x2 * sin, x2 * cos + x1 * sin], axis=-1)
return ops.reshape(output, x_shape)
def _compute_attention(
self,
q,
k,
v,
attention_mask,
training=False,
):
query_normalization = 1 / np.sqrt(self.head_dim)
q *= ops.cast(query_normalization, dtype=q.dtype)
q_shape = ops.shape(q)
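        # Grouped-query attention: reshape the query heads into
        # (num_key_value_heads, group_size) so that each group of query heads
        # shares one key/value head in the einsum below.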
q = ops.reshape(
q,
(
*q_shape[:-2],
self.num_key_value_heads,
self.num_query_heads // self.num_key_value_heads,
q_shape[-1],
),
)
b, q_len, _, _, h = ops.shape(q)
attention_logits = ops.einsum("btkgh,bskh->bkgts", q, k)
attention_mask = attention_mask[:, None, None, :, :]
orig_dtype = attention_logits.dtype
attention_softmax = self.softmax(attention_logits, mask=attention_mask)
attention_softmax = ops.cast(attention_softmax, orig_dtype)
if self.dropout:
attention_softmax = self.dropout_layer(
attention_softmax, training=training
)
results = ops.einsum("bkgts,bskh->btkgh", attention_softmax, v)
return ops.reshape(results, (b, q_len, self.num_query_heads, h))
def call(
self,
x,
attention_mask=None,
cache=None,
cache_update_index=0,
training=False,
):
seq_len = ops.shape(x)[1]
start_index = cache_update_index
positions = ops.cast(
ops.arange(seq_len, dtype="float32"), self.compute_dtype
)
positions = positions + ops.cast(start_index, self.compute_dtype)
query = self.query_dense(x)
query = self._apply_rope(query, positions)
if cache is not None:
key_cache = cache[:, 0, ...]
value_cache = cache[:, 1, ...]
key_update = self.key_dense(x)
key_update = self._apply_rope(key_update, positions)
value_update = self.value_dense(x)
start = [0, cache_update_index, 0, 0]
key = ops.slice_update(key_cache, start, key_update)
value = ops.slice_update(value_cache, start, value_update)
cache = ops.stack((key, value), axis=1)
else:
key = self.key_dense(x)
key = self._apply_rope(key, positions)
value = self.value_dense(x)
attention_vec = self._compute_attention(
query, key, value, attention_mask, training=training
)
# Wipe attn vec if there are no attended tokens.
no_attended_tokens = ops.all(
ops.equal(attention_mask, 0), axis=-1, keepdims=True
)[..., None]
attention_vec = ops.where(
no_attended_tokens, ops.zeros_like(attention_vec), attention_vec
)
attention_output = self.output_dense(attention_vec)
if cache is not None:
return attention_output, cache
return attention_output
--- file: keras-nlp/keras_nlp/models/gemma/gemma_attention.py (repo: keras-nlp) ---
# Copyright 2022 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.backend import ops
from keras_nlp.models.generative_task import GenerativeTask
from keras_nlp.models.gpt_neo_x.gpt_neo_x_backbone import GPTNeoXBackbone
from keras_nlp.models.gpt_neo_x.gpt_neo_x_causal_lm_preprocessor import (
GPTNeoXCausalLMPreprocessor,
)
from keras_nlp.utils.python_utils import classproperty
@keras_nlp_export("keras_nlp.models.GPTNeoXCausalLM")
class GPTNeoXCausalLM(GenerativeTask):
"""An end-to-end GPTNeoX model for causal language modeling.
A causal language model (LM) predicts the next token based on previous
tokens. This task setup can be used to train the model unsupervised on
plain text input, or to autoregressively generate plain text similar to
the data used for training. This task can be used for pre-training or
fine-tuning a GPT-NeoX model, simply by calling `fit()`.
This model has a `generate()` method, which generates text based on a
prompt. The generation strategy used is controlled by an additional
`sampler` argument on `compile()`. You can recompile the model with
different `keras_nlp.samplers` objects to control the generation. By
default, `"top_k"` sampling will be used.
Args:
backbone: A `keras_nlp.models.GPTNeoXBackbone` instance.
preprocessor: A `keras_nlp.models.GPTNeoXCausalLMPreprocessor` or `None`.
If `None`, this model will not apply preprocessing, and inputs
should be preprocessed before calling the model.
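    Example usage (a minimal sketch; `backbone` and `preprocessor` are assumed
    to be already-constructed `GPTNeoXBackbone` and
    `GPTNeoXCausalLMPreprocessor` instances):
    ```python
    causal_lm = keras_nlp.models.GPTNeoXCausalLM(
        backbone=backbone,
        preprocessor=preprocessor,
    )
    # The generation strategy is controlled by the sampler set in `compile()`.
    causal_lm.compile(sampler="greedy")
    causal_lm.generate("The quick brown fox", max_length=30)
    ```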
"""
def __init__(
self,
backbone,
preprocessor=None,
**kwargs,
):
# === Layers ===
self.backbone = backbone
self.preprocessor = preprocessor
# === Functional Model ===
inputs = backbone.input
hidden_states = backbone(inputs)
outputs = backbone.token_embedding(hidden_states, reverse=True)
super().__init__(
inputs=inputs,
outputs=outputs,
**kwargs,
)
# === Default compilation ===
self.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.Adam(2e-5),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
jit_compile=True,
)
@classproperty
def backbone_cls(cls):
return GPTNeoXBackbone
@classproperty
def preprocessor_cls(cls):
return GPTNeoXCausalLMPreprocessor
def call_with_cache(
self,
token_ids,
cache,
cache_update_index,
):
"""Forward pass of `GPTNeoXCausalLM` with cache.
`call_with_cache` adds an additional forward pass to the model for
autoregressive inference. Unlike calling the model directly, this method
caches previous key/value Tensors in the multi-head attention layers,
and avoids recomputing the outputs of seen tokens.
Args:
token_ids: a dense int Tensor with shape `(batch_size, max_length)`.
cache: a dense float Tensor, the cache of key and value.
cache_update_index: int, or int Tensor. The index of current inputs
in the whole sequence.
Returns:
A (logits, hidden_states, cache) tuple. Where `logits` is the
language model logits for the input token_ids, `hidden_states` is
the final hidden representation of the input tokens, and `cache` is
the decoding cache.
"""
token_embedding = self.backbone.token_embedding(token_ids)
x = self.backbone.embeddings_dropout(token_embedding)
# Each decoder layer has a cache; we update them separately.
caches = []
for i, transformer_layer in enumerate(self.backbone.transformer_layers):
current_cache = cache[:, i, ...]
x, next_cache = transformer_layer(
x,
self_attention_cache=current_cache,
self_attention_cache_update_index=cache_update_index,
)
caches.append(next_cache)
cache = ops.stack(caches, axis=1)
x = self.backbone.layer_norm(x)
hidden_states = x
logits = self.backbone.token_embedding(hidden_states, reverse=True)
return logits, hidden_states, cache
def _build_cache(self, token_ids):
"""Build an empty cache for use with `call_with_cache()`."""
batch_size = ops.shape(token_ids)[0]
max_length = ops.shape(token_ids)[1]
num_layers = self.backbone.num_layers
num_heads = self.backbone.num_heads
head_dim = self.backbone.hidden_dim // self.backbone.num_heads
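# Cache layout: (batch, layer, key/value, sequence, head, head feature).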
shape = [batch_size, num_layers, 2, max_length, num_heads, head_dim]
cache = ops.zeros(shape, dtype=self.compute_dtype)
# Seed the cache.
_, hidden_states, cache = self.call_with_cache(token_ids, cache, 0)
return hidden_states, cache
def generate_step(
self,
inputs,
end_token_id=None,
):
"""A compilable generation function for a single batch of inputs.
This function represents the inner, XLA-compilable, generation function
for a single batch of inputs. Inputs should have the same structure as
model inputs, a dictionary with keys `"token_ids"` and `"padding_mask"`.
Args:
inputs: A dictionary with two keys `"token_ids"` and
`"padding_mask"` and batched tensor values.
end_token_id: The id of the end token to stop on. If all
sequences have produced a new `end_token_id`, generation
will stop.
"""
token_ids, padding_mask = inputs["token_ids"], inputs["padding_mask"]
# Create and seed cache with a single forward pass.
hidden_states, cache = self._build_cache(token_ids)
# Compute the lengths of all user inputted token ids.
row_lengths = ops.sum(ops.cast(padding_mask, "int32"), axis=-1)
# Start at the first index that has no user inputted id.
index = ops.min(row_lengths)
def next(prompt, cache, index):
# The cache index is the index of our previous token.
cache_update_index = index - 1
batch_size = ops.shape(prompt)[0]
prompt = ops.slice(prompt, [0, cache_update_index], [batch_size, 1])
logits, hidden_states, cache = self.call_with_cache(
prompt,
cache,
cache_update_index,
)
return (
ops.squeeze(logits, axis=1),
ops.squeeze(hidden_states, axis=1),
cache,
)
token_ids = self._sampler(
next=next,
prompt=token_ids,
cache=cache,
index=index,
mask=padding_mask,
end_token_id=end_token_id,
hidden_states=hidden_states,
model=self,
)
# Compute an output padding mask with the token ids we updated.
if end_token_id is not None:
# Build a mask of `end_token_id` locations not in the original
# prompt (not in locations where `padding_mask` is True).
end_locations = ops.logical_and(
ops.equal(token_ids, end_token_id),
ops.logical_not(padding_mask),
)
end_locations = ops.cast(end_locations, "int32")
# Use cumsum to get ones in all locations after end_locations.
cumsum = ops.cast(ops.cumsum(end_locations, axis=-1), "int32")
overflow = cumsum - end_locations
# Our padding mask is the inverse of these overflow locations.
padding_mask = ops.logical_not(ops.cast(overflow, "bool"))
else:
# Without early stopping, all locations will have been updated.
padding_mask = ops.ones_like(token_ids, dtype="bool")
return {
"token_ids": token_ids,
"padding_mask": padding_mask,
}
| keras-nlp/keras_nlp/models/gpt_neo_x/gpt_neo_x_causal_lm.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/gpt_neo_x/gpt_neo_x_causal_lm.py",
"repo_id": "keras-nlp",
"token_count": 3706
} | 155 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from keras_nlp.models.llama.llama_tokenizer import LlamaTokenizer
from keras_nlp.tests.test_case import TestCase
class LlamaTokenizerTest(TestCase):
def setUp(self):
self.init_kwargs = {
# Generated using create_llama_test_proto.py
"proto": os.path.join(
self.get_test_data_dir(), "llama_test_vocab.spm"
)
}
self.input_data = ["the quick brown fox", "the earth is round"]
def test_tokenizer_basics(self):
self.run_preprocessing_layer_test(
cls=LlamaTokenizer,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
expected_output=[[3, 8, 4, 6], [3, 5, 7, 9]],
)
def test_errors_missing_special_tokens(self):
with self.assertRaises(ValueError):
LlamaTokenizer(
# Generated using create_no_special_token_proto.py
proto=os.path.join(
self.get_test_data_dir(), "no_special_token_vocab.spm"
)
)
| keras-nlp/keras_nlp/models/llama/llama_tokenizer_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/llama/llama_tokenizer_test.py",
"repo_id": "keras-nlp",
"token_count": 698
} | 156 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import copy
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import keras
from keras_nlp.models.roberta.roberta_backbone import RobertaBackbone
from keras_nlp.models.roberta.roberta_backbone import roberta_kernel_initializer
from keras_nlp.models.roberta.roberta_preprocessor import RobertaPreprocessor
from keras_nlp.models.roberta.roberta_presets import backbone_presets
from keras_nlp.models.task import Task
from keras_nlp.utils.python_utils import classproperty
@keras_nlp_export("keras_nlp.models.RobertaClassifier")
class RobertaClassifier(Task):
"""An end-to-end RoBERTa model for classification tasks.
This model attaches a classification head to a
`keras_nlp.model.RobertaBackbone` instance, mapping from the backbone
outputs to logits suitable for a classification task. For usage of this
model with pre-trained weights, see the `from_preset()` constructor.
This model can optionally be configured with a `preprocessor` layer, in
which case it will automatically apply preprocessing to raw inputs during
`fit()`, `predict()`, and `evaluate()`. This is done by default when
creating the model with `from_preset()`.
Disclaimer: Pre-trained models are provided on an "as is" basis, without
warranties or conditions of any kind. The underlying model is provided by a
third party and subject to a separate license, available
[here](https://github.com/facebookresearch/fairseq).
Args:
backbone: A `keras_nlp.models.RobertaBackbone` instance.
num_classes: int. Number of classes to predict.
preprocessor: A `keras_nlp.models.RobertaPreprocessor` or `None`. If
`None`, this model will not apply preprocessing, and inputs should
be preprocessed before calling the model.
activation: Optional `str` or callable. The activation function to use
on the model outputs. Set `activation="softmax"` to return output
probabilities. Defaults to `None`.
hidden_dim: int. The size of the pooler layer.
dropout: float. The dropout probability value, applied to the pooled
output, and after the first dense layer.
Examples:
Raw string data.
```python
features = ["The quick brown fox jumped.", "I forgot my homework."]
labels = [0, 3]
# Pretrained classifier.
classifier = keras_nlp.models.RobertaClassifier.from_preset(
"roberta_base_en",
num_classes=4,
)
classifier.fit(x=features, y=labels, batch_size=2)
classifier.predict(x=features, batch_size=2)
# Re-compile (e.g., with a new learning rate).
classifier.compile(
loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
optimizer=keras.optimizers.Adam(5e-5),
jit_compile=True,
)
# Access backbone programmatically (e.g., to change `trainable`).
classifier.backbone.trainable = False
# Fit again.
classifier.fit(x=features, y=labels, batch_size=2)
```
Preprocessed integer data.
```python
features = {
"token_ids": np.ones(shape=(2, 12), dtype="int32"),
"padding_mask": np.array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0]] * 2),
}
labels = [0, 3]
# Pretrained classifier without preprocessing.
classifier = keras_nlp.models.RobertaClassifier.from_preset(
"roberta_base_en",
num_classes=4,
preprocessor=None,
)
classifier.fit(x=features, y=labels, batch_size=2)
```
Custom backbone and vocabulary.
```python
features = ["a quick fox", "a fox quick"]
labels = [0, 3]
vocab = {"<s>": 0, "<pad>": 1, "</s>": 2, "<mask>": 3}
vocab = {**vocab, "a": 4, "Ġquick": 5, "Ġfox": 6}
merges = ["Ġ q", "u i", "c k", "ui ck", "Ġq uick"]
merges += ["Ġ f", "o x", "Ġf ox"]
tokenizer = keras_nlp.models.RobertaTokenizer(
vocabulary=vocab,
merges=merges
)
preprocessor = keras_nlp.models.RobertaPreprocessor(
tokenizer=tokenizer,
sequence_length=128,
)
backbone = keras_nlp.models.RobertaBackbone(
vocabulary_size=20,
num_layers=4,
num_heads=4,
hidden_dim=256,
intermediate_dim=512,
max_sequence_length=128
)
classifier = keras_nlp.models.RobertaClassifier(
backbone=backbone,
preprocessor=preprocessor,
num_classes=4,
)
classifier.fit(x=features, y=labels, batch_size=2)
```
"""
def __init__(
self,
backbone,
num_classes,
preprocessor=None,
activation=None,
hidden_dim=None,
dropout=0.0,
**kwargs,
):
# === Layers ===
self.backbone = backbone
self.preprocessor = preprocessor
self.pooled_dropout = keras.layers.Dropout(
dropout,
dtype=backbone.dtype_policy,
name="pooled_dropout",
)
hidden_dim = hidden_dim or backbone.hidden_dim
self.pooled_dense = keras.layers.Dense(
hidden_dim,
activation="tanh",
dtype=backbone.dtype_policy,
name="pooled_dense",
)
self.output_dropout = keras.layers.Dropout(
dropout,
dtype=backbone.dtype_policy,
name="output_dropout",
)
self.output_dense = keras.layers.Dense(
num_classes,
kernel_initializer=roberta_kernel_initializer(),
activation=activation,
dtype=backbone.dtype_policy,
name="logits",
)
# === Functional Model ===
inputs = backbone.input
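# Pool by taking the hidden state of the start token (`<s>`) for each sequence.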
x = backbone(inputs)[:, backbone.start_token_index, :]
x = self.pooled_dropout(x)
x = self.pooled_dense(x)
x = self.output_dropout(x)
outputs = self.output_dense(x)
super().__init__(
inputs=inputs,
outputs=outputs,
**kwargs,
)
# === Config ===
self.num_classes = num_classes
self.activation = keras.activations.get(activation)
self.hidden_dim = hidden_dim
self.dropout = dropout
# === Default compilation ===
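# If no activation was set, the model outputs raw logits, so the loss
# must apply the softmax internally (`from_logits=True`).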
logit_output = self.activation == keras.activations.linear
self.compile(
loss=keras.losses.SparseCategoricalCrossentropy(
from_logits=logit_output
),
optimizer=keras.optimizers.Adam(2e-5),
metrics=[keras.metrics.SparseCategoricalAccuracy()],
jit_compile=True,
)
def get_config(self):
config = super().get_config()
config.update(
{
"num_classes": self.num_classes,
"activation": keras.activations.serialize(self.activation),
"hidden_dim": self.hidden_dim,
"dropout": self.dropout,
}
)
return config
@classproperty
def backbone_cls(cls):
return RobertaBackbone
@classproperty
def preprocessor_cls(cls):
return RobertaPreprocessor
@classproperty
def presets(cls):
return copy.deepcopy(backbone_presets)
| keras-nlp/keras_nlp/models/roberta/roberta_classifier.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/roberta/roberta_classifier.py",
"repo_id": "keras-nlp",
"token_count": 3307
} | 157 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""XLM-RoBERTa model preset configurations."""
backbone_presets = {
"t5_small_multi": {
"metadata": {
"description": (
"8-layer T5 model. Trained on the Colossal Clean Crawled "
"Corpus (C4)."
),
"params": 0,
"official_name": "T5",
"path": "t5",
"model_card": "https://github.com/google-research/text-to-text-transfer-transformer/blob/main/README.md",
},
"kaggle_handle": "kaggle://keras/t5/keras/t5_small_multi/2",
},
"t5_base_multi": {
"metadata": {
"description": (
"12-layer T5 model. Trained on the Colossal Clean Crawled "
"Corpus (C4)."
),
"params": 0,
"official_name": "T5",
"path": "t5",
"model_card": "https://github.com/google-research/text-to-text-transfer-transformer/blob/main/README.md",
},
"kaggle_handle": "kaggle://keras/t5/keras/t5_base_multi/2",
},
"t5_large_multi": {
"metadata": {
"description": (
"24-layer T5 model. Trained on the Colossal Clean Crawled "
"Corpus (C4)."
),
"params": 0,
"official_name": "T5",
"path": "t5",
"model_card": "https://github.com/google-research/text-to-text-transfer-transformer/blob/main/README.md",
},
"kaggle_handle": "kaggle://keras/t5/keras/t5_large_multi/2",
},
"flan_small_multi": {
"metadata": {
"description": (
"8-layer T5 model. Trained on the Colossal Clean Crawled "
"Corpus (C4)."
),
"params": 0,
"official_name": "T5",
"path": "t5",
"model_card": "https://github.com/google-research/text-to-text-transfer-transformer/blob/main/README.md",
},
"kaggle_handle": "kaggle://keras/t5/keras/flan_small_multi/2",
},
"flan_base_multi": {
"metadata": {
"description": (
"12-layer T5 model. Trained on the Colossal Clean Crawled "
"Corpus (C4)."
),
"params": 0,
"official_name": "T5",
"path": "t5",
"model_card": "https://github.com/google-research/text-to-text-transfer-transformer/blob/main/README.md",
},
"kaggle_handle": "kaggle://keras/t5/keras/flan_base_multi/2",
},
"flan_large_multi": {
"metadata": {
"description": (
"24-layer T5 model. Trained on the Colossal Clean Crawled "
"Corpus (C4)."
),
"params": 0,
"official_name": "T5",
"path": "t5",
"model_card": "https://github.com/google-research/text-to-text-transfer-transformer/blob/main/README.md",
},
"kaggle_handle": "kaggle://keras/t5/keras/flan_large_multi/2",
},
}
| keras-nlp/keras_nlp/models/t5/t5_presets.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/t5/t5_presets.py",
"repo_id": "keras-nlp",
"token_count": 1791
} | 158 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Metadata for loading pretrained model weights.
backbone_presets = {
"whisper_tiny_en": {
"metadata": {
"description": (
"4-layer Whisper model. Trained on 438,000 hours of labelled "
"English speech data."
),
"params": 37184256,
"official_name": "Whisper",
"path": "whisper",
"model_card": "https://github.com/openai/whisper/blob/main/model-card.md",
},
"kaggle_handle": "kaggle://keras/whisper/keras/whisper_tiny_en/2",
},
"whisper_base_en": {
"metadata": {
"description": (
"6-layer Whisper model. Trained on 438,000 hours of labelled "
"English speech data."
),
"params": 124439808,
"official_name": "Whisper",
"path": "whisper",
"model_card": "https://github.com/openai/whisper/blob/main/model-card.md",
},
"kaggle_handle": "kaggle://keras/whisper/keras/whisper_base_en/2",
},
"whisper_small_en": {
"metadata": {
"description": (
"12-layer Whisper model. Trained on 438,000 hours of labelled "
"English speech data."
),
"params": 241734144,
"official_name": "Whisper",
"path": "whisper",
"model_card": "https://github.com/openai/whisper/blob/main/model-card.md",
},
"kaggle_handle": "kaggle://keras/whisper/keras/whisper_small_en/2",
},
"whisper_medium_en": {
"metadata": {
"description": (
"24-layer Whisper model. Trained on 438,000 hours of labelled "
"English speech data."
),
"params": 763856896,
"official_name": "Whisper",
"path": "whisper",
"model_card": "https://github.com/openai/whisper/blob/main/model-card.md",
},
"kaggle_handle": "kaggle://keras/whisper/keras/whisper_medium_en/2",
},
"whisper_tiny_multi": {
"metadata": {
"description": (
"4-layer Whisper model. Trained on 680,000 hours of labelled "
"multilingual speech data."
),
"params": 37760640,
"official_name": "Whisper",
"path": "whisper",
"model_card": "https://github.com/openai/whisper/blob/main/model-card.md",
},
"kaggle_handle": "kaggle://keras/whisper/keras/whisper_tiny_multi/2",
},
"whisper_base_multi": {
"metadata": {
"description": (
"6-layer Whisper model. Trained on 680,000 hours of labelled "
"multilingual speech data."
),
"params": 72593920,
"official_name": "Whisper",
"path": "whisper",
"model_card": "https://github.com/openai/whisper/blob/main/model-card.md",
},
"kaggle_handle": "kaggle://keras/whisper/keras/whisper_base_multi/2",
},
"whisper_small_multi": {
"metadata": {
"description": (
"12-layer Whisper model. Trained on 680,000 hours of labelled "
"multilingual speech data."
),
"params": 241734912,
"official_name": "Whisper",
"path": "whisper",
"model_card": "https://github.com/openai/whisper/blob/main/model-card.md",
},
"kaggle_handle": "kaggle://keras/whisper/keras/whisper_small_multi/2",
},
"whisper_medium_multi": {
"metadata": {
"description": (
"24-layer Whisper model. Trained on 680,000 hours of labelled "
"multilingual speech data."
),
"params": 763857920,
"official_name": "Whisper",
"path": "whisper",
"model_card": "https://github.com/openai/whisper/blob/main/model-card.md",
},
"kaggle_handle": "kaggle://keras/whisper/keras/whisper_medium_multi/2",
},
"whisper_large_multi": {
"metadata": {
"description": (
"32-layer Whisper model. Trained on 680,000 hours of labelled "
"multilingual speech data."
),
"params": 1543304960,
"official_name": "Whisper",
"path": "whisper",
"model_card": "https://github.com/openai/whisper/blob/main/model-card.md",
},
"kaggle_handle": "kaggle://keras/whisper/keras/whisper_large_multi/2",
},
"whisper_large_multi_v2": {
"metadata": {
"description": (
"32-layer Whisper model. Trained for 2.5 epochs on 680,000 "
"hours of labelled multilingual speech data. An improved "
"of `whisper_large_multi`."
),
"params": 1543304960,
"official_name": "Whisper",
"path": "whisper",
"model_card": "https://github.com/openai/whisper/blob/main/model-card.md",
},
"kaggle_handle": "kaggle://keras/whisper/keras/whisper_large_multi_v2/2",
},
}
| keras-nlp/keras_nlp/models/whisper/whisper_presets.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/whisper/whisper_presets.py",
"repo_id": "keras-nlp",
"token_count": 2909
} | 159 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import pytest
from keras_nlp.models.xlm_roberta.xlm_roberta_tokenizer import (
XLMRobertaTokenizer,
)
from keras_nlp.tests.test_case import TestCase
class XLMRobertaTokenizerTest(TestCase):
def setUp(self):
self.init_kwargs = {
# Generated using create_xlm_roberta_test_proto.py
"proto": os.path.join(
self.get_test_data_dir(), "xlm_roberta_test_vocab.spm"
)
}
self.input_data = ["the quick brown fox", "the earth is round"]
def test_tokenizer_basics(self):
self.run_preprocessing_layer_test(
cls=XLMRobertaTokenizer,
init_kwargs=self.init_kwargs,
input_data=self.input_data,
expected_output=[[6, 11, 7, 9], [6, 8, 10, 12]],
)
@pytest.mark.large
def test_smallest_preset(self):
self.run_preset_test(
cls=XLMRobertaTokenizer,
preset="xlm_roberta_base_multi",
input_data=["The quick brown fox."],
expected_output=[[581, 63773, 119455, 6, 147797, 5]],
)
@pytest.mark.extra_large
def test_all_presets(self):
for preset in XLMRobertaTokenizer.presets:
self.run_preset_test(
cls=XLMRobertaTokenizer,
preset=preset,
input_data=self.input_data,
)
| keras-nlp/keras_nlp/models/xlm_roberta/xlm_roberta_tokenizer_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/models/xlm_roberta/xlm_roberta_tokenizer_test.py",
"repo_id": "keras-nlp",
"token_count": 866
} | 160 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.backend import config
from keras_nlp.backend import keras
from keras_nlp.backend import ops
from keras_nlp.backend import random
@keras_nlp_export("keras_nlp.samplers.Sampler")
class Sampler:
"""Base sampler class.
Args:
temperature: float. Optional. Used to control the
randomness of the sampling. The higher the temperature, the
more diverse the samples. Defaults to `1.0`.
Call arguments:
{{call_args}}
This base class can be extended to implement different auto-regressive
sampling methods. To do so, override the `get_next_token()` method, which
computes the next token based on a probability distribution over all
possible vocab entries.
Examples:
```python
causal_lm = keras_nlp.models.GPT2CausalLM.from_preset("gpt2_base_en")
# Greedy search with some tokens forbidden.
class CustomSampler(keras_nlp.samplers.Sampler):
def __init__(self, forbidden_tokens, **kwargs):
super().__init__(**kwargs)
self.forbidden_tokens = forbidden_tokens
def get_next_token(self, probs):
batch_size, vocab_size = keras.ops.shape(probs)
for id in self.forbidden_tokens:
update = keras.ops.zeros((batch_size, 1))
probs = keras.ops.slice_update(probs, (0, id), update)
return keras.ops.argmax(probs, axis=-1)
# 257 = "a" with a leading space, 262 = "the" with a leading space.
causal_lm.compile(sampler=CustomSampler(forbidden_tokens=[257, 262]))
causal_lm.summary()
causal_lm.generate(["That's strange"])
```
"""
def __init__(
self,
temperature=1.0,
):
self.temperature = temperature
self._seed_generators = []
def __setattr__(self, name, value):
# We could update to the `Tracker` class from keras-core if our needs
# become more advanced (e.g. list assignment, nested trackables). For
# now, we only track `SeedGenerator` instances directly on the sampler.
if isinstance(value, random.SeedGenerator):
self._seed_generators.append(value)
return super().__setattr__(name, value)
@property
def variables(self):
variables = []
for sg in self._seed_generators:
variables.append(sg.state)
return variables
def __call__(
self,
next,
prompt,
cache=None,
index=0,
mask=None,
end_token_id=None,
hidden_states=None,
model=None,
):
max_length = ops.shape(prompt)[-1]
# Make sure `max_length` and `index` are the same dtype.
index = ops.cast(index, "int32")
max_length = ops.cast(max_length, "int32")
if mask is None:
mask = ops.zeros_like(prompt, dtype="bool")
else:
mask = ops.cast(mask, dtype="bool")
# `ops.while_loop` will not accept `None` as a value for `loop_vars`.
cache = () if cache is None else cache
def cond(prompt, cache, index):
if end_token_id is None:
return True
# Stop if all sequences have produced a *new* end_token_id.
end_tokens = (prompt == end_token_id) & (~mask)
prompt_done = ops.any(end_tokens, axis=-1)
return ops.logical_not(ops.all(prompt_done))
def body(prompt, cache, index):
# Compute the softmax distribution for the next token.
logits, _, cache = next(prompt, cache, index)
probabilities = self.compute_probabilities(logits)
# Compute the next token.
next_token = self.get_next_token(probabilities)
# Don't overwrite anywhere mask is True.
next_token = ops.cast(next_token, prompt.dtype)
next_token = ops.where(mask[:, index], prompt[:, index], next_token)
# Update the prompt with the next token.
next_token = next_token[:, None]
prompt = ops.slice_update(prompt, [0, index], next_token)
# Return the next prompt, cache and incremented index.
return (prompt, cache, index + 1)
prompt, _, _ = self.run_loop(
cond,
body,
loop_vars=(prompt, cache, index),
maximum_iterations=(max_length - index),
model=model,
)
return prompt
def compute_probabilities(self, logits):
"""Compute token probabilities from logits.
This will always be done in full precision, regardless of dtype. Logits
are scaled by `1 / temperature` before the softmax.
"""
logits_dtype = logits.dtype
logits = ops.cast(logits, "float32")
probs = keras.activations.softmax(logits / self.temperature)
return ops.cast(probs, logits_dtype)
def run_loop(
self, cond, body, model=None, loop_vars=None, maximum_iterations=None
):
"""Run ops.while_loops with a `StatelessScope` if necessary."""
if config.backend() == "jax":
import itertools
if model:
model_trainable_variables = model.trainable_variables
model_non_trainable_variables = model.non_trainable_variables
else:
model_trainable_variables = []
model_non_trainable_variables = []
def stateless_cond(state, *loop_vars):
return cond(*loop_vars)
def stateless_body(state, *loop_vars):
(
sampler_variables,
trainable_variables,
non_trainable_variables,
) = state
mapping = itertools.chain(
zip(self.variables, sampler_variables),
zip(model_trainable_variables, trainable_variables),
zip(model_non_trainable_variables, non_trainable_variables),
)
with keras.StatelessScope(state_mapping=mapping) as scope:
loop_vars = body(*loop_vars)
sampler_variables = []
for v in self.variables:
new_v = scope.get_current_value(v)
sampler_variables.append(new_v if new_v is not None else v)
state = (
sampler_variables,
trainable_variables,
non_trainable_variables,
)
return state, *loop_vars
variables = [ops.convert_to_tensor(v) for v in self.variables]
trainable_variables = [
ops.convert_to_tensor(v) for v in model_trainable_variables
]
non_trainable_variables = [
ops.convert_to_tensor(v) for v in model_non_trainable_variables
]
state = (
variables,
trainable_variables,
non_trainable_variables,
)
state, *loop_vars = ops.while_loop(
cond=stateless_cond,
body=stateless_body,
loop_vars=(state, *loop_vars),
maximum_iterations=maximum_iterations,
)
for ref_v, v in zip(self.variables, state[0]):
ref_v.assign(v)
else:
loop_vars = ops.while_loop(
cond=cond,
body=body,
loop_vars=(loop_vars),
maximum_iterations=maximum_iterations,
)
return loop_vars
def get_next_token(self, probabilities):
"""Get the next token.
Args:
probabilities: a Tensor, the probability distribution for next
token over all vocab tokens.
Get the next token based on given probability distribution over tokens.
Subclasses must implement this method.
"""
raise NotImplementedError
@classmethod
def from_config(cls, config):
return cls(**config)
def get_config(self):
return {"temperature": self.temperature}
| keras-nlp/keras_nlp/samplers/sampler.py/0 | {
"file_path": "keras-nlp/keras_nlp/samplers/sampler.py",
"repo_id": "keras-nlp",
"token_count": 4068
} | 161 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import List
from keras_nlp.api_export import keras_nlp_export
from keras_nlp.layers.preprocessing.preprocessing_layer import (
PreprocessingLayer,
)
@keras_nlp_export("keras_nlp.tokenizers.Tokenizer")
class Tokenizer(PreprocessingLayer):
"""A base class for tokenizer layers.
Tokenizers in the KerasNLP library should all subclass this layer.
The class provides two core methods `tokenize()` and `detokenize()` for
going from plain text to sequences and back. A tokenizer is a subclass of
`keras.layers.Layer` and can be combined into a `keras.Model`.
Subclassers should always implement the `tokenize()` method, which will also
be the default when calling the layer directly on inputs.
Subclassers can optionally implement the `detokenize()` method if the
tokenization is reversible. Otherwise, this can be skipped.
Subclassers should implement `get_vocabulary()`, `vocabulary_size()`,
`token_to_id()` and `id_to_token()` if applicable. For some simple
"vocab free" tokenizers, such as a whitespace splitter show below, these
methods do not apply and can be skipped.
Examples:
```python
class WhitespaceSplitterTokenizer(keras_nlp.tokenizers.Tokenizer):
def tokenize(self, inputs):
return tf.strings.split(inputs)
def detokenize(self, inputs):
return tf.strings.reduce_join(inputs, separator=" ", axis=-1)
tokenizer = WhitespaceSplitterTokenizer()
# Tokenize some inputs.
tokenizer.tokenize("This is a test")
# Shorthand for `tokenize()`.
tokenizer("This is a test")
# Detokenize some outputs.
tokenizer.detokenize(["This", "is", "a", "test"])
```
"""
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def tokenize(self, inputs, *args, **kwargs):
"""Transform input tensors of strings into output tokens.
Args:
inputs: Input tensor, or dict/list/tuple of input tensors.
*args: Additional positional arguments.
**kwargs: Additional keyword arguments.
"""
raise NotImplementedError(
"No implementation of `tokenize()` was found for "
f"{self.__class__.__name__}. All tokenizers should implement "
"`tokenize()`."
)
def detokenize(self, inputs, *args, **kwargs):
"""Transform tokens back into strings.
Args:
inputs: Input tensor, or dict/list/tuple of input tensors.
*args: Additional positional arguments.
**kwargs: Additional keyword arguments.
"""
raise NotImplementedError(
"No implementation of `detokenize()` was found for "
f"{self.__class__.__name__}."
)
def get_vocabulary(self) -> List[str]:
"""Get the tokenizer vocabulary as a list of strings terms."""
raise NotImplementedError(
"No implementation of `get_vocabulary()` was found for "
f"{self.__class__.__name__}."
)
def vocabulary_size(self) -> int:
"""Returns the total size of the token id space."""
raise NotImplementedError(
"No implementation of `vocabulary_size()` was found for "
f"{self.__class__.__name__}."
)
def id_to_token(self, id: int) -> str:
"""Convert an integer id to a string token."""
raise NotImplementedError(
"No implementation of `id_to_token()` was found for "
f"{self.__class__.__name__}."
)
def token_to_id(self, token: str) -> int:
"""Convert a string token to an integer id."""
raise NotImplementedError(
"No implementation of `token_to_id()` was found for "
f"{self.__class__.__name__}."
)
def call(self, inputs, *args, training=None, **kwargs):
return self.tokenize(inputs, *args, **kwargs)
| keras-nlp/keras_nlp/tokenizers/tokenizer.py/0 | {
"file_path": "keras-nlp/keras_nlp/tokenizers/tokenizer.py",
"repo_id": "keras-nlp",
"token_count": 1708
} | 162 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_nlp.tests.test_case import TestCase
from keras_nlp.utils.python_utils import classproperty
from keras_nlp.utils.python_utils import format_docstring
class ClassPropertyTest(TestCase):
def test_class_property(self):
class Foo:
@classproperty
def bar(cls):
return "class property"
self.assertAllEqual(Foo.bar, "class property")
class FormatDocstringTest(TestCase):
def test_function(self):
@format_docstring(adjective="salubrious")
def foo():
"""It was a {{adjective}} November day."""
return "function"
self.assertAllEqual(foo(), "function")
self.assertAllEqual(foo.__doc__, "It was a salubrious November day.")
def test_class(self):
@format_docstring(adjective="smelly", name="Mortimer")
class Foo:
"""I saw my {{adjective}} friend {{name}}."""
def __init__(self):
self.bar = "property"
self.assertAllEqual(Foo().bar, "property")
self.assertAllEqual(Foo.__doc__, "I saw my smelly friend Mortimer.")
def test_class_method(self):
@format_docstring(adjective="smelly", name="Mortimer")
class Foo:
"""I saw my {{adjective}} friend {{name}}."""
def __init__(self):
self.bar = "property"
@classmethod
@format_docstring(noun="cactus", bodypart="nostril")
def baz(cls):
"""He was holding a {{noun}} in his {{bodypart}}."""
return "class method"
self.assertAllEqual(Foo.baz(), "class method")
self.assertAllEqual(
Foo.baz.__doc__,
"He was holding a cactus in his nostril.",
)
self.assertAllEqual(
Foo.baz.__func__.__doc__,
"He was holding a cactus in his nostril.",
)
def test_brackets(self):
@format_docstring(nickname="dumdum")
def bar():
"""Use `{}` to create a dictionary, {{nickname}}."""
return "function"
self.assertAllEqual(bar(), "function")
self.assertAllEqual(
bar.__doc__, "Use `{}` to create a dictionary, dumdum."
)
| keras-nlp/keras_nlp/utils/python_utils_test.py/0 | {
"file_path": "keras-nlp/keras_nlp/utils/python_utils_test.py",
"repo_id": "keras-nlp",
"token_count": 1210
} | 163 |
# Copyright 2023 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
import numpy as np
import requests
import tensorflow as tf
import transformers
from absl import app
from absl import flags
import keras_nlp
from tools.checkpoint_conversion.checkpoint_conversion_utils import (
get_md5_checksum,
)
PRESET_MAP = {
"distil_bert_base_en_uncased": "distilbert-base-uncased",
"distil_bert_base_en_cased": "distilbert-base-cased",
"distil_bert_base_multi_cased": "distilbert-base-multilingual-cased",
}
EXTRACT_DIR = "./{}"
FLAGS = flags.FLAGS
flags.DEFINE_string(
"preset", None, f'Must be one of {",".join(PRESET_MAP.keys())}'
)
def download_files(hf_model_name):
print("-> Download original vocab and config.")
extract_dir = EXTRACT_DIR.format(FLAGS.preset)
if not os.path.exists(extract_dir):
os.makedirs(extract_dir)
# Config.
config_path = os.path.join(extract_dir, "config.json")
response = requests.get(
f"https://huggingface.co/{hf_model_name}/raw/main/config.json"
)
open(config_path, "wb").write(response.content)
print(f"`{config_path}`")
# Vocab.
vocab_path = os.path.join(extract_dir, "vocab.txt")
response = requests.get(
f"https://huggingface.co/{hf_model_name}/raw/main/vocab.txt"
)
open(vocab_path, "wb").write(response.content)
print(f"`{vocab_path}`")
def define_preprocessor(hf_model_name):
print("\n-> Define the tokenizers.")
extract_dir = EXTRACT_DIR.format(FLAGS.preset)
vocab_path = os.path.join(extract_dir, "vocab.txt")
keras_nlp_tokenizer = keras_nlp.models.DistilBertTokenizer(
vocabulary=vocab_path,
)
keras_nlp_preprocessor = keras_nlp.models.DistilBertPreprocessor(
keras_nlp_tokenizer
)
hf_tokenizer = transformers.AutoTokenizer.from_pretrained(hf_model_name)
print("\n-> Print MD5 checksum of the vocab files.")
print(f"`{vocab_path}` md5sum: ", get_md5_checksum(vocab_path))
return keras_nlp_preprocessor, hf_tokenizer
def convert_checkpoints(keras_nlp_model, hf_model):
print("\n-> Convert original weights to KerasNLP format.")
extract_dir = EXTRACT_DIR.format(FLAGS.preset)
config_path = os.path.join(extract_dir, "config.json")
# Build config.
cfg = {}
with open(config_path, "r") as pt_cfg_handler:
pt_cfg = json.load(pt_cfg_handler)
cfg["vocabulary_size"] = pt_cfg["vocab_size"]
cfg["num_layers"] = pt_cfg["n_layers"]
cfg["num_heads"] = pt_cfg["n_heads"]
cfg["hidden_dim"] = pt_cfg["dim"]
cfg["intermediate_dim"] = pt_cfg["hidden_dim"]
cfg["dropout"] = pt_cfg["dropout"]
cfg["max_sequence_length"] = pt_cfg["max_position_embeddings"]
print("Config:", cfg)
hf_wts = hf_model.state_dict()
print("Original weights:")
print(
str(hf_wts.keys())
.replace(", ", "\n")
.replace("odict_keys([", "")
.replace("]", "")
.replace(")", "")
)
keras_nlp_model.get_layer(
"token_and_position_embedding"
).token_embedding.embeddings.assign(
hf_wts["embeddings.word_embeddings.weight"]
)
keras_nlp_model.get_layer(
"token_and_position_embedding"
).position_embedding.position_embeddings.assign(
hf_wts["embeddings.position_embeddings.weight"]
)
keras_nlp_model.get_layer("embeddings_layer_norm").gamma.assign(
hf_wts["embeddings.LayerNorm.weight"]
)
keras_nlp_model.get_layer("embeddings_layer_norm").beta.assign(
hf_wts["embeddings.LayerNorm.bias"]
)
for i in range(keras_nlp_model.num_layers):
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._query_dense.kernel.assign(
hf_wts[f"transformer.layer.{i}.attention.q_lin.weight"]
.transpose(1, 0)
.reshape((cfg["hidden_dim"], cfg["num_heads"], -1))
.numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._query_dense.bias.assign(
hf_wts[f"transformer.layer.{i}.attention.q_lin.bias"]
.reshape((cfg["num_heads"], -1))
.numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._key_dense.kernel.assign(
hf_wts[f"transformer.layer.{i}.attention.k_lin.weight"]
.transpose(1, 0)
.reshape((cfg["hidden_dim"], cfg["num_heads"], -1))
.numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._key_dense.bias.assign(
hf_wts[f"transformer.layer.{i}.attention.k_lin.bias"]
.reshape((cfg["num_heads"], -1))
.numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._value_dense.kernel.assign(
hf_wts[f"transformer.layer.{i}.attention.v_lin.weight"]
.transpose(1, 0)
.reshape((cfg["hidden_dim"], cfg["num_heads"], -1))
.numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._value_dense.bias.assign(
hf_wts[f"transformer.layer.{i}.attention.v_lin.bias"]
.reshape((cfg["num_heads"], -1))
.numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._output_dense.kernel.assign(
hf_wts[f"transformer.layer.{i}.attention.out_lin.weight"]
.transpose(1, 0)
.reshape((cfg["num_heads"], -1, cfg["hidden_dim"]))
.numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer._output_dense.bias.assign(
hf_wts[f"transformer.layer.{i}.attention.out_lin.bias"].numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer_norm.gamma.assign(
hf_wts[f"transformer.layer.{i}.sa_layer_norm.weight"].numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._self_attention_layer_norm.beta.assign(
hf_wts[f"transformer.layer.{i}.sa_layer_norm.bias"].numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._feedforward_intermediate_dense.kernel.assign(
hf_wts[f"transformer.layer.{i}.ffn.lin1.weight"]
.transpose(1, 0)
.numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._feedforward_intermediate_dense.bias.assign(
hf_wts[f"transformer.layer.{i}.ffn.lin1.bias"].numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._feedforward_output_dense.kernel.assign(
hf_wts[f"transformer.layer.{i}.ffn.lin2.weight"]
.transpose(1, 0)
.numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._feedforward_output_dense.bias.assign(
hf_wts[f"transformer.layer.{i}.ffn.lin2.bias"].numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._feedforward_layer_norm.gamma.assign(
hf_wts[f"transformer.layer.{i}.output_layer_norm.weight"].numpy()
)
keras_nlp_model.get_layer(
f"transformer_layer_{i}"
)._feedforward_layer_norm.beta.assign(
hf_wts[f"transformer.layer.{i}.output_layer_norm.bias"].numpy()
)
# Save the model.
print(f"\n-> Save KerasNLP model weights to `{FLAGS.preset}.h5`.")
keras_nlp_model.save_weights(f"{FLAGS.preset}.h5")
return keras_nlp_model
def check_output(
keras_nlp_preprocessor,
keras_nlp_model,
hf_tokenizer,
hf_model,
):
print("\n-> Check the outputs.")
sample_text = ["cricket is awesome, easily the best sport in the world!"]
# KerasNLP
keras_nlp_inputs = keras_nlp_preprocessor(tf.constant(sample_text))
keras_nlp_output = keras_nlp_model.predict(keras_nlp_inputs)
# HF
hf_inputs = hf_tokenizer(
sample_text, padding="max_length", return_tensors="pt"
)
hf_output = hf_model(**hf_inputs).last_hidden_state
print("KerasNLP output:", keras_nlp_output[0, 0, :10])
print("HF output:", hf_output[0, 0, :10])
print("Difference:", np.mean(keras_nlp_output - hf_output.detach().numpy()))
# Show the MD5 checksum of the model weights.
print("Model md5sum: ", get_md5_checksum(f"./{FLAGS.preset}.h5"))
def main(_):
hf_model_name = PRESET_MAP[FLAGS.preset]
download_files(hf_model_name)
keras_nlp_preprocessor, hf_tokenizer = define_preprocessor(hf_model_name)
print("\n-> Load KerasNLP model.")
keras_nlp_model = keras_nlp.models.DistilBertBackbone.from_preset(
FLAGS.preset, load_weights=False
)
print("\n-> Load HF model.")
hf_model = transformers.AutoModel.from_pretrained(hf_model_name)
hf_model.eval()
keras_nlp_model = convert_checkpoints(keras_nlp_model, hf_model)
check_output(
keras_nlp_preprocessor,
keras_nlp_model,
hf_tokenizer,
hf_model,
)
if __name__ == "__main__":
flags.mark_flag_as_required("preset")
app.run(main)
| keras-nlp/tools/checkpoint_conversion/convert_distilbert_checkpoints.py/0 | {
"file_path": "keras-nlp/tools/checkpoint_conversion/convert_distilbert_checkpoints.py",
"repo_id": "keras-nlp",
"token_count": 4816
} | 164 |
# Copyright 2024 The KerasNLP Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import contextlib
import os
import gemma
import torch
import torch_xla.core.xla_model as xm
from absl import app
from absl import flags
from gemma import model_xla as gemma_model
import keras_nlp
os.environ["KERAS_BACKEND"] = "torch"
"""
Sample usage:
For converting a Keras model to PyTorch format using a custom or fine-tuned
checkpoint from Keras, make sure to pass the path for the Keras weights file
(ending in `.weights.h5`) and the model size (`2b` or `7b`) to `--weights_file`
and `--size`, respectively.
Optionally, you can specify the output path for the converted model at
`--output_file`. (This defaults to `gemma.ckpt`)
```
python tools/gemma/export_gemma_to_torch_xla.py \
--weights_file fine_tuned_imdb.weights.h5 \
--size 2b \
--output_file fine_tuned_imdb.ckpt
```
For converting a Keras model to PyTorch format from a preset,
simply pass the Keras preset name to `--preset`.
```
python tools/gemma/export_gemma_to_torch_xla.py \
--preset gemma_2b_en \
--output_file path/to/keras_torch_model.ckpt
```
"""
PRESET_MAP = {
"gemma_2b_en": gemma.config.get_config_for_2b(),
"gemma_instruct_2b_en": gemma.config.get_config_for_2b(),
"gemma_7b_en": gemma.config.get_config_for_7b(),
"gemma_instruct_7b_en": gemma.config.get_config_for_7b(),
}
SIZE_MAP = {
"2b": (gemma.config.get_config_for_2b(), "gemma_2b_en"),
"7b": (gemma.config.get_config_for_7b(), "gemma_7b_en"),
}
FLAGS = flags.FLAGS
flags.DEFINE_string(
"preset",
None,
f'Must be one of {",".join(PRESET_MAP.keys())}'
" Alternatively, a Keras weights file (`.weights.h5`) can be passed"
" to --weights_file flag.",
)
flags.DEFINE_string(
"weights_file",
None,
"A Keras weights file (`.weights.h5`)."
" Alternatively, a model preset can be passed to --preset flag.",
)
flags.DEFINE_string(
"size",
None,
"Size of model. Must be passed if `weights_file` is passed. "
"This should be either `2b` or `7b`.",
)
flags.DEFINE_string(
"output_file",
"gemma.ckpt",
"An output file for the converted PyTorch checkpoint. Default: `gemma.ckpt`",
)
flags.DEFINE_string(
"vocab_dir",
"gemma_tokenizer",
"A directory in which the vocabulary for the tokenizer will be stored.",
)
flags.DEFINE_string(
"dtype",
"float32",
"Set the precision of the converted checkpoint. Must be a valid PyTorch dtype.",
)
@contextlib.contextmanager
def _set_default_tensor_type(dtype: torch.dtype):
"""Sets the default torch dtype to the given dtype."""
torch.set_default_dtype(dtype)
yield
torch.set_default_dtype(torch.float)
def _reconcile_attention_dims(qkv, target_shape):
return torch.cat(qkv).reshape(tuple(target_shape))
def convert_checkpoints(preset, weights_file, size, output_file, vocab_dir):
device = xm.xla_device()
if preset is not None:
print(
f"\n-> Loading PyTorch Gemma model config for preset `{preset}`..."
)
model = gemma_model.GemmaForCausalLM(
PRESET_MAP[preset], world_size=1, rank=0, device=device
)
print(f"\n-> Loading KerasNLP Gemma model with preset `{preset}`...")
keras_nlp_model = keras_nlp.models.GemmaCausalLM.from_preset(preset)
else:
print(f"\n-> Loading PyTorch Gemma model config for `{size}` model...")
config, size_preset = SIZE_MAP[size.lower()]
model = gemma_model.GemmaForCausalLM(
config, world_size=1, rank=0, device=device
)
print(f"\n-> Loading Keras weights from file `{weights_file}`...")
keras_nlp_model = keras_nlp.models.GemmaCausalLM.from_preset(
size_preset
)
keras_nlp_model.load_weights(weights_file)
print("\n✅ Model loading complete.")
print("\n-> Converting weights from KerasNLP Gemma to PyTorch Gemma...")
# Token embedding (with vocab size difference handling)
keras_embedding = keras_nlp_model.backbone.token_embedding.weights[0]
torch_vocab_size = model.embedder.weight.shape[0]
keras_nlp_vocab_size = keras_embedding.value.shape[0]
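# The Keras embedding may carry extra vocab rows; drop the trailing rows
# so the tensor matches the shape of the torch embedder.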
if torch_vocab_size < keras_nlp_vocab_size:
diff = keras_nlp_vocab_size - torch_vocab_size
update_state_dict(
model.embedder,
"weight",
keras_embedding.value[:-diff, :],
)
else:
update_state_dict(
model.embedder,
"weight",
keras_embedding.value,
)
# Decoder blocks
for i in range(keras_nlp_model.backbone.num_layers):
decoder_block = keras_nlp_model.backbone.get_layer(f"decoder_block_{i}")
# Pre-attention norm
update_state_dict(
model.model.layers[i].input_layernorm,
"weight",
decoder_block.pre_attention_norm.weights[0].value,
)
# Attention
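# Keras keeps separate query/key/value kernels with an explicit head axis,
# while the torch model uses a single fused qkv projection, so the three
# kernels are concatenated and reshaped into the fused layout.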
qkv = (
decoder_block.attention.query_dense.weights[0].value.transpose(
1, 2
),
decoder_block.attention.key_dense.weights[0].value.transpose(1, 2),
decoder_block.attention.value_dense.weights[0].value.transpose(
1, 2
),
)
qkv_target_shape = model.model.layers[i].self_attn.qkv_proj.weight.shape
combined_tensor = _reconcile_attention_dims(qkv, qkv_target_shape)
update_state_dict(
model.model.layers[i].self_attn.qkv_proj, "weight", combined_tensor
)
out_target_shape = model.model.layers[i].self_attn.o_proj.weight.shape
keras_out_tensor = decoder_block.attention.output_dense.weights[0].value
out_tensor = keras_out_tensor.reshape(
(out_target_shape[1], out_target_shape[0]) # Transpose target size
).transpose(0, 1)
update_state_dict(
model.model.layers[i].self_attn.o_proj, "weight", out_tensor
)
# Post-attention norm
update_state_dict(
model.model.layers[i].post_attention_layernorm,
"weight",
decoder_block.pre_ffw_norm.weights[0].value,
)
# MLP (Feed-forward)
update_state_dict(
model.model.layers[i].mlp.gate_proj,
"weight",
decoder_block.gating_ffw.weights[0].value.transpose(0, 1),
)
update_state_dict(
model.model.layers[i].mlp.up_proj,
"weight",
decoder_block.gating_ffw_2.weights[0].value.transpose(0, 1),
)
update_state_dict(
model.model.layers[i].mlp.down_proj,
"weight",
decoder_block.ffw_linear.weights[0].value.transpose(0, 1),
)
# Final norm
update_state_dict(
model.model.norm,
"weight",
keras_nlp_model.backbone.layers[-1].weights[0].value,
)
print("\n✅ Weights converted successfully.")
print(f"\n-> Saving PyTorch model checkpoint to `{output_file}`...")
# Save model checkpoint
torch.save({"model_state_dict": model.state_dict()}, output_file)
print(
f"\n✅ Saving complete. Model checkpoint available at `{output_file}`."
)
if preset is not None:
# Tokenizer
print(
f"\n-> Loading KerasNLP Gemma tokenizer with preset `{preset}`..."
)
keras_nlp_tokenizer = keras_nlp.models.GemmaTokenizer.from_preset(
preset
)
print("\n✅ Model loading complete.")
print(f"\n-> Saving tokenizer state to directory `{vocab_dir}`...")
# Save tokenizer state
os.makedirs(vocab_dir, exist_ok=True)
keras_nlp_tokenizer.save_assets(vocab_dir)
print(
"\n✅ Saving complete. Tokenizer state "
f"available at `{vocab_dir}/vocabulary.spm`."
)
def update_state_dict(layer, weight_name: str, tensor: torch.Tensor) -> None:
"""Updates the state dict for a weight given a tensor."""
assert (
tensor.shape == layer.state_dict()[weight_name].shape
), f"{tensor.shape} vs {layer.state_dict()[weight_name].shape}"
layer.state_dict()[weight_name].copy_(tensor)
def flag_error_handler():
if not FLAGS.preset and not FLAGS.weights_file:
raise ValueError(
"Please pass either a valid Keras preset to `--preset`"
" or supply a Keras weights file (`.weights.h5`) and model size"
" (`2b` or `7b`) to `--weights_file` and `--size`, respectively."
)
if FLAGS.weights_file:
if FLAGS.preset:
raise ValueError(
"Both `--preset` and `--weights_file` flags cannot be supplied "
"at the same time. Either supply a valid Keras preset to "
"`--preset`or supply a Keras `.weights.h5` file and "
"model size (`2b` or `7b`) to `--weights_file` and `--size`, "
"respectively."
)
if not str(FLAGS.weights_file).endswith(".weights.h5"):
raise ValueError(
"Please pass a valid Keras weights file ending in `.weights.h5`."
)
if not FLAGS.size:
raise ValueError(
"The `size` flag must be passed if a weights file is passed. "
"Please pass the appropriate size (`2b` or `7b`) for your "
"model to the `--size` flag."
)
if FLAGS.size.lower() not in ["2b", "7b"]:
raise ValueError(
"Invalid `size`. Please pass the appropriate size (`2b` or `7b`) "
"for your model to the `--size` flag."
)
if FLAGS.dtype:
dtype = getattr(torch, FLAGS.dtype)
if not isinstance(dtype, torch.dtype):
raise ValueError(
"Invalid `dtype`. Please pass a valid PyTorch data type (e.g. "
"`float32', 'float16`, etc.) to the `--dtype` flag."
)
def main(_):
flag_error_handler()
with _set_default_tensor_type(getattr(torch, FLAGS.dtype)):
convert_checkpoints(
FLAGS.preset,
FLAGS.weights_file,
FLAGS.size,
FLAGS.output_file,
FLAGS.vocab_dir,
)
if __name__ == "__main__":
app.run(main)
| keras-nlp/tools/gemma/export_gemma_to_torch_xla.py/0 | {
"file_path": "keras-nlp/tools/gemma/export_gemma_to_torch_xla.py",
"repo_id": "keras-nlp",
"token_count": 4941
} | 165 |
# On Github Issues and Pull Requests
Found a bug? Want to contribute changes to the codebase? Make sure to read this first.
## Update Your Environment
To easily update Keras: `pip install git+https://www.github.com/keras-team/keras.git --upgrade`
To easily update Keras-Preprocessing: `pip install git+https://www.github.com/keras-team/keras-preprocessing.git --upgrade`
To easily update Theano: `pip install git+git://github.com/Theano/Theano.git --upgrade`
To update TensorFlow: See [TensorFlow Installation instructions](https://github.com/tensorflow/tensorflow#installation)
## Bug reporting
Your code doesn't work, **and you have determined that the issue lies with Keras-Preprocessing**? Follow these steps to report a bug.
1. Your bug may already be fixed. Make sure to update to the current Keras master branch and Keras-Preprocessing master branch, as well as the latest Theano/TensorFlow master branch.
2. [Search for similar issues](https://github.com/keras-team/keras-preprocessing/issues?utf8=%E2%9C%93&q=is%3Aissue). It's possible somebody has encountered this bug already. Still having a problem? Open an issue on Github to let us know.
3. Make sure you provide us with useful information about your configuration: what OS are you using? What Keras backend are you using? Are you running on GPU? If so, what is your version of Cuda, of cuDNN? What is your GPU?
4. Provide us with a script to reproduce the issue. This script should be runnable as-is and should not require external data download (use randomly generated data if you need to run a model on some test data). We recommend that you use Github Gists to post your code. Any issue that cannot be reproduced is likely to be closed.
5. If possible, take a stab at fixing the bug yourself, if you can!
The more information you provide, the easier it is for us to validate that there is a bug and the faster we'll be able to take action. If you want your issue to be resolved quickly, following the steps above is crucial.
## Pull Requests
We love pull requests. Here's a quick guide:
1. If your PR introduces a change in functionality, make sure you start by opening an issue to discuss whether the change should be made, and how to handle it. This will save you from having your PR closed down the road! Of course, if your PR is a simple bug fix, you don't need to do that.
2. Ensure that your environment (Keras, Keras-Preprocessing, and your backend) are up to date. See "Update Your Environment". Create a new branch for your changes.
3. Write the code (or get others to write it). This is the hard part!
4. Make sure any new function or class you introduce has proper docstrings. Make sure any code you touch still has up-to-date docstrings and documentation. **Docstring style should be respected.** In particular, they should be formatted in MarkDown, and there should be sections for `Arguments`, `Returns`, `Raises` (if applicable). Look at other docstrings in the codebase for examples, and see the sketch after this list.
5. Write tests. Your code should have full unit test coverage. If you want to see your PR merged promptly, this is crucial. If your PR is a bug fix, it is advisable to add a new test, which, without your fix in this PR, would have failed.
6. Run our test suite locally. It's easy: from the Keras folder, simply run: `py.test tests/`.
- You will need to install the test requirements as well: `pip install -e .[tests]`.
7. Make sure all tests are passing:
- with the Theano backend, on Python 2.7 and Python 3.6. Make sure you have the development version of Theano.
- with the TensorFlow backend, on Python 2.7 and Python 3.6. Make sure you have the development version of TensorFlow.
- with the CNTK backend, on Python 2.7 and Python 3.6. Make sure you have the development version of CNTK.
- **Please Note:** all tests run on top of the very latest Keras master branch.
8. We use PEP8 syntax conventions, but we aren't dogmatic when it comes to line length. Make sure your lines stay reasonably sized, though. To make your life easier, we recommend running a PEP8 linter:
- Install PEP8 packages: `pip install pep8 pytest-pep8 autopep8`
- Run a standalone PEP8 check: `py.test --pep8 -m pep8`
    - You can automatically fix some PEP8 errors by running: `autopep8 -i --select <errors> <FILENAME>` for example: `autopep8 -i --select E128 tests/keras/backend/test_backends.py`
9. When committing, use appropriate, descriptive commit messages. Make sure that your branch history is not a string of "bug fix", "fix", "oops", etc. When submitting your PR, squash your commits into a single commit with an appropriate commit message, to make sure the project history stays clean and readable. See ['rebase and squash'](http://rebaseandsqua.sh/) for technical help on how to squash your commits.
10. Update the documentation. If introducing new functionality, make sure you include code snippets demonstrating the usage of your new feature.
11. Submit your PR. If your changes have been approved in a previous discussion, and if you have complete (and passing) unit tests, your PR is likely to be merged promptly.
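As a sketch of the docstring format described in step 4, here is a hypothetical function (its name, arguments, and behavior are invented purely for illustration):
```python
def resize_batch(x, target_size):
    """Resizes a batch of images to a target size.
    # Arguments
        x: Input tensor. Must be 4D `(batch, height, width, channels)`.
        target_size: Tuple of 2 integers, `(height, width)`.
    # Returns
        A resized copy of the input batch.
    # Raises
        ValueError: if `x` is not 4D.
    """
```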
| keras-preprocessing/CONTRIBUTING.md/0 | {
"file_path": "keras-preprocessing/CONTRIBUTING.md",
"repo_id": "keras-preprocessing",
"token_count": 1358
} | 166 |
# Configuration of py.test
[tool:pytest]
addopts=-v -n 2 --durations=20
# Do not run tests in the build folder
norecursedirs=build
[flake8]
# Use 85 as max line length in PEP8 test.
max-line-length=85
# do not run pep8 test in the build folder
exclude=build
# PEP-8 The following are ignored:
# E731 do not assign a lambda expression, use a def
# E402 module level import not at top of file
pep8ignore=* E731 \
* E402 \
| keras-preprocessing/setup.cfg/0 | {
"file_path": "keras-preprocessing/setup.cfg",
"repo_id": "keras-preprocessing",
"token_count": 156
} | 167 |
sudo pip install --upgrade pip
sudo pip install -e ".[tensorflow-cpu,tests]"
echo "sh shell/lint.sh" > .git/hooks/pre-commit
chmod a+x .git/hooks/pre-commit | keras-tuner/.devcontainer/setup.sh/0 | {
"file_path": "keras-tuner/.devcontainer/setup.sh",
"repo_id": "keras-tuner",
"token_count": 59
} | 168 |
<meta http-equiv="refresh" content="0; URL='https://keras.io/guides/keras_tuner/custom_tuner/'" />
| keras-tuner/docs/site/tutorials/subclass-tuner/index.html/0 | {
"file_path": "keras-tuner/docs/site/tutorials/subclass-tuner/index.html",
"repo_id": "keras-tuner",
"token_count": 40
} | 169 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from keras_tuner.backend.config import multi_backend
if multi_backend():
from keras.src.ops import * # noqa: F403, F401
else:
import tensorflow as tf
from tensorflow import cast # noqa: F403, F401
def any_symbolic_tensors(args=None, kwargs=None):
args = args or ()
kwargs = kwargs or {}
for x in tf.nest.flatten((args, kwargs)):
if "KerasTensor" in x.__class__.__name__:
return True
return False
def shape(x):
if any_symbolic_tensors((x,)):
return x.shape
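    # Mix static and dynamic dimensions: keep statically known sizes and
    # fall back to the runtime `tf.shape` value for unknown ones.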
dynamic = tf.shape(x)
static = x.shape.as_list()
return tuple(
dynamic[i] if s is None else s for i, s in enumerate(static)
)
| keras-tuner/keras_tuner/backend/ops.py/0 | {
"file_path": "keras-tuner/keras_tuner/backend/ops.py",
"repo_id": "keras-tuner",
"token_count": 507
} | 170 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"HyperModel base class."
from keras_tuner import errors
from keras_tuner.api_export import keras_tuner_export
@keras_tuner_export(
[
"keras_tuner.HyperModel",
"keras_tuner.engine.hypermodel.HyperModel",
]
)
class HyperModel:
"""Defines a search space of models.
A search space is a collection of models. The `build` function will build
one of the models from the space using the given `HyperParameters` object.
Users should subclass the `HyperModel` class to define their search spaces
by overriding `build()`, which creates and returns the Keras model.
Optionally, you may also override `fit()` to customize the training process
of the model.
Examples:
In `build()`, you can create the model using the hyperparameters.
```python
class MyHyperModel(kt.HyperModel):
def build(self, hp):
model = keras.Sequential()
model.add(keras.layers.Dense(
hp.Choice('units', [8, 16, 32]),
activation='relu'))
model.add(keras.layers.Dense(1, activation='relu'))
model.compile(loss='mse')
return model
```
When overriding `HyperModel.fit()`, if you use `model.fit()` to train your
model, which returns the training history, you can return it directly. You
may use `hp` to specify any hyperparameters to tune.
```python
class MyHyperModel(kt.HyperModel):
def build(self, hp):
...
def fit(self, hp, model, *args, **kwargs):
return model.fit(
*args,
epochs=hp.Int("epochs", 5, 20),
**kwargs)
```
If you have a customized training process, you can return the objective
value as a float.
If you want to keep track of more metrics, you can return a dictionary of
the metrics to track.
```python
class MyHyperModel(kt.HyperModel):
def build(self, hp):
...
def fit(self, hp, model, *args, **kwargs):
...
return {
"loss": loss,
"val_loss": val_loss,
"val_accuracy": val_accuracy
}
```
Args:
name: Optional string, the name of this HyperModel.
tunable: Boolean, whether the hyperparameters defined in this
hypermodel should be added to search space. If `False`, either the
search space for these parameters must be defined in advance, or
the default values will be used. Defaults to True.
"""
def __init__(self, name=None, tunable=True):
self.name = name
self.tunable = tunable
self._build = self.build
self.build = self._build_wrapper
def build(self, hp):
"""Builds a model.
Args:
hp: A `HyperParameters` instance.
Returns:
A model instance.
"""
raise NotImplementedError
def _build_wrapper(self, hp, *args, **kwargs):
if not self.tunable:
# Copy `HyperParameters` object so that new entries are not added
# to the search space.
hp = hp.copy()
return self._build(hp, *args, **kwargs)
def declare_hyperparameters(self, hp):
pass
def fit(self, hp, model, *args, **kwargs):
"""Train the model.
Args:
hp: HyperParameters.
model: `keras.Model` built in the `build()` function.
**kwargs: All arguments passed to `Tuner.search()` are in the
`kwargs` here. It always contains a `callbacks` argument, which
is a list of default Keras callback functions for model
checkpointing, tensorboard configuration, and other tuning
utilities. If `callbacks` is passed by the user from
`Tuner.search()`, these default callbacks will be appended to
the user provided list.
Returns:
A `History` object, which is the return value of `model.fit()`, a
dictionary, or a float.
If return a dictionary, it should be a dictionary of the metrics to
track. The keys are the metric names, which contains the
`objective` name. The values should be the metric values.
If return a float, it should be the `objective` value.
"""
return model.fit(*args, **kwargs)
class DefaultHyperModel(HyperModel):
"""Produces HyperModel from a model building function."""
def __init__(self, build, name=None, tunable=True):
super().__init__(name=name)
self.build = build
def get_hypermodel(hypermodel):
"""Gets a HyperModel from a HyperModel or callable."""
if hypermodel is None:
return None
if isinstance(hypermodel, HyperModel):
return hypermodel
if not callable(hypermodel):
raise errors.FatalValueError(
"The `hypermodel` argument should be either "
"a callable with signature `build(hp)` returning a model, "
"or an instance of `HyperModel`."
)
return DefaultHyperModel(hypermodel)
| keras-tuner/keras_tuner/engine/hypermodel.py/0 | {
"file_path": "keras-tuner/keras_tuner/engine/hypermodel.py",
"repo_id": "keras-tuner",
"token_count": 2296
} | 171 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import random
from keras_tuner import utils
from keras_tuner.api_export import keras_tuner_export
from keras_tuner.engine import conditions as conditions_mod
@keras_tuner_export("keras_tuner.engine.hyperparameters.HyperParameter")
class HyperParameter:
"""Hyperparameter base class.
A `HyperParameter` instance is uniquely identified by its `name` and
`conditions` attributes. `HyperParameter`s with the same `name` but with
different `conditions` are considered as different `HyperParameter`s by
the `HyperParameters` instance.
Args:
name: A string. the name of parameter. Must be unique for each
`HyperParameter` instance in the search space.
default: The default value to return for the parameter.
conditions: A list of `Condition`s for this object to be considered
active.
"""
def __init__(self, name, default=None, conditions=None):
self.name = name
self._default = default
conditions = utils.to_list(conditions) if conditions else []
self.conditions = conditions
def get_config(self):
conditions = [conditions_mod.serialize(c) for c in self.conditions]
return {
"name": self.name,
"default": self.default,
"conditions": conditions,
}
@property
def default(self):
return self._default
@property
def values(self):
"""Return a iterable of all possible values of the hp."""
raise NotImplementedError
def random_sample(self, seed=None):
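        # Draw a uniform cumulative probability in [0.0, 1.0) and map it to
        # a concrete parameter value.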
random_state = random.Random(seed)
prob = float(random_state.random())
return self.prob_to_value(prob)
def prob_to_value(self, prob):
"""Convert cumulative probability in range [0.0, 1.0) to hp value."""
raise NotImplementedError
def value_to_prob(self, value):
"""Convert a hp value to cumulative probability in range [0.0, 1.0)."""
raise NotImplementedError
@classmethod
def from_config(cls, config):
config["conditions"] = [
conditions_mod.deserialize(c) for c in config["conditions"]
]
return cls(**config)
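# --- Illustrative sketch (not part of the library) ---
# A hypothetical minimal subclass showing the contract described above:
# concrete parameters expose `values` and implement the conversions between
# cumulative probability and parameter values. The class name and the
# probability buckets below are invented purely for illustration.
class _ExampleBooleanParameter(HyperParameter):
    @property
    def values(self):
        return (True, False)
    def prob_to_value(self, prob):
        # Map the first half of [0.0, 1.0) to True, the second half to False.
        return prob < 0.5
    def value_to_prob(self, value):
        # Return the center of the probability bucket for the given value.
        return 0.25 if value else 0.75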
| keras-tuner/keras_tuner/engine/hyperparameters/hyperparameter.py/0 | {
"file_path": "keras-tuner/keras_tuner/engine/hyperparameters/hyperparameter.py",
"repo_id": "keras-tuner",
"token_count": 985
} | 172 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import numpy as np
import pytest
from keras_tuner.backend import keras
from keras_tuner.engine import objective as obj_module
from keras_tuner.engine import tuner_utils
def test_save_best_epoch_with_single_objective(tmp_path):
objective = obj_module.create_objective("val_loss")
filepath = os.path.join(tmp_path, "saved_weights.weights.h5")
callback = tuner_utils.SaveBestEpoch(objective, filepath)
model = keras.Sequential([keras.layers.Dense(1)])
model.compile(loss="mse")
val_x = np.random.rand(10, 10)
val_y = np.random.rand(10, 10)
history = model.fit(
x=np.random.rand(10, 10),
y=np.random.rand(10, 1),
validation_data=(val_x, val_y),
epochs=10,
callbacks=[callback],
)
model.load_weights(filepath)
assert min(history.history["val_loss"]) == model.evaluate(val_x, val_y)
def test_save_best_epoch_with_multi_objective(tmp_path):
objective = obj_module.create_objective(["val_loss", "val_mae"])
filepath = os.path.join(tmp_path, "saved_weights.weights.h5")
callback = tuner_utils.SaveBestEpoch(objective, filepath)
model = keras.Sequential([keras.layers.Dense(1)])
model.compile(loss="mse", metrics=["mae"])
val_x = np.random.rand(10, 10)
val_y = np.random.rand(10, 10)
history = model.fit(
x=np.random.rand(10, 10),
y=np.random.rand(10, 1),
validation_data=(val_x, val_y),
epochs=10,
callbacks=[callback],
)
model.load_weights(filepath)
assert min(history.history["val_loss"]) + min(
history.history["val_mae"]
) == sum(model.evaluate(val_x, val_y))
def test_save_best_epoch_with_default_objective(tmp_path):
objective = obj_module.create_objective(None)
filepath = os.path.join(tmp_path, "saved_weights.weights.h5")
callback = tuner_utils.SaveBestEpoch(objective, filepath)
model = keras.Sequential([keras.layers.Dense(1)])
model.compile(loss="mse")
val_x = np.random.rand(10, 10)
val_y = np.random.rand(10, 10)
history = model.fit(
x=np.random.rand(10, 10),
y=np.random.rand(10, 1),
validation_data=(val_x, val_y),
epochs=10,
callbacks=[callback],
)
model.load_weights(filepath)
assert history.history["val_loss"][-1] == model.evaluate(val_x, val_y)
def test_convert_to_metrics_with_history():
model = keras.Sequential([keras.layers.Dense(1)])
model.compile(loss="mse", metrics=["mae"])
val_x = np.random.rand(10, 10)
val_y = np.random.rand(10, 10)
history = model.fit(
x=np.random.rand(10, 10),
y=np.random.rand(10, 1),
validation_data=(val_x, val_y),
)
results = tuner_utils.convert_to_metrics_dict(
history,
obj_module.Objective("val_loss", "min"),
)
assert all(key in results for key in ["loss", "val_loss", "mae", "val_mae"])
def test_convert_to_metrics_with_float():
assert tuner_utils.convert_to_metrics_dict(
0.1,
obj_module.Objective("val_loss", "min"),
) == {"val_loss": 0.1}
def test_convert_to_metrics_with_dict():
assert tuner_utils.convert_to_metrics_dict(
{"loss": 0.2, "val_loss": 0.1},
obj_module.Objective("val_loss", "min"),
) == {"loss": 0.2, "val_loss": 0.1}
def test_convert_to_metrics_with_list_of_floats():
assert tuner_utils.convert_to_metrics_dict(
[0.1, 0.2],
obj_module.Objective("val_loss", "min"),
) == {"val_loss": (0.1 + 0.2) / 2}
def test_convert_to_metrics_with_dict_without_obj_key():
with pytest.raises(ValueError, match="the specified objective"):
tuner_utils.validate_trial_results(
{"loss": 0.1}, obj_module.Objective("val_loss", "min"), "func_name"
)
def test_get_best_step_return_zero():
assert (
tuner_utils.get_best_step(
[{"val_loss": 1}, {"val_loss": 2}],
obj_module.Objective("val_loss", "min"),
)
== 0
)
def test_get_best_step_return_average_epoch():
class History(keras.callbacks.History):
def __init__(self, history):
self.history = history
results = [
History(
{
"val_loss": [5, 8, 3, 1, 2],
"val_accuracy": [5, 8, 3, 1, 2],
}
),
History(
{
"val_loss": [5, 8, 3, 2, 1],
"val_accuracy": [5, 8, 3, 1, 2],
}
),
]
assert (
tuner_utils.get_best_step(
results,
obj_module.Objective("val_loss", "min"),
)
== 3
)
| keras-tuner/keras_tuner/engine/tuner_utils_test.py/0 | {
"file_path": "keras-tuner/keras_tuner/engine/tuner_utils_test.py",
"repo_id": "keras-tuner",
"token_count": 2353
} | 173 |
# Copyright 2019 The KerasTuner Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tuner for Scikit-learn Models."""
import collections
import inspect
import os
import pickle
import numpy as np
try:
import pandas as pd # pytype: disable=import-error
except ImportError: # pragma: no cover
pd = None # pragma: no cover
try:
import sklearn # pytype: disable=import-error
import sklearn.model_selection
import sklearn.pipeline
except ImportError: # pragma: no cover
sklearn = None # pragma: no cover
from keras_tuner import backend
from keras_tuner.api_export import keras_tuner_export
from keras_tuner.engine import base_tuner
def split_data(data, indices):
if isinstance(data, np.ndarray):
return data[indices]
elif pd is not None and isinstance(data, pd.DataFrame):
return data.iloc[indices]
else:
raise TypeError(
"Expected the data to be numpy.ndarray or pandas.DataFrame. "
f"Received: {data}."
)
@keras_tuner_export(
["keras_tuner.SklearnTuner", "keras_tuner.tuners.SklearnTuner"]
)
class SklearnTuner(base_tuner.BaseTuner):
"""Tuner for Scikit-learn Models.
Performs cross-validated hyperparameter search for Scikit-learn models.
Examples:
```python
import keras_tuner
from sklearn import ensemble
from sklearn import datasets
from sklearn import linear_model
from sklearn import metrics
from sklearn import model_selection
def build_model(hp):
model_type = hp.Choice('model_type', ['random_forest', 'ridge'])
if model_type == 'random_forest':
model = ensemble.RandomForestClassifier(
n_estimators=hp.Int('n_estimators', 10, 50, step=10),
max_depth=hp.Int('max_depth', 3, 10))
else:
model = linear_model.RidgeClassifier(
alpha=hp.Float('alpha', 1e-3, 1, sampling='log'))
return model
tuner = keras_tuner.tuners.SklearnTuner(
oracle=keras_tuner.oracles.BayesianOptimizationOracle(
objective=keras_tuner.Objective('score', 'max'),
max_trials=10),
hypermodel=build_model,
scoring=metrics.make_scorer(metrics.accuracy_score),
cv=model_selection.StratifiedKFold(5),
directory='.',
project_name='my_project')
X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = model_selection.train_test_split(
X, y, test_size=0.2)
tuner.search(X_train, y_train)
best_model = tuner.get_best_models(num_models=1)[0]
```
Args:
oracle: A `keras_tuner.Oracle` instance. Note that for this `Tuner`,
the `objective` for the `Oracle` should always be set to
`Objective('score', direction='max')`. Also, `Oracle`s that exploit
Neural-Network-specific training (e.g. `Hyperband`) should not be
used with this `Tuner`.
hypermodel: A `HyperModel` instance (or callable that takes
hyperparameters and returns a Model instance).
scoring: An sklearn `scoring` function. For more information, see
`sklearn.metrics.make_scorer`. If not provided, the Model's default
scoring will be used via `model.score`. Note that if you are
searching across different Model families, the default scoring for
these Models will often be different. In this case you should
supply `scoring` here in order to make sure your Models are being
scored on the same metric.
metrics: Additional `sklearn.metrics` functions to monitor during
search. Note that these metrics do not affect the search process.
cv: An `sklearn.model_selection` Splitter class. Used to
determine how samples are split up into groups for
cross-validation.
**kwargs: Keyword arguments relevant to all `Tuner` subclasses. Please
see the docstring for `Tuner`.
"""
def __init__(
self, oracle, hypermodel, scoring=None, metrics=None, cv=None, **kwargs
):
super().__init__(oracle=oracle, hypermodel=hypermodel, **kwargs)
if sklearn is None:
raise ImportError(
"Please install sklearn before using the `SklearnTuner`."
)
self.scoring = scoring
if metrics is None:
metrics = []
if not isinstance(metrics, (list, tuple)):
metrics = [metrics]
self.metrics = metrics
self.cv = cv or sklearn.model_selection.KFold(
5, shuffle=True, random_state=1
)
def search(self, X, y, sample_weight=None, groups=None):
"""Performs hyperparameter search.
Args:
X: See docstring for `model.fit` for the `sklearn` Models being
tuned.
y: See docstring for `model.fit` for the `sklearn` Models being
tuned.
sample_weight: Optional. See docstring for `model.fit` for the
`sklearn` Models being tuned.
groups: Optional. Required for `sklearn.model_selection` Splitter
classes that split based on group labels (For example, see
`sklearn.model_selection.GroupKFold`).
"""
# Only overridden for the docstring.
return super().search(X, y, sample_weight=sample_weight, groups=groups)
def run_trial(self, trial, X, y, sample_weight=None, groups=None):
metrics = collections.defaultdict(list)
# For cross-validation methods that expect a `groups` argument.
cv_kwargs = {"groups": groups} if groups is not None else {}
for train_indices, test_indices in self.cv.split(X, y, **cv_kwargs):
X_train = split_data(X, train_indices)
y_train = split_data(y, train_indices)
X_test = split_data(X, test_indices)
y_test = split_data(y, test_indices)
sample_weight_train = (
sample_weight[train_indices]
if sample_weight is not None
else None
)
model = self.hypermodel.build(trial.hyperparameters)
supports_sw = (
"sample_weight" in inspect.getfullargspec(model.fit).args
)
if isinstance(model, sklearn.pipeline.Pipeline) or not supports_sw:
model.fit(X_train, y_train)
else:
model.fit(X_train, y_train, sample_weight=sample_weight_train)
sample_weight_test = (
sample_weight[test_indices]
if sample_weight is not None
else None
)
if self.scoring is None:
score = model.score(
X_test, y_test, sample_weight=sample_weight_test
)
else:
score = self.scoring(
model, X_test, y_test, sample_weight=sample_weight_test
)
metrics["score"].append(score)
if self.metrics:
y_test_pred = model.predict(X_test)
for metric in self.metrics:
result = metric(
y_test, y_test_pred, sample_weight=sample_weight_test
)
metrics[metric.__name__].append(result)
self.save_model(trial.trial_id, model)
return {name: np.mean(values) for name, values in metrics.items()}
def save_model(self, trial_id, model, step=0):
fname = os.path.join(self.get_trial_dir(trial_id), "model.pickle")
with backend.io.File(fname, "wb") as f:
pickle.dump(model, f)
def load_model(self, trial):
fname = os.path.join(self.get_trial_dir(trial.trial_id), "model.pickle")
with backend.io.File(fname, "rb") as f:
return pickle.load(f)
| keras-tuner/keras_tuner/tuners/sklearn_tuner.py/0 | {
"file_path": "keras-tuner/keras_tuner/tuners/sklearn_tuner.py",
"repo_id": "keras-tuner",
"token_count": 3640
} | 174 |
#!/bin/bash
guides=(
https://raw.githubusercontent.com/keras-team/keras-io/master/guides/keras_tuner/getting_started.py
https://raw.githubusercontent.com/keras-team/keras-io/master/guides/keras_tuner/distributed_tuning.py
https://raw.githubusercontent.com/keras-team/keras-io/master/guides/keras_tuner/custom_tuner.py
https://raw.githubusercontent.com/keras-team/keras-io/master/guides/keras_tuner/visualize_tuning.py
https://raw.githubusercontent.com/keras-team/keras-io/master/guides/keras_tuner/tailor_the_search_space.py
)
for guide in ${guides[@]}; do
wget $guide -O /tmp/a.py
if ! python /tmp/a.py; then
echo "error occured!"
exit 1
fi
done | keras-tuner/shell/run_guides.sh/0 | {
"file_path": "keras-tuner/shell/run_guides.sh",
"repo_id": "keras-tuner",
"token_count": 291
} | 175 |
"""Benchmark pooling layers.
To run benchmarks, see the following command for an example, and change the
flags to your custom values:
```
python3 -m benchmarks.layer_benchmark.pooling_benchmark \
--benchmark_name=benchmark_max_pooling1d \
--num_samples=2048 \
--batch_size=256 \
--jit_compile=True
```
"""
from absl import app
from absl import flags
from benchmarks.layer_benchmark.base_benchmark import LayerBenchmark
FLAGS = flags.FLAGS
def benchmark_average_pooling1d(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "AveragePooling1D"
init_args = {
"pool_size": 2,
}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[1024, 256],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_average_pooling2d(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "AveragePooling2D"
init_args = {
"pool_size": 2,
}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[256, 256, 3],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_average_pooling3d(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "AveragePooling3D"
init_args = {
"pool_size": 2,
}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[64, 64, 32, 3],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_max_pooling1d(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "MaxPooling1D"
init_args = {
"pool_size": 2,
}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[1024, 256],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_max_pooling2d(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "MaxPooling2D"
init_args = {
"pool_size": 2,
}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[256, 256, 3],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_max_pooling3d(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "MaxPooling3D"
init_args = {
"pool_size": 2,
}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[64, 64, 32, 3],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_global_average_pooling1d(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "GlobalAveragePooling1D"
init_args = {}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[1024, 256],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_global_average_pooling2d(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "GlobalAveragePooling2D"
init_args = {}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[256, 256, 3],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_global_average_pooling3d(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "GlobalAveragePooling3D"
init_args = {}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[64, 64, 32, 3],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_global_max_pooling1d(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "GlobalMaxPooling1D"
init_args = {}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[1024, 256],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_global_max_pooling2d(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "GlobalMaxPooling2D"
init_args = {}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[256, 256, 3],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
def benchmark_global_max_pooling3d(
num_samples,
batch_size,
jit_compile=True,
):
layer_name = "GlobalMaxPooling3D"
init_args = {}
benchmark = LayerBenchmark(
layer_name,
init_args,
input_shape=[64, 64, 32, 3],
jit_compile=jit_compile,
)
benchmark.benchmark_predict(
num_samples=num_samples,
batch_size=batch_size,
)
benchmark.benchmark_train(
num_samples=num_samples,
batch_size=batch_size,
)
BENCHMARK_NAMES = {
"benchmark_average_pooling1d": benchmark_average_pooling1d,
"benchmark_average_pooling2d": benchmark_average_pooling2d,
"benchmark_average_pooling3d": benchmark_average_pooling3d,
"benchmark_max_pooling1d": benchmark_max_pooling1d,
"benchmark_max_pooling2d": benchmark_max_pooling2d,
"benchmark_max_pooling3d": benchmark_max_pooling3d,
"benchmark_global_average_pooling1d": benchmark_global_average_pooling1d,
"benchmark_global_average_pooling2d": benchmark_global_average_pooling2d,
"benchmark_global_average_pooling3d": benchmark_global_average_pooling3d,
"benchmark_global_max_pooling1d": benchmark_global_max_pooling1d,
"benchmark_global_max_pooling2d": benchmark_global_max_pooling2d,
"benchmark_global_max_pooling3d": benchmark_global_max_pooling3d,
}
def main(_):
benchmark_name = FLAGS.benchmark_name
num_samples = FLAGS.num_samples
batch_size = FLAGS.batch_size
jit_compile = FLAGS.jit_compile
if benchmark_name is None:
for name, benchmark_fn in BENCHMARK_NAMES.items():
benchmark_fn(num_samples, batch_size, jit_compile)
return
if benchmark_name not in BENCHMARK_NAMES:
raise ValueError(
f"Invalid benchmark name: {benchmark_name}, `benchmark_name` must "
f"be one of {BENCHMARK_NAMES.keys()}"
)
benchmark_fn = BENCHMARK_NAMES[benchmark_name]
benchmark_fn(num_samples, batch_size, jit_compile)
if __name__ == "__main__":
app.run(main)
| keras/benchmarks/layer_benchmark/pooling_benchmark.py/0 | {
"file_path": "keras/benchmarks/layer_benchmark/pooling_benchmark.py",
"repo_id": "keras",
"token_count": 3861
} | 176 |
import numpy as np
import keras
from keras import Model
from keras import initializers
from keras import layers
from keras import losses
from keras import metrics
from keras import ops
from keras import optimizers
class MyDense(layers.Layer):
def __init__(self, units, name=None):
super().__init__(name=name)
self.units = units
def build(self, input_shape):
input_dim = input_shape[-1]
self.w = self.add_weight(
shape=(input_dim, self.units),
initializer=initializers.GlorotNormal(),
name="kernel",
trainable=True,
)
self.b = self.add_weight(
shape=(self.units,),
initializer=initializers.Zeros(),
name="bias",
trainable=True,
)
def call(self, inputs):
# Use Keras ops to create backend-agnostic layers/metrics/etc.
return ops.matmul(inputs, self.w) + self.b
class MyDropout(layers.Layer):
def __init__(self, rate, name=None):
super().__init__(name=name)
self.rate = rate
# Use seed_generator for managing RNG state.
# It is a state element and its seed variable is
# tracked as part of `layer.variables`.
self.seed_generator = keras.random.SeedGenerator(1337)
def call(self, inputs):
# Use `keras.random` for random ops.
return keras.random.dropout(
inputs, self.rate, seed=self.seed_generator
)
class MyModel(Model):
def __init__(self, hidden_dim, output_dim):
super().__init__()
self.dense1 = MyDense(hidden_dim)
self.dense2 = MyDense(hidden_dim)
self.dense3 = MyDense(output_dim)
self.dp = MyDropout(0.5)
def call(self, x):
x1 = self.dense1(x)
x2 = self.dense2(x)
# Why not use some ops here as well
x = ops.concatenate([x1, x2], axis=-1)
x = self.dp(x)
return self.dense3(x)
model = MyModel(hidden_dim=256, output_dim=16)
x = np.random.random((50000, 128))
y = np.random.random((50000, 16))
batch_size = 32
epochs = 5
model.compile(
optimizer=optimizers.SGD(learning_rate=0.001),
loss=losses.MeanSquaredError(),
metrics=[metrics.MeanSquaredError()],
)
history = model.fit(x, y, batch_size=batch_size, epochs=epochs)
model.summary()
print("History:")
print(history.history)
| keras/examples/demo_custom_layer_backend_agnostic.py/0 | {
"file_path": "keras/examples/demo_custom_layer_backend_agnostic.py",
"repo_id": "keras",
"token_count": 1065
} | 177 |
from keras import backend
from keras import ops
from keras.api_export import keras_export
@keras_export("keras.activations.relu")
def relu(x, negative_slope=0.0, max_value=None, threshold=0.0):
"""Applies the rectified linear unit activation function.
With default values, this returns the standard ReLU activation:
`max(x, 0)`, the element-wise maximum of 0 and the input tensor.
Modifying default parameters allows you to use non-zero thresholds,
change the max value of the activation,
and to use a non-zero multiple of the input for values below the threshold.
Examples:
>>> x = [-10, -5, 0.0, 5, 10]
>>> keras.activations.relu(x)
[ 0., 0., 0., 5., 10.]
>>> keras.activations.relu(x, negative_slope=0.5)
[-5. , -2.5, 0. , 5. , 10. ]
>>> keras.activations.relu(x, max_value=5.)
[0., 0., 0., 5., 5.]
>>> keras.activations.relu(x, threshold=5.)
[-0., -0., 0., 0., 10.]
Args:
x: Input tensor.
negative_slope: A `float` that controls the slope
for values lower than the threshold.
max_value: A `float` that sets the saturation threshold (the largest
value the function will return).
threshold: A `float` giving the threshold value of the activation
function below which values will be damped or set to zero.
Returns:
A tensor with the same shape and dtype as input `x`.
"""
if backend.any_symbolic_tensors((x,)):
return ReLU(
negative_slope=negative_slope,
max_value=max_value,
threshold=threshold,
)(x)
return ReLU.static_call(
x,
negative_slope=negative_slope,
max_value=max_value,
threshold=threshold,
)
class ReLU(ops.Operation):
def __init__(
self, negative_slope=0.0, max_value=None, threshold=0.0, name=None
):
super().__init__(name=name)
self.negative_slope = negative_slope
self.max_value = max_value
self.threshold = threshold
def call(self, x):
return self.static_call(
x,
negative_slope=self.negative_slope,
max_value=self.max_value,
threshold=self.threshold,
)
def compute_output_spec(self, x):
return backend.KerasTensor(x.shape, x.dtype)
@staticmethod
def static_call(x, negative_slope=0.0, max_value=None, threshold=0.0):
x = backend.convert_to_tensor(x)
if negative_slope != 0.0:
if max_value is None and threshold == 0:
return backend.nn.leaky_relu(x, negative_slope=negative_slope)
if threshold != 0:
negative_part = backend.nn.relu(-x + threshold)
else:
negative_part = backend.nn.relu(-x)
clip_max = max_value is not None
if threshold != 0:
# computes x for x > threshold else 0
threshold = ops.cast(threshold, dtype=x.dtype)
x = x * backend.cast(
backend.numpy.greater(x, threshold), dtype=x.dtype
)
elif max_value == 6:
# if no threshold, then can use nn.relu6 native op for performance
x = backend.nn.relu6(x)
clip_max = False
else:
x = backend.nn.relu(x)
if clip_max:
min_value = ops.cast(0.0, dtype=x.dtype)
max_value = ops.cast(max_value, dtype=x.dtype)
x = backend.numpy.clip(x, min_value, max_value)
if negative_slope != 0.0:
x -= negative_slope * negative_part
return x
@keras_export("keras.activations.leaky_relu")
def leaky_relu(x, negative_slope=0.2):
"""Leaky relu activation function.
Args:
x: Input tensor.
negative_slope: A `float` that controls the slope
for values lower than the threshold.
"""
return ops.leaky_relu(x, negative_slope=negative_slope)
@keras_export("keras.activations.relu6")
def relu6(x):
"""Relu6 activation function.
It's the ReLU function, but truncated to a maximum value of 6.
Args:
x: Input tensor.
"""
return ops.relu6(x)
@keras_export("keras.activations.softmax")
def softmax(x, axis=-1):
"""Softmax converts a vector of values to a probability distribution.
The elements of the output vector are in range `[0, 1]` and sum to 1.
Each input vector is handled independently.
The `axis` argument sets which axis of the input the function
is applied along.
Softmax is often used as the activation for the last
layer of a classification network because the result could be interpreted as
a probability distribution.
The softmax of each vector x is computed as
`exp(x) / sum(exp(x))`.
    The input values are the log-odds of the resulting probability.
Args:
x: Input tensor.
axis: Integer, axis along which the softmax is applied.
"""
output = ops.softmax(x, axis=axis)
# Cache the logits to use for crossentropy loss.
try:
output._keras_logits = x
except AttributeError:
# We're dealing with a C-type.
pass
return output
@keras_export("keras.activations.elu")
def elu(x, alpha=1.0):
"""Exponential Linear Unit.
    The exponential linear unit (ELU) with `alpha > 0` is defined as:
    - `x` if `x > 0`
    - `alpha * (exp(x) - 1)` if `x < 0`
ELUs have negative values which pushes the mean of the activations
closer to zero.
Mean activations that are closer to zero enable faster learning as they
bring the gradient closer to the natural gradient.
ELUs saturate to a negative value when the argument gets smaller.
Saturation means a small derivative which decreases the variation
and the information that is propagated to the next layer.
Args:
x: Input tensor.
Reference:
- [Clevert et al., 2016](https://arxiv.org/abs/1511.07289)
"""
return ops.elu(x, alpha=alpha)
@keras_export("keras.activations.selu")
def selu(x):
"""Scaled Exponential Linear Unit (SELU).
The Scaled Exponential Linear Unit (SELU) activation function is defined as:
- `scale * x` if `x > 0`
- `scale * alpha * (exp(x) - 1)` if `x < 0`
where `alpha` and `scale` are pre-defined constants
(`alpha=1.67326324` and `scale=1.05070098`).
Basically, the SELU activation function multiplies `scale` (> 1) with the
output of the `keras.activations.elu` function to ensure a slope larger
than one for positive inputs.
The values of `alpha` and `scale` are
chosen so that the mean and variance of the inputs are preserved
between two consecutive layers as long as the weights are initialized
correctly (see `keras.initializers.LecunNormal` initializer)
and the number of input units is "large enough"
(see reference paper for more information).
Args:
x: Input tensor.
Notes:
- To be used together with the
`keras.initializers.LecunNormal` initializer.
- To be used together with the dropout variant
`keras.layers.AlphaDropout` (rather than regular dropout).
Reference:
- [Klambauer et al., 2017](https://arxiv.org/abs/1706.02515)
"""
return ops.selu(x)
@keras_export("keras.activations.softplus")
def softplus(x):
"""Softplus activation function.
It is defined as: `softplus(x) = log(exp(x) + 1)`.
Args:
x: Input tensor.
"""
return ops.softplus(x)
@keras_export("keras.activations.softsign")
def softsign(x):
"""Softsign activation function.
Softsign is defined as: `softsign(x) = x / (abs(x) + 1)`.
Args:
x: Input tensor.
"""
return ops.softsign(x)
@keras_export(["keras.activations.silu", "keras.activations.swish"])
def silu(x):
"""Swish (or Silu) activation function.
It is defined as: `swish(x) = x * sigmoid(x)`.
The Swish (or Silu) activation function is a smooth,
non-monotonic function that is unbounded above and
bounded below.
Args:
x: Input tensor.
Reference:
- [Ramachandran et al., 2017](https://arxiv.org/abs/1710.05941)
"""
return ops.silu(x)
@keras_export("keras.activations.gelu")
def gelu(x, approximate=False):
"""Gaussian error linear unit (GELU) activation function.
The Gaussian error linear unit (GELU) is defined as:
`gelu(x) = x * P(X <= x)` where `P(X) ~ N(0, 1)`,
i.e. `gelu(x) = 0.5 * x * (1 + erf(x / sqrt(2)))`.
GELU weights inputs by their value, rather than gating
inputs by their sign as in ReLU.
Args:
x: Input tensor.
approximate: A `bool`, whether to enable approximation.
Reference:
- [Hendrycks et al., 2016](https://arxiv.org/abs/1606.08415)
"""
return ops.gelu(x, approximate=approximate)
@keras_export("keras.activations.tanh")
def tanh(x):
"""Hyperbolic tangent activation function.
It is defined as:
`tanh(x) = sinh(x) / cosh(x)`, i.e.
`tanh(x) = ((exp(x) - exp(-x)) / (exp(x) + exp(-x)))`.
Args:
x: Input tensor.
"""
return ops.tanh(x)
@keras_export("keras.activations.sigmoid")
def sigmoid(x):
"""Sigmoid activation function.
It is defined as: `sigmoid(x) = 1 / (1 + exp(-x))`.
For small values (<-5),
`sigmoid` returns a value close to zero, and for large values (>5)
the result of the function gets close to 1.
Sigmoid is equivalent to a 2-element softmax, where the second element is
assumed to be zero. The sigmoid function always returns a value between
0 and 1.
Args:
x: Input tensor.
"""
output = ops.sigmoid(x)
# Cache the logits to use for crossentropy loss.
try:
output._keras_logits = x
except AttributeError:
# We're dealing with a C-type.
pass
return output
@keras_export("keras.activations.exponential")
def exponential(x):
"""Exponential activation function.
Args:
x: Input tensor.
"""
return ops.exp(x)
@keras_export("keras.activations.hard_sigmoid")
def hard_sigmoid(x):
"""Hard sigmoid activation function.
The hard sigmoid activation is defined as:
    - `0` if `x < -2.5`
- `1` if `x > 2.5`
- `0.2 * x + 0.5` if `-2.5 <= x <= 2.5`
It's a faster, piecewise linear approximation
of the sigmoid activation.
Args:
x: Input tensor.
Reference:
- [Wikipedia "Hard sigmoid"](https://en.wikipedia.org/wiki/Hard_sigmoid)
"""
return ops.hard_sigmoid(x)
@keras_export(["keras.activations.hard_silu", "keras.activations.hard_swish"])
def hard_silu(x):
"""Hard SiLU activation function, also known as Hard Swish.
It is defined as:
    - `0` if `x < -3`
- `x` if `x > 3`
- `x * (x + 3) / 6` if `-3 <= x <= 3`
It's a faster, piecewise linear approximation of the silu activation.
Args:
x: Input tensor.
Reference:
- [A Howard, 2019](https://arxiv.org/abs/1905.02244)
"""
x = backend.convert_to_tensor(x)
return ops.hard_silu(x)
@keras_export("keras.activations.linear")
def linear(x):
"""Linear activation function (pass-through).
A "linear" activation is an identity function:
it returns the input, unmodified.
Args:
x: Input tensor.
"""
return x
class Mish(ops.Operation):
def call(self, x):
return self.static_call(x)
def compute_output_spec(self, x):
return backend.KerasTensor(x.shape, x.dtype)
@staticmethod
def static_call(x):
return x * backend.nn.tanh(backend.nn.softplus(x))
@keras_export("keras.activations.mish")
def mish(x):
"""Mish activation function.
It is defined as:
`mish(x) = x * tanh(softplus(x))`
where `softplus` is defined as:
`softplus(x) = log(exp(x) + 1)`
Args:
x: Input tensor.
Reference:
- [Misra, 2019](https://arxiv.org/abs/1908.08681)
"""
x = backend.convert_to_tensor(x)
return Mish.static_call(x)
@keras_export("keras.activations.log_softmax")
def log_softmax(x, axis=-1):
"""Log-Softmax activation function.
Each input vector is handled independently.
The `axis` argument sets which axis of the input the function
is applied along.
Args:
x: Input tensor.
axis: Integer, axis along which the softmax is applied.
"""
return ops.log_softmax(x, axis=axis)
| keras/keras/activations/activations.py/0 | {
"file_path": "keras/keras/activations/activations.py",
"repo_id": "keras",
"token_count": 5125
} | 178 |
from unittest.mock import Mock
from unittest.mock import patch
import numpy as np
import tensorflow as tf
from keras import backend
from keras import ops
from keras import testing
from keras.backend.common import keras_tensor
class KerasTensorTest(testing.TestCase):
def test_attributes(self):
x = keras_tensor.KerasTensor(shape=(3,), dtype="float32", sparse=True)
self.assertEqual(x.dtype, "float32")
self.assertEqual(x.shape, (3,))
self.assertEqual(x.sparse, True)
def test_numpy_methods(self):
x = keras_tensor.KerasTensor(shape=(3, 2), dtype="float32")
# reshape
x = x.reshape((6,))
self.assertEqual(x.shape, (6,))
# expand_dims, squeeze
x = ops.expand_dims(x, -1)
self.assertEqual(x.shape, (6, 1))
x = x.squeeze()
self.assertEqual(x.shape, (6,))
x = ops.expand_dims(x, axis=0)
self.assertEqual(x.shape, (1, 6))
x = x.squeeze(axis=0)
self.assertEqual(x.shape, (6,))
def test_invalid_usage(self):
x = keras_tensor.KerasTensor(shape=(3,), dtype="float32")
with self.assertRaisesRegex(
ValueError, "doesn't have any actual numerical value"
):
np.array(x)
if backend.backend() == "jax":
from jax import numpy as jnp
with self.assertRaisesRegex(
ValueError, "cannot be used as input to a JAX function"
):
jnp.array(x)
with self.assertRaisesRegex(
ValueError, "cannot be used as input to a TensorFlow function"
):
tf.convert_to_tensor(x)
def test_bool(self):
tensor = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
with self.assertRaisesRegex(TypeError, "cannot be used as a boolean."):
bool(tensor)
def test_representation(self):
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
self.assertIn("<KerasTensor shape=(3, 4)", repr(x))
def test_iterating(self):
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
with self.assertRaises(NotImplementedError):
iter(x)
def test_any_symbolic_tensors(self):
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = np.array([1, 2, 3])
self.assertTrue(keras_tensor.any_symbolic_tensors(args=[x, y]))
self.assertFalse(keras_tensor.any_symbolic_tensors(args=[y]))
def test_is_keras_tensor(self):
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
self.assertTrue(keras_tensor.is_keras_tensor(x))
y = np.array([1, 2, 3])
self.assertFalse(keras_tensor.is_keras_tensor(y))
@patch("keras.ops.Absolute.symbolic_call")
def test_abs_method(self, mock_symbolic_call):
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
abs_x = abs(x) # this will internally call x.__abs__()
mock_symbolic_call.assert_called_once_with(x)
self.assertEqual(abs_x, mock_tensor)
@patch("keras.ops.Negative.symbolic_call")
def test_neg_method(self, mock_method):
self._test_unary_op_method(mock_method, lambda x: -x)
@patch("keras.ops.Subtract.symbolic_call")
def test_sub_method(self, mock_method):
y = Mock()
self._test_binary_op_method(mock_method, y, lambda x, y: x - y)
@patch("keras.ops.Multiply.symbolic_call")
def test_mul_method(self, mock_method):
y = Mock()
self._test_binary_op_method(mock_method, y, lambda x, y: x * y)
@patch("keras.ops.Matmul.symbolic_call")
def test_matmul_method(self, mock_method):
y = Mock()
self._test_binary_op_method(mock_method, y, lambda x, y: x @ y)
@patch("keras.ops.Power.symbolic_call")
def test_pow_method(self, mock_method):
y = Mock()
self._test_binary_op_method(mock_method, y, lambda x, y: x**y)
@patch("keras.ops.Mod.symbolic_call")
def test_mod_method(self, mock_method):
y = Mock()
self._test_binary_op_method(mock_method, y, lambda x, y: x % y)
@patch("keras.ops.Less.symbolic_call")
def test_lt_method(self, mock_method):
y = Mock()
self._test_binary_op_method(mock_method, y, lambda x, y: x < y)
@patch("keras.ops.LogicalAnd.symbolic_call")
def test_and_method(self, mock_method):
y = Mock()
self._test_binary_op_method(mock_method, y, lambda x, y: x & y)
@patch("keras.ops.LogicalOr.symbolic_call")
def test_or_method(self, mock_method):
y = Mock()
self._test_binary_op_method(mock_method, y, lambda x, y: x | y)
@patch("keras.ops.GetItem.symbolic_call")
def test_getitem_method(self, mock_method):
y = Mock()
self._test_binary_op_method(mock_method, y, lambda x, y: x[y])
def _test_unary_op_method(self, mock_method, operator):
mock_tensor = Mock()
mock_method.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
result = operator(x)
mock_method.assert_called_once_with(x)
self.assertEqual(result, mock_tensor)
def _test_binary_op_method(self, mock_method, other, operator):
mock_tensor = Mock()
mock_method.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
result = operator(x, other)
mock_method.assert_called_once_with(x, other)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.Add.symbolic_call")
def test_radd_method(self, mock_symbolic_call):
"""Test __radd__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = Mock()
result = y + x
mock_symbolic_call.assert_called_once_with(y, x)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.Subtract.symbolic_call")
def test_rsub_method(self, mock_symbolic_call):
"""Test __rsub__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = Mock()
result = y - x
mock_symbolic_call.assert_called_once_with(y, x)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.Multiply.symbolic_call")
def test_rmul_method(self, mock_symbolic_call):
"""Test __rmul__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = Mock()
result = y * x
mock_symbolic_call.assert_called_once_with(y, x)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.Matmul.symbolic_call")
def test_rmatmul_method(self, mock_symbolic_call):
"""Test __rmatmul__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = Mock()
result = y @ x
mock_symbolic_call.assert_called_once_with(y, x)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.Power.symbolic_call")
def test_rpow_method(self, mock_symbolic_call):
"""Test __rpow__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = Mock()
result = y**x
mock_symbolic_call.assert_called_once_with(y, x)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.FloorDivide.symbolic_call")
def test_floordiv_method(self, mock_symbolic_call):
"""Test __floordiv__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = Mock()
result = x // y
mock_symbolic_call.assert_called_once_with(x, y)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.FloorDivide.symbolic_call")
def test_rfloordiv_method(self, mock_symbolic_call):
"""Test __rfloordiv__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = Mock()
result = y // x
mock_symbolic_call.assert_called_once_with(y, x)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.Mod.symbolic_call")
def test_rmod_method(self, mock_symbolic_call):
"""Test __rmod__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = Mock()
result = y % x
mock_symbolic_call.assert_called_once_with(y, x)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.LessEqual.symbolic_call")
def test_le_method(self, mock_symbolic_call):
"""Test __le__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = Mock()
result = x <= y
mock_symbolic_call.assert_called_once_with(x, y)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.Greater.symbolic_call")
def test_gt_method(self, mock_symbolic_call):
"""Test __gt__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = Mock()
result = x > y
mock_symbolic_call.assert_called_once_with(x, y)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.GreaterEqual.symbolic_call")
def test_ge_method(self, mock_symbolic_call):
"""Test __ge__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = Mock()
result = x >= y
mock_symbolic_call.assert_called_once_with(x, y)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.NotEqual.symbolic_call")
def test_ne_method(self, mock_symbolic_call):
"""Test __ne__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = Mock()
result = x != y
mock_symbolic_call.assert_called_once_with(x, y)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.LogicalAnd.symbolic_call")
def test_rand_method(self, mock_symbolic_call):
"""Test __rand__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="bool")
y = Mock()
result = y & x
mock_symbolic_call.assert_called_once_with(y, x)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.LogicalOr.symbolic_call")
def test_ror_method(self, mock_symbolic_call):
"""Test __ror__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="bool")
y = Mock()
result = y | x
mock_symbolic_call.assert_called_once_with(y, x)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.LogicalNot.symbolic_call")
def test_invert_method(self, mock_symbolic_call):
"""Test __invert__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="bool")
result = ~x
mock_symbolic_call.assert_called_once_with(x)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.LogicalXor.symbolic_call")
def test_xor_method(self, mock_symbolic_call):
"""Test __xor__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="bool")
y = Mock()
result = x ^ y
mock_symbolic_call.assert_called_once_with(x, y)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.LogicalXor.symbolic_call")
def test_rxor_method(self, mock_symbolic_call):
"""Test __rxor__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="bool")
y = Mock()
result = y ^ x
mock_symbolic_call.assert_called_once_with(y, x)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.TrueDivide.symbolic_call")
def test_truediv_method(self, mock_symbolic_call):
"""Test __truediv__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = Mock()
result = x / y
mock_symbolic_call.assert_called_once_with(x, y)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.TrueDivide.symbolic_call")
def test_rtruediv_method(self, mock_symbolic_call):
"""Test __rtruediv__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = Mock()
result = y / x
mock_symbolic_call.assert_called_once_with(y, x)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.Divide.symbolic_call")
def test_div_method(self, mock_symbolic_call):
"""Test __div__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
# to ensure compatibility across Python versions
result = x.__div__(y)
mock_symbolic_call.assert_called_once_with(x, y)
self.assertEqual(result, mock_tensor)
@patch("keras.ops.Divide.symbolic_call")
def test_rdiv_method(self, mock_symbolic_call):
"""Test __rdiv__ method"""
mock_tensor = Mock()
mock_symbolic_call.return_value = mock_tensor
x = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
y = keras_tensor.KerasTensor(shape=(3, 4), dtype="float32")
# to ensure compatibility across Python versions
result = x.__rdiv__(y)
mock_symbolic_call.assert_called_once_with(y, x)
self.assertEqual(result, mock_tensor)
| keras/keras/backend/common/keras_tensor_test.py/0 | {
"file_path": "keras/keras/backend/common/keras_tensor_test.py",
"repo_id": "keras",
"token_count": 7067
} | 179 |
import math
import jax
import jax.numpy as jnp
from keras.backend import config
from keras.backend import standardize_dtype
from keras.backend.common import dtypes
from keras.backend.jax.core import cast
from keras.backend.jax.core import convert_to_tensor
from keras.utils.module_utils import scipy
def segment_sum(data, segment_ids, num_segments=None, sorted=False):
if num_segments is None:
raise ValueError(
"Argument `num_segments` must be set when using the JAX backend. "
"Received: num_segments=None"
)
return jax.ops.segment_sum(
data, segment_ids, num_segments, indices_are_sorted=sorted
)
def segment_max(data, segment_ids, num_segments=None, sorted=False):
if num_segments is None:
raise ValueError(
"Argument `num_segments` must be set when using the JAX backend. "
"Received: num_segments=None"
)
return jax.ops.segment_max(
data, segment_ids, num_segments, indices_are_sorted=sorted
)
def top_k(x, k, sorted=True):
    # Jax does not support `sorted`, but in the case where `sorted=False`,
# order is not guaranteed, so OK to return sorted output.
return jax.lax.top_k(x, k)
def in_top_k(targets, predictions, k):
targets = targets[..., None]
topk_values = top_k(predictions, k)[0]
targets_values = jnp.take_along_axis(predictions, targets, axis=-1)
mask = targets_values >= topk_values
return jnp.any(mask, axis=1)
def logsumexp(x, axis=None, keepdims=False):
max_x = jnp.max(x, axis=axis, keepdims=True)
result = (
jnp.log(jnp.sum(jnp.exp(x - max_x), axis=axis, keepdims=True)) + max_x
)
return jnp.squeeze(result) if not keepdims else result
def qr(x, mode="reduced"):
if mode not in {"reduced", "complete"}:
raise ValueError(
"`mode` argument value not supported. "
"Expected one of {'reduced', 'complete'}. "
f"Received: mode={mode}"
)
return jnp.linalg.qr(x, mode=mode)
def extract_sequences(x, sequence_length, sequence_stride):
*batch_shape, signal_length = x.shape
batch_shape = list(batch_shape)
x = jnp.reshape(x, (math.prod(batch_shape), signal_length, 1))
x = jax.lax.conv_general_dilated_patches(
x,
(sequence_length,),
(sequence_stride,),
"VALID",
dimension_numbers=("NTC", "OIT", "NTC"),
)
return jnp.reshape(x, (*batch_shape, *x.shape[-2:]))
def _get_complex_tensor_from_tuple(x):
if not isinstance(x, (tuple, list)) or len(x) != 2:
raise ValueError(
"Input `x` should be a tuple of two tensors - real and imaginary."
f"Received: x={x}"
)
# `convert_to_tensor` does not support passing complex tensors. We separate
# the input out into real and imaginary and convert them separately.
real, imag = x
# Check shapes.
if real.shape != imag.shape:
raise ValueError(
"Input `x` should be a tuple of two tensors - real and imaginary."
"Both the real and imaginary parts should have the same shape. "
f"Received: x[0].shape = {real.shape}, x[1].shape = {imag.shape}"
)
# Ensure dtype is float.
if not jnp.issubdtype(real.dtype, jnp.floating) or not jnp.issubdtype(
imag.dtype, jnp.floating
):
raise ValueError(
"At least one tensor in input `x` is not of type float."
f"Received: x={x}."
)
complex_input = jax.lax.complex(real, imag)
return complex_input
def fft(x):
complex_input = _get_complex_tensor_from_tuple(x)
complex_output = jnp.fft.fft(complex_input)
return jnp.real(complex_output), jnp.imag(complex_output)
def fft2(x):
complex_input = _get_complex_tensor_from_tuple(x)
complex_output = jnp.fft.fft2(complex_input)
return jnp.real(complex_output), jnp.imag(complex_output)
def rfft(x, fft_length=None):
complex_output = jnp.fft.rfft(x, n=fft_length, axis=-1, norm="backward")
return jnp.real(complex_output), jnp.imag(complex_output)
def irfft(x, fft_length=None):
complex_input = _get_complex_tensor_from_tuple(x)
return jnp.fft.irfft(complex_input, n=fft_length, axis=-1, norm="backward")
def stft(
x, sequence_length, sequence_stride, fft_length, window="hann", center=True
):
if standardize_dtype(x.dtype) not in {"float32", "float64"}:
raise TypeError(
"Invalid input type. Expected `float32` or `float64`. "
f"Received: input type={x.dtype}"
)
if fft_length < sequence_length:
raise ValueError(
"`fft_length` must equal or larger than `sequence_length`. "
f"Received: sequence_length={sequence_length}, "
f"fft_length={fft_length}"
)
if isinstance(window, str):
if window not in {"hann", "hamming"}:
raise ValueError(
"If a string is passed to `window`, it must be one of "
f'`"hann"`, `"hamming"`. Received: window={window}'
)
x = convert_to_tensor(x)
if center:
pad_width = [(0, 0) for _ in range(len(x.shape))]
pad_width[-1] = (fft_length // 2, fft_length // 2)
x = jnp.pad(x, pad_width, mode="reflect")
l_pad = (fft_length - sequence_length) // 2
r_pad = fft_length - sequence_length - l_pad
if window is not None:
if isinstance(window, str):
win = convert_to_tensor(
scipy.signal.get_window(window, sequence_length), dtype=x.dtype
)
else:
win = convert_to_tensor(window, dtype=x.dtype)
if len(win.shape) != 1 or win.shape[-1] != sequence_length:
raise ValueError(
"The shape of `window` must be equal to [sequence_length]."
f"Received: window shape={win.shape}"
)
win = jnp.pad(win, [[l_pad, r_pad]])
else:
win = jnp.ones((sequence_length + l_pad + r_pad), dtype=x.dtype)
result = jax.scipy.signal.stft(
x,
fs=1.0,
window=win,
nperseg=(sequence_length + l_pad + r_pad),
noverlap=(sequence_length + l_pad + r_pad - sequence_stride),
nfft=fft_length,
boundary=None,
padded=False,
)[-1]
# scale and swap to (..., num_sequences, fft_bins)
scale = jnp.sqrt(1.0 / win.sum() ** 2)
result = result / scale
result = jnp.swapaxes(result, -2, -1)
return jnp.real(result), jnp.imag(result)
def istft(
x,
sequence_length,
sequence_stride,
fft_length,
length=None,
window="hann",
center=True,
):
x = _get_complex_tensor_from_tuple(x)
dtype = jnp.real(x).dtype
expected_output_len = fft_length + sequence_stride * (x.shape[-2] - 1)
l_pad = (fft_length - sequence_length) // 2
r_pad = fft_length - sequence_length - l_pad
if window is not None:
if isinstance(window, str):
win = convert_to_tensor(
scipy.signal.get_window(window, sequence_length), dtype=dtype
)
else:
win = convert_to_tensor(window, dtype=dtype)
if len(win.shape) != 1 or win.shape[-1] != sequence_length:
raise ValueError(
"The shape of `window` must be equal to [sequence_length]."
f"Received: window shape={win.shape}"
)
win = jnp.pad(win, [[l_pad, r_pad]])
else:
win = jnp.ones((sequence_length + l_pad + r_pad), dtype=dtype)
x = jax.scipy.signal.istft(
x,
fs=1.0,
window=win,
nperseg=(sequence_length + l_pad + r_pad),
noverlap=(sequence_length + l_pad + r_pad - sequence_stride),
nfft=fft_length,
boundary=False,
time_axis=-2,
freq_axis=-1,
)[-1]
# scale
x = x / win.sum() if window is not None else x / sequence_stride
start = 0 if center is False else fft_length // 2
if length is not None:
end = start + length
elif center is True:
end = -(fft_length // 2)
else:
end = expected_output_len
return x[..., start:end]
def rsqrt(x):
return jax.lax.rsqrt(x)
def erf(x):
return jax.lax.erf(x)
def erfinv(x):
return jax.lax.erf_inv(x)
def solve(a, b):
a = convert_to_tensor(a)
b = convert_to_tensor(b)
return jnp.linalg.solve(a, b)
def norm(x, ord=None, axis=None, keepdims=False):
x = convert_to_tensor(x)
if standardize_dtype(x.dtype) == "int64":
dtype = config.floatx()
else:
dtype = dtypes.result_type(x.dtype, float)
x = cast(x, dtype)
return jnp.linalg.norm(x, ord=ord, axis=axis, keepdims=keepdims)
| keras/keras/backend/jax/math.py/0 | {
"file_path": "keras/keras/backend/jax/math.py",
"repo_id": "keras",
"token_count": 4046
} | 180 |
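Note that the FFT helpers above never take complex tensors directly: `fft`, `fft2`, and `istft` accept a `(real, imag)` tuple, validated by `_get_complex_tensor_from_tuple`, and return a `(real, imag)` tuple as well. A rough usage sketch of that convention (the actual backend call is shown commented out; the NumPy lines are only an equivalent reference computation):

```python
import numpy as np

real = np.random.randn(4, 8).astype("float32")
imag = np.zeros_like(real)

# With the JAX backend active, the call would look like:
#     real_out, imag_out = fft((real, imag))
# Equivalent NumPy computation, for reference:
complex_out = np.fft.fft(real + 1j * imag)
real_out, imag_out = np.real(complex_out), np.imag(complex_out)
assert real_out.shape == imag_out.shape == (4, 8)
```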
import numpy as np
from keras.backend.config import floatx
from keras.backend.numpy.nn import softmax
from keras.random.seed_generator import SeedGenerator
from keras.random.seed_generator import draw_seed
from keras.random.seed_generator import make_default_seed
def normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None):
dtype = dtype or floatx()
seed = draw_seed(seed)
rng = np.random.default_rng(seed)
return rng.normal(size=shape, loc=mean, scale=stddev).astype(dtype)
def uniform(shape, minval=0.0, maxval=1.0, dtype=None, seed=None):
dtype = dtype or floatx()
seed = draw_seed(seed)
rng = np.random.default_rng(seed)
return rng.uniform(size=shape, low=minval, high=maxval).astype(dtype)
def categorical(logits, num_samples, dtype="int64", seed=None):
seed = draw_seed(seed)
rng = np.random.default_rng(seed)
output = []
for logits_instance in logits:
probabilities = softmax(logits_instance)
classes = np.arange(logits_instance.shape[-1])
samples = rng.choice(classes, size=num_samples, p=probabilities)
output.append(samples)
return np.array(output).astype(dtype)
def randint(shape, minval, maxval, dtype="int32", seed=None):
seed = draw_seed(seed)
rng = np.random.default_rng(seed)
output = rng.integers(low=minval, high=maxval, size=shape, dtype=dtype)
return output
def truncated_normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None):
dtype = dtype or floatx()
seed = draw_seed(seed)
rng = np.random.default_rng(seed)
lower_bound = mean - 2 * stddev
upper_bound = mean + 2 * stddev
flat_shape = np.prod(shape)
random_numbers = np.empty(0)
# loop until we have enough valid numbers to fill our desired shape
while random_numbers.shape[0] < flat_shape:
# Generate a batch of random numbers from a normal distribution
batch = rng.normal(loc=mean, scale=stddev, size=flat_shape)
# Filter the numbers to keep only those within the specified bounds
valid = batch[(batch >= lower_bound) & (batch <= upper_bound)]
# Append the valid numbers to the result array
random_numbers = np.append(random_numbers, valid)
# Truncate the result array to the desired size and reshape it
return random_numbers[:flat_shape].astype(dtype).reshape(shape)
def dropout(inputs, rate, noise_shape=None, seed=None):
seed = draw_seed(seed)
keep_prob = 1.0 - rate
# If noise_shape is not provided, use the shape of inputs
if noise_shape is None:
noise_shape = inputs.shape
else:
# If noise_shape is provided, replace None with corresponding
# input shape
noise_shape = [
n if n is not None else inputs.shape[i]
for i, n in enumerate(noise_shape)
]
rng = np.random.default_rng(seed)
mask = rng.uniform(size=noise_shape) < keep_prob
mask = np.broadcast_to(mask, inputs.shape)
return np.where(mask, inputs / keep_prob, np.zeros_like(inputs))
def shuffle(x, axis=0, seed=None):
seed = draw_seed(seed)
rng = np.random.default_rng(seed)
return rng.permuted(x, axis=axis)
def gamma(shape, alpha, dtype=None, seed=None):
dtype = dtype or floatx()
seed = draw_seed(seed)
rng = np.random.default_rng(seed)
return rng.gamma(alpha, scale=1.0, size=shape).astype(dtype)
def binomial(shape, counts, probabilities, dtype=None, seed=None):
dtype = dtype or floatx()
seed = draw_seed(seed)
rng = np.random.default_rng(seed)
sample = rng.binomial(n=counts, p=probabilities, size=shape).astype(dtype)
return sample
def beta(shape, alpha, beta, dtype=None, seed=None):
dtype = dtype or floatx()
seed = draw_seed(seed)
rng = np.random.default_rng(seed)
sample = rng.beta(a=alpha, b=beta, size=shape).astype(dtype)
return sample
| keras/keras/backend/numpy/random.py/0 | {
"file_path": "keras/keras/backend/numpy/random.py",
"repo_id": "keras",
"token_count": 1558
} | 181 |
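The `truncated_normal` implementation above is plain rejection sampling: draw normal values, keep those within two standard deviations of the mean, and repeat until enough values survive. A standalone NumPy sketch of the same idea (illustrative only; the function name and defaults are made up and are not the Keras API):

```python
import numpy as np


def truncated_normal_sketch(shape, mean=0.0, stddev=1.0, seed=0):
    """Rejection-sample a normal restricted to [mean - 2*stddev, mean + 2*stddev]."""
    rng = np.random.default_rng(seed)
    lo, hi = mean - 2 * stddev, mean + 2 * stddev
    n = int(np.prod(shape))
    kept = np.empty(0)
    while kept.shape[0] < n:
        batch = rng.normal(loc=mean, scale=stddev, size=n)
        # Keep only the draws that fall inside the two-sigma bounds.
        kept = np.append(kept, batch[(batch >= lo) & (batch <= hi)])
    return kept[:n].reshape(shape)


samples = truncated_normal_sketch((3, 4), mean=1.0, stddev=0.5)
assert samples.min() >= 0.0 and samples.max() <= 2.0
```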
import tensorflow as tf
from tensorflow.experimental import numpy as tfnp
from keras.backend.common import standardize_dtype
from keras.backend.config import floatx
from keras.random.seed_generator import SeedGenerator
from keras.random.seed_generator import draw_seed
from keras.random.seed_generator import make_default_seed
def tf_draw_seed(seed):
# TF ops only accept int32/64 seeds but our base seed is uint32.
return tf.cast(draw_seed(seed), dtype="int32")
def normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None):
dtype = dtype or floatx()
seed = tf_draw_seed(seed)
return tf.random.stateless_normal(
shape=shape, mean=mean, stddev=stddev, dtype=dtype, seed=seed
)
def uniform(shape, minval=0.0, maxval=1.0, dtype=None, seed=None):
dtype = dtype or floatx()
seed = tf_draw_seed(seed)
return tf.random.stateless_uniform(
shape=shape,
minval=tf.cast(minval, dtype),
maxval=tf.cast(maxval, dtype),
dtype=dtype,
seed=seed,
)
def categorical(logits, num_samples, dtype="int64", seed=None):
seed = tf_draw_seed(seed)
output = tf.random.stateless_categorical(logits, num_samples, seed=seed)
return tf.cast(output, dtype)
def randint(shape, minval, maxval, dtype="int32", seed=None):
    intermediate_dtype = dtype
if standardize_dtype(dtype) not in ["int32", "int64"]:
intemediate_dtype = "int64"
seed = tf_draw_seed(seed)
output = tf.random.stateless_uniform(
shape=shape,
minval=minval,
maxval=maxval,
        dtype=intermediate_dtype,
seed=seed,
)
return tf.cast(output, dtype)
def truncated_normal(shape, mean=0.0, stddev=1.0, dtype=None, seed=None):
dtype = dtype or floatx()
seed = tf_draw_seed(seed)
return tf.random.stateless_truncated_normal(
shape=shape, mean=mean, stddev=stddev, dtype=dtype, seed=seed
)
def _get_concrete_noise_shape(inputs, noise_shape):
if noise_shape is None:
return tf.shape(inputs)
concrete_inputs_shape = tf.shape(inputs)
concrete_noise_shape = []
for i, value in enumerate(noise_shape):
concrete_noise_shape.append(
concrete_inputs_shape[i] if value is None else value
)
return concrete_noise_shape
def dropout(inputs, rate, noise_shape=None, seed=None):
seed = tf_draw_seed(seed)
noise_shape = _get_concrete_noise_shape(inputs, noise_shape)
return tf.nn.experimental.stateless_dropout(
inputs,
rate=rate,
noise_shape=noise_shape,
seed=seed,
)
def shuffle(x, axis=0, seed=None):
seed = tf_draw_seed(seed)
if axis == 0:
return tf.random.experimental.stateless_shuffle(x, seed=seed)
x = tfnp.swapaxes(x, axis1=0, axis2=axis)
x = tf.random.experimental.stateless_shuffle(x, seed=seed)
x = tfnp.swapaxes(x, axis1=0, axis2=axis)
return x
def gamma(shape, alpha, dtype=None, seed=None):
dtype = dtype or floatx()
seed = tf_draw_seed(seed)
return tf.random.stateless_gamma(
shape,
alpha=alpha,
dtype=dtype,
seed=seed,
)
def binomial(shape, counts, probabilities, dtype=None, seed=None):
dtype = dtype or floatx()
seed = tf_draw_seed(seed)
sample = tf.random.stateless_binomial(
shape=shape,
seed=seed,
counts=counts,
probs=probabilities,
output_dtype=dtype,
)
return sample
def beta(shape, alpha, beta, dtype=None, seed=None):
dtype = dtype or floatx()
    # TensorFlow doesn't offer a beta distribution function,
    # so we use the identity U(a,b) = X(a) / (X(a) + Y(b)),
    # where U(a,b) is a beta-distributed random variable with
    # parameters a and b, and X(a) and Y(b) are gamma-distributed
    # random variables with parameters a and b respectively.
# Additionally, we'll use two different seeds for our two
# gamma random variables to prevent any unintended
# dependencies and correlations between the generated values
# due to the usage of same seed.
seed_1 = tf_draw_seed(seed)
# The choice of 12 is totally arbitrary, as we're
# incrementing the first drawn seed by a CONSTANT to
# ensure deterministic results.
seed_2 = seed_1 + 12
alpha = tf.convert_to_tensor(alpha, dtype=dtype)
beta = tf.convert_to_tensor(beta, dtype=dtype)
    # TensorFlow's tf.random.stateless_gamma has an unconventional
    # broadcasting rule: it checks the broadcastability of alpha's shape
    # against ONLY the RIGHTMOST dimension of the specified output shape
    # instead of considering the whole shape. Consequently, it raises
    # errors for perfectly broadcastable shapes, such as an output shape
    # of (2, 3) with an alpha shape of (1, 3). To resolve this, we
    # explicitly broadcast alpha and beta to `shape` before passing them
    # to the stateless_gamma function.
if tf.rank(alpha) > 1:
alpha = tf.broadcast_to(alpha, shape)
if tf.rank(beta) > 1:
beta = tf.broadcast_to(beta, shape)
gamma_a = tf.random.stateless_gamma(
shape=shape, seed=seed_1, alpha=alpha, dtype=dtype
)
gamma_b = tf.random.stateless_gamma(
shape=shape, seed=seed_2, alpha=beta, dtype=dtype
)
sample = gamma_a / (gamma_a + gamma_b)
return sample
| keras/keras/backend/tensorflow/random.py/0 | {
"file_path": "keras/keras/backend/tensorflow/random.py",
"repo_id": "keras",
"token_count": 2179
} | 182 |
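The `beta` implementation above relies on the standard identity: if X ~ Gamma(a, 1) and Y ~ Gamma(b, 1) are independent, then X / (X + Y) ~ Beta(a, b), which is why two independently seeded gamma draws are combined. A NumPy sketch of the same construction (illustrative; the code above uses `tf.random.stateless_gamma` rather than NumPy):

```python
import numpy as np


def beta_from_gammas(shape, alpha, beta, seed=0):
    rng = np.random.default_rng(seed)
    # Two independent unit-scale gamma draws; their normalized ratio is Beta.
    gamma_a = rng.gamma(alpha, 1.0, size=shape)
    gamma_b = rng.gamma(beta, 1.0, size=shape)
    return gamma_a / (gamma_a + gamma_b)


samples = beta_from_gammas((10000,), alpha=2.0, beta=5.0)
# The Beta(2, 5) mean is alpha / (alpha + beta) = 2 / 7.
assert abs(samples.mean() - 2 / 7) < 0.02
```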
import builtins
import math
import torch
from keras.backend import KerasTensor
from keras.backend import config
from keras.backend.common import dtypes
from keras.backend.common.variables import standardize_dtype
from keras.backend.torch.core import cast
from keras.backend.torch.core import convert_to_tensor
from keras.backend.torch.core import get_device
from keras.backend.torch.core import is_tensor
from keras.backend.torch.core import to_torch_dtype
TORCH_INT_TYPES = (
torch.int8,
torch.int16,
torch.int32,
torch.int64,
)
def add(x1, x2):
x1 = convert_to_tensor(x1)
x2 = convert_to_tensor(x2)
return torch.add(x1, x2)
def einsum(subscripts, *operands, **kwargs):
operands = [convert_to_tensor(operand) for operand in operands]
return torch.einsum(subscripts, *operands)
def subtract(x1, x2):
x1 = convert_to_tensor(x1)
x2 = convert_to_tensor(x2)
# TODO: torch.subtract doesn't support bool
if standardize_dtype(x1.dtype) == "bool":
x1 = cast(x1, x2.dtype)
if standardize_dtype(x2.dtype) == "bool":
x2 = cast(x2, x1.dtype)
return torch.subtract(x1, x2)
def _can_use_int_matmul(x1, x2):
# torch._int_mm only accepts the following conditions:
# 1. cuda
# 2. both inputs must have int8 dtype
# 3. both inputs must be 2d
    # 4. x1.shape must be [>16, >= 16 and a multiple of 8]
    # 5. x2.shape must be [>= 16 and a multiple of 8, multiple of 8]
if get_device() != "cuda":
return False
x1_dtype = standardize_dtype(x1.dtype)
x2_dtype = standardize_dtype(x2.dtype)
if x1_dtype != "int8" or x2_dtype != "int8":
return False
x1_shape = x1.shape
x2_shape = x2.shape
if x1.ndim != 2 or x2.ndim != 2:
return False
if x1_shape[0] <= 16 or x1_shape[1] < 16 or x1_shape[1] % 8 != 0:
return False
if x2_shape[0] < 16 or x2_shape[0] % 8 != 0 or x2_shape[1] % 8 != 0:
return False
return True
def matmul(x1, x2):
x1 = convert_to_tensor(x1)
x2 = convert_to_tensor(x2)
# Shortcut for torch._int_mm
# TODO: Loosen the restriction of the usage of torch._int_mm
# TODO: We should replace torch._int_mm with the public api if possible
if _can_use_int_matmul(x1, x2):
return torch._int_mm(x1, x2)
x1_dtype = standardize_dtype(x1.dtype)
x2_dtype = standardize_dtype(x2.dtype)
if x1_dtype == "int8" and x2_dtype == "int8":
result_dtype = "int32"
else:
result_dtype = dtypes.result_type(x1.dtype, x2.dtype)
compute_dtype = result_dtype
# TODO: torch.matmul doesn't support bool
if compute_dtype == "bool":
compute_dtype = config.floatx()
# TODO: torch.matmul doesn't support float16 with cpu
if get_device() == "cpu" and compute_dtype == "float16":
compute_dtype = "float32"
# TODO: torch.matmul doesn't support integer types with cuda
if get_device() == "cuda" and "int" in compute_dtype:
compute_dtype = config.floatx()
x1 = cast(x1, compute_dtype)
x2 = cast(x2, compute_dtype)
return cast(torch.matmul(x1, x2), result_dtype)
def multiply(x1, x2):
x1 = convert_to_tensor(x1)
x2 = convert_to_tensor(x2)
return torch.multiply(x1, x2)
def mean(x, axis=None, keepdims=False):
if isinstance(x, (list, tuple)):
x = stack(x)
x = convert_to_tensor(x)
if axis == () or axis == []:
# Torch handles the empty axis case differently from numpy.
return x
elif isinstance(axis, int):
axis = (axis,) # see [NB] below
ori_dtype = standardize_dtype(x.dtype)
# torch.mean only supports floating point inputs
compute_dtype = dtypes.result_type(x.dtype, "float32")
if "int" in ori_dtype or ori_dtype == "bool":
result_dtype = compute_dtype
else:
result_dtype = ori_dtype
# [NB] the python torch op torch.mean() is generated into
# `torch._C._VariableFunctions.pyi`, and the method
# signature is overloaded.
# Dynamo won't actually find the correct signature of
# `torch.mean()` if arguments are passed via kwargs
# So we have to pass the arguments via positional args
# EXCEPT for those that are forced as kwargs via the `*`
# delimiter in the overloaded method signatures.
# Additionally, we have to create a singleton-tuple
# when `axis` is an int to match the existing fn signature
result = torch.mean(
x,
axis,
keepdims,
dtype=to_torch_dtype(compute_dtype),
)
return cast(result, result_dtype)
def max(x, axis=None, keepdims=False, initial=None):
x = convert_to_tensor(x)
if 0 in x.shape:
if initial is None:
raise ValueError("Cannot compute the max of an empty tensor.")
elif keepdims:
return torch.full((1,) * len(x.shape), initial)
else:
return torch.tensor(initial)
if axis is None:
result = torch.max(x)
else:
result = amax(x, axis=axis, keepdims=keepdims)
if isinstance(getattr(result, "values", None), torch.Tensor):
result = result.values
if initial is not None:
initial = convert_to_tensor(initial)
return torch.maximum(result, torch.full(result.shape, initial))
return result
def ones(shape, dtype=None):
dtype = to_torch_dtype(dtype or config.floatx())
if isinstance(shape, int):
shape = (shape,)
return torch.ones(size=shape, dtype=dtype, device=get_device())
def zeros(shape, dtype=None):
dtype = to_torch_dtype(dtype or config.floatx())
if isinstance(shape, int):
shape = (shape,)
return torch.zeros(size=shape, dtype=dtype, device=get_device())
def zeros_like(x, dtype=None):
x = convert_to_tensor(x)
dtype = to_torch_dtype(dtype or x.dtype)
return torch.zeros_like(x, dtype=dtype)
def absolute(x):
return abs(x)
def abs(x):
x = convert_to_tensor(x)
# bool are always non-negative
if standardize_dtype(x.dtype) == "bool":
return x
return torch.abs(x)
def all(x, axis=None, keepdims=False):
x = convert_to_tensor(x)
if axis is None:
return cast(torch.all(x), "bool")
if not isinstance(axis, (list, tuple)):
axis = (axis,)
for a in axis:
# `torch.all` does not handle multiple axes.
x = torch.all(x, dim=a, keepdim=keepdims)
return cast(x, "bool")
def any(x, axis=None, keepdims=False):
x = convert_to_tensor(x)
if axis is None:
return cast(torch.any(x), "bool")
if not isinstance(axis, (list, tuple)):
axis = (axis,)
for a in axis:
# `torch.any` does not handle multiple axes.
x = torch.any(x, dim=a, keepdim=keepdims)
return cast(x, "bool")
def amax(x, axis=None, keepdims=False):
x = convert_to_tensor(x)
if axis is None:
return torch.amax(x)
if axis == () or axis == []:
# Torch handles the empty axis case differently from numpy.
return x
return torch.amax(x, dim=axis, keepdim=keepdims)
def amin(x, axis=None, keepdims=False):
x = convert_to_tensor(x)
if axis is None:
return torch.amin(x)
if axis == () or axis == []:
# Torch handles the empty axis case differently from numpy.
return x
return torch.amin(x, dim=axis, keepdim=keepdims)
def append(x1, x2, axis=None):
x1, x2 = convert_to_tensor(x1), convert_to_tensor(x2)
if axis is None:
return torch.cat((x1.flatten(), x2.flatten()))
return torch.cat((x1, x2), dim=axis)
def arange(start, stop=None, step=1, dtype=None):
if dtype is None:
dtypes_to_resolve = [
getattr(start, "dtype", type(start)),
getattr(step, "dtype", type(step)),
]
if stop is not None:
dtypes_to_resolve.append(getattr(stop, "dtype", type(stop)))
dtype = dtypes.result_type(*dtypes_to_resolve)
dtype = to_torch_dtype(dtype)
if stop is None:
return torch.arange(end=start, dtype=dtype, device=get_device())
return torch.arange(
start, stop, step=step, dtype=dtype, device=get_device()
)
def arccos(x):
x = convert_to_tensor(x)
return torch.arccos(x)
def arccosh(x):
x = convert_to_tensor(x)
return torch.arccosh(x)
def arcsin(x):
x = convert_to_tensor(x)
return torch.arcsin(x)
def arcsinh(x):
x = convert_to_tensor(x)
return torch.arcsinh(x)
def arctan(x):
x = convert_to_tensor(x)
return torch.arctan(x)
def arctan2(x1, x2):
x1 = convert_to_tensor(x1)
x2 = convert_to_tensor(x2)
result_dtype = dtypes.result_type(x1.dtype, x2.dtype, float)
compute_dtype = result_dtype
# TODO: torch.arctan2 doesn't support float16 with cpu
if get_device() == "cpu" and compute_dtype == "float16":
compute_dtype = "float32"
x1 = cast(x1, compute_dtype)
x2 = cast(x2, compute_dtype)
return cast(torch.arctan2(x1, x2), result_dtype)
def arctanh(x):
x = convert_to_tensor(x)
return torch.arctanh(x)
def argmax(x, axis=None):
x = convert_to_tensor(x)
# TODO: torch.argmax doesn't support bool
if standardize_dtype(x.dtype) == "bool":
x = cast(x, "uint8")
return cast(torch.argmax(x, dim=axis), dtype="int32")
def argmin(x, axis=None):
x = convert_to_tensor(x)
# TODO: torch.argmin doesn't support bool
if standardize_dtype(x.dtype) == "bool":
x = cast(x, "uint8")
return cast(torch.argmin(x, dim=axis), dtype="int32")
def argsort(x, axis=-1):
x = convert_to_tensor(x)
# TODO: torch.argsort doesn't support bool
if standardize_dtype(x.dtype) == "bool":
x = cast(x, "uint8")
if axis is None:
axis = -1
x = x.reshape(-1)
return cast(torch.argsort(x, dim=axis, stable=True), dtype="int32")
def array(x, dtype=None):
return convert_to_tensor(x, dtype=dtype)
def average(x, axis=None, weights=None):
x = convert_to_tensor(x)
dtypes_to_resolve = [x.dtype, float]
if weights is not None:
weights = convert_to_tensor(weights)
dtypes_to_resolve.append(weights.dtype)
dtype = dtypes.result_type(*dtypes_to_resolve)
x = cast(x, dtype)
if weights is not None:
weights = cast(weights, dtype)
if axis == () or axis == []:
# Torch handles the empty axis case differently from numpy.
return x
if weights is not None:
return torch.sum(torch.mul(x, weights), dim=axis) / torch.sum(
weights, dim=-1
)
return torch.mean(x, axis)
def bincount(x, weights=None, minlength=0):
x = convert_to_tensor(x)
dtypes_to_resolve = [x.dtype]
if weights is not None:
weights = convert_to_tensor(weights)
dtypes_to_resolve.append(weights.dtype)
dtype = dtypes.result_type(*dtypes_to_resolve)
else:
dtype = "int32"
if len(x.shape) == 2:
if weights is None:
def bincount_fn(arr):
return torch.bincount(arr, minlength=minlength)
bincounts = list(map(bincount_fn, x))
else:
def bincount_fn(arr_w):
return torch.bincount(
arr_w[0], weights=arr_w[1], minlength=minlength
)
bincounts = list(map(bincount_fn, zip(x, weights)))
return cast(torch.stack(bincounts), dtype)
return cast(torch.bincount(x, weights, minlength), dtype)
def broadcast_to(x, shape):
x = convert_to_tensor(x)
return torch.broadcast_to(x, shape)
def ceil(x):
x = convert_to_tensor(x)
ori_dtype = standardize_dtype(x.dtype)
# TODO: torch.ceil doesn't support bool
if ori_dtype == "bool":
x = cast(x, "uint8")
# TODO: torch.ceil doesn't support float16 with cpu
elif get_device() == "cpu" and ori_dtype == "float16":
x = cast(x, config.floatx())
if ori_dtype == "int64":
dtype = config.floatx()
else:
dtype = dtypes.result_type(ori_dtype, float)
return cast(torch.ceil(x), dtype=dtype)
def clip(x, x_min, x_max):
x = convert_to_tensor(x)
x_min = convert_to_tensor(x_min)
x_max = convert_to_tensor(x_max)
ori_dtype = standardize_dtype(x.dtype)
# TODO: torch.clip doesn't support float16 with cpu
if get_device() == "cpu" and ori_dtype == "float16":
x = cast(x, "float32")
return cast(torch.clip(x, min=x_min, max=x_max), "float16")
if ori_dtype == "bool":
x = cast(x, "int32")
return torch.clip(x, min=x_min, max=x_max)
def concatenate(xs, axis=0):
xs = [convert_to_tensor(x) for x in xs]
return torch.cat(xs, dim=axis)
def conjugate(x):
if not isinstance(x, torch.Tensor):
x = torch.from_numpy(x) # needed for complex type conversion
return torch.conj(x).resolve_conj()
def conj(x):
if not isinstance(x, torch.Tensor):
x = torch.from_numpy(x) # needed for complex type conversion
return torch.conj(x).resolve_conj()
def copy(x):
x = convert_to_tensor(x)
return torch.clone(x)
def cos(x):
x = convert_to_tensor(x)
return torch.cos(x)
def cosh(x):
x = convert_to_tensor(x)
return torch.cosh(x)
def count_nonzero(x, axis=None):
x = convert_to_tensor(x)
if axis == () or axis == []:
# Torch handles the empty axis case differently from numpy.
return cast(torch.ne(x, 0), "int32")
return cast(torch.count_nonzero(x, dim=axis).T, "int32")
def cross(x1, x2, axisa=-1, axisb=-1, axisc=-1, axis=-1):
if axisa != -1 or axisb != -1 or axisc != -1:
raise ValueError(
"Torch backend does not support `axisa`, `axisb`, or `axisc`. "
f"Received: axisa={axisa}, axisb={axisb}, axisc={axisc}. Please "
"use `axis` arg in torch backend."
)
x1 = convert_to_tensor(x1)
x2 = convert_to_tensor(x2)
compute_dtype = dtypes.result_type(x1.dtype, x2.dtype)
result_dtype = compute_dtype
# TODO: torch.cross doesn't support bfloat16 with gpu
if get_device() == "cuda" and compute_dtype == "bfloat16":
compute_dtype = "float32"
# TODO: torch.cross doesn't support float16 with cpu
elif get_device() == "cpu" and compute_dtype == "float16":
compute_dtype = "float32"
x1 = cast(x1, compute_dtype)
x2 = cast(x2, compute_dtype)
return cast(torch.cross(x1, x2, dim=axis), result_dtype)
def cumprod(x, axis=None, dtype=None):
x = convert_to_tensor(x)
if axis is None:
x = x.flatten()
axis = 0
dtype = dtypes.result_type(dtype or x.dtype)
if dtype == "bool":
dtype = "int32"
# TODO: torch.cumprod doesn't support float16 with cpu
elif get_device() == "cpu" and dtype == "float16":
return cast(
torch.cumprod(x, dim=axis, dtype=to_torch_dtype("float32")),
"float16",
)
return torch.cumprod(x, dim=axis, dtype=to_torch_dtype(dtype))
def cumsum(x, axis=None, dtype=None):
x = convert_to_tensor(x)
if axis is None:
x = x.flatten()
axis = 0
dtype = dtypes.result_type(dtype or x.dtype)
if dtype == "bool":
dtype = "int32"
# TODO: torch.cumsum doesn't support float16 with cpu
elif get_device() == "cpu" and dtype == "float16":
return cast(
torch.cumsum(x, dim=axis, dtype=to_torch_dtype("float32")),
"float16",
)
return torch.cumsum(x, dim=axis, dtype=to_torch_dtype(dtype))
def diag(x, k=0):
x = convert_to_tensor(x)
return torch.diag(x, diagonal=k)
def diagonal(x, offset=0, axis1=0, axis2=1):
x = convert_to_tensor(x)
return torch.diagonal(
x,
offset=offset,
dim1=axis1,
dim2=axis2,
)
def diff(a, n=1, axis=-1):
a = convert_to_tensor(a)
return torch.diff(a, n=n, dim=axis)
def digitize(x, bins):
x = convert_to_tensor(x)
bins = convert_to_tensor(bins)
if standardize_dtype(x.dtype) == "bool":
x = cast(x, "uint8")
return cast(torch.bucketize(x, bins, right=True), "int32")
def dot(x, y):
x = convert_to_tensor(x)
y = convert_to_tensor(y)
result_dtype = dtypes.result_type(x.dtype, y.dtype)
# GPU only supports float types
compute_dtype = dtypes.result_type(result_dtype, float)
# TODO: torch.matmul doesn't support float16 with cpu
if get_device() == "cpu" and compute_dtype == "float16":
compute_dtype = "float32"
x = cast(x, compute_dtype)
y = cast(y, compute_dtype)
if x.ndim == 0 or y.ndim == 0:
return cast(torch.multiply(x, y), result_dtype)
return cast(torch.matmul(x, y), result_dtype)
def empty(shape, dtype=None):
dtype = to_torch_dtype(dtype or config.floatx())
return torch.empty(size=shape, dtype=dtype, device=get_device())
def equal(x1, x2):
x1, x2 = convert_to_tensor(x1), convert_to_tensor(x2)
return torch.eq(x1, x2)
def exp(x):
x = convert_to_tensor(x)
ori_dtype = standardize_dtype(x.dtype)
if "int" in ori_dtype or ori_dtype == "bool":
x = cast(x, config.floatx())
return torch.exp(x)
def expand_dims(x, axis):
x = convert_to_tensor(x)
return torch.unsqueeze(x, dim=axis)
def expm1(x):
x = convert_to_tensor(x)
ori_dtype = standardize_dtype(x.dtype)
if "int" in ori_dtype or ori_dtype == "bool":
x = cast(x, config.floatx())
return torch.expm1(x)
def flip(x, axis=None):
x = convert_to_tensor(x)
if axis is None:
axis = tuple(range(x.ndim))
if isinstance(axis, int):
axis = (axis,)
return torch.flip(x, dims=axis)
def floor(x):
x = convert_to_tensor(x)
dtype = (
config.floatx()
if standardize_dtype(x.dtype) == "int64"
else dtypes.result_type(x.dtype, float)
)
x = cast(x, dtype)
return torch.floor(x)
def full(shape, fill_value, dtype=None):
dtype = to_torch_dtype(dtype)
fill_value = convert_to_tensor(fill_value, dtype=dtype)
if len(fill_value.shape) > 0:
        # `torch.full` only supports scalar `fill_value`.
expand_size = len(shape) - len(fill_value.shape)
tile_shape = tuple(shape[:expand_size]) + (1,) * len(fill_value.shape)
return torch.tile(fill_value, tile_shape)
return torch.full(
size=shape, fill_value=fill_value, dtype=dtype, device=get_device()
)
def full_like(x, fill_value, dtype=None):
dtype = dtype or x.dtype
return full(shape=x.shape, fill_value=fill_value, dtype=dtype)
def greater(x1, x2):
x1, x2 = convert_to_tensor(x1), convert_to_tensor(x2)
return torch.greater(x1, x2)
def greater_equal(x1, x2):
x1, x2 = convert_to_tensor(x1), convert_to_tensor(x2)
return torch.greater_equal(x1, x2)
def hstack(xs):
xs = [convert_to_tensor(x) for x in xs]
return torch.hstack(xs)
def identity(n, dtype=None):
dtype = to_torch_dtype(dtype or config.floatx())
# TODO: torch.eye doesn't support bfloat16 with cpu
if get_device() == "cpu" and dtype == torch.bfloat16:
return cast(
torch.eye(n, dtype=to_torch_dtype("float32"), device=get_device()),
dtype,
)
return torch.eye(n, dtype=dtype, device=get_device())
def imag(x):
if not isinstance(x, torch.Tensor):
x = torch.from_numpy(x) # needed for complex type conversion
return torch.imag(x)
def isclose(x1, x2):
x1 = convert_to_tensor(x1)
x2 = convert_to_tensor(x2)
result_dtype = dtypes.result_type(x1.dtype, x2.dtype)
x1 = cast(x1, result_dtype)
x2 = cast(x2, result_dtype)
return torch.isclose(x1, x2)
def isfinite(x):
x = convert_to_tensor(x)
return torch.isfinite(x)
def isinf(x):
x = convert_to_tensor(x)
return torch.isinf(x)
def isnan(x):
x = convert_to_tensor(x)
return torch.isnan(x)
def less(x1, x2):
x1, x2 = convert_to_tensor(x1), convert_to_tensor(x2)
return torch.less(x1, x2)
def less_equal(x1, x2):
x1, x2 = convert_to_tensor(x1), convert_to_tensor(x2)
return torch.less_equal(x1, x2)
def linspace(
start, stop, num=50, endpoint=True, retstep=False, dtype=None, axis=0
):
if axis != 0:
raise ValueError(
"torch.linspace does not support an `axis` argument. "
f"Received axis={axis}"
)
if dtype is None:
dtypes_to_resolve = [
getattr(start, "dtype", type(start)),
getattr(stop, "dtype", type(stop)),
float,
]
dtype = dtypes.result_type(*dtypes_to_resolve)
dtype = to_torch_dtype(dtype)
if endpoint is False:
stop = stop - ((stop - start) / num)
if hasattr(start, "__len__") and hasattr(stop, "__len__"):
start = convert_to_tensor(start, dtype=dtype)
stop = convert_to_tensor(stop, dtype=dtype)
steps = torch.arange(num, dtype=dtype, device=get_device()) / (num - 1)
# reshape `steps` to allow for broadcasting
for i in range(start.ndim):
steps = steps.unsqueeze(-1)
# increments from `start` to `stop` in each dimension
linspace = start[None] + steps * (stop - start)[None]
else:
linspace = torch.linspace(
start=start,
end=stop,
steps=num,
dtype=dtype,
device=get_device(),
)
if retstep is True:
return (linspace, num)
return linspace
def log(x):
x = convert_to_tensor(x)
return torch.log(x)
def log10(x):
x = convert_to_tensor(x)
return torch.log10(x)
def log1p(x):
x = convert_to_tensor(x)
return torch.log1p(x)
def log2(x):
x = convert_to_tensor(x)
return torch.log2(x)
def logaddexp(x1, x2):
x1 = convert_to_tensor(x1)
x2 = convert_to_tensor(x2)
dtype = dtypes.result_type(x1.dtype, x2.dtype, float)
# TODO: torch.logaddexp doesn't support float16 with cpu
if get_device() == "cpu" and dtype == "float16":
x1 = cast(x1, "float32")
x2 = cast(x2, "float32")
return cast(torch.logaddexp(x1, x2), dtype)
else:
x1 = cast(x1, dtype)
x2 = cast(x2, dtype)
return torch.logaddexp(x1, x2)
def logical_and(x1, x2):
x1, x2 = convert_to_tensor(x1), convert_to_tensor(x2)
return torch.logical_and(x1, x2)
def logical_not(x):
x = convert_to_tensor(x)
return torch.logical_not(x)
def logical_or(x1, x2):
x1, x2 = convert_to_tensor(x1), convert_to_tensor(x2)
return torch.logical_or(x1, x2)
def logspace(start, stop, num=50, endpoint=True, base=10, dtype=None, axis=0):
if axis != 0:
raise ValueError(
"torch.logspace does not support an `axis` argument. "
f"Received axis={axis}"
)
if dtype is None:
dtypes_to_resolve = [
getattr(start, "dtype", type(start)),
getattr(stop, "dtype", type(stop)),
float,
]
dtype = dtypes.result_type(*dtypes_to_resolve)
dtype = to_torch_dtype(dtype)
if endpoint is False:
stop = stop - ((stop - start) / num)
if hasattr(start, "__len__") and hasattr(stop, "__len__"):
start = convert_to_tensor(start, dtype=dtype)
stop = convert_to_tensor(stop, dtype=dtype)
steps = torch.arange(num, dtype=dtype, device=get_device()) / (num - 1)
# reshape `steps` to allow for broadcasting
for i in range(start.ndim):
steps = steps.unsqueeze(-1)
# increments from `start` to `stop` in each dimension
linspace = start[None] + steps * (stop - start)[None]
logspace = base**linspace
else:
compute_dtype = dtype
# TODO: torch.logspace doesn't support float16 with cpu
if get_device() == "cpu" and dtype == torch.float16:
compute_dtype = torch.float32
logspace = cast(
torch.logspace(
start=start,
end=stop,
steps=num,
base=base,
dtype=compute_dtype,
device=get_device(),
),
dtype,
)
return logspace
def maximum(x1, x2):
if not isinstance(x1, (int, float)):
x1 = convert_to_tensor(x1)
if not isinstance(x2, (int, float)):
x2 = convert_to_tensor(x2)
dtype = dtypes.result_type(
getattr(x1, "dtype", type(x1)),
getattr(x2, "dtype", type(x2)),
)
x1 = convert_to_tensor(x1, dtype)
x2 = convert_to_tensor(x2, dtype)
return torch.maximum(x1, x2)
def median(x, axis=None, keepdims=False):
x = convert_to_tensor(x)
compute_dtype = dtypes.result_type(x.dtype, "float32")
result_dtype = dtypes.result_type(x.dtype, float)
x = cast(x, compute_dtype)
if axis is None and keepdims is False:
return cast(torch.median(x), result_dtype)
elif isinstance(axis, int):
return cast(
torch.median(x, dim=axis, keepdim=keepdims)[0], result_dtype
)
# support multiple axes
if axis is None:
y = reshape(x, [-1])
else:
# transpose
axis = list(map(lambda a: a if a >= 0 else a + x.ndim, axis))
other_dims = sorted(set(range(x.ndim)).difference(axis))
perm = other_dims + list(axis)
x_permed = torch.permute(x, dims=perm)
# reshape
x_shape = list(x.shape)
other_shape = [x_shape[i] for i in other_dims]
end_shape = [math.prod([x_shape[i] for i in axis])]
full_shape = other_shape + end_shape
y = reshape(x_permed, full_shape)
y = torch.median(y, dim=-1)[0]
if keepdims:
if axis is None:
for _ in range(x.ndim):
y = expand_dims(y, axis=-1)
else:
for i in sorted(axis):
y = expand_dims(y, axis=i)
return cast(y, result_dtype)
def meshgrid(*x, indexing="xy"):
x = [convert_to_tensor(sc_tensor) for sc_tensor in x]
return torch.meshgrid(x, indexing=indexing)
def min(x, axis=None, keepdims=False, initial=None):
x = convert_to_tensor(x)
if 0 in x.shape:
if initial is None:
raise ValueError("Cannot compute the min of an empty tensor.")
elif keepdims:
return torch.full((1,) * len(x.shape), initial)
else:
return torch.tensor(initial)
if axis is None:
result = torch.min(x)
else:
result = amin(x, axis=axis, keepdims=keepdims)
if isinstance(getattr(result, "values", None), torch.Tensor):
result = result.values
if initial is not None:
initial = convert_to_tensor(initial)
return torch.minimum(result, initial)
return result
def minimum(x1, x2):
if not isinstance(x1, (int, float)):
x1 = convert_to_tensor(x1)
if not isinstance(x2, (int, float)):
x2 = convert_to_tensor(x2)
dtype = dtypes.result_type(
getattr(x1, "dtype", type(x1)),
getattr(x2, "dtype", type(x2)),
)
x1 = convert_to_tensor(x1, dtype)
x2 = convert_to_tensor(x2, dtype)
return torch.minimum(x1, x2)
def mod(x1, x2):
x1 = convert_to_tensor(x1)
x2 = convert_to_tensor(x2)
dtype = dtypes.result_type(x1.dtype, x2.dtype)
if dtype == "bool":
x1 = cast(x1, "int32")
x2 = cast(x2, "int32")
return torch.remainder(x1, x2)
def moveaxis(x, source, destination):
x = convert_to_tensor(x)
return torch.moveaxis(x, source=source, destination=destination)
def nan_to_num(x):
x = convert_to_tensor(x)
return torch.nan_to_num(x)
def ndim(x):
x = convert_to_tensor(x)
return x.ndim
def nonzero(x):
x = convert_to_tensor(x)
return tuple(cast(indices, "int32") for indices in torch.nonzero(x).T)
def not_equal(x1, x2):
x1, x2 = convert_to_tensor(x1), convert_to_tensor(x2)
return torch.not_equal(x1, x2)
def ones_like(x, dtype=None):
x = convert_to_tensor(x)
dtype = to_torch_dtype(dtype or x.dtype)
return torch.ones_like(x, dtype=dtype)
def outer(x1, x2):
x1, x2 = convert_to_tensor(x1), convert_to_tensor(x2)
return torch.outer(x1.flatten(), x2.flatten())
def pad(x, pad_width, mode="constant", constant_values=None):
kwargs = {}
if constant_values is not None:
if mode != "constant":
raise ValueError(
"Argument `constant_values` can only be "
"provided when `mode == 'constant'`. "
f"Received: mode={mode}"
)
kwargs["value"] = constant_values
x = convert_to_tensor(x)
pad_sum = []
pad_width = list(pad_width)[::-1] # torch uses reverse order
pad_width_sum = 0
for pad in pad_width:
pad_width_sum += pad[0] + pad[1]
for pad in pad_width:
pad_sum += pad
pad_width_sum -= pad[0] + pad[1]
if pad_width_sum == 0: # early break when no padding in higher order
break
if mode == "symmetric":
mode = "replicate"
if mode == "constant":
return torch.nn.functional.pad(x, pad=pad_sum, mode=mode, **kwargs)
# TODO: reflect and symmetric padding are implemented for padding the
# last 3 dimensions of a 4D or 5D input tensor, the last 2 dimensions of a
# 3D or 4D input tensor, or the last dimension of a 2D or 3D input tensor.
# https://pytorch.org/docs/stable/generated/torch.nn.functional.pad.html
ori_dtype = x.dtype
ori_ndim = x.ndim
need_squeeze = False
if x.ndim < 3:
need_squeeze = True
new_dims = [1] * (3 - x.ndim)
x = x.view(*new_dims, *x.shape)
need_cast = False
if x.dtype not in (torch.float32, torch.float64):
# TODO: reflect and symmetric padding are only supported with float32/64
# https://github.com/pytorch/pytorch/issues/40763
need_cast = True
x = cast(x, torch.float32)
x = torch.nn.functional.pad(x, pad=pad_sum, mode=mode)
if need_cast:
x = cast(x, ori_dtype)
if need_squeeze:
x = torch.squeeze(x, dim=tuple(range(3 - ori_ndim)))
return x
def prod(x, axis=None, keepdims=False, dtype=None):
x = convert_to_tensor(x)
if dtype is None:
dtype = dtypes.result_type(x.dtype)
if dtype == "bool":
dtype = "int32"
elif dtype in ("int8", "int16"):
dtype = "int32"
# TODO: torch.prod doesn't support uint32
elif dtype == "uint8":
dtype = "int32"
compute_dtype = dtype
# TODO: torch.prod doesn't support float16 with cpu
if get_device() == "cpu" and compute_dtype == "float16":
compute_dtype = "float32"
if axis is None:
return cast(torch.prod(x, dtype=to_torch_dtype(compute_dtype)), dtype)
if not isinstance(axis, (list, tuple)):
axis = (axis,)
for a in axis:
# `torch.prod` does not handle multiple axes.
x = cast(
torch.prod(
x, dim=a, keepdim=keepdims, dtype=to_torch_dtype(compute_dtype)
),
dtype,
)
return x
def quantile(x, q, axis=None, method="linear", keepdims=False):
if isinstance(axis, int):
axis = [axis]
x = convert_to_tensor(x)
q = convert_to_tensor(q)
compute_dtype = dtypes.result_type(x.dtype, "float32")
result_dtype = dtypes.result_type(x.dtype, float)
x = cast(x, compute_dtype)
# q must be same dtype as x
if x.dtype != q.dtype:
q = cast(q, x.dtype)
# support multiple axes
if axis is None:
y = reshape(x, [-1])
else:
# transpose
axis = list(map(lambda a: a if a >= 0 else a + x.ndim, axis))
other_dims = sorted(set(range(x.ndim)).difference(axis))
perm = other_dims + list(axis)
x_permed = torch.permute(x, dims=perm)
# reshape
x_shape = list(x.shape)
other_shape = [x_shape[i] for i in other_dims]
end_shape = [math.prod([x_shape[i] for i in axis])]
full_shape = other_shape + end_shape
y = reshape(x_permed, full_shape)
y = torch.quantile(y, q, dim=-1, interpolation=method)
if keepdims:
if axis is None:
for _ in range(x.ndim):
y = expand_dims(y, axis=-1)
else:
for i in sorted(axis):
i = i + 1 if q.ndim > 0 else i
y = expand_dims(y, axis=i)
return cast(y, result_dtype)
def ravel(x):
x = convert_to_tensor(x)
return torch.ravel(x)
def real(x):
if not isinstance(x, torch.Tensor):
x = torch.from_numpy(x) # needed for complex type conversion
return torch.real(x)
def reciprocal(x):
x = convert_to_tensor(x)
return torch.reciprocal(x)
def repeat(x, repeats, axis=None):
x = convert_to_tensor(x)
if get_device() == "meta":
x = KerasTensor(x.shape, standardize_dtype(x.dtype))
outputs = repeat(x, repeats, axis=axis)
return torch.empty(
size=outputs.shape,
dtype=to_torch_dtype(outputs.dtype),
device=get_device(),
)
repeats = convert_to_tensor(repeats, dtype=int)
return torch.repeat_interleave(x, repeats, dim=axis)
def reshape(x, newshape):
if not isinstance(newshape, (list, tuple)):
newshape = (newshape,)
x = convert_to_tensor(x)
return torch.reshape(x, newshape)
def roll(x, shift, axis=None):
x = convert_to_tensor(x)
return torch.roll(x, shift, dims=axis)
def sign(x):
x = convert_to_tensor(x)
return torch.sign(x)
def sin(x):
x = convert_to_tensor(x)
return torch.sin(x)
def sinh(x):
x = convert_to_tensor(x)
return torch.sinh(x)
def size(x):
x_shape = convert_to_tensor(tuple(x.shape))
return torch.prod(x_shape)
def sort(x, axis=-1):
x = convert_to_tensor(x)
# TODO: torch.sort doesn't support bool with cuda
if get_device() == "cuda" and standardize_dtype(x.dtype) == "bool":
x = cast(x, "uint8")
return cast(torch.sort(x, dim=axis).values, "bool")
return torch.sort(x, dim=axis).values
def split(x, indices_or_sections, axis=0):
x = convert_to_tensor(x)
dim = x.shape[axis]
if not isinstance(indices_or_sections, int):
indices_or_sections = convert_to_tensor(indices_or_sections)
start_size = indices_or_sections[0:1]
end_size = dim - indices_or_sections[-1:]
chunk_sizes = torch.concat(
[start_size, torch.diff(indices_or_sections), end_size], dim=0
)
# torch.split doesn't support tensor input for `split_size_or_sections`
chunk_sizes = chunk_sizes.tolist()
else:
if dim % indices_or_sections != 0:
raise ValueError(
f"Received indices_or_sections={indices_or_sections} "
f"(interpreted as a number of sections) and axis={axis}, "
f"but input dimension x.shape[{axis}]={x.shape[axis]} "
f"is not divisible by {indices_or_sections}. "
f"Full input shape: x.shape={x.shape}"
)
chunk_sizes = dim // indices_or_sections
out = torch.split(
tensor=x,
split_size_or_sections=chunk_sizes,
dim=axis,
)
if dim == 0 and isinstance(indices_or_sections, int):
out = tuple(out[0].clone() for _ in range(indices_or_sections))
return out
def stack(x, axis=0):
x = [convert_to_tensor(elem) for elem in x]
return torch.stack(x, dim=axis)
def std(x, axis=None, keepdims=False):
x = convert_to_tensor(x)
ori_dtype = standardize_dtype(x.dtype)
if "int" in ori_dtype or ori_dtype == "bool":
x = cast(x, "float32")
# Remove Bessel correction to align with numpy
return torch.std(x, dim=axis, keepdim=keepdims, unbiased=False)
def swapaxes(x, axis1, axis2):
x = convert_to_tensor(x)
return torch.swapaxes(x, axis0=axis1, axis1=axis2)
def take(x, indices, axis=None):
x = convert_to_tensor(x)
indices = convert_to_tensor(indices).long()
if x.ndim == 2 and axis == 0:
# This case is equivalent to embedding lookup.
return torch.nn.functional.embedding(indices, x)
if axis is None:
x = torch.reshape(x, (-1,))
axis = 0
if axis is not None:
# make sure axis is non-negative
axis = len(x.shape) + axis if axis < 0 else axis
shape = x.shape[:axis] + indices.shape + x.shape[axis + 1 :]
# ravel the `indices` since `index_select` expects `indices`
# to be a vector (1-D tensor).
indices = indices.ravel()
out = torch.index_select(x, dim=axis, index=indices).squeeze(axis)
return out.reshape(shape)
return torch.take(x, index=indices)
def take_along_axis(x, indices, axis=None):
x = convert_to_tensor(x)
indices = convert_to_tensor(indices).long()
return torch.take_along_dim(x, indices, dim=axis)
def tan(x):
x = convert_to_tensor(x)
return torch.tan(x)
def tanh(x):
x = convert_to_tensor(x)
return torch.tanh(x)
def tensordot(x1, x2, axes=2):
x1 = convert_to_tensor(x1)
x2 = convert_to_tensor(x2)
result_dtype = dtypes.result_type(x1.dtype, x2.dtype)
# TODO: torch.tensordot only supports float types
compute_dtype = dtypes.result_type(result_dtype, float)
# TODO: torch.tensordot doesn't support float16 with cpu
if get_device() == "cpu" and compute_dtype == "float16":
compute_dtype = "float32"
x1 = cast(x1, compute_dtype)
x2 = cast(x2, compute_dtype)
# torch only handles dims=((0,), (1,)), numpy accepts axes=(0, 1).
if isinstance(axes, (list, tuple)):
first, second = axes
if not isinstance(first, (list, tuple)):
first = (first,)
if not isinstance(second, (list, tuple)):
second = (second,)
axes = (first, second)
return cast(torch.tensordot(x1, x2, dims=axes), result_dtype)
def round(x, decimals=0):
x = convert_to_tensor(x)
ori_dtype = standardize_dtype(x.dtype)
# TODO: torch.round doesn't support int8, int16, int32, int64, uint8
if "int" in ori_dtype:
x = cast(x, config.floatx())
return cast(torch.round(x, decimals=decimals), ori_dtype)
return torch.round(x, decimals=decimals)
def tile(x, repeats):
if is_tensor(repeats):
repeats = tuple(repeats.int().numpy())
x = convert_to_tensor(x)
return torch.tile(x, dims=repeats)
def trace(x, offset=None, axis1=None, axis2=None):
x = convert_to_tensor(x)
dtype = standardize_dtype(x.dtype)
if dtype != "int64":
dtype = dtypes.result_type(dtype, "int32")
return torch.sum(
torch.diagonal(x, offset, axis1, axis2),
dim=-1,
dtype=to_torch_dtype(dtype),
)
def tri(N, M=None, k=0, dtype=None):
dtype = to_torch_dtype(dtype or config.floatx())
M = M or N
x = torch.ones((N, M), dtype=dtype, device=get_device())
return torch.tril(x, diagonal=k)
def tril(x, k=0):
x = convert_to_tensor(x)
return torch.tril(x, diagonal=k)
def triu(x, k=0):
x = convert_to_tensor(x)
return torch.triu(x, diagonal=k)
def vdot(x1, x2):
x1 = convert_to_tensor(x1)
x2 = convert_to_tensor(x2)
result_dtype = dtypes.result_type(x1.dtype, x2.dtype)
# TODO: torch.vdot only supports float types
compute_dtype = dtypes.result_type(result_dtype, float)
# TODO: torch.vdot doesn't support float16 with cpu
if get_device() == "cpu" and compute_dtype == "float16":
compute_dtype = "float32"
x1 = cast(x1, compute_dtype)
x2 = cast(x2, compute_dtype)
return cast(torch.vdot(x1, x2), result_dtype)
def vstack(xs):
xs = [convert_to_tensor(x) for x in xs]
return torch.vstack(xs)
def where(condition, x1, x2):
condition = convert_to_tensor(condition, dtype=bool)
if x1 is not None and x2 is not None:
x1 = convert_to_tensor(x1)
x2 = convert_to_tensor(x2)
return torch.where(condition, x1, x2)
else:
return torch.where(condition)
def divide(x1, x2):
if not isinstance(x1, (int, float)):
x1 = convert_to_tensor(x1)
if not isinstance(x2, (int, float)):
x2 = convert_to_tensor(x2)
return torch.divide(x1, x2)
def divide_no_nan(x1, x2):
if not isinstance(x1, (int, float)):
x1 = convert_to_tensor(x1)
if not isinstance(x2, (int, float)):
x2 = convert_to_tensor(x2)
return torch.where(x2 == 0, 0, torch.divide(x1, x2))
def true_divide(x1, x2):
return divide(x1, x2)
def power(x1, x2):
x1, x2 = convert_to_tensor(x1), convert_to_tensor(x2)
return torch.pow(x1, x2)
def negative(x):
x = convert_to_tensor(x)
return torch.negative(x)
def square(x):
x = convert_to_tensor(x)
if standardize_dtype(x.dtype) == "bool":
x = cast(x, "int32")
return torch.square(x)
def sqrt(x):
x = convert_to_tensor(x)
if standardize_dtype(x.dtype) == "int64":
x = cast(x, config.floatx())
return torch.sqrt(x)
def squeeze(x, axis=None):
x = convert_to_tensor(x)
if axis is not None:
return torch.squeeze(x, dim=axis)
return torch.squeeze(x)
def transpose(x, axes=None):
x = convert_to_tensor(x)
if axes is not None:
return torch.permute(x, dims=axes)
return x.T
def var(x, axis=None, keepdims=False):
x = convert_to_tensor(x)
compute_dtype = dtypes.result_type(x.dtype, "float32")
result_dtype = dtypes.result_type(x.dtype, float)
if axis == [] or axis == ():
# Torch handles the empty axis case differently from numpy.
return zeros_like(x, result_dtype)
# Bessel correction removed for numpy compatibility
x = cast(x, compute_dtype)
return cast(
torch.var(x, dim=axis, keepdim=keepdims, correction=0), result_dtype
)
def sum(x, axis=None, keepdims=False):
if isinstance(x, (list, tuple)):
x = stack(x)
x = convert_to_tensor(x)
if axis == () or axis == []:
# Torch handles the empty axis case differently from numpy.
return x
dtype = standardize_dtype(x.dtype)
# follow jax's rule
# TODO: torch doesn't support uint32
if dtype in ("bool", "uint8", "int8", "int16"):
dtype = "int32"
if axis is not None:
return cast(torch.sum(x, axis=axis, keepdim=keepdims), dtype)
return cast(torch.sum(x), dtype)
def eye(N, M=None, k=None, dtype=None):
dtype = to_torch_dtype(dtype or config.floatx())
M = N if M is None else M
k = 0 if k is None else k
if k == 0:
# TODO: torch.eye doesn't support bfloat16 with cpu
if get_device() == "cpu" and dtype == torch.bfloat16:
return cast(
torch.eye(
N, M, dtype=to_torch_dtype("float32"), device=get_device()
),
dtype,
)
return torch.eye(N, M, dtype=dtype, device=get_device())
diag_length = builtins.max(N, M)
diag = torch.ones(diag_length, dtype=dtype, device=get_device())
return torch.diag(diag, diagonal=k)[:N, :M]
def floor_divide(x1, x2):
if not isinstance(x1, (int, float)):
x1 = convert_to_tensor(x1)
if not isinstance(x2, (int, float)):
x2 = convert_to_tensor(x2)
dtype = dtypes.result_type(
getattr(x1, "dtype", type(x1)),
getattr(x2, "dtype", type(x2)),
)
return cast(torch.floor_divide(x1, x2), dtype)
def logical_xor(x1, x2):
x1, x2 = convert_to_tensor(x1), convert_to_tensor(x2)
return torch.logical_xor(x1, x2)
| keras/keras/backend/torch/numpy.py/0 | {
"file_path": "keras/keras/backend/torch/numpy.py",
"repo_id": "keras",
"token_count": 20264
} | 183 |
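Several of the reductions above (`median`, `quantile`) work around torch's single-dimension reduction ops by permuting the reduced axes to the end, flattening them into one trailing dimension, and reducing once over that dimension. A small standalone sketch of the same permute-and-reshape trick (assumes `torch` is installed; the helper name is made up for illustration):

```python
import math

import torch


def reduce_median_multi_axis(x, axis):
    # Normalize negative axes, move them to the end, flatten, reduce once.
    axis = [a if a >= 0 else a + x.ndim for a in axis]
    other_dims = sorted(set(range(x.ndim)) - set(axis))
    x_permed = x.permute(*other_dims, *axis)
    kept_shape = [x.shape[i] for i in other_dims]
    flat = x_permed.reshape(*kept_shape, math.prod(x.shape[i] for i in axis))
    return torch.median(flat, dim=-1).values


x = torch.arange(24, dtype=torch.float32).reshape(2, 3, 4)
out = reduce_median_multi_axis(x, axis=[1, 2])
assert out.shape == (2,)
```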
from keras.callbacks.backup_and_restore_callback import BackupAndRestore
from keras.callbacks.callback import Callback
from keras.callbacks.callback_list import CallbackList
from keras.callbacks.csv_logger import CSVLogger
from keras.callbacks.early_stopping import EarlyStopping
from keras.callbacks.history import History
from keras.callbacks.lambda_callback import LambdaCallback
from keras.callbacks.learning_rate_scheduler import LearningRateScheduler
from keras.callbacks.model_checkpoint import ModelCheckpoint
from keras.callbacks.progbar_logger import ProgbarLogger
from keras.callbacks.reduce_lr_on_plateau import ReduceLROnPlateau
from keras.callbacks.remote_monitor import RemoteMonitor
from keras.callbacks.swap_ema_weights import SwapEMAWeights
from keras.callbacks.tensorboard import TensorBoard
from keras.callbacks.terminate_on_nan import TerminateOnNaN
| keras/keras/callbacks/__init__.py/0 | {
"file_path": "keras/keras/callbacks/__init__.py",
"repo_id": "keras",
"token_count": 256
} | 184 |
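The module above only re-exports the built-in callback classes, so they can all be imported from `keras.callbacks` directly. A typical, purely illustrative usage (the model and data are assumed to exist elsewhere):

```python
from keras import callbacks

# All of these names resolve to the classes re-exported above.
cbs = [
    callbacks.EarlyStopping(monitor="val_loss", patience=3),
    callbacks.ModelCheckpoint("checkpoint.keras", monitor="val_loss"),
]
# model.fit(x_train, y_train, validation_data=(x_val, y_val), callbacks=cbs)
```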
import os
import warnings
import pytest
from keras import callbacks
from keras import layers
from keras import metrics
from keras import models
from keras import saving
from keras import testing
from keras.models import Sequential
from keras.testing import test_utils
from keras.utils import numerical_utils
try:
import h5py
except ImportError:
h5py = None
TRAIN_SAMPLES = 30
TEST_SAMPLES = 30
NUM_CLASSES = 3
INPUT_DIM = 3
NUM_HIDDEN = 5
BATCH_SIZE = 5
class ModelCheckpointTest(testing.TestCase):
@pytest.mark.skipif(
h5py is None,
reason="`h5py` is a required dependency for `ModelCheckpoint` tests.",
)
@pytest.mark.requires_trainable_backend
def test_model_checkpoint_options(self):
def get_model():
model = Sequential(
[
layers.Dense(NUM_HIDDEN, activation="relu"),
layers.Dense(NUM_CLASSES, activation="softmax"),
]
)
model.compile(
loss="categorical_crossentropy",
optimizer="sgd",
metrics=[metrics.Accuracy("acc")],
)
return model
model = get_model()
temp_dir = self.get_temp_dir()
# Save model to a subdir inside the temp_dir so we can test
# automatic directory creation.
filepath = os.path.join(temp_dir, "subdir", "checkpoint.keras")
(x_train, y_train), (x_test, y_test) = test_utils.get_test_data(
random_seed=42,
train_samples=TRAIN_SAMPLES,
test_samples=TEST_SAMPLES,
input_shape=(INPUT_DIM,),
num_classes=NUM_CLASSES,
)
y_test = numerical_utils.to_categorical(y_test, num_classes=NUM_CLASSES)
y_train = numerical_utils.to_categorical(
y_train, num_classes=NUM_CLASSES
)
# Case 1
monitor = "val_loss"
save_best_only = False
mode = "auto"
cbks = [
callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
self.assertTrue(os.path.exists(filepath))
os.remove(filepath)
# Case 2
mode = "min"
cbks = [
callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
self.assertTrue(os.path.exists(filepath))
os.remove(filepath)
# Case 3
mode = "max"
monitor = "val_acc"
cbks = [
callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
self.assertTrue(os.path.exists(filepath))
os.remove(filepath)
# Case 4
save_best_only = True
cbks = [
callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
self.assertTrue(os.path.exists(filepath))
os.remove(filepath)
# Case 5: metric not available.
cbks = [
callbacks.ModelCheckpoint(
filepath, monitor="unknown", save_best_only=True
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
# File won't be written.
self.assertFalse(os.path.exists(filepath))
# Case 6
with warnings.catch_warnings(record=True) as warning_logs:
warnings.simplefilter("always")
callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode="unknown",
)
self.assertIn(
"ModelCheckpoint mode 'unknown' is unknown",
str(warning_logs[-1].message),
)
# Case 8a: `ModelCheckpoint` with an integer `save_freq`
temp_dir = self.get_temp_dir()
filepath = os.path.join(temp_dir, "checkpoint.epoch{epoch:02d}.keras")
save_best_only = False
cbks = [
callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode=mode,
save_freq=15,
)
]
self.assertFalse(os.path.exists(filepath.format(epoch=3)))
model.fit(
x_train,
y_train,
            batch_size=6,  # 5 batches / epoch, so saves happen every 3 epochs
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=10,
verbose=0,
)
self.assertFalse(os.path.exists(filepath.format(epoch=1)))
self.assertFalse(os.path.exists(filepath.format(epoch=2)))
self.assertTrue(os.path.exists(filepath.format(epoch=3)))
self.assertFalse(os.path.exists(filepath.format(epoch=4)))
self.assertFalse(os.path.exists(filepath.format(epoch=5)))
self.assertTrue(os.path.exists(filepath.format(epoch=6)))
self.assertFalse(os.path.exists(filepath.format(epoch=7)))
self.assertFalse(os.path.exists(filepath.format(epoch=8)))
self.assertTrue(os.path.exists(filepath.format(epoch=9)))
os.remove(filepath.format(epoch=3))
os.remove(filepath.format(epoch=6))
os.remove(filepath.format(epoch=9))
# Case 8b: `ModelCheckpoint` with int `save_freq` & `save_weights_only`
temp_dir = self.get_temp_dir()
filepath = os.path.join(
temp_dir, "checkpoint.epoch{epoch:02d}.weights.h5"
)
cbks = [
callbacks.ModelCheckpoint(
filepath, monitor=monitor, save_freq=15, save_weights_only=True
)
]
self.assertFalse(os.path.exists(filepath.format(epoch=3)))
model.fit(
x_train,
y_train,
batch_size=6,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=10,
verbose=0,
)
self.assertFalse(os.path.exists(filepath.format(epoch=1)))
self.assertFalse(os.path.exists(filepath.format(epoch=2)))
self.assertTrue(os.path.exists(filepath.format(epoch=3)))
self.assertFalse(os.path.exists(filepath.format(epoch=4)))
self.assertFalse(os.path.exists(filepath.format(epoch=5)))
self.assertTrue(os.path.exists(filepath.format(epoch=6)))
self.assertFalse(os.path.exists(filepath.format(epoch=7)))
self.assertFalse(os.path.exists(filepath.format(epoch=8)))
self.assertTrue(os.path.exists(filepath.format(epoch=9)))
# Case 9: `ModelCheckpoint` with valid and invalid save_freq argument.
with self.assertRaisesRegex(ValueError, "Unrecognized save_freq"):
callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
save_weights_only=True,
mode=mode,
save_freq="invalid_save_freq",
)
# The following should not raise ValueError.
callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
save_weights_only=True,
mode=mode,
save_freq="epoch",
)
callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
save_weights_only=True,
mode=mode,
save_freq=3,
)
# Case 10a: `ModelCheckpoint` save with batch in filename.
temp_dir = self.get_temp_dir()
filepath = os.path.join(
temp_dir, "checkpoint.epoch{epoch:02d}batch{batch:02d}.keras"
)
cbks = [
callbacks.ModelCheckpoint(filepath, monitor=monitor, save_freq=1)
]
model.fit(
x_train,
y_train,
batch_size=15,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=5,
verbose=1,
)
self.assertTrue(os.path.exists(filepath.format(epoch=1, batch=1)))
self.assertTrue(os.path.exists(filepath.format(epoch=1, batch=2)))
self.assertTrue(os.path.exists(filepath.format(epoch=2, batch=1)))
self.assertTrue(os.path.exists(filepath.format(epoch=2, batch=2)))
self.assertTrue(os.path.exists(filepath.format(epoch=3, batch=1)))
self.assertTrue(os.path.exists(filepath.format(epoch=3, batch=2)))
self.assertTrue(os.path.exists(filepath.format(epoch=4, batch=1)))
self.assertTrue(os.path.exists(filepath.format(epoch=4, batch=2)))
self.assertTrue(os.path.exists(filepath.format(epoch=5, batch=1)))
self.assertTrue(os.path.exists(filepath.format(epoch=5, batch=2)))
# Case 10b: `ModelCheckpoint` save weights with batch in filename.
temp_dir = self.get_temp_dir()
filepath = os.path.join(
temp_dir, "checkpoint.epoch{epoch:02d}batch{batch:02d}.weights.h5"
)
cbks = [
callbacks.ModelCheckpoint(
filepath, monitor=monitor, save_freq=1, save_weights_only=True
)
]
model.fit(
x_train,
y_train,
batch_size=15,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=5,
verbose=1,
)
self.assertTrue(os.path.exists(filepath.format(epoch=1, batch=1)))
self.assertTrue(os.path.exists(filepath.format(epoch=1, batch=2)))
self.assertTrue(os.path.exists(filepath.format(epoch=2, batch=1)))
self.assertTrue(os.path.exists(filepath.format(epoch=2, batch=2)))
self.assertTrue(os.path.exists(filepath.format(epoch=3, batch=1)))
self.assertTrue(os.path.exists(filepath.format(epoch=3, batch=2)))
self.assertTrue(os.path.exists(filepath.format(epoch=4, batch=1)))
self.assertTrue(os.path.exists(filepath.format(epoch=4, batch=2)))
self.assertTrue(os.path.exists(filepath.format(epoch=5, batch=1)))
self.assertTrue(os.path.exists(filepath.format(epoch=5, batch=2)))
# Case 11: ModelCheckpoint saves model with initial_value_threshold
# param
mode = "max"
monitor = "val_acc"
initial_value_threshold = -0.01
save_best_only = True
filepath = os.path.join(temp_dir, "checkpoint.keras")
cbks = [
callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
initial_value_threshold=initial_value_threshold,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
self.assertTrue(os.path.exists(filepath))
os.remove(filepath)
# Case 12: ModelCheckpoint saves model with initial_value_threshold
# param
mode = "auto"
monitor = "val_loss"
initial_value_threshold = None
save_best_only = True
cbks = [
callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
initial_value_threshold=initial_value_threshold,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
self.assertTrue(os.path.exists(filepath))
os.remove(filepath)
        # Case 13: ModelCheckpoint doesn't save model if loss was minimum earlier
mode = "min"
monitor = "val_loss"
initial_value_threshold = 0
save_best_only = True
cbks = [
callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
initial_value_threshold=initial_value_threshold,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
self.assertFalse(os.path.exists(filepath))
        # Case 14: ModelCheckpoint doesn't save model if loss was min earlier in
# auto mode
mode = "auto"
monitor = "val_loss"
initial_value_threshold = 0
save_best_only = True
cbks = [
callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
initial_value_threshold=initial_value_threshold,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
self.assertFalse(os.path.exists(filepath))
@pytest.mark.skipif(
h5py is None,
reason="`h5py` is a required dependency for `ModelCheckpoint` tests.",
)
@pytest.mark.requires_trainable_backend
def test_model_checkpoint_loading(self):
def get_model():
inputs = layers.Input(shape=(INPUT_DIM,), batch_size=5)
x = layers.Dense(NUM_HIDDEN, activation="relu")(inputs)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)
functional_model = models.Model(inputs, outputs)
functional_model.compile(
loss="categorical_crossentropy",
optimizer="sgd",
metrics=[metrics.Accuracy("acc")],
)
return functional_model
(x_train, y_train), (x_test, y_test) = test_utils.get_test_data(
random_seed=42,
train_samples=TRAIN_SAMPLES,
test_samples=TEST_SAMPLES,
input_shape=(INPUT_DIM,),
num_classes=NUM_CLASSES,
)
y_test = numerical_utils.to_categorical(y_test, num_classes=NUM_CLASSES)
y_train = numerical_utils.to_categorical(
y_train, num_classes=NUM_CLASSES
)
# Model Checkpoint load model (default)
model = get_model()
temp_dir = self.get_temp_dir()
filepath = os.path.join(temp_dir, "checkpoint.model.keras")
mode = "auto"
monitor = "val_loss"
save_best_only = True
cbks = [
callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
ref_weights = model.get_weights()
self.assertTrue(os.path.exists(filepath))
new_model = saving.load_model(filepath)
new_weights = new_model.get_weights()
self.assertEqual(len(ref_weights), len(new_weights))
for ref_w, w in zip(ref_weights, new_weights):
self.assertAllClose(ref_w, w)
# Model Checkpoint load model weights
model = get_model()
temp_dir = self.get_temp_dir()
filepath = os.path.join(temp_dir, "checkpoint.weights.h5")
mode = "auto"
monitor = "val_loss"
save_best_only = True
cbks = [
callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
save_weights_only=True,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
ref_weights = model.get_weights()
self.assertTrue(os.path.exists(filepath))
new_model = get_model()
new_model.load_weights(filepath)
new_weights = new_model.get_weights()
self.assertEqual(len(ref_weights), len(new_weights))
for ref_w, w in zip(ref_weights, new_weights):
self.assertAllClose(ref_w, w)
| keras/keras/callbacks/model_checkpoint_test.py/0 | {
"file_path": "keras/keras/callbacks/model_checkpoint_test.py",
"repo_id": "keras",
"token_count": 9759
} | 185 |
import numpy as np
from keras.api_export import keras_export
from keras.utils.file_utils import get_file
@keras_export("keras.datasets.boston_housing.load_data")
def load_data(path="boston_housing.npz", test_split=0.2, seed=113):
"""Loads the Boston Housing dataset.
This is a dataset taken from the StatLib library which is maintained at
Carnegie Mellon University.
**WARNING:** This dataset has an ethical problem: the authors of this
dataset included a variable, "B", that may appear to assume that racial
self-segregation influences house prices. As such, we strongly discourage
the use of this dataset, unless in the context of illustrating ethical
issues in data science and machine learning.
Samples contain 13 attributes of houses at different locations around the
Boston suburbs in the late 1970s. Targets are the median values of
the houses at a location (in k$).
The attributes themselves are defined in the
[StatLib website](http://lib.stat.cmu.edu/datasets/boston).
Args:
path: path where to cache the dataset locally
(relative to `~/.keras/datasets`).
test_split: fraction of the data to reserve as test set.
seed: Random seed for shuffling the data
before computing the test split.
Returns:
Tuple of NumPy arrays: `(x_train, y_train), (x_test, y_test)`.
**x_train, x_test**: NumPy arrays with shape `(num_samples, 13)`
containing either the training samples (for x_train),
        or test samples (for x_test).
**y_train, y_test**: NumPy arrays of shape `(num_samples,)` containing the
target scalars. The targets are float scalars typically between 10 and
50 that represent the home prices in k$.
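    Example:
    A minimal usage sketch (the archive is downloaded and cached under
    `~/.keras/datasets` on first call; sample counts depend on `test_split`):
    ```python
    from keras.datasets import boston_housing
    (x_train, y_train), (x_test, y_test) = boston_housing.load_data()
    print(x_train.shape)  # (num_train_samples, 13)
    print(y_train.shape)  # (num_train_samples,)
    ```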
"""
assert 0 <= test_split < 1
origin_folder = (
"https://storage.googleapis.com/tensorflow/tf-keras-datasets/"
)
path = get_file(
path,
origin=origin_folder + "boston_housing.npz",
file_hash=( # noqa: E501
"f553886a1f8d56431e820c5b82552d9d95cfcb96d1e678153f8839538947dff5"
),
)
with np.load(path, allow_pickle=True) as f:
x = f["x"]
y = f["y"]
rng = np.random.RandomState(seed)
indices = np.arange(len(x))
rng.shuffle(indices)
x = x[indices]
y = y[indices]
x_train = np.array(x[: int(len(x) * (1 - test_split))])
y_train = np.array(y[: int(len(x) * (1 - test_split))])
x_test = np.array(x[int(len(x) * (1 - test_split)) :])
y_test = np.array(y[int(len(x) * (1 - test_split)) :])
return (x_train, y_train), (x_test, y_test)
| keras/keras/datasets/boston_housing.py/0 | {
"file_path": "keras/keras/datasets/boston_housing.py",
"repo_id": "keras",
"token_count": 1027
} | 186 |
"""Library for exporting inference-only Keras models/layers."""
from absl import logging
from keras import backend
from keras.api_export import keras_export
from keras.layers import Layer
from keras.models import Functional
from keras.models import Sequential
from keras.utils import io_utils
from keras.utils.module_utils import tensorflow as tf
@keras_export("keras.export.ExportArchive")
class ExportArchive:
"""ExportArchive is used to write SavedModel artifacts (e.g. for inference).
If you have a Keras model or layer that you want to export as SavedModel for
serving (e.g. via TensorFlow-Serving), you can use `ExportArchive`
to configure the different serving endpoints you need to make available,
as well as their signatures. Simply instantiate an `ExportArchive`,
use `track()` to register the layer(s) or model(s) to be used,
then use the `add_endpoint()` method to register a new serving endpoint.
When done, use the `write_out()` method to save the artifact.
The resulting artifact is a SavedModel and can be reloaded via
`tf.saved_model.load`.
Examples:
Here's how to export a model for inference.
```python
export_archive = ExportArchive()
export_archive.track(model)
export_archive.add_endpoint(
name="serve",
fn=model.call,
input_signature=[tf.TensorSpec(shape=(None, 3), dtype=tf.float32)],
)
export_archive.write_out("path/to/location")
# Elsewhere, we can reload the artifact and serve it.
# The endpoint we added is available as a method:
serving_model = tf.saved_model.load("path/to/location")
outputs = serving_model.serve(inputs)
```
Here's how to export a model with one endpoint for inference and one
endpoint for a training-mode forward pass (e.g. with dropout on).
```python
export_archive = ExportArchive()
export_archive.track(model)
export_archive.add_endpoint(
name="call_inference",
fn=lambda x: model.call(x, training=False),
input_signature=[tf.TensorSpec(shape=(None, 3), dtype=tf.float32)],
)
export_archive.add_endpoint(
name="call_training",
fn=lambda x: model.call(x, training=True),
input_signature=[tf.TensorSpec(shape=(None, 3), dtype=tf.float32)],
)
export_archive.write_out("path/to/location")
```
**Note on resource tracking:**
`ExportArchive` is able to automatically track all `tf.Variables` used
by its endpoints, so most of the time calling `.track(model)`
is not strictly required. However, if your model uses lookup layers such
as `IntegerLookup`, `StringLookup`, or `TextVectorization`,
it will need to be tracked explicitly via `.track(model)`.
Explicit tracking is also required if you need to be able to access
the properties `variables`, `trainable_variables`, or
`non_trainable_variables` on the revived archive.
"""
def __init__(self):
self._endpoint_names = []
self._endpoint_signatures = {}
self.tensorflow_version = tf.__version__
self._tf_trackable = tf.__internal__.tracking.AutoTrackable()
self._tf_trackable.variables = []
self._tf_trackable.trainable_variables = []
self._tf_trackable.non_trainable_variables = []
if backend.backend() not in ("tensorflow", "jax"):
raise NotImplementedError(
"The export API is only compatible with JAX and TF backends."
)
@property
def variables(self):
return self._tf_trackable.variables
@property
def trainable_variables(self):
return self._tf_trackable.trainable_variables
@property
def non_trainable_variables(self):
return self._tf_trackable.non_trainable_variables
def track(self, resource):
"""Track the variables (and other assets) of a layer or model."""
if backend.backend() == "tensorflow" and not isinstance(
resource, tf.__internal__.tracking.Trackable
):
raise ValueError(
"Invalid resource type. Expected an instance of a "
"TensorFlow `Trackable` (such as a Keras `Layer` or `Model`). "
f"Received instead an object of type '{type(resource)}'. "
f"Object received: {resource}"
)
if backend.backend() == "jax" and not isinstance(
resource, backend.jax.layer.JaxLayer
):
raise ValueError(
"Invalid resource type. Expected an instance of a "
"JAX-based Keras `Layer` or `Model`. "
f"Received instead an object of type '{type(resource)}'. "
f"Object received: {resource}"
)
if isinstance(resource, Layer):
if not resource.built:
raise ValueError(
"The layer provided has not yet been built. "
"It must be built before export."
)
# Layers in `_tracked` are not part of the trackables that get saved,
# because we're creating the attribute in a
# no_automatic_dependency_tracking scope.
if not hasattr(self, "_tracked"):
self._tracked = []
self._tracked.append(resource)
if isinstance(resource, Layer):
# Variables in the lists below are actually part of the trackables
# that get saved, because the lists are created in __init__.
if backend.backend() == "jax":
self._tf_trackable.variables += tf.nest.flatten(
tf.nest.map_structure(tf.Variable, resource.variables)
)
self._tf_trackable.trainable_variables += tf.nest.flatten(
tf.nest.map_structure(
tf.Variable, resource.trainable_variables
)
)
self._tf_trackable.non_trainable_variables += tf.nest.flatten(
tf.nest.map_structure(
tf.Variable, resource.non_trainable_variables
)
)
else:
self._tf_trackable.variables += resource.variables
self._tf_trackable.trainable_variables += (
resource.trainable_variables
)
self._tf_trackable.non_trainable_variables += (
resource.non_trainable_variables
)
def add_endpoint(self, name, fn, input_signature=None):
"""Register a new serving endpoint.
Arguments:
name: Str, name of the endpoint.
fn: A function. It should only leverage resources
(e.g. `tf.Variable` objects or `tf.lookup.StaticHashTable`
objects) that are available on the models/layers
tracked by the `ExportArchive` (you can call `.track(model)`
to track a new model).
The shape and dtype of the inputs to the function must be
known. For that purpose, you can either 1) make sure that
`fn` is a `tf.function` that has been called at least once, or
2) provide an `input_signature` argument that specifies the
shape and dtype of the inputs (see below).
input_signature: Used to specify the shape and dtype of the
inputs to `fn`. List of `tf.TensorSpec` objects (one
per positional input argument of `fn`). Nested arguments are
allowed (see below for an example showing a Functional model
with 2 input arguments).
Example:
Adding an endpoint using the `input_signature` argument when the
model has a single input argument:
```python
export_archive = ExportArchive()
export_archive.track(model)
export_archive.add_endpoint(
name="serve",
fn=model.call,
input_signature=[tf.TensorSpec(shape=(None, 3), dtype=tf.float32)],
)
```
Adding an endpoint using the `input_signature` argument when the
model has two positional input arguments:
```python
export_archive = ExportArchive()
export_archive.track(model)
export_archive.add_endpoint(
name="serve",
fn=model.call,
input_signature=[
tf.TensorSpec(shape=(None, 3), dtype=tf.float32),
tf.TensorSpec(shape=(None, 4), dtype=tf.float32),
],
)
```
Adding an endpoint using the `input_signature` argument when the
model has one input argument that is a list of 2 tensors (e.g.
a Functional model with 2 inputs):
```python
model = keras.Model(inputs=[x1, x2], outputs=outputs)
export_archive = ExportArchive()
export_archive.track(model)
export_archive.add_endpoint(
name="serve",
fn=model.call,
input_signature=[
[
tf.TensorSpec(shape=(None, 3), dtype=tf.float32),
tf.TensorSpec(shape=(None, 4), dtype=tf.float32),
],
],
)
```
This also works with dictionary inputs:
```python
model = keras.Model(inputs={"x1": x1, "x2": x2}, outputs=outputs)
export_archive = ExportArchive()
export_archive.track(model)
export_archive.add_endpoint(
name="serve",
fn=model.call,
input_signature=[
{
"x1": tf.TensorSpec(shape=(None, 3), dtype=tf.float32),
"x2": tf.TensorSpec(shape=(None, 4), dtype=tf.float32),
},
],
)
```
Adding an endpoint that is a `tf.function`:
```python
@tf.function()
def serving_fn(x):
return model(x)
# The function must be traced, i.e. it must be called at least once.
serving_fn(tf.random.normal(shape=(2, 3)))
export_archive = ExportArchive()
export_archive.track(model)
export_archive.add_endpoint(name="serve", fn=serving_fn)
```
"""
if name in self._endpoint_names:
raise ValueError(f"Endpoint name '{name}' is already taken.")
if input_signature:
if backend.backend() == "tensorflow":
decorated_fn = tf.function(fn, input_signature=input_signature)
else: # JAX backend
fn = self._convert_jax2tf_function(fn, input_signature)
decorated_fn = tf.function(
fn, input_signature=input_signature, autograph=False
)
self._endpoint_signatures[name] = input_signature
else:
if isinstance(fn, tf.types.experimental.GenericFunction):
if not fn._list_all_concrete_functions():
raise ValueError(
f"The provided tf.function '{fn}' "
"has never been called. "
"To specify the expected shape and dtype "
"of the function's arguments, "
"you must either provide a function that "
"has been called at least once, or alternatively pass "
"an `input_signature` argument in `add_endpoint()`."
)
decorated_fn = fn
else:
raise ValueError(
"If the `fn` argument provided is not a `tf.function`, "
"you must provide an `input_signature` argument to "
"specify the shape and dtype of the function arguments. "
"Example:\n\n"
"export_archive.add_endpoint(\n"
" name='call',\n"
" fn=model.call,\n"
" input_signature=[\n"
" tf.TensorSpec(\n"
" shape=(None, 224, 224, 3),\n"
" dtype=tf.float32,\n"
" )\n"
" ],\n"
")"
)
setattr(self._tf_trackable, name, decorated_fn)
self._endpoint_names.append(name)
def add_variable_collection(self, name, variables):
"""Register a set of variables to be retrieved after reloading.
Arguments:
name: The string name for the collection.
variables: A tuple/list/set of `tf.Variable` instances.
Example:
```python
export_archive = ExportArchive()
export_archive.track(model)
# Register an endpoint
export_archive.add_endpoint(
name="serve",
fn=model.call,
input_signature=[tf.TensorSpec(shape=(None, 3), dtype=tf.float32)],
)
# Save a variable collection
export_archive.add_variable_collection(
name="optimizer_variables", variables=model.optimizer.variables)
export_archive.write_out("path/to/location")
# Reload the object
revived_object = tf.saved_model.load("path/to/location")
# Retrieve the variables
optimizer_variables = revived_object.optimizer_variables
```
"""
if not isinstance(variables, (list, tuple, set)):
raise ValueError(
"Expected `variables` to be a list/tuple/set. "
f"Received instead object of type '{type(variables)}'."
)
# Ensure that all variables added are either tf.Variables
# or Variables created by Keras 3 with the TF or JAX backends.
if not all(
isinstance(v, (tf.Variable, backend.Variable)) for v in variables
):
raise ValueError(
"Expected all elements in `variables` to be "
"`tf.Variable` instances. Found instead the following types: "
f"{list(set(type(v) for v in variables))}"
)
if backend.backend() == "jax":
variables = tf.nest.flatten(
tf.nest.map_structure(tf.Variable, variables)
)
setattr(self._tf_trackable, name, list(variables))
def write_out(self, filepath, options=None):
"""Write the corresponding SavedModel to disk.
Arguments:
filepath: `str` or `pathlib.Path` object.
Path where to save the artifact.
options: `tf.saved_model.SaveOptions` object that specifies
SavedModel saving options.
**Note on TF-Serving**: all endpoints registered via `add_endpoint()`
are made visible for TF-Serving in the SavedModel artifact. In addition,
the first endpoint registered is made visible under the alias
`"serving_default"` (unless an endpoint with the name
`"serving_default"` was already registered manually),
since TF-Serving requires this endpoint to be set.
"""
if not self._endpoint_names:
raise ValueError(
"No endpoints have been set yet. Call add_endpoint()."
)
if backend.backend() == "tensorflow":
self._filter_and_track_resources()
signatures = {}
for name in self._endpoint_names:
signatures[name] = self._get_concrete_fn(name)
# Add "serving_default" signature key for TFServing
if "serving_default" not in self._endpoint_names:
signatures["serving_default"] = self._get_concrete_fn(
self._endpoint_names[0]
)
tf.saved_model.save(
self._tf_trackable,
filepath,
options=options,
signatures=signatures,
)
# Print out available endpoints
endpoints = "\n\n".join(
_print_signature(getattr(self._tf_trackable, name), name)
for name in self._endpoint_names
)
io_utils.print_msg(
f"Saved artifact at '{filepath}'. "
"The following endpoints are available:\n\n"
f"{endpoints}"
)
def _get_concrete_fn(self, endpoint):
"""Workaround for some SavedModel quirks."""
if endpoint in self._endpoint_signatures:
return getattr(self._tf_trackable, endpoint)
else:
traces = getattr(self._tf_trackable, endpoint)._trackable_children(
"saved_model"
)
return list(traces.values())[0]
def _get_variables_used_by_endpoints(self):
fns = [self._get_concrete_fn(name) for name in self._endpoint_names]
return _list_variables_used_by_fns(fns)
def _filter_and_track_resources(self):
"""Track resources used by endpoints / referenced in `track()` calls."""
# Start by extracting variables from endpoints.
fns = [self._get_concrete_fn(name) for name in self._endpoint_names]
tvs, ntvs = _list_variables_used_by_fns(fns)
self._tf_trackable._all_variables = list(tvs + ntvs)
# Next, track lookup tables.
# Hopefully, one day this will be automated at the tf.function level.
self._tf_trackable._misc_assets = []
from keras.layers import IntegerLookup
from keras.layers import StringLookup
from keras.layers import TextVectorization
if hasattr(self, "_tracked"):
for root in self._tracked:
descendants = tf.train.TrackableView(root).descendants()
for trackable in descendants:
if isinstance(
trackable,
(IntegerLookup, StringLookup, TextVectorization),
):
self._tf_trackable._misc_assets.append(trackable)
def _convert_jax2tf_function(self, fn, input_signature):
from jax.experimental import jax2tf
native_serialization = self._check_device_compatible()
shapes = []
for spec in input_signature:
shapes.append(self._spec_to_poly_shape(spec))
return jax2tf.convert(
fn,
polymorphic_shapes=shapes,
native_serialization=native_serialization,
)
def _spec_to_poly_shape(self, spec):
if isinstance(spec, (dict, list)):
return tf.nest.map_structure(self._spec_to_poly_shape, spec)
spec_shape = spec.shape
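        # Replace unknown (None) dimensions with a symbolic batch token, e.g.
        # a spec with shape (None, 3) becomes the polymorphic shape string
        # "(b, 3)" (illustrative values).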
spec_shape = str(spec_shape).replace("None", "b")
return spec_shape
def _check_device_compatible(self):
from jax import default_backend as jax_device
if (
jax_device() == "gpu"
and len(tf.config.list_physical_devices("GPU")) == 0
):
            logging.warning(
                "JAX backend is using GPU for export, but installed "
                "TF package cannot access GPU, so reloading the model with "
                "the TF runtime in the same environment will not work. "
                "To use JAX-native serialization for high-performance export "
                "and serving, please install `tensorflow-gpu` and ensure "
                "CUDA version compatibility between your JAX and TF "
                "installations."
)
return False
else:
return True
def export_model(model, filepath):
export_archive = ExportArchive()
export_archive.track(model)
if isinstance(model, (Functional, Sequential)):
input_signature = tf.nest.map_structure(_make_tensor_spec, model.inputs)
if isinstance(input_signature, list) and len(input_signature) > 1:
input_signature = [input_signature]
export_archive.add_endpoint("serve", model.__call__, input_signature)
else:
save_spec = _get_save_spec(model)
if not save_spec or not model._called:
            raise ValueError(
                "The model provided has never been called. "
                "It must be called at least once before export."
)
input_signature = [save_spec]
export_archive.add_endpoint("serve", model.__call__, input_signature)
export_archive.write_out(filepath)
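# Illustrative usage of `export_model` (a sketch; `model` is assumed to be a
# built Keras model and the filepath is arbitrary):
#   export_model(model, "path/to/artifact")
#   reloaded = tf.saved_model.load("path/to/artifact")
#   outputs = reloaded.serve(inputs)  # "serve" is the endpoint added above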
def _get_save_spec(model):
shapes_dict = getattr(model, "_build_shapes_dict", None)
if not shapes_dict:
return None
if len(shapes_dict) == 1:
return tf.TensorSpec(
shape=list(shapes_dict.values())[0], dtype=model.input_dtype
)
specs = {}
for key, value in shapes_dict.items():
        # Strip the literal "_shape" suffix; str.rstrip would strip any
        # trailing characters from the set "_shape" and mangle some keys.
        if key.endswith("_shape"):
            key = key[: -len("_shape")]
specs[key] = tf.TensorSpec(shape=value, dtype=model.input_dtype)
return specs
@keras_export("keras.layers.TFSMLayer")
class TFSMLayer(Layer):
"""Reload a Keras model/layer that was saved via SavedModel / ExportArchive.
Arguments:
filepath: `str` or `pathlib.Path` object. The path to the SavedModel.
call_endpoint: Name of the endpoint to use as the `call()` method
of the reloaded layer. If the SavedModel was created
via `model.export()`,
then the default endpoint name is `'serve'`. In other cases
it may be named `'serving_default'`.
Example:
```python
model.export("path/to/artifact")
reloaded_layer = TFSMLayer("path/to/artifact")
outputs = reloaded_layer(inputs)
```
The reloaded object can be used like a regular Keras layer, and supports
training/fine-tuning of its trainable weights. Note that the reloaded
object retains none of the internal structure or custom methods of the
original object -- it's a brand new layer created around the saved
function.
**Limitations:**
* Only call endpoints with a single `inputs` tensor argument
(which may optionally be a dict/tuple/list of tensors) are supported.
For endpoints with multiple separate input tensor arguments, consider
subclassing `TFSMLayer` and implementing a `call()` method with a
custom signature.
* If you need training-time behavior to differ from inference-time behavior
(i.e. if you need the reloaded object to support a `training=True` argument
in `__call__()`), make sure that the training-time call function is
saved as a standalone endpoint in the artifact, and provide its name
to the `TFSMLayer` via the `call_training_endpoint` argument.
"""
def __init__(
self,
filepath,
call_endpoint="serve",
call_training_endpoint=None,
trainable=True,
name=None,
dtype=None,
):
# Initialize an empty layer, then add_weight() etc. as needed.
super().__init__(trainable=trainable, name=name, dtype=dtype)
self._reloaded_obj = tf.saved_model.load(filepath)
self.filepath = filepath
self.call_endpoint = call_endpoint
self.call_training_endpoint = call_training_endpoint
# Resolve the call function.
if hasattr(self._reloaded_obj, call_endpoint):
# Case 1: it's set as an attribute.
self.call_endpoint_fn = getattr(self._reloaded_obj, call_endpoint)
elif call_endpoint in self._reloaded_obj.signatures:
# Case 2: it's listed in the `signatures` field.
self.call_endpoint_fn = self._reloaded_obj.signatures[call_endpoint]
else:
raise ValueError(
f"The endpoint '{call_endpoint}' "
"is neither an attribute of the reloaded SavedModel, "
"nor an entry in the `signatures` field of "
"the reloaded SavedModel. Select another endpoint via "
"the `call_endpoint` argument. Available endpoints for "
"this SavedModel: "
f"{list(self._reloaded_obj.signatures.keys())}"
)
# Resolving the training function.
if call_training_endpoint:
if hasattr(self._reloaded_obj, call_training_endpoint):
self.call_training_endpoint_fn = getattr(
self._reloaded_obj, call_training_endpoint
)
elif call_training_endpoint in self._reloaded_obj.signatures:
self.call_training_endpoint_fn = self._reloaded_obj.signatures[
call_training_endpoint
]
else:
raise ValueError(
f"The endpoint '{call_training_endpoint}' "
"is neither an attribute of the reloaded SavedModel, "
"nor an entry in the `signatures` field of "
"the reloaded SavedModel. Available endpoints for "
"this SavedModel: "
f"{list(self._reloaded_obj.signatures.keys())}"
)
# Add trainable and non-trainable weights from the call_endpoint_fn.
all_fns = [self.call_endpoint_fn]
if call_training_endpoint:
all_fns.append(self.call_training_endpoint_fn)
tvs, ntvs = _list_variables_used_by_fns(all_fns)
for v in tvs:
self._add_existing_weight(v)
for v in ntvs:
self._add_existing_weight(v)
self.built = True
def _add_existing_weight(self, weight):
"""Tracks an existing weight."""
self._track_variable(weight)
def call(self, inputs, training=False, **kwargs):
if training:
if self.call_training_endpoint:
return self.call_training_endpoint_fn(inputs, **kwargs)
return self.call_endpoint_fn(inputs, **kwargs)
def get_config(self):
base_config = super().get_config()
config = {
# Note: this is not intended to be portable.
"filepath": self.filepath,
"call_endpoint": self.call_endpoint,
"call_training_endpoint": self.call_training_endpoint,
}
return {**base_config, **config}
def _make_tensor_spec(x):
return tf.TensorSpec(x.shape, dtype=x.dtype, name=x.name)
def _print_signature(fn, name):
concrete_fn = fn._list_all_concrete_functions()[0]
pprinted_signature = concrete_fn.pretty_printed_signature(verbose=True)
lines = pprinted_signature.split("\n")
lines = [f"* Endpoint '{name}'"] + lines[1:]
endpoint = "\n".join(lines)
return endpoint
def _list_variables_used_by_fns(fns):
trainable_variables = []
non_trainable_variables = []
trainable_variables_ids = set()
non_trainable_variables_ids = set()
for fn in fns:
if hasattr(fn, "concrete_functions"):
concrete_functions = fn.concrete_functions
elif hasattr(fn, "get_concrete_function"):
concrete_functions = [fn.get_concrete_function()]
else:
concrete_functions = [fn]
for concrete_fn in concrete_functions:
for v in concrete_fn.trainable_variables:
if id(v) not in trainable_variables_ids:
trainable_variables.append(v)
trainable_variables_ids.add(id(v))
for v in concrete_fn.variables:
if (
id(v) not in trainable_variables_ids
and id(v) not in non_trainable_variables_ids
):
non_trainable_variables.append(v)
non_trainable_variables_ids.add(id(v))
return trainable_variables, non_trainable_variables
| keras/keras/export/export_lib.py/0 | {
"file_path": "keras/keras/export/export_lib.py",
"repo_id": "keras",
"token_count": 12558
} | 187 |
from keras import activations
from keras import constraints
from keras import initializers
from keras import regularizers
from keras.api_export import keras_export
from keras.layers.input_spec import InputSpec
from keras.layers.layer import Layer
@keras_export("keras.layers.PReLU")
class PReLU(Layer):
"""Parametric Rectified Linear Unit activation layer.
Formula:
``` python
f(x) = alpha * x for x < 0
f(x) = x for x >= 0
```
where `alpha` is a learned array with the same shape as x.
Args:
alpha_initializer: Initializer function for the weights.
alpha_regularizer: Regularizer for the weights.
alpha_constraint: Constraint for the weights.
shared_axes: The axes along which to share learnable parameters for the
activation function. For example, if the incoming feature maps are
from a 2D convolution with output shape
`(batch, height, width, channels)`, and you wish to share parameters
across space so that each filter only has one set of parameters,
set `shared_axes=[1, 2]`.
**kwargs: Base layer keyword arguments, such as `name` and `dtype`.
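    Example:
    A minimal usage sketch (shapes are illustrative):
    ``` python
    x = keras.random.normal(shape=(2, 8, 8, 16))
    layer = keras.layers.PReLU(shared_axes=[1, 2])
    y = layer(x)  # same shape as `x`: (2, 8, 8, 16)
    ```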
"""
def __init__(
self,
alpha_initializer="Zeros",
alpha_regularizer=None,
alpha_constraint=None,
shared_axes=None,
**kwargs
):
super().__init__(**kwargs)
self.supports_masking = True
self.alpha_initializer = initializers.get(alpha_initializer)
self.alpha_regularizer = regularizers.get(alpha_regularizer)
self.alpha_constraint = constraints.get(alpha_constraint)
if shared_axes is None:
self.shared_axes = None
elif not isinstance(shared_axes, (list, tuple)):
self.shared_axes = [shared_axes]
else:
self.shared_axes = list(shared_axes)
def build(self, input_shape):
param_shape = list(input_shape[1:])
if self.shared_axes is not None:
for i in self.shared_axes:
param_shape[i - 1] = 1
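        # Worked example (illustrative): for input_shape=(None, 8, 8, 16) and
        # shared_axes=[1, 2], param_shape becomes [1, 1, 16], i.e. one alpha
        # per channel, shared across both spatial axes.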
self.alpha = self.add_weight(
shape=param_shape,
name="alpha",
initializer=self.alpha_initializer,
regularizer=self.alpha_regularizer,
constraint=self.alpha_constraint,
)
# Set input spec
axes = {}
if self.shared_axes:
for i in range(1, len(input_shape)):
if i not in self.shared_axes:
axes[i] = input_shape[i]
self.input_spec = InputSpec(ndim=len(input_shape), axes=axes)
self.built = True
def call(self, inputs):
pos = activations.relu(inputs)
neg = -self.alpha * activations.relu(-inputs)
return pos + neg
def get_config(self):
config = super().get_config()
config.update(
{
"alpha_initializer": initializers.serialize(
self.alpha_initializer
),
"alpha_regularizer": regularizers.serialize(
self.alpha_regularizer
),
"alpha_constraint": constraints.serialize(
self.alpha_constraint
),
"shared_axes": self.shared_axes,
}
)
return config
def compute_output_shape(self, input_shape):
return input_shape
| keras/keras/layers/activations/prelu.py/0 | {
"file_path": "keras/keras/layers/activations/prelu.py",
"repo_id": "keras",
"token_count": 1566
} | 188 |
"""Keras base class for convolution layers."""
from keras import activations
from keras import constraints
from keras import initializers
from keras import ops
from keras import regularizers
from keras.backend import standardize_data_format
from keras.layers.input_spec import InputSpec
from keras.layers.layer import Layer
from keras.ops.operation_utils import compute_conv_output_shape
from keras.utils.argument_validation import standardize_padding
from keras.utils.argument_validation import standardize_tuple
class BaseConv(Layer):
"""Abstract N-D convolution layer (private, used as implementation base).
This layer creates a convolution kernel that is convolved (actually
cross-correlated) with the layer input to produce a tensor of outputs. If
`use_bias` is True (and a `bias_initializer` is provided), a bias vector is
created and added to the outputs. Finally, if `activation` is not `None`, it
is applied to the outputs as well.
Note: layer attributes cannot be modified after the layer has been called
once (except the `trainable` attribute).
Args:
rank: int, the rank of the convolution, e.g. 2 for 2D convolution.
filters: int, the dimension of the output space (the number of filters
in the convolution).
kernel_size: int or tuple/list of `rank` integers, specifying the size
of the convolution window.
strides: int or tuple/list of `rank` integers, specifying the stride
length of the convolution. If only one int is specified, the same
stride size will be used for all dimensions. `strides > 1` is
incompatible with `dilation_rate > 1`.
padding: string, either `"valid"` or `"same"` (case-insensitive).
`"valid"` means no padding. `"same"` results in padding evenly to
the left/right or up/down of the input. When `padding="same"` and
`strides=1`, the output has the same size as the input.
data_format: string, either `"channels_last"` or `"channels_first"`.
The ordering of the dimensions in the inputs. `"channels_last"`
corresponds to inputs with shape `(batch, steps, features)`
while `"channels_first"` corresponds to inputs with shape
`(batch, features, steps)`. It defaults to the `image_data_format`
value found in your Keras config file at `~/.keras/keras.json`.
If you never set it, then it will be `"channels_last"`.
dilation_rate: int or tuple/list of `rank` integers, specifying the
dilation rate to use for dilated convolution. If only one int is
specified, the same dilation rate will be used for all dimensions.
groups: A positive int specifying the number of groups in which the
input is split along the channel axis. Each group is convolved
separately with `filters // groups` filters. The output is the
concatenation of all the `groups` results along the channel axis.
Input channels and `filters` must both be divisible by `groups`.
activation: Activation function. If `None`, no activation is applied.
use_bias: bool, if `True`, bias will be added to the output.
kernel_initializer: Initializer for the convolution kernel. If `None`,
the default initializer (`"glorot_uniform"`) will be used.
bias_initializer: Initializer for the bias vector. If `None`, the
default initializer (`"zeros"`) will be used.
kernel_regularizer: Optional regularizer for the convolution kernel.
bias_regularizer: Optional regularizer for the bias vector.
activity_regularizer: Optional regularizer function for the output.
kernel_constraint: Optional projection function to be applied to the
kernel after being updated by an `Optimizer` (e.g. used to implement
norm constraints or value constraints for layer weights). The
function must take as input the unprojected variable and must return
the projected variable (which must have the same shape). Constraints
are not safe to use when doing asynchronous distributed training.
bias_constraint: Optional projection function to be applied to the
bias after being updated by an `Optimizer`.
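    Output shape note (illustrative): with `padding="valid"` and the default
    `dilation_rate=1`, each spatial output dimension is
    `floor((input_size - kernel_size) / strides) + 1`. For example, a rank-2
    subclass applied to a `(batch, 32, 32, 3)` input with `kernel_size=3` and
    `strides=2` produces a `(batch, 15, 15, filters)` output.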
"""
def __init__(
self,
rank,
filters,
kernel_size,
strides=1,
padding="valid",
data_format=None,
dilation_rate=1,
groups=1,
activation=None,
use_bias=True,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
trainable=True,
name=None,
**kwargs,
):
super().__init__(
trainable=trainable,
name=name,
activity_regularizer=activity_regularizer,
**kwargs,
)
self.rank = rank
self.filters = filters
self.groups = groups
self.kernel_size = standardize_tuple(kernel_size, rank, "kernel_size")
self.strides = standardize_tuple(strides, rank, "strides")
self.dilation_rate = standardize_tuple(
dilation_rate, rank, "dilation_rate"
)
self.padding = standardize_padding(padding, allow_causal=rank == 1)
self.data_format = standardize_data_format(data_format)
self.activation = activations.get(activation)
self.use_bias = use_bias
self.kernel_initializer = initializers.get(kernel_initializer)
self.bias_initializer = initializers.get(bias_initializer)
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.bias_regularizer = regularizers.get(bias_regularizer)
self.kernel_constraint = constraints.get(kernel_constraint)
self.bias_constraint = constraints.get(bias_constraint)
self.input_spec = InputSpec(min_ndim=self.rank + 2)
if self.filters is not None and self.filters <= 0:
raise ValueError(
"Invalid value for argument `filters`. Expected a strictly "
f"positive value. Received filters={self.filters}."
)
if self.groups <= 0:
raise ValueError(
"The number of groups must be a positive integer. "
f"Received: groups={self.groups}."
)
if self.filters is not None and self.filters % self.groups != 0:
raise ValueError(
"The number of filters must be evenly divisible by the "
f"number of groups. Received: groups={self.groups}, "
f"filters={self.filters}."
)
if not all(self.kernel_size):
raise ValueError(
"The argument `kernel_size` cannot contain 0. Received "
f"kernel_size={self.kernel_size}."
)
if not all(self.strides):
            raise ValueError(
                "The argument `strides` cannot contain 0. Received "
                f"strides={self.strides}"
)
if max(self.strides) > 1 and max(self.dilation_rate) > 1:
raise ValueError(
"`strides > 1` not supported in conjunction with "
f"`dilation_rate > 1`. Received: strides={self.strides} and "
f"dilation_rate={self.dilation_rate}"
)
def build(self, input_shape):
if self.data_format == "channels_last":
channel_axis = -1
input_channel = input_shape[-1]
else:
channel_axis = 1
input_channel = input_shape[1]
self.input_spec = InputSpec(
min_ndim=self.rank + 2, axes={channel_axis: input_channel}
)
if input_channel % self.groups != 0:
raise ValueError(
"The number of input channels must be evenly divisible by "
f"the number of groups. Received groups={self.groups}, but the "
f"input has {input_channel} channels (full input shape is "
f"{input_shape})."
)
kernel_shape = self.kernel_size + (
input_channel // self.groups,
self.filters,
)
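        # Illustrative example: a 2D convolution with kernel_size=(3, 3), 16
        # input channels, groups=1 and filters=32 builds a kernel of shape
        # (3, 3, 16, 32).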
        # compute_output_shape contains some validation logic for the input
        # shape, and makes sure the output shape has all positive dimensions.
self.compute_output_shape(input_shape)
self.kernel = self.add_weight(
name="kernel",
shape=kernel_shape,
initializer=self.kernel_initializer,
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint,
trainable=True,
dtype=self.dtype,
)
if self.use_bias:
self.bias = self.add_weight(
name="bias",
shape=(self.filters,),
initializer=self.bias_initializer,
regularizer=self.bias_regularizer,
constraint=self.bias_constraint,
trainable=True,
dtype=self.dtype,
)
else:
self.bias = None
self.built = True
def convolution_op(self, inputs, kernel):
return ops.conv(
inputs,
kernel,
strides=list(self.strides),
padding=self.padding,
dilation_rate=self.dilation_rate,
data_format=self.data_format,
)
def call(self, inputs):
outputs = self.convolution_op(
inputs,
self.kernel,
)
if self.use_bias:
if self.data_format == "channels_last":
bias_shape = (1,) * (self.rank + 1) + (self.filters,)
else:
bias_shape = (1, self.filters) + (1,) * self.rank
bias = ops.reshape(self.bias, bias_shape)
outputs += bias
if self.activation is not None:
return self.activation(outputs)
return outputs
def compute_output_shape(self, input_shape):
return compute_conv_output_shape(
input_shape,
self.filters,
self.kernel_size,
strides=self.strides,
padding=self.padding,
data_format=self.data_format,
dilation_rate=self.dilation_rate,
)
def get_config(self):
config = super().get_config()
config.update(
{
"filters": self.filters,
"kernel_size": self.kernel_size,
"strides": self.strides,
"padding": self.padding,
"data_format": self.data_format,
"dilation_rate": self.dilation_rate,
"groups": self.groups,
"activation": activations.serialize(self.activation),
"use_bias": self.use_bias,
"kernel_initializer": initializers.serialize(
self.kernel_initializer
),
"bias_initializer": initializers.serialize(
self.bias_initializer
),
"kernel_regularizer": regularizers.serialize(
self.kernel_regularizer
),
"bias_regularizer": regularizers.serialize(
self.bias_regularizer
),
"activity_regularizer": regularizers.serialize(
self.activity_regularizer
),
"kernel_constraint": constraints.serialize(
self.kernel_constraint
),
"bias_constraint": constraints.serialize(self.bias_constraint),
}
)
return config
| keras/keras/layers/convolutional/base_conv.py/0 | {
"file_path": "keras/keras/layers/convolutional/base_conv.py",
"repo_id": "keras",
"token_count": 5351
} | 189 |
import numpy as np
import pytest
from keras import layers
from keras import models
from keras import testing
class MaskingTest(testing.TestCase):
@pytest.mark.requires_trainable_backend
def test_masking_basics(self):
self.run_layer_test(
layers.Masking,
init_kwargs={"mask_value": 0.0},
input_shape=(2, 3, 2),
expected_output_shape=(2, 3, 2),
expected_num_trainable_weights=0,
expected_num_non_trainable_weights=0,
expected_num_seed_generators=0,
expected_num_losses=0,
supports_masking=True,
)
@pytest.mark.requires_trainable_backend
def test_masking_correctness(self):
x = np.array(
[
[[0.0, 0.0], [1.0, 2.0], [0.0, 0.0]],
[[2.0, 2.0], [0.0, 0.0], [2.0, 1.0]],
]
)
expected_mask = [[False, True, False], [True, False, True]]
layer = layers.Masking(mask_value=0.0)
self.assertAllClose(layer.compute_mask(x), expected_mask)
test_obj = self
class TestLayer(layers.Layer):
def __init__(self, **kwargs):
super().__init__(**kwargs)
self.supports_masking = True
def compute_output_shape(self, input_shape):
return input_shape
def call(self, inputs, mask=None):
assert mask is not None
test_obj.assertAllClose(mask, expected_mask)
return inputs
model = models.Sequential(
[
layers.Masking(mask_value=0.0),
TestLayer(),
]
)
model(x)
| keras/keras/layers/core/masking_test.py/0 | {
"file_path": "keras/keras/layers/core/masking_test.py",
"repo_id": "keras",
"token_count": 894
} | 190 |
from keras import ops
from keras.api_export import keras_export
from keras.layers.merging.base_merge import Merge
@keras_export("keras.layers.Subtract")
class Subtract(Merge):
"""Performs elementwise subtraction.
    It takes as input a list of exactly two tensors of the same shape,
    and returns a single tensor `(inputs[0] - inputs[1])`, also of the
    same shape.
Examples:
>>> input_shape = (2, 3, 4)
>>> x1 = np.random.rand(*input_shape)
>>> x2 = np.random.rand(*input_shape)
>>> y = keras.layers.Subtract()([x1, x2])
Usage in a Keras model:
>>> input1 = keras.layers.Input(shape=(16,))
>>> x1 = keras.layers.Dense(8, activation='relu')(input1)
>>> input2 = keras.layers.Input(shape=(32,))
>>> x2 = keras.layers.Dense(8, activation='relu')(input2)
>>> # equivalent to `subtracted = keras.layers.subtract([x1, x2])`
>>> subtracted = keras.layers.Subtract()([x1, x2])
>>> out = keras.layers.Dense(4)(subtracted)
>>> model = keras.models.Model(inputs=[input1, input2], outputs=out)
"""
def build(self, input_shape):
super().build(input_shape)
if len(input_shape) != 2:
raise ValueError(
"A `Subtract` layer should be called on exactly 2 inputs. "
f"Received: input_shape={input_shape}"
)
def _merge_function(self, inputs):
if len(inputs) != 2:
raise ValueError(
"A `Subtract` layer should be called on exactly 2 inputs. "
f"Received: inputs={inputs}"
)
return ops.subtract(inputs[0], inputs[1])
@keras_export("keras.layers.subtract")
def subtract(inputs, **kwargs):
"""Functional interface to the `keras.layers.Subtract` layer.
Args:
inputs: A list of input tensors of size 2, each tensor of
the same shape.
**kwargs: Standard layer keyword arguments.
Returns:
A tensor as the difference of the inputs. It has the same shape
as the inputs.
Examples:
>>> input_shape = (2, 3, 4)
>>> x1 = np.random.rand(*input_shape)
>>> x2 = np.random.rand(*input_shape)
>>> y = keras.layers.subtract([x1, x2])
Usage in a Keras model:
>>> input1 = keras.layers.Input(shape=(16,))
>>> x1 = keras.layers.Dense(8, activation='relu')(input1)
>>> input2 = keras.layers.Input(shape=(32,))
>>> x2 = keras.layers.Dense(8, activation='relu')(input2)
>>> subtracted = keras.layers.subtract([x1, x2])
>>> out = keras.layers.Dense(4)(subtracted)
>>> model = keras.models.Model(inputs=[input1, input2], outputs=out)
"""
return Subtract(**kwargs)(inputs)
| keras/keras/layers/merging/subtract.py/0 | {
"file_path": "keras/keras/layers/merging/subtract.py",
"repo_id": "keras",
"token_count": 1149
} | 191 |
import numpy as np
import pytest
from absl.testing import parameterized
from numpy.lib.stride_tricks import as_strided
from keras import backend
from keras import layers
from keras import testing
def _same_padding(input_size, pool_size, stride):
if input_size % stride == 0:
return max(pool_size - stride, 0)
else:
return max(pool_size - (input_size % stride), 0)
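# Worked example for `_same_padding` (illustrative): with input_size=5,
# pool_size=2 and stride=2, the input divides unevenly (5 % 2 == 1), so the
# padding needed is max(2 - 1, 0) == 1; with input_size=4 the division is
# even and no padding is needed (max(2 - 2, 0) == 0).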
def np_avgpool1d(x, pool_size, strides, padding, data_format):
if data_format == "channels_first":
x = x.swapaxes(1, 2)
if isinstance(pool_size, (tuple, list)):
pool_size = pool_size[0]
if isinstance(strides, (tuple, list)):
h_stride = strides[0]
else:
h_stride = strides
if padding == "same":
n_batch, h_x, ch_x = x.shape
pad_value = _same_padding(h_x, pool_size, h_stride)
npad = [(0, 0)] * x.ndim
npad[1] = (0, pad_value)
x = np.pad(x, pad_width=npad, mode="edge")
n_batch, h_x, ch_x = x.shape
out_h = int((h_x - pool_size) / h_stride) + 1
stride_shape = (n_batch, out_h, ch_x, pool_size)
strides = (
x.strides[0],
h_stride * x.strides[1],
x.strides[2],
x.strides[1],
)
windows = as_strided(x, shape=stride_shape, strides=strides)
out = np.mean(windows, axis=(3,))
if data_format == "channels_first":
out = out.swapaxes(1, 2)
return out
def np_avgpool2d(x, pool_size, strides, padding, data_format):
if data_format == "channels_first":
x = x.transpose((0, 2, 3, 1))
if isinstance(pool_size, int):
pool_size = (pool_size, pool_size)
if isinstance(strides, int):
strides = (strides, strides)
h_pool_size, w_pool_size = pool_size
h_stride, w_stride = strides
if padding == "same":
n_batch, h_x, w_x, ch_x = x.shape
h_padding = _same_padding(h_x, h_pool_size, h_stride)
w_padding = _same_padding(w_x, w_pool_size, w_stride)
npad = [(0, 0)] * x.ndim
npad[1] = (0, h_padding)
npad[2] = (0, w_padding)
x = np.pad(x, pad_width=npad, mode="edge")
n_batch, h_x, w_x, ch_x = x.shape
out_h = int((h_x - h_pool_size) / h_stride) + 1
out_w = int((w_x - w_pool_size) / w_stride) + 1
stride_shape = (n_batch, out_h, out_w, ch_x, *pool_size)
strides = (
x.strides[0],
h_stride * x.strides[1],
w_stride * x.strides[2],
x.strides[3],
x.strides[1],
x.strides[2],
)
windows = as_strided(x, shape=stride_shape, strides=strides)
out = np.mean(windows, axis=(4, 5))
if data_format == "channels_first":
out = out.transpose((0, 3, 1, 2))
return out
def np_avgpool3d(x, pool_size, strides, padding, data_format):
if data_format == "channels_first":
x = x.transpose((0, 2, 3, 4, 1))
if isinstance(pool_size, int):
pool_size = (pool_size, pool_size, pool_size)
if isinstance(strides, int):
strides = (strides, strides, strides)
h_pool_size, w_pool_size, d_pool_size = pool_size
h_stride, w_stride, d_stride = strides
if padding == "same":
n_batch, h_x, w_x, d_x, ch_x = x.shape
h_padding = _same_padding(h_x, h_pool_size, h_stride)
w_padding = _same_padding(w_x, w_pool_size, w_stride)
d_padding = _same_padding(d_x, d_pool_size, d_stride)
npad = [(0, 0)] * x.ndim
npad[1] = (0, h_padding)
npad[2] = (0, w_padding)
npad[3] = (0, d_padding)
x = np.pad(x, pad_width=npad, mode="symmetric")
n_batch, h_x, w_x, d_x, ch_x = x.shape
out_h = int((h_x - h_pool_size) / h_stride) + 1
out_w = int((w_x - w_pool_size) / w_stride) + 1
out_d = int((d_x - d_pool_size) / d_stride) + 1
stride_shape = (n_batch, out_h, out_w, out_d, ch_x, *pool_size)
strides = (
x.strides[0],
h_stride * x.strides[1],
w_stride * x.strides[2],
d_stride * x.strides[3],
x.strides[4],
x.strides[1],
x.strides[2],
x.strides[3],
)
windows = as_strided(x, shape=stride_shape, strides=strides)
out = np.mean(windows, axis=(5, 6, 7))
if data_format == "channels_first":
out = out.transpose((0, 4, 1, 2, 3))
return out
@pytest.mark.requires_trainable_backend
class AveragePoolingBasicTest(testing.TestCase, parameterized.TestCase):
@parameterized.parameters(
(2, 1, "valid", "channels_last", (3, 5, 4), (3, 4, 4)),
(2, 1, "same", "channels_first", (3, 5, 4), (3, 5, 4)),
((2,), (2,), "valid", "channels_last", (3, 5, 4), (3, 2, 4)),
)
def test_average_pooling1d(
self,
pool_size,
strides,
padding,
data_format,
input_shape,
output_shape,
):
self.run_layer_test(
layers.AveragePooling1D,
init_kwargs={
"pool_size": pool_size,
"strides": strides,
"padding": padding,
"data_format": data_format,
},
input_shape=input_shape,
expected_output_shape=output_shape,
expected_num_trainable_weights=0,
expected_num_non_trainable_weights=0,
expected_num_losses=0,
supports_masking=False,
)
@parameterized.parameters(
(2, 1, "valid", "channels_last", (3, 5, 5, 4), (3, 4, 4, 4)),
(2, 1, "same", "channels_first", (3, 5, 5, 4), (3, 5, 5, 4)),
((2, 3), (2, 2), "valid", "channels_last", (3, 5, 5, 4), (3, 2, 2, 4)),
)
def test_average_pooling2d(
self,
pool_size,
strides,
padding,
data_format,
input_shape,
output_shape,
):
self.run_layer_test(
layers.AveragePooling2D,
init_kwargs={
"pool_size": pool_size,
"strides": strides,
"padding": padding,
"data_format": data_format,
},
input_shape=input_shape,
expected_output_shape=output_shape,
expected_num_trainable_weights=0,
expected_num_non_trainable_weights=0,
expected_num_losses=0,
supports_masking=False,
)
@parameterized.parameters(
(2, 1, "valid", "channels_last", (3, 5, 5, 5, 4), (3, 4, 4, 4, 4)),
(2, 1, "same", "channels_first", (3, 5, 5, 5, 4), (3, 5, 5, 5, 4)),
(
(2, 3, 2),
(2, 2, 1),
"valid",
"channels_last",
(3, 5, 5, 5, 4),
(3, 2, 2, 4, 4),
),
)
def test_average_pooling3d(
self,
pool_size,
strides,
padding,
data_format,
input_shape,
output_shape,
):
self.run_layer_test(
layers.AveragePooling3D,
init_kwargs={
"pool_size": pool_size,
"strides": strides,
"padding": padding,
"data_format": data_format,
},
input_shape=input_shape,
expected_output_shape=output_shape,
expected_num_trainable_weights=0,
expected_num_non_trainable_weights=0,
expected_num_losses=0,
supports_masking=False,
# Incomplete op support on tensorflow.
run_mixed_precision_check=False,
)
class AveragePoolingCorrectnessTest(testing.TestCase, parameterized.TestCase):
@parameterized.parameters(
(2, 1, "valid", "channels_last"),
(2, 1, "valid", "channels_first"),
((2,), (2,), "valid", "channels_last"),
((2,), (2,), "valid", "channels_first"),
)
def test_average_pooling1d(self, pool_size, strides, padding, data_format):
inputs = np.arange(24, dtype="float32").reshape((2, 3, 4))
layer = layers.AveragePooling1D(
pool_size=pool_size,
strides=strides,
padding=padding,
data_format=data_format,
)
outputs = layer(inputs)
expected = np_avgpool1d(
inputs, pool_size, strides, padding, data_format
)
self.assertAllClose(outputs, expected)
@parameterized.parameters(
(2, 1, "same", "channels_last"),
(2, 1, "same", "channels_first"),
((2,), (2,), "same", "channels_last"),
((2,), (2,), "same", "channels_first"),
)
@pytest.mark.skipif(
backend.backend() == "torch",
reason="Same padding in Torch backend produces different results.",
)
def test_average_pooling1d_same_padding(
self, pool_size, strides, padding, data_format
):
inputs = np.arange(24, dtype="float32").reshape((2, 3, 4))
layer = layers.AveragePooling1D(
pool_size=pool_size,
strides=strides,
padding=padding,
data_format=data_format,
)
outputs = layer(inputs)
expected = np_avgpool1d(
inputs, pool_size, strides, padding, data_format
)
self.assertAllClose(outputs, expected)
@parameterized.parameters(
(2, 1, "valid", "channels_last"),
((2, 3), (2, 2), "valid", "channels_last"),
)
def test_average_pooling2d(self, pool_size, strides, padding, data_format):
inputs = np.arange(16, dtype="float32").reshape((1, 4, 4, 1))
layer = layers.AveragePooling2D(
pool_size=pool_size,
strides=strides,
padding=padding,
data_format=data_format,
)
outputs = layer(inputs)
expected = np_avgpool2d(
inputs, pool_size, strides, padding, data_format
)
self.assertAllClose(outputs, expected)
@parameterized.parameters(
(2, (2, 1), "same", "channels_last"),
(2, (2, 1), "same", "channels_first"),
((2, 2), (2, 2), "same", "channels_last"),
((2, 2), (2, 2), "same", "channels_first"),
)
@pytest.mark.skipif(
backend.backend() == "torch",
reason="Same padding in Torch backend produces different results.",
)
def test_average_pooling2d_same_padding(
self, pool_size, strides, padding, data_format
):
inputs = np.arange(16, dtype="float32").reshape((1, 4, 4, 1))
layer = layers.AveragePooling2D(
pool_size=pool_size,
strides=strides,
padding=padding,
data_format=data_format,
)
outputs = layer(inputs)
expected = np_avgpool2d(
inputs, pool_size, strides, padding, data_format
)
self.assertAllClose(outputs, expected)
@parameterized.parameters(
(2, 1, "valid", "channels_last"),
(2, 1, "valid", "channels_first"),
((2, 3, 2), (2, 2, 1), "valid", "channels_last"),
((2, 3, 2), (2, 2, 1), "valid", "channels_first"),
)
def test_average_pooling3d(self, pool_size, strides, padding, data_format):
inputs = np.arange(240, dtype="float32").reshape((2, 3, 4, 5, 2))
layer = layers.AveragePooling3D(
pool_size=pool_size,
strides=strides,
padding=padding,
data_format=data_format,
)
outputs = layer(inputs)
expected = np_avgpool3d(
inputs, pool_size, strides, padding, data_format
)
self.assertAllClose(outputs, expected)
@parameterized.parameters(
(2, 1, "same", "channels_last"),
(2, 1, "same", "channels_first"),
((2, 2, 2), (2, 2, 1), "same", "channels_last"),
((2, 2, 2), (2, 2, 1), "same", "channels_first"),
)
@pytest.mark.skipif(
backend.backend() == "torch",
reason="Same padding in Torch backend produces different results.",
)
def test_average_pooling3d_same_padding(
self, pool_size, strides, padding, data_format
):
inputs = np.arange(240, dtype="float32").reshape((2, 3, 4, 5, 2))
layer = layers.AveragePooling3D(
pool_size=pool_size,
strides=strides,
padding=padding,
data_format=data_format,
)
outputs = layer(inputs)
expected = np_avgpool3d(
inputs, pool_size, strides, padding, data_format
)
self.assertAllClose(outputs, expected)
| keras/keras/layers/pooling/average_pooling_test.py/0 | {
"file_path": "keras/keras/layers/pooling/average_pooling_test.py",
"repo_id": "keras",
"token_count": 6276
} | 192 |
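The correctness tests above compare layer outputs against NumPy reference helpers (`np_avgpool1d` and friends, defined earlier in the same file). As a quick, self-contained illustration of the behavior being checked — a sketch, not part of the test file — average pooling with `pool_size=2, strides=2, "valid"` simply averages non-overlapping pairs:

```python
import numpy as np
from keras import layers, ops

# Shape (batch=1, steps=4, features=1); assumes the default channels_last format.
x = np.array([[[1.0], [2.0], [3.0], [4.0]]], dtype="float32")
y = layers.AveragePooling1D(pool_size=2, strides=2, padding="valid")(x)
print(ops.convert_to_numpy(y).reshape(-1))  # expected: [1.5 3.5]
```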
from keras.api_export import keras_export
from keras.layers.preprocessing.tf_data_layer import TFDataLayer
from keras.utils import backend_utils
@keras_export("keras.layers.CategoryEncoding")
class CategoryEncoding(TFDataLayer):
"""A preprocessing layer which encodes integer features.
This layer provides options for condensing data into a categorical encoding
when the total number of tokens are known in advance. It accepts integer
values as inputs, and it outputs a dense or sparse representation of those
inputs. For integer inputs where the total number of tokens is not known,
use `keras.layers.IntegerLookup` instead.
**Note:** This layer is safe to use inside a `tf.data` pipeline
(independently of which backend you're using).
Examples:
**One-hot encoding data**
>>> layer = keras.layers.CategoryEncoding(
... num_tokens=4, output_mode="one_hot")
>>> layer([3, 2, 0, 1])
array([[0., 0., 0., 1.],
[0., 0., 1., 0.],
[1., 0., 0., 0.],
[0., 1., 0., 0.]]>
**Multi-hot encoding data**
>>> layer = keras.layers.CategoryEncoding(
... num_tokens=4, output_mode="multi_hot")
>>> layer([[0, 1], [0, 0], [1, 2], [3, 1]])
array([[1., 1., 0., 0.],
[1., 0., 0., 0.],
[0., 1., 1., 0.],
[0., 1., 0., 1.]]>
**Using weighted inputs in `"count"` mode**
>>> layer = keras.layers.CategoryEncoding(
... num_tokens=4, output_mode="count")
>>> count_weights = np.array([[.1, .2], [.1, .1], [.2, .3], [.4, .2]])
>>> layer([[0, 1], [0, 0], [1, 2], [3, 1]], count_weights=count_weights)
array([[0.1, 0.2, 0. , 0. ],
[0.2, 0. , 0. , 0. ],
[0. , 0.2, 0.3, 0. ],
[0. , 0.2, 0. , 0.4]]>
Args:
num_tokens: The total number of tokens the layer should support. All
        inputs to the layer must be integers in the range `0 <= value <
num_tokens`, or an error will be thrown.
output_mode: Specification for the output of the layer.
Values can be `"one_hot"`, `"multi_hot"` or `"count"`,
configuring the layer as follows:
- `"one_hot"`: Encodes each individual element in the input
into an array of `num_tokens` size, containing a 1 at the
element index. If the last dimension is size 1, will encode
on that dimension. If the last dimension is not size 1,
will append a new dimension for the encoded output.
- `"multi_hot"`: Encodes each sample in the input into a single
array of `num_tokens` size, containing a 1 for each
vocabulary term present in the sample. Treats the last
dimension as the sample dimension, if input shape is
`(..., sample_length)`, output shape will be
`(..., num_tokens)`.
- `"count"`: Like `"multi_hot"`, but the int array contains a
count of the number of times the token at that index
appeared in the sample.
        For all output modes, only outputs up to rank 2 are currently
        supported.
Defaults to `"multi_hot"`.
Call arguments:
inputs: A 1D or 2D tensor of integer inputs.
count_weights: A tensor in the same shape as `inputs` indicating the
weight for each sample value when summing up in `count` mode.
Not used in `"multi_hot"` or `"one_hot"` modes.
"""
def __init__(self, num_tokens=None, output_mode="multi_hot", **kwargs):
super().__init__(**kwargs)
# Support deprecated names for output_modes.
if output_mode == "binary":
output_mode = "multi_hot"
# 'output_mode' must be one of ("count", "one_hot", "multi_hot")
if output_mode not in ("count", "one_hot", "multi_hot"):
raise ValueError(f"Unknown arg for output_mode: {output_mode}")
if num_tokens is None:
raise ValueError(
"num_tokens must be set to use this layer. If the "
"number of tokens is not known beforehand, use the "
"IntegerLookup layer instead."
)
if num_tokens < 1:
raise ValueError(
f"`num_tokens` must be >= 1. Received: num_tokens={num_tokens}."
)
self.num_tokens = num_tokens
self.output_mode = output_mode
self._allow_non_tensor_positional_args = True
self._convert_input_args = False
def _count(self, inputs, axis=-1, count_weights=None):
reduction_axis = 1 if len(inputs.shape) > 1 else 0
one_hot_encoding = self.backend.nn.one_hot(
inputs, self.num_tokens, axis=axis, dtype=self.dtype
)
if count_weights is not None:
split_weights = self.backend.numpy.split(
count_weights,
count_weights.shape[reduction_axis],
reduction_axis,
)
stacked_weights = self.backend.numpy.stack(
split_weights, axis=reduction_axis
)
one_hot_encoding = one_hot_encoding * stacked_weights
outputs = self.backend.numpy.sum(
one_hot_encoding,
axis=reduction_axis,
)
return outputs
def _encode(self, inputs, count_weights=None):
if self.output_mode == "multi_hot":
outputs = self.backend.nn.multi_hot(
inputs, self.num_tokens, dtype=self.dtype
)
elif self.output_mode == "one_hot":
outputs = self.backend.nn.one_hot(
inputs, self.num_tokens, dtype=self.dtype
)
elif self.output_mode == "count":
outputs = self._count(inputs, count_weights=count_weights)
return outputs
def compute_output_shape(self, input_shape):
if self.output_mode == "one_hot":
if input_shape[-1] != 1:
return tuple(input_shape + (self.num_tokens,))
else:
return tuple(input_shape[:-1] + (self.num_tokens,))
return tuple(input_shape[:-1] + (self.num_tokens,))
def get_config(self):
config = {
"num_tokens": self.num_tokens,
"output_mode": self.output_mode,
}
base_config = super().get_config()
return {**base_config, **config}
def call(self, inputs, count_weights=None):
if count_weights is not None:
if self.output_mode != "count":
                raise ValueError(
                    "`count_weights` is not used when `output_mode` is not "
                    f"`'count'`. Received `count_weights={count_weights}`."
)
count_weights = self.backend.convert_to_tensor(
count_weights, dtype=self.compute_dtype
)
outputs = self._encode(inputs, count_weights)
return backend_utils.convert_tf_tensor(outputs)
| keras/keras/layers/preprocessing/category_encoding.py/0 | {
"file_path": "keras/keras/layers/preprocessing/category_encoding.py",
"repo_id": "keras",
"token_count": 3325
} | 193 |
import math
import numpy as np
from keras import backend
from keras import ops
from keras.api_export import keras_export
from keras.layers.layer import Layer
from keras.utils.module_utils import tensorflow as tf
@keras_export("keras.layers.Normalization")
class Normalization(Layer):
"""A preprocessing layer that normalizes continuous features.
This layer will shift and scale inputs into a distribution centered around
0 with standard deviation 1. It accomplishes this by precomputing the mean
and variance of the data, and calling `(input - mean) / sqrt(var)` at
runtime.
The mean and variance values for the layer must be either supplied on
construction or learned via `adapt()`. `adapt()` will compute the mean and
variance of the data and store them as the layer's weights. `adapt()` should
be called before `fit()`, `evaluate()`, or `predict()`.
Args:
axis: Integer, tuple of integers, or None. The axis or axes that should
have a separate mean and variance for each index in the shape.
For example, if shape is `(None, 5)` and `axis=1`, the layer will
track 5 separate mean and variance values for the last axis.
If `axis` is set to `None`, the layer will normalize
all elements in the input by a scalar mean and variance.
When `-1`, the last axis of the input is assumed to be a
feature dimension and is normalized per index.
Note that in the specific case of batched scalar inputs where
the only axis is the batch axis, the default will normalize
each index in the batch separately.
In this case, consider passing `axis=None`. Defaults to `-1`.
mean: The mean value(s) to use during normalization. The passed value(s)
will be broadcast to the shape of the kept axes above;
if the value(s) cannot be broadcast, an error will be raised when
this layer's `build()` method is called.
variance: The variance value(s) to use during normalization. The passed
value(s) will be broadcast to the shape of the kept axes above;
if the value(s) cannot be broadcast, an error will be raised when
this layer's `build()` method is called.
invert: If `True`, this layer will apply the inverse transformation
to its inputs: it would turn a normalized input back into its
original form.
Examples:
Calculate a global mean and variance by analyzing the dataset in `adapt()`.
>>> adapt_data = np.array([1., 2., 3., 4., 5.], dtype='float32')
>>> input_data = np.array([1., 2., 3.], dtype='float32')
>>> layer = keras.layers.Normalization(axis=None)
>>> layer.adapt(adapt_data)
>>> layer(input_data)
array([-1.4142135, -0.70710677, 0.], dtype=float32)
Calculate a mean and variance for each index on the last axis.
>>> adapt_data = np.array([[0., 7., 4.],
... [2., 9., 6.],
... [0., 7., 4.],
... [2., 9., 6.]], dtype='float32')
>>> input_data = np.array([[0., 7., 4.]], dtype='float32')
>>> layer = keras.layers.Normalization(axis=-1)
>>> layer.adapt(adapt_data)
>>> layer(input_data)
array([-1., -1., -1.], dtype=float32)
Pass the mean and variance directly.
>>> input_data = np.array([[1.], [2.], [3.]], dtype='float32')
>>> layer = keras.layers.Normalization(mean=3., variance=2.)
>>> layer(input_data)
array([[-1.4142135 ],
[-0.70710677],
[ 0. ]], dtype=float32)
Use the layer to de-normalize inputs (after adapting the layer).
>>> adapt_data = np.array([[0., 7., 4.],
... [2., 9., 6.],
... [0., 7., 4.],
... [2., 9., 6.]], dtype='float32')
>>> input_data = np.array([[1., 2., 3.]], dtype='float32')
>>> layer = keras.layers.Normalization(axis=-1, invert=True)
>>> layer.adapt(adapt_data)
>>> layer(input_data)
array([2., 10., 8.], dtype=float32)
"""
def __init__(
self, axis=-1, mean=None, variance=None, invert=False, **kwargs
):
super().__init__(**kwargs)
# Standardize `axis` to a tuple.
if axis is None:
axis = ()
elif isinstance(axis, int):
axis = (axis,)
else:
axis = tuple(axis)
self.axis = axis
# Set `mean` and `variance` if passed.
if (mean is not None) != (variance is not None):
raise ValueError(
"When setting values directly, both `mean` and `variance` "
f"must be set. Received: mean={mean} and variance={variance}"
)
self.input_mean = mean
self.input_variance = variance
self.invert = invert
self.supports_masking = True
self._build_input_shape = None
def build(self, input_shape):
if input_shape is None:
return
ndim = len(input_shape)
self._build_input_shape = input_shape
if any(a < -ndim or a >= ndim for a in self.axis):
raise ValueError(
"All `axis` values must be in the range [-ndim, ndim). "
f"Received inputs with ndim={ndim}, while axis={self.axis}"
)
# Axes to be kept, replacing negative values with positive equivalents.
# Sorted to avoid transposing axes.
self._keep_axis = tuple(
sorted([d if d >= 0 else d + ndim for d in self.axis])
)
# All axes to be kept should have known shape.
for d in self._keep_axis:
if input_shape[d] is None:
raise ValueError(
"All `axis` values to be kept must have a known shape. "
f"Received axis={self.axis}, "
f"inputs.shape={input_shape}, "
f"with unknown axis at index {d}"
)
# Axes to be reduced.
self._reduce_axis = tuple(
d for d in range(ndim) if d not in self._keep_axis
)
# 1 if an axis should be reduced, 0 otherwise.
self._reduce_axis_mask = [
0 if d in self._keep_axis else 1 for d in range(ndim)
]
# Broadcast any reduced axes.
self._broadcast_shape = [
input_shape[d] if d in self._keep_axis else 1 for d in range(ndim)
]
mean_and_var_shape = tuple(input_shape[d] for d in self._keep_axis)
self._mean_and_var_shape = mean_and_var_shape
if self.input_mean is None:
self.adapt_mean = self.add_weight(
name="mean",
shape=mean_and_var_shape,
initializer="zeros",
trainable=False,
)
self.adapt_variance = self.add_weight(
name="variance",
shape=mean_and_var_shape,
initializer="ones",
trainable=False,
)
# For backwards compatibility with older saved models.
self.count = self.add_weight(
name="count",
shape=(),
dtype="int",
initializer="zeros",
trainable=False,
)
self.built = True
self.finalize_state()
else:
# In the no adapt case, make constant tensors for mean and variance
# with proper broadcast shape for use during call.
mean = ops.convert_to_tensor(self.input_mean)
variance = ops.convert_to_tensor(self.input_variance)
mean = ops.reshape(mean, self._broadcast_shape)
variance = ops.reshape(variance, self._broadcast_shape)
self.mean = ops.cast(mean, dtype=self.compute_dtype)
self.variance = ops.cast(variance, dtype=self.compute_dtype)
self.built = True
def adapt(self, data):
"""Computes the mean and variance of values in a dataset.
Calling `adapt()` on a `Normalization` layer is an alternative to
passing in `mean` and `variance` arguments during layer construction. A
`Normalization` layer should always either be adapted over a dataset or
passed `mean` and `variance`.
During `adapt()`, the layer will compute a `mean` and `variance`
separately for each position in each axis specified by the `axis`
argument. To calculate a single `mean` and `variance` over the input
data, simply pass `axis=None` to the layer.
        Args:
data: The data to train on. It can be passed either as a
`tf.data.Dataset`, as a NumPy array, or as a backend-native
eager tensor.
If a dataset, *it must be batched*. Keras will assume that the
data is batched, and if that assumption doesn't hold, the mean
and variance may be incorrectly computed.
"""
if isinstance(data, np.ndarray) or backend.is_tensor(data):
input_shape = data.shape
elif isinstance(data, tf.data.Dataset):
input_shape = tuple(data.element_spec.shape)
if len(input_shape) == 1:
# Batch dataset if it isn't batched
data = data.batch(128)
input_shape = tuple(data.element_spec.shape)
if not self.built:
self.build(input_shape)
else:
for d in self._keep_axis:
if input_shape[d] != self._build_input_shape[d]:
raise ValueError(
"The layer was built with "
f"input_shape={self._build_input_shape}, "
"but adapt() is being called with data with "
f"an incompatible shape, data.shape={input_shape}"
)
if isinstance(data, np.ndarray):
total_mean = np.mean(data, axis=self._reduce_axis)
total_var = np.var(data, axis=self._reduce_axis)
elif backend.is_tensor(data):
total_mean = ops.mean(data, axis=self._reduce_axis)
total_var = ops.var(data, axis=self._reduce_axis)
elif isinstance(data, tf.data.Dataset):
total_mean = ops.zeros(self._mean_and_var_shape)
total_var = ops.zeros(self._mean_and_var_shape)
total_count = 0
for batch in data:
batch = backend.convert_to_tensor(
batch, dtype=self.compute_dtype
)
batch_mean = ops.mean(batch, axis=self._reduce_axis)
batch_var = ops.var(batch, axis=self._reduce_axis)
if self._reduce_axis:
batch_reduce_shape = (
batch.shape[d] for d in self._reduce_axis
)
batch_count = math.prod(batch_reduce_shape)
else:
batch_count = 1
total_count += batch_count
batch_weight = float(batch_count) / total_count
existing_weight = 1.0 - batch_weight
new_total_mean = (
total_mean * existing_weight + batch_mean * batch_weight
)
# The variance is computed using the lack-of-fit sum of squares
# formula (see
# https://en.wikipedia.org/wiki/Lack-of-fit_sum_of_squares).
total_var = (
total_var + (total_mean - new_total_mean) ** 2
) * existing_weight + (
batch_var + (batch_mean - new_total_mean) ** 2
) * batch_weight
total_mean = new_total_mean
self.adapt_mean.assign(total_mean)
self.adapt_variance.assign(total_var)
self.finalize_state()
def finalize_state(self):
if self.input_mean is not None or not self.built:
return
# In the adapt case, we make constant tensors for mean and variance with
# proper broadcast shape and dtype each time `finalize_state` is called.
self.mean = ops.reshape(self.adapt_mean, self._broadcast_shape)
self.mean = ops.cast(self.mean, self.compute_dtype)
self.variance = ops.reshape(self.adapt_variance, self._broadcast_shape)
self.variance = ops.cast(self.variance, self.compute_dtype)
def call(self, inputs):
inputs = backend.convert_to_tensor(inputs, dtype=self.compute_dtype)
if self.invert:
return ops.add(
self.mean,
ops.multiply(
inputs,
ops.maximum(ops.sqrt(self.variance), backend.epsilon()),
),
)
else:
return ops.divide(
ops.subtract(inputs, self.mean),
ops.maximum(ops.sqrt(self.variance), backend.epsilon()),
)
def compute_output_shape(self, input_shape):
return input_shape
def get_config(self):
config = super().get_config()
config.update(
{
"axis": self.axis,
"invert": self.invert,
"mean": np.array(self.input_mean).tolist(),
"variance": np.array(self.input_variance).tolist(),
}
)
return config
def load_own_variables(self, store):
super().load_own_variables(store)
# Ensure that we call finalize_state after variable loading.
self.finalize_state()
def get_build_config(self):
if self._build_input_shape:
return {"input_shape": self._build_input_shape}
def build_from_config(self, config):
if config:
self.build(config["input_shape"])
| keras/keras/layers/preprocessing/normalization.py/0 | {
"file_path": "keras/keras/layers/preprocessing/normalization.py",
"repo_id": "keras",
"token_count": 6471
} | 194 |
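The `tf.data` branch of `adapt()` above merges per-batch statistics incrementally instead of materializing the whole dataset. A NumPy-only sketch (illustrative, reusing the same weighting and lack-of-fit sum-of-squares update as the loop above) shows the streamed result matching the full-dataset mean and population variance:

```python
import numpy as np

data = np.random.rand(1000).astype("float32")
total_mean, total_var, total_count = 0.0, 0.0, 0
for batch in np.split(data, 10):
    batch_mean, batch_var, batch_count = batch.mean(), batch.var(), batch.size
    total_count += batch_count
    batch_weight = batch_count / total_count
    existing_weight = 1.0 - batch_weight
    new_mean = total_mean * existing_weight + batch_mean * batch_weight
    total_var = (total_var + (total_mean - new_mean) ** 2) * existing_weight + (
        batch_var + (batch_mean - new_mean) ** 2
    ) * batch_weight
    total_mean = new_mean

assert np.isclose(total_mean, data.mean(), atol=1e-4)
assert np.isclose(total_var, data.var(), atol=1e-4)
```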
from keras import backend
from keras.api_export import keras_export
from keras.layers.preprocessing.tf_data_layer import TFDataLayer
@keras_export("keras.layers.Rescaling")
class Rescaling(TFDataLayer):
"""A preprocessing layer which rescales input values to a new range.
This layer rescales every value of an input (often an image) by multiplying
by `scale` and adding `offset`.
For instance:
1. To rescale an input in the `[0, 255]` range
to be in the `[0, 1]` range, you would pass `scale=1./255`.
2. To rescale an input in the `[0, 255]` range to be in the `[-1, 1]` range,
you would pass `scale=1./127.5, offset=-1`.
The rescaling is applied both during training and inference. Inputs can be
of integer or floating point dtype, and by default the layer will output
floats.
**Note:** This layer is safe to use inside a `tf.data` pipeline
(independently of which backend you're using).
Args:
scale: Float, the scale to apply to the inputs.
offset: Float, the offset to apply to the inputs.
**kwargs: Base layer keyword arguments, such as `name` and `dtype`.
"""
def __init__(self, scale, offset=0.0, **kwargs):
super().__init__(**kwargs)
self.scale = scale
self.offset = offset
self.supports_masking = True
def call(self, inputs):
dtype = self.compute_dtype
scale = self.backend.cast(self.scale, dtype)
offset = self.backend.cast(self.offset, dtype)
scale_shape = self.backend.core.shape(scale)
if (
len(scale_shape) > 0
and backend.image_data_format() == "channels_first"
):
scale = self.backend.numpy.reshape(
scale, scale_shape + (1,) * (3 - len(scale_shape))
)
return self.backend.cast(inputs, dtype) * scale + offset
def compute_output_shape(self, input_shape):
return input_shape
def get_config(self):
base_config = super().get_config()
config = {
"scale": self.scale,
"offset": self.offset,
}
return {**base_config, **config}
| keras/keras/layers/preprocessing/rescaling.py/0 | {
"file_path": "keras/keras/layers/preprocessing/rescaling.py",
"repo_id": "keras",
"token_count": 873
} | 195 |
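A minimal usage sketch of the second recipe from the docstring above (assuming only the standard `keras.layers` and `keras.ops` imports): mapping `[0, 255]` pixel values into `[-1, 1]` with `scale=1/127.5, offset=-1`:

```python
import numpy as np
from keras import layers, ops

pixels = np.array([[0.0, 127.5, 255.0]], dtype="float32")
rescale = layers.Rescaling(scale=1.0 / 127.5, offset=-1.0)
print(ops.convert_to_numpy(rescale(pixels)))  # approximately [[-1.  0.  1.]]
```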
import math
from keras import backend
from keras import layers
from keras import ops
from keras.api_export import keras_export
@keras_export("keras.layers.GaussianDropout")
class GaussianDropout(layers.Layer):
"""Apply multiplicative 1-centered Gaussian noise.
As it is a regularization layer, it is only active at training time.
Args:
rate: Float, drop probability (as with `Dropout`).
The multiplicative noise will have
standard deviation `sqrt(rate / (1 - rate))`.
seed: Integer, optional random seed to enable deterministic behavior.
Call arguments:
inputs: Input tensor (of any rank).
training: Python boolean indicating whether the layer should behave in
training mode (adding dropout) or in inference mode (doing nothing).
"""
def __init__(self, rate, seed=None, **kwargs):
super().__init__(**kwargs)
if not 0 <= rate <= 1:
raise ValueError(
f"Invalid value received for argument "
"`rate`. Expected a float value between 0 and 1. "
f"Received: rate={rate}"
)
self.rate = rate
self.seed = seed
self.seed_generator = backend.random.SeedGenerator(seed)
self.supports_masking = True
def call(self, inputs, training=False):
if training and self.rate > 0:
stddev = math.sqrt(self.rate / (1.0 - self.rate))
return inputs * backend.random.normal(
shape=ops.shape(inputs),
mean=1.0,
stddev=stddev,
seed=self.seed_generator,
)
return inputs
def compute_output_shape(self, input_shape):
return input_shape
def get_config(self):
base_config = super().get_config()
config = {
"rate": self.rate,
"seed": self.seed,
}
return {**base_config, **config}
| keras/keras/layers/regularization/gaussian_dropout.py/0 | {
"file_path": "keras/keras/layers/regularization/gaussian_dropout.py",
"repo_id": "keras",
"token_count": 844
} | 196 |
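A rough numerical check (a sketch, not part of the layer's test suite): with `rate=0.2`, the multiplicative noise applied in training mode should have mean close to 1 and standard deviation close to `sqrt(0.2 / 0.8) = 0.5`, per the formula in `call()` above:

```python
import math
import numpy as np
from keras import layers, ops

layer = layers.GaussianDropout(rate=0.2, seed=0)
ones = np.ones((10000, 1), dtype="float32")
noisy = ops.convert_to_numpy(layer(ones, training=True))
print(noisy.mean(), noisy.std(), math.sqrt(0.2 / 0.8))  # ~1.0, ~0.5, 0.5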
import numpy as np
import pytest
from absl.testing import parameterized
from keras import backend
from keras import layers
from keras import ops
from keras import testing
class PermuteTest(testing.TestCase, parameterized.TestCase):
@parameterized.named_parameters(
[
{"testcase_name": "dense", "sparse": False},
{"testcase_name": "sparse", "sparse": True},
]
)
@pytest.mark.requires_trainable_backend
def test_permute(self, sparse):
if sparse and not backend.SUPPORTS_SPARSE_TENSORS:
pytest.skip("Backend does not support sparse tensors.")
inputs = np.random.random((10, 3, 5, 5)).astype("float32")
# Make the ndarray relatively sparse
inputs = np.multiply(inputs, inputs >= 0.8)
expected_output = ops.convert_to_tensor(
np.transpose(inputs, axes=(0, 3, 1, 2))
)
if sparse:
if backend.backend() == "tensorflow":
import tensorflow as tf
inputs = tf.sparse.from_dense(inputs)
expected_output = tf.sparse.from_dense(expected_output)
elif backend.backend() == "jax":
import jax.experimental.sparse as jax_sparse
inputs = jax_sparse.BCOO.fromdense(inputs)
expected_output = jax_sparse.BCOO.fromdense(expected_output)
else:
self.fail(
f"Backend {backend.backend()} does not support sparse"
)
self.run_layer_test(
layers.Permute,
init_kwargs={"dims": (3, 1, 2)},
input_data=inputs,
input_sparse=sparse,
expected_output=expected_output,
expected_output_sparse=sparse,
run_training_check=not sparse,
)
def test_permute_with_dynamic_batch_size(self):
input_layer = layers.Input(batch_shape=(None, 3, 5))
permuted = layers.Permute((2, 1))(input_layer)
self.assertEqual(permuted.shape, (None, 5, 3))
def test_permute_errors_on_invalid_starting_dims_index(self):
with self.assertRaisesRegex(
ValueError, r"Invalid permutation .*dims.*"
):
self.run_layer_test(
layers.Permute,
init_kwargs={"dims": (0, 1, 2)},
input_shape=(3, 2, 4),
)
def test_permute_errors_on_invalid_set_of_dims_indices(self):
with self.assertRaisesRegex(
ValueError, r"Invalid permutation .*dims.*"
):
self.run_layer_test(
layers.Permute,
init_kwargs={"dims": (1, 4, 2)},
input_shape=(3, 2, 4),
)
| keras/keras/layers/reshaping/permute_test.py/0 | {
"file_path": "keras/keras/layers/reshaping/permute_test.py",
"repo_id": "keras",
"token_count": 1367
} | 197 |
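Outside the test harness, the behavior exercised above boils down to a dimension shuffle. A standalone sketch (dense input only) comparing `Permute` against `np.transpose`:

```python
import numpy as np
from keras import layers, ops

x = np.arange(6, dtype="float32").reshape((1, 2, 3))
y = layers.Permute((2, 1))(x)  # swaps the two non-batch axes
np.testing.assert_allclose(ops.convert_to_numpy(y), np.transpose(x, (0, 2, 1)))
```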
import numpy as np
from absl.testing import parameterized
from keras import backend
from keras import layers
from keras import testing
class ZeroPadding3DTest(testing.TestCase, parameterized.TestCase):
@parameterized.named_parameters(
("channels_first", "channels_first"), ("channels_last", "channels_last")
)
def test_zero_padding_3d(self, data_format):
inputs = np.random.rand(1, 2, 3, 4, 5)
outputs = layers.ZeroPadding3D(
padding=((1, 2), (3, 4), (0, 2)), data_format=data_format
)(inputs)
if data_format == "channels_first":
for index in [0, -1, -2]:
self.assertAllClose(outputs[:, :, index, :, :], 0.0)
for index in [0, 1, 2, -1, -2, -3, -4]:
self.assertAllClose(outputs[:, :, :, index, :], 0.0)
for index in [-1, -2]:
self.assertAllClose(outputs[:, :, :, :, index], 0.0)
self.assertAllClose(outputs[:, :, 1:-2, 3:-4, 0:-2], inputs)
else:
for index in [0, -1, -2]:
self.assertAllClose(outputs[:, index, :, :, :], 0.0)
for index in [0, 1, 2, -1, -2, -3, -4]:
self.assertAllClose(outputs[:, :, index, :, :], 0.0)
for index in [-1, -2]:
self.assertAllClose(outputs[:, :, :, index, :], 0.0)
self.assertAllClose(outputs[:, 1:-2, 3:-4, 0:-2, :], inputs)
@parameterized.product(
(
{"padding": ((2, 2), (2, 2), (2, 2))}, # 3 tuples
{"padding": (2, 2, 2)}, # 1 tuple
{"padding": 2}, # 1 int
),
(
{"data_format": "channels_first"},
{"data_format": "channels_last"},
),
)
def test_zero_padding_3d_with_same_padding(self, padding, data_format):
inputs = np.random.rand(1, 2, 3, 4, 5)
outputs = layers.ZeroPadding3D(
padding=padding, data_format=data_format
)(inputs)
if data_format == "channels_first":
for index in [0, 1, -1, -2]:
self.assertAllClose(outputs[:, :, index, :, :], 0.0)
self.assertAllClose(outputs[:, :, :, index, :], 0.0)
self.assertAllClose(outputs[:, :, :, :, index], 0.0)
self.assertAllClose(outputs[:, :, 2:-2, 2:-2, 2:-2], inputs)
else:
for index in [0, 1, -1, -2]:
self.assertAllClose(outputs[:, index, :, :, :], 0.0)
self.assertAllClose(outputs[:, :, index, :, :], 0.0)
self.assertAllClose(outputs[:, :, :, index, :], 0.0)
self.assertAllClose(outputs[:, 2:-2, 2:-2, 2:-2, :], inputs)
def test_zero_padding_3d_with_dynamic_spatial_dim(self):
if backend.config.image_data_format() == "channels_last":
input_layer = layers.Input(batch_shape=(1, 2, None, 4, 5))
else:
input_layer = layers.Input(batch_shape=(1, 5, 2, None, 4))
padded = layers.ZeroPadding3D(((1, 2), (3, 4), (5, 6)))(input_layer)
if backend.config.image_data_format() == "channels_last":
self.assertEqual(padded.shape, (1, 5, None, 15, 5))
else:
self.assertEqual(padded.shape, (1, 5, 5, None, 15))
def test_zero_padding_3d_errors_if_padding_argument_invalid(self):
with self.assertRaises(ValueError):
layers.ZeroPadding3D(padding=(1,))
with self.assertRaises(ValueError):
layers.ZeroPadding3D(padding=(1, 2))
with self.assertRaises(ValueError):
layers.ZeroPadding3D(padding=(1, 2, 3, 4))
with self.assertRaises(ValueError):
layers.ZeroPadding3D(padding="1")
with self.assertRaises(ValueError):
layers.ZeroPadding3D(padding=((1, 2), (3, 4), (5, 6, 7)))
with self.assertRaises(ValueError):
layers.ZeroPadding3D(padding=((1, 2), (3, 4), (5, -6)))
with self.assertRaises(ValueError):
layers.ZeroPadding3D(padding=((1, 2), (3, 4), "5"))
| keras/keras/layers/reshaping/zero_padding3d_test.py/0 | {
"file_path": "keras/keras/layers/reshaping/zero_padding3d_test.py",
"repo_id": "keras",
"token_count": 2027
} | 198 |
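The slice-wise assertions above amount to a simple shape rule: each spatial dimension grows by the sum of its two padding amounts. A quick sketch with the same padding tuple used in the first test:

```python
import numpy as np
from keras import layers

x = np.zeros((1, 2, 3, 4, 5), dtype="float32")  # channels_last
y = layers.ZeroPadding3D(((1, 2), (3, 4), (0, 2)), data_format="channels_last")(x)
assert tuple(y.shape) == (1, 5, 10, 6, 5)  # spatial dims: 2+1+2, 3+3+4, 4+0+2
```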
import tree
from keras import activations
from keras import backend
from keras import constraints
from keras import initializers
from keras import ops
from keras import regularizers
from keras.api_export import keras_export
from keras.layers.input_spec import InputSpec
from keras.layers.layer import Layer
from keras.layers.rnn.dropout_rnn_cell import DropoutRNNCell
from keras.layers.rnn.rnn import RNN
@keras_export("keras.layers.LSTMCell")
class LSTMCell(Layer, DropoutRNNCell):
"""Cell class for the LSTM layer.
This class processes one step within the whole time sequence input, whereas
`keras.layer.LSTM` processes the whole sequence.
Args:
units: Positive integer, dimensionality of the output space.
activation: Activation function to use. Default: hyperbolic tangent
(`tanh`). If you pass None, no activation is applied
(ie. "linear" activation: `a(x) = x`).
recurrent_activation: Activation function to use for the recurrent step.
Default: sigmoid (`sigmoid`). If you pass `None`, no activation is
applied (ie. "linear" activation: `a(x) = x`).
use_bias: Boolean, (default `True`), whether the layer
should use a bias vector.
kernel_initializer: Initializer for the `kernel` weights matrix,
used for the linear transformation of the inputs. Default:
`"glorot_uniform"`.
recurrent_initializer: Initializer for the `recurrent_kernel`
weights matrix, used for the linear transformation
of the recurrent state. Default: `"orthogonal"`.
bias_initializer: Initializer for the bias vector. Default: `"zeros"`.
unit_forget_bias: Boolean (default `True`). If `True`,
add 1 to the bias of the forget gate at initialization.
Setting it to `True` will also force `bias_initializer="zeros"`.
This is recommended in [Jozefowicz et al.](
https://github.com/mlresearch/v37/blob/gh-pages/jozefowicz15.pdf)
kernel_regularizer: Regularizer function applied to the `kernel` weights
matrix. Default: `None`.
recurrent_regularizer: Regularizer function applied to the
`recurrent_kernel` weights matrix. Default: `None`.
bias_regularizer: Regularizer function applied to the bias vector.
Default: `None`.
kernel_constraint: Constraint function applied to the `kernel` weights
matrix. Default: `None`.
recurrent_constraint: Constraint function applied to the
`recurrent_kernel` weights matrix. Default: `None`.
bias_constraint: Constraint function applied to the bias vector.
Default: `None`.
dropout: Float between 0 and 1. Fraction of the units to drop for the
linear transformation of the inputs. Default: 0.
recurrent_dropout: Float between 0 and 1. Fraction of the units to drop
for the linear transformation of the recurrent state. Default: 0.
seed: Random seed for dropout.
Call arguments:
inputs: A 2D tensor, with shape `(batch, features)`.
states: A 2D tensor with shape `(batch, units)`, which is the state
from the previous time step.
training: Python boolean indicating whether the layer should behave in
training mode or in inference mode. Only relevant when `dropout` or
`recurrent_dropout` is used.
Example:
>>> inputs = np.random.random((32, 10, 8))
>>> rnn = keras.layers.RNN(keras.layers.LSTMCell(4))
>>> output = rnn(inputs)
>>> output.shape
(32, 4)
>>> rnn = keras.layers.RNN(
... keras.layers.LSTMCell(4),
... return_sequences=True,
... return_state=True)
>>> whole_sequence_output, final_state = rnn(inputs)
>>> whole_sequence_output.shape
(32, 10, 4)
>>> final_state.shape
(32, 4)
"""
def __init__(
self,
units,
activation="tanh",
recurrent_activation="sigmoid",
use_bias=True,
kernel_initializer="glorot_uniform",
recurrent_initializer="orthogonal",
bias_initializer="zeros",
unit_forget_bias=True,
kernel_regularizer=None,
recurrent_regularizer=None,
bias_regularizer=None,
kernel_constraint=None,
recurrent_constraint=None,
bias_constraint=None,
dropout=0.0,
recurrent_dropout=0.0,
seed=None,
**kwargs,
):
if units <= 0:
raise ValueError(
"Received an invalid value for argument `units`, "
f"expected a positive integer, got {units}."
)
implementation = kwargs.pop("implementation", 2)
super().__init__(**kwargs)
self.units = units
self.activation = activations.get(activation)
self.recurrent_activation = activations.get(recurrent_activation)
self.use_bias = use_bias
self.kernel_initializer = initializers.get(kernel_initializer)
self.recurrent_initializer = initializers.get(recurrent_initializer)
self.bias_initializer = initializers.get(bias_initializer)
self.kernel_regularizer = regularizers.get(kernel_regularizer)
self.recurrent_regularizer = regularizers.get(recurrent_regularizer)
self.bias_regularizer = regularizers.get(bias_regularizer)
self.kernel_constraint = constraints.get(kernel_constraint)
self.recurrent_constraint = constraints.get(recurrent_constraint)
self.bias_constraint = constraints.get(bias_constraint)
self.dropout = min(1.0, max(0.0, dropout))
self.recurrent_dropout = min(1.0, max(0.0, recurrent_dropout))
self.seed = seed
self.seed_generator = backend.random.SeedGenerator(seed=seed)
self.unit_forget_bias = unit_forget_bias
self.state_size = [self.units, self.units]
self.output_size = self.units
self.implementation = implementation
def build(self, input_shape):
super().build(input_shape)
input_dim = input_shape[-1]
self.kernel = self.add_weight(
shape=(input_dim, self.units * 4),
name="kernel",
initializer=self.kernel_initializer,
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint,
)
self.recurrent_kernel = self.add_weight(
shape=(self.units, self.units * 4),
name="recurrent_kernel",
initializer=self.recurrent_initializer,
regularizer=self.recurrent_regularizer,
constraint=self.recurrent_constraint,
)
if self.use_bias:
if self.unit_forget_bias:
def bias_initializer(_, *args, **kwargs):
return ops.concatenate(
[
self.bias_initializer(
(self.units,), *args, **kwargs
),
initializers.get("ones")(
(self.units,), *args, **kwargs
),
self.bias_initializer(
(self.units * 2,), *args, **kwargs
),
]
)
else:
bias_initializer = self.bias_initializer
self.bias = self.add_weight(
shape=(self.units * 4,),
name="bias",
initializer=bias_initializer,
regularizer=self.bias_regularizer,
constraint=self.bias_constraint,
)
else:
self.bias = None
self.built = True
def _compute_carry_and_output(self, x, h_tm1, c_tm1):
"""Computes carry and output using split kernels."""
x_i, x_f, x_c, x_o = x
h_tm1_i, h_tm1_f, h_tm1_c, h_tm1_o = h_tm1
i = self.recurrent_activation(
x_i + ops.matmul(h_tm1_i, self.recurrent_kernel[:, : self.units])
)
f = self.recurrent_activation(
x_f
+ ops.matmul(
h_tm1_f, self.recurrent_kernel[:, self.units : self.units * 2]
)
)
c = f * c_tm1 + i * self.activation(
x_c
+ ops.matmul(
h_tm1_c,
self.recurrent_kernel[:, self.units * 2 : self.units * 3],
)
)
o = self.recurrent_activation(
x_o
+ ops.matmul(h_tm1_o, self.recurrent_kernel[:, self.units * 3 :])
)
return c, o
def _compute_carry_and_output_fused(self, z, c_tm1):
"""Computes carry and output using fused kernels."""
z0, z1, z2, z3 = z
i = self.recurrent_activation(z0)
f = self.recurrent_activation(z1)
c = f * c_tm1 + i * self.activation(z2)
o = self.recurrent_activation(z3)
return c, o
def call(self, inputs, states, training=False):
h_tm1 = states[0] # previous memory state
c_tm1 = states[1] # previous carry state
dp_mask = self.get_dropout_mask(inputs)
rec_dp_mask = self.get_recurrent_dropout_mask(h_tm1)
if training and 0.0 < self.dropout < 1.0:
inputs = inputs * dp_mask
if training and 0.0 < self.recurrent_dropout < 1.0:
h_tm1 = h_tm1 * rec_dp_mask
if self.implementation == 1:
inputs_i = inputs
inputs_f = inputs
inputs_c = inputs
inputs_o = inputs
k_i, k_f, k_c, k_o = ops.split(self.kernel, 4, axis=1)
x_i = ops.matmul(inputs_i, k_i)
x_f = ops.matmul(inputs_f, k_f)
x_c = ops.matmul(inputs_c, k_c)
x_o = ops.matmul(inputs_o, k_o)
if self.use_bias:
b_i, b_f, b_c, b_o = ops.split(self.bias, 4, axis=0)
x_i += b_i
x_f += b_f
x_c += b_c
x_o += b_o
h_tm1_i = h_tm1
h_tm1_f = h_tm1
h_tm1_c = h_tm1
h_tm1_o = h_tm1
x = (x_i, x_f, x_c, x_o)
h_tm1 = (h_tm1_i, h_tm1_f, h_tm1_c, h_tm1_o)
c, o = self._compute_carry_and_output(x, h_tm1, c_tm1)
else:
z = ops.matmul(inputs, self.kernel)
z += ops.matmul(h_tm1, self.recurrent_kernel)
if self.use_bias:
z += self.bias
z = ops.split(z, 4, axis=1)
c, o = self._compute_carry_and_output_fused(z, c_tm1)
h = o * self.activation(c)
return h, [h, c]
def get_config(self):
config = {
"units": self.units,
"activation": activations.serialize(self.activation),
"recurrent_activation": activations.serialize(
self.recurrent_activation
),
"use_bias": self.use_bias,
"unit_forget_bias": self.unit_forget_bias,
"kernel_initializer": initializers.serialize(
self.kernel_initializer
),
"recurrent_initializer": initializers.serialize(
self.recurrent_initializer
),
"bias_initializer": initializers.serialize(self.bias_initializer),
"kernel_regularizer": regularizers.serialize(
self.kernel_regularizer
),
"recurrent_regularizer": regularizers.serialize(
self.recurrent_regularizer
),
"bias_regularizer": regularizers.serialize(self.bias_regularizer),
"kernel_constraint": constraints.serialize(self.kernel_constraint),
"recurrent_constraint": constraints.serialize(
self.recurrent_constraint
),
"bias_constraint": constraints.serialize(self.bias_constraint),
"dropout": self.dropout,
"recurrent_dropout": self.recurrent_dropout,
"seed": self.seed,
}
base_config = super().get_config()
return {**base_config, **config}
def get_initial_state(self, batch_size=None):
return [
ops.zeros((batch_size, d), dtype=self.compute_dtype)
for d in self.state_size
]
@keras_export("keras.layers.LSTM")
class LSTM(RNN):
"""Long Short-Term Memory layer - Hochreiter 1997.
Based on available runtime hardware and constraints, this layer
will choose different implementations (cuDNN-based or backend-native)
to maximize the performance. If a GPU is available and all
the arguments to the layer meet the requirement of the cuDNN kernel
(see below for details), the layer will use a fast cuDNN implementation
when using the TensorFlow backend.
The requirements to use the cuDNN implementation are:
1. `activation` == `tanh`
2. `recurrent_activation` == `sigmoid`
3. `dropout` == 0 and `recurrent_dropout` == 0
4. `unroll` is `False`
5. `use_bias` is `True`
6. Inputs, if masking is used, are strictly right-padded.
7. Eager execution is enabled in the outermost context.
For example:
>>> inputs = np.random.random((32, 10, 8))
>>> lstm = keras.layers.LSTM(4)
>>> output = lstm(inputs)
>>> output.shape
(32, 4)
>>> lstm = keras.layers.LSTM(
... 4, return_sequences=True, return_state=True)
>>> whole_seq_output, final_memory_state, final_carry_state = lstm(inputs)
>>> whole_seq_output.shape
(32, 10, 4)
>>> final_memory_state.shape
(32, 4)
>>> final_carry_state.shape
(32, 4)
Args:
units: Positive integer, dimensionality of the output space.
activation: Activation function to use.
Default: hyperbolic tangent (`tanh`).
If you pass `None`, no activation is applied
(ie. "linear" activation: `a(x) = x`).
recurrent_activation: Activation function to use
for the recurrent step.
Default: sigmoid (`sigmoid`).
If you pass `None`, no activation is applied
(ie. "linear" activation: `a(x) = x`).
use_bias: Boolean, (default `True`), whether the layer
should use a bias vector.
kernel_initializer: Initializer for the `kernel` weights matrix,
used for the linear transformation of the inputs. Default:
`"glorot_uniform"`.
recurrent_initializer: Initializer for the `recurrent_kernel`
weights matrix, used for the linear transformation of the recurrent
state. Default: `"orthogonal"`.
bias_initializer: Initializer for the bias vector. Default: `"zeros"`.
unit_forget_bias: Boolean (default `True`). If `True`,
add 1 to the bias of the forget gate at initialization.
Setting it to `True` will also force `bias_initializer="zeros"`.
This is recommended in [Jozefowicz et al.](
https://github.com/mlresearch/v37/blob/gh-pages/jozefowicz15.pdf)
kernel_regularizer: Regularizer function applied to the `kernel` weights
matrix. Default: `None`.
recurrent_regularizer: Regularizer function applied to the
`recurrent_kernel` weights matrix. Default: `None`.
bias_regularizer: Regularizer function applied to the bias vector.
Default: `None`.
activity_regularizer: Regularizer function applied to the output of the
layer (its "activation"). Default: `None`.
kernel_constraint: Constraint function applied to the `kernel` weights
matrix. Default: `None`.
recurrent_constraint: Constraint function applied to the
`recurrent_kernel` weights matrix. Default: `None`.
bias_constraint: Constraint function applied to the bias vector.
Default: `None`.
dropout: Float between 0 and 1. Fraction of the units to drop for the
linear transformation of the inputs. Default: 0.
recurrent_dropout: Float between 0 and 1. Fraction of the units to drop
for the linear transformation of the recurrent state. Default: 0.
seed: Random seed for dropout.
return_sequences: Boolean. Whether to return the last output
in the output sequence, or the full sequence. Default: `False`.
return_state: Boolean. Whether to return the last state in addition
to the output. Default: `False`.
go_backwards: Boolean (default: `False`).
If `True`, process the input sequence backwards and return the
reversed sequence.
stateful: Boolean (default: `False`). If `True`, the last state
for each sample at index i in a batch will be used as initial
state for the sample of index i in the following batch.
unroll: Boolean (default False).
If `True`, the network will be unrolled,
else a symbolic loop will be used.
        Unrolling can speed up an RNN,
although it tends to be more memory-intensive.
Unrolling is only suitable for short sequences.
Call arguments:
inputs: A 3D tensor, with shape `(batch, timesteps, feature)`.
mask: Binary tensor of shape `(samples, timesteps)` indicating whether
a given timestep should be masked (optional).
An individual `True` entry indicates that the corresponding timestep
should be utilized, while a `False` entry indicates that the
corresponding timestep should be ignored. Defaults to `None`.
training: Python boolean indicating whether the layer should behave in
training mode or in inference mode. This argument is passed to the
cell when calling it. This is only relevant if `dropout` or
`recurrent_dropout` is used (optional). Defaults to `None`.
initial_state: List of initial state tensors to be passed to the first
call of the cell (optional, `None` causes creation
of zero-filled initial state tensors). Defaults to `None`.
"""
def __init__(
self,
units,
activation="tanh",
recurrent_activation="sigmoid",
use_bias=True,
kernel_initializer="glorot_uniform",
recurrent_initializer="orthogonal",
bias_initializer="zeros",
unit_forget_bias=True,
kernel_regularizer=None,
recurrent_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
recurrent_constraint=None,
bias_constraint=None,
dropout=0.0,
recurrent_dropout=0.0,
seed=None,
return_sequences=False,
return_state=False,
go_backwards=False,
stateful=False,
unroll=False,
**kwargs,
):
cell = LSTMCell(
units,
activation=activation,
recurrent_activation=recurrent_activation,
use_bias=use_bias,
kernel_initializer=kernel_initializer,
unit_forget_bias=unit_forget_bias,
recurrent_initializer=recurrent_initializer,
bias_initializer=bias_initializer,
kernel_regularizer=kernel_regularizer,
recurrent_regularizer=recurrent_regularizer,
bias_regularizer=bias_regularizer,
kernel_constraint=kernel_constraint,
recurrent_constraint=recurrent_constraint,
bias_constraint=bias_constraint,
dropout=dropout,
recurrent_dropout=recurrent_dropout,
dtype=kwargs.get("dtype", None),
trainable=kwargs.get("trainable", True),
name="lstm_cell",
seed=seed,
implementation=kwargs.pop("implementation", 2),
)
super().__init__(
cell,
return_sequences=return_sequences,
return_state=return_state,
go_backwards=go_backwards,
stateful=stateful,
unroll=unroll,
activity_regularizer=activity_regularizer,
**kwargs,
)
self.input_spec = InputSpec(ndim=3)
if backend.backend() == "tensorflow" and backend.cudnn_ok(
cell.activation,
cell.recurrent_activation,
self.unroll,
cell.use_bias,
):
self.supports_jit = False
def inner_loop(self, sequences, initial_state, mask, training=False):
if tree.is_nested(mask):
mask = mask[0]
if not self.dropout and not self.recurrent_dropout:
try:
# Backends are allowed to specify (optionally) optimized
# implementation of the inner LSTM loop. In the case of
# TF for instance, it will leverage cuDNN when feasible, and
# it will raise NotImplementedError otherwise.
out = backend.lstm(
sequences,
initial_state[0],
initial_state[1],
mask,
kernel=self.cell.kernel,
recurrent_kernel=self.cell.recurrent_kernel,
bias=self.cell.bias,
activation=self.cell.activation,
recurrent_activation=self.cell.recurrent_activation,
return_sequences=self.return_sequences,
go_backwards=self.go_backwards,
unroll=self.unroll,
)
# We disable jit_compile for the model in this case,
# since cuDNN ops aren't XLA compatible.
if backend.backend() == "tensorflow":
self.supports_jit = False
return out
except NotImplementedError:
pass
return super().inner_loop(
sequences, initial_state, mask=mask, training=training
)
def call(self, sequences, initial_state=None, mask=None, training=False):
return super().call(
sequences, mask=mask, training=training, initial_state=initial_state
)
@property
def units(self):
return self.cell.units
@property
def activation(self):
return self.cell.activation
@property
def recurrent_activation(self):
return self.cell.recurrent_activation
@property
def use_bias(self):
return self.cell.use_bias
@property
def unit_forget_bias(self):
return self.cell.unit_forget_bias
@property
def kernel_initializer(self):
return self.cell.kernel_initializer
@property
def recurrent_initializer(self):
return self.cell.recurrent_initializer
@property
def bias_initializer(self):
return self.cell.bias_initializer
@property
def kernel_regularizer(self):
return self.cell.kernel_regularizer
@property
def recurrent_regularizer(self):
return self.cell.recurrent_regularizer
@property
def bias_regularizer(self):
return self.cell.bias_regularizer
@property
def kernel_constraint(self):
return self.cell.kernel_constraint
@property
def recurrent_constraint(self):
return self.cell.recurrent_constraint
@property
def bias_constraint(self):
return self.cell.bias_constraint
@property
def dropout(self):
return self.cell.dropout
@property
def recurrent_dropout(self):
return self.cell.recurrent_dropout
def get_config(self):
config = {
"units": self.units,
"activation": activations.serialize(self.activation),
"recurrent_activation": activations.serialize(
self.recurrent_activation
),
"use_bias": self.use_bias,
"kernel_initializer": initializers.serialize(
self.kernel_initializer
),
"recurrent_initializer": initializers.serialize(
self.recurrent_initializer
),
"bias_initializer": initializers.serialize(self.bias_initializer),
"unit_forget_bias": self.unit_forget_bias,
"kernel_regularizer": regularizers.serialize(
self.kernel_regularizer
),
"recurrent_regularizer": regularizers.serialize(
self.recurrent_regularizer
),
"bias_regularizer": regularizers.serialize(self.bias_regularizer),
"activity_regularizer": regularizers.serialize(
self.activity_regularizer
),
"kernel_constraint": constraints.serialize(self.kernel_constraint),
"recurrent_constraint": constraints.serialize(
self.recurrent_constraint
),
"bias_constraint": constraints.serialize(self.bias_constraint),
"dropout": self.dropout,
"recurrent_dropout": self.recurrent_dropout,
"seed": self.cell.seed,
}
base_config = super().get_config()
del base_config["cell"]
return {**base_config, **config}
@classmethod
def from_config(cls, config):
return cls(**config)
| keras/keras/layers/rnn/lstm.py/0 | {
"file_path": "keras/keras/layers/rnn/lstm.py",
"repo_id": "keras",
"token_count": 11712
} | 199 |
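Complementing the docstring examples above (which wrap the cell in `RNN`/`LSTM`), here is a single-step sketch of the contract implemented by `LSTMCell.call()`: it maps `(inputs, [h, c])` to `(output, [new_h, new_c])`, and the output equals the new hidden state. The explicit `build()` call is an assumption made so the sketch does not depend on automatic shape inference:

```python
import numpy as np
from keras import layers

cell = layers.LSTMCell(4)
cell.build((32, 8))                                # (batch, features)
x = np.random.random((32, 8)).astype("float32")
h0, c0 = cell.get_initial_state(batch_size=32)     # both (32, 4)
out, (h1, c1) = cell(x, [h0, c0])
assert tuple(out.shape) == (32, 4)                  # out is the new hidden state h1
```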
"""Deprecated sequence preprocessing APIs from Keras 1."""
import json
import random
import numpy as np
from keras.api_export import keras_export
from keras.trainers.data_adapters.py_dataset_adapter import PyDataset
@keras_export("keras._legacy.preprocessing.sequence.TimeseriesGenerator")
class TimeseriesGenerator(PyDataset):
"""Utility class for generating batches of temporal data.
DEPRECATED.
This class takes in a sequence of data-points gathered at
equal intervals, along with time series parameters such as
stride, length of history, etc., to produce batches for
training/validation.
Arguments:
data: Indexable generator (such as list or Numpy array)
containing consecutive data points (timesteps).
            The data should be at least 2D, and axis 0 is expected
to be the time dimension.
targets: Targets corresponding to timesteps in `data`.
            It should have the same length as `data`.
length: Length of the output sequences (in number of timesteps).
sampling_rate: Period between successive individual timesteps
within sequences. For rate `r`, timesteps
`data[i]`, `data[i-r]`, ... `data[i - length]`
            are used to create a sample sequence.
stride: Period between successive output sequences.
For stride `s`, consecutive output samples would
be centered around `data[i]`, `data[i+s]`, `data[i+2*s]`, etc.
start_index: Data points earlier than `start_index` will not be used
in the output sequences. This is useful to reserve part of the
data for test or validation.
end_index: Data points later than `end_index` will not be used
in the output sequences. This is useful to reserve part of the
data for test or validation.
shuffle: Whether to shuffle output samples,
or instead draw them in chronological order.
reverse: Boolean: if `true`, timesteps in each output sample will be
in reverse chronological order.
batch_size: Number of timeseries samples in each batch
(except maybe the last one).
Returns:
A PyDataset instance.
"""
def __init__(
self,
data,
targets,
length,
sampling_rate=1,
stride=1,
start_index=0,
end_index=None,
shuffle=False,
reverse=False,
batch_size=128,
):
if len(data) != len(targets):
raise ValueError(
"Data and targets have to be "
f"of same length. Data length is {len(data)} "
f"while target length is {len(targets)}"
)
self.data = data
self.targets = targets
self.length = length
self.sampling_rate = sampling_rate
self.stride = stride
self.start_index = start_index + length
if end_index is None:
end_index = len(data) - 1
self.end_index = end_index
self.shuffle = shuffle
self.reverse = reverse
self.batch_size = batch_size
if self.start_index > self.end_index:
raise ValueError(
f"`start_index+length={self.start_index} "
f"> end_index={self.end_index}` "
"is disallowed, as no part of the sequence "
"would be left to be used as current step."
)
def __len__(self):
return (
self.end_index - self.start_index + self.batch_size * self.stride
) // (self.batch_size * self.stride)
def __getitem__(self, index):
if self.shuffle:
rows = np.random.randint(
self.start_index, self.end_index + 1, size=self.batch_size
)
else:
i = self.start_index + self.batch_size * self.stride * index
rows = np.arange(
i,
min(i + self.batch_size * self.stride, self.end_index + 1),
self.stride,
)
samples = np.array(
[
self.data[row - self.length : row : self.sampling_rate]
for row in rows
]
)
targets = np.array([self.targets[row] for row in rows])
if self.reverse:
return samples[:, ::-1, ...], targets
return samples, targets
def get_config(self):
"""Returns the TimeseriesGenerator configuration as Python dictionary.
Returns:
A Python dictionary with the TimeseriesGenerator configuration.
"""
data = self.data
if type(self.data).__module__ == np.__name__:
data = self.data.tolist()
try:
json_data = json.dumps(data)
except TypeError as e:
raise TypeError(f"Data not JSON Serializable: {data}") from e
targets = self.targets
if type(self.targets).__module__ == np.__name__:
targets = self.targets.tolist()
try:
json_targets = json.dumps(targets)
except TypeError as e:
raise TypeError(f"Targets not JSON Serializable: {targets}") from e
return {
"data": json_data,
"targets": json_targets,
"length": self.length,
"sampling_rate": self.sampling_rate,
"stride": self.stride,
"start_index": self.start_index,
"end_index": self.end_index,
"shuffle": self.shuffle,
"reverse": self.reverse,
"batch_size": self.batch_size,
}
def to_json(self, **kwargs):
"""Returns a JSON string containing the generator's configuration.
Args:
**kwargs: Additional keyword arguments to be passed
to `json.dumps()`.
Returns:
A JSON string containing the tokenizer configuration.
"""
config = self.get_config()
timeseries_generator_config = {
"class_name": self.__class__.__name__,
"config": config,
}
return json.dumps(timeseries_generator_config, **kwargs)
@keras_export("keras._legacy.preprocessing.sequence.make_sampling_table")
def make_sampling_table(size, sampling_factor=1e-5):
"""Generates a word rank-based probabilistic sampling table.
DEPRECATED.
Used for generating the `sampling_table` argument for `skipgrams`.
`sampling_table[i]` is the probability of sampling
    the i-th most common word in a dataset
(more common words should be sampled less frequently, for balance).
The sampling probabilities are generated according
to the sampling distribution used in word2vec:
```
p(word) = (min(1, sqrt(word_frequency / sampling_factor) /
(word_frequency / sampling_factor)))
```
We assume that the word frequencies follow Zipf's law (s=1) to derive
a numerical approximation of frequency(rank):
`frequency(rank) ~ 1/(rank * (log(rank) + gamma) + 1/2 - 1/(12*rank))`
where `gamma` is the Euler-Mascheroni constant.
Args:
size: Int, number of possible words to sample.
sampling_factor: The sampling factor in the word2vec formula.
Returns:
A 1D Numpy array of length `size` where the ith entry
is the probability that a word of rank i should be sampled.
"""
gamma = 0.577
rank = np.arange(size)
rank[0] = 1
inv_fq = rank * (np.log(rank) + gamma) + 0.5 - 1.0 / (12.0 * rank)
f = sampling_factor * inv_fq
return np.minimum(1.0, f / np.sqrt(f))
@keras_export("keras._legacy.preprocessing.sequence.skipgrams")
def skipgrams(
sequence,
vocabulary_size,
window_size=4,
negative_samples=1.0,
shuffle=True,
categorical=False,
sampling_table=None,
seed=None,
):
"""Generates skipgram word pairs.
DEPRECATED.
This function transforms a sequence of word indexes (list of integers)
into tuples of words of the form:
- (word, word in the same window), with label 1 (positive samples).
- (word, random word from the vocabulary), with label 0 (negative samples).
Read more about Skipgram in this gnomic paper by Mikolov et al.:
[Efficient Estimation of Word Representations in
Vector Space](http://arxiv.org/pdf/1301.3781v3.pdf)
Args:
sequence: A word sequence (sentence), encoded as a list
of word indices (integers). If using a `sampling_table`,
word indices are expected to match the rank
of the words in a reference dataset (e.g. 10 would encode
the 10-th most frequently occurring token).
Note that index 0 is expected to be a non-word and will be skipped.
vocabulary_size: Int, maximum possible word index + 1
window_size: Int, size of sampling windows (technically half-window).
The window of a word `w_i` will be
`[i - window_size, i + window_size+1]`.
negative_samples: Float >= 0. 0 for no negative (i.e. random) samples.
1 for same number as positive samples.
shuffle: Whether to shuffle the word couples before returning them.
categorical: bool. if False, labels will be
            integers (e.g. `[0, 1, 1 .. ]`),
if `True`, labels will be categorical, e.g.
`[[1,0],[0,1],[0,1] .. ]`.
sampling_table: 1D array of size `vocabulary_size` where the entry i
encodes the probability to sample a word of rank i.
seed: Random seed.
Returns:
couples, labels: where `couples` are int pairs and
`labels` are either 0 or 1.
Note:
By convention, index 0 in the vocabulary is
a non-word and will be skipped.
"""
couples = []
labels = []
for i, wi in enumerate(sequence):
if not wi:
continue
if sampling_table is not None:
if sampling_table[wi] < random.random():
continue
window_start = max(0, i - window_size)
window_end = min(len(sequence), i + window_size + 1)
for j in range(window_start, window_end):
if j != i:
wj = sequence[j]
if not wj:
continue
couples.append([wi, wj])
if categorical:
labels.append([0, 1])
else:
labels.append(1)
if negative_samples > 0:
num_negative_samples = int(len(labels) * negative_samples)
words = [c[0] for c in couples]
random.shuffle(words)
couples += [
[words[i % len(words)], random.randint(1, vocabulary_size - 1)]
for i in range(num_negative_samples)
]
if categorical:
labels += [[1, 0]] * num_negative_samples
else:
labels += [0] * num_negative_samples
if shuffle:
if seed is None:
            seed = random.randint(0, int(10e6))
random.seed(seed)
random.shuffle(couples)
random.seed(seed)
random.shuffle(labels)
return couples, labels
| keras/keras/legacy/preprocessing/sequence.py/0 | {
"file_path": "keras/keras/legacy/preprocessing/sequence.py",
"repo_id": "keras",
"token_count": 4854
} | 200 |
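A minimal sketch of the deprecated `TimeseriesGenerator` shown above (the import path is assumed to follow the file location of this module): with `length=3` and `batch_size=2`, the first batch pairs each window of three past steps with the target at the step that follows it:

```python
import numpy as np
from keras.legacy.preprocessing.sequence import TimeseriesGenerator

data = np.arange(10).reshape(-1, 1)   # 10 timesteps, 1 feature
targets = np.arange(10)
gen = TimeseriesGenerator(data, targets, length=3, batch_size=2)
x, y = gen[0]
# x.shape == (2, 3, 1); x[0] is [[0], [1], [2]] and y[0] is 3
```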
from keras import backend
from keras import ops
from keras.api_export import keras_export
from keras.losses.loss import squeeze_or_expand_to_same_rank
from keras.metrics import reduction_metrics
def accuracy(y_true, y_pred):
y_pred = ops.convert_to_tensor(y_pred)
y_true = ops.convert_to_tensor(y_true, dtype=y_pred.dtype)
y_true, y_pred = squeeze_or_expand_to_same_rank(y_true, y_pred)
return ops.mean(
ops.cast(ops.equal(y_true, y_pred), dtype=backend.floatx()),
axis=-1,
)
@keras_export("keras.metrics.Accuracy")
class Accuracy(reduction_metrics.MeanMetricWrapper):
"""Calculates how often predictions equal labels.
This metric creates two local variables, `total` and `count` that are used
to compute the frequency with which `y_pred` matches `y_true`. This
    frequency is ultimately returned as `accuracy`: an idempotent
operation that simply divides `total` by `count`.
If `sample_weight` is `None`, weights default to 1.
Use `sample_weight` of 0 to mask values.
Args:
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Standalone usage:
>>> m = keras.metrics.Accuracy()
>>> m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]])
>>> m.result()
0.75
>>> m.reset_state()
>>> m.update_state([[1], [2], [3], [4]], [[0], [2], [3], [4]],
... sample_weight=[1, 1, 0, 0])
>>> m.result()
0.5
Usage with `compile()` API:
```python
model.compile(optimizer='sgd',
loss='binary_crossentropy',
metrics=[keras.metrics.Accuracy()])
```
"""
def __init__(self, name="accuracy", dtype=None):
super().__init__(fn=accuracy, name=name, dtype=dtype)
# Metric should be maximized during optimization.
self._direction = "up"
def get_config(self):
return {"name": self.name, "dtype": self.dtype}
@keras_export("keras.metrics.binary_accuracy")
def binary_accuracy(y_true, y_pred, threshold=0.5):
y_true = ops.convert_to_tensor(y_true)
y_pred = ops.convert_to_tensor(y_pred)
y_true, y_pred = squeeze_or_expand_to_same_rank(y_true, y_pred)
threshold = ops.cast(threshold, y_pred.dtype)
y_pred = ops.cast(y_pred > threshold, y_true.dtype)
return ops.mean(
ops.cast(ops.equal(y_true, y_pred), dtype=backend.floatx()),
axis=-1,
)
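# A minimal sketch of calling the functional form directly with an explicit
# threshold; the example values are made up, and the `BinaryAccuracy` class
# below is the usual entry point.
def _binary_accuracy_example():
    y_true = [[1.0], [1.0], [0.0], [0.0]]
    y_pred = [[0.98], [0.40], [0.10], [0.60]]
    # With `threshold=0.5` the predictions become [1, 0, 0, 1], so the
    # per-sample matches are [1., 0., 1., 0.].
    return binary_accuracy(y_true, y_pred, threshold=0.5)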
@keras_export("keras.metrics.BinaryAccuracy")
class BinaryAccuracy(reduction_metrics.MeanMetricWrapper):
"""Calculates how often predictions match binary labels.
This metric creates two local variables, `total` and `count` that are used
to compute the frequency with which `y_pred` matches `y_true`. This
frequency is ultimately returned as `binary accuracy`: an idempotent
operation that simply divides `total` by `count`.
If `sample_weight` is `None`, weights default to 1.
Use `sample_weight` of 0 to mask values.
Args:
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
threshold: (Optional) Float representing the threshold for deciding
whether prediction values are 1 or 0.
Standalone usage:
>>> m = keras.metrics.BinaryAccuracy()
>>> m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]])
>>> m.result()
0.75
>>> m.reset_state()
>>> m.update_state([[1], [1], [0], [0]], [[0.98], [1], [0], [0.6]],
... sample_weight=[1, 0, 0, 1])
>>> m.result()
0.5
Usage with `compile()` API:
```python
model.compile(optimizer='sgd',
loss='binary_crossentropy',
metrics=[keras.metrics.BinaryAccuracy()])
```
"""
def __init__(self, name="binary_accuracy", dtype=None, threshold=0.5):
if threshold is not None and (threshold <= 0 or threshold >= 1):
raise ValueError(
"Invalid value for argument `threshold`. "
"Expected a value in interval (0, 1). "
f"Received: threshold={threshold}"
)
super().__init__(
fn=binary_accuracy, name=name, dtype=dtype, threshold=threshold
)
self.threshold = threshold
# Metric should be maximized during optimization.
self._direction = "up"
def get_config(self):
return {
"name": self.name,
"dtype": self.dtype,
"threshold": self.threshold,
}
@keras_export("keras.metrics.categorical_accuracy")
def categorical_accuracy(y_true, y_pred):
y_true = ops.argmax(y_true, axis=-1)
reshape_matches = False
y_pred = ops.convert_to_tensor(y_pred)
y_true = ops.convert_to_tensor(y_true, dtype=y_true.dtype)
y_true_org_shape = ops.shape(y_true)
y_pred_rank = len(y_pred.shape)
y_true_rank = len(y_true.shape)
# If the shape of y_true is (num_samples, 1), squeeze to (num_samples,)
if (
(y_true_rank is not None)
and (y_pred_rank is not None)
and (len(y_true.shape) == len(y_pred.shape))
):
y_true = ops.squeeze(y_true, -1)
reshape_matches = True
y_pred = ops.argmax(y_pred, axis=-1)
# If the predicted output and actual output types don't match, force cast
# them to match.
if y_pred.dtype != y_true.dtype:
y_pred = ops.cast(y_pred, dtype=y_true.dtype)
matches = ops.cast(ops.equal(y_true, y_pred), backend.floatx())
if reshape_matches:
matches = ops.reshape(matches, y_true_org_shape)
return matches
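# A minimal sketch of the functional form; the one-hot targets and predicted
# probabilities are made-up values, and the `CategoricalAccuracy` class below
# is the usual entry point.
def _categorical_accuracy_example():
    y_true = [[0, 0, 1], [0, 1, 0]]  # one-hot targets
    y_pred = [[0.1, 0.2, 0.7], [0.6, 0.3, 0.1]]  # predicted probabilities
    # Returns one 0./1. match per sample ([1., 0.] here); their mean is the
    # categorical accuracy.
    return categorical_accuracy(y_true, y_pred)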
@keras_export("keras.metrics.CategoricalAccuracy")
class CategoricalAccuracy(reduction_metrics.MeanMetricWrapper):
"""Calculates how often predictions match one-hot labels.
    You can provide logits of classes as `y_pred`, since the argmax of
    logits and of probabilities is the same.
This metric creates two local variables, `total` and `count` that are used
to compute the frequency with which `y_pred` matches `y_true`. This
frequency is ultimately returned as `categorical accuracy`: an idempotent
operation that simply divides `total` by `count`.
`y_pred` and `y_true` should be passed in as vectors of probabilities,
rather than as labels. If necessary, use `ops.one_hot` to expand `y_true` as
a vector.
If `sample_weight` is `None`, weights default to 1.
Use `sample_weight` of 0 to mask values.
Args:
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Standalone usage:
>>> m = keras.metrics.CategoricalAccuracy()
>>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],
... [0.05, 0.95, 0]])
>>> m.result()
0.5
>>> m.reset_state()
>>> m.update_state([[0, 0, 1], [0, 1, 0]], [[0.1, 0.9, 0.8],
... [0.05, 0.95, 0]],
... sample_weight=[0.7, 0.3])
>>> m.result()
0.3
Usage with `compile()` API:
```python
model.compile(optimizer='sgd',
loss='categorical_crossentropy',
metrics=[keras.metrics.CategoricalAccuracy()])
```
"""
def __init__(self, name="categorical_accuracy", dtype=None):
super().__init__(fn=categorical_accuracy, name=name, dtype=dtype)
# Metric should be maximized during optimization.
self._direction = "up"
def get_config(self):
return {"name": self.name, "dtype": self.dtype}
@keras_export("keras.metrics.sparse_categorical_accuracy")
def sparse_categorical_accuracy(y_true, y_pred):
reshape_matches = False
y_pred = ops.convert_to_tensor(y_pred)
y_true = ops.convert_to_tensor(y_true, dtype=y_true.dtype)
y_true_org_shape = ops.shape(y_true)
y_pred_rank = len(y_pred.shape)
y_true_rank = len(y_true.shape)
# If the shape of y_true is (num_samples, 1), squeeze to (num_samples,)
if (
(y_true_rank is not None)
and (y_pred_rank is not None)
and (len(y_true.shape) == len(y_pred.shape))
and ops.shape(y_true)[-1] == 1
):
y_true = ops.squeeze(y_true, -1)
reshape_matches = True
y_pred = ops.argmax(y_pred, axis=-1)
# If the predicted output and actual output types don't match, force cast
# them to match.
if y_pred.dtype != y_true.dtype:
y_pred = ops.cast(y_pred, y_true.dtype)
matches = ops.cast(ops.equal(y_true, y_pred), backend.floatx())
if reshape_matches:
matches = ops.reshape(matches, y_true_org_shape)
# if shape is (num_samples, 1) squeeze
if len(matches.shape) > 1 and matches.shape[-1] == 1:
matches = ops.squeeze(matches, -1)
return matches
@keras_export("keras.metrics.SparseCategoricalAccuracy")
class SparseCategoricalAccuracy(reduction_metrics.MeanMetricWrapper):
"""Calculates how often predictions match integer labels.
```python
    acc = np.dot(sample_weight, np.equal(y_true, np.argmax(y_pred, axis=1)))
```
    You can provide logits of classes as `y_pred`, since the argmax of
    logits and of probabilities is the same.
This metric creates two local variables, `total` and `count` that are used
to compute the frequency with which `y_pred` matches `y_true`. This
frequency is ultimately returned as `sparse categorical accuracy`: an
idempotent operation that simply divides `total` by `count`.
If `sample_weight` is `None`, weights default to 1.
Use `sample_weight` of 0 to mask values.
Args:
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Standalone usage:
>>> m = keras.metrics.SparseCategoricalAccuracy()
>>> m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]])
>>> m.result()
0.5
>>> m.reset_state()
>>> m.update_state([[2], [1]], [[0.1, 0.6, 0.3], [0.05, 0.95, 0]],
... sample_weight=[0.7, 0.3])
>>> m.result()
0.3
Usage with `compile()` API:
```python
model.compile(optimizer='sgd',
loss='sparse_categorical_crossentropy',
metrics=[keras.metrics.SparseCategoricalAccuracy()])
```
"""
def __init__(self, name="sparse_categorical_accuracy", dtype=None):
super().__init__(fn=sparse_categorical_accuracy, name=name, dtype=dtype)
# Metric should be maximized during optimization.
self._direction = "up"
def get_config(self):
return {"name": self.name, "dtype": self.dtype}
@keras_export("keras.metrics.top_k_categorical_accuracy")
def top_k_categorical_accuracy(y_true, y_pred, k=5):
reshape_matches = False
y_pred = ops.convert_to_tensor(y_pred)
y_true = ops.convert_to_tensor(y_true, dtype=y_true.dtype)
y_true = ops.argmax(y_true, axis=-1)
y_true_rank = len(y_true.shape)
y_pred_rank = len(y_pred.shape)
y_true_org_shape = ops.shape(y_true)
# Flatten y_pred to (batch_size, num_samples) and y_true to (num_samples,)
if (y_true_rank is not None) and (y_pred_rank is not None):
if y_pred_rank > 2:
y_pred = ops.reshape(y_pred, [-1, y_pred.shape[-1]])
if y_true_rank > 1:
reshape_matches = True
y_true = ops.reshape(y_true, [-1])
matches = ops.cast(
ops.in_top_k(ops.cast(y_true, "int32"), y_pred, k=k),
dtype=backend.floatx(),
)
# returned matches is expected to have same shape as y_true input
if reshape_matches:
matches = ops.reshape(matches, y_true_org_shape)
return matches
@keras_export("keras.metrics.TopKCategoricalAccuracy")
class TopKCategoricalAccuracy(reduction_metrics.MeanMetricWrapper):
"""Computes how often targets are in the top `K` predictions.
Args:
k: (Optional) Number of top elements to look at for computing accuracy.
Defaults to `5`.
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Standalone usage:
>>> m = keras.metrics.TopKCategoricalAccuracy(k=1)
>>> m.update_state([[0, 0, 1], [0, 1, 0]],
... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
>>> m.result()
0.5
>>> m.reset_state()
>>> m.update_state([[0, 0, 1], [0, 1, 0]],
... [[0.1, 0.9, 0.8], [0.05, 0.95, 0]],
... sample_weight=[0.7, 0.3])
>>> m.result()
0.3
Usage with `compile()` API:
```python
model.compile(optimizer='sgd',
loss='categorical_crossentropy',
metrics=[keras.metrics.TopKCategoricalAccuracy()])
```
"""
def __init__(self, k=5, name="top_k_categorical_accuracy", dtype=None):
super().__init__(
fn=top_k_categorical_accuracy,
name=name,
dtype=dtype,
k=k,
)
self.k = k
# Metric should be maximized during optimization.
self._direction = "up"
def get_config(self):
return {"name": self.name, "dtype": self.dtype, "k": self.k}
@keras_export("keras.metrics.sparse_top_k_categorical_accuracy")
def sparse_top_k_categorical_accuracy(y_true, y_pred, k=5):
reshape_matches = False
y_pred = ops.convert_to_tensor(y_pred)
y_true = ops.convert_to_tensor(y_true, dtype=y_true.dtype)
y_true_rank = len(y_true.shape)
y_pred_rank = len(y_pred.shape)
y_true_org_shape = ops.shape(y_true)
# Flatten y_pred to (batch_size, num_samples) and y_true to (num_samples,)
if (y_true_rank is not None) and (y_pred_rank is not None):
if y_pred_rank > 2:
y_pred = ops.reshape(y_pred, [-1, y_pred.shape[-1]])
if y_true_rank > 1:
reshape_matches = True
y_true = ops.reshape(y_true, [-1])
matches = ops.cast(
ops.in_top_k(ops.cast(y_true, "int32"), y_pred, k=k),
dtype=backend.floatx(),
)
# returned matches is expected to have same shape as y_true input
if reshape_matches:
matches = ops.reshape(matches, y_true_org_shape)
return matches
@keras_export("keras.metrics.SparseTopKCategoricalAccuracy")
class SparseTopKCategoricalAccuracy(reduction_metrics.MeanMetricWrapper):
"""Computes how often integer targets are in the top `K` predictions.
Args:
k: (Optional) Number of top elements to look at for computing accuracy.
Defaults to `5`.
name: (Optional) string name of the metric instance.
dtype: (Optional) data type of the metric result.
Standalone usage:
>>> m = keras.metrics.SparseTopKCategoricalAccuracy(k=1)
>>> m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]])
>>> m.result()
0.5
>>> m.reset_state()
>>> m.update_state([2, 1], [[0.1, 0.9, 0.8], [0.05, 0.95, 0]],
... sample_weight=[0.7, 0.3])
>>> m.result()
0.3
Usage with `compile()` API:
```python
model.compile(optimizer='sgd',
loss='sparse_categorical_crossentropy',
metrics=[keras.metrics.SparseTopKCategoricalAccuracy()])
```
"""
def __init__(
self, k=5, name="sparse_top_k_categorical_accuracy", dtype=None
):
super().__init__(
fn=sparse_top_k_categorical_accuracy,
name=name,
dtype=dtype,
k=k,
)
self.k = k
# Metric should be maximized during optimization.
self._direction = "up"
def get_config(self):
return {"name": self.name, "dtype": self.dtype, "k": self.k}
| keras/keras/metrics/accuracy_metrics.py/0 | {
"file_path": "keras/keras/metrics/accuracy_metrics.py",
"repo_id": "keras",
"token_count": 6920
} | 201 |
import numpy as np
from keras import testing
from keras.metrics import reduction_metrics
from keras.saving import register_keras_serializable
class SumTest(testing.TestCase):
def test_config(self):
sum_obj = reduction_metrics.Sum(name="sum", dtype="float32")
self.assertEqual(sum_obj.name, "sum")
self.assertEqual(len(sum_obj.variables), 1)
self.assertEqual(sum_obj._dtype, "float32")
# Check save and restore config
sum_obj2 = reduction_metrics.Sum.from_config(sum_obj.get_config())
self.assertEqual(sum_obj2.name, "sum")
self.assertEqual(len(sum_obj2.variables), 1)
self.assertEqual(sum_obj2._dtype, "float32")
def test_unweighted(self):
sum_obj = reduction_metrics.Sum(name="sum", dtype="float32")
sum_obj.update_state([1, 3, 5, 7])
result = sum_obj.result()
self.assertAllClose(result, 16.0, atol=1e-3)
def test_weighted(self):
sum_obj = reduction_metrics.Sum(name="sum", dtype="float32")
sum_obj.update_state([1, 3, 5, 7], sample_weight=[1, 1, 0, 0])
result = sum_obj.result()
self.assertAllClose(result, 4.0, atol=1e-3)
def test_weighted_nd(self):
sum_obj = reduction_metrics.Sum(name="sum", dtype="float32")
sum_obj.update_state([[1, 3], [5, 7]], sample_weight=[[1, 1], [1, 0]])
result = sum_obj.result()
self.assertAllClose(result, 9.0, atol=1e-3)
class MeanTest(testing.TestCase):
def test_config(self):
mean_obj = reduction_metrics.Mean(name="mean", dtype="float32")
self.assertEqual(mean_obj.name, "mean")
self.assertEqual(len(mean_obj.variables), 2)
self.assertEqual(mean_obj._dtype, "float32")
# Check save and restore config
mean_obj2 = reduction_metrics.Mean.from_config(mean_obj.get_config())
self.assertEqual(mean_obj2.name, "mean")
self.assertEqual(len(mean_obj2.variables), 2)
self.assertEqual(mean_obj2._dtype, "float32")
def test_unweighted(self):
mean_obj = reduction_metrics.Mean(name="mean", dtype="float32")
mean_obj.update_state([1, 3, 5, 7])
result = mean_obj.result()
self.assertAllClose(result, 4.0, atol=1e-3)
def test_weighted(self):
mean_obj = reduction_metrics.Mean(name="mean", dtype="float32")
mean_obj.update_state([1, 3, 5, 7], sample_weight=[1, 1, 0, 0])
result = mean_obj.result()
self.assertAllClose(result, 2.0, atol=1e-3)
def test_weighted_negative_weights(self):
mean_obj = reduction_metrics.Mean(name="mean", dtype="float32")
mean_obj.update_state([1, 3, 5, 7], sample_weight=[-1, -1, 0, 0])
result = mean_obj.result()
self.assertAllClose(result, 2.0, atol=1e-3)
def test_weighted_nd(self):
mean_obj = reduction_metrics.Mean(name="mean", dtype="float32")
mean_obj.update_state([[1, 3], [5, 7]], sample_weight=[[1, 1], [1, 0]])
result = mean_obj.result()
self.assertAllClose(result, 3.0, atol=1e-3)
# How users would register a custom function or class to use with
# MeanMetricWrapper.
@register_keras_serializable(package="test", name="mse")
def mse(y_true, y_pred):
return (y_true - y_pred) ** 2
class MetricWrapperTest(testing.TestCase):
def test_config(self):
mse_obj = reduction_metrics.MeanMetricWrapper(
fn=mse, name="mse", dtype="float32"
)
self.assertEqual(mse_obj.name, "mse")
self.assertEqual(len(mse_obj.variables), 2)
self.assertEqual(mse_obj._dtype, "float32")
# Check save and restore config
mse_obj2 = reduction_metrics.MeanMetricWrapper.from_config(
mse_obj.get_config()
)
self.assertEqual(mse_obj2.name, "mse")
self.assertEqual(len(mse_obj2.variables), 2)
self.assertEqual(mse_obj2._dtype, "float32")
self.assertTrue("fn" in mse_obj2.get_config())
def test_unweighted(self):
mse_obj = reduction_metrics.MeanMetricWrapper(
fn=mse, name="mse", dtype="float32"
)
y_true = np.array(
[[0, 1, 0, 1, 0], [0, 0, 1, 1, 1], [1, 1, 1, 1, 0], [0, 0, 0, 0, 1]]
)
y_pred = np.array(
[[0, 0, 1, 1, 0], [1, 1, 1, 1, 1], [0, 1, 0, 1, 0], [1, 1, 1, 1, 1]]
)
mse_obj.update_state(y_true, y_pred)
result = mse_obj.result()
self.assertAllClose(0.5, result, atol=1e-5)
def test_weighted(self):
mse_obj = reduction_metrics.MeanMetricWrapper(
fn=mse, name="mse", dtype="float32"
)
y_true = np.array(
[[0, 1, 0, 1, 0], [0, 0, 1, 1, 1], [1, 1, 1, 1, 0], [0, 0, 0, 0, 1]]
)
y_pred = np.array(
[[0, 0, 1, 1, 0], [1, 1, 1, 1, 1], [0, 1, 0, 1, 0], [1, 1, 1, 1, 1]]
)
sample_weight = np.array([1.0, 1.5, 2.0, 2.5])
result = mse_obj(y_true, y_pred, sample_weight=sample_weight)
self.assertAllClose(0.54285, result, atol=1e-5)
| keras/keras/metrics/reduction_metrics_test.py/0 | {
"file_path": "keras/keras/metrics/reduction_metrics_test.py",
"repo_id": "keras",
"token_count": 2434
} | 202 |
import contextlib
from unittest.mock import Mock
import numpy as np
import pytest
import tree
from absl.testing import parameterized
from keras import backend
from keras import layers
from keras import losses
from keras import models
from keras import ops
from keras import optimizers
from keras import testing
from keras.backend.common.keras_tensor import KerasTensor
from keras.backend.common.variables import ALLOWED_DTYPES
from keras.ops import core
class CoreOpsStaticShapeTest(testing.TestCase):
def test_scatter(self):
indices = KerasTensor((5, 2))
values = KerasTensor((5,))
shape = (4, 4)
self.assertEqual(core.scatter(indices, values, shape).shape, (4, 4))
def test_scatter_update(self):
inputs = KerasTensor((4, 4))
indices = KerasTensor((5, 2))
updates = KerasTensor((5,))
self.assertEqual(
core.scatter_update(inputs, indices, updates).shape, (4, 4)
)
inputs = KerasTensor((4, 4, 4))
indices = KerasTensor((5, 2))
updates = KerasTensor((5, 4))
self.assertEqual(
core.scatter_update(inputs, indices, updates).shape, (4, 4, 4)
)
def test_slice_update(self):
inputs = KerasTensor((4, 4))
start_indices = KerasTensor((2,))
updates = KerasTensor((2, 2))
self.assertEqual(
core.slice_update(inputs, start_indices, updates).shape, (4, 4)
)
inputs = KerasTensor((4, 4, 4))
start_indices = KerasTensor((3,))
updates = KerasTensor((2, 2, 2))
self.assertEqual(
core.slice_update(inputs, start_indices, updates).shape, (4, 4, 4)
)
def test_fori_loop(self):
def body_fun(i, x):
return x + i
initial_value = KerasTensor((3, 5, 7))
result = core.fori_loop(0, 10, body_fun, initial_value)
self.assertEqual(result.shape, (3, 5, 7))
def test_unstack(self):
x = KerasTensor((2, 3, 4))
axis = 1
out = core.unstack(x, axis=axis)
self.assertEqual(len(out), 3)
for o in out:
self.assertEqual(o.shape, (2, 4))
x = KerasTensor((2, None, None))
axis, num = 1, 3
out = core.unstack(x, num=num, axis=axis)
self.assertEqual(len(out), 3)
for o in out:
self.assertEqual(o.shape, (2, None))
with self.assertRaisesRegex(
ValueError, r"Cannot infer argument `num` from shape"
):
core.unstack(x, axis=axis)
class CoreOpsCorrectnessTest(testing.TestCase, parameterized.TestCase):
def test_scatter(self):
# Test 1D
indices = np.array([[1], [3], [4], [7]])
values = np.array([9, 10, 11, 12])
self.assertAllClose(
core.scatter(indices, values, (8,)),
[0, 9, 0, 10, 11, 0, 0, 12],
)
# Test 2D
indices = np.array([[0, 1], [2, 0]])
values = np.array([5, 10])
self.assertAllClose(
core.scatter(indices, values, (3, 2)), [[0, 5], [0, 0], [10, 0]]
)
# Test 3D
indices = np.array([[1], [3]])
values = np.array(
[
[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
]
)
self.assertAllClose(
core.scatter(indices, values, (4, 4, 4)),
[
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
],
)
# Test slices
indices = np.array([[2], [4]])
values = np.array([[1, 2, 3], [4, 5, 6]])
self.assertAllClose(
core.scatter(indices, values, (6, 3)),
[[0, 0, 0], [0, 0, 0], [1, 2, 3], [0, 0, 0], [4, 5, 6], [0, 0, 0]],
)
# Duplicate indices
indices = np.array([[0], [0]])
values = np.array([1, 1])
self.assertAllClose(core.scatter(indices, values, (1,)), [2])
def test_scatter_update(self):
# Test 1D.
inputs = np.array([0, 0, 0, 0, 0, 0, 0, 0])
indices = [[1], [3], [4], [7]]
updates = np.array([9, 10, 11, 12])
self.assertAllClose(
core.scatter_update(inputs, indices, updates),
[0, 9, 0, 10, 11, 0, 0, 12],
)
# Test 2D.
inputs = np.array([[1, 1], [1, 1], [1, 1]])
indices = [[0, 1], [2, 0]]
updates = np.array([5, 10])
self.assertAllClose(
core.scatter_update(inputs, indices, updates),
[[1, 5], [1, 1], [10, 1]],
)
# Test updates has multiple dimension.
inputs = np.ones([4, 4, 4])
indices = [[1, 1], [2, 2]]
updates = np.array([[0, 1, 2, 3], [3, 2, 1, 0]], dtype=np.float64)
outputs = core.scatter_update(inputs, indices, updates)
self.assertTrue(ops.is_tensor(outputs))
self.assertAllClose(outputs[1, 1, :], [0, 1, 2, 3])
self.assertAllClose(outputs[2, 2, :], [3, 2, 1, 0])
def test_slice(self):
# Test 1D.
inputs = np.arange(10)
start_indices = np.array([1])
shape = np.array([4])
self.assertAllClose(
core.slice(inputs, start_indices, shape),
[1, 2, 3, 4],
)
# Test 2D.
inputs = np.broadcast_to(np.arange(10), (4, 10))
start_indices = np.array([1, 1])
shape = np.array([2, 4])
self.assertAllClose(
core.slice(inputs, start_indices, shape),
[[1, 2, 3, 4], [1, 2, 3, 4]],
)
# Test N-D.
inputs = np.broadcast_to(np.arange(10), (4, 4, 4, 10))
start_indices = np.array([1, 1, 1, 1])
shape = np.array([1, 2, 3, 4])
outputs = core.slice(inputs, start_indices, shape)
expected = np.broadcast_to(np.arange(1, 5), (1, 2, 3, 4))
self.assertAllClose(outputs, expected)
def test_dynamic_slice(self):
def cond(index, inputs, sum):
return index < 10
def body(index, inputs, sum):
sum = sum + core.slice(inputs, [index], [1])
index = index + 1
return index, inputs, sum
index, inputs, sum = 0, np.arange(10), np.array([0])
index, inputs, sum = core.while_loop(cond, body, (index, inputs, sum))
self.assertAllClose(sum, [45])
def test_slice_update(self):
# Test 1D.
inputs = np.array([0, 0, 0, 0, 0, 0, 0, 0])
start_indices = np.array([1])
updates = np.array([9, 10, 11, 12])
self.assertAllClose(
core.slice_update(inputs, start_indices, updates),
[0, 9, 10, 11, 12, 0, 0, 0],
)
# Test 2D.
inputs = np.array([[1, 1], [1, 1], [1, 1]])
start_indices = [1, 0]
updates = np.array([[2, 2], [2, 2]])
self.assertAllClose(
core.slice_update(inputs, start_indices, updates),
[[1, 1], [2, 2], [2, 2]],
)
# Test N-D.
inputs = np.ones([4, 4, 4, 4])
start_indices = [1, 1, 2, 2]
updates = np.zeros([2, 2, 2, 2])
outputs = core.slice_update(inputs, start_indices, updates)
self.assertAllClose(outputs[1:3, 1:3, 2:4, 2:4], np.zeros([2, 2, 2, 2]))
@parameterized.named_parameters(
[
{
"testcase_name": "with_max",
"state": (np.array(0), np.array(1)),
"output": (np.array(5), np.array(6)),
"maximum_iterations": 5,
},
{
"testcase_name": "no_max",
"state": (np.array(0), np.array(1)),
"output": (np.array(10), np.array(11)),
"maximum_iterations": None,
},
]
)
def test_while_loop_list_data(self, state, output, maximum_iterations):
def cond(*args):
return tree.flatten(args)[0] < 10
def body(*args):
return tree.map_structure(lambda x: x + 1, args)
state = core.while_loop(
cond, body, state, maximum_iterations=maximum_iterations
)
tree.map_structure(self.assertAllClose, state, output)
@parameterized.named_parameters(
[
{
"testcase_name": "scalar_data_with_max",
"state": np.array(0),
"output": np.array(5),
"maximum_iterations": 5,
},
{
"testcase_name": "scalar_data_no_max",
"state": np.array(0),
"output": np.array(10),
"maximum_iterations": None,
},
{
"testcase_name": "nested_data_with_max",
"state": {
"a": np.array(0),
"b": (np.array(1), np.array(2)),
},
"output": {
"a": np.array(5),
"b": (np.array(6), np.array(7)),
},
"maximum_iterations": 5,
},
{
"testcase_name": "nested_data_no_max",
"state": {
"a": np.array(0),
"b": (np.array(1), np.array(2)),
},
"output": {
"a": np.array(10),
"b": (np.array(11), np.array(12)),
},
"maximum_iterations": None,
},
]
)
def test_while_loop(self, state, output, maximum_iterations):
def cond(args):
return tree.flatten(args)[0] < 10
def body(args):
return tree.map_structure(lambda x: x + 1, args)
state = core.while_loop(
cond, body, state, maximum_iterations=maximum_iterations
)
tree.map_structure(self.assertAllClose, state, output)
def test_fori_loop(self):
def body_fun(i, x):
return x + i
initial_value = np.array(0)
result = core.fori_loop(0, 10, body_fun, initial_value)
self.assertAllClose(result, 45)
@pytest.mark.requires_trainable_backend
def test_stop_gradient(self):
class ExampleLayer(layers.Layer):
def __init__(self):
super().__init__()
self.w = self.add_weight(shape=(1,), initializer="zeros")
self.b = self.add_weight(shape=(1,), initializer="zeros")
def call(self, x, training=False):
return x * ops.stop_gradient(self.w.value) + self.b
model = models.Sequential([ExampleLayer()])
model.compile(
optimizer=optimizers.SGD(), loss=losses.MeanSquaredError()
)
rng = np.random.default_rng(0)
x = np.ones((2, 4), dtype=np.float32)
y = rng.standard_normal((2, 4), dtype=np.float32)
model.fit(x, y, epochs=1, batch_size=2)
self.assertEqual(model.layers[0].w.numpy(), 0.0)
self.assertNotEqual(model.layers[0].b.numpy(), 0.0)
def test_stop_gradient_return(self):
x = ops.random.uniform(shape=(2, 4), dtype="float32")
y = ops.stop_gradient(x)
self.assertAllClose(x, y)
def test_shape(self):
x = np.ones((2, 3, 7, 1))
self.assertAllEqual(core.shape(x), (2, 3, 7, 1))
x = KerasTensor((None, 3, None, 1))
self.assertAllEqual(core.shape(x), (None, 3, None, 1))
@pytest.mark.skipif(
not backend.SUPPORTS_SPARSE_TENSORS,
reason="Backend does not support sparse tensors.",
)
def test_shape_sparse(self):
if backend.backend() == "tensorflow":
import tensorflow as tf
x = tf.SparseTensor([[0, 0], [1, 2]], [1.0, 2.0], (2, 3))
elif backend.backend() == "jax":
import jax.experimental.sparse as jax_sparse
x = jax_sparse.BCOO(([1.0, 2.0], [[0, 0], [1, 2]]), shape=(2, 3))
else:
self.fail(f"Sparse is unsupported with backend {backend.backend()}")
self.assertAllEqual(core.shape(x), (2, 3))
def test_convert_to_tensor(self):
x = np.ones((2,))
x = ops.convert_to_tensor(x)
x = ops.convert_to_numpy(x)
self.assertAllEqual(x, (1, 1))
self.assertIsInstance(x, np.ndarray)
# Empty lists should give an empty array.
x = ops.convert_to_tensor([])
np_x = ops.convert_to_numpy(x)
self.assertTrue(ops.is_tensor(x))
self.assertAllEqual(x, [])
self.assertIsInstance(np_x, np.ndarray)
# Partially converted.
x = ops.convert_to_tensor((1, ops.array(2), 3))
self.assertAllEqual(x, (1, 2, 3))
with self.assertRaises(ValueError):
ops.convert_to_numpy(KerasTensor((2,)))
@pytest.mark.skipif(
not backend.SUPPORTS_SPARSE_TENSORS,
reason="Backend does not support sparse tensors.",
)
def test_convert_to_tensor_sparse(self):
if backend.backend() == "tensorflow":
import tensorflow as tf
x = tf.SparseTensor([[0, 0], [1, 2]], [1.0, 2.0], (2, 3))
sparse_class = tf.SparseTensor
elif backend.backend() == "jax":
import jax.experimental.sparse as jax_sparse
x = jax_sparse.BCOO(([1.0, 2.0], [[0, 0], [1, 2]]), shape=(2, 3))
sparse_class = jax_sparse.JAXSparse
else:
self.fail(f"Sparse is unsupported with backend {backend.backend()}")
x_default = ops.convert_to_tensor(x)
self.assertIsInstance(x_default, sparse_class)
self.assertAllClose(x, x_default)
x_sparse = ops.convert_to_tensor(x, sparse=True)
self.assertIsInstance(x_sparse, sparse_class)
self.assertAllClose(x, x_sparse)
x_dense = ops.convert_to_tensor(x, sparse=False)
self.assertNotIsInstance(x_dense, sparse_class)
self.assertAllClose(x, x_dense)
x_numpy = ops.convert_to_numpy(x)
self.assertIsInstance(x_numpy, np.ndarray)
self.assertAllClose(x_numpy, x_dense)
def test_cond(self):
t = ops.cond(True, lambda: 0, lambda: 1)
self.assertEqual(t, 0)
f = ops.cond(False, lambda: 0, lambda: 1)
self.assertEqual(f, 1)
f = ops.cond(False, lambda: None, lambda: None)
self.assertEqual(f, None)
for val in [True, False]:
out = ops.cond(
val,
lambda: KerasTensor((16, 3)),
lambda: KerasTensor((16, 3)),
)
self.assertEqual((16, 3), out.shape)
out = ops.cond(
KerasTensor((), dtype="bool"),
lambda: ops.ones((1, 3)),
lambda: ops.zeros((1, 3)),
)
self.assertEqual((1, 3), out.shape)
out = ops.cond(
KerasTensor((), dtype="bool"),
lambda: KerasTensor((3,)),
lambda: KerasTensor((3,)),
)
self.assertEqual((3,), out.shape)
with self.assertRaises(ValueError):
ops.cond(
KerasTensor((), dtype="bool"),
lambda: KerasTensor((3,)),
lambda: KerasTensor((4,)),
)
def test_unstack(self):
rng = np.random.default_rng(0)
x = rng.uniform(size=(2, 3, 4))
x_tensor = ops.convert_to_tensor(x)
axis = 1
out = ops.unstack(x_tensor, axis=axis)
out_ex = [x[:, i, :] for i in range(x.shape[axis])]
self.assertEqual(len(out), len(out_ex))
for o, o_e in zip(out, out_ex):
o = ops.convert_to_numpy(o)
self.assertAllClose(o, o_e)
def test_cast(self):
x = ops.ones((2,), dtype="float32")
y = ops.cast(x, "float16")
self.assertIn("float16", str(y.dtype))
x = ops.KerasTensor((2,), dtype="float32")
y = ops.cast(x, "float16")
self.assertEqual("float16", y.dtype)
self.assertEqual(x.shape, y.shape)
self.assertTrue(hasattr(y, "_keras_history"))
def test_vectorized_map(self):
def fn(x):
return x + 1
output = ops.vectorized_map(fn, ops.zeros((2, 3), dtype="float32"))
self.assertAllClose(backend.convert_to_numpy(output), np.ones((2, 3)))
def fn(x):
return ops.stack([x, x])
output = ops.vectorized_map(fn, ops.zeros((2, 3), dtype="float32"))
self.assertAllClose(
backend.convert_to_numpy(output), np.zeros((2, 2, 3))
)
# Case: multiple args
def fn(elems):
x, y = elems
return x + y
output = ops.vectorized_map(fn, [ops.ones((2, 3)), ops.ones((2, 3))])
self.assertAllClose(
backend.convert_to_numpy(output), 2 * np.ones((2, 3))
)
def test_is_tensor(self):
np_x = np.array([[1, 2, 3], [3, 2, 1]])
x = backend.convert_to_tensor(np_x)
if backend.backend() != "numpy":
self.assertFalse(ops.is_tensor(np_x))
self.assertTrue(ops.is_tensor(x))
self.assertFalse(ops.is_tensor([1, 2, 3]))
class CoreOpsDtypeTest(testing.TestCase, parameterized.TestCase):
import jax # enable bfloat16 for numpy
# TODO: Using uint64 will lead to weak type promotion (`float`),
# resulting in different behavior between JAX and Keras. Currently, we
# are skipping the test for uint64
ALL_DTYPES = [
x for x in ALLOWED_DTYPES if x not in ["string", "uint64"]
] + [None]
if backend.backend() == "torch":
# TODO: torch doesn't support uint16, uint32 and uint64
ALL_DTYPES = [
x for x in ALL_DTYPES if x not in ["uint16", "uint32", "uint64"]
]
@parameterized.parameters(
((), None, backend.floatx()),
([], None, backend.floatx()),
(bool(0), None, "bool"),
(int(0), None, "int32"),
(float(0), None, backend.floatx()),
([False, True, False], None, "bool"),
([1, 2, 3], None, "int32"),
([1.0, 2.0, 3.0], None, backend.floatx()),
([1, 2.0, 3], None, backend.floatx()),
([[False], [True], [False]], None, "bool"),
([[1], [2], [3]], None, "int32"),
([[1], [2.0], [3]], None, backend.floatx()),
*[
(np.array(0, dtype=dtype), None, dtype)
for dtype in ALL_DTYPES
if dtype is not None
],
*[
([[1, 0, 1], [1, 1, 0]], dtype, dtype)
for dtype in ALL_DTYPES
if dtype is not None
],
)
def test_convert_to_tensor(self, x, dtype, expected_dtype):
# We have to disable x64 for jax backend since jnp.array doesn't respect
# JAX_DEFAULT_DTYPE_BITS=32 in `./conftest.py`. We also need to downcast
# the expected dtype from 64 bit to 32 bit.
if backend.backend() == "jax":
import jax.experimental
jax_disable_x64 = jax.experimental.disable_x64()
expected_dtype = expected_dtype.replace("64", "32")
else:
jax_disable_x64 = contextlib.nullcontext()
with jax_disable_x64:
self.assertEqual(
backend.standardize_dtype(
ops.convert_to_tensor(x, dtype=dtype).dtype
),
expected_dtype,
)
class CoreOpsCallsTests(testing.TestCase):
def test_scatter_basic_call(self):
indices = np.array([[1, 0], [0, 1]])
values = np.array([10, 20])
shape = (2, 2)
scatter = core.Scatter()
result = scatter.call(indices, values, shape)
expected_output = np.array([[0, 20], [10, 0]])
self.assertAllClose(core.convert_to_numpy(result), expected_output)
def test_scatter_update_basic_call(self):
inputs = np.array([[0, 0], [0, 0]])
indices = np.array([[1, 0], [0, 1]])
updates = np.array([10, 20])
scatter_update = core.ScatterUpdate()
result = scatter_update.call(inputs, indices, updates)
expected_output = np.array([[0, 20], [10, 0]])
self.assertAllClose(core.convert_to_numpy(result), expected_output)
def test_slice_basic_call(self):
inputs = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
start_indices = np.array([1, 1])
shape = (2, 2)
slice_op = core.Slice()
result = slice_op.call(inputs, start_indices, shape)
expected_output = np.array([[5, 6], [8, 9]])
self.assertAllClose(core.convert_to_numpy(result), expected_output)
def test_slice_compute_output_spec(self):
inputs = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=np.float32)
start_indices = np.array([1, 1])
shape = (2, 2)
slice_op = core.Slice()
output_spec = slice_op.compute_output_spec(inputs, start_indices, shape)
self.assertEqual(output_spec.shape, shape)
self.assertEqual(output_spec.dtype, inputs.dtype)
def test_slice_with_symbolic_tensors(self):
inputs = KerasTensor(shape=(3, 3), dtype=np.float32)
start_indices = KerasTensor(shape=(2,), dtype=np.int32)
shape = (2, 2)
result = core.slice(inputs, start_indices, shape)
self.assertTrue(isinstance(result, KerasTensor))
def test_slice_with_non_symbolic_tensors(self):
inputs = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
start_indices = np.array([1, 1])
shape = (2, 2)
result = core.slice(inputs, start_indices, shape)
expected_output = np.array([[5, 6], [8, 9]])
self.assertAllClose(result, expected_output)
def test_slice_update_basic_call(self):
inputs = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
start_indices = np.array([1, 1])
updates = np.array([[10, 11], [12, 13]])
slice_update = core.SliceUpdate()
result = slice_update.call(inputs, start_indices, updates)
expected_output = np.array([[1, 2, 3], [4, 10, 11], [7, 12, 13]])
self.assertAllClose(core.convert_to_numpy(result), expected_output)
def test_while_loop_basic_functionality(self):
# Loop condition: continue if i < 5
def cond(i):
return i < 5
# Loop body: increment i by 1
def body(i):
return (i + 1,)
while_loop = core.WhileLoop(cond, body, maximum_iterations=None)
# Initial loop variable (i = 0)
loop_vars = (0,)
result = while_loop.call(loop_vars)
self.assertEqual(result[0], 5)
def test_while_loop_output_spec(self):
# Define dummy cond and body functions
def cond(x):
return True
def body(x):
return (x,)
while_loop = core.WhileLoop(cond, body, maximum_iterations=None)
loop_vars = (KerasTensor(shape=(10,), dtype=np.float32),)
output_spec = while_loop.compute_output_spec(loop_vars)
self.assertEqual(output_spec[0].shape, loop_vars[0].shape)
self.assertEqual(output_spec[0].dtype, loop_vars[0].dtype)
def test_while_loop_with_max_iterations(self):
# loop condition: continue if i < 10
def cond(i):
return i < 10
def body(i):
return (i + 1,)
while_loop = core.WhileLoop(cond, body, maximum_iterations=5)
result = while_loop.call((0,))
self.assertEqual(result[0], 5)
def test_whileloop_compute_output_spec(self):
# Define loop variables with different shapes and data types
loop_vars = (np.random.rand(5, 5), np.random.randint(10, size=(3, 7)))
keras_loop_vars = [
KerasTensor(v.shape, dtype=v.dtype) for v in loop_vars
]
def cond(v):
return v[0] < 5
def body(v):
return (v[0] + 1, v[1])
while_loop = core.WhileLoop(cond, body, maximum_iterations=None)
output_specs = while_loop.compute_output_spec(keras_loop_vars)
self.assertEqual(output_specs[0].shape, keras_loop_vars[0].shape)
self.assertEqual(output_specs[0].dtype, keras_loop_vars[0].dtype)
self.assertEqual(output_specs[1].shape, keras_loop_vars[1].shape)
self.assertEqual(output_specs[1].dtype, keras_loop_vars[1].dtype)
def test_stop_gradient_call(self):
variable_np = np.array([1.0, 2.0, 3.0], dtype=np.float32)
variable = core.convert_to_tensor(variable_np)
stop_gradient = core.StopGradient()
result = stop_gradient.call(variable)
result_np = core.convert_to_numpy(result)
self.assertTrue(np.array_equal(result_np, variable_np))
self.assertEqual(result_np.dtype, variable_np.dtype)
def test_stop_gradient_compute_output_spec(self):
variable = KerasTensor(shape=(3,), dtype=np.float32)
stop_gradient = core.StopGradient()
output_spec = stop_gradient.compute_output_spec(variable)
self.assertEqual(output_spec.shape, variable.shape)
self.assertEqual(output_spec.dtype, variable.dtype)
def test_fori_loop_basic_functionality(self):
lower = 0
upper = 5
def body_fun(index, val):
return val + 1
fori_loop = core.ForiLoop(lower, upper, body_fun)
init_val = 0
result = fori_loop.call(init_val)
self.assertEqual(result, upper)
def test_unstack_basic_functionality(self):
x = np.random.rand(2, 3, 4)
x = core.convert_to_tensor(x)
axis = 1
unstack = core.Unstack(axis=axis)
result = unstack.call(x)
self.assertEqual(len(result), x.shape[axis])
result = core.convert_to_numpy(result)
expected_shape = x.shape[:axis] + x.shape[axis + 1 :]
# Check that all tensors have the same shape
if len(result) > 0:
self.assertEqual(result[0].shape, expected_shape)
if len(result) > 1:
self.assertEqual(result[1].shape, expected_shape)
if len(result) > 2:
self.assertEqual(result[2].shape, expected_shape)
def test_cast_basic_functionality(self):
x = np.array([1.0, 2.0, 3.0], dtype=np.float32)
target_dtype = np.int32
cast = core.Cast(target_dtype)
result = cast.call(x)
result = core.convert_to_numpy(result)
self.assertEqual(result.dtype, target_dtype)
# Check that the values are the same
expected_values = x.astype(target_dtype)
self.assertTrue(np.array_equal(result, expected_values))
def test_cond_check_output_spec_list_tuple(self):
cond_op = core.Cond()
mock_spec = Mock(dtype="float32", shape=(2, 2))
self.assertTrue(
cond_op._check_output_spec(
[mock_spec, mock_spec], [mock_spec, mock_spec]
)
)
def test_cond_check_output_spec_other_types(self):
cond_op = core.Cond()
# Create mock objects with dtype and shape attributes
mock_spec1 = Mock(dtype="float32", shape=(2, 2))
mock_spec2 = Mock(dtype="float32", shape=(2, 2))
self.assertTrue(cond_op._check_output_spec(mock_spec1, mock_spec2))
def test_cond_check_output_spec_none(self):
cond_op = core.Cond()
self.assertTrue(cond_op._check_output_spec(None, None))
self.assertFalse(
cond_op._check_output_spec(
None, Mock(dtype="float32", shape=(2, 2))
)
)
self.assertFalse(
cond_op._check_output_spec(
Mock(dtype="float32", shape=(2, 2)), None
)
)
def test_cond_check_output_spec_dict(self):
cond_op = core.Cond()
mock_spec = Mock(dtype="float32", shape=(2, 2))
self.assertTrue(
cond_op._check_output_spec({"a": mock_spec}, {"a": mock_spec})
)
self.assertFalse(
cond_op._check_output_spec({"a": mock_spec}, {"b": mock_spec})
)
self.assertFalse(
cond_op._check_output_spec(
{"a": mock_spec}, {"a": mock_spec, "b": mock_spec}
)
)
def test_cond_check_output_spec_list(self):
cond_op = core.Cond()
mock_spec = Mock(dtype="float32", shape=(2, 2))
mock_spec_different = Mock(dtype="int32", shape=(3, 3))
self.assertTrue(cond_op._check_output_spec([mock_spec], [mock_spec]))
self.assertFalse(
cond_op._check_output_spec(
[mock_spec], [mock_spec, mock_spec_different]
)
)
def test_cond_check_output_spec_tuple(self):
cond_op = core.Cond()
mock_spec = Mock(dtype="float32", shape=(2, 2))
mock_spec_different = Mock(dtype="int32", shape=(3, 3))
self.assertTrue(cond_op._check_output_spec((mock_spec,), (mock_spec,)))
self.assertFalse(
cond_op._check_output_spec(
(mock_spec,), (mock_spec, mock_spec_different)
)
)
| keras/keras/ops/core_test.py/0 | {
"file_path": "keras/keras/ops/core_test.py",
"repo_id": "keras",
"token_count": 14875
} | 203 |
import numpy as np
from keras import backend
from keras import testing
from keras.backend.common import keras_tensor
from keras.ops import numpy as knp
from keras.ops import operation
class OpWithMultipleInputs(operation.Operation):
def call(self, x, y, z=None):
# `z` has to be put first due to the order of operations issue with
# torch backend.
return 3 * z + x + 2 * y
def compute_output_spec(self, x, y, z=None):
return keras_tensor.KerasTensor(x.shape, x.dtype)
class OpWithMultipleOutputs(operation.Operation):
def call(self, x):
return (x, x + 1)
def compute_output_spec(self, x):
return (
keras_tensor.KerasTensor(x.shape, x.dtype),
keras_tensor.KerasTensor(x.shape, x.dtype),
)
class OpWithCustomConstructor(operation.Operation):
def __init__(self, alpha, mode="foo"):
super().__init__()
self.alpha = alpha
self.mode = mode
def call(self, x):
if self.mode == "foo":
return x
return self.alpha * x
def compute_output_spec(self, x):
return keras_tensor.KerasTensor(x.shape, x.dtype)
class OperationTest(testing.TestCase):
def test_symbolic_call(self):
x = keras_tensor.KerasTensor(shape=(2, 3), name="x")
y = keras_tensor.KerasTensor(shape=(2, 3), name="y")
z = keras_tensor.KerasTensor(shape=(2, 3), name="z")
# Positional arguments
op = OpWithMultipleInputs(name="test_op")
self.assertEqual(op.name, "test_op")
out = op(x, y, z)
self.assertIsInstance(out, keras_tensor.KerasTensor)
self.assertEqual(out.shape, (2, 3))
self.assertEqual(len(op._inbound_nodes), 1)
self.assertEqual(op.input, [x, y, z])
self.assertEqual(op.output, out)
# Keyword arguments
op = OpWithMultipleInputs(name="test_op")
out = op(x=x, y=y, z=z)
self.assertIsInstance(out, keras_tensor.KerasTensor)
self.assertEqual(out.shape, (2, 3))
self.assertEqual(len(op._inbound_nodes), 1)
self.assertEqual(op.input, [x, y, z])
self.assertEqual(op.output, out)
# Mix
op = OpWithMultipleInputs(name="test_op")
out = op(x, y=y, z=z)
self.assertIsInstance(out, keras_tensor.KerasTensor)
self.assertEqual(out.shape, (2, 3))
self.assertEqual(len(op._inbound_nodes), 1)
self.assertEqual(op.input, [x, y, z])
self.assertEqual(op.output, out)
# Test op reuse
prev_out = out
out = op(x, y=y, z=z)
self.assertIsInstance(out, keras_tensor.KerasTensor)
self.assertEqual(out.shape, (2, 3))
self.assertEqual(len(op._inbound_nodes), 2)
self.assertEqual(op.output, prev_out)
# Test multiple outputs
op = OpWithMultipleOutputs()
out = op(x)
self.assertIsInstance(out, tuple)
self.assertEqual(len(out), 2)
self.assertIsInstance(out[0], keras_tensor.KerasTensor)
self.assertIsInstance(out[1], keras_tensor.KerasTensor)
self.assertEqual(out[0].shape, (2, 3))
self.assertEqual(out[1].shape, (2, 3))
self.assertEqual(len(op._inbound_nodes), 1)
self.assertEqual(op.output, list(out))
def test_eager_call(self):
x = knp.ones((2, 3))
y = knp.ones((2, 3))
z = knp.ones((2, 3))
op = OpWithMultipleInputs(name="test_op")
self.assertEqual(op.name, "test_op")
# Positional arguments
out = op(x, y, z)
self.assertTrue(backend.is_tensor(out))
self.assertAllClose(out, 6 * np.ones((2, 3)))
# Keyword arguments
out = op(x=x, y=y, z=z)
self.assertTrue(backend.is_tensor(out))
self.assertAllClose(out, 6 * np.ones((2, 3)))
# Mixed arguments
out = op(x, y=y, z=z)
self.assertTrue(backend.is_tensor(out))
self.assertAllClose(out, 6 * np.ones((2, 3)))
# Test multiple outputs
op = OpWithMultipleOutputs()
out = op(x)
self.assertEqual(len(out), 2)
self.assertTrue(backend.is_tensor(out[0]))
self.assertTrue(backend.is_tensor(out[1]))
self.assertAllClose(out[0], np.ones((2, 3)))
self.assertAllClose(out[1], np.ones((2, 3)) + 1)
def test_serialization(self):
op = OpWithMultipleOutputs(name="test_op")
config = op.get_config()
self.assertEqual(config, {"name": "test_op"})
op = OpWithMultipleOutputs.from_config(config)
self.assertEqual(op.name, "test_op")
def test_autoconfig(self):
op = OpWithCustomConstructor(alpha=0.2, mode="bar")
config = op.get_config()
self.assertEqual(config, {"alpha": 0.2, "mode": "bar"})
revived = OpWithCustomConstructor.from_config(config)
self.assertEqual(revived.get_config(), config)
def test_input_conversion(self):
x = np.ones((2,))
y = np.ones((2,))
z = knp.ones((2,)) # mix
if backend.backend() == "torch":
z = z.cpu()
op = OpWithMultipleInputs()
out = op(x, y, z)
self.assertTrue(backend.is_tensor(out))
self.assertAllClose(out, 6 * np.ones((2,)))
def test_valid_naming(self):
OpWithMultipleOutputs(name="test_op")
with self.assertRaisesRegex(
ValueError, "must be a string and cannot contain character `/`."
):
OpWithMultipleOutputs(name="test/op")
| keras/keras/ops/operation_test.py/0 | {
"file_path": "keras/keras/ops/operation_test.py",
"repo_id": "keras",
"token_count": 2641
} | 204 |
from keras.api_export import keras_export
from keras.optimizers import adam
from keras.optimizers import optimizer
@keras_export(["keras.optimizers.AdamW"])
class AdamW(adam.Adam):
"""Optimizer that implements the AdamW algorithm.
AdamW optimization is a stochastic gradient descent method that is based on
adaptive estimation of first-order and second-order moments with an added
method to decay weights per the techniques discussed in the paper,
'Decoupled Weight Decay Regularization' by
[Loshchilov, Hutter et al., 2019](https://arxiv.org/abs/1711.05101).
According to
[Kingma et al., 2014](http://arxiv.org/abs/1412.6980),
    the underlying Adam method is "*computationally
efficient, has little memory requirement, invariant to diagonal rescaling of
gradients, and is well suited for problems that are large in terms of
data/parameters*".
Args:
learning_rate: A float, a
`keras.optimizers.schedules.LearningRateSchedule` instance, or
a callable that takes no arguments and returns the actual value to
use. The learning rate. Defaults to `0.001`.
beta_1: A float value or a constant float tensor, or a callable
that takes no arguments and returns the actual value to use. The
exponential decay rate for the 1st moment estimates.
Defaults to `0.9`.
beta_2: A float value or a constant float tensor, or a callable
that takes no arguments and returns the actual value to use. The
exponential decay rate for the 2nd moment estimates.
Defaults to `0.999`.
epsilon: A small constant for numerical stability. This epsilon is
"epsilon hat" in the Kingma and Ba paper (in the formula just
before Section 2.1), not the epsilon in Algorithm 1 of the paper.
            Defaults to `1e-7`.
amsgrad: Boolean. Whether to apply AMSGrad variant of this algorithm
from the paper "On the Convergence of Adam and beyond".
Defaults to `False`.
{{base_optimizer_keyword_args}}
References:
- [Loshchilov et al., 2019](https://arxiv.org/abs/1711.05101)
- [Kingma et al., 2014](http://arxiv.org/abs/1412.6980) for `adam`
- [Reddi et al., 2018](
https://openreview.net/pdf?id=ryQu7f-RZ) for `amsgrad`.
"""
def __init__(
self,
learning_rate=0.001,
weight_decay=0.004,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-7,
amsgrad=False,
clipnorm=None,
clipvalue=None,
global_clipnorm=None,
use_ema=False,
ema_momentum=0.99,
ema_overwrite_frequency=None,
name="adamw",
**kwargs,
):
super().__init__(
learning_rate=learning_rate,
beta_1=beta_1,
beta_2=beta_2,
epsilon=epsilon,
amsgrad=amsgrad,
name=name,
weight_decay=weight_decay,
clipnorm=clipnorm,
clipvalue=clipvalue,
global_clipnorm=global_clipnorm,
use_ema=use_ema,
ema_momentum=ema_momentum,
ema_overwrite_frequency=ema_overwrite_frequency,
**kwargs,
)
if self.weight_decay is None:
raise ValueError(
"Argument `weight_decay` must be a float. Received: "
"weight_decay=None"
)
AdamW.__doc__ = AdamW.__doc__.replace(
"{{base_optimizer_keyword_args}}", optimizer.base_optimizer_keyword_args
)
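# A minimal usage sketch: the model, loss, and hyperparameter values below
# are assumptions for illustration; any compiled Keras model can be
# substituted.
def _adamw_compile_example(model):
    optimizer = AdamW(learning_rate=1e-3, weight_decay=0.004)
    model.compile(optimizer=optimizer, loss="sparse_categorical_crossentropy")
    return model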
| keras/keras/optimizers/adamw.py/0 | {
"file_path": "keras/keras/optimizers/adamw.py",
"repo_id": "keras",
"token_count": 1574
} | 205 |
from keras.optimizers.schedules.learning_rate_schedule import CosineDecay
from keras.optimizers.schedules.learning_rate_schedule import (
CosineDecayRestarts,
)
from keras.optimizers.schedules.learning_rate_schedule import ExponentialDecay
from keras.optimizers.schedules.learning_rate_schedule import InverseTimeDecay
from keras.optimizers.schedules.learning_rate_schedule import (
PiecewiseConstantDecay,
)
from keras.optimizers.schedules.learning_rate_schedule import PolynomialDecay
| keras/keras/optimizers/schedules/__init__.py/0 | {
"file_path": "keras/keras/optimizers/schedules/__init__.py",
"repo_id": "keras",
"token_count": 160
} | 206 |
import keras
from keras import testing
from keras.saving import object_registration
from keras.saving import serialization_lib
class TestObjectRegistration(testing.TestCase):
def test_custom_object_scope(self):
def custom_fn():
pass
class CustomClass:
pass
def check_get_in_thread():
with object_registration.custom_object_scope(
{"CustomClass": CustomClass, "custom_fn": custom_fn}
):
actual_custom_fn = keras.activations.get("custom_fn")
self.assertEqual(actual_custom_fn, custom_fn)
actual_custom_class = keras.regularizers.get("CustomClass")
self.assertEqual(actual_custom_class.__class__, CustomClass)
with object_registration.custom_object_scope(
{"CustomClass": CustomClass, "custom_fn": custom_fn}
):
actual_custom_fn = keras.activations.get("custom_fn")
self.assertEqual(actual_custom_fn, custom_fn)
actual_custom_class = keras.regularizers.get("CustomClass")
self.assertEqual(actual_custom_class.__class__, CustomClass)
checked_thread = self.checkedThread(check_get_in_thread)
checked_thread.start()
checked_thread.join()
def test_serialize_custom_class_with_default_name(self):
@object_registration.register_keras_serializable()
class TestClass:
def __init__(self, value):
self._value = value
def get_config(self):
return {"value": self._value}
@classmethod
def from_config(cls, config):
return cls(**config)
serialized_name = "Custom>TestClass"
inst = TestClass(value=10)
class_name = object_registration.GLOBAL_CUSTOM_NAMES[TestClass]
self.assertEqual(serialized_name, class_name)
config = serialization_lib.serialize_keras_object(inst)
self.assertEqual("TestClass", config["class_name"])
new_inst = serialization_lib.deserialize_keras_object(config)
self.assertIsNot(inst, new_inst)
self.assertIsInstance(new_inst, TestClass)
self.assertEqual(10, new_inst._value)
def test_serialize_custom_class_with_custom_name(self):
@object_registration.register_keras_serializable(
"TestPackage", "CustomName"
)
class OtherTestClass:
def __init__(self, val):
self._val = val
def get_config(self):
return {"val": self._val}
@classmethod
def from_config(cls, config):
return cls(**config)
serialized_name = "TestPackage>CustomName"
inst = OtherTestClass(val=5)
class_name = object_registration.GLOBAL_CUSTOM_NAMES[OtherTestClass]
self.assertEqual(serialized_name, class_name)
fn_class_name = object_registration.get_registered_name(OtherTestClass)
self.assertEqual(fn_class_name, class_name)
cls = object_registration.get_registered_object(fn_class_name)
self.assertEqual(OtherTestClass, cls)
config = keras.saving.serialize_keras_object(inst)
self.assertEqual("OtherTestClass", config["class_name"])
new_inst = keras.saving.deserialize_keras_object(config)
self.assertIsNot(inst, new_inst)
self.assertIsInstance(new_inst, OtherTestClass)
self.assertEqual(5, new_inst._val)
def test_serialize_custom_function(self):
@object_registration.register_keras_serializable()
def my_fn():
return 42
serialized_name = "Custom>my_fn"
class_name = object_registration.GLOBAL_CUSTOM_NAMES[my_fn]
self.assertEqual(serialized_name, class_name)
fn_class_name = object_registration.get_registered_name(my_fn)
self.assertEqual(fn_class_name, class_name)
config = keras.saving.serialize_keras_object(my_fn)
fn = keras.saving.deserialize_keras_object(config)
self.assertEqual(42, fn())
fn_2 = object_registration.get_registered_object(fn_class_name)
self.assertEqual(42, fn_2())
def test_serialize_custom_class_without_get_config_fails(self):
with self.assertRaisesRegex(
ValueError,
"Cannot register a class that does not have a get_config.*",
):
@object_registration.register_keras_serializable(
"TestPackage", "TestClass"
)
class TestClass:
def __init__(self, value):
self._value = value
| keras/keras/saving/object_registration_test.py/0 | {
"file_path": "keras/keras/saving/object_registration_test.py",
"repo_id": "keras",
"token_count": 2145
} | 207 |
import jax
import numpy as np
import pandas
import pytest
import tensorflow as tf
import torch
from absl.testing import parameterized
from keras import backend
from keras import testing
from keras.testing.test_utils import named_product
from keras.trainers.data_adapters import array_data_adapter
class TestArrayDataAdapter(testing.TestCase, parameterized.TestCase):
def make_array(self, array_type, shape, dtype="float32"):
x = np.array([[i] * shape[1] for i in range(shape[0])], dtype=dtype)
if array_type == "np":
return x
elif array_type == "tf":
return tf.constant(x)
elif array_type == "jax":
return jax.numpy.array(x)
elif array_type == "torch":
return torch.as_tensor(x)
elif array_type == "pandas":
return pandas.DataFrame(x)
@parameterized.named_parameters(
named_product(
array_type=["np", "tf", "jax", "torch", "pandas"],
iterator_type=["np", "tf", "jax", "torch"],
shuffle=[False, "batch", True],
)
)
def test_basic_flow(self, array_type, iterator_type, shuffle):
x = self.make_array(array_type, (34, 4))
y = self.make_array(array_type, (34, 2))
adapter = array_data_adapter.ArrayDataAdapter(
x,
y=y,
sample_weight=None,
batch_size=16,
steps=None,
shuffle=shuffle,
)
self.assertEqual(adapter.num_batches, 3)
self.assertEqual(adapter.batch_size, 16)
self.assertEqual(adapter.has_partial_batch, True)
self.assertEqual(adapter.partial_batch_size, 2)
if iterator_type == "np":
it = adapter.get_numpy_iterator()
expected_class = np.ndarray
elif iterator_type == "tf":
it = adapter.get_tf_dataset()
expected_class = tf.Tensor
elif iterator_type == "jax":
it = adapter.get_jax_iterator()
expected_class = jax.Array
elif iterator_type == "torch":
it = adapter.get_torch_dataloader()
expected_class = torch.Tensor
sample_order = []
for i, batch in enumerate(it):
self.assertEqual(len(batch), 2)
bx, by = batch
self.assertIsInstance(bx, expected_class)
self.assertIsInstance(by, expected_class)
self.assertEqual(bx.dtype, by.dtype)
self.assertContainsExactSubsequence(str(bx.dtype), "float32")
if i < 2:
self.assertEqual(bx.shape, (16, 4))
self.assertEqual(by.shape, (16, 2))
else:
self.assertEqual(bx.shape, (2, 4))
self.assertEqual(by.shape, (2, 2))
for i in range(by.shape[0]):
sample_order.append(by[i, 0])
if shuffle:
self.assertNotAllClose(sample_order, list(range(34)))
else:
self.assertAllClose(sample_order, list(range(34)))
def test_multi_inputs_and_outputs(self):
x1 = np.random.random((34, 1))
x2 = np.random.random((34, 2))
y1 = np.random.random((34, 3))
y2 = np.random.random((34, 4))
sw = np.random.random((34,))
adapter = array_data_adapter.ArrayDataAdapter(
x={"x1": x1, "x2": x2},
y=[y1, y2],
sample_weight=sw,
batch_size=16,
steps=None,
shuffle=False,
)
gen = adapter.get_numpy_iterator()
for i, batch in enumerate(gen):
self.assertEqual(len(batch), 3)
bx, by, bw = batch
self.assertIsInstance(bx, dict)
# NOTE: the y list was converted to a tuple for tf.data
# compatibility.
self.assertIsInstance(by, tuple)
self.assertIsInstance(bw, tuple)
self.assertIsInstance(bx["x1"], np.ndarray)
self.assertIsInstance(bx["x2"], np.ndarray)
self.assertIsInstance(by[0], np.ndarray)
self.assertIsInstance(by[1], np.ndarray)
self.assertIsInstance(bw[0], np.ndarray)
self.assertIsInstance(bw[1], np.ndarray)
self.assertEqual(bx["x1"].dtype, by[0].dtype)
self.assertEqual(bx["x1"].dtype, backend.floatx())
if i < 2:
self.assertEqual(bx["x1"].shape, (16, 1))
self.assertEqual(bx["x2"].shape, (16, 2))
self.assertEqual(by[0].shape, (16, 3))
self.assertEqual(by[1].shape, (16, 4))
self.assertEqual(bw[0].shape, (16,))
self.assertEqual(bw[1].shape, (16,))
else:
self.assertEqual(bx["x1"].shape, (2, 1))
self.assertEqual(by[0].shape, (2, 3))
self.assertEqual(bw[0].shape, (2,))
self.assertEqual(bw[1].shape, (2,))
ds = adapter.get_tf_dataset()
for i, batch in enumerate(ds):
self.assertEqual(len(batch), 3)
bx, by, bw = batch
self.assertIsInstance(bx, dict)
# NOTE: the y list was converted to a tuple for tf.data
# compatibility.
self.assertIsInstance(by, tuple)
self.assertIsInstance(bw, tuple)
self.assertIsInstance(bx["x1"], tf.Tensor)
self.assertIsInstance(bx["x2"], tf.Tensor)
self.assertIsInstance(by[0], tf.Tensor)
self.assertIsInstance(by[1], tf.Tensor)
self.assertIsInstance(bw[0], tf.Tensor)
self.assertIsInstance(bw[1], tf.Tensor)
self.assertEqual(bx["x1"].dtype, by[0].dtype)
self.assertEqual(bx["x1"].dtype, backend.floatx())
if i < 2:
self.assertEqual(tuple(bx["x1"].shape), (16, 1))
self.assertEqual(tuple(bx["x2"].shape), (16, 2))
self.assertEqual(tuple(by[0].shape), (16, 3))
self.assertEqual(tuple(by[1].shape), (16, 4))
self.assertEqual(tuple(bw[0].shape), (16,))
self.assertEqual(tuple(bw[1].shape), (16,))
else:
self.assertEqual(tuple(bx["x1"].shape), (2, 1))
self.assertEqual(tuple(by[0].shape), (2, 3))
self.assertEqual(tuple(bw[0].shape), (2,))
self.assertEqual(tuple(bw[1].shape), (2,))
@parameterized.named_parameters(
named_product(target_encoding=["int", "categorical"])
)
def test_class_weights(self, target_encoding):
x = np.random.random((4, 2))
if target_encoding == "int":
y = np.array([[0], [1], [2], [3]], dtype="int32")
else:
y = np.array(
[[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]],
dtype="float32",
)
class_weight = {
0: 0.1,
1: 0.2,
2: 0.3,
3: 0.4,
}
adapter = array_data_adapter.ArrayDataAdapter(
x,
y=y,
class_weight=class_weight,
batch_size=16,
)
gen = adapter.get_numpy_iterator()
for batch in gen:
self.assertEqual(len(batch), 3)
_, _, bw = batch
self.assertAllClose(bw, [0.1, 0.2, 0.3, 0.4])
def test_errors(self):
# TODO
pass
@parameterized.named_parameters(
named_product(array_type=["np", "tf", "jax", "torch", "pandas"])
)
def test_integer_inputs(self, array_type):
x1 = self.make_array(array_type, (4, 4), dtype="float64")
x2 = self.make_array(array_type, (4, 4), dtype="int32")
y = self.make_array(array_type, (4, 2))
adapter = array_data_adapter.ArrayDataAdapter(
(x1, x2),
y=y,
sample_weight=None,
batch_size=4,
steps=None,
shuffle=False,
)
(x1, x2), y = next(adapter.get_numpy_iterator())
self.assertEqual(x1.dtype, backend.floatx())
self.assertEqual(x2.dtype, "int32")
def test_pandas_series(self):
x = pandas.Series(np.ones((10,)))
y = np.ones((10,))
adapter = array_data_adapter.ArrayDataAdapter(
x,
y=y,
sample_weight=None,
batch_size=4,
steps=None,
shuffle=False,
)
self.assertEqual(adapter.num_batches, 3)
self.assertEqual(adapter.batch_size, 4)
self.assertEqual(adapter.has_partial_batch, True)
self.assertEqual(adapter.partial_batch_size, 2)
x, y = next(adapter.get_numpy_iterator())
self.assertEqual(x.dtype, backend.floatx())
self.assertIsInstance(x, np.ndarray)
self.assertEqual(x.shape, (4, 1))
@pytest.mark.skipif(
backend.backend() != "tensorflow",
reason="Only tensorflow supports raggeds",
)
def test_tf_ragged(self):
x = tf.ragged.constant([[1, 2], [1, 2, 3], [1, 2], [1], []], "float64")
y = np.ones((5,))
adapter = array_data_adapter.ArrayDataAdapter(
x,
y=y,
sample_weight=None,
batch_size=2,
steps=None,
shuffle=False,
)
self.assertEqual(adapter.num_batches, 3)
self.assertEqual(adapter.batch_size, 2)
self.assertEqual(adapter.has_partial_batch, True)
self.assertEqual(adapter.partial_batch_size, 1)
x, y = next(adapter.get_numpy_iterator())
self.assertEqual(x.dtype, backend.floatx())
self.assertIsInstance(x, tf.RaggedTensor)
self.assertEqual(x.shape, (2, None))
| keras/keras/trainers/data_adapters/array_data_adapter_test.py/0 | {
"file_path": "keras/keras/trainers/data_adapters/array_data_adapter_test.py",
"repo_id": "keras",
"token_count": 5174
} | 208 |
def standardize_tuple(value, n, name, allow_zero=False):
"""Transforms non-negative/positive integer/integers into an integer tuple.
Args:
value: int or iterable of ints. The value to validate and convert.
n: int. The size of the tuple to be returned.
name: string. The name of the argument being validated, e.g. "strides"
or "kernel_size". This is only used to format error messages.
        allow_zero: bool, defaults to False. A ValueError will be raised if
            zero is received and this param is False.
Returns:
A tuple of n integers.
"""
error_msg = (
f"The `{name}` argument must be a tuple of {n} integers. "
f"Received {name}={value}"
)
if isinstance(value, int):
value_tuple = (value,) * n
else:
try:
value_tuple = tuple(value)
except TypeError:
raise ValueError(error_msg)
if len(value_tuple) != n:
raise ValueError(error_msg)
for single_value in value_tuple:
try:
int(single_value)
except (ValueError, TypeError):
error_msg += (
f"including element {single_value} of "
f"type {type(single_value)}"
)
raise ValueError(error_msg)
if allow_zero:
unqualified_values = {v for v in value_tuple if v < 0}
req_msg = ">= 0"
else:
unqualified_values = {v for v in value_tuple if v <= 0}
req_msg = "> 0"
if unqualified_values:
error_msg += (
f", including values {unqualified_values}"
f" that do not satisfy `value {req_msg}`"
)
raise ValueError(error_msg)
return value_tuple
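# Illustrative usage sketch (not part of the module's public API; the helper
# below only exercises `standardize_tuple` as defined above).
def _standardize_tuple_example():
    # A scalar is broadcast to an n-tuple; iterables are validated as-is.
    assert standardize_tuple(3, 2, "kernel_size") == (3, 3)
    assert standardize_tuple((1, 2), 2, "strides") == (1, 2)
    # Zero is only accepted when `allow_zero=True`.
    assert standardize_tuple(0, 2, "dilation_rate", allow_zero=True) == (0, 0)
    try:
        standardize_tuple(0, 2, "strides")
    except ValueError:
        pass  # zero rejected because allow_zero defaults to False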
def standardize_padding(value, allow_causal=False):
if isinstance(value, (list, tuple)):
return value
padding = value.lower()
if allow_causal:
allowed_values = {"valid", "same", "causal"}
else:
allowed_values = {"valid", "same"}
if padding not in allowed_values:
raise ValueError(
"The `padding` argument must be a list/tuple or one of "
f"{allowed_values}. "
f"Received: {padding}"
)
return padding
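# Illustrative usage sketch (not part of the module's public API): "causal"
# padding is only accepted when `allow_causal=True`, e.g. for Conv1D.
def _standardize_padding_example():
    assert standardize_padding("SAME") == "same"
    assert standardize_padding("causal", allow_causal=True) == "causal"
    try:
        standardize_padding("causal")
    except ValueError:
        pass  # "causal" is rejected by default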
def validate_string_arg(
value,
allowable_strings,
caller_name,
arg_name,
allow_none=False,
allow_callables=False,
):
"""Validates the correctness of a string-based arg."""
if allow_none and value is None:
return
elif allow_callables and callable(value):
return
elif isinstance(value, str) and value in allowable_strings:
return
raise ValueError(
f"Unkown value for `{arg_name}` argument of {caller_name}. "
f"Allowed values are: {allowable_strings}. Received: "
f"{arg_name}={value}"
)
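# Illustrative usage sketch (not part of the module's public API; the layer
# and argument names below are made up for demonstration).
def _validate_string_arg_example():
    # A listed string passes silently.
    validate_string_arg("valid", {"valid", "same"}, "Conv2D", "padding")
    # Callables pass when allow_callables=True.
    validate_string_arg(
        len, {"sum", "mean"}, "MyLayer", "reduction", allow_callables=True
    )
    try:
        validate_string_arg("bogus", {"sum", "mean"}, "MyLayer", "reduction")
    except ValueError:
        pass  # unlisted strings raise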
| keras/keras/utils/argument_validation.py/0 | {
"file_path": "keras/keras/utils/argument_validation.py",
"repo_id": "keras",
"token_count": 1281
} | 209 |
from unittest.mock import patch
from keras.testing import test_case
from keras.utils import io_utils
class TestIoUtils(test_case.TestCase):
def test_enable_interactive_logging(self):
io_utils.enable_interactive_logging()
self.assertTrue(io_utils.is_interactive_logging_enabled())
def test_disable_interactive_logging(self):
io_utils.disable_interactive_logging()
self.assertFalse(io_utils.is_interactive_logging_enabled())
def test_set_logging_verbosity_valid(self):
valid_levels = ["FATAL", "ERROR", "WARNING", "INFO", "DEBUG"]
for level in valid_levels:
io_utils.set_logging_verbosity(level)
def test_set_logging_verbosity_invalid(self):
with self.assertRaises(ValueError):
io_utils.set_logging_verbosity("INVALID")
@patch("builtins.input", side_effect=["y"])
def test_ask_to_proceed_with_overwrite_yes(self, _):
self.assertTrue(io_utils.ask_to_proceed_with_overwrite("test_path"))
@patch("builtins.input", side_effect=["n"])
def test_ask_to_proceed_with_overwrite_no(self, _):
self.assertFalse(io_utils.ask_to_proceed_with_overwrite("test_path"))
@patch("sys.stdout.write")
def test_print_msg_interactive_with_line_break(self, mock_write):
io_utils.enable_interactive_logging()
io_utils.print_msg("Hello", line_break=True)
mock_write.assert_called_once_with("Hello\n")
@patch("sys.stdout.write")
def test_print_msg_interactive_without_line_break(self, mock_write):
io_utils.enable_interactive_logging()
io_utils.print_msg("Hello", line_break=False)
mock_write.assert_called_once_with("Hello")
@patch("absl.logging.info")
def test_print_msg_non_interactive(self, mock_logging):
io_utils.disable_interactive_logging()
io_utils.print_msg("Hello")
mock_logging.assert_called_once_with("Hello")
@patch("builtins.input", side_effect=["invalid", "invalid", "y"])
def test_ask_to_proceed_with_overwrite_invalid_then_yes(self, _):
self.assertTrue(io_utils.ask_to_proceed_with_overwrite("test_path"))
@patch("builtins.input", side_effect=["invalid", "n"])
def test_ask_to_proceed_with_overwrite_invalid_then_no(self, _):
self.assertFalse(io_utils.ask_to_proceed_with_overwrite("test_path"))
| keras/keras/utils/io_utils_test.py/0 | {
"file_path": "keras/keras/utils/io_utils_test.py",
"repo_id": "keras",
"token_count": 977
} | 210 |
def is_shape_tuple(x):
if isinstance(x, (list, tuple)):
if all(isinstance(e, (int, type(None))) for e in x):
return True
return False
def map_shape_structure(fn, struct):
"""Variant of tree.map_structure that operates on shape tuples."""
if is_shape_tuple(struct):
return fn(tuple(struct))
if isinstance(struct, list):
return [map_shape_structure(fn, e) for e in struct]
if isinstance(struct, tuple):
return tuple(map_shape_structure(fn, e) for e in struct)
if isinstance(struct, dict):
return {k: map_shape_structure(fn, v) for k, v in struct.items()}
else:
raise ValueError(f"Cannot map function to unknown object {struct}")
| keras/keras/utils/shape_utils.py/0 | {
"file_path": "keras/keras/utils/shape_utils.py",
"repo_id": "keras",
"token_count": 294
} | 211 |
[isort]
force_single_line=True
known_first_party=keras
line_length=80
profile=black
[flake8]
# imported but unused in __init__.py, that's ok.
per-file-ignores=*__init__.py:F401
ignore=E203,W503,W605,F632,E266,E731,E712,E741
max-line-length=80
| tf-keras/setup.cfg/0 | {
"file_path": "tf-keras/setup.cfg",
"repo_id": "tf-keras",
"token_count": 108
} | 212 |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Keras Applications are premade architectures with pre-trained weights."""
from tf_keras.applications.convnext import ConvNeXtBase
from tf_keras.applications.convnext import ConvNeXtLarge
from tf_keras.applications.convnext import ConvNeXtSmall
from tf_keras.applications.convnext import ConvNeXtTiny
from tf_keras.applications.convnext import ConvNeXtXLarge
from tf_keras.applications.densenet import DenseNet121
from tf_keras.applications.densenet import DenseNet169
from tf_keras.applications.densenet import DenseNet201
from tf_keras.applications.efficientnet import EfficientNetB0
from tf_keras.applications.efficientnet import EfficientNetB1
from tf_keras.applications.efficientnet import EfficientNetB2
from tf_keras.applications.efficientnet import EfficientNetB3
from tf_keras.applications.efficientnet import EfficientNetB4
from tf_keras.applications.efficientnet import EfficientNetB5
from tf_keras.applications.efficientnet import EfficientNetB6
from tf_keras.applications.efficientnet import EfficientNetB7
from tf_keras.applications.efficientnet_v2 import EfficientNetV2B0
from tf_keras.applications.efficientnet_v2 import EfficientNetV2B1
from tf_keras.applications.efficientnet_v2 import EfficientNetV2B2
from tf_keras.applications.efficientnet_v2 import EfficientNetV2B3
from tf_keras.applications.efficientnet_v2 import EfficientNetV2L
from tf_keras.applications.efficientnet_v2 import EfficientNetV2M
from tf_keras.applications.efficientnet_v2 import EfficientNetV2S
from tf_keras.applications.inception_resnet_v2 import InceptionResNetV2
from tf_keras.applications.inception_v3 import InceptionV3
from tf_keras.applications.mobilenet import MobileNet
from tf_keras.applications.mobilenet_v2 import MobileNetV2
from tf_keras.applications.mobilenet_v3 import MobileNetV3Large
from tf_keras.applications.mobilenet_v3 import MobileNetV3Small
from tf_keras.applications.nasnet import NASNetLarge
from tf_keras.applications.nasnet import NASNetMobile
from tf_keras.applications.resnet import ResNet50
from tf_keras.applications.resnet import ResNet101
from tf_keras.applications.resnet import ResNet152
from tf_keras.applications.resnet_rs import ResNetRS50
from tf_keras.applications.resnet_rs import ResNetRS101
from tf_keras.applications.resnet_rs import ResNetRS152
from tf_keras.applications.resnet_rs import ResNetRS200
from tf_keras.applications.resnet_rs import ResNetRS270
from tf_keras.applications.resnet_rs import ResNetRS350
from tf_keras.applications.resnet_rs import ResNetRS420
from tf_keras.applications.resnet_v2 import ResNet50V2
from tf_keras.applications.resnet_v2 import ResNet101V2
from tf_keras.applications.resnet_v2 import ResNet152V2
from tf_keras.applications.vgg16 import VGG16
from tf_keras.applications.vgg19 import VGG19
from tf_keras.applications.xception import Xception
| tf-keras/tf_keras/applications/__init__.py/0 | {
"file_path": "tf-keras/tf_keras/applications/__init__.py",
"repo_id": "tf-keras",
"token_count": 1129
} | 213 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""ResNet models for TF-Keras.
Reference:
- [Deep Residual Learning for Image Recognition](
https://arxiv.org/abs/1512.03385) (CVPR 2015)
"""
import tensorflow.compat.v2 as tf
from tf_keras import backend
from tf_keras.applications import imagenet_utils
from tf_keras.engine import training
from tf_keras.layers import VersionAwareLayers
from tf_keras.utils import data_utils
from tf_keras.utils import layer_utils
# isort: off
from tensorflow.python.util.tf_export import keras_export
BASE_WEIGHTS_PATH = (
"https://storage.googleapis.com/tensorflow/keras-applications/resnet/"
)
WEIGHTS_HASHES = {
"resnet50": (
"2cb95161c43110f7111970584f804107",
"4d473c1dd8becc155b73f8504c6f6626",
),
"resnet101": (
"f1aeb4b969a6efcfb50fad2f0c20cfc5",
"88cf7a10940856eca736dc7b7e228a21",
),
"resnet152": (
"100835be76be38e30d865e96f2aaae62",
"ee4c566cf9a93f14d82f913c2dc6dd0c",
),
"resnet50v2": (
"3ef43a0b657b3be2300d5770ece849e0",
"fac2f116257151a9d068a22e544a4917",
),
"resnet101v2": (
"6343647c601c52e1368623803854d971",
"c0ed64b8031c3730f411d2eb4eea35b5",
),
"resnet152v2": (
"a49b44d1979771252814e80f8ec446f9",
"ed17cf2e0169df9d443503ef94b23b33",
),
"resnext50": (
"67a5b30d522ed92f75a1f16eef299d1a",
"62527c363bdd9ec598bed41947b379fc",
),
"resnext101": (
"34fb605428fcc7aa4d62f44404c11509",
"0f678c91647380debd923963594981b3",
),
}
layers = None
def ResNet(
stack_fn,
preact,
use_bias,
model_name="resnet",
include_top=True,
weights="imagenet",
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
classifier_activation="softmax",
**kwargs,
):
"""Instantiates the ResNet, ResNetV2, and ResNeXt architecture.
Args:
stack_fn: a function that returns output tensor for the
stacked residual blocks.
preact: whether to use pre-activation or not
(True for ResNetV2, False for ResNet and ResNeXt).
use_bias: whether to use biases for convolutional layers or not
(True for ResNet and ResNetV2, False for ResNeXt).
model_name: string, model name.
include_top: whether to include the fully-connected
layer at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional TF-Keras tensor
(i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(224, 224, 3)` (with `channels_last` data format)
or `(3, 224, 224)` (with `channels_first` data format).
        It should have exactly 3 input channels.
pooling: optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional layer.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional layer, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
**kwargs: For backwards compatibility only.
Returns:
A `keras.Model` instance.
"""
global layers
if "layers" in kwargs:
layers = kwargs.pop("layers")
else:
layers = VersionAwareLayers()
if kwargs:
raise ValueError(f"Unknown argument(s): {kwargs}")
if not (weights in {"imagenet", None} or tf.io.gfile.exists(weights)):
raise ValueError(
"The `weights` argument should be either "
"`None` (random initialization), `imagenet` "
"(pre-training on ImageNet), "
"or the path to the weights file to be loaded."
)
if weights == "imagenet" and include_top and classes != 1000:
raise ValueError(
'If using `weights` as `"imagenet"` with `include_top`'
" as true, `classes` should be 1000"
)
# Determine proper input shape
input_shape = imagenet_utils.obtain_input_shape(
input_shape,
default_size=224,
min_size=32,
data_format=backend.image_data_format(),
require_flatten=include_top,
weights=weights,
)
if input_tensor is None:
img_input = layers.Input(shape=input_shape)
else:
if not backend.is_keras_tensor(input_tensor):
img_input = layers.Input(tensor=input_tensor, shape=input_shape)
else:
img_input = input_tensor
bn_axis = 3 if backend.image_data_format() == "channels_last" else 1
x = layers.ZeroPadding2D(padding=((3, 3), (3, 3)), name="conv1_pad")(
img_input
)
x = layers.Conv2D(64, 7, strides=2, use_bias=use_bias, name="conv1_conv")(x)
if not preact:
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name="conv1_bn"
)(x)
x = layers.Activation("relu", name="conv1_relu")(x)
x = layers.ZeroPadding2D(padding=((1, 1), (1, 1)), name="pool1_pad")(x)
x = layers.MaxPooling2D(3, strides=2, name="pool1_pool")(x)
x = stack_fn(x)
if preact:
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name="post_bn"
)(x)
x = layers.Activation("relu", name="post_relu")(x)
if include_top:
x = layers.GlobalAveragePooling2D(name="avg_pool")(x)
imagenet_utils.validate_activation(classifier_activation, weights)
x = layers.Dense(
classes, activation=classifier_activation, name="predictions"
)(x)
else:
if pooling == "avg":
x = layers.GlobalAveragePooling2D(name="avg_pool")(x)
elif pooling == "max":
x = layers.GlobalMaxPooling2D(name="max_pool")(x)
# Ensure that the model takes into account
# any potential predecessors of `input_tensor`.
if input_tensor is not None:
inputs = layer_utils.get_source_inputs(input_tensor)
else:
inputs = img_input
# Create model.
model = training.Model(inputs, x, name=model_name)
# Load weights.
if (weights == "imagenet") and (model_name in WEIGHTS_HASHES):
if include_top:
file_name = model_name + "_weights_tf_dim_ordering_tf_kernels.h5"
file_hash = WEIGHTS_HASHES[model_name][0]
else:
file_name = (
model_name + "_weights_tf_dim_ordering_tf_kernels_notop.h5"
)
file_hash = WEIGHTS_HASHES[model_name][1]
weights_path = data_utils.get_file(
file_name,
BASE_WEIGHTS_PATH + file_name,
cache_subdir="models",
file_hash=file_hash,
)
model.load_weights(weights_path)
elif weights is not None:
model.load_weights(weights)
return model
def block1(x, filters, kernel_size=3, stride=1, conv_shortcut=True, name=None):
"""A residual block.
Args:
x: input tensor.
filters: integer, filters of the bottleneck layer.
kernel_size: default 3, kernel size of the bottleneck layer.
stride: default 1, stride of the first layer.
conv_shortcut: default True, use convolution shortcut if True,
otherwise identity shortcut.
name: string, block label.
Returns:
Output tensor for the residual block.
"""
bn_axis = 3 if backend.image_data_format() == "channels_last" else 1
if conv_shortcut:
shortcut = layers.Conv2D(
4 * filters, 1, strides=stride, name=name + "_0_conv"
)(x)
shortcut = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + "_0_bn"
)(shortcut)
else:
shortcut = x
x = layers.Conv2D(filters, 1, strides=stride, name=name + "_1_conv")(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + "_1_bn"
)(x)
x = layers.Activation("relu", name=name + "_1_relu")(x)
x = layers.Conv2D(
filters, kernel_size, padding="SAME", name=name + "_2_conv"
)(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + "_2_bn"
)(x)
x = layers.Activation("relu", name=name + "_2_relu")(x)
x = layers.Conv2D(4 * filters, 1, name=name + "_3_conv")(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + "_3_bn"
)(x)
x = layers.Add(name=name + "_add")([shortcut, x])
x = layers.Activation("relu", name=name + "_out")(x)
return x
def stack1(x, filters, blocks, stride1=2, name=None):
"""A set of stacked residual blocks.
Args:
x: input tensor.
filters: integer, filters of the bottleneck layer in a block.
blocks: integer, blocks in the stacked blocks.
stride1: default 2, stride of the first layer in the first block.
name: string, stack label.
Returns:
Output tensor for the stacked blocks.
"""
x = block1(x, filters, stride=stride1, name=name + "_block1")
for i in range(2, blocks + 1):
x = block1(
x, filters, conv_shortcut=False, name=name + "_block" + str(i)
)
return x
def block2(x, filters, kernel_size=3, stride=1, conv_shortcut=False, name=None):
"""A residual block.
Args:
x: input tensor.
filters: integer, filters of the bottleneck layer.
kernel_size: default 3, kernel size of the bottleneck layer.
stride: default 1, stride of the first layer.
conv_shortcut: default False, use convolution shortcut if True,
otherwise identity shortcut.
name: string, block label.
Returns:
Output tensor for the residual block.
"""
bn_axis = 3 if backend.image_data_format() == "channels_last" else 1
preact = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + "_preact_bn"
)(x)
preact = layers.Activation("relu", name=name + "_preact_relu")(preact)
if conv_shortcut:
shortcut = layers.Conv2D(
4 * filters, 1, strides=stride, name=name + "_0_conv"
)(preact)
else:
shortcut = (
layers.MaxPooling2D(1, strides=stride)(x) if stride > 1 else x
)
x = layers.Conv2D(
filters, 1, strides=1, use_bias=False, name=name + "_1_conv"
)(preact)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + "_1_bn"
)(x)
x = layers.Activation("relu", name=name + "_1_relu")(x)
x = layers.ZeroPadding2D(padding=((1, 1), (1, 1)), name=name + "_2_pad")(x)
x = layers.Conv2D(
filters,
kernel_size,
strides=stride,
use_bias=False,
name=name + "_2_conv",
)(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + "_2_bn"
)(x)
x = layers.Activation("relu", name=name + "_2_relu")(x)
x = layers.Conv2D(4 * filters, 1, name=name + "_3_conv")(x)
x = layers.Add(name=name + "_out")([shortcut, x])
return x
def stack2(x, filters, blocks, stride1=2, name=None):
"""A set of stacked residual blocks.
Args:
x: input tensor.
filters: integer, filters of the bottleneck layer in a block.
blocks: integer, blocks in the stacked blocks.
stride1: default 2, stride of the first layer in the first block.
name: string, stack label.
Returns:
Output tensor for the stacked blocks.
"""
x = block2(x, filters, conv_shortcut=True, name=name + "_block1")
for i in range(2, blocks):
x = block2(x, filters, name=name + "_block" + str(i))
x = block2(x, filters, stride=stride1, name=name + "_block" + str(blocks))
return x
def block3(
x,
filters,
kernel_size=3,
stride=1,
groups=32,
conv_shortcut=True,
name=None,
):
"""A residual block.
Args:
x: input tensor.
filters: integer, filters of the bottleneck layer.
kernel_size: default 3, kernel size of the bottleneck layer.
stride: default 1, stride of the first layer.
groups: default 32, group size for grouped convolution.
conv_shortcut: default True, use convolution shortcut if True,
otherwise identity shortcut.
name: string, block label.
Returns:
Output tensor for the residual block.
"""
bn_axis = 3 if backend.image_data_format() == "channels_last" else 1
if conv_shortcut:
shortcut = layers.Conv2D(
(64 // groups) * filters,
1,
strides=stride,
use_bias=False,
name=name + "_0_conv",
)(x)
shortcut = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + "_0_bn"
)(shortcut)
else:
shortcut = x
x = layers.Conv2D(filters, 1, use_bias=False, name=name + "_1_conv")(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + "_1_bn"
)(x)
x = layers.Activation("relu", name=name + "_1_relu")(x)
c = filters // groups
x = layers.ZeroPadding2D(padding=((1, 1), (1, 1)), name=name + "_2_pad")(x)
x = layers.DepthwiseConv2D(
kernel_size,
strides=stride,
depth_multiplier=c,
use_bias=False,
name=name + "_2_conv",
)(x)
x_shape = backend.shape(x)[:-1]
x = backend.reshape(x, backend.concatenate([x_shape, (groups, c, c)]))
x = layers.Lambda(
lambda x: sum(x[:, :, :, :, i] for i in range(c)),
name=name + "_2_reduce",
)(x)
x = backend.reshape(x, backend.concatenate([x_shape, (filters,)]))
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + "_2_bn"
)(x)
x = layers.Activation("relu", name=name + "_2_relu")(x)
x = layers.Conv2D(
(64 // groups) * filters, 1, use_bias=False, name=name + "_3_conv"
)(x)
x = layers.BatchNormalization(
axis=bn_axis, epsilon=1.001e-5, name=name + "_3_bn"
)(x)
x = layers.Add(name=name + "_add")([shortcut, x])
x = layers.Activation("relu", name=name + "_out")(x)
return x
def stack3(x, filters, blocks, stride1=2, groups=32, name=None):
"""A set of stacked residual blocks.
Args:
x: input tensor.
filters: integer, filters of the bottleneck layer in a block.
blocks: integer, blocks in the stacked blocks.
stride1: default 2, stride of the first layer in the first block.
groups: default 32, group size for grouped convolution.
name: string, stack label.
Returns:
Output tensor for the stacked blocks.
"""
x = block3(x, filters, stride=stride1, groups=groups, name=name + "_block1")
for i in range(2, blocks + 1):
x = block3(
x,
filters,
groups=groups,
conv_shortcut=False,
name=name + "_block" + str(i),
)
return x
@keras_export(
"keras.applications.resnet50.ResNet50",
"keras.applications.resnet.ResNet50",
"keras.applications.ResNet50",
)
def ResNet50(
include_top=True,
weights="imagenet",
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
**kwargs,
):
"""Instantiates the ResNet50 architecture."""
def stack_fn(x):
x = stack1(x, 64, 3, stride1=1, name="conv2")
x = stack1(x, 128, 4, name="conv3")
x = stack1(x, 256, 6, name="conv4")
return stack1(x, 512, 3, name="conv5")
return ResNet(
stack_fn,
False,
True,
"resnet50",
include_top,
weights,
input_tensor,
input_shape,
pooling,
classes,
**kwargs,
)
@keras_export(
"keras.applications.resnet.ResNet101", "keras.applications.ResNet101"
)
def ResNet101(
include_top=True,
weights="imagenet",
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
**kwargs,
):
"""Instantiates the ResNet101 architecture."""
def stack_fn(x):
x = stack1(x, 64, 3, stride1=1, name="conv2")
x = stack1(x, 128, 4, name="conv3")
x = stack1(x, 256, 23, name="conv4")
return stack1(x, 512, 3, name="conv5")
return ResNet(
stack_fn,
False,
True,
"resnet101",
include_top,
weights,
input_tensor,
input_shape,
pooling,
classes,
**kwargs,
)
@keras_export(
"keras.applications.resnet.ResNet152", "keras.applications.ResNet152"
)
def ResNet152(
include_top=True,
weights="imagenet",
input_tensor=None,
input_shape=None,
pooling=None,
classes=1000,
**kwargs,
):
"""Instantiates the ResNet152 architecture."""
def stack_fn(x):
x = stack1(x, 64, 3, stride1=1, name="conv2")
x = stack1(x, 128, 8, name="conv3")
x = stack1(x, 256, 36, name="conv4")
return stack1(x, 512, 3, name="conv5")
return ResNet(
stack_fn,
False,
True,
"resnet152",
include_top,
weights,
input_tensor,
input_shape,
pooling,
classes,
**kwargs,
)
@keras_export(
"keras.applications.resnet50.preprocess_input",
"keras.applications.resnet.preprocess_input",
)
def preprocess_input(x, data_format=None):
return imagenet_utils.preprocess_input(
x, data_format=data_format, mode="caffe"
)
@keras_export(
"keras.applications.resnet50.decode_predictions",
"keras.applications.resnet.decode_predictions",
)
def decode_predictions(preds, top=5):
return imagenet_utils.decode_predictions(preds, top=top)
preprocess_input.__doc__ = imagenet_utils.PREPROCESS_INPUT_DOC.format(
mode="",
ret=imagenet_utils.PREPROCESS_INPUT_RET_DOC_CAFFE,
error=imagenet_utils.PREPROCESS_INPUT_ERROR_DOC,
)
decode_predictions.__doc__ = imagenet_utils.decode_predictions.__doc__
DOC = """
Reference:
- [Deep Residual Learning for Image Recognition](
https://arxiv.org/abs/1512.03385) (CVPR 2015)
For image classification use cases, see
[this page for detailed examples](
https://keras.io/api/applications/#usage-examples-for-image-classification-models).
For transfer learning use cases, make sure to read the
[guide to transfer learning & fine-tuning](
https://keras.io/guides/transfer_learning/).
Note: each TF-Keras Application expects a specific kind of input
preprocessing. For ResNet, call
`tf.keras.applications.resnet.preprocess_input` on your inputs before passing
them to the model. `resnet.preprocess_input` will convert the input images
from RGB to BGR, then will zero-center each color channel with respect to the
ImageNet dataset, without scaling.
Args:
include_top: whether to include the fully-connected
layer at the top of the network.
weights: one of `None` (random initialization),
'imagenet' (pre-training on ImageNet),
or the path to the weights file to be loaded.
input_tensor: optional TF-Keras tensor (i.e. output of `layers.Input()`)
to use as image input for the model.
input_shape: optional shape tuple, only to be specified
if `include_top` is False (otherwise the input shape
has to be `(224, 224, 3)` (with `'channels_last'` data format)
or `(3, 224, 224)` (with `'channels_first'` data format).
      It should have exactly 3 input channels,
and width and height should be no smaller than 32.
E.g. `(200, 200, 3)` would be one valid value.
pooling: Optional pooling mode for feature extraction
when `include_top` is `False`.
- `None` means that the output of the model will be
the 4D tensor output of the
last convolutional block.
- `avg` means that global average pooling
will be applied to the output of the
last convolutional block, and thus
the output of the model will be a 2D tensor.
- `max` means that global max pooling will
be applied.
classes: optional number of classes to classify images
into, only to be specified if `include_top` is True, and
if no `weights` argument is specified.
classifier_activation: A `str` or callable. The activation function to use
on the "top" layer. Ignored unless `include_top=True`. Set
`classifier_activation=None` to return the logits of the "top" layer.
When loading pretrained weights, `classifier_activation` can only
be `None` or `"softmax"`.
Returns:
A TF-Keras model instance.
"""
setattr(ResNet50, "__doc__", ResNet50.__doc__ + DOC)
setattr(ResNet101, "__doc__", ResNet101.__doc__ + DOC)
setattr(ResNet152, "__doc__", ResNet152.__doc__ + DOC)
| tf-keras/tf_keras/applications/resnet.py/0 | {
"file_path": "tf-keras/tf_keras/applications/resnet.py",
"repo_id": "tf-keras",
"token_count": 9700
} | 214 |
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Microbenchmarks for TF-Keras components in eager mode."""
import time
import tensorflow.compat.v2 as tf
import tf_keras as keras
from tf_keras.utils import tf_inspect
# isort: off
from tensorflow.python.eager import context
from tensorflow.python.eager.context import get_executor
def _run_benchmark(func, num_iters, execution_mode=None):
with context.execution_mode(execution_mode):
# call func to warm up
func()
if execution_mode == context.ASYNC:
get_executor().wait()
start = time.time()
for _ in range(num_iters):
func()
if execution_mode == context.ASYNC:
get_executor().wait()
end = time.time()
return end - start
class MicroBenchmarksBase(tf.test.Benchmark):
"""Run and report benchmark results."""
def run_report(self, run_benchmark, func, num_iters, execution_mode=None):
"""Run and report benchmark results."""
total_time = run_benchmark(func, num_iters, execution_mode)
mean_us = total_time * 1e6 / num_iters
metrics = [
{
"name": "exp_per_sec",
"value": float(f"{num_iters / total_time:.3f}"),
},
{
"name": "us_per_exp",
"value": float(f"{total_time * 1000000.0 / num_iters:.3f}"),
},
]
benchmark_name = self._get_benchmark_name()
self.report_benchmark(
iters=num_iters,
wall_time=mean_us,
metrics=metrics,
name=benchmark_name,
)
def _get_benchmark_name(self):
"""Mostly copied from benchmark.py _get_name()."""
stack = tf_inspect.stack()
name = None
for frame in stack[::-1]:
f_locals = frame[0].f_locals
f_self = f_locals.get("self", None)
if isinstance(f_self, tf.test.Benchmark):
name = frame[3] # Get the method name
# This is a hack to get around the fact that some methods might
# have a disable_tfrt decorator around them. In that case a
# function called 'decorated' wraps the real called function
# underneath and so we peek one deeper into the stack to get the
# real name.
if name == "decorated":
continue
else:
break
if name is None:
raise ValueError("Unable to determine calling Benchmark function.")
if tf.__internal__.is_tfrt_enabled():
name = name + "_tfrt"
return name
def _run(self, func, num_iters, execution_mode=None):
self.run_report(_run_benchmark, func, num_iters, execution_mode)
def benchmark_layers_call_overhead(self):
class OnlyOverheadLayer(keras.layers.Layer):
def call(self, x):
return x
layer = OnlyOverheadLayer()
x = tf.convert_to_tensor([[1.0]])
def fn():
layer(x)
self._run(fn, 10000)
def benchmark_op_layer_call_overhead(self):
model_input = keras.Input(shape=(1,))
model_output = model_input
x = tf.convert_to_tensor([[1.1]])
for _ in range(20):
model_output = tf.multiply(model_output, x)
model = keras.Model(inputs=model_input, outputs=model_output)
def fn():
model(x)
fn()
self._run(fn, 100)
def benchmark_model_predict_tensorlike_overhead(self):
class OnlyOverheadLayer(keras.layers.Layer):
def call(self, x):
return x
model = keras.Sequential([OnlyOverheadLayer()])
x = tf.convert_to_tensor([[1.0]])
def fn():
model.predict(x)
self._run(fn, 20)
def benchmark_layers_embeddings_embedding_overhead(self):
layer = keras.layers.Embedding(1, 1)
x = tf.zeros((1, 1), dtype="int32")
def fn():
layer(x)
self._run(fn, 10000)
class KerasLayerCallOverheadBenchmarks(
MicroBenchmarksBase, metaclass=tf.__internal__.test.ParameterizedBenchmark
):
# The set of layers for benchmarking. To add benchmarks for new layers,
    # please add the parameter configs to "_benchmark_parameters".
    # The parameter of each layer benchmark is a tuple that contains:
# 1) The benchmark name with convention "{module_name}_{layer_name}";
# 2) The layer instance;
# 3) The shape of the input to the layer;
# 4) The kwargs used in the benchmark. It can include the number of
# iterations to run the benchmarks, and kwargs used in the layer call.
    # By default, the number of iterations is 10000.
_benchmark_parameters = [
(
"advanced_activations_leaky_relu",
keras.layers.LeakyReLU(),
(1, 1),
),
("advanced_activations_prelu", keras.layers.PReLU(), (1, 1)),
("advanced_activations_elu", keras.layers.ELU(), (1, 1)),
(
"advanced_activations_thresholded_relu",
keras.layers.ThresholdedReLU(),
(1, 1),
),
("advanced_activations_softmax", keras.layers.Softmax(), (1, 1)),
("advanced_activations_relu", keras.layers.ReLU(), (1, 1)),
("core_masking", keras.layers.Masking(), (1, 1)),
(
"core_dropout",
keras.layers.Dropout(0.5),
(1, 1),
{"training": True},
),
("core_flatten", keras.layers.Flatten(), (1, 1, 1)),
("core_dense", keras.layers.Dense(1), (1, 1)),
("convolutional_conv1d", keras.layers.Conv1D(1, (1,)), (1, 1, 1)),
(
"convolutional_conv2d",
keras.layers.Conv2D(1, (1, 1)),
(1, 1, 1, 1),
),
(
"convolutional_conv3d",
keras.layers.Conv3D(1, (1, 1, 1)),
(1, 1, 1, 1, 1),
),
(
"batch_norm_fused_inf",
keras.layers.BatchNormalization(fused=True),
(1, 1, 1, 1),
),
(
"batch_norm_fused_train",
keras.layers.BatchNormalization(fused=True),
(1, 1, 1, 1),
{"training": True},
),
(
"batch_norm_nonfused_inf",
keras.layers.BatchNormalization(fused=False),
(1, 1, 1, 1),
),
(
"batch_norm_nonfused_train",
keras.layers.BatchNormalization(fused=False),
(1, 1, 1, 1),
{"training": True},
),
(
"normalization_layer_normalization",
keras.layers.LayerNormalization(),
(1, 1),
{"iters": 100, "training": True},
),
]
def benchmark_layer(self, layer, input_shape, kwargs=None):
x = tf.ones(input_shape)
def fn():
layer(x, **(kwargs or {}))
default_iters = 10000
iters = kwargs.pop("iters", default_iters) if kwargs else default_iters
self._run(fn, iters)
if __name__ == "__main__":
if tf.compat.v1.executing_eagerly():
# Only run test when eager is enabled (skip test in v1).
tf.test.main()
| tf-keras/tf_keras/benchmarks/eager_microbenchmarks_test.py/0 | {
"file_path": "tf-keras/tf_keras/benchmarks/eager_microbenchmarks_test.py",
"repo_id": "tf-keras",
"token_count": 3759
} | 215 |
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from __future__ import absolute_import as _absolute_import
from __future__ import division as _division
from __future__ import print_function as _print_function
import os
import time
import uuid
from tensorflow.python.profiler import profiler_v2 as profiler
def run_with_xprof(
self,
func,
num_iters_xprof=100,
enable_python_trace=True,
logdir="/tmp/layer_benchmark_xprof/",
):
suid = str(uuid.uuid4())
if enable_python_trace:
options = profiler.ProfilerOptions(python_tracer_level=1)
logdir = os.path.join(logdir, str(uuid.uuid4()) + "_with_python")
else:
options = profiler.ProfilerOptions(python_tracer_level=0)
logdir = os.path.join(logdir, suid)
start = time.time()
with profiler.Profile(logdir, options):
for _ in range(num_iters_xprof):
func()
total_time = time.time() - start
us_per_example = float(f"{total_time * 1000000.0 / num_iters_xprof:.3f}")
return logdir, us_per_example
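if __name__ == "__main__":
    # Hedged usage sketch (illustrative only): `run_with_xprof` never touches
    # `self`, so a placeholder object and a trivial callable stand in for a
    # real layer benchmark. Requires a TensorFlow build with the profiler.
    logdir, us_per_example = run_with_xprof(
        object(),
        lambda: sum(range(1000)),
        num_iters_xprof=10,
        enable_python_trace=False,
    )
    print(f"trace written to {logdir}: {us_per_example} us per example")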
| tf-keras/tf_keras/benchmarks/layer_benchmarks/run_xprof.py/0 | {
"file_path": "tf-keras/tf_keras/benchmarks/layer_benchmarks/run_xprof.py",
"repo_id": "tf-keras",
"token_count": 565
} | 216 |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for TF-Keras callbacks."""
import collections
import csv
import json
import os
import re
import shutil
import sys
import threading
import time
import unittest
from unittest import mock
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
import tf_keras as keras
from tf_keras.callbacks import BackupAndRestore
from tf_keras.callbacks import BackupAndRestoreExperimental
from tf_keras.callbacks import Callback
from tf_keras.engine import sequential
from tf_keras.layers import Activation
from tf_keras.layers import Dense
from tf_keras.optimizers import sgd
from tf_keras.optimizers.legacy import gradient_descent
from tf_keras.optimizers.schedules import learning_rate_schedule
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
from tf_keras.utils import io_utils
from tf_keras.utils import np_utils
from tf_keras.utils import tf_utils
# isort: off
from tensorflow.python.platform import tf_logging as logging
try:
import h5py
except ImportError:
h5py = None
try:
import requests
except ImportError:
requests = None
TRAIN_SAMPLES = 10
TEST_SAMPLES = 10
NUM_CLASSES = 2
INPUT_DIM = 3
NUM_HIDDEN = 5
BATCH_SIZE = 5
CALLBACK_HOOKS = [
"on_batch_begin",
"on_batch_end",
"on_epoch_begin",
"on_epoch_end",
"on_predict_batch_begin",
"on_predict_batch_end",
"on_predict_begin",
"on_predict_end",
"on_test_batch_begin",
"on_test_batch_end",
"on_test_begin",
"on_test_end",
"on_train_batch_begin",
"on_train_batch_end",
"on_train_begin",
"on_train_end",
]
class Counter(keras.callbacks.Callback):
"""Counts the number of times each callback method was run.
Attributes:
      method_counts: dict. Contains the number of times each callback method
        was run.
"""
def __init__(self):
self.method_counts = collections.defaultdict(int)
for method_name in CALLBACK_HOOKS:
setattr(
self,
method_name,
self.wrap_with_counts(method_name, getattr(self, method_name)),
)
def wrap_with_counts(self, method_name, method):
def _call_and_count(*args, **kwargs):
self.method_counts[method_name] += 1
return method(*args, **kwargs)
return _call_and_count
class CallAllHooks(keras.callbacks.Callback):
"""A callback that calls self._run for all hooks"""
def __init__(self):
for method_name in CALLBACK_HOOKS:
setattr(self, method_name, self._run)
def _run(self, *args, logs=None):
raise NotImplementedError
def _get_numpy():
return np.ones((10, 10)), np.ones((10, 1))
def _get_sequence():
class MySequence(keras.utils.data_utils.Sequence):
def __getitem__(self, _):
return np.ones((2, 10)), np.ones((2, 1))
def __len__(self):
return 5
return MySequence(), None
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes
class CallbackCountsTest(test_combinations.TestCase):
def _check_counts(self, counter, expected_counts):
"""Checks that the counts registered by `counter` are those expected."""
for method_name, expected_count in expected_counts.items():
self.assertEqual(
counter.method_counts[method_name],
expected_count,
msg="For method {}: expected {}, got: {}".format(
method_name,
expected_count,
counter.method_counts[method_name],
),
)
def _get_model(self):
layers = [
keras.layers.Dense(10, activation="relu"),
keras.layers.Dense(1, activation="sigmoid"),
]
model = test_utils.get_model_from_layers(layers, input_shape=(10,))
model.compile(
tf.compat.v1.train.AdamOptimizer(0.001),
"binary_crossentropy",
run_eagerly=test_utils.should_run_eagerly(),
)
return model
@parameterized.named_parameters(
("with_numpy", _get_numpy()), ("with_sequence", _get_sequence())
)
def test_callback_hooks_are_called_in_fit(self, data):
if not tf.executing_eagerly():
self.skipTest("Behavior changed in v2.")
x, y = data
val_x, val_y = np.ones((4, 10)), np.ones((4, 1))
model = self._get_model()
counter = Counter()
model.fit(
x,
y,
validation_data=(val_x, val_y),
batch_size=2,
steps_per_epoch=5,
epochs=5,
callbacks=[counter],
)
self._check_counts(
counter,
{
"on_batch_begin": 25,
"on_batch_end": 25,
"on_epoch_begin": 5,
"on_epoch_end": 5,
"on_predict_batch_begin": 0,
"on_predict_batch_end": 0,
"on_predict_begin": 0,
"on_predict_end": 0,
"on_test_batch_begin": 10,
"on_test_batch_end": 10,
"on_test_begin": 5,
"on_test_end": 5,
"on_train_batch_begin": 25,
"on_train_batch_end": 25,
"on_train_begin": 1,
"on_train_end": 1,
},
)
@parameterized.named_parameters(
("with_numpy", _get_numpy()), ("with_sequence", _get_sequence())
)
def test_callback_hooks_are_called_in_evaluate(self, data):
x, y = data
is_sequence = isinstance(x, keras.utils.data_utils.Sequence)
model = self._get_model()
counter = Counter()
model.evaluate(
x,
y,
batch_size=2 if not is_sequence else None,
steps=5 if is_sequence else None,
callbacks=[counter],
)
self._check_counts(
counter,
{
"on_test_batch_begin": 5,
"on_test_batch_end": 5,
"on_test_begin": 1,
"on_test_end": 1,
},
)
@parameterized.named_parameters(
("with_numpy", _get_numpy()), ("with_sequence", _get_sequence())
)
def test_callback_hooks_are_called_in_predict(self, data):
x = data[0]
is_sequence = isinstance(x, keras.utils.data_utils.Sequence)
model = self._get_model()
counter = Counter()
model.predict(
x,
batch_size=2 if not is_sequence else None,
steps=5 if is_sequence else None,
callbacks=[counter],
)
self._check_counts(
counter,
{
"on_predict_batch_begin": 5,
"on_predict_batch_end": 5,
"on_predict_begin": 1,
"on_predict_end": 1,
},
)
def test_callback_list_methods(self):
counter = Counter()
callback_list = keras.callbacks.CallbackList([counter])
batch = 0
callback_list.on_test_batch_begin(batch)
callback_list.on_test_batch_end(batch)
callback_list.on_predict_batch_begin(batch)
callback_list.on_predict_batch_end(batch)
self._check_counts(
counter,
{
"on_test_batch_begin": 1,
"on_test_batch_end": 1,
"on_predict_batch_begin": 1,
"on_predict_batch_end": 1,
},
)
class KerasCallbacksTest(test_combinations.TestCase, parameterized.TestCase):
def _get_model(self, input_shape=None, additional_metrics=None):
additional_metrics = additional_metrics or []
layers = [
keras.layers.Dense(3, activation="relu"),
keras.layers.Dense(2, activation="softmax"),
]
model = test_utils.get_model_from_layers(
layers, input_shape=input_shape
)
model.compile(
loss="mse",
optimizer="rmsprop",
metrics=[keras.metrics.CategoricalAccuracy(name="my_acc")]
+ additional_metrics,
run_eagerly=test_utils.should_run_eagerly(),
)
return model
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes
def test_progbar_logging(self):
model = self._get_model(input_shape=(3,))
x = tf.ones((200, 3))
y = tf.zeros((200, 2))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(10)
expected_log = r"(.*- loss:.*- my_acc:.*)+"
io_utils.enable_interactive_logging()
with self.captureWritesToStream(sys.stdout) as printed:
model.fit(dataset, epochs=2, steps_per_epoch=10)
self.assertRegex(printed.contents(), expected_log)
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes
def test_progbar_logging_with_stateful_metrics(self):
class AddAllOnes(keras.metrics.Metric):
"""A simple metric that adds all the one's in `y_true`."""
def __init__(self, name="add_all_ones", **kwargs):
super().__init__(name=name, **kwargs)
self.total = self.add_weight(name="total", initializer="zeros")
def update_state(self, y_true, y_pred, sample_weight=None):
self.total.assign_add(
tf.cast(tf.reduce_sum(y_true), dtype=tf.float32)
)
def result(self):
return self.total
x_train = np.array([[0, 1, 0, 1, 0, 1, 0, 1]] * 8).astype(float)
y_train = np.array(
[[1, 0], [0, 0], [1, 1], [1, 0], [0, 1], [1, 0], [1, 0], [0, 0]]
)
# There are 7 ones in total in `y_train` after two batches.
expected_log = r"(.*- loss:.*- my_acc:.*- add_all_ones: 7.0000)+"
io_utils.enable_interactive_logging()
with self.captureWritesToStream(sys.stdout) as printed:
model = self._get_model(
input_shape=(8,), additional_metrics=[AddAllOnes()]
)
model.fit(x_train, y_train, verbose=1, batch_size=4, shuffle=False)
self.assertRegex(printed.contents(), expected_log)
# When not executing eagerly, `model.evaluate` does not have the metrics
# results printed.
if tf.executing_eagerly():
with self.captureWritesToStream(sys.stdout) as printed:
model = self._get_model(
input_shape=(8,), additional_metrics=[AddAllOnes()]
)
model.evaluate(x_train, y_train, verbose=1, batch_size=4)
self.assertRegex(printed.contents(), expected_log)
@test_combinations.run_all_keras_modes
def test_trivial_backup_restore(self):
if test_utils.should_run_eagerly():
model = keras.Sequential([keras.layers.Dense(1)])
model.compile("sgd", "mse")
cbk = BackupAndRestore(self.get_temp_dir())
model.fit(
np.ones((10, 1)), np.ones((10, 1)), epochs=1, callbacks=[cbk]
)
def test_backup_restore_train_counter(self):
if not tf.compat.v1.executing_eagerly():
self.skipTest(
"BackupAndRestore only available when eager execution is "
"enabled"
)
model = keras.Sequential([keras.layers.Dense(1)])
model.compile("sgd", "mse")
cbk = BackupAndRestore(self.get_temp_dir())
class InterruptingCallback(keras.callbacks.Callback):
"""A callback to intentionally introduce interruption to
training."""
def on_epoch_end(self, epoch, log=None):
logging.info(f"counter: {model._train_counter}")
if epoch == 5 or epoch == 12:
raise RuntimeError("Interruption")
self.get_temp_dir()
# The following asserts that the train counter is fault tolerant.
self.assertEqual(model._train_counter.numpy(), 0)
try:
model.fit(
np.ones((10, 1)),
np.ones((10, 1)),
epochs=20,
callbacks=[cbk, InterruptingCallback()],
)
except RuntimeError:
pass
self.assertEqual(model._train_counter.numpy(), 6)
try:
model.fit(
np.ones((10, 1)),
np.ones((10, 1)),
epochs=20,
callbacks=[cbk, InterruptingCallback()],
)
except RuntimeError:
pass
self.assertEqual(model._train_counter.numpy(), 13)
def _test_backup_and_restore_callback_with(self, cls):
if not tf.compat.v1.executing_eagerly():
self.skipTest(
"BackupAndRestore only available when execution is enabled"
)
class InterruptingCallback(keras.callbacks.Callback):
"""A callback to intentionally introduce interruption to
training."""
def on_epoch_end(self, epoch, log=None):
if epoch == 15:
raise RuntimeError("Interruption")
model = keras.Sequential([keras.layers.Dense(10)])
optimizer = sgd.SGD()
model.compile(optimizer, loss="mse")
x = tf.random.uniform((24, 10))
y = tf.random.uniform((24,))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).repeat().batch(2)
backup_callback = cls(backup_dir=self.get_temp_dir())
try:
model.fit(
dataset,
epochs=20,
steps_per_epoch=5,
callbacks=[backup_callback, InterruptingCallback()],
)
except RuntimeError:
logging.warning("***Handling interruption***")
# This continues at the epoch where it left off.
model.fit(
dataset,
epochs=20,
steps_per_epoch=5,
callbacks=[backup_callback],
)
def _test_backup_and_restore_callback_at_steps(
self, cls, epoch_int, steps_int, mode
):
if not tf.compat.v1.executing_eagerly():
self.skipTest(
"BackupAndRestore only available when eager execution is "
"enabled"
)
class InterruptingCallback(keras.callbacks.Callback):
"""A callback to intentionally introduce interruption to
training."""
batch_count = 0
def on_epoch_end(self, epoch, log=None):
if epoch == epoch_int:
raise RuntimeError("EpochInterruption")
def on_batch_end(self, batch, logs=None):
self.batch_count += 1
if self.batch_count == steps_int:
raise RuntimeError("StepsInterruption")
class VerifyRestore(Callback):
"""Verify if the training restored to the correct epoch and step."""
def __init__(self, initial_epoch, initial_step):
super(VerifyRestore, self).__init__()
self.initial_epoch = initial_epoch
self.initial_step = initial_step
self._current_epoch = 0
def on_epoch_begin(self, epoch, logs=None):
self._current_epoch = epoch
if epoch < self.initial_epoch:
raise ValueError(
"Training did not restore at epoch (%d) and step (%d)"
% (self.initial_epoch, self.initial_step)
)
def on_batch_begin(self, batch, logs=None):
if (
batch <= self.initial_step
and self._current_epoch < self.initial_epoch
):
raise ValueError(
"Training did not restore at Epoch (%d) and step (%d)"
% (self.initial_epoch, self.initial_step)
)
model = keras.Sequential([keras.layers.Dense(10)])
optimizer = sgd.SGD()
model.compile(optimizer, loss="mse")
x = tf.random.uniform((24, 10))
y = tf.random.uniform((24,))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).repeat().batch(2)
save_freq_arg = "epoch" if mode == "epoch" else 7
backup_callback = cls(
backup_dir=self.get_temp_dir(), save_freq=save_freq_arg
)
# epoch where the restore should resume from
if save_freq_arg == "epoch":
init_epoch = epoch_int
init_step = 0
elif save_freq_arg:
init_epoch = int(((steps_int // 7) * 7) // 5)
init_step = int((((steps_int // 7) * 7) % 5) - 1)
else:
init_epoch = 0
init_step = 0
# callback to verify accurate training state restore
verify_restore_callback = VerifyRestore(
initial_epoch=init_epoch, initial_step=init_step
)
try:
model.fit(
dataset,
epochs=20,
steps_per_epoch=5,
callbacks=[backup_callback, InterruptingCallback()],
)
except RuntimeError as e:
if str(e) == "EpochInterruption":
logging.warning("***Handling interruption at epoch***")
elif str(e) == "StepsInterruption":
logging.warning("***Handling interruption at Nth step***")
# This continues at the epoch and step where it left off.
model.fit(
dataset,
epochs=20,
steps_per_epoch=5,
callbacks=[backup_callback, verify_restore_callback],
)
def test_experimental_backup_and_restore(self):
"""Ensure the legacy endpoint of `BackupAndRestore` gives warning."""
warning_messages = []
def warning(msg):
warning_messages.append(msg)
with tf.compat.v1.test.mock.patch.object(logging, "warning", warning):
self._test_backup_and_restore_callback_with(
BackupAndRestoreExperimental
)
warning_msg = (
"`tf.keras.callbacks.experimental.BackupAndRestore` "
"endpoint is deprecated"
)
self.assertIn(warning_msg, "\n".join(warning_messages))
warning_msg = "***Handling interruption***"
self.assertIn(warning_msg, "\n".join(warning_messages))
def test_backup_and_restore(self):
"""Ensure the public endpoint of `BackupAndRestore` is working."""
warning_messages = []
def warning(msg):
warning_messages.append(msg)
with tf.compat.v1.test.mock.patch.object(logging, "warning", warning):
self._test_backup_and_restore_callback_with(BackupAndRestore)
warning_msg = (
"`tf.keras.callbacks.experimental.BackupAndRestore` "
"endpoint is deprecated"
)
self.assertNotIn(warning_msg, "\n".join(warning_messages))
warning_msg = "***Handling interruption***"
self.assertIn(warning_msg, "\n".join(warning_messages))
def test_backup_and_restore_steps(self):
"""Ensure the public endpoint of `BackupAndRestore` is working."""
warning_messages = []
def warning(msg):
warning_messages.append(msg)
with tf.compat.v1.test.mock.patch.object(logging, "warning", warning):
# interrupt at steps before 1 epoch
self._test_backup_and_restore_callback_at_steps(
BackupAndRestore, epoch_int=20, steps_int=3, mode="batch"
)
warning_msg = (
"`tf.keras.callbacks.experimental.BackupAndRestore` "
"endpoint is deprecated"
)
self.assertNotIn(warning_msg, "\n".join(warning_messages))
warning_msg = "***Handling interruption at Nth step***"
self.assertIn(warning_msg, "\n".join(warning_messages))
# interrupt at steps after 1 epoch
warning_messages = []
with tf.compat.v1.test.mock.patch.object(logging, "warning", warning):
self._test_backup_and_restore_callback_at_steps(
BackupAndRestore, epoch_int=20, steps_int=8, mode="batch"
)
warning_msg = "***Handling interruption at Nth step***"
self.assertIn(warning_msg, "\n".join(warning_messages))
# interrupt at epoch before steps
warning_messages = []
with tf.compat.v1.test.mock.patch.object(logging, "warning", warning):
self._test_backup_and_restore_callback_at_steps(
BackupAndRestore, epoch_int=1, steps_int=12, mode="epoch"
)
warning_msg = "***Handling interruption at epoch***"
self.assertIn(warning_msg, "\n".join(warning_messages))
def test_backup_and_restore_steps_last_batch(self):
"""Ensure the public endpoint of `BackupAndRestore` is working."""
warning_messages = []
def warning(msg):
warning_messages.append(msg)
with tf.compat.v1.test.mock.patch.object(logging, "warning", warning):
# interrupt at last step in 7th epoch
self._test_backup_and_restore_callback_at_steps(
BackupAndRestore, epoch_int=20, steps_int=35, mode="batch"
)
warning_msg = (
"`tf.keras.callbacks.experimental.BackupAndRestore` "
"endpoint is deprecated"
)
self.assertNotIn(warning_msg, "\n".join(warning_messages))
warning_msg = "***Handling interruption at Nth step***"
self.assertIn(warning_msg, "\n".join(warning_messages))
def test_backup_and_restore_steps_false_save_freq(self):
"""Ensure the public endpoint of `BackupAndRestore` is working."""
warning_messages = []
def warning(msg):
warning_messages.append(msg)
with tf.compat.v1.test.mock.patch.object(logging, "warning", warning):
# interrupt at steps before 1 epoch
self._test_backup_and_restore_callback_at_steps(
BackupAndRestore, epoch_int=20, steps_int=3, mode=False
)
warning_msg = (
"`tf.keras.callbacks.experimental.BackupAndRestore` "
"endpoint is deprecated"
)
self.assertNotIn(warning_msg, "\n".join(warning_messages))
warning_msg = "***Handling interruption at Nth step***"
self.assertIn(warning_msg, "\n".join(warning_messages))
# interrupt at steps after 1 epoch
warning_messages = []
with tf.compat.v1.test.mock.patch.object(logging, "warning", warning):
self._test_backup_and_restore_callback_at_steps(
BackupAndRestore, epoch_int=20, steps_int=8, mode="batch"
)
warning_msg = "***Handling interruption at Nth step***"
self.assertIn(warning_msg, "\n".join(warning_messages))
# interrupt at an epoch boundary before the step interruption is reached
warning_messages = []
with tf.compat.v1.test.mock.patch.object(logging, "warning", warning):
self._test_backup_and_restore_callback_at_steps(
BackupAndRestore, epoch_int=1, steps_int=12, mode="epoch"
)
warning_msg = "***Handling interruption at epoch***"
self.assertIn(warning_msg, "\n".join(warning_messages))
def test_backup_and_restore_steps_clean_up(self):
if not tf.executing_eagerly():
self.skipTest(
"BackupAndRestore only available when eager execution is "
"enabled."
)
path = self.get_temp_dir()
callback = BackupAndRestore(path, delete_checkpoint=True)
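# With delete_checkpoint=True the backup files should be removed once
# training finishes; with delete_checkpoint=False (below) they remain.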
model = keras.Sequential([keras.layers.Dense(10)])
optimizer = gradient_descent.SGD()
model.compile(optimizer, loss="mse")
x = tf.random.uniform((24, 10))
y = tf.random.uniform((24,))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(2)
model.fit(dataset, epochs=1, callbacks=[callback])
self.assertEmpty(os.listdir(path))
callback = BackupAndRestore(path, delete_checkpoint=False)
model.fit(dataset, epochs=1, callbacks=[callback])
self.assertNotEmpty(os.listdir(path))
@test_combinations.run_all_keras_modes
def test_callback_warning(self):
class SleepCallback(keras.callbacks.Callback):
def on_train_batch_end(self, batch, logs=None):
time.sleep(0.1)
model = sequential.Sequential()
model.add(keras.layers.Dense(1))
model.compile(
"sgd", loss="mse", run_eagerly=test_utils.should_run_eagerly()
)
warning_messages = []
def warning(msg):
warning_messages.append(msg)
with tf.compat.v1.test.mock.patch.object(logging, "warning", warning):
model.fit(
np.ones((16, 1), "float32"),
np.ones((16, 1), "float32"),
batch_size=3,
epochs=1,
callbacks=[SleepCallback()],
)
warning_msg = (
"Callback method `on_train_batch_end` is slow compared "
"to the batch time"
)
self.assertIn(warning_msg, "\n".join(warning_messages))
@test_combinations.run_all_keras_modes
def test_default_callbacks_no_warning(self):
# Test that without the callback no warning is raised
model = sequential.Sequential()
model.add(keras.layers.Dense(1))
model.compile(
"sgd", loss="mse", run_eagerly=test_utils.should_run_eagerly()
)
warning_messages = []
def warning(msg):
warning_messages.append(msg)
with tf.compat.v1.test.mock.patch.object(logging, "warning", warning):
model.fit(
np.ones((16, 1), "float32"),
np.ones((16, 1), "float32"),
batch_size=3,
epochs=1,
)
self.assertListEqual(warning_messages, [])
@test_combinations.run_with_all_model_types(exclude_models="functional")
@test_combinations.run_all_keras_modes
def test_progbar_logging_deferred_model_build(self):
model = self._get_model()
self.assertFalse(model.built)
x = tf.ones((200, 3))
y = tf.zeros((200, 2))
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(10)
expected_log = r"(.*- loss:.*- my_acc:.*)+"
io_utils.enable_interactive_logging()
with self.captureWritesToStream(sys.stdout) as printed:
model.fit(dataset, epochs=2, steps_per_epoch=10)
self.assertRegex(printed.contents(), expected_log)
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes
def test_progbar_logging_validation_data(self):
model = self._get_model(input_shape=(3,))
x = tf.ones((50, 3))
y = tf.zeros((50, 2))
training_dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(10)
val_dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(10)
expected_log = (
r"(.*5/5.*- loss:.*- my_acc:.*- val_loss:.*- val_my_acc:.*)+"
)
io_utils.enable_interactive_logging()
with self.captureWritesToStream(sys.stdout) as printed:
model.fit(training_dataset, epochs=2, validation_data=val_dataset)
self.assertRegex(printed.contents(), expected_log)
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes(always_skip_v1=True)
def test_progbar_logging_validation_split(self):
model = self._get_model(input_shape=(3,))
x = np.ones((100, 3))
y = np.zeros((100, 2))
expected_log = (
r"(?s).*1/2.*8/8.*- loss:.*- my_acc:.*- val_loss:.*- val_my_acc:"
r".*2/2.*8/8.*- loss:.*- my_acc:.*- val_loss:.*- val_my_acc:.*"
)
io_utils.enable_interactive_logging()
with self.captureWritesToStream(sys.stdout) as printed:
model.fit(x, y, batch_size=10, epochs=2, validation_split=0.2)
self.assertRegex(printed.contents(), expected_log)
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes(always_skip_v1=True)
def test_progbar_logging_training_validation(self):
model = self._get_model(input_shape=(2,))
def generator():
for _ in range(100):
yield [1, 1], 1
training = (
tf.data.Dataset.from_generator(
generator=generator,
output_types=("float64", "float64"),
output_shapes=([2], []),
)
.batch(2)
.repeat()
)
validation = tf.data.Dataset.from_generator(
generator=generator,
output_types=("float64", "float64"),
output_shapes=([2], []),
).batch(2)
expected_log = (
r"(?s).*1/2.*20/20.*- loss:.*- my_acc:.*- val_loss:.*- val_my_acc:"
r".*2/2.*20/20.*- loss:.*- my_acc:.*- val_loss:.*- val_my_acc:.*"
)
io_utils.enable_interactive_logging()
with self.captureWritesToStream(sys.stdout) as printed:
model.fit(
x=training,
validation_data=validation,
epochs=2,
steps_per_epoch=20,
)
self.assertRegex(printed.contents(), expected_log)
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes(always_skip_v1=True)
def test_progbar_logging_with_dataset_and_partial_batch(self):
model = self._get_model(input_shape=(2,))
def generator():
# Have a partial batch at the end.
for _ in range(9):
yield np.random.random(2), 1
training = tf.data.Dataset.from_generator(
generator=generator,
output_types=("float64", "float64"),
output_shapes=([2], []),
).batch(2)
validation = tf.data.Dataset.from_generator(
generator=generator,
output_types=("float64", "float64"),
output_shapes=([2], []),
).batch(2)
io_utils.enable_interactive_logging()
with self.captureWritesToStream(sys.stdout) as printed:
model.fit(x=training, validation_data=validation)
# Make sure the values of the val_ metrics are not zero.
log_content = printed.contents()
val_loss = re.findall(r"val_loss: (\d\.\d+)", log_content)
self.assertLen(val_loss, 1)
self.assertGreater(float(val_loss[0]), 0.0)
@test_combinations.run_with_all_model_types
@parameterized.named_parameters(
("h5", ".h5"),
("keras", ".keras"),
)
def test_ModelCheckpoint(self, save_format):
if save_format == ".h5" and h5py is None:
return # Skip test if models cannot be saved.
model_type = test_utils.get_model_type()
if model_type == "subclass":
# Skip test since subclassed models cannot be saved in .h5 format.
return
if not tf.__internal__.tf2.enabled():
self.skipTest("Checkpoint callback only available in v2.")
layers = [
keras.layers.Dense(
NUM_HIDDEN, input_dim=INPUT_DIM, activation="relu"
),
keras.layers.Dense(NUM_CLASSES, activation="softmax"),
]
model = test_utils.get_model_from_layers(layers, input_shape=(3,))
model.compile(
loss="categorical_crossentropy",
optimizer="rmsprop",
metrics=["acc"],
)
temp_dir = self.get_temp_dir()
self.addCleanup(shutil.rmtree, temp_dir, ignore_errors=True)
# Save model to a subdir inside the temp_dir so we can test
# automatic directory creation.
filepath = os.path.join(temp_dir, "subdir", "checkpoint" + save_format)
(x_train, y_train), (x_test, y_test) = test_utils.get_test_data(
train_samples=TRAIN_SAMPLES,
test_samples=TEST_SAMPLES,
input_shape=(INPUT_DIM,),
num_classes=NUM_CLASSES,
)
y_test = np_utils.to_categorical(y_test)
y_train = np_utils.to_categorical(y_train)
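# The cases below exercise the monitor/mode/save_best_only combinations,
# save_freq and the deprecated period, filename placeholders, the
# options argument, and initial_value_threshold.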
# Case 1
monitor = "val_loss"
save_best_only = False
mode = "auto"
cbks = [
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
assert os.path.exists(filepath)
os.remove(filepath)
# Case 2
mode = "min"
cbks = [
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
assert os.path.exists(filepath)
os.remove(filepath)
# Case 3
mode = "max"
monitor = "val_acc"
cbks = [
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
assert os.path.exists(filepath)
os.remove(filepath)
# Case 4
save_best_only = True
cbks = [
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
assert os.path.exists(filepath)
os.remove(filepath)
# Case 5: metric not available.
cbks = [
keras.callbacks.ModelCheckpoint(
filepath, monitor="unknown", save_best_only=True
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
# File won't be written.
assert not os.path.exists(filepath)
# Case 6
save_best_only = False
period = 2
mode = "auto"
filepath = os.path.join(
temp_dir, "checkpoint.{epoch:02d}" + save_format
)
cbks = [
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode=mode,
period=period,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=4,
verbose=1,
)
assert os.path.exists(filepath.format(epoch=2))
assert os.path.exists(filepath.format(epoch=4))
os.remove(filepath.format(epoch=2))
os.remove(filepath.format(epoch=4))
assert not os.path.exists(filepath.format(epoch=1))
assert not os.path.exists(filepath.format(epoch=3))
# Invalid use: this will raise a warning but not an Exception.
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode="unknown",
)
# Case 7: `ModelCheckpoint` with a combination of `save_freq` and
# `period`. Though `period` is deprecated, we're testing it for
# backward-compatibility.
filepath = os.path.join(
temp_dir, "checkpoint.epoch{epoch:02d}" + save_format
)
cbks = [
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
mode=mode,
save_freq="epoch",
period=5,
)
]
assert not os.path.exists(filepath.format(epoch=0))
assert not os.path.exists(filepath.format(epoch=5))
model.fit(
x_train,
y_train,
batch_size=2,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=10,
verbose=1,
)
assert not os.path.exists(filepath.format(epoch=1))
assert not os.path.exists(filepath.format(epoch=2))
assert not os.path.exists(filepath.format(epoch=3))
assert not os.path.exists(filepath.format(epoch=4))
assert os.path.exists(filepath.format(epoch=5))
assert not os.path.exists(filepath.format(epoch=6))
assert os.path.exists(filepath.format(epoch=10))
os.remove(filepath.format(epoch=5))
os.remove(filepath.format(epoch=10))
# Case 8: `ModelCheckpoint` with an integer `save_freq`
filepath = os.path.join(
temp_dir, "checkpoint.epoch{epoch:02d}" + save_format
)
cbks = [
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode=mode,
save_freq=15,
period=100,
) # The `period` argument should be ignored (this test verifies that).
]
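# With batch_size=2 there are 5 batches per epoch here (assuming the
# TRAIN_SAMPLES constant defined above is 10), so save_freq=15 batches
# lines up with the ends of epochs 3, 6 and 9.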
assert not os.path.exists(filepath.format(epoch=3))
model.fit(
x_train,
y_train,
batch_size=2,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=10,
verbose=1,
)
assert not os.path.exists(filepath.format(epoch=1))
assert not os.path.exists(filepath.format(epoch=2))
assert os.path.exists(filepath.format(epoch=3))
assert not os.path.exists(filepath.format(epoch=4))
assert not os.path.exists(filepath.format(epoch=5))
assert os.path.exists(filepath.format(epoch=6))
assert not os.path.exists(filepath.format(epoch=7))
assert not os.path.exists(filepath.format(epoch=8))
assert os.path.exists(filepath.format(epoch=9))
os.remove(filepath.format(epoch=3))
os.remove(filepath.format(epoch=6))
os.remove(filepath.format(epoch=9))
# Case 9: `ModelCheckpoint` with valid and invalid save_freq argument.
with self.assertRaisesRegex(ValueError, "Unrecognized save_freq"):
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode=mode,
save_freq="invalid_save_freq",
)
# The following should not raise ValueError.
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode=mode,
save_freq="epoch",
)
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
mode=mode,
save_freq=3,
)
# Case 10: `ModelCheckpoint` with valid and invalid `options` argument.
if save_format == ".h5":
with self.assertRaisesRegex(
TypeError, "tf.train.CheckpointOptions"
):
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
save_weights_only=True,
mode=mode,
options=tf.saved_model.SaveOptions(),
)
with self.assertRaisesRegex(
TypeError, "tf.saved_model.SaveOptions"
):
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
save_weights_only=False,
mode=mode,
options=tf.train.CheckpointOptions(),
)
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
save_weights_only=True,
mode=mode,
options=tf.train.CheckpointOptions(),
)
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
save_weights_only=False,
mode=mode,
options=tf.saved_model.SaveOptions(),
)
# Case 11: `ModelCheckpoint` saves the model with the batch number in
# the filename.
filepath = os.path.join(
temp_dir,
"checkpoint.epoch{epoch:02d}batch{batch:02d}" + save_format,
)
cbks = [
keras.callbacks.ModelCheckpoint(
filepath, monitor=monitor, save_freq=1
)
]
assert not os.path.exists(filepath.format(epoch=1, batch=1))
assert not os.path.exists(filepath.format(epoch=1, batch=2))
assert not os.path.exists(filepath.format(epoch=2, batch=1))
assert not os.path.exists(filepath.format(epoch=2, batch=2))
assert not os.path.exists(filepath.format(epoch=3, batch=1))
assert not os.path.exists(filepath.format(epoch=3, batch=2))
assert not os.path.exists(filepath.format(epoch=4, batch=1))
assert not os.path.exists(filepath.format(epoch=4, batch=2))
assert not os.path.exists(filepath.format(epoch=5, batch=1))
assert not os.path.exists(filepath.format(epoch=5, batch=2))
model.fit(
x_train,
y_train,
batch_size=5,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=5,
verbose=1,
)
assert os.path.exists(filepath.format(epoch=1, batch=1))
assert os.path.exists(filepath.format(epoch=1, batch=2))
assert os.path.exists(filepath.format(epoch=2, batch=1))
assert os.path.exists(filepath.format(epoch=2, batch=2))
assert os.path.exists(filepath.format(epoch=3, batch=1))
assert os.path.exists(filepath.format(epoch=3, batch=2))
assert os.path.exists(filepath.format(epoch=4, batch=1))
assert os.path.exists(filepath.format(epoch=4, batch=2))
assert os.path.exists(filepath.format(epoch=5, batch=1))
assert os.path.exists(filepath.format(epoch=5, batch=2))
os.remove(filepath.format(epoch=1, batch=1))
os.remove(filepath.format(epoch=1, batch=2))
os.remove(filepath.format(epoch=2, batch=1))
os.remove(filepath.format(epoch=2, batch=2))
os.remove(filepath.format(epoch=3, batch=1))
os.remove(filepath.format(epoch=3, batch=2))
os.remove(filepath.format(epoch=4, batch=1))
os.remove(filepath.format(epoch=4, batch=2))
os.remove(filepath.format(epoch=5, batch=1))
os.remove(filepath.format(epoch=5, batch=2))
# Case 12: ModelCheckpoint saves model with initial_value_threshold
# param
mode = "max"
monitor = "val_acc"
initial_value_threshold = 0
save_best_only = True
filepath = os.path.join(temp_dir, "checkpoint" + save_format)
cbks = [
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
initial_value_threshold=initial_value_threshold,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
assert os.path.exists(filepath)
os.remove(filepath)
# Case 13: ModelCheckpoint saves the model when `initial_value_threshold`
# is None (auto mode)
mode = "auto"
monitor = "val_loss"
initial_value_threshold = None
save_best_only = True
cbks = [
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
initial_value_threshold=initial_value_threshold,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
assert os.path.exists(filepath)
os.remove(filepath)
# Case 14: ModelCheckpoint doesn't save the model if the loss never
# improves on the initial threshold (min mode)
mode = "min"
monitor = "val_loss"
initial_value_threshold = 0
save_best_only = True
cbks = [
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
initial_value_threshold=initial_value_threshold,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
assert not os.path.exists(filepath)
# Case 15: ModelCheckpoint doesn't save the model if the loss never
# improves on the initial threshold in auto mode
mode = "auto"
monitor = "val_loss"
initial_value_threshold = 0
save_best_only = True
cbks = [
keras.callbacks.ModelCheckpoint(
filepath,
monitor=monitor,
save_best_only=save_best_only,
initial_value_threshold=initial_value_threshold,
mode=mode,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
assert not os.path.exists(filepath)
@test_utils.run_v2_only
def test_ModelCheckpoint_subclass_SavedModel_save_weights_false(self):
model = test_utils.get_small_subclass_mlp(NUM_HIDDEN, NUM_CLASSES)
model.compile(
loss="categorical_crossentropy",
optimizer="rmsprop",
metrics=["acc"],
)
temp_dir = self.get_temp_dir()
self.addCleanup(shutil.rmtree, temp_dir, ignore_errors=True)
filepath = os.path.join(temp_dir, "checkpoint")
cbks = [
keras.callbacks.ModelCheckpoint(filepath, save_weights_only=False)
]
(x_train, y_train), _ = test_utils.get_test_data(
train_samples=TRAIN_SAMPLES,
test_samples=TEST_SAMPLES,
input_shape=(INPUT_DIM,),
num_classes=NUM_CLASSES,
)
y_train = np_utils.to_categorical(y_train, num_classes=NUM_CLASSES)
model.fit(x_train, y_train, callbacks=cbks, epochs=1, verbose=0)
# Check that the filepath is a SavedModel directory.
self.assertIn("saved_model.pb", os.listdir(filepath))
@test_utils.run_v2_only
def test_ModelCheckpoint_subclass_KerasV3(self):
model = test_utils.get_small_subclass_mlp(NUM_HIDDEN, NUM_CLASSES)
model.compile(
loss="categorical_crossentropy",
optimizer="rmsprop",
metrics=["acc"],
)
temp_dir = self.get_temp_dir()
self.addCleanup(shutil.rmtree, temp_dir, ignore_errors=True)
filepath = os.path.join(temp_dir, "checkpoint.keras")
cbks = [
keras.callbacks.ModelCheckpoint(filepath, save_weights_only=False)
]
(x_train, y_train), _ = test_utils.get_test_data(
train_samples=TRAIN_SAMPLES,
test_samples=TEST_SAMPLES,
input_shape=(INPUT_DIM,),
num_classes=NUM_CLASSES,
)
y_train = np_utils.to_categorical(y_train, num_classes=NUM_CLASSES)
model.fit(x_train, y_train, callbacks=cbks, epochs=1, verbose=0)
assert os.path.exists(filepath)
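# Helper that builds a tiny single-weight Bias model, a 16-sample
# dataset batched in 8s, and a weights-only ModelCheckpoint callback
# used by the restart tests below.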
def _get_dummy_resource_for_model_checkpoint_testing(self):
def get_input_datasets():
# Simple training input.
train_input = [[1.0]] * 16
train_label = [[0.0]] * 16
ds = tf.data.Dataset.from_tensor_slices((train_input, train_label))
return ds.batch(8, drop_remainder=True)
# Very simple bias model to eliminate randomness.
optimizer = gradient_descent.SGD(0.1)
model = sequential.Sequential()
model.add(test_utils.Bias(input_shape=(1,)))
model.compile(loss="mae", optimizer=optimizer, metrics=["mae"])
train_ds = get_input_datasets()
temp_dir = self.get_temp_dir()
filepath = os.path.join(temp_dir, "checkpoint.epoch{epoch:02d}.h5")
# The filepath shouldn't exist at the beginning.
self.assertFalse(os.path.exists(filepath))
callback = keras.callbacks.ModelCheckpoint(
filepath=filepath, save_weights_only=True
)
return model, train_ds, callback, filepath
def _run_load_weights_on_restart_test_common_iterations(self):
(
model,
train_ds,
callback,
filepath,
) = self._get_dummy_resource_for_model_checkpoint_testing()
initial_epochs = 3
model.fit(train_ds, epochs=initial_epochs, callbacks=[callback])
# The files should exist after fitting with the callback.
for epoch in range(initial_epochs):
self.assertTrue(os.path.exists(filepath.format(epoch=epoch + 1)))
self.assertFalse(
os.path.exists(filepath.format(epoch=initial_epochs + 1))
)
self.assertEqual(
callback._get_most_recently_modified_file_matching_pattern(
filepath
),
filepath.format(epoch=initial_epochs),
)
model.fit(train_ds, epochs=1)
weights_after_one_more_epoch = model.get_weights()
# The filepath should continue to exist after fitting without the callback.
for epoch in range(initial_epochs):
self.assertTrue(os.path.exists(filepath.format(epoch=epoch + 1)))
return model, train_ds, filepath, weights_after_one_more_epoch
@staticmethod
def get_ModelCheckpoint_load_weights_on_restart_true_test(
save_weights_only,
):
def func(self):
(
model,
train_ds,
filepath,
weights_after_one_more_epoch,
) = self._run_load_weights_on_restart_test_common_iterations()
# Sleep briefly to ensure the files are created with different
# timestamps (in the macOS OSS build the granularity is only 1
# second).
time.sleep(2)
callback = keras.callbacks.ModelCheckpoint(
filepath=filepath,
save_weights_only=save_weights_only,
load_weights_on_restart=True,
)
model.fit(train_ds, epochs=1, callbacks=[callback])
weights_after_model_restoring_and_one_more_epoch = (
model.get_weights()
)
self.assertEqual(
callback._get_most_recently_modified_file_matching_pattern(
filepath
),
filepath.format(epoch=1),
)
model.fit(
train_ds,
epochs=1,
callbacks=[
keras.callbacks.ModelCheckpoint(
filepath=filepath,
save_weights_only=save_weights_only,
load_weights_on_restart=True,
)
],
)
weights_with_one_final_extra_epoch = model.get_weights()
# Assert that the weights one epoch after the initial fitting and
# another epoch after that are close, when a ModelCheckpoint with
# load_weights_on_restart=True is given (so the model is restored at
# the beginning of training).
self.assertAllClose(
weights_after_one_more_epoch,
weights_after_model_restoring_and_one_more_epoch,
)
self.assertNotAllClose(
weights_after_one_more_epoch, weights_with_one_final_extra_epoch
)
return func
@staticmethod
def get_ModelCheckpoint_load_weights_on_restart_false_test(
save_weights_only,
):
def func(self):
(
model,
train_ds,
filepath,
weights_after_one_more_epoch,
) = self._run_load_weights_on_restart_test_common_iterations()
model.fit(
train_ds,
epochs=1,
callbacks=[
keras.callbacks.ModelCheckpoint(
filepath=filepath, save_weights_only=save_weights_only
)
],
)
weights_after_model_restoring_and_one_more_epoch = (
model.get_weights()
)
# Assert that the weights one epoch after the initial fitting and
# another epoch after that are different, when a ModelCheckpoint with
# load_weights_on_restart=False is given (so the model is not
# restored at the beginning of training).
self.assertNotAllClose(
weights_after_one_more_epoch,
weights_after_model_restoring_and_one_more_epoch,
)
return func
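# Materialize the four load_weights_on_restart x save_weights_only test
# combinations from the static factories above.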
test_model_checkpoint_load_weights_on_restart_true_save_weights_only_true = get_ModelCheckpoint_load_weights_on_restart_true_test.__func__( # noqa: E501
True
)
test_model_checkpoint_load_weights_on_restart_true_save_weights_only_false = get_ModelCheckpoint_load_weights_on_restart_true_test.__func__( # noqa: E501
False
)
test_model_checkpoint_load_weights_on_restart_false_save_weights_only_true = get_ModelCheckpoint_load_weights_on_restart_false_test.__func__( # noqa: E501
True
)
test_model_checkpoint_load_weights_on_restart_false_save_weights_only_false = get_ModelCheckpoint_load_weights_on_restart_false_test.__func__( # noqa: E501
False
)
def test_ModelCheckpoint_override_if_file_exist(self):
(
model,
train_ds,
filepath,
_,
) = self._run_load_weights_on_restart_test_common_iterations()
# Sleep briefly to ensure the files are created with different
# timestamps (in the macOS OSS build the granularity is only 1 second).
time.sleep(2)
callback = keras.callbacks.ModelCheckpoint(
filepath=filepath, save_weights_only=True
)
model.load_weights(
callback._get_most_recently_modified_file_matching_pattern(filepath)
)
weights_before_additional_fit = model.get_weights()
model.fit(train_ds, epochs=1, callbacks=[callback])
model.load_weights(
callback._get_most_recently_modified_file_matching_pattern(filepath)
)
weights_after_additional_fit = model.get_weights()
self.assertNotAllClose(
weights_before_additional_fit, weights_after_additional_fit
)
def test_fit_with_ModelCheckpoint_with_tf_config(self):
(
model,
train_ds,
callback,
_,
) = self._get_dummy_resource_for_model_checkpoint_testing()
os.environ["TF_CONFIG"] = json.dumps(
{
"cluster": {"worker": ["localhost:23333"]},
"task": {"type": "worker", "index": 0},
}
)
# `model.fit()` should work regardless of the presence of `TF_CONFIG`.
model.fit(train_ds, epochs=1, callbacks=[callback])
def test_fit_with_ModelCheckpoint_with_dir_as_h5_filepath(self):
(
model,
train_ds,
callback,
filepath,
) = self._get_dummy_resource_for_model_checkpoint_testing()
temp_dir = self.get_temp_dir()
filepath = os.path.join(temp_dir, "temp.h5")
self.assertFalse(os.path.exists(filepath))
os.mkdir(filepath)
self.assertTrue(os.path.exists(filepath))
callback = keras.callbacks.ModelCheckpoint(filepath=filepath)
with self.assertRaisesRegex(
IOError,
"Please specify a non-directory filepath for ModelCheckpoint.",
):
model.fit(train_ds, epochs=1, callbacks=[callback])
def test_ModelCheckpoint_KerasV3_save_options_error(self):
(
model,
train_ds,
callback,
filepath,
) = self._get_dummy_resource_for_model_checkpoint_testing()
temp_dir = self.get_temp_dir()
filepath = os.path.join(temp_dir, "temp.keras")
with self.assertRaisesRegex(
ValueError, "The native TF-Keras format does not support"
):
_ = keras.callbacks.ModelCheckpoint(
filepath=filepath, options=tf.saved_model.SaveOptions()
)
def test_ModelCheckpoint_with_bad_path_placeholders(self):
(
model,
train_ds,
callback,
filepath,
) = self._get_dummy_resource_for_model_checkpoint_testing()
temp_dir = self.get_temp_dir()
filepath = os.path.join(temp_dir, "chkpt_{epoch:02d}_{mape:.2f}.h5")
callback = keras.callbacks.ModelCheckpoint(filepath=filepath)
with self.assertRaisesRegex(
KeyError, "Failed to format this callback filepath.*"
):
model.fit(train_ds, epochs=1, callbacks=[callback])
def test_ModelCheckpoint_nonblocking(self):
filepath = self.get_temp_dir()
# Should only cause a sync block when saving is actually performed.
callback = keras.callbacks.ModelCheckpoint(
filepath=filepath, save_freq=100
)
self.assertTrue(callback._supports_tf_logs)
model = keras.Sequential([keras.layers.Dense(1)])
cb_list = keras.callbacks.CallbackList(
[callback], model=model, epochs=1, steps=10, verbose=0
)
tensor = tf.convert_to_tensor(1.0)
def mock_numpy():
raise RuntimeError(
"If this error is seen, ModelCheckpoint is causing a blocking "
"NumPy conversion even when not checkpointing."
)
tensor.numpy = mock_numpy
logs = {"metric": tensor}
cb_list.on_train_begin(logs)
cb_list.on_epoch_begin(0, logs)
cb_list.on_train_batch_begin(0, logs)
cb_list.on_train_batch_end(0, logs)
cb_list.on_epoch_end(0, logs)
cb_list.on_train_end(logs)
cb_list.on_test_begin(logs)
cb_list.on_test_batch_begin(0, logs)
cb_list.on_test_batch_end(0, logs)
cb_list.on_test_end(logs)
cb_list.on_predict_begin(logs)
cb_list.on_predict_batch_begin(logs)
cb_list.on_predict_batch_end(logs)
cb_list.on_predict_end(logs)
def _run_fit_with_ModelCheckpoint_with_steps_per_execution(
self,
model,
savepath,
save_freq,
train_samples,
steps_per_execution,
epochs,
check_ckpt_epochs,
check_ckpt_batchs,
):
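# Fits `model` on freshly generated data and asserts that a SavedModel
# checkpoint directory exists for every (epoch, batch) pair listed in
# check_ckpt_epochs / check_ckpt_batchs.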
assert len(check_ckpt_epochs) == len(check_ckpt_batchs)
(x_train, y_train), _ = test_utils.get_test_data(
train_samples=train_samples,
test_samples=0,
input_shape=(INPUT_DIM,),
num_classes=NUM_CLASSES,
)
y_train = np_utils.to_categorical(y_train)
model.compile(
loss="categorical_crossentropy",
optimizer="rmsprop",
steps_per_execution=steps_per_execution,
)
self.assertFalse(os.path.exists(savepath))
callback = keras.callbacks.ModelCheckpoint(
filepath=os.path.join(savepath, "ckpt_{epoch}_{batch}"),
save_freq=save_freq,
)
model.fit(
x_train,
y_train,
batch_size=1,
epochs=epochs,
verbose=0,
callbacks=[callback],
)
self.assertTrue(os.path.exists(savepath))
for i in range(len(check_ckpt_epochs)):
epoch = check_ckpt_epochs[i]
batch = check_ckpt_batchs[i]
ckpt_name = "ckpt_" + str(epoch) + "_" + str(batch)
ckpt_path = os.path.join(savepath, ckpt_name)
self.assertTrue(os.path.exists(ckpt_path))
self.assertIn("saved_model.pb", os.listdir(ckpt_path))
shutil.rmtree(savepath)
@test_combinations.run_with_all_model_types
@test_utils.run_v2_only
def test_fit_with_ModelCheckpoint_with_steps_per_execution(self):
layers = [
keras.layers.Dense(
NUM_HIDDEN, input_dim=INPUT_DIM, activation="relu"
),
keras.layers.Dense(NUM_CLASSES, activation="softmax"),
]
model = test_utils.get_model_from_layers(
layers, input_shape=(INPUT_DIM,)
)
temp_dir = self.get_temp_dir()
savepath = os.path.join(temp_dir, "checkpoint")
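# Each sub-case below checks that checkpoints land on the expected
# (epoch, batch) boundaries, both with the default and with
# steps_per_execution=7.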
for steps_per_execution in [None, 7]:
self._run_fit_with_ModelCheckpoint_with_steps_per_execution(
model,
savepath,
save_freq=7,
train_samples=7,
steps_per_execution=steps_per_execution,
epochs=1,
check_ckpt_epochs=[1],
check_ckpt_batchs=[7],
)
self._run_fit_with_ModelCheckpoint_with_steps_per_execution(
model,
savepath,
save_freq=7,
train_samples=7,
steps_per_execution=steps_per_execution,
epochs=2,
check_ckpt_epochs=[1, 2],
check_ckpt_batchs=[7, 7],
)
self._run_fit_with_ModelCheckpoint_with_steps_per_execution(
model,
savepath,
save_freq=14,
train_samples=7,
steps_per_execution=steps_per_execution,
epochs=2,
check_ckpt_epochs=[2],
check_ckpt_batchs=[7],
)
self._run_fit_with_ModelCheckpoint_with_steps_per_execution(
model,
savepath,
save_freq=7,
train_samples=14,
steps_per_execution=steps_per_execution,
epochs=2,
check_ckpt_epochs=[1, 1, 2, 2],
check_ckpt_batchs=[7, 14, 7, 14],
)
def test_verbose_2_logging(self):
data = np.random.random((100, 1))
labels = np.where(data > 0.5, 1, 0)
model = keras.models.Sequential(
(
keras.layers.Dense(1, input_dim=1, activation="relu"),
keras.layers.Dense(1, activation="sigmoid"),
)
)
model.compile(
optimizer="sgd", loss="binary_crossentropy", metrics=["accuracy"]
)
expected_log = r"(.*- loss:.*- acc.*:.*epoch)+"
with self.captureWritesToStream(sys.stdout) as printed:
model.fit(data, labels, verbose=2, epochs=20)
self.assertRegex(printed.contents(), expected_log)
def test_ProgbarLogger_verbose_2_nonblocking(self):
# Should only cause a sync block on epoch end methods.
callback = keras.callbacks.ProgbarLogger(count_mode="steps")
self.assertTrue(callback._supports_tf_logs)
model = keras.Sequential([keras.layers.Dense(1)])
cb_list = keras.callbacks.CallbackList(
[callback], model=model, epochs=1, steps=10, verbose=2
)
tensor = tf.convert_to_tensor(1.0)
def mock_numpy():
raise RuntimeError(
"If this error is seen, ModelCheckpoint is causing a blocking "
"NumPy conversion even when not checkpointing."
)
tensor.numpy = mock_numpy
logs = {"metric": tensor}
cb_list.on_train_begin(logs)
cb_list.on_epoch_begin(0, logs)
cb_list.on_train_batch_begin(0, logs)
cb_list.on_train_batch_end(0, logs)
cb_list.on_test_begin(logs)
cb_list.on_test_batch_begin(0, logs)
cb_list.on_test_batch_end(0, logs)
cb_list.on_test_end(logs)
with self.assertRaisesRegex(RuntimeError, "NumPy conversion"):
# on_epoch_end should still block.
cb_list.on_epoch_end(0, logs)
cb_list.on_train_end(logs)
def test_EarlyStopping(self):
with self.cached_session():
np.random.seed(123)
(x_train, y_train), (x_test, y_test) = test_utils.get_test_data(
train_samples=TRAIN_SAMPLES,
test_samples=TEST_SAMPLES,
input_shape=(INPUT_DIM,),
num_classes=NUM_CLASSES,
)
y_test = np_utils.to_categorical(y_test)
y_train = np_utils.to_categorical(y_train)
model = test_utils.get_small_sequential_mlp(
num_hidden=NUM_HIDDEN,
num_classes=NUM_CLASSES,
input_dim=INPUT_DIM,
)
model.compile(
loss="categorical_crossentropy",
optimizer="rmsprop",
metrics=["acc"],
)
cases = [
("max", "val_acc"),
("min", "val_loss"),
("auto", "val_acc"),
("auto", "loss"),
("unknown", "unknown"),
]
for mode, monitor in cases:
patience = 0
cbks = [
keras.callbacks.EarlyStopping(
patience=patience, monitor=monitor, mode=mode
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=5,
verbose=0,
)
def test_EarlyStopping_patience(self):
cases = [0, 1, 2, 3]
losses = [10.0, 9.0, 8.0, 9.0, 8.9, 8.8, 8.7, 8.6, 8.5]
for patience in cases:
stopper = keras.callbacks.EarlyStopping(
monitor="loss", patience=patience
)
stopper.model = keras.models.Sequential()
stopper.on_train_begin()
for epoch, loss in enumerate(losses):
stopper.on_epoch_end(epoch=epoch, logs={"loss": loss})
if stopper.model.stop_training:
break
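# The loss stops improving after epoch 2, so training should stop
# `patience` epochs later (at least one epoch later when patience is 0).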
self.assertEqual(stopper.stopped_epoch, max(patience, 1) + 2)
def test_EarlyStopping_reuse(self):
with self.cached_session():
np.random.seed(1337)
patience = 3
data = np.random.random((100, 1))
labels = np.where(data > 0.5, 1, 0)
model = keras.models.Sequential(
(
keras.layers.Dense(1, input_dim=1, activation="relu"),
keras.layers.Dense(1, activation="sigmoid"),
)
)
model.compile(
optimizer="sgd",
loss="binary_crossentropy",
metrics=["accuracy"],
)
weights = model.get_weights()
# This should allow training to go for at least `patience` epochs
model.set_weights(weights)
stopper = keras.callbacks.EarlyStopping(
monitor="acc", patience=patience
)
hist = model.fit(
data, labels, callbacks=[stopper], verbose=0, epochs=20
)
assert len(hist.epoch) >= patience
def test_EarlyStopping_with_baseline(self):
with self.cached_session():
np.random.seed(1337)
baseline = 0.6
(data, labels), _ = test_utils.get_test_data(
train_samples=100,
test_samples=50,
input_shape=(1,),
num_classes=NUM_CLASSES,
)
model = test_utils.get_small_sequential_mlp(
num_hidden=1, num_classes=1, input_dim=1
)
model.compile(
optimizer="sgd", loss="binary_crossentropy", metrics=["acc"]
)
stopper = keras.callbacks.EarlyStopping(
monitor="acc", baseline=baseline
)
hist = model.fit(
data, labels, callbacks=[stopper], verbose=0, epochs=20
)
assert len(hist.epoch) == 2
patience = 3
stopper = keras.callbacks.EarlyStopping(
monitor="acc", patience=patience, baseline=baseline
)
hist = model.fit(
data, labels, callbacks=[stopper], verbose=0, epochs=20
)
assert len(hist.epoch) >= patience
def test_EarlyStopping_final_weights_when_restoring_model_weights(self):
class DummyModel:
def __init__(self):
self.stop_training = False
self.weights = -1
def get_weights(self):
return self.weights
def set_weights(self, weights):
self.weights = weights
def set_weight_to_epoch(self, epoch):
self.weights = epoch
early_stop = keras.callbacks.EarlyStopping(
monitor="val_loss", patience=2, restore_best_weights=True
)
early_stop.model = DummyModel()
losses = [0.2, 0.15, 0.1, 0.11, 0.12]
# The best configuration is at epoch 2 (loss = 0.1000).
epochs_trained = 0
early_stop.on_train_begin()
for epoch in range(len(losses)):
epochs_trained += 1
early_stop.model.set_weight_to_epoch(epoch=epoch)
early_stop.on_epoch_end(epoch, logs={"val_loss": losses[epoch]})
if early_stop.model.stop_training:
break
early_stop.on_train_end()
# The best configuration is in epoch 2 (loss = 0.1000),
# and while patience = 2, we're restoring the best weights,
# so we end up at the epoch with the best weights, i.e. epoch 2
self.assertEqual(early_stop.model.get_weights(), 2)
# Check early stopping when no model beats the baseline.
early_stop = keras.callbacks.EarlyStopping(
monitor="val_loss",
patience=5,
baseline=0.5,
restore_best_weights=True,
)
early_stop.model = DummyModel()
losses = [0.9, 0.8, 0.7, 0.71, 0.72, 0.73]
# The best configuration is at epoch 2 (loss = 0.7000).
epochs_trained = 0
early_stop.on_train_begin()
for epoch in range(len(losses)):
epochs_trained += 1
early_stop.model.set_weight_to_epoch(epoch=epoch)
early_stop.on_epoch_end(epoch, logs={"val_loss": losses[epoch]})
if early_stop.model.stop_training:
break
early_stop.on_train_end()
# No epoch improves on the baseline, so training should stop after 5
# epochs, and the weights from epoch 2 (the best seen) are restored.
self.assertEqual(epochs_trained, 5)
self.assertEqual(early_stop.model.get_weights(), 2)
def test_EarlyStopping_with_start_from_epoch(self):
with self.cached_session():
np.random.seed(1337)
(data, labels), _ = test_utils.get_test_data(
train_samples=TRAIN_SAMPLES,
test_samples=TEST_SAMPLES,
input_shape=(INPUT_DIM,),
num_classes=NUM_CLASSES,
)
labels = np_utils.to_categorical(labels)
model = test_utils.get_small_sequential_mlp(
num_hidden=NUM_HIDDEN,
num_classes=NUM_CLASSES,
input_dim=INPUT_DIM,
)
model.compile(
optimizer="sgd", loss="binary_crossentropy", metrics=["acc"]
)
start_from_epoch = 2
patience = 3
stopper = keras.callbacks.EarlyStopping(
monitor="acc",
patience=patience,
start_from_epoch=start_from_epoch,
)
history = model.fit(
data, labels, callbacks=[stopper], verbose=0, epochs=20
)
# Test 'patience' argument functions correctly when used
# in conjunction with 'start_from_epoch'.
self.assertGreaterEqual(
len(history.epoch), patience + start_from_epoch
)
start_from_epoch = 2
patience = 0
stopper = keras.callbacks.EarlyStopping(
monitor="acc",
patience=patience,
start_from_epoch=start_from_epoch,
)
history = model.fit(
data, labels, callbacks=[stopper], verbose=0, epochs=20
)
# Test for boundary condition when 'patience' = 0.
self.assertGreaterEqual(len(history.epoch), start_from_epoch)
def test_RemoteMonitor(self):
if requests is None:
self.skipTest("`requests` required to run this test")
return None
monitor = keras.callbacks.RemoteMonitor()
# This will raise a warning since the default address is unreachable:
monitor.on_epoch_end(0, logs={"loss": 0.0})
def test_LearningRateScheduler(self):
with self.cached_session():
np.random.seed(1337)
(x_train, y_train), (x_test, y_test) = test_utils.get_test_data(
train_samples=TRAIN_SAMPLES,
test_samples=TEST_SAMPLES,
input_shape=(INPUT_DIM,),
num_classes=NUM_CLASSES,
)
y_test = np_utils.to_categorical(y_test)
y_train = np_utils.to_categorical(y_train)
model = test_utils.get_small_sequential_mlp(
num_hidden=NUM_HIDDEN,
num_classes=NUM_CLASSES,
input_dim=INPUT_DIM,
)
model.compile(
loss="categorical_crossentropy",
optimizer="sgd",
metrics=["accuracy"],
)
cbks = [
keras.callbacks.LearningRateScheduler(
lambda x: 1.0 / (1.0 + x), verbose=1
)
]
io_utils.enable_interactive_logging()
with self.captureWritesToStream(sys.stdout) as printed:
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=5,
)
self.assertIn(
"LearningRateScheduler setting learning rate to 1.0",
printed.contents(),
)
assert (
float(keras.backend.get_value(model.optimizer.lr)) - 0.2
) < keras.backend.epsilon()
cbks = [keras.callbacks.LearningRateScheduler(lambda x, lr: lr / 2)]
model.compile(
loss="categorical_crossentropy",
optimizer="sgd",
metrics=["accuracy"],
)
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=2,
verbose=0,
)
assert (
float(keras.backend.get_value(model.optimizer.lr)) - 0.01 / 4
) < keras.backend.epsilon()
cbks = [
keras.callbacks.LearningRateScheduler(
lambda epoch, _: learning_rate_schedule.CosineDecay(
0.01, 2
)(epoch)
)
]
model.compile(
loss="categorical_crossentropy",
optimizer="sgd",
metrics=["accuracy"],
)
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=2,
verbose=0,
)
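# After 2 epochs the scheduler was last invoked with epoch=1, so the
# expected LR is CosineDecay(0.01, 2)(1) = 0.01 * 0.5 * (1 + cos(pi / 2)).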
cosine_decay_np = 0.5 * (1 + np.cos(np.pi * (1 / 2)))
decayed_learning_rate = 0.01 * cosine_decay_np
assert (
float(keras.backend.get_value(model.optimizer.lr))
- decayed_learning_rate
) < keras.backend.epsilon()
def test_ReduceLROnPlateau(self):
with self.cached_session():
tf_utils.set_random_seed(1337)
(x_train, y_train), (x_test, y_test) = test_utils.get_test_data(
train_samples=TRAIN_SAMPLES,
test_samples=TEST_SAMPLES,
input_shape=(INPUT_DIM,),
num_classes=NUM_CLASSES,
)
y_test = np_utils.to_categorical(y_test)
y_train = np_utils.to_categorical(y_train)
def make_model():
tf_utils.set_random_seed(1337)
model = test_utils.get_small_sequential_mlp(
num_hidden=NUM_HIDDEN,
num_classes=NUM_CLASSES,
input_dim=INPUT_DIM,
)
model.compile(
loss="categorical_crossentropy",
optimizer=gradient_descent.SGD(lr=0.1),
)
return model
# TODO(psv): Make sure the callback works correctly when min_delta
# is set as 0. Test fails when the order of this callback and
# assertion is interchanged.
model = make_model()
cbks = [
keras.callbacks.ReduceLROnPlateau(
monitor="val_loss",
factor=0.1,
min_delta=0,
patience=1,
cooldown=5,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=2,
verbose=0,
)
self.assertAllClose(
float(keras.backend.get_value(model.optimizer.lr)),
0.1,
atol=1e-4,
)
model = make_model()
# This should reduce the LR after the first epoch (due to the high
# min_delta).
cbks = [
keras.callbacks.ReduceLROnPlateau(
monitor="val_loss",
factor=0.1,
min_delta=10,
patience=1,
cooldown=5,
)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=2,
verbose=2,
)
self.assertAllClose(
float(keras.backend.get_value(model.optimizer.lr)),
0.01,
atol=1e-4,
)
def test_ReduceLROnPlateau_patience(self):
class DummyOptimizer:
def __init__(self):
self.lr = keras.backend.variable(1.0)
class DummyModel:
def __init__(self):
self.optimizer = DummyOptimizer()
reduce_on_plateau = keras.callbacks.ReduceLROnPlateau(
monitor="val_loss", patience=2
)
reduce_on_plateau.model = DummyModel()
losses = [0.0860, 0.1096, 0.1040]
lrs = []
for epoch in range(len(losses)):
reduce_on_plateau.on_epoch_end(
epoch, logs={"val_loss": losses[epoch]}
)
lrs.append(
keras.backend.get_value(reduce_on_plateau.model.optimizer.lr)
)
# The learning rates should be 1.0 except the last one
for lr in lrs[:-1]:
self.assertEqual(lr, 1.0)
self.assertLess(lrs[-1], 1.0)
def test_ReduceLROnPlateau_backwards_compatibility(self):
with tf.compat.v1.test.mock.patch.object(
logging, "warning"
) as mock_log:
reduce_on_plateau = keras.callbacks.ReduceLROnPlateau(epsilon=1e-13)
self.assertRegex(
str(mock_log.call_args), "`epsilon` argument is deprecated"
)
self.assertFalse(hasattr(reduce_on_plateau, "epsilon"))
self.assertTrue(hasattr(reduce_on_plateau, "min_delta"))
self.assertEqual(reduce_on_plateau.min_delta, 1e-13)
def test_CSVLogger(self):
with self.cached_session():
np.random.seed(1337)
temp_dir = self.get_temp_dir()
self.addCleanup(shutil.rmtree, temp_dir, ignore_errors=True)
filepath = os.path.join(temp_dir, "log.tsv")
sep = "\t"
(x_train, y_train), (x_test, y_test) = test_utils.get_test_data(
train_samples=TRAIN_SAMPLES,
test_samples=TEST_SAMPLES,
input_shape=(INPUT_DIM,),
num_classes=NUM_CLASSES,
)
y_test = np_utils.to_categorical(y_test)
y_train = np_utils.to_categorical(y_train)
def make_model():
np.random.seed(1337)
model = test_utils.get_small_sequential_mlp(
num_hidden=NUM_HIDDEN,
num_classes=NUM_CLASSES,
input_dim=INPUT_DIM,
)
model.compile(
loss="categorical_crossentropy",
optimizer=gradient_descent.SGD(lr=0.1),
metrics=["accuracy"],
)
return model
# case 1, create new file with defined separator
model = make_model()
cbks = [keras.callbacks.CSVLogger(filepath, separator=sep)]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
assert os.path.exists(filepath)
with open(filepath) as csvfile:
dialect = csv.Sniffer().sniff(csvfile.read())
assert dialect.delimiter == sep
del model
del cbks
# case 2, append data to existing file, skip header
model = make_model()
cbks = [
keras.callbacks.CSVLogger(filepath, separator=sep, append=True)
]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
verbose=0,
)
# case 3, reuse of CSVLogger object
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=2,
verbose=0,
)
with open(filepath) as csvfile:
list_lines = csvfile.readlines()
for line in list_lines:
assert line.count(sep) == 4
assert len(list_lines) == 5
output = " ".join(list_lines)
assert len(re.findall("epoch", output)) == 1
os.remove(filepath)
# case 4, verify val_loss is also registered when validation_freq > 1
model = make_model()
cbks = [keras.callbacks.CSVLogger(filepath, separator=sep)]
hist = model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
validation_freq=3,
callbacks=cbks,
epochs=5,
verbose=0,
)
assert os.path.exists(filepath)
# Verify that validation loss is registered at val. freq
with open(filepath) as csvfile:
rows = csv.DictReader(csvfile, delimiter=sep)
for idx, row in enumerate(rows, 1):
self.assertIn("val_loss", row)
if idx == 3:
self.assertEqual(
row["val_loss"], str(hist.history["val_loss"][0])
)
else:
self.assertEqual(row["val_loss"], "NA")
def test_stop_training_csv(self):
# Test that using the CSVLogger callback with the TerminateOnNaN
# callback does not result in invalid CSVs.
np.random.seed(1337)
tmpdir = self.get_temp_dir()
self.addCleanup(shutil.rmtree, tmpdir, ignore_errors=True)
with self.cached_session():
fp = os.path.join(tmpdir, "test.csv")
(x_train, y_train), (x_test, y_test) = test_utils.get_test_data(
train_samples=TRAIN_SAMPLES,
test_samples=TEST_SAMPLES,
input_shape=(INPUT_DIM,),
num_classes=NUM_CLASSES,
)
y_test = np_utils.to_categorical(y_test)
y_train = np_utils.to_categorical(y_train)
cbks = [
keras.callbacks.TerminateOnNaN(),
keras.callbacks.CSVLogger(fp),
]
model = keras.models.Sequential()
for _ in range(5):
model.add(
keras.layers.Dense(
2, input_dim=INPUT_DIM, activation="relu"
)
)
model.add(keras.layers.Dense(NUM_CLASSES, activation="linear"))
model.compile(loss="mean_squared_error", optimizer="rmsprop")
def data_generator():
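# Yield real batches for roughly three passes over the data, then
# all-NaN batches so that TerminateOnNaN stops training.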
i = 0
max_batch_index = len(x_train) // BATCH_SIZE
tot = 0
while 1:
if tot > 3 * len(x_train):
yield (
np.ones([BATCH_SIZE, INPUT_DIM]) * np.nan,
np.ones([BATCH_SIZE, NUM_CLASSES]) * np.nan,
)
else:
yield (
x_train[i * BATCH_SIZE : (i + 1) * BATCH_SIZE],
y_train[i * BATCH_SIZE : (i + 1) * BATCH_SIZE],
)
i += 1
tot += 1
i %= max_batch_index
history = model.fit_generator(
data_generator(),
len(x_train) // BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=20,
)
loss = history.history["loss"]
assert len(loss) > 1
assert loss[-1] == np.inf or np.isnan(loss[-1])
values = []
with open(fp) as f:
# On Windows, due to \r\n line ends, we may end up reading empty
# lines after each line. Skip empty lines.
values = [x for x in csv.reader(f) if x]
assert "nan" in values[-1], "The last epoch was not logged."
@test_combinations.run_all_keras_modes(always_skip_v1=True)
def test_TerminateOnNaN(self):
np.random.seed(1337)
(x_train, y_train), (x_test, y_test) = test_utils.get_test_data(
train_samples=TRAIN_SAMPLES,
test_samples=TEST_SAMPLES,
input_shape=(INPUT_DIM,),
num_classes=NUM_CLASSES,
)
y_test = np_utils.to_categorical(y_test)
y_train = np_utils.to_categorical(y_train)
cbks = [keras.callbacks.TerminateOnNaN()]
model = keras.models.Sequential()
initializer = keras.initializers.Constant(value=1e5)
for _ in range(5):
model.add(
keras.layers.Dense(
2,
input_dim=INPUT_DIM,
activation="relu",
kernel_initializer=initializer,
)
)
model.add(keras.layers.Dense(NUM_CLASSES))
model.compile(loss="mean_squared_error", optimizer="rmsprop")
history = model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=20,
)
loss = history.history["loss"]
self.assertEqual(len(loss), 1)
self.assertTrue(np.isnan(loss[0]) or np.isinf(loss[0]))
@unittest.skipIf(
os.name == "nt",
"use_multiprocessing=True does not work on windows properly.",
)
def test_LambdaCallback(self):
with self.cached_session():
np.random.seed(1337)
(x_train, y_train), (x_test, y_test) = test_utils.get_test_data(
train_samples=TRAIN_SAMPLES,
test_samples=TEST_SAMPLES,
input_shape=(INPUT_DIM,),
num_classes=NUM_CLASSES,
)
y_test = np_utils.to_categorical(y_test)
y_train = np_utils.to_categorical(y_train)
model = keras.models.Sequential()
model.add(
keras.layers.Dense(
NUM_HIDDEN, input_dim=INPUT_DIM, activation="relu"
)
)
model.add(keras.layers.Dense(NUM_CLASSES, activation="softmax"))
model.compile(
loss="categorical_crossentropy",
optimizer="sgd",
metrics=["accuracy"],
)
# Start an arbitrary thread that runs during model training and is
# unblocked (via the event) once training has completed.
e = threading.Event()
def target():
e.wait()
t = threading.Thread(target=target)
t.start()
cleanup_callback = keras.callbacks.LambdaCallback(
on_train_end=lambda logs: e.set()
)
cbks = [cleanup_callback]
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=5,
verbose=0,
)
t.join()
assert not t.is_alive()
def test_RemoteMonitor_np_array(self):
if requests is None:
self.skipTest("`requests` required to run this test")
with tf.compat.v1.test.mock.patch.object(
requests, "post"
) as requests_post:
monitor = keras.callbacks.RemoteMonitor(send_as_json=True)
a = np.arange(1) # a length-1 array
logs = {"loss": 0.0, "val": a}
monitor.on_epoch_end(0, logs=logs)
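# The length-1 array should be serialized to its scalar value (0) when
# send_as_json=True.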
send = {"loss": 0.0, "epoch": 0, "val": 0}
requests_post.assert_called_once_with(
monitor.root + monitor.path, json=send, headers=monitor.headers
)
def test_RemoteMonitor_np_float32(self):
if requests is None:
self.skipTest("`requests` required to run this test")
with tf.compat.v1.test.mock.patch.object(
requests, "post"
) as requests_post:
monitor = keras.callbacks.RemoteMonitor(send_as_json=True)
a = np.float32(1.0) # a float32 generic type
logs = {"loss": 0.0, "val": a}
monitor.on_epoch_end(0, logs=logs)
send = {"loss": 0.0, "epoch": 0, "val": 1.0}
requests_post.assert_called_once_with(
monitor.root + monitor.path, json=send, headers=monitor.headers
)
def test_RemoteMonitorWithJsonPayload(self):
if requests is None:
self.skipTest("`requests` required to run this test")
return None
with self.cached_session():
(x_train, y_train), (x_test, y_test) = test_utils.get_test_data(
train_samples=TRAIN_SAMPLES,
test_samples=TEST_SAMPLES,
input_shape=(INPUT_DIM,),
num_classes=NUM_CLASSES,
)
y_test = keras.utils.np_utils.to_categorical(y_test)
y_train = keras.utils.np_utils.to_categorical(y_train)
model = keras.models.Sequential()
model.add(
keras.layers.Dense(
NUM_HIDDEN, input_dim=INPUT_DIM, activation="relu"
)
)
model.add(keras.layers.Dense(NUM_CLASSES, activation="softmax"))
model.compile(
loss="categorical_crossentropy",
optimizer="rmsprop",
metrics=["accuracy"],
)
cbks = [keras.callbacks.RemoteMonitor(send_as_json=True)]
with tf.compat.v1.test.mock.patch.object(requests, "post"):
model.fit(
x_train,
y_train,
batch_size=BATCH_SIZE,
validation_data=(x_test, y_test),
callbacks=cbks,
epochs=1,
)
def test_progbar_infers_steps(self):
x, y = np.ones((10, 1)), np.ones((10, 1))
data = tf.data.Dataset.from_tensor_slices((x, y)).batch(2)
data = data.filter(lambda x, y: True) # Unknown cardinality.
progbar = keras.callbacks.ProgbarLogger("steps")
model = keras.Sequential([keras.layers.Dense(1)])
model.compile("sgd", "mse")
self.assertIsNone(progbar.target)
model.fit(data, epochs=2, callbacks=[progbar])
self.assertEqual(progbar.target, 5)
@test_combinations.run_all_keras_modes(always_skip_v1=True)
def test_callback_passed_floats(self):
class MyCallback(keras.callbacks.Callback):
def on_batch_end(self, batch, logs=None):
assert isinstance(batch, int)
assert isinstance(logs["loss"], float)
self.on_batch_end_called = True
def on_epoch_end(self, batch, logs=None):
assert isinstance(batch, int)
assert isinstance(logs["loss"], float)
self.on_epoch_end_called = True
x, y = np.ones((10, 1)), np.ones((10, 1))
model = keras.Sequential([keras.layers.Dense(1)])
model.compile("sgd", "mse", run_eagerly=test_utils.should_run_eagerly())
callback = MyCallback()
model.fit(x, y, epochs=2, callbacks=[callback])
self.assertTrue(callback.on_batch_end_called)
self.assertTrue(callback.on_epoch_end_called)
@test_combinations.run_all_keras_modes(always_skip_v1=True)
def test_implements_batch_hooks(self):
class MyCallbackWithBatchHooks(keras.callbacks.Callback):
def __init__(self):
self.train_batches = 0
self.test_batches = 0
self.predict_batches = 0
def on_train_batch_end(self, batch, logs=None):
self.train_batches += 1
def on_test_batch_end(self, batch, logs=None):
self.test_batches += 1
def on_predict_batch_end(self, batch, logs=None):
self.predict_batches += 1
class MyCallbackWithTFBatchHooks(keras.callbacks.Callback):
def __init__(self):
super().__init__()
self._supports_tf_logs = True
class MyCallbackWithoutBatchHooks(keras.callbacks.Callback):
def __init__(self):
self.epochs = 0
def on_epoch_end(self, epoch, logs=None):
self.epochs += 1
x, y = np.ones((10, 1)), np.ones((10, 1))
model = keras.Sequential([keras.layers.Dense(1)])
model.compile("sgd", "mse")
my_cb = MyCallbackWithBatchHooks()
cb_list = keras.callbacks.CallbackList([my_cb], verbose=0)
self.assertTrue(cb_list._should_call_train_batch_hooks)
self.assertTrue(cb_list._should_call_test_batch_hooks)
self.assertTrue(cb_list._should_call_predict_batch_hooks)
self.assertFalse(cb_list._batch_hooks_support_tf_logs)
model.fit(x, y, epochs=2, batch_size=10, callbacks=[my_cb], verbose=0)
model.evaluate(x, y, batch_size=10, callbacks=[my_cb], verbose=0)
model.predict(x, batch_size=10, callbacks=[my_cb], verbose=0)
self.assertEqual(my_cb.train_batches, 2)
self.assertEqual(my_cb.test_batches, 1)
self.assertEqual(my_cb.predict_batches, 1)
my_cb = MyCallbackWithTFBatchHooks()
cb_list = keras.callbacks.CallbackList([my_cb], verbose=0)
self.assertTrue(cb_list._batch_hooks_support_tf_logs)
my_cb = MyCallbackWithoutBatchHooks()
cb_list = keras.callbacks.CallbackList([my_cb], verbose=0)
self.assertLen(cb_list.callbacks, 1)
self.assertFalse(cb_list._should_call_train_batch_hooks)
self.assertFalse(cb_list._should_call_test_batch_hooks)
self.assertFalse(cb_list._should_call_predict_batch_hooks)
model.fit(x, y, epochs=2, batch_size=10, callbacks=[my_cb], verbose=0)
model.evaluate(x, y, batch_size=10, callbacks=[my_cb], verbose=0)
model.predict(x, batch_size=10, callbacks=[my_cb], verbose=0)
@test_combinations.run_all_keras_modes(always_skip_v1=True)
def test_logs_conversion(self):
assert_dict_equal = self.assertDictEqual
class MutateNumpyLogs(CallAllHooks):
def _run(self, *args, logs=None):
logs = logs or args[-1]
logs["numpy"] = 1
class MutateTensorFlowLogs(CallAllHooks):
def __init__(self):
super().__init__()
self._supports_tf_logs = True
def _run(self, *args, logs=None):
logs = logs or args[-1]
logs["tf"] = 2
class AssertNumpyLogs(CallAllHooks):
def _run(self, *args, logs=None):
logs = logs or args[-1]
assert_dict_equal(logs, {"all": 0, "numpy": 1, "tf": 2})
class AssertTensorFlowLogs(AssertNumpyLogs):
def __init__(self):
super().__init__()
self._supports_tf_logs = True
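# The two mutating callbacks run first, so both assertion callbacks
# (the NumPy and the tf-logs variant) should observe the fully merged
# logs dict.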
cb_list = keras.callbacks.CallbackList(
[
MutateNumpyLogs(),
MutateTensorFlowLogs(),
AssertNumpyLogs(),
AssertTensorFlowLogs(),
]
)
assert len(cb_list.callbacks) == 4
cb_list.on_epoch_begin(0, logs={"all": 0})
cb_list.on_epoch_end(0, logs={"all": 0})
cb_list.on_predict_batch_begin(0, logs={"all": 0})
cb_list.on_predict_batch_end(0, logs={"all": 0})
cb_list.on_predict_begin(logs={"all": 0})
cb_list.on_predict_end(logs={"all": 0})
cb_list.on_test_batch_begin(0, logs={"all": 0})
cb_list.on_test_batch_end(0, logs={"all": 0})
cb_list.on_test_begin(logs={"all": 0})
cb_list.on_test_end(logs={"all": 0})
cb_list.on_train_batch_begin(0, logs={"all": 0})
cb_list.on_train_batch_end(0, logs={"all": 0})
cb_list.on_train_begin(logs={"all": 0})
cb_list.on_train_end(logs={"all": 0})
@test_combinations.run_all_keras_modes(always_skip_v1=True)
def test_implements_batch_hooks_override(self):
class MyCallback(keras.callbacks.Callback):
def __init__(self, should_run=True):
self.should_run = should_run
self.train_batches = 0
self.test_batches = 0
self.predict_batches = 0
def on_train_batch_end(self, batch, logs=None):
self.train_batches += 1
def on_test_batch_end(self, batch, logs=None):
self.test_batches += 1
def on_predict_batch_end(self, batch, logs=None):
self.predict_batches += 1
def _implements_train_batch_hooks(self):
return self.should_run
def _implements_test_batch_hooks(self):
return self.should_run
def _implements_predict_batch_hooks(self):
return self.should_run
x, y = np.ones((10, 1)), np.ones((10, 1))
model = keras.Sequential([keras.layers.Dense(1)])
model.compile("sgd", "mse")
my_cb = MyCallback(should_run=True)
cb_list = keras.callbacks.CallbackList([my_cb], verbose=0)
self.assertTrue(cb_list._should_call_train_batch_hooks)
self.assertTrue(cb_list._should_call_test_batch_hooks)
self.assertTrue(cb_list._should_call_predict_batch_hooks)
model.fit(x, y, epochs=2, batch_size=10, callbacks=[my_cb], verbose=0)
model.evaluate(x, y, batch_size=10, callbacks=[my_cb], verbose=0)
model.predict(x, batch_size=10, callbacks=[my_cb], verbose=0)
self.assertEqual(my_cb.train_batches, 2)
self.assertEqual(my_cb.test_batches, 1)
self.assertEqual(my_cb.predict_batches, 1)
my_cb = MyCallback(should_run=False)
cb_list = keras.callbacks.CallbackList([my_cb], verbose=0)
self.assertFalse(cb_list._should_call_train_batch_hooks)
self.assertFalse(cb_list._should_call_test_batch_hooks)
self.assertFalse(cb_list._should_call_predict_batch_hooks)
model.fit(x, y, epochs=2, batch_size=10, callbacks=[my_cb], verbose=0)
model.evaluate(x, y, batch_size=10, callbacks=[my_cb], verbose=0)
model.predict(x, batch_size=10, callbacks=[my_cb], verbose=0)
self.assertEqual(my_cb.train_batches, 0)
self.assertEqual(my_cb.test_batches, 0)
self.assertEqual(my_cb.predict_batches, 0)
@test_combinations.run_all_keras_modes(always_skip_v1=True)
def test_default_callbacks_do_not_call_batch_hooks(self):
model = keras.Sequential([keras.layers.Dense(1)])
log_dir = self.get_temp_dir()
cb_list = keras.callbacks.CallbackList(
[
keras.callbacks.TensorBoard(log_dir, profile_batch=0),
keras.callbacks.ModelCheckpoint(log_dir),
],
add_progbar=True,
model=model,
verbose=2,
epochs=3,
)
self.assertLen(cb_list.callbacks, 3)
self.assertFalse(cb_list._should_call_train_batch_hooks)
self.assertFalse(cb_list._should_call_test_batch_hooks)
self.assertFalse(cb_list._should_call_predict_batch_hooks)
@test_combinations.run_all_keras_modes(always_skip_v1=True)
def test_change_tf_functions_during_fit(self):
class ChangeFunctions(keras.callbacks.Callback):
def on_epoch_end(self, epochs, logs=None):
def new_fn(iterator):
raise ValueError("New function substituted successfully.")
self.model.train_function = new_fn
self.model.test_function = new_fn
self.model.predict_function = new_fn
model = keras.Sequential([keras.layers.Dense(1)])
model.compile("sgd", "mse")
x, y = np.ones((10, 10)), np.ones((10, 1))
with self.assertRaisesRegex(ValueError, "New function "):
model.fit(
x, y, batch_size=2, epochs=2, callbacks=[ChangeFunctions()]
)
with self.assertRaisesRegex(ValueError, "New function "):
model.evaluate(x, y, batch_size=2)
with self.assertRaisesRegex(ValueError, "New function "):
model.predict(x, batch_size=2)
@test_combinations.run_all_keras_modes(always_skip_v1=True)
def test_stop_training_batch_level(self):
class MyCallback(keras.callbacks.Callback):
def __init__(self):
super().__init__()
self.batch_counter = 0
def on_train_batch_end(self, batch, logs=None):
self.batch_counter += 1
if batch == 2:
self.model.stop_training = True
model = keras.Sequential([keras.layers.Dense(1)])
model.compile("sgd", "mse")
x, y = np.ones((10, 10)), np.ones((10, 1))
my_cb = MyCallback()
# Will run 5 batches if `stop_training` doesn't work.
model.fit(x, y, batch_size=2, callbacks=[my_cb])
self.assertEqual(my_cb.batch_counter, 3)
@test_combinations.run_all_keras_modes(always_skip_v1=True)
def test_built_in_callback_order(self):
class CustomCallback(keras.callbacks.Callback):
pass
class TestingCallbackList(keras.callbacks.CallbackList):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
if (
(not isinstance(self.callbacks[0], CustomCallback))
or (
not isinstance(
self.callbacks[1], keras.callbacks.History
)
)
or (
not isinstance(
self.callbacks[2], keras.callbacks.ProgbarLogger
)
)
):
raise AssertionError(
f"Callback order unexpected: {self.callbacks}"
)
with mock.patch.object(
keras.callbacks, "CallbackList", TestingCallbackList
):
model = keras.Sequential([keras.layers.Dense(1)])
model.compile("sgd", "mse")
custom_callback = CustomCallback()
model.fit(
np.ones((10, 10)),
np.ones((10, 1)),
epochs=5,
callbacks=[custom_callback],
)
# A summary that was emitted during a test. Fields:
# logdir: str. The logdir of the FileWriter to which the summary was
# written.
# tag: str. The name of the summary.
_ObservedSummary = collections.namedtuple("_ObservedSummary", ("logdir", "tag"))
class _SummaryFile:
"""A record of summary tags and the files to which they were written.
Fields `scalars`, `images`, `histograms`, and `tensors` are sets
containing `_ObservedSummary` values.
"""
def __init__(self):
self.scalars = set()
self.images = set()
self.histograms = set()
self.tensors = set()
self.graph_defs = []
self.convert_from_v2_summary_proto = False
def list_summaries(logdir):
"""Read all summaries under the logdir into a `_SummaryFile`.
Args:
logdir: A path to a directory that contains zero or more event
files, either as direct children or in transitive subdirectories.
Summaries in these events must only contain old-style scalars,
images, and histograms. Non-summary events, like `graph_def`s, are
ignored.
Returns:
A `_SummaryFile` object reflecting all summaries written to any
event files in the logdir or any of its descendant directories.
Raises:
ValueError: If an event file contains a summary of an unexpected kind.
"""
result = _SummaryFile()
for dirpath, _, filenames in os.walk(logdir):
for filename in filenames:
if not filename.startswith("events.out."):
continue
path = os.path.join(dirpath, filename)
for event in tf.compat.v1.train.summary_iterator(path):
if event.graph_def:
result.graph_defs.append(event.graph_def)
if not event.summary: # (e.g., it's a `graph_def` event)
continue
for value in event.summary.value:
tag = value.tag
# Case on the `value` rather than the summary metadata
# because the TF-Keras callback uses `summary_ops_v2` to
# emit old-style summaries. See b/124535134.
kind = value.WhichOneof("value")
container = {
"simple_value": result.scalars,
"image": result.images,
"histo": result.histograms,
"tensor": result.tensors,
}.get(kind)
if container is None:
raise ValueError(
"Unexpected summary kind %r in event file %s:\n%r"
% (kind, path, event)
)
elif kind == "tensor" and tag != "keras":
# Convert the tf2 summary proto to old style for type
# checking.
plugin_name = value.metadata.plugin_data.plugin_name
container = {
"images": result.images,
"histograms": result.histograms,
"scalars": result.scalars,
}.get(plugin_name)
if container is not None:
result.convert_from_v2_summary_proto = True
else:
container = result.tensors
container.add(_ObservedSummary(logdir=dirpath, tag=tag))
return result
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes(always_skip_v1=True)
class TestTensorBoardV2(test_combinations.TestCase):
def setUp(self):
super(TestTensorBoardV2, self).setUp()
self.logdir = os.path.join(self.get_temp_dir(), "tb")
self.train_dir = os.path.join(self.logdir, "train")
self.validation_dir = os.path.join(self.logdir, "validation")
def _get_model(self, compile_model=True):
layers = [
keras.layers.Conv2D(8, (3, 3)),
keras.layers.Flatten(),
keras.layers.Dense(1),
]
model = test_utils.get_model_from_layers(
layers, input_shape=(10, 10, 1)
)
if compile_model:
opt = gradient_descent.SGD(learning_rate=0.001)
model.compile(
opt, "mse", run_eagerly=test_utils.should_run_eagerly()
)
return model
def test_TensorBoard_default_logdir(self):
"""Regression test for cross-platform pathsep in default logdir."""
os.chdir(self.get_temp_dir())
model = self._get_model()
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
tb_cbk = keras.callbacks.TensorBoard() # no logdir specified
model.fit(
x,
y,
batch_size=2,
epochs=2,
validation_data=(x, y),
callbacks=[tb_cbk],
)
summary_file = list_summaries(logdir=".")
train_dir = os.path.join(".", "logs", "train")
validation_dir = os.path.join(".", "logs", "validation")
self.assertEqual(
summary_file.scalars,
{
_ObservedSummary(logdir=train_dir, tag="epoch_loss"),
_ObservedSummary(logdir=validation_dir, tag="epoch_loss"),
_ObservedSummary(
logdir=validation_dir, tag="evaluation_loss_vs_iterations"
),
},
)
def test_TensorBoard_basic(self):
model = self._get_model()
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
tb_cbk = keras.callbacks.TensorBoard(self.logdir)
model.fit(
x,
y,
batch_size=2,
epochs=2,
validation_data=(x, y),
callbacks=[tb_cbk],
)
summary_file = list_summaries(self.logdir)
self.assertEqual(
summary_file.scalars,
{
_ObservedSummary(logdir=self.train_dir, tag="epoch_loss"),
_ObservedSummary(logdir=self.validation_dir, tag="epoch_loss"),
_ObservedSummary(
logdir=self.validation_dir,
tag="evaluation_loss_vs_iterations",
),
},
)
def test_TensorBoard_across_invocations(self):
"""Regression test for summary writer resource use-after-free.
See: <https://github.com/tensorflow/tensorflow/issues/25707>
"""
model = self._get_model()
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
tb_cbk = keras.callbacks.TensorBoard(self.logdir)
for _ in (1, 2):
model.fit(
x,
y,
batch_size=2,
epochs=2,
validation_data=(x, y),
callbacks=[tb_cbk],
)
summary_file = list_summaries(self.logdir)
self.assertEqual(
summary_file.scalars,
{
_ObservedSummary(logdir=self.train_dir, tag="epoch_loss"),
_ObservedSummary(logdir=self.validation_dir, tag="epoch_loss"),
_ObservedSummary(
logdir=self.validation_dir,
tag="evaluation_loss_vs_iterations",
),
},
)
def test_TensorBoard_no_spurious_event_files(self):
model = self._get_model()
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
tb_cbk = keras.callbacks.TensorBoard(self.logdir)
model.fit(x, y, batch_size=2, epochs=2, callbacks=[tb_cbk])
events_file_run_basenames = set()
for dirpath, _, filenames in os.walk(self.train_dir):
if any(fn.startswith("events.out.") for fn in filenames):
events_file_run_basenames.add(os.path.basename(dirpath))
self.assertEqual(events_file_run_basenames, {"train"})
def test_TensorBoard_batch_metrics(self):
model = self._get_model()
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
tb_cbk = keras.callbacks.TensorBoard(self.logdir, update_freq=1)
model.fit(
x,
y,
batch_size=2,
epochs=2,
validation_data=(x, y),
callbacks=[tb_cbk],
)
summary_file = list_summaries(self.logdir)
self.assertEqual(
summary_file.scalars,
{
_ObservedSummary(logdir=self.train_dir, tag="batch_loss"),
_ObservedSummary(logdir=self.train_dir, tag="epoch_loss"),
_ObservedSummary(logdir=self.validation_dir, tag="epoch_loss"),
_ObservedSummary(
logdir=self.validation_dir,
tag="evaluation_loss_vs_iterations",
),
},
)
def test_TensorBoard_learning_rate_schedules(self):
model = self._get_model(compile_model=False)
opt = gradient_descent.SGD(learning_rate_schedule.CosineDecay(0.01, 1))
model.compile(opt, "mse", run_eagerly=test_utils.should_run_eagerly())
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
model.fit(
x,
y,
batch_size=2,
epochs=2,
callbacks=[keras.callbacks.TensorBoard(self.logdir)],
)
summary_file = list_summaries(self.logdir)
self.assertEqual(
summary_file.scalars,
{
_ObservedSummary(logdir=self.train_dir, tag="epoch_loss"),
_ObservedSummary(
logdir=self.train_dir, tag="epoch_learning_rate"
),
},
)
def test_TensorBoard_global_step(self):
model = self._get_model(compile_model=False)
opt = gradient_descent.SGD(learning_rate_schedule.CosineDecay(0.01, 1))
model.compile(opt, "mse", run_eagerly=test_utils.should_run_eagerly())
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
model.fit(
x,
y,
batch_size=2,
epochs=2,
verbose=0,
callbacks=[
keras.callbacks.TensorBoard(
self.logdir,
update_freq=1,
profile_batch=0,
write_steps_per_second=True,
)
],
)
summary_file = list_summaries(self.logdir)
self.assertEqual(
summary_file.scalars,
{
_ObservedSummary(logdir=self.train_dir, tag="batch_loss"),
_ObservedSummary(logdir=self.train_dir, tag="epoch_loss"),
_ObservedSummary(
logdir=self.train_dir, tag="epoch_learning_rate"
),
_ObservedSummary(
logdir=self.train_dir, tag="epoch_steps_per_second"
),
_ObservedSummary(
logdir=self.train_dir, tag="batch_steps_per_second"
),
},
)
def test_TensorBoard_weight_histograms(self):
model = self._get_model()
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
tb_cbk = keras.callbacks.TensorBoard(self.logdir, histogram_freq=1)
model_type = test_utils.get_model_type()
model.fit(
x,
y,
batch_size=2,
epochs=2,
validation_data=(x, y),
callbacks=[tb_cbk],
)
summary_file = list_summaries(self.logdir)
self.assertEqual(
summary_file.scalars,
{
_ObservedSummary(logdir=self.train_dir, tag="epoch_loss"),
_ObservedSummary(logdir=self.validation_dir, tag="epoch_loss"),
_ObservedSummary(
logdir=self.validation_dir,
tag="evaluation_loss_vs_iterations",
),
},
)
self.assertEqual(
self._strip_layer_names(summary_file.histograms, model_type),
{
_ObservedSummary(logdir=self.train_dir, tag="bias_0/histogram"),
_ObservedSummary(
logdir=self.train_dir, tag="kernel_0/histogram"
),
},
)
def test_TensorBoard_weight_images(self):
model = self._get_model()
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
tb_cbk = keras.callbacks.TensorBoard(
self.logdir, histogram_freq=1, write_images=True
)
model_type = test_utils.get_model_type()
model.fit(
x,
y,
batch_size=2,
epochs=2,
validation_data=(x, y),
callbacks=[tb_cbk],
)
summary_file = list_summaries(self.logdir)
self.assertEqual(
summary_file.scalars,
{
_ObservedSummary(logdir=self.train_dir, tag="epoch_loss"),
_ObservedSummary(logdir=self.validation_dir, tag="epoch_loss"),
_ObservedSummary(
logdir=self.validation_dir,
tag="evaluation_loss_vs_iterations",
),
},
)
self.assertEqual(
self._strip_layer_names(summary_file.histograms, model_type),
{
_ObservedSummary(logdir=self.train_dir, tag="bias_0/histogram"),
_ObservedSummary(
logdir=self.train_dir, tag="kernel_0/histogram"
),
},
)
if summary_file.convert_from_v2_summary_proto:
expected_image_summaries = {
_ObservedSummary(logdir=self.train_dir, tag="bias_0/image"),
_ObservedSummary(logdir=self.train_dir, tag="kernel_0/image"),
}
else:
expected_image_summaries = {
_ObservedSummary(logdir=self.train_dir, tag="bias_0/image/0"),
_ObservedSummary(logdir=self.train_dir, tag="kernel_0/image/0"),
_ObservedSummary(logdir=self.train_dir, tag="kernel_0/image/1"),
_ObservedSummary(logdir=self.train_dir, tag="kernel_0/image/2"),
}
self.assertEqual(
self._strip_layer_names(summary_file.images, model_type),
expected_image_summaries,
)
def test_TensorBoard_projector_callback(self):
layers = [
keras.layers.Embedding(10, 10, name="test_embedding"),
keras.layers.Dense(10, activation="relu"),
keras.layers.Dense(1, activation="sigmoid"),
]
model = test_utils.get_model_from_layers(layers, input_shape=(10,))
model.compile(
optimizer="adam",
loss=keras.losses.BinaryCrossentropy(from_logits=True),
run_eagerly=test_utils.should_run_eagerly(),
)
x, y = np.ones((10, 10)), np.ones((10, 10))
tb_cbk = keras.callbacks.TensorBoard(
self.logdir,
embeddings_freq=1,
embeddings_metadata={"test_embedding": "metadata.tsv"},
)
model.fit(
x,
y,
batch_size=2,
epochs=2,
validation_data=(x, y),
callbacks=[tb_cbk],
)
with open(os.path.join(self.logdir, "projector_config.pbtxt")) as f:
self.assertEqual(
f.readlines(),
[
"embeddings {\n",
" tensor_name: "
'"layer_with_weights-0/embeddings/.ATTRIBUTES/'
'VARIABLE_VALUE"\n',
' metadata_path: "metadata.tsv"\n',
"}\n",
],
)
def test_custom_summary(self):
if not tf.executing_eagerly():
self.skipTest("Custom summaries only supported in V2 code path.")
def scalar_v2_mock(name, data, step=None):
"""A reimplementation of the scalar plugin to avoid circular
deps."""
metadata = tf.compat.v1.SummaryMetadata()
# Should match value in tensorboard/plugins/scalar/metadata.py.
metadata.plugin_data.plugin_name = "scalars"
with tf.summary.experimental.summary_scope(
name, "scalar_summary", values=[data, step]
) as (tag, _):
return tf.summary.write(
tag=tag,
tensor=tf.cast(data, "float32"),
step=step,
metadata=metadata,
)
class LayerWithSummary(keras.layers.Layer):
def call(self, x):
scalar_v2_mock("custom_summary", tf.reduce_sum(x))
return x
model = test_utils.get_model_from_layers(
[LayerWithSummary()], input_shape=(5,), name="model"
)
model.compile("sgd", "mse", run_eagerly=test_utils.should_run_eagerly())
tb_cbk = keras.callbacks.TensorBoard(self.logdir, update_freq=1)
x, y = np.ones((10, 5)), np.ones((10, 5))
model.fit(
x, y, batch_size=2, validation_data=(x, y), callbacks=[tb_cbk]
)
summary_file = list_summaries(self.logdir)
self.assertEqual(
summary_file.scalars,
{
_ObservedSummary(logdir=self.train_dir, tag="batch_loss"),
_ObservedSummary(logdir=self.train_dir, tag="epoch_loss"),
_ObservedSummary(logdir=self.validation_dir, tag="epoch_loss"),
_ObservedSummary(
logdir=self.validation_dir,
tag="evaluation_loss_vs_iterations",
),
_ObservedSummary(
logdir=self.train_dir,
tag="model/layer_with_summary/custom_summary",
),
_ObservedSummary(
logdir=self.validation_dir,
tag="model/layer_with_summary/custom_summary",
),
},
)
def _strip_layer_names(self, summaries, model_type):
"""Deduplicate summary names modulo layer prefix.
This removes the first slash-component of each tag name: for
instance, "foo/bar/baz" becomes "bar/baz".
Args:
summaries: A `set` of `_ObservedSummary` values.
model_type: The model type currently being tested.
Returns:
A new `set` of `_ObservedSummary` values with layer prefixes
removed.
"""
result = set()
for summary in summaries:
if "/" not in summary.tag:
raise ValueError(f"tag has no layer name: {summary.tag!r}")
start_from = 2 if "subclass" in model_type else 1
new_tag = "/".join(summary.tag.split("/")[start_from:])
result.add(summary._replace(tag=new_tag))
return result
def test_TensorBoard_invalid_argument(self):
with self.assertRaisesRegex(ValueError, "Unrecognized arguments"):
keras.callbacks.TensorBoard(wwrite_images=True)
def test_TensorBoard_non_blocking(self):
model = keras.Sequential([keras.layers.Dense(1)])
tb = keras.callbacks.TensorBoard(self.logdir)
self.assertTrue(tb._supports_tf_logs)
cb_list = keras.callbacks.CallbackList(
[tb], model=model, epochs=1, steps=100, verbose=0
)
tensor = tf.convert_to_tensor(1.0)
def mock_numpy():
raise RuntimeError(
"If this error is seen, TensorBoard is causing a blocking "
"NumPy conversion."
)
with tf.compat.v1.test.mock.patch.object(tensor, "numpy", mock_numpy):
logs = {"metric": tensor}
cb_list.on_train_begin(logs)
cb_list.on_epoch_begin(0, logs)
cb_list.on_train_batch_begin(0, logs)
cb_list.on_train_batch_end(0, logs)
cb_list.on_epoch_end(0, logs)
cb_list.on_train_end(logs)
cb_list.on_test_begin(logs)
cb_list.on_test_batch_begin(0, logs)
cb_list.on_test_batch_end(0, logs)
cb_list.on_test_end(logs)
cb_list.on_predict_begin(logs)
cb_list.on_predict_batch_begin(0, logs)
cb_list.on_predict_batch_end(0, logs)
cb_list.on_predict_end(logs)
# Note that this test specifies model_type explicitly.
@test_combinations.run_all_keras_modes(always_skip_v1=True)
class TestTensorBoardV2NonParameterizedTest(test_combinations.TestCase):
def setUp(self):
super(TestTensorBoardV2NonParameterizedTest, self).setUp()
self.logdir = os.path.join(self.get_temp_dir(), "tb")
self.train_dir = os.path.join(self.logdir, "train")
self.validation_dir = os.path.join(self.logdir, "validation")
def _get_seq_model(self):
model = keras.models.Sequential(
[
keras.layers.Conv2D(8, (3, 3), input_shape=(10, 10, 1)),
keras.layers.Flatten(),
keras.layers.Dense(1),
]
)
opt = gradient_descent.SGD(learning_rate=0.001)
model.compile(opt, "mse", run_eagerly=test_utils.should_run_eagerly())
return model
def _count_xplane_file(self, logdir):
profile_dir = os.path.join(logdir, "plugins", "profile")
count = 0
for dirpath, dirnames, filenames in os.walk(profile_dir):
del dirpath # unused
del dirnames # unused
for filename in filenames:
if filename.endswith(".xplane.pb"):
count += 1
return count
def fitModelAndAssertKerasModelWritten(self, model):
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
tb_cbk = keras.callbacks.TensorBoard(
self.logdir, write_graph=True, profile_batch=0
)
model.fit(
x,
y,
batch_size=2,
epochs=3,
validation_data=(x, y),
callbacks=[tb_cbk],
)
summary_file = list_summaries(self.logdir)
self.assertEqual(
summary_file.tensors,
{
_ObservedSummary(logdir=self.train_dir, tag="keras"),
},
)
if not model.run_eagerly:
# There should be one train graph
self.assertLen(summary_file.graph_defs, 1)
for graph_def in summary_file.graph_defs:
graph_def_str = str(graph_def)
# All the model layers should appear in the graphs
for layer in model.layers:
if "input" not in layer.name:
self.assertIn(layer.name, graph_def_str)
def test_TensorBoard_writeSequentialModel_noInputShape(self):
model = keras.models.Sequential(
[
keras.layers.Conv2D(8, (3, 3)),
keras.layers.Flatten(),
keras.layers.Dense(1),
]
)
model.compile("sgd", "mse", run_eagerly=test_utils.should_run_eagerly())
self.fitModelAndAssertKerasModelWritten(model)
def test_TensorBoard_writeSequentialModel_withInputShape(self):
model = keras.models.Sequential(
[
keras.layers.Conv2D(8, (3, 3), input_shape=(10, 10, 1)),
keras.layers.Flatten(),
keras.layers.Dense(1),
]
)
model.compile("sgd", "mse", run_eagerly=test_utils.should_run_eagerly())
self.fitModelAndAssertKerasModelWritten(model)
def test_TensorBoard_writeModel(self):
inputs = keras.layers.Input([10, 10, 1])
x = keras.layers.Conv2D(8, (3, 3), activation="relu")(inputs)
x = keras.layers.Flatten()(x)
x = keras.layers.Dense(1)(x)
model = keras.models.Model(inputs=inputs, outputs=[x])
model.compile("sgd", "mse", run_eagerly=test_utils.should_run_eagerly())
self.fitModelAndAssertKerasModelWritten(model)
def test_TensorBoard_autoTrace(self):
model = self._get_seq_model()
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
tb_cbk = keras.callbacks.TensorBoard(
self.logdir, histogram_freq=1, profile_batch=1, write_graph=False
)
model.fit(
x,
y,
batch_size=2,
epochs=2,
validation_data=(x, y),
callbacks=[tb_cbk],
)
summary_file = list_summaries(self.logdir)
self.assertEqual(
summary_file.tensors,
{
_ObservedSummary(logdir=self.train_dir, tag="batch_1"),
},
)
self.assertEqual(1, self._count_xplane_file(logdir=self.logdir))
def test_TensorBoard_autoTrace_outerProfiler(self):
"""Runs a profiler session that interferes with the callback's one.
The callback will not generate a profile but execution will proceed
without crashing due to unhandled exceptions.
"""
tf.profiler.experimental.start(logdir="")
model = self._get_seq_model()
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
tb_cbk = keras.callbacks.TensorBoard(
self.logdir, histogram_freq=1, profile_batch=1, write_graph=False
)
model.fit(
x,
y,
batch_size=2,
epochs=2,
validation_data=(x, y),
callbacks=[tb_cbk],
)
summary_file = list_summaries(self.logdir)
tf.profiler.experimental.stop(save=False)
self.assertEqual(
summary_file.tensors,
{
_ObservedSummary(logdir=self.train_dir, tag="batch_1"),
},
)
self.assertEqual(0, self._count_xplane_file(logdir=self.train_dir))
def test_TensorBoard_autoTrace_tagNameWithBatchNum(self):
model = self._get_seq_model()
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
tb_cbk = keras.callbacks.TensorBoard(
self.logdir, histogram_freq=1, profile_batch=2, write_graph=False
)
model.fit(
x,
y,
batch_size=2,
epochs=2,
validation_data=(x, y),
callbacks=[tb_cbk],
)
summary_file = list_summaries(self.logdir)
self.assertEqual(
summary_file.tensors,
{
_ObservedSummary(logdir=self.train_dir, tag="batch_2"),
},
)
self.assertEqual(1, self._count_xplane_file(logdir=self.logdir))
def test_TensorBoard_autoTrace_profileBatchRangeSingle(self):
model = self._get_seq_model()
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
tb_cbk = keras.callbacks.TensorBoard(
self.logdir,
histogram_freq=1,
profile_batch="2,2",
write_graph=False,
)
model.fit(
x,
y,
batch_size=3,
epochs=2,
validation_data=(x, y),
callbacks=[tb_cbk],
)
summary_file = list_summaries(self.logdir)
self.assertEqual(
summary_file.tensors,
{
# Trace will be logged once at the batch it stops profiling.
_ObservedSummary(logdir=self.train_dir, tag="batch_2"),
},
)
self.assertEqual(1, self._count_xplane_file(logdir=self.logdir))
def test_TensorBoard_autoTrace_profileBatchRangeTwice(self):
model = self._get_seq_model()
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
tb_cbk = keras.callbacks.TensorBoard(
self.logdir,
histogram_freq=1,
profile_batch="10,10",
write_graph=False,
)
model.fit(
x,
y,
batch_size=3,
epochs=10,
validation_data=(x, y),
callbacks=[tb_cbk],
)
time.sleep(1)  # Avoid the second profile overwriting the first.
model.fit(
x,
y,
batch_size=3,
epochs=10,
validation_data=(x, y),
callbacks=[tb_cbk],
)
self.assertEqual(2, self._count_xplane_file(logdir=self.logdir))
# Test case that replicates a GitHub issue.
# https://github.com/tensorflow/tensorflow/issues/37543
def test_TensorBoard_autoTrace_profileTwiceGraphMode(self):
tf.compat.v1.disable_eager_execution()
inp = keras.Input((1,))
out = keras.layers.Dense(units=1)(inp)
model = keras.Model(inp, out)
model.compile(gradient_descent.SGD(1), "mse")
logdir = os.path.join(self.get_temp_dir(), "tb1")
model.fit(
np.zeros((64, 1)),
np.zeros((64, 1)),
batch_size=32,
callbacks=[keras.callbacks.TensorBoard(logdir, profile_batch=1)],
)
# Verifies trace exists in the first logdir.
self.assertEqual(1, self._count_xplane_file(logdir=logdir))
logdir = os.path.join(self.get_temp_dir(), "tb2")
model.fit(
np.zeros((64, 1)),
np.zeros((64, 1)),
batch_size=32,
callbacks=[keras.callbacks.TensorBoard(logdir, profile_batch=2)],
)
# Verifies trace exists in the second logdir.
self.assertEqual(1, self._count_xplane_file(logdir=logdir))
def test_TensorBoard_autoTrace_profileBatchRange(self):
model = self._get_seq_model()
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
tb_cbk = keras.callbacks.TensorBoard(
self.logdir,
histogram_freq=1,
profile_batch="1,3",
write_graph=False,
)
model.fit(
x,
y,
batch_size=4,
epochs=2,
validation_data=(x, y),
callbacks=[tb_cbk],
)
summary_file = list_summaries(self.logdir)
self.assertEqual(
summary_file.tensors,
{
# Trace will be logged once at the batch it stops profiling.
_ObservedSummary(logdir=self.train_dir, tag="batch_3"),
},
)
self.assertEqual(1, self._count_xplane_file(logdir=self.logdir))
def test_TensorBoard_autoTrace_profileInvalidBatchRange(self):
with self.assertRaises(ValueError):
keras.callbacks.TensorBoard(
self.logdir,
histogram_freq=1,
profile_batch="-1,3",
write_graph=False,
)
with self.assertRaises(ValueError):
keras.callbacks.TensorBoard(
self.logdir,
histogram_freq=1,
profile_batch="1,None",
write_graph=False,
)
with self.assertRaises(ValueError):
keras.callbacks.TensorBoard(
self.logdir,
histogram_freq=1,
profile_batch="6,5",
write_graph=False,
)
with self.assertRaises(ValueError):
keras.callbacks.TensorBoard(
self.logdir,
histogram_freq=1,
profile_batch=-1,
write_graph=False,
)
def test_TensorBoard_autoTrace_profile_batch_largerThanBatchCount(self):
model = self._get_seq_model()
x, y = np.ones((10, 10, 10, 1)), np.ones((10, 1))
tb_cbk = keras.callbacks.TensorBoard(
self.logdir,
histogram_freq=1,
profile_batch=10000,
write_graph=False,
)
model.fit(
x,
y,
batch_size=2,
epochs=2,
validation_data=(x, y),
callbacks=[tb_cbk],
)
summary_file = list_summaries(self.logdir)
# Tracing was enabled only for the 10000th batch, which is never reached,
# so no trace summaries should be written.
self.assertEmpty(summary_file.tensors)
self.assertEqual(0, self._count_xplane_file(logdir=self.train_dir))
class MostRecentlyModifiedFileMatchingPatternTest(tf.test.TestCase):
def test_get_most_recently_modified_file_matching_pattern(self):
file_pattern = "f.batch{batch:02d}epoch{epoch:02d}.h5"
test_dir = self.get_temp_dir()
path_pattern = os.path.join(test_dir, file_pattern)
file_paths = [
os.path.join(test_dir, file_name)
for file_name in [
"f.batch03epoch02.h5",
"f.batch02epoch02.h5",
"f.batch01epoch01.h5",
]
]
for file_path in file_paths:
with open(file_path, "w") as f:
# Ensure there is some interval between file creations.
time.sleep(2)
f.write("foo bar")
# Ensure the files have been actually written.
self.assertEqual(
set(
[
os.path.join(test_dir, file_name)
for file_name in os.listdir(test_dir)
]
),
set(file_paths),
)
self.assertEqual(
keras.callbacks.ModelCheckpoint(
None
)._get_most_recently_modified_file_matching_pattern(path_pattern),
file_paths[-1],
)
def test_some_file_not_matching_pattern(self):
file_pattern = "f.batch{batch:02d}epoch{epoch:02d}.h5"
test_dir = self.get_temp_dir()
path_pattern = os.path.join(test_dir, file_pattern)
file_paths = [
os.path.join(test_dir, file_name)
for file_name in [
"f.batch03epoch02.h5",
"f.batch02epoch02.h5",
"f.baatch01epoch01.h5",
]
]
for file_path in file_paths:
with open(file_path, "w") as f:
# Ensure there is some interval between file creations.
time.sleep(2)
f.write("foo bar")
self.assertEqual(
keras.callbacks.ModelCheckpoint(
None
)._get_most_recently_modified_file_matching_pattern(path_pattern),
file_paths[-2],
)
def test_get_same_file_if_file_name_equals_pattern(self):
file_name = "f.batch02.h5"
test_dir = self.get_temp_dir()
file_path = os.path.join(test_dir, file_name)
with open(file_path, "w") as f:
f.write("foo bar")
self.assertEqual(
os.path.join(test_dir, os.listdir(test_dir)[0]), file_path
)
self.assertEqual(
keras.callbacks.ModelCheckpoint(
None
)._get_most_recently_modified_file_matching_pattern(file_path),
file_path,
)
def test_get_none_if_file_does_not_exist(self):
file_name = "f.batch02.h5"
test_dir = self.get_temp_dir()
file_path = os.path.join(test_dir, file_name)
self.assertEmpty(os.listdir(test_dir))
self.assertEqual(
keras.callbacks.ModelCheckpoint(
None
)._get_most_recently_modified_file_matching_pattern(file_path),
None,
)
def test_using_checkpoint_management_latest_checkpoint(self):
file_pattern = "f.batch{batch:02d}epoch{epoch:02d}"
ckpt_file_name = "f.batchXepochY"
test_dir = self.get_temp_dir()
path_pattern = os.path.join(test_dir, file_pattern)
ckpt_file_path = os.path.join(test_dir, ckpt_file_name)
with open(ckpt_file_path, "w") as f:
f.write("dummy ckpt")
tf.__internal__.train.update_checkpoint_state(test_dir, ckpt_file_path)
file_paths = [
os.path.join(test_dir, file_name)
for file_name in ["f.batch03epoch02", "f.batch02epoch02"]
]
for file_path in file_paths:
with open(file_path, "w") as f:
f.write("foo bar")
# The result returned from checkpoint_management.latest_checkpoint takes
# priority, so even if it was written earlier, we should still return
# that.
self.assertEqual(
keras.callbacks.ModelCheckpoint(
None
)._get_most_recently_modified_file_matching_pattern(path_pattern),
ckpt_file_path,
)
class SummaryOpsTest(tf.test.TestCase):
def tearDown(self):
super(SummaryOpsTest, self).tearDown()
tf.summary.trace_off()
def keras_model(self, *args, **kwargs):
logdir = self.get_temp_dir()
writer = tf.summary.create_file_writer(logdir)
with writer.as_default():
keras.callbacks.keras_model_summary(*args, **kwargs)
writer.close()
events = events_from_logdir(logdir)
# The first event contains no summary values. The written content goes
# to the second event.
return events[1]
@test_utils.run_v2_only
def testKerasModel(self):
model = keras.Sequential(
[Dense(10, input_shape=(100,)), Activation("relu", name="my_relu")]
)
event = self.keras_model(name="my_name", data=model, step=1)
first_val = event.summary.value[0]
self.assertEqual(
model.to_json(), first_val.tensor.string_val[0].decode()
)
@test_utils.run_v2_only
def testKerasModel_usesDefaultStep(self):
model = keras.Sequential(
[Dense(10, input_shape=(100,)), Activation("relu", name="my_relu")]
)
try:
tf.summary.experimental.set_step(42)
event = self.keras_model(name="my_name", data=model)
self.assertEqual(42, event.step)
finally:
# Reset to default state for other tests.
tf.summary.experimental.set_step(None)
@test_utils.run_v2_only
def testKerasModel_subclass(self):
class SimpleSubclass(keras.Model):
def __init__(self):
super().__init__(name="subclass")
self.dense = Dense(10, input_shape=(100,))
self.activation = Activation("relu", name="my_relu")
def call(self, inputs):
x = self.dense(inputs)
return self.activation(x)
# Intentionally erroring out at json serialization to test the
# warning.
def get_config(self):
raise NotImplementedError
model = SimpleSubclass()
with tf.compat.v1.test.mock.patch.object(
logging, "warning"
) as mock_log:
self.assertFalse(
keras.callbacks.keras_model_summary(
name="my_name", data=model, step=1
)
)
self.assertRegex(
str(mock_log.call_args), "Model failed to serialize as JSON."
)
@test_utils.run_v2_only
def testKerasModel_otherExceptions(self):
model = keras.Sequential()
with tf.compat.v1.test.mock.patch.object(
model, "to_json"
) as mock_to_json:
with tf.compat.v1.test.mock.patch.object(
logging, "warning"
) as mock_log:
mock_to_json.side_effect = Exception("oops")
self.assertFalse(
keras.callbacks.keras_model_summary(
name="my_name", data=model, step=1
)
)
self.assertRegex(
str(mock_log.call_args),
"Model failed to serialize as JSON. Ignoring",
)
def events_from_file(filepath):
"""Returns all events in a single event file.
Args:
filepath: Path to the event file.
Returns:
A list of all tf.Event protos in the event file.
"""
result = []
raw_dataset = tf.data.TFRecordDataset([filepath])
for raw_record in raw_dataset.take(10):
event = tf.compat.v1.Event()
event.ParseFromString(raw_record.numpy())
result.append(event)
return result
def events_from_logdir(logdir):
"""Returns all events in the single eventfile in logdir.
Args:
logdir: The directory in which the single event file is sought.
Returns:
A list of all tf.Event protos from the single event file.
Raises:
AssertionError: If logdir does not contain exactly one file.
"""
assert tf.compat.v1.gfile.Exists(logdir)
files = tf.compat.v1.gfile.ListDirectory(logdir)
assert len(files) == 1, f"Expected exactly one file in logdir, found: {files}"
return events_from_file(os.path.join(logdir, files[0]))
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/callbacks_test.py/0 | {
"file_path": "tf-keras/tf_keras/callbacks_test.py",
"repo_id": "tf-keras",
"token_count": 79548
} | 217 |
# TF-Keras with Distribution Strategy Tests
This directory contains unit tests that combine the TF-Keras library with
[Distributed Training](https://www.tensorflow.org/guide/distributed_training).
Tests that use a custom training loop instead of TF-Keras `compile`/`fit` should be
placed under the `python/distribute` directory instead.
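A typical test here drives training through the standard TF-Keras `compile`/`fit`
workflow inside a strategy scope. The sketch below is illustrative only; the
strategy, model, and data are placeholder assumptions rather than an actual test
from this directory:

```python
import numpy as np
import tensorflow as tf
import tf_keras as keras


def test_fit_under_mirrored_strategy():
    strategy = tf.distribute.MirroredStrategy()  # placeholder strategy choice
    with strategy.scope():
        model = keras.Sequential([keras.layers.Dense(1, input_shape=(4,))])
        model.compile(optimizer="sgd", loss="mse")
    x = np.ones((8, 4), dtype=np.float32)
    y = np.ones((8, 1), dtype=np.float32)
    # Training is driven by Keras `fit`, not a custom training loop.
    model.fit(x, y, batch_size=4, epochs=1)
```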
| tf-keras/tf_keras/distribute/README.md/0 | {
"file_path": "tf-keras/tf_keras/distribute/README.md",
"repo_id": "tf-keras",
"token_count": 81
} | 218 |
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for distributed training utility functions."""
import tensorflow.compat.v2 as tf
from tf_keras import callbacks
from tf_keras.distribute import distributed_training_utils_v1
from tf_keras.optimizers.legacy import adam
class DistributedTrainingUtilsTest(tf.test.TestCase):
def test_validate_callbacks_predefined_callbacks(self):
supported_predefined_callbacks = [
callbacks.TensorBoard(),
callbacks.CSVLogger(filename="./log.csv"),
callbacks.EarlyStopping(),
callbacks.ModelCheckpoint(filepath="./checkpoint"),
callbacks.TerminateOnNaN(),
callbacks.ProgbarLogger(),
callbacks.History(),
callbacks.RemoteMonitor(),
]
distributed_training_utils_v1.validate_callbacks(
supported_predefined_callbacks, adam.Adam()
)
unsupported_predefined_callbacks = [
callbacks.ReduceLROnPlateau(),
callbacks.LearningRateScheduler(schedule=lambda epoch: 0.001),
]
for callback in unsupported_predefined_callbacks:
with self.assertRaisesRegex(
ValueError, "You must specify a TF-Keras Optimizer V2"
):
distributed_training_utils_v1.validate_callbacks(
[callback], tf.compat.v1.train.AdamOptimizer()
)
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/distribute/distributed_training_utils_test.py/0 | {
"file_path": "tf-keras/tf_keras/distribute/distributed_training_utils_test.py",
"repo_id": "tf-keras",
"token_count": 793
} | 219 |
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Test MirroredVariable in MirroredStrategy and MultiWorkerMirroredStrategy."""
import tensorflow.compat.v2 as tf
from tf_keras.distribute import distributed_training_utils
from tf_keras.layers import core
def _mimic_two_cpus():
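# Splits the first physical CPU into two logical devices so that multi-device
# strategies can be exercised on a single host. Returns False when no CPU test
# device is available.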
try:
cpus = tf.config.list_physical_devices("CPU")
except tf.errors.NotFoundError:
# Testing device not available. Skip the test.
return False
tf.config.set_logical_device_configuration(
cpus[0],
[
tf.config.LogicalDeviceConfiguration(),
tf.config.LogicalDeviceConfiguration(),
],
)
return True
def get_strategy_with_mimicing_cpus():
if not _mimic_two_cpus():
return None
return tf.distribute.MultiWorkerMirroredStrategy._from_local_devices(
("/device:CPU:0", "/device:CPU:1")
)
@tf.__internal__.distribute.combinations.generate(
tf.__internal__.test.combinations.combine(
distribution=list(
filter(
None.__ne__,
[
tf.__internal__.distribute.combinations.mirrored_strategy_with_gpu_and_cpu, # noqa: E501
get_strategy_with_mimicing_cpus(),
],
)
),
mode=["graph", "eager"],
)
)
class MirroredVariableCreationTest(tf.test.TestCase):
"""Base class that tests mirrored variable creator.
Currently it assumes all strategy objects have two replicas.
"""
@classmethod
def setUpClass(cls):
_mimic_two_cpus()
def assertAllDifferent(self, objs):
for i in range(len(objs)):
for j in range(len(objs)):
if i == j:
continue
self.assertIsNot(objs[i], objs[j])
def _is_mirrored(self, val):
if distributed_training_utils.is_distributed_variable(val):
if val._policy:
return val._policy._is_mirrored()
# Since `Mirrored` is a private symbol in tf.distribute, we're checking
# with `DistributedValues` as an approximation.
return isinstance(val, tf.distribute.DistributedValues)
def testWithLayers(self, distribution):
def model_fn(features):
layer1 = core.Dense(1)
layer1(features)
layer2 = core.Dense(1)
layer2(features)
# We rely on names and orders to make sure each replica references the
# same MirroredVariable. Uniquifying names may involve global state,
# and merge_call switches threads, so we need to test that things still
# work after merge_call.
tf.distribute.get_replica_context().merge_call(lambda _: _)
layer3 = core.Dense(1)
layer3(features)
return [
(layer1.kernel, layer1.bias),
(layer2.kernel, layer2.bias),
(layer3.kernel, layer3.bias),
]
iterator = distribution.make_input_fn_iterator(
lambda _: tf.data.Dataset.from_tensors([[1.0]]).repeat(10)
)
self.evaluate(iterator.initializer)
features = iterator.get_next()
with distribution.scope():
result = distribution.extended.call_for_each_replica(
model_fn, args=(features,)
)
for kernel, bias in result:
self.assertTrue(self._is_mirrored(kernel))
self.assertAllDifferent(
distribution.experimental_local_results(kernel)
)
self.assertTrue(self._is_mirrored(bias))
self.assertAllDifferent(
distribution.experimental_local_results(bias)
)
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/distribute/mirrored_variable_test.py/0 | {
"file_path": "tf-keras/tf_keras/distribute/mirrored_variable_test.py",
"repo_id": "tf-keras",
"token_count": 1928
} | 220 |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""A simple network to use in tests and examples."""
import tensorflow.compat.v2 as tf
from tf_keras.legacy_tf_layers import core
from tf_keras.legacy_tf_layers import normalization
from tf_keras.optimizers.legacy import optimizer_v2
def minimize_loss_example(optimizer, use_bias=False, use_callable_loss=True):
"""Example of non-distribution-aware legacy code."""
def dataset_fn():
dataset = tf.data.Dataset.from_tensors([[1.0]]).repeat()
# TODO(isaprykin): batch with drop_remainder causes shapes to be
# fully defined for TPU. Remove this when XLA supports dynamic shapes.
return dataset.batch(1, drop_remainder=True)
layer = core.Dense(1, use_bias=use_bias)
def model_fn(x):
"""A very simple model written by the user."""
def loss_fn():
y = tf.reshape(layer(x), []) - tf.constant(1.0)
return y * y
if isinstance(optimizer, optimizer_v2.OptimizerV2):
return optimizer.minimize(
loss_fn, lambda: layer.trainable_variables
)
elif use_callable_loss:
return optimizer.minimize(loss_fn)
else:
return optimizer.minimize(loss_fn())
return model_fn, dataset_fn, layer
def batchnorm_example(
optimizer_fn,
batch_per_epoch=1,
momentum=0.9,
renorm=False,
update_ops_in_replica_mode=False,
):
"""Example of non-distribution-aware legacy code with batch
normalization."""
def dataset_fn():
# input shape is [16, 8], input values are increasing in both
# dimensions.
return tf.data.Dataset.from_tensor_slices(
[
[
[float(x * 8 + y + z * 100) for y in range(8)]
for x in range(16)
]
for z in range(batch_per_epoch)
]
).repeat()
optimizer = optimizer_fn()
batchnorm = normalization.BatchNormalization(
renorm=renorm, momentum=momentum, fused=False
)
layer = core.Dense(1, use_bias=False)
def model_fn(x):
"""A model that uses batchnorm."""
def loss_fn():
y = batchnorm(x, training=True)
with tf.control_dependencies(
tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
if update_ops_in_replica_mode
else []
):
loss = tf.reduce_mean(
tf.reduce_sum(layer(y)) - tf.constant(1.0)
)
# `x` and `y` will be fetched by the gradient computation, but not
# `loss`.
return loss
if isinstance(optimizer, optimizer_v2.OptimizerV2):
return optimizer.minimize(
loss_fn, lambda: layer.trainable_variables
)
# Callable loss.
return optimizer.minimize(loss_fn)
return model_fn, dataset_fn, batchnorm
| tf-keras/tf_keras/distribute/test_example.py/0 | {
"file_path": "tf-keras/tf_keras/distribute/test_example.py",
"repo_id": "tf-keras",
"token_count": 1561
} | 221 |
# Copyright 2022 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for DTensor based strategy training."""
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
from tf_keras import backend
from tf_keras import mixed_precision
from tf_keras.dtensor import integration_test_utils
from tf_keras.optimizers import adam
from tf_keras.utils import tf_utils
# isort: off
# Import the MirroredStrategy that is backed by DTensor
# It is not a public API yet, so we do a private symbol import for now.
from tensorflow.python.distribute.experimental import (
mirrored_strategy as dtensor_mirrored_strategy,
)
from tensorflow.dtensor.python.tests import test_util
class TrainingTest(test_util.DTensorBaseTest):
def setUp(self):
super().setUp()
backend.enable_tf_random_generator()
tf_utils.set_random_seed(1337)
global_ids = test_util.create_device_ids_array((2,))
local_device_ids = np.ravel(global_ids).tolist()
mesh_dict = {
device: tf.experimental.dtensor.Mesh(
["batch"],
global_ids,
local_device_ids,
test_util.create_device_list((2,), device),
)
for device in ("CPU", "GPU", "TPU")
}
self.mesh = self.configTestMesh(mesh_dict)
def tearDown(self):
super().tearDown()
# clean up the mixed precision setting if any.
mixed_precision.set_global_policy("float32")
@parameterized.product(
run_eagerly=[True, False],
jit_compile=[True, False],
optimizer_creator=[lambda: adam.Adam(), lambda: "adam"],
enable_mixed_precision=[True, False],
)
def test_model_fit(
self,
run_eagerly,
jit_compile,
optimizer_creator,
enable_mixed_precision,
):
if run_eagerly and jit_compile:
self.skipTest("run_eagerly can't run with jit_compile")
if enable_mixed_precision and self.mesh.device_type() != "GPU":
self.skipTest("Only run mixed_precision on GPU for performance")
if enable_mixed_precision:
mixed_precision.set_global_policy("mixed_float16")
dtensor_strategy = dtensor_mirrored_strategy.MirroredStrategy(
mesh=self.mesh
)
# Make fake MNIST-like image data.
batch_size = 64
dataset = tf.data.Dataset.from_tensor_slices(
(
np.random.uniform(size=(batch_size, 28, 28, 1)).astype(
np.float32
),
np.random.randint(0, 10, size=(batch_size,)),
)
)
dataset = dataset.shuffle(64).repeat().batch(64, drop_remainder=True)
with dtensor_strategy.scope():
model = integration_test_utils.get_model()
optimizer = optimizer_creator()
model.compile(
loss="SparseCategoricalCrossentropy",
optimizer=optimizer,
metrics="acc",
run_eagerly=run_eagerly,
jit_compile=jit_compile,
)
model.fit(dataset, steps_per_epoch=10)
prediction = model.predict(
np.random.uniform(size=(batch_size, 28, 28, 1)).astype(np.float32)
)
self.assertEqual(prediction.shape, (batch_size, 10))
if enable_mixed_precision:
self.assertEqual(prediction.dtype, tf.float16)
else:
self.assertEqual(prediction.dtype, tf.float32)
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/dtensor/strategy_integration_test.py/0 | {
"file_path": "tf-keras/tf_keras/dtensor/strategy_integration_test.py",
"repo_id": "tf-keras",
"token_count": 1788
} | 222 |
# Copyright 2018 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for numerical correctness."""
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
import tf_keras as keras
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
class MultiInputSubclassed(keras.Model):
"""Subclassed Model that adds its inputs and then adds a bias."""
def __init__(self):
super().__init__()
self.add = keras.layers.Add()
self.bias = test_utils.Bias()
def call(self, inputs):
added = self.add(inputs)
return self.bias(added)
def multi_input_functional():
"""Functional Model that adds its inputs and then adds a bias."""
input_1 = keras.Input(shape=(1,))
input_2 = keras.Input(shape=(1,))
input_3 = keras.Input(shape=(1,))
added = keras.layers.Add()([input_1, input_2, input_3])
output = test_utils.Bias()(added)
return keras.Model([input_1, input_2, input_3], output)
@test_combinations.run_with_all_model_types
@test_combinations.run_all_keras_modes
class SimpleBiasTest(test_combinations.TestCase):
def _get_simple_bias_model(self):
model = test_utils.get_model_from_layers(
[test_utils.Bias()], input_shape=(1,)
)
model.compile(
keras.optimizers.legacy.gradient_descent.SGD(0.1),
"mae",
run_eagerly=test_utils.should_run_eagerly(),
)
return model
def test_simple_bias_fit(self):
x = np.array([[0.0], [1.0], [2.0]])
y = np.array([[0.5], [2.0], [3.5]])
model = self._get_simple_bias_model()
history = model.fit(x, y, batch_size=3, epochs=5)
self.assertAllClose(history.history["loss"], [1.0, 0.9, 0.8, 0.7, 0.6])
def test_simple_bias_evaluate(self):
x = np.array([[0.0], [1.0], [2.0]])
y = np.array([[1.0], [3.0], [5.0]])
model = self._get_simple_bias_model()
loss = model.evaluate(x, y, batch_size=1)
self.assertAlmostEqual(loss, 2.0)
def test_simple_bias_predict(self):
x = np.array([[0.0], [1.0], [2.0]])
model = self._get_simple_bias_model()
pred = model.predict(x, batch_size=1)
self.assertAllClose(x, pred)
@test_combinations.run_all_keras_modes
class MultipleInputTest(test_combinations.TestCase):
def _get_multiple_input_model(self, subclassed=True):
if subclassed:
model = MultiInputSubclassed()
else:
model = multi_input_functional()
model.compile(
keras.optimizers.legacy.gradient_descent.SGD(0.1),
"mae",
run_eagerly=test_utils.should_run_eagerly(),
)
return model
@parameterized.named_parameters(("subclassed", True), ("functional", False))
def test_multiple_input_fit(self, subclassed):
x = [
np.array([[1.0], [2.0], [3.0]]),
np.array([[4.0], [5.0], [6.0]]),
np.array([[7.0], [8.0], [9.0]]),
]
y = np.array([[12.5], [16.0], [19.5]])
model = self._get_multiple_input_model(subclassed)
history = model.fit(x, y, batch_size=3, epochs=5)
self.assertAllClose(history.history["loss"], [1.0, 0.9, 0.8, 0.7, 0.6])
@parameterized.named_parameters(("subclassed", True), ("functional", False))
def test_multiple_input_evaluate(self, subclassed):
x = [
np.array([[1.0], [2.0], [3.0]]),
np.array([[4.0], [5.0], [6.0]]),
np.array([[7.0], [8.0], [9.0]]),
]
y = np.array([[13.0], [17.0], [21.0]])
model = self._get_multiple_input_model(subclassed)
loss = model.evaluate(x, y, batch_size=3)
self.assertAlmostEqual(loss, 2.0)
@parameterized.named_parameters(("subclassed", True), ("functional", False))
def test_multiple_input_predict(self, subclassed):
x = [
np.array([[1.0], [2.0], [3.0]]),
np.array([[4.0], [5.0], [6.0]]),
np.array([[7.0], [8.0], [9.0]]),
]
model = self._get_multiple_input_model(subclassed)
pred = model.predict(x, batch_size=1)
self.assertAllClose(pred, [[12.0], [15.0], [18.0]])
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/engine/correctness_test.py/0 | {
"file_path": "tf-keras/tf_keras/engine/correctness_test.py",
"repo_id": "tf-keras",
"token_count": 2215
} | 223 |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# =============================================================================
"""Tests for layer graphs construction & handling."""
import tensorflow.compat.v2 as tf
from tf_keras.engine import base_layer
from tf_keras.engine import node as node_module
from tf_keras.testing_infra import test_combinations
class DummyTensor(tf.__internal__.types.Tensor):
def __init__(self, shape=None):
self._shape = shape
@property
def shape(self):
return self._shape
class DummyLayer(base_layer.Layer):
pass
class NetworkConstructionTest(test_combinations.TestCase):
def test_chained_node_construction(self):
# test basics
a = DummyTensor(shape=(None, 32))
b = DummyTensor(shape=(None, 32))
a_layer = DummyLayer()
node = node_module.Node(a_layer, outputs=a)
self.assertEqual(node.outbound_layer, a_layer)
self.assertTrue(node.is_input)
self.assertListEqual(node.inbound_layers, [])
self.assertListEqual(node.input_tensors, [a])
self.assertListEqual(node.input_shapes, [(None, 32)])
self.assertListEqual(node.output_tensors, [a])
self.assertListEqual(node.output_shapes, [(None, 32)])
b_layer = DummyLayer()
node_module.Node(b_layer, outputs=b)
dense = DummyLayer()
a_2 = DummyTensor()
node_a = node_module.Node(layer=dense, call_args=(a,), outputs=a_2)
b_2 = DummyTensor()
node_b = node_module.Node(layer=dense, call_args=(b,), outputs=b_2)
# test the node attributes
self.assertFalse(node_a.is_input)
self.assertFalse(node_b.is_input)
self.assertEqual(node_a.call_args, (a,))
self.assertEqual(node_a.call_kwargs, {})
self.assertEqual(node_a.outputs, a_2)
# Test the layer wiring
self.assertLen(dense._inbound_nodes, 2)
self.assertLen(dense._outbound_nodes, 0)
self.assertEqual(dense._inbound_nodes, [node_a, node_b])
self.assertEqual(dense._inbound_nodes[0].inbound_layers, a_layer)
self.assertEqual(dense._inbound_nodes[0].outbound_layer, dense)
self.assertEqual(dense._inbound_nodes[1].inbound_layers, b_layer)
self.assertEqual(dense._inbound_nodes[1].outbound_layer, dense)
self.assertIs(dense._inbound_nodes[0].input_tensors, a)
self.assertIs(dense._inbound_nodes[1].input_tensors, b)
def test_multi_input_node(self):
# test multi-input layer
a = DummyTensor()
b = DummyTensor()
dense = DummyLayer()
a_2 = DummyTensor()
node_module.Node(layer=dense, call_args=(a,), outputs=a_2)
b_2 = DummyTensor()
node_module.Node(layer=dense, call_args=(b,), outputs=b_2)
concat_layer = DummyLayer()
merged = DummyTensor()
node_module.Node(
layer=concat_layer, call_args=([a_2, b_2],), outputs=merged
)
(
merge_layer,
merge_node_index,
merge_tensor_index,
) = merged._keras_history
self.assertEqual(merge_node_index, 0)
self.assertEqual(merge_tensor_index, 0)
self.assertLen(merge_layer._inbound_nodes, 1)
self.assertLen(merge_layer._outbound_nodes, 0)
self.assertLen(merge_layer._inbound_nodes[0].input_tensors, 2)
self.assertEqual(
merge_layer._inbound_nodes[0].input_tensors, [a_2, b_2]
)
self.assertLen(merge_layer._inbound_nodes[0].inbound_layers, 2)
def test_arg_and_kwarg_mix(self):
input_layer = DummyLayer()
input_layer_2 = DummyLayer()
a = DummyTensor()
node_a = node_module.Node(layer=input_layer, outputs=a)
b = DummyTensor()
node_b = node_module.Node(layer=input_layer_2, outputs=b)
arg_2 = DummyTensor()
arg_3 = DummyTensor()
node_c = node_module.Node(layer=input_layer, outputs=arg_3)
kwarg_x = DummyTensor()
kwarg_y = DummyTensor()
node_d = node_module.Node(layer=input_layer, outputs=kwarg_y)
merge_layer = DummyLayer()
merged = DummyTensor()
node = node_module.Node(
layer=merge_layer,
call_args=([a, b], arg_2, arg_3),
call_kwargs={"x": kwarg_x, "y": kwarg_y},
outputs=merged,
)
(
merge_layer,
merge_node_index,
merge_tensor_index,
) = merged._keras_history
# Check the saved call args/kwargs
self.assertEqual(([a, b], arg_2, arg_3), node.call_args)
self.assertEqual({"x": kwarg_x, "y": kwarg_y}, node.call_kwargs)
# Only the inputs that were produced by input nodes should appear in
# keras_tensors
self.assertEqual({a, b, arg_3, kwarg_y}, set(node.keras_inputs))
self.assertEqual(
set(node.parent_nodes), {node_a, node_b, node_c, node_d}
)
# Check the layer wirings
self.assertEqual(merge_node_index, 0)
self.assertEqual(merge_tensor_index, 0)
self.assertLen(merge_layer._inbound_nodes, 1)
self.assertLen(merge_layer._outbound_nodes, 0)
self.assertLen(input_layer._outbound_nodes, 3)
self.assertLen(input_layer_2._outbound_nodes, 1)
self.assertLen(merge_layer._inbound_nodes[0].input_tensors, 2)
self.assertEqual(merge_layer._inbound_nodes[0].input_tensors, [a, b])
self.assertLen(merge_layer._inbound_nodes[0].inbound_layers, 4)
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/engine/node_test.py/0 | {
"file_path": "tf-keras/tf_keras/engine/node_test.py",
"repo_id": "tf-keras",
"token_count": 2799
} | 224 |
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""End-to-end tests for a variety of small models."""
import collections
import itertools
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
import tf_keras as keras
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
def _conv2d_filter(**kwargs):
"""Conv with non-default strides and dilation rate is not supported."""
return kwargs["strides"] <= 1 or kwargs["dilation_rate"] <= 1
# Scheme: (layer_class, data_shape, fuzz_dims, constructor_args, filter_fn)
# layer_class:
# A keras Layer class to be tested.
# data_shape:
# The shape of the input data. (not including batch dim)
# fuzz_dims:
# Dimensions which can be unspecified during model construction. For
# instance, if data_shape is (2, 5) and fuzz_dims is (False, True), a pass
# with model input shape of (2, None) will also be performed.
# constructor_args:
# An OrderedDict (to ensure consistent test names) with a key and a list
# of values to test. Test cases will be generated for the Cartesian product
# of all constructor args, so adding more fields can drastically increase
# the testing load.
# filter_fn:
# If not None, this function will be called on each set of generated
# constructor args, and prevents generation of contradictory combinations.
# A True return value indicates a valid test.
_LAYERS_TO_TEST = [
(
keras.layers.Dense,
(1,),
(False,),
collections.OrderedDict([("units", [1])]),
None,
),
(
keras.layers.Activation,
(2, 2),
(True, True),
collections.OrderedDict([("activation", ["relu"])]),
None,
),
(
keras.layers.Dropout,
(16,),
(False,),
collections.OrderedDict([("rate", [0.25])]),
None,
),
(
keras.layers.BatchNormalization,
(8, 8, 3),
(True, True, False),
collections.OrderedDict(
[("axis", [3]), ("center", [True, False]), ("scale", [True, False])]
),
None,
),
(
keras.layers.Conv1D,
(8, 8),
(False, False),
collections.OrderedDict(
[
("filters", [1]),
("kernel_size", [1, 3]),
("strides", [1, 2]),
("padding", ["valid", "same"]),
("use_bias", [True]),
("kernel_regularizer", ["l2"]),
("data_format", ["channels_last"]),
]
),
None,
),
(
keras.layers.Conv2D,
(8, 8, 3),
(True, True, False),
collections.OrderedDict(
[
("filters", [1]),
("kernel_size", [1, 3]),
("strides", [1, 2]),
("padding", ["valid", "same"]),
("use_bias", [True, False]),
("kernel_regularizer", ["l2"]),
("dilation_rate", [1, 2]),
("data_format", ["channels_last"]),
]
),
_conv2d_filter,
),
(
keras.layers.LSTM,
(4, 4),
(False, False),
collections.OrderedDict(
[
("units", [1]),
("kernel_regularizer", ["l2"]),
("dropout", [0, 0.5]),
("stateful", [True, False]),
("unroll", [True, False]),
("return_sequences", [True, False]),
]
),
None,
),
]
def _gather_test_cases():
cases = []
for (
layer_type,
inp_shape,
fuzz_dims,
arg_dict,
filter_fn,
) in _LAYERS_TO_TEST:
arg_combinations = [[(k, i) for i in v] for k, v in arg_dict.items()]
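        # Expand the per-argument value lists into the Cartesian product of
        # constructor kwargs described in the scheme comment above; each
        # combination becomes one named test case (unless filter_fn rejects
        # it).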
for arguments in itertools.product(*arg_combinations):
layer_kwargs = {k: v for k, v in arguments}
if filter_fn is not None and not filter_fn(**layer_kwargs):
continue
name = "_{}_{}".format(
layer_type.__name__,
"_".join("{}_{}".format(*i) for i in arguments),
)
cases.append((name, layer_type, inp_shape, fuzz_dims, layer_kwargs))
return cases
OUTPUT_TEST_CASES = _gather_test_cases()
class CoreLayerIntegrationTest(test_combinations.TestCase):
"""Test that layers and models produce the correct tensor types."""
# In v1 graph there are only symbolic tensors.
@test_combinations.run_all_keras_modes(always_skip_v1=True)
@parameterized.named_parameters(*OUTPUT_TEST_CASES)
def test_layer_output_type(
self, layer_to_test, input_shape, _, layer_kwargs
):
layer = layer_to_test(**layer_kwargs)
input_data = np.ones(shape=(2,) + input_shape, dtype=np.float32)
layer_result = layer(input_data)
inp = keras.layers.Input(shape=input_shape, batch_size=2)
model = keras.models.Model(inp, layer_to_test(**layer_kwargs)(inp))
model_result = model(input_data)
for x in [layer_result, model_result]:
if not isinstance(x, tf.Tensor):
raise ValueError(
f"Tensor or EagerTensor expected, got type {type(x)}"
)
if (
isinstance(x, tf.__internal__.EagerTensor)
!= tf.executing_eagerly()
):
expected_type = (
tf.__internal__.EagerTensor
if tf.executing_eagerly()
else tf.Tensor
)
raise ValueError(
f"Expected type {expected_type}, got type {type(x)}"
)
def _run_fit_eval_predict(
self, layer_to_test, input_shape, data_shape, layer_kwargs
):
batch_size = 2
run_eagerly = test_utils.should_run_eagerly()
def map_fn(_):
x = keras.backend.random_uniform(shape=data_shape)
y = keras.backend.random_uniform(shape=(1,))
return x, y
dataset = tf.data.Dataset.range(4).map(map_fn).batch(batch_size)
inp = keras.layers.Input(shape=input_shape, batch_size=batch_size)
layer = layer_to_test(**layer_kwargs)(inp)
# Condense the output down to a single scalar.
layer = keras.layers.Flatten()(layer)
layer = keras.layers.Lambda(lambda x: tf.reduce_mean(x, keepdims=True))(
layer
)
layer = keras.layers.Dense(1, activation=None)(layer)
model = keras.models.Model(inp, layer)
model.compile(loss="mse", optimizer="sgd", run_eagerly=run_eagerly)
model.fit(dataset, verbose=2, epochs=2)
model.compile(loss="mse", optimizer="sgd", run_eagerly=run_eagerly)
model.fit(dataset.repeat(2), verbose=2, epochs=2, steps_per_epoch=2)
eval_dataset = tf.data.Dataset.range(4).map(map_fn).batch(batch_size)
model.evaluate(eval_dataset, verbose=2)
def pred_map_fn(_):
return keras.backend.random_uniform(shape=data_shape)
pred_dataset = tf.data.Dataset.range(4)
pred_dataset = pred_dataset.map(pred_map_fn).batch(batch_size)
model.predict(pred_dataset, verbose=2)
@test_combinations.run_all_keras_modes(always_skip_v1=False)
@parameterized.named_parameters(*OUTPUT_TEST_CASES)
def test_model_loops(
self, layer_to_test, input_shape, fuzz_dims, layer_kwargs
):
self._run_fit_eval_predict(
layer_to_test, input_shape, input_shape, layer_kwargs
)
if any(fuzz_dims):
fuzzed_shape = []
for dim, should_fuzz in zip(input_shape, fuzz_dims):
fuzzed_shape.append(None if should_fuzz else dim)
self._run_fit_eval_predict(
layer_to_test, fuzzed_shape, input_shape, layer_kwargs
)
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/engine/training_integration_test.py/0 | {
"file_path": "tf-keras/tf_keras/engine/training_integration_test.py",
"repo_id": "tf-keras",
"token_count": 4107
} | 225 |
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for dense_features."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
from tf_keras.feature_column import dense_features as df
from tf_keras.testing_infra import test_combinations
# isort: off
from tensorflow.python.eager import backprop
from tensorflow.python.framework import (
test_util as tf_test_utils,
)
def _initialized_session(config=None):
sess = tf.compat.v1.Session(config=config)
sess.run(tf.compat.v1.global_variables_initializer())
sess.run(tf.compat.v1.tables_initializer())
return sess
class DenseFeaturesTest(test_combinations.TestCase):
@test_combinations.generate(
test_combinations.combine(mode=["graph", "eager"])
)
def test_retrieving_input(self):
features = {"a": [0.0]}
dense_features = df.DenseFeatures(tf.feature_column.numeric_column("a"))
inputs = self.evaluate(dense_features(features))
self.assertAllClose([[0.0]], inputs)
@test_combinations.generate(test_combinations.combine(mode=["eager"]))
def test_reuses_variables(self):
sparse_input = tf.SparseTensor(
indices=((0, 0), (1, 0), (2, 0)),
values=(0, 1, 2),
dense_shape=(3, 3),
)
# Create feature columns (categorical and embedding).
categorical_column = tf.feature_column.categorical_column_with_identity(
key="a", num_buckets=3
)
embedding_dimension = 2
def _embedding_column_initializer(shape, dtype, partition_info=None):
del shape # unused
del dtype # unused
del partition_info # unused
            embedding_values = ((1, 0), (0, 1), (1, 1))  # ids 0, 1, 2
return embedding_values
embedding_column = tf.feature_column.embedding_column(
categorical_column,
dimension=embedding_dimension,
initializer=_embedding_column_initializer,
)
dense_features = df.DenseFeatures([embedding_column])
features = {"a": sparse_input}
inputs = dense_features(features)
variables = dense_features.variables
# Sanity check: test that the inputs are correct.
self.assertAllEqual([[1, 0], [0, 1], [1, 1]], inputs)
# Check that only one variable was created.
self.assertEqual(1, len(variables))
# Check that invoking dense_features on the same features does not
# create additional variables
_ = dense_features(features)
self.assertEqual(1, len(variables))
self.assertIs(variables[0], dense_features.variables[0])
@test_combinations.generate(test_combinations.combine(mode=["eager"]))
def test_dense_feature_with_partitioner(self):
sparse_input = tf.SparseTensor(
indices=((0, 0), (1, 0), (2, 0), (3, 0)),
values=(0, 1, 3, 2),
dense_shape=(4, 4),
)
# Create feature columns (categorical and embedding).
categorical_column = tf.feature_column.categorical_column_with_identity(
key="a", num_buckets=4
)
embedding_dimension = 2
def _embedding_column_initializer(shape, dtype, partition_info=None):
offset = partition_info._var_offset[0]
del shape # unused
del dtype # unused
if offset == 0:
                embedding_values = ((1, 0), (0, 1))  # ids 0, 1
else:
                embedding_values = ((1, 1), (2, 2))  # ids 2, 3
return embedding_values
embedding_column = tf.feature_column.embedding_column(
categorical_column,
dimension=embedding_dimension,
initializer=_embedding_column_initializer,
)
dense_features = df.DenseFeatures(
[embedding_column],
partitioner=tf.compat.v1.fixed_size_partitioner(2),
)
features = {"a": sparse_input}
inputs = dense_features(features)
variables = dense_features.variables
# Sanity check: test that the inputs are correct.
self.assertAllEqual([[1, 0], [0, 1], [2, 2], [1, 1]], inputs)
# Check that only one variable was created.
self.assertEqual(2, len(variables))
# Check that invoking dense_features on the same features does not
# create additional variables
_ = dense_features(features)
self.assertEqual(2, len(variables))
self.assertIs(variables[0], dense_features.variables[0])
self.assertIs(variables[1], dense_features.variables[1])
@test_combinations.generate(test_combinations.combine(mode=["eager"]))
def test_feature_column_dense_features_gradient(self):
sparse_input = tf.SparseTensor(
indices=((0, 0), (1, 0), (2, 0)),
values=(0, 1, 2),
dense_shape=(3, 3),
)
# Create feature columns (categorical and embedding).
categorical_column = tf.feature_column.categorical_column_with_identity(
key="a", num_buckets=3
)
embedding_dimension = 2
def _embedding_column_initializer(shape, dtype, partition_info=None):
del shape # unused
del dtype # unused
del partition_info # unused
            embedding_values = ((1, 0), (0, 1), (1, 1))  # ids 0, 1, 2
return embedding_values
embedding_column = tf.feature_column.embedding_column(
categorical_column,
dimension=embedding_dimension,
initializer=_embedding_column_initializer,
)
dense_features = df.DenseFeatures([embedding_column])
features = {"a": sparse_input}
def scale_matrix():
matrix = dense_features(features)
return 2 * matrix
# Sanity check: Verify that scale_matrix returns the correct output.
self.assertAllEqual([[2, 0], [0, 2], [2, 2]], scale_matrix())
# Check that the returned gradient is correct.
grad_function = backprop.implicit_grad(scale_matrix)
grads_and_vars = grad_function()
indexed_slice = grads_and_vars[0][0]
gradient = grads_and_vars[0][0].values
self.assertAllEqual([0, 1, 2], indexed_slice.indices)
self.assertAllEqual([[2, 2], [2, 2], [2, 2]], gradient)
def test_raises_if_empty_feature_columns(self):
with self.assertRaisesRegex(
ValueError, "feature_columns must not be empty"
):
df.DenseFeatures(feature_columns=[])(features={})
def test_should_be_dense_column(self):
with self.assertRaisesRegex(ValueError, "must be a .*DenseColumn"):
df.DenseFeatures(
feature_columns=[
tf.feature_column.categorical_column_with_hash_bucket(
"wire_cast", 4
)
]
)(features={"a": [[0]]})
def test_does_not_support_dict_columns(self):
with self.assertRaisesRegex(
ValueError, "Expected feature_columns to be iterable, found dict."
):
df.DenseFeatures(
feature_columns={"a": tf.feature_column.numeric_column("a")}
)(features={"a": [[0]]})
def test_bare_column(self):
with tf.Graph().as_default():
            features = {"a": [0.0]}
net = df.DenseFeatures(tf.feature_column.numeric_column("a"))(
features
)
self.evaluate(tf.compat.v1.global_variables_initializer())
self.evaluate(tf.compat.v1.tables_initializer())
self.assertAllClose([[0.0]], self.evaluate(net))
def test_column_generator(self):
with tf.Graph().as_default():
            features = {"a": [0.0], "b": [1.0]}
columns = (
tf.feature_column.numeric_column(key) for key in features
)
net = df.DenseFeatures(columns)(features)
self.evaluate(tf.compat.v1.global_variables_initializer())
self.evaluate(tf.compat.v1.tables_initializer())
self.assertAllClose([[0.0, 1.0]], self.evaluate(net))
def test_raises_if_duplicate_name(self):
with self.assertRaisesRegex(
ValueError, "Duplicate feature column name found for columns"
):
df.DenseFeatures(
feature_columns=[
tf.feature_column.numeric_column("a"),
tf.feature_column.numeric_column("a"),
]
)(features={"a": [[0]]})
def test_one_column(self):
price = tf.feature_column.numeric_column("price")
with tf.Graph().as_default():
features = {"price": [[1.0], [5.0]]}
net = df.DenseFeatures([price])(features)
self.evaluate(tf.compat.v1.global_variables_initializer())
self.evaluate(tf.compat.v1.tables_initializer())
self.assertAllClose([[1.0], [5.0]], self.evaluate(net))
def test_multi_dimension(self):
price = tf.feature_column.numeric_column("price", shape=2)
with tf.Graph().as_default():
features = {"price": [[1.0, 2.0], [5.0, 6.0]]}
net = df.DenseFeatures([price])(features)
self.evaluate(tf.compat.v1.global_variables_initializer())
self.evaluate(tf.compat.v1.tables_initializer())
self.assertAllClose([[1.0, 2.0], [5.0, 6.0]], self.evaluate(net))
def test_compute_output_shape(self):
price1 = tf.feature_column.numeric_column("price1", shape=2)
price2 = tf.feature_column.numeric_column("price2", shape=4)
with tf.Graph().as_default():
features = {
"price1": [[1.0, 2.0], [5.0, 6.0]],
"price2": [[3.0, 4.0, 5.0, 6.0], [7.0, 8.0, 9.0, 10.0]],
}
dense_features = df.DenseFeatures([price1, price2])
self.assertEqual(
(None, 6), dense_features.compute_output_shape((None,))
)
net = dense_features(features)
self.evaluate(tf.compat.v1.global_variables_initializer())
self.evaluate(tf.compat.v1.tables_initializer())
self.assertAllClose(
[
[1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
[5.0, 6.0, 7.0, 8.0, 9.0, 10.0],
],
self.evaluate(net),
)
def test_raises_if_shape_mismatch(self):
price = tf.feature_column.numeric_column("price", shape=2)
with tf.Graph().as_default():
features = {"price": [[1.0], [5.0]]}
with self.assertRaisesRegex(
Exception,
r"Cannot reshape a tensor with 2 elements to shape \[2,2\]",
):
df.DenseFeatures([price])(features)
def test_reshaping(self):
price = tf.feature_column.numeric_column("price", shape=[1, 2])
with tf.Graph().as_default():
features = {"price": [[[1.0, 2.0]], [[5.0, 6.0]]]}
net = df.DenseFeatures([price])(features)
self.evaluate(tf.compat.v1.global_variables_initializer())
self.evaluate(tf.compat.v1.tables_initializer())
self.assertAllClose([[1.0, 2.0], [5.0, 6.0]], self.evaluate(net))
def test_multi_column(self):
price1 = tf.feature_column.numeric_column("price1", shape=2)
price2 = tf.feature_column.numeric_column("price2")
with tf.Graph().as_default():
features = {
"price1": [[1.0, 2.0], [5.0, 6.0]],
"price2": [[3.0], [4.0]],
}
net = df.DenseFeatures([price1, price2])(features)
self.evaluate(tf.compat.v1.global_variables_initializer())
self.evaluate(tf.compat.v1.tables_initializer())
self.assertAllClose(
[[1.0, 2.0, 3.0], [5.0, 6.0, 4.0]], self.evaluate(net)
)
def test_cols_to_output_tensors(self):
price1 = tf.feature_column.numeric_column("price1", shape=2)
price2 = tf.feature_column.numeric_column("price2")
with tf.Graph().as_default():
cols_dict = {}
features = {
"price1": [[1.0, 2.0], [5.0, 6.0]],
"price2": [[3.0], [4.0]],
}
dense_features = df.DenseFeatures([price1, price2])
net = dense_features(features, cols_dict)
self.evaluate(tf.compat.v1.global_variables_initializer())
self.evaluate(tf.compat.v1.tables_initializer())
self.assertAllClose(
[[1.0, 2.0], [5.0, 6.0]], self.evaluate(cols_dict[price1])
)
self.assertAllClose(
[[3.0], [4.0]], self.evaluate(cols_dict[price2])
)
self.assertAllClose(
[[1.0, 2.0, 3.0], [5.0, 6.0, 4.0]], self.evaluate(net)
)
def test_column_order(self):
price_a = tf.feature_column.numeric_column("price_a")
price_b = tf.feature_column.numeric_column("price_b")
with tf.Graph().as_default():
features = {
"price_a": [[1.0]],
"price_b": [[3.0]],
}
net1 = df.DenseFeatures([price_a, price_b])(features)
net2 = df.DenseFeatures([price_b, price_a])(features)
self.evaluate(tf.compat.v1.global_variables_initializer())
self.evaluate(tf.compat.v1.tables_initializer())
self.assertAllClose([[1.0, 3.0]], self.evaluate(net1))
self.assertAllClose([[1.0, 3.0]], self.evaluate(net2))
def test_fails_for_categorical_column(self):
animal = tf.feature_column.categorical_column_with_identity(
"animal", num_buckets=4
)
with tf.Graph().as_default():
features = {
"animal": tf.SparseTensor(
indices=[[0, 0], [0, 1]], values=[1, 2], dense_shape=[1, 2]
)
}
with self.assertRaisesRegex(Exception, "must be a .*DenseColumn"):
df.DenseFeatures([animal])(features)
def test_static_batch_size_mismatch(self):
price1 = tf.feature_column.numeric_column("price1")
price2 = tf.feature_column.numeric_column("price2")
with tf.Graph().as_default():
features = {
"price1": [[1.0], [5.0], [7.0]], # batchsize = 3
"price2": [[3.0], [4.0]], # batchsize = 2
}
with self.assertRaisesRegex(
ValueError,
r"Batch size \(first dimension\) of each feature must be same.",
):
df.DenseFeatures([price1, price2])(features)
def test_subset_of_static_batch_size_mismatch(self):
price1 = tf.feature_column.numeric_column("price1")
price2 = tf.feature_column.numeric_column("price2")
price3 = tf.feature_column.numeric_column("price3")
with tf.Graph().as_default():
features = {
"price1": tf.compat.v1.placeholder(
dtype=tf.int64
), # batchsize = 3
"price2": [[3.0], [4.0]], # batchsize = 2
"price3": [[3.0], [4.0], [5.0]], # batchsize = 3
}
with self.assertRaisesRegex(
ValueError,
r"Batch size \(first dimension\) of each feature must be same.",
):
df.DenseFeatures([price1, price2, price3])(features)
def test_runtime_batch_size_mismatch(self):
price1 = tf.feature_column.numeric_column("price1")
price2 = tf.feature_column.numeric_column("price2")
with tf.Graph().as_default():
features = {
"price1": tf.compat.v1.placeholder(
dtype=tf.int64
), # batchsize = 3
"price2": [[3.0], [4.0]], # batchsize = 2
}
net = df.DenseFeatures([price1, price2])(features)
with _initialized_session() as sess:
with self.assertRaisesRegex(
tf.errors.OpError,
"Dimension 0 in both shapes must be equal|"
"Dimensions of inputs should match",
):
sess.run(
net,
feed_dict={features["price1"]: [[1.0], [5.0], [7.0]]},
)
def test_runtime_batch_size_matches(self):
price1 = tf.feature_column.numeric_column("price1")
price2 = tf.feature_column.numeric_column("price2")
with tf.Graph().as_default():
features = {
"price1": tf.compat.v1.placeholder(
dtype=tf.int64
), # batchsize = 2
"price2": tf.compat.v1.placeholder(
dtype=tf.int64
), # batchsize = 2
}
net = df.DenseFeatures([price1, price2])(features)
with _initialized_session() as sess:
sess.run(
net,
feed_dict={
features["price1"]: [[1.0], [5.0]],
features["price2"]: [[1.0], [5.0]],
},
)
def test_multiple_layers_with_same_embedding_column(self):
some_sparse_column = (
tf.feature_column.categorical_column_with_hash_bucket(
"sparse_feature", hash_bucket_size=5
)
)
some_embedding_column = tf.feature_column.embedding_column(
some_sparse_column, dimension=10
)
with tf.Graph().as_default():
features = {
"sparse_feature": [["a"], ["x"]],
}
all_cols = [some_embedding_column]
df.DenseFeatures(all_cols)(features)
df.DenseFeatures(all_cols)(features)
# Make sure that 2 variables get created in this case.
self.assertEqual(
2,
len(
tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.GLOBAL_VARIABLES
)
),
)
expected_var_names = [
"dense_features/sparse_feature_embedding/embedding_weights:0",
"dense_features_1/sparse_feature_embedding/embedding_weights:0",
]
self.assertCountEqual(
expected_var_names,
[
v.name
for v in tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.GLOBAL_VARIABLES
)
],
)
@tf_test_utils.run_deprecated_v1
def test_multiple_layers_with_same_shared_embedding_column(self):
categorical_column_a = (
tf.feature_column.categorical_column_with_identity(
key="aaa", num_buckets=3
)
)
categorical_column_b = (
tf.feature_column.categorical_column_with_identity(
key="bbb", num_buckets=3
)
)
embedding_dimension = 2
(
embedding_column_b,
embedding_column_a,
) = tf.feature_column.shared_embeddings(
[categorical_column_b, categorical_column_a],
dimension=embedding_dimension,
)
with tf.Graph().as_default():
features = {
"aaa": tf.SparseTensor(
indices=((0, 0), (1, 0), (1, 1)),
values=(0, 1, 0),
dense_shape=(2, 2),
),
"bbb": tf.SparseTensor(
indices=((0, 0), (1, 0), (1, 1)),
values=(1, 2, 1),
dense_shape=(2, 2),
),
}
all_cols = [embedding_column_a, embedding_column_b]
df.DenseFeatures(all_cols)(features)
df.DenseFeatures(all_cols)(features)
# Make sure that only 1 variable gets created in this case.
self.assertEqual(
1,
len(
tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.GLOBAL_VARIABLES
)
),
)
self.assertCountEqual(
["aaa_bbb_shared_embedding:0"],
[
v.name
for v in tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.GLOBAL_VARIABLES
)
],
)
@tf_test_utils.run_deprecated_v1
def test_multiple_layers_with_same_shared_embedding_column_diff_graphs(
self,
):
categorical_column_a = (
tf.feature_column.categorical_column_with_identity(
key="aaa", num_buckets=3
)
)
categorical_column_b = (
tf.feature_column.categorical_column_with_identity(
key="bbb", num_buckets=3
)
)
embedding_dimension = 2
(
embedding_column_b,
embedding_column_a,
) = tf.feature_column.shared_embeddings(
[categorical_column_b, categorical_column_a],
dimension=embedding_dimension,
)
all_cols = [embedding_column_a, embedding_column_b]
with tf.Graph().as_default():
features = {
"aaa": tf.SparseTensor(
indices=((0, 0), (1, 0), (1, 1)),
values=(0, 1, 0),
dense_shape=(2, 2),
),
"bbb": tf.SparseTensor(
indices=((0, 0), (1, 0), (1, 1)),
values=(1, 2, 1),
dense_shape=(2, 2),
),
}
df.DenseFeatures(all_cols)(features)
# Make sure that only 1 variable gets created in this case.
self.assertEqual(
1,
len(
tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.GLOBAL_VARIABLES
)
),
)
with tf.Graph().as_default():
features1 = {
"aaa": tf.SparseTensor(
indices=((0, 0), (1, 0), (1, 1)),
values=(0, 1, 0),
dense_shape=(2, 2),
),
"bbb": tf.SparseTensor(
indices=((0, 0), (1, 0), (1, 1)),
values=(1, 2, 1),
dense_shape=(2, 2),
),
}
df.DenseFeatures(all_cols)(features1)
# Make sure that only 1 variable gets created in this case.
self.assertEqual(
1,
len(
tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.GLOBAL_VARIABLES
)
),
)
self.assertCountEqual(
["aaa_bbb_shared_embedding:0"],
[
v.name
for v in tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.GLOBAL_VARIABLES
)
],
)
@tf_test_utils.run_deprecated_v1
def test_with_1d_sparse_tensor(self):
embedding_values = (
(1.0, 2.0, 3.0, 4.0, 5.0), # id 0
(6.0, 7.0, 8.0, 9.0, 10.0), # id 1
(11.0, 12.0, 13.0, 14.0, 15.0), # id 2
)
def _initializer(shape, dtype, partition_info=None):
del shape, dtype, partition_info
return embedding_values
# price has 1 dimension in dense_features
price = tf.feature_column.numeric_column("price")
# one_hot_body_style has 3 dims in dense_features.
body_style = tf.feature_column.categorical_column_with_vocabulary_list(
"body-style", vocabulary_list=["hardtop", "wagon", "sedan"]
)
one_hot_body_style = tf.feature_column.indicator_column(body_style)
        # embedded_country has 5 dims in dense_features.
country = tf.feature_column.categorical_column_with_vocabulary_list(
"country", vocabulary_list=["US", "JP", "CA"]
)
embedded_country = tf.feature_column.embedding_column(
country, dimension=5, initializer=_initializer
)
# Provides 1-dim tensor and dense tensor.
features = {
"price": tf.constant(
[
11.0,
12.0,
]
),
"body-style": tf.SparseTensor(
indices=((0,), (1,)),
values=("sedan", "hardtop"),
dense_shape=(2,),
),
# This is dense tensor for the categorical_column.
"country": tf.constant(["CA", "US"]),
}
self.assertEqual(1, features["price"].shape.ndims)
self.assertEqual(1, features["body-style"].dense_shape.get_shape()[0])
self.assertEqual(1, features["country"].shape.ndims)
net = df.DenseFeatures([price, one_hot_body_style, embedded_country])(
features
)
self.assertEqual(1 + 3 + 5, net.shape[1])
with _initialized_session() as sess:
            # Each row is formed by concatenating `one_hot_body_style`,
            # `embedded_country`, and `price` in order.
self.assertAllEqual(
[
[0.0, 0.0, 1.0, 11.0, 12.0, 13.0, 14.0, 15.0, 11.0],
[1.0, 0.0, 0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 12.0],
],
sess.run(net),
)
@tf_test_utils.run_deprecated_v1
def test_with_1d_unknown_shape_sparse_tensor(self):
embedding_values = (
(1.0, 2.0), # id 0
(6.0, 7.0), # id 1
(11.0, 12.0), # id 2
)
def _initializer(shape, dtype, partition_info=None):
del shape, dtype, partition_info
return embedding_values
# price has 1 dimension in dense_features
price = tf.feature_column.numeric_column("price")
# one_hot_body_style has 3 dims in dense_features.
body_style = tf.feature_column.categorical_column_with_vocabulary_list(
"body-style", vocabulary_list=["hardtop", "wagon", "sedan"]
)
one_hot_body_style = tf.feature_column.indicator_column(body_style)
        # embedded_country has 2 dims in dense_features.
country = tf.feature_column.categorical_column_with_vocabulary_list(
"country", vocabulary_list=["US", "JP", "CA"]
)
embedded_country = tf.feature_column.embedding_column(
country, dimension=2, initializer=_initializer
)
# Provides 1-dim tensor and dense tensor.
features = {
"price": tf.compat.v1.placeholder(tf.float32),
"body-style": tf.compat.v1.sparse_placeholder(tf.string),
# This is dense tensor for the categorical_column.
"country": tf.compat.v1.placeholder(tf.string),
}
self.assertIsNone(features["price"].shape.ndims)
self.assertIsNone(features["body-style"].get_shape().ndims)
self.assertIsNone(features["country"].shape.ndims)
price_data = np.array([11.0, 12.0])
body_style_data = tf.compat.v1.SparseTensorValue(
indices=((0,), (1,)), values=("sedan", "hardtop"), dense_shape=(2,)
)
country_data = np.array([["US"], ["CA"]])
net = df.DenseFeatures([price, one_hot_body_style, embedded_country])(
features
)
self.assertEqual(1 + 3 + 2, net.shape[1])
with _initialized_session() as sess:
            # Each row is formed by concatenating `one_hot_body_style`,
            # `embedded_country`, and `price` in order.
self.assertAllEqual(
[
[0.0, 0.0, 1.0, 1.0, 2.0, 11.0],
[1.0, 0.0, 0.0, 11.0, 12.0, 12.0],
],
sess.run(
net,
feed_dict={
features["price"]: price_data,
features["body-style"]: body_style_data,
features["country"]: country_data,
},
),
)
@tf_test_utils.run_deprecated_v1
def test_with_rank_0_feature(self):
# price has 1 dimension in dense_features
price = tf.feature_column.numeric_column("price")
features = {
"price": tf.constant(0),
}
self.assertEqual(0, features["price"].shape.ndims)
# Static rank 0 should fail
with self.assertRaisesRegex(
ValueError, "Feature .* cannot have rank 0"
):
df.DenseFeatures([price])(features)
# Dynamic rank 0 should fail
features = {
"price": tf.compat.v1.placeholder(tf.float32),
}
net = df.DenseFeatures([price])(features)
self.assertEqual(1, net.shape[1])
with _initialized_session() as sess:
with self.assertRaisesOpError("Feature .* cannot have rank 0"):
sess.run(net, feed_dict={features["price"]: np.array(1)})
class IndicatorColumnTest(tf.test.TestCase):
@tf_test_utils.run_deprecated_v1
def test_dense_features(self):
animal = tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_identity(
"animal", num_buckets=4
)
)
with tf.Graph().as_default():
features = {
"animal": tf.SparseTensor(
indices=[[0, 0], [0, 1]], values=[1, 2], dense_shape=[1, 2]
)
}
net = df.DenseFeatures([animal])(features)
self.evaluate(tf.compat.v1.global_variables_initializer())
self.evaluate(tf.compat.v1.tables_initializer())
self.assertAllClose([[0.0, 1.0, 1.0, 0.0]], self.evaluate(net))
class EmbeddingColumnTest(tf.test.TestCase, parameterized.TestCase):
@parameterized.named_parameters(
{
"testcase_name": "use_safe_embedding_lookup",
"use_safe_embedding_lookup": True,
"partition_variables": False,
},
{
"testcase_name": "dont_use_safe_embedding_lookup",
"use_safe_embedding_lookup": False,
"partition_variables": False,
},
{
"testcase_name": "use_safe_embedding_lookup_partitioned",
"use_safe_embedding_lookup": True,
"partition_variables": True,
},
{
"testcase_name": "dont_use_safe_embedding_lookup_partitioned",
"use_safe_embedding_lookup": False,
"partition_variables": True,
},
)
@tf_test_utils.run_deprecated_v1
def test_dense_features(
self, use_safe_embedding_lookup, partition_variables
):
# Inputs.
vocabulary_size = 4
sparse_input = tf.compat.v1.SparseTensorValue(
# example 0, ids [2]
# example 1, ids [0, 1]
# example 2, ids []
# example 3, ids [1]
indices=((0, 0), (1, 0), (1, 4), (3, 0)),
values=(2, 0, 1, 1),
dense_shape=(4, 5),
)
# Embedding variable.
embedding_dimension = 2
embedding_values = (
(1.0, 2.0), # id 0
(3.0, 5.0), # id 1
(7.0, 11.0), # id 2
(9.0, 13.0), # id 3
)
def _initializer(shape, dtype, partition_info=None):
self.assertEqual(tf.float32, dtype)
if partition_variables:
assert partition_info is not None
self.assertEqual(
[vocabulary_size, embedding_dimension],
partition_info.full_shape,
)
self.assertAllEqual((2, embedding_dimension), shape)
return tf.slice(
embedding_values, partition_info.var_offset, shape
)
else:
self.assertAllEqual(
(vocabulary_size, embedding_dimension), shape
)
self.assertIsNone(partition_info)
return embedding_values
# Expected lookup result, using combiner='mean'.
expected_lookups = (
# example 0, ids [2], embedding = [7, 11]
(7.0, 11.0),
            # example 1, ids [0, 1],
            # embedding = mean([1, 2] + [3, 5]) = [2, 3.5]
(2.0, 3.5),
# example 2, ids [], embedding = [0, 0]
(0.0, 0.0),
# example 3, ids [1], embedding = [3, 5]
(3.0, 5.0),
)
# Build columns.
categorical_column = tf.feature_column.categorical_column_with_identity(
key="aaa", num_buckets=vocabulary_size
)
partitioner = None
if partition_variables:
partitioner = tf.compat.v1.fixed_size_partitioner(2, axis=0)
with tf.compat.v1.variable_scope("vars", partitioner=partitioner):
embedding_column = tf.feature_column.embedding_column(
categorical_column,
dimension=embedding_dimension,
initializer=_initializer,
use_safe_embedding_lookup=use_safe_embedding_lookup,
)
# Provide sparse input and get dense result.
l = df.DenseFeatures((embedding_column,))
dense_features = l({"aaa": sparse_input})
# Assert expected embedding variable and lookups.
global_vars = tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.GLOBAL_VARIABLES
)
if partition_variables:
self.assertCountEqual(
(
"vars/dense_features/aaa_embedding/embedding_weights/"
"part_0:0",
"vars/dense_features/aaa_embedding/embedding_weights/"
"part_1:0",
),
tuple([v.name for v in global_vars]),
)
else:
self.assertCountEqual(
("vars/dense_features/aaa_embedding/embedding_weights:0",),
tuple([v.name for v in global_vars]),
)
for v in global_vars:
self.assertIsInstance(v, tf.Variable)
trainable_vars = tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.TRAINABLE_VARIABLES
)
if partition_variables:
self.assertCountEqual(
(
"vars/dense_features/aaa_embedding/embedding_weights/"
"part_0:0",
"vars/dense_features/aaa_embedding/embedding_weights/"
"part_1:0",
),
tuple([v.name for v in trainable_vars]),
)
else:
self.assertCountEqual(
("vars/dense_features/aaa_embedding/embedding_weights:0",),
tuple([v.name for v in trainable_vars]),
)
self.evaluate(tf.compat.v1.global_variables_initializer())
self.evaluate(tf.compat.v1.tables_initializer())
if partition_variables:
self.assertAllEqual(
embedding_values,
self.evaluate(tf.concat(trainable_vars, axis=0)),
)
else:
self.assertAllEqual(
embedding_values, self.evaluate(trainable_vars[0])
)
self.assertAllEqual(expected_lookups, self.evaluate(dense_features))
if use_safe_embedding_lookup:
self.assertIn(
"SparseFillEmptyRows",
[
x.type
for x in tf.compat.v1.get_default_graph().get_operations()
],
)
else:
self.assertNotIn(
"SparseFillEmptyRows",
[
x.type
for x in tf.compat.v1.get_default_graph().get_operations()
],
)
@tf_test_utils.run_deprecated_v1
def test_dense_features_not_trainable(self):
# Inputs.
vocabulary_size = 3
sparse_input = tf.compat.v1.SparseTensorValue(
# example 0, ids [2]
# example 1, ids [0, 1]
# example 2, ids []
# example 3, ids [1]
indices=((0, 0), (1, 0), (1, 4), (3, 0)),
values=(2, 0, 1, 1),
dense_shape=(4, 5),
)
# Embedding variable.
embedding_dimension = 2
embedding_values = (
(1.0, 2.0), # id 0
(3.0, 5.0), # id 1
(7.0, 11.0), # id 2
)
def _initializer(shape, dtype, partition_info=None):
self.assertAllEqual((vocabulary_size, embedding_dimension), shape)
self.assertEqual(tf.float32, dtype)
self.assertIsNone(partition_info)
return embedding_values
# Expected lookup result, using combiner='mean'.
expected_lookups = (
# example 0, ids [2], embedding = [7, 11]
(7.0, 11.0),
            # example 1, ids [0, 1],
            # embedding = mean([1, 2] + [3, 5]) = [2, 3.5]
(2.0, 3.5),
# example 2, ids [], embedding = [0, 0]
(0.0, 0.0),
# example 3, ids [1], embedding = [3, 5]
(3.0, 5.0),
)
# Build columns.
categorical_column = tf.feature_column.categorical_column_with_identity(
key="aaa", num_buckets=vocabulary_size
)
embedding_column = tf.feature_column.embedding_column(
categorical_column,
dimension=embedding_dimension,
initializer=_initializer,
trainable=False,
)
# Provide sparse input and get dense result.
dense_features = df.DenseFeatures((embedding_column,))(
{"aaa": sparse_input}
)
# Assert expected embedding variable and lookups.
global_vars = tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.GLOBAL_VARIABLES
)
self.assertCountEqual(
("dense_features/aaa_embedding/embedding_weights:0",),
tuple([v.name for v in global_vars]),
)
self.assertCountEqual(
[],
tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.TRAINABLE_VARIABLES
),
)
self.evaluate(tf.compat.v1.global_variables_initializer())
self.evaluate(tf.compat.v1.tables_initializer())
self.assertAllEqual(embedding_values, self.evaluate(global_vars[0]))
self.assertAllEqual(expected_lookups, self.evaluate(dense_features))
class SharedEmbeddingColumnTest(tf.test.TestCase, parameterized.TestCase):
def _test_dense_features(self, trainable=True):
# Inputs.
vocabulary_size = 3
sparse_input_a = tf.compat.v1.SparseTensorValue(
# example 0, ids [2]
# example 1, ids [0, 1]
indices=((0, 0), (1, 0), (1, 4)),
values=(2, 0, 1),
dense_shape=(2, 5),
)
sparse_input_b = tf.compat.v1.SparseTensorValue(
# example 0, ids [0]
# example 1, ids []
indices=((0, 0),),
values=(0,),
dense_shape=(2, 5),
)
sparse_input_c = tf.compat.v1.SparseTensorValue(
# example 0, ids [2]
# example 1, ids [0, 1]
indices=((0, 1), (1, 1), (1, 3)),
values=(2, 0, 1),
dense_shape=(2, 5),
)
sparse_input_d = tf.compat.v1.SparseTensorValue(
# example 0, ids [2]
# example 1, ids []
indices=((0, 1),),
values=(2,),
dense_shape=(2, 5),
)
# Embedding variable.
embedding_dimension = 2
embedding_values = (
(1.0, 2.0), # id 0
(3.0, 5.0), # id 1
(7.0, 11.0), # id 2
)
def _initializer(shape, dtype, partition_info=None):
self.assertAllEqual((vocabulary_size, embedding_dimension), shape)
self.assertEqual(tf.float32, dtype)
self.assertIsNone(partition_info)
return embedding_values
# Expected lookup result, using combiner='mean'.
expected_lookups = (
# example 0:
# A ids [2], embedding = [7, 11]
# B ids [0], embedding = [1, 2]
# C ids [2], embedding = [7, 11]
# D ids [2], embedding = [7, 11]
(7.0, 11.0, 1.0, 2.0, 7.0, 11.0, 7.0, 11.0),
# example 1:
# A ids [0, 1], embedding = mean([1, 2] + [3, 5]) = [2, 3.5]
# B ids [], embedding = [0, 0]
# C ids [0, 1], embedding = mean([1, 2] + [3, 5]) = [2, 3.5]
# D ids [], embedding = [0, 0]
(2.0, 3.5, 0.0, 0.0, 2.0, 3.5, 0.0, 0.0),
)
# Build columns.
categorical_column_a = (
tf.feature_column.categorical_column_with_identity(
key="aaa", num_buckets=vocabulary_size
)
)
categorical_column_b = (
tf.feature_column.categorical_column_with_identity(
key="bbb", num_buckets=vocabulary_size
)
)
categorical_column_c = (
tf.feature_column.categorical_column_with_identity(
key="ccc", num_buckets=vocabulary_size
)
)
categorical_column_d = (
tf.feature_column.categorical_column_with_identity(
key="ddd", num_buckets=vocabulary_size
)
)
(
embedding_column_a,
embedding_column_b,
) = tf.feature_column.shared_embeddings(
[categorical_column_a, categorical_column_b],
dimension=embedding_dimension,
initializer=_initializer,
trainable=trainable,
)
(
embedding_column_c,
embedding_column_d,
) = tf.feature_column.shared_embeddings(
[categorical_column_c, categorical_column_d],
dimension=embedding_dimension,
initializer=_initializer,
trainable=trainable,
)
features = {
"aaa": sparse_input_a,
"bbb": sparse_input_b,
"ccc": sparse_input_c,
"ddd": sparse_input_d,
}
# Provide sparse input and get dense result.
dense_features = df.DenseFeatures(
feature_columns=(
embedding_column_b,
embedding_column_a,
embedding_column_c,
embedding_column_d,
)
)(features)
# Assert expected embedding variable and lookups.
global_vars = tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.GLOBAL_VARIABLES
)
self.assertCountEqual(
["aaa_bbb_shared_embedding:0", "ccc_ddd_shared_embedding:0"],
tuple([v.name for v in global_vars]),
)
for v in global_vars:
self.assertIsInstance(v, tf.Variable)
trainable_vars = tf.compat.v1.get_collection(
tf.compat.v1.GraphKeys.TRAINABLE_VARIABLES
)
if trainable:
self.assertCountEqual(
["aaa_bbb_shared_embedding:0", "ccc_ddd_shared_embedding:0"],
tuple([v.name for v in trainable_vars]),
)
else:
self.assertCountEqual([], tuple([v.name for v in trainable_vars]))
shared_embedding_vars = global_vars
self.evaluate(tf.compat.v1.global_variables_initializer())
self.evaluate(tf.compat.v1.tables_initializer())
self.assertAllEqual(
embedding_values, self.evaluate(shared_embedding_vars[0])
)
self.assertAllEqual(expected_lookups, self.evaluate(dense_features))
@tf_test_utils.run_deprecated_v1
def test_dense_features(self):
self._test_dense_features()
@tf_test_utils.run_deprecated_v1
def test_dense_features_no_trainable(self):
self._test_dense_features(trainable=False)
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class DenseFeaturesSerializationTest(tf.test.TestCase, parameterized.TestCase):
@parameterized.named_parameters(
("trainable", True, "trainable"), ("not_trainable", False, "frozen")
)
def test_get_config(self, trainable, name):
cols = [
tf.feature_column.numeric_column("a"),
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_identity(
key="b", num_buckets=3
),
dimension=2,
),
]
orig_layer = df.DenseFeatures(cols, trainable=trainable, name=name)
config = orig_layer.get_config()
self.assertEqual(config["name"], orig_layer.name)
self.assertEqual(config["trainable"], trainable)
self.assertLen(config["feature_columns"], 2)
self.assertEqual(
config["feature_columns"][0]["class_name"], "NumericColumn"
)
self.assertEqual(config["feature_columns"][0]["config"]["shape"], (1,))
self.assertEqual(
config["feature_columns"][1]["class_name"], "EmbeddingColumn"
)
@parameterized.named_parameters(
("trainable", True, "trainable"), ("not_trainable", False, "frozen")
)
def test_from_config(self, trainable, name):
cols = [
tf.feature_column.numeric_column("a"),
tf.feature_column.embedding_column(
tf.feature_column.categorical_column_with_vocabulary_list(
"b", vocabulary_list=["1", "2", "3"]
),
dimension=2,
),
tf.feature_column.indicator_column(
tf.feature_column.categorical_column_with_hash_bucket(
key="c", hash_bucket_size=3
)
),
]
orig_layer = df.DenseFeatures(cols, trainable=trainable, name=name)
config = orig_layer.get_config()
new_layer = df.DenseFeatures.from_config(config)
self.assertEqual(new_layer.name, orig_layer.name)
self.assertEqual(new_layer.trainable, trainable)
self.assertLen(new_layer._feature_columns, 3)
self.assertEqual(new_layer._feature_columns[0].name, "a")
self.assertEqual(new_layer._feature_columns[1].initializer.mean, 0.0)
self.assertEqual(
new_layer._feature_columns[1].categorical_column.name, "b"
)
self.assertIsInstance(new_layer._feature_columns[0], cols[0].__class__)
self.assertIsInstance(new_layer._feature_columns[1], cols[1].__class__)
self.assertIsInstance(new_layer._feature_columns[2], cols[2].__class__)
def test_crossed_column(self):
a = tf.feature_column.categorical_column_with_vocabulary_list(
"a", vocabulary_list=["1", "2", "3"]
)
b = tf.feature_column.categorical_column_with_vocabulary_list(
"b", vocabulary_list=["1", "2", "3"]
)
ab = tf.feature_column.crossed_column([a, b], hash_bucket_size=2)
cols = [tf.feature_column.indicator_column(ab)]
orig_layer = df.DenseFeatures(cols)
config = orig_layer.get_config()
new_layer = df.DenseFeatures.from_config(config)
self.assertLen(new_layer._feature_columns, 1)
self.assertEqual(new_layer._feature_columns[0].name, "a_X_b_indicator")
@test_combinations.generate(test_combinations.combine(mode=["graph", "eager"]))
class SequenceFeatureColumnsTest(tf.test.TestCase):
"""Tests DenseFeatures with sequence feature columns."""
def test_embedding_column(self):
"""Tests that error is raised for sequence embedding column."""
vocabulary_size = 3
sparse_input = tf.compat.v1.SparseTensorValue(
# example 0, ids [2]
# example 1, ids [0, 1]
indices=((0, 0), (1, 0), (1, 1)),
values=(2, 0, 1),
dense_shape=(2, 2),
)
categorical_column_a = (
tf.feature_column.sequence_categorical_column_with_identity(
key="aaa", num_buckets=vocabulary_size
)
)
embedding_column_a = tf.feature_column.embedding_column(
categorical_column_a, dimension=2
)
input_layer = df.DenseFeatures([embedding_column_a])
with self.assertRaisesRegex(
ValueError,
r"In embedding_column: aaa_embedding\. categorical_column must not "
r"be of type SequenceCategoricalColumn\.",
):
_ = input_layer({"aaa": sparse_input})
def test_indicator_column(self):
"""Tests that error is raised for sequence indicator column."""
vocabulary_size = 3
sparse_input = tf.compat.v1.SparseTensorValue(
# example 0, ids [2]
# example 1, ids [0, 1]
indices=((0, 0), (1, 0), (1, 1)),
values=(2, 0, 1),
dense_shape=(2, 2),
)
categorical_column_a = (
tf.feature_column.sequence_categorical_column_with_identity(
key="aaa", num_buckets=vocabulary_size
)
)
indicator_column_a = tf.feature_column.indicator_column(
categorical_column_a
)
input_layer = df.DenseFeatures([indicator_column_a])
with self.assertRaisesRegex(
ValueError,
r"In indicator_column: aaa_indicator\. categorical_column must not "
r"be of type SequenceCategoricalColumn\.",
):
_ = input_layer({"aaa": sparse_input})
if __name__ == "__main__":
tf.test.main()
| tf-keras/tf_keras/feature_column/dense_features_test.py/0 | {
"file_path": "tf-keras/tf_keras/feature_column/dense_features_test.py",
"repo_id": "tf-keras",
"token_count": 26765
} | 226 |
"""Class to specify an input's shape/dtype/value range.
"""
import tensorflow as tf
class InputSpec:
def __init__(self, shape, dtype="float32", range=None):
self.shape = shape
self.dtype = dtype
self.range = range
def spec_to_value(spec):
shape = spec.shape
dtype = spec.dtype
rg = spec.range or [0, 1]
if dtype == "string":
return tf.constant(
["some string" for _ in range(shape[0])], dtype="string"
)
return tf.random.stateless_uniform(
shape, seed=[123, 1], minval=rg[0], maxval=rg[1], dtype=dtype
)
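# Illustrative usage (values here are arbitrary):
#   spec = InputSpec(shape=(4, 8), dtype="float32", range=[0, 255])
#   value = spec_to_value(spec)  # float32 tensor of shape (4, 8) in [0, 255)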
| tf-keras/tf_keras/integration_test/models/input_spec.py/0 | {
"file_path": "tf-keras/tf_keras/integration_test/models/input_spec.py",
"repo_id": "tf-keras",
"token_count": 262
} | 227 |
# Copyright 2021 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Demonstrate TF-Keras preprocessing layers applied in tf.data.Dataset.map."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow.compat.v2 as tf
from tf_keras.integration_test import preprocessing_test_utils as utils
ds_combinations = tf.__internal__.distribute.combinations
multi_process_runner = tf.__internal__.distribute.multi_process_runner
test_combinations = tf.__internal__.test.combinations
# Note: Strategy combinations are not (yet) public APIs, so they are subject
# to API changes and backward-compatibility is not guaranteed.
STRATEGIES = [
ds_combinations.default_strategy,
ds_combinations.mirrored_strategy_with_two_cpus,
ds_combinations.mirrored_strategy_with_two_gpus,
ds_combinations.tpu_strategy,
ds_combinations.cloud_tpu_strategy,
ds_combinations.parameter_server_strategy_3worker_2ps_cpu,
ds_combinations.parameter_server_strategy_3worker_2ps_1gpu,
ds_combinations.multi_worker_mirrored_2x1_cpu,
ds_combinations.multi_worker_mirrored_2x2_gpu,
ds_combinations.central_storage_strategy_with_two_gpus,
]
@ds_combinations.generate(
test_combinations.combine(strategy=STRATEGIES, mode="eager")
)
class PreprocessingAppliedInDatasetCreatorTest(tf.test.TestCase):
"""Demonstrate TF-Keras preprocessing layers applied in
tf.data.Dataset.map.
"""
def testDistributedModelFit(self, strategy):
if not tf.__internal__.tf2.enabled() and isinstance(
strategy, tf.distribute.experimental.ParameterServerStrategy
):
self.skipTest(
"Parameter Server strategy with dataset creator need to be run "
"when eager execution is enabled."
)
with strategy.scope():
preprocessing_model = utils.make_preprocessing_model(
self.get_temp_dir()
)
training_model = utils.make_training_model()
training_model.compile(optimizer="sgd", loss="binary_crossentropy")
def dataset_fn(input_context):
dataset = utils.make_dataset()
dataset = dataset.shard(
input_context.num_input_pipelines,
input_context.input_pipeline_id,
)
batch_size = input_context.get_per_replica_batch_size(
global_batch_size=utils.BATCH_SIZE
)
dataset = dataset.batch(batch_size).repeat().prefetch(2)
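            # Applying the preprocessing model inside `Dataset.map` keeps
            # preprocessing in the tf.data input pipeline, so the training
            # step only receives already-preprocessed batches.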
return dataset.map(lambda x, y: (preprocessing_model(x), y))
dataset_creator = tf.keras.utils.experimental.DatasetCreator(dataset_fn)
training_model.fit(
dataset_creator, epochs=2, steps_per_epoch=utils.STEPS
)
if __name__ == "__main__":
multi_process_runner.test_main()
| tf-keras/tf_keras/integration_test/preprocessing_applied_in_dataset_creator_test.py/0 | {
"file_path": "tf-keras/tf_keras/integration_test/preprocessing_applied_in_dataset_creator_test.py",
"repo_id": "tf-keras",
"token_count": 1325
} | 228 |
# Description:
# Contains the TF-Keras layers (internal TensorFlow version).
# Placeholder: load unaliased py_library
load("@org_keras//tf_keras:tf_keras.bzl", "tf_py_test")
package(
# copybara:uncomment default_applicable_licenses = ["//tf_keras:license"],
# TODO(scottzhu): Remove non-keras deps from TF.
default_visibility = [
"//tf_keras:friends",
"//third_party/tensorflow/python/distribute:__pkg__",
"//third_party/tensorflow/python/feature_column:__pkg__",
"//third_party/tensorflow/python/trackable:__pkg__",
"//third_party/tensorflow/tools/pip_package:__pkg__",
],
licenses = ["notice"],
)
# A separate build for layers without serialization to avoid circular deps
# with feature column.
py_library(
name = "layers",
srcs = [
"__init__.py",
"serialization.py",
],
srcs_version = "PY3",
deps = [
":kernelized",
":noise",
"//tf_keras/feature_column",
"//tf_keras/layers/activation",
"//tf_keras/layers/attention",
"//tf_keras/layers/convolutional",
"//tf_keras/layers/core",
"//tf_keras/layers/locally_connected",
"//tf_keras/layers/merging",
"//tf_keras/layers/normalization",
"//tf_keras/layers/pooling",
"//tf_keras/layers/preprocessing",
"//tf_keras/layers/regularization",
"//tf_keras/layers/reshaping",
"//tf_keras/layers/rnn",
"//tf_keras/premade_models",
"//tf_keras/utils:tf_utils",
],
)
py_library(
name = "kernelized",
srcs = ["kernelized.py"],
srcs_version = "PY3",
deps = [
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras:base_layer",
"//tf_keras/engine:input_spec",
"//tf_keras/initializers",
],
)
py_library(
name = "noise",
srcs = ["noise.py"],
srcs_version = "PY3",
deps = [
"//tf_keras/layers/regularization:alpha_dropout",
"//tf_keras/layers/regularization:gaussian_dropout",
"//tf_keras/layers/regularization:gaussian_noise",
],
)
tf_py_test(
name = "tensorflow_op_layer_test",
size = "medium",
srcs = ["tensorflow_op_layer_test.py"],
python_version = "PY3",
shard_count = 3,
deps = [
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/saving",
"//tf_keras/testing_infra:test_combinations",
],
)
tf_py_test(
name = "subclassed_layers_test",
size = "medium",
srcs = ["subclassed_layers_test.py"],
python_version = "PY3",
shard_count = 3,
deps = [
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
],
)
tf_py_test(
name = "serialization_test",
size = "small",
srcs = ["serialization_test.py"],
python_version = "PY3",
deps = [
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
],
)
tf_py_test(
name = "kernelized_test",
size = "small",
srcs = ["kernelized_test.py"],
python_version = "PY3",
deps = [
":layers",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras:backend",
"//tf_keras/initializers",
"//tf_keras/testing_infra:test_combinations",
],
)
tf_py_test(
name = "layers_test",
size = "small",
srcs = ["layers_test.py"],
python_version = "PY3",
deps = [
":layers",
"//:expect_tensorflow_installed",
],
)
| tf-keras/tf_keras/layers/BUILD/0 | {
"file_path": "tf-keras/tf_keras/layers/BUILD",
"repo_id": "tf-keras",
"token_count": 1907
} | 229 |
# Description:
# Contains the TF-Keras attention layers.
# Placeholder: load unaliased py_library
load("@org_keras//tf_keras:tf_keras.bzl", "tf_py_test")
package(
# copybara:uncomment default_applicable_licenses = ["//tf_keras:license"],
default_visibility = [
"//tf_keras:friends",
"//third_party/py/tensorflow_gnn:__subpackages__",
"//third_party/tensorflow/python/distribute:__pkg__",
"//third_party/tensorflow/python/feature_column:__pkg__",
"//third_party/tensorflow/python/trackable:__pkg__",
"//third_party/tensorflow/tools/pip_package:__pkg__",
"//third_party/tensorflow_models/official/projects/residual_mobilenet/modeling/backbones:__pkg__",
],
licenses = ["notice"],
)
py_library(
name = "attention",
srcs = [
"__init__.py",
],
srcs_version = "PY3",
deps = [
":additive_attention",
":attention_layer",
":multi_head_attention",
],
)
py_library(
name = "multi_head_attention",
srcs = ["multi_head_attention.py"],
srcs_version = "PY3",
deps = [
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras:constraints",
"//tf_keras:regularizers",
"//tf_keras/engine:base_layer",
"//tf_keras/initializers",
"//tf_keras/layers/activation",
"//tf_keras/layers/core",
"//tf_keras/layers/regularization",
"//tf_keras/utils:tf_utils",
],
)
py_library(
name = "base_dense_attention",
srcs = ["base_dense_attention.py"],
srcs_version = "PY3",
deps = [
"//:expect_tensorflow_installed",
"//tf_keras:backend",
"//tf_keras:base_layer",
"//tf_keras/utils:control_flow_util",
],
)
py_library(
name = "attention_layer",
srcs = ["attention.py"],
srcs_version = "PY3",
deps = [
":base_dense_attention",
"//:expect_tensorflow_installed",
],
)
py_library(
name = "additive_attention",
srcs = ["additive_attention.py"],
srcs_version = "PY3",
deps = [
":base_dense_attention",
"//:expect_tensorflow_installed",
],
)
tf_py_test(
name = "multi_head_attention_test",
srcs = ["multi_head_attention_test.py"],
python_version = "PY3",
deps = [
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
],
)
tf_py_test(
name = "base_dense_attention_test",
size = "medium",
srcs = ["base_dense_attention_test.py"],
python_version = "PY3",
deps = [
":base_dense_attention",
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/testing_infra:test_combinations",
],
)
tf_py_test(
name = "attention_test",
size = "medium",
srcs = ["attention_test.py"],
python_version = "PY3",
deps = [
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/layers/core",
"//tf_keras/testing_infra:test_combinations",
],
)
tf_py_test(
name = "additive_attention_test",
size = "medium",
srcs = ["additive_attention_test.py"],
python_version = "PY3",
deps = [
"//:expect_absl_installed", # absl/testing:parameterized
"//:expect_numpy_installed",
"//:expect_tensorflow_installed",
"//tf_keras",
"//tf_keras/mixed_precision:policy",
"//tf_keras/testing_infra:test_combinations",
"//tf_keras/testing_infra:test_utils",
],
)
| tf-keras/tf_keras/layers/attention/BUILD/0 | {
"file_path": "tf-keras/tf_keras/layers/attention/BUILD",
"repo_id": "tf-keras",
"token_count": 1868
} | 230 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Keras 1D transposed convolution layer (sometimes called deconvolution)."""
import tensorflow.compat.v2 as tf
from tf_keras import activations
from tf_keras import constraints
from tf_keras import initializers
from tf_keras import regularizers
from tf_keras.dtensor import utils
from tf_keras.engine.input_spec import InputSpec
from tf_keras.layers.convolutional.conv1d import Conv1D
from tf_keras.utils import conv_utils
# isort: off
from tensorflow.python.util.tf_export import keras_export
@keras_export(
"keras.layers.Conv1DTranspose", "keras.layers.Convolution1DTranspose"
)
class Conv1DTranspose(Conv1D):
"""Transposed convolution layer (sometimes called Deconvolution).
The need for transposed convolutions generally arises
from the desire to use a transformation going in the opposite direction
of a normal convolution, i.e., from something that has the shape of the
output of some convolution to something that has the shape of its input
while maintaining a connectivity pattern that is compatible with
said convolution.
When using this layer as the first layer in a model,
provide the keyword argument `input_shape`
(tuple of integers or `None`, does not include the sample axis),
e.g. `input_shape=(128, 3)` for data with 128 time steps and 3 channels.
Args:
filters: Integer, the dimensionality of the output space
(i.e. the number of output filters in the convolution).
kernel_size: An integer length of the 1D convolution window.
strides: An integer specifying the stride of the convolution along the
time dimension. Specifying a stride value != 1 is incompatible with
specifying a `dilation_rate` value != 1. Defaults to `1`.
padding: one of `"valid"` or `"same"` (case-insensitive).
`"valid"` means no padding. `"same"` results in padding with zeros
evenly to the left/right or up/down of the input such that output has
the same height/width dimension as the input.
output_padding: An integer specifying the amount of padding along
the time dimension of the output tensor.
The amount of output padding must be lower than the stride.
If set to `None` (default), the output shape is inferred.
data_format: A string, one of `channels_last` (default) or
`channels_first`. The ordering of the dimensions in the inputs.
`channels_last` corresponds to inputs with shape
`(batch_size, length, channels)` while `channels_first` corresponds to
inputs with shape `(batch_size, channels, length)`.
dilation_rate: an integer, specifying
the dilation rate to use for dilated convolution.
Currently, specifying a `dilation_rate` value != 1 is
incompatible with specifying a stride value != 1.
Also dilation rate larger than 1 is not currently supported.
activation: Activation function to use.
If you don't specify anything, no activation is applied
(see `keras.activations`).
use_bias: Boolean, whether the layer uses a bias vector.
kernel_initializer: Initializer for the `kernel` weights matrix
(see `keras.initializers`). Defaults to 'glorot_uniform'.
bias_initializer: Initializer for the bias vector
(see `keras.initializers`). Defaults to 'zeros'.
kernel_regularizer: Regularizer function applied to
the `kernel` weights matrix (see `keras.regularizers`).
bias_regularizer: Regularizer function applied to the bias vector
(see `keras.regularizers`).
activity_regularizer: Regularizer function applied to
the output of the layer (its "activation") (see `keras.regularizers`).
kernel_constraint: Constraint function applied to the kernel matrix
(see `keras.constraints`).
bias_constraint: Constraint function applied to the bias vector
(see `keras.constraints`).
Input shape:
3D tensor with shape:
`(batch_size, steps, channels)`
Output shape:
3D tensor with shape:
`(batch_size, new_steps, filters)`
If `output_padding` is specified:
```
new_timesteps = ((timesteps - 1) * strides + kernel_size -
2 * padding + output_padding)
```
Returns:
A tensor of rank 3 representing
`activation(conv1dtranspose(inputs, kernel) + bias)`.
Raises:
ValueError: if `padding` is "causal".
ValueError: when both `strides` > 1 and `dilation_rate` > 1.
References:
- [A guide to convolution arithmetic for deep learning](
https://arxiv.org/abs/1603.07285v1)
- [Deconvolutional Networks](
https://www.matthewzeiler.com/mattzeiler/deconvolutionalnetworks.pdf)
"""
@utils.allow_initializer_layout
def __init__(
self,
filters,
kernel_size,
strides=1,
padding="valid",
output_padding=None,
data_format=None,
dilation_rate=1,
activation=None,
use_bias=True,
kernel_initializer="glorot_uniform",
bias_initializer="zeros",
kernel_regularizer=None,
bias_regularizer=None,
activity_regularizer=None,
kernel_constraint=None,
bias_constraint=None,
**kwargs,
):
super().__init__(
filters=filters,
kernel_size=kernel_size,
strides=strides,
padding=padding,
data_format=data_format,
dilation_rate=dilation_rate,
activation=activations.get(activation),
use_bias=use_bias,
kernel_initializer=initializers.get(kernel_initializer),
bias_initializer=initializers.get(bias_initializer),
kernel_regularizer=regularizers.get(kernel_regularizer),
bias_regularizer=regularizers.get(bias_regularizer),
activity_regularizer=regularizers.get(activity_regularizer),
kernel_constraint=constraints.get(kernel_constraint),
bias_constraint=constraints.get(bias_constraint),
**kwargs,
)
self.output_padding = output_padding
if self.output_padding is not None:
self.output_padding = conv_utils.normalize_tuple(
self.output_padding, 1, "output_padding", allow_zero=True
)
for stride, out_pad in zip(self.strides, self.output_padding):
if out_pad >= stride:
raise ValueError(
"Strides must be greater than output padding. "
f"Received strides={self.strides}, "
f"output_padding={self.output_padding}."
)
def build(self, input_shape):
input_shape = tf.TensorShape(input_shape)
if len(input_shape) != 3:
raise ValueError(
"Inputs should have rank 3. "
f"Received input_shape={input_shape}."
)
channel_axis = self._get_channel_axis()
if input_shape.dims[channel_axis].value is None:
raise ValueError(
"The channel dimension of the inputs "
"to `Conv1DTranspose` should be defined. "
f"The input_shape received is {input_shape}, "
f"where axis {channel_axis} (0-based) "
"is the channel dimension, which found to be `None`."
)
input_dim = int(input_shape[channel_axis])
self.input_spec = InputSpec(ndim=3, axes={channel_axis: input_dim})
kernel_shape = self.kernel_size + (self.filters, input_dim)
self.kernel = self.add_weight(
name="kernel",
shape=kernel_shape,
initializer=self.kernel_initializer,
regularizer=self.kernel_regularizer,
constraint=self.kernel_constraint,
trainable=True,
dtype=self.dtype,
)
if self.use_bias:
self.bias = self.add_weight(
name="bias",
shape=(self.filters,),
initializer=self.bias_initializer,
regularizer=self.bias_regularizer,
constraint=self.bias_constraint,
trainable=True,
dtype=self.dtype,
)
else:
self.bias = None
self.built = True
def call(self, inputs):
inputs_shape = tf.shape(inputs)
batch_size = inputs_shape[0]
if self.data_format == "channels_first":
t_axis = 2
else:
t_axis = 1
length = inputs_shape[t_axis]
if self.output_padding is None:
output_padding = None
else:
output_padding = self.output_padding[0]
# Infer the dynamic output shape:
out_length = conv_utils.deconv_output_length(
length,
self.kernel_size[0],
padding=self.padding,
output_padding=output_padding,
stride=self.strides[0],
dilation=self.dilation_rate[0],
)
if self.data_format == "channels_first":
output_shape = (batch_size, self.filters, out_length)
else:
output_shape = (batch_size, out_length, self.filters)
data_format = conv_utils.convert_data_format(self.data_format, ndim=3)
output_shape_tensor = tf.stack(output_shape)
outputs = tf.nn.conv1d_transpose(
inputs,
self.kernel,
output_shape_tensor,
strides=self.strides,
padding=self.padding.upper(),
data_format=data_format,
dilations=self.dilation_rate,
)
if not tf.executing_eagerly() and inputs.shape.rank:
# Infer the static output shape:
out_shape = self.compute_output_shape(inputs.shape)
outputs.set_shape(out_shape)
if self.use_bias:
outputs = tf.nn.bias_add(
outputs, self.bias, data_format=data_format
)
if self.activation is not None:
return self.activation(outputs)
return outputs
def compute_output_shape(self, input_shape):
input_shape = tf.TensorShape(input_shape).as_list()
output_shape = list(input_shape)
if self.data_format == "channels_first":
c_axis, t_axis = 1, 2
else:
c_axis, t_axis = 2, 1
if self.output_padding is None:
output_padding = None
else:
output_padding = self.output_padding[0]
output_shape[c_axis] = self.filters
output_shape[t_axis] = conv_utils.deconv_output_length(
output_shape[t_axis],
self.kernel_size[0],
padding=self.padding,
output_padding=output_padding,
stride=self.strides[0],
dilation=self.dilation_rate[0],
)
return tf.TensorShape(output_shape)
def get_config(self):
config = super().get_config()
config["output_padding"] = self.output_padding
return config
# Alias
Convolution1DTranspose = Conv1DTranspose
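# ---------------------------------------------------------------------------
# Illustrative usage sketch (editor's addition, not part of the original
# TF-Keras module). It shows how the output length of `Conv1DTranspose`
# follows the formula quoted in the class docstring when `output_padding`
# is set. The shapes and hyperparameters below are arbitrary assumptions
# chosen only for illustration.
if __name__ == "__main__":
    import numpy as np

    demo_layer = Conv1DTranspose(
        filters=4,
        kernel_size=3,
        strides=2,
        padding="valid",
        output_padding=1,
    )
    # (batch, steps, channels) input; "valid" padding means the padding term
    # in the docstring formula is 0.
    demo_outputs = demo_layer(np.ones((8, 10, 16), dtype="float32"))
    # new_steps = (10 - 1) * 2 + 3 - 2 * 0 + 1 = 22
    assert demo_outputs.shape == (8, 22, 4)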
| tf-keras/tf_keras/layers/convolutional/conv1d_transpose.py/0 | {
"file_path": "tf-keras/tf_keras/layers/convolutional/conv1d_transpose.py",
"repo_id": "tf-keras",
"token_count": 5026
} | 231 |
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for TF-Keras core layers."""
import os
import textwrap
import numpy as np
import tensorflow.compat.v2 as tf
import tf_keras as keras
from tf_keras import initializers
from tf_keras.layers import core
from tf_keras.mixed_precision import policy
from tf_keras.saving.serialization_lib import SafeModeScope
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
@test_combinations.run_all_keras_modes
class DropoutLayersTest(test_combinations.TestCase):
def test_dropout(self):
test_utils.layer_test(
keras.layers.Dropout, kwargs={"rate": 0.5}, input_shape=(3, 2)
)
test_utils.layer_test(
keras.layers.Dropout,
kwargs={"rate": 0.5, "noise_shape": [3, 1]},
input_shape=(3, 2),
)
def test_dropout_supports_masking(self):
dropout = keras.layers.Dropout(0.5)
self.assertEqual(True, dropout.supports_masking)
def test_spatial_dropout_1d(self):
test_utils.layer_test(
keras.layers.SpatialDropout1D,
kwargs={"rate": 0.5},
input_shape=(2, 3, 4),
)
def test_spatial_dropout_2d(self):
test_utils.layer_test(
keras.layers.SpatialDropout2D,
kwargs={"rate": 0.5},
input_shape=(2, 3, 4, 5),
)
test_utils.layer_test(
keras.layers.SpatialDropout2D,
kwargs={"rate": 0.5, "data_format": "channels_first"},
input_shape=(2, 3, 4, 5),
)
def test_spatial_dropout_3d(self):
test_utils.layer_test(
keras.layers.SpatialDropout3D,
kwargs={"rate": 0.5},
input_shape=(2, 3, 4, 4, 5),
)
test_utils.layer_test(
keras.layers.SpatialDropout3D,
kwargs={"rate": 0.5, "data_format": "channels_first"},
input_shape=(2, 3, 4, 4, 5),
)
def test_dropout_partial_noise_shape(self):
inputs = keras.Input(shape=(5, 10))
layer = keras.layers.Dropout(0.5, noise_shape=(None, 1, None))
outputs = layer(inputs)
model = keras.Model(inputs, outputs)
out = model(np.ones((20, 5, 10)), training=True)
out_np = keras.backend.get_value(out)
# Test that dropout mask is shared across second dim.
self.assertAllClose(out_np[:, 0, :], out_np[:, 1, :])
def test_dropout_with_saving(self):
inputs = keras.Input(shape=(5, 10))
layer = keras.layers.Dropout(0.5, force_generator=True)
outputs = layer(inputs)
model = keras.Model(inputs, outputs)
train = model(np.ones((20, 5, 10)), training=True)
predict = model(np.ones((20, 5, 10)))
        # Make sure the weights from tf.random.Generator are not present in
        # the model, which would cause weight loading issues for existing
        # application models if they contain a dropout layer.
self.assertEmpty(layer.get_weights())
self.assertEmpty(model.get_weights())
        # Make sure the layer actually applies dropout when training.
self.assertNotAllClose(train, predict)
with self.subTest("savedmodel"):
model.save(
os.path.join(self.get_temp_dir(), "savedmodel"),
save_format="tf",
)
loaded_model = keras.models.load_model(
os.path.join(self.get_temp_dir(), "savedmodel")
)
predict2 = loaded_model(np.ones((20, 5, 10)))
self.assertAllClose(predict, predict2)
            # Make sure the model applies different dropout after loading.
train2 = loaded_model(np.ones((20, 5, 10)), training=True)
self.assertNotAllClose(train, train2)
self.assertIsNotNone(loaded_model.layers[1]._random_generator)
with self.subTest("keras_v3"):
if not tf.__internal__.tf2.enabled():
self.skipTest(
"TF2 must be enabled to use the new `.keras` saving."
)
model.save(os.path.join(self.get_temp_dir(), "model.keras"))
loaded_model = keras.models.load_model(
os.path.join(self.get_temp_dir(), "model.keras")
)
predict2 = loaded_model(np.ones((20, 5, 10)))
self.assertAllClose(predict, predict2)
            # Make sure the model applies different dropout after loading.
train2 = loaded_model(np.ones((20, 5, 10)), training=True)
self.assertNotAllClose(train, train2)
self.assertIsNotNone(loaded_model.layers[1]._random_generator)
with self.subTest("checkpoint"):
# Also make sure the checkpoint doesn't contain any variable from
# the dropout layer, to keep the backward compatibility.
checkpoint = tf.train.Checkpoint(model)
save_path = checkpoint.save(
os.path.join(self.get_temp_dir(), "checkpoint")
)
checkpoint_var_names = [
name_value_tuple[0]
for name_value_tuple in tf.train.list_variables(save_path)
]
for name in checkpoint_var_names:
self.assertNotIn("dropout", name)
@test_combinations.run_all_keras_modes
class LambdaLayerTest(test_combinations.TestCase):
def test_lambda(self):
with SafeModeScope(safe_mode=False):
test_utils.layer_test(
keras.layers.Lambda,
kwargs={"function": lambda x: x + 1},
input_shape=(3, 2),
)
test_utils.layer_test(
keras.layers.Lambda,
kwargs={
"function": lambda x, a, b: x * a + b,
"arguments": {"a": 0.6, "b": 0.4},
},
input_shape=(3, 2),
)
# test serialization with function
def f(x):
return x + 1
ld = keras.layers.Lambda(f)
config = ld.get_config()
with SafeModeScope(safe_mode=False):
ld = keras.layers.deserialize(
{"class_name": "Lambda", "config": config}
)
self.assertEqual(ld.function(3), 4)
# test with lambda
ld = keras.layers.Lambda(
lambda x: keras.backend.concatenate([tf.square(x), x])
)
config = ld.get_config()
ld = keras.layers.Lambda.from_config(config)
self.assertAllEqual(self.evaluate(ld.function([3])), [9, 3])
def test_lambda_multiple_inputs(self):
ld = keras.layers.Lambda(lambda x: x[0], output_shape=lambda x: x[0])
x1 = np.ones([3, 2], np.float32)
x2 = np.ones([3, 5], np.float32)
out = ld([x1, x2])
self.assertAllEqual(out.shape, [3, 2])
def test_lambda_output_shape(self):
l = keras.layers.Lambda(lambda x: x + 1, output_shape=(1, 1))
l(keras.backend.variable(np.ones((1, 1))))
self.assertEqual((1, 1), l.get_config()["output_shape"])
def test_lambda_output_shape_function(self):
def get_output_shape(input_shape):
return 1 * input_shape
l = keras.layers.Lambda(lambda x: x + 1, output_shape=get_output_shape)
l(keras.backend.variable(np.ones((1, 1))))
self.assertEqual("lambda", l.get_config()["output_shape_type"])
def test_lambda_output_shape_autocalculate_multiple_inputs(self):
def lambda_fn(x):
return tf.matmul(x[0], x[1])
l = keras.layers.Lambda(lambda_fn, dtype=tf.float64)
output_shape = l.compute_output_shape([(10, 10), (10, 20)])
self.assertAllEqual((10, 20), output_shape)
output_signature = l.compute_output_signature(
[
tf.TensorSpec(dtype=tf.float64, shape=(10, 10)),
tf.TensorSpec(dtype=tf.float64, shape=(10, 20)),
]
)
self.assertAllEqual((10, 20), output_signature.shape)
self.assertAllEqual(tf.float64, output_signature.dtype)
def test_lambda_output_shape_list_multiple_outputs(self):
def lambda_fn(x):
return x
l = keras.layers.Lambda(lambda_fn, output_shape=[(10,), (20,)])
output_shape = l.compute_output_shape([(10, 10), (10, 20)])
self.assertAllEqual([(10, 10), (10, 20)], output_shape)
def test_lambda_output_shape_tuple_with_none(self):
def lambda_fn(x):
return x
l = keras.layers.Lambda(lambda_fn, output_shape=(None, 10))
output_shape = l.compute_output_shape((5, 10, 20))
self.assertAllEqual([5, None, 10], output_shape.as_list())
def test_lambda_output_shape_function_multiple_outputs(self):
def lambda_fn(x):
return x
def output_shape_fn(input_shape):
return input_shape
l = keras.layers.Lambda(lambda_fn, output_shape=output_shape_fn)
output_shape = l.compute_output_shape([(10, 10), (10, 20)])
self.assertAllEqual([(10, 10), (10, 20)], output_shape)
def test_lambda_output_shape_nested(self):
def lambda_fn(inputs):
return (inputs[1]["a"], {"b": inputs[0]})
l = keras.layers.Lambda(lambda_fn)
output_shape = l.compute_output_shape(((10, 20), {"a": (10, 5)}))
self.assertAllEqual(((10, 5), {"b": (10, 20)}), output_shape)
def test_lambda_config_serialization(self):
# Test serialization with output_shape and output_shape_type
layer = keras.layers.Lambda(
lambda x: x + 1, output_shape=(1, 1), mask=lambda i, m: m
)
layer(keras.backend.variable(np.ones((1, 1))))
config = layer.get_config()
with SafeModeScope(safe_mode=False):
layer = keras.layers.deserialize(
{"class_name": "Lambda", "config": config}
)
self.assertAllEqual(layer.function(1), 2)
self.assertAllEqual(layer._output_shape, (1, 1))
self.assertAllEqual(layer.mask(1, True), True)
layer = keras.layers.Lambda.from_config(config)
self.assertAllEqual(layer.function(1), 2)
self.assertAllEqual(layer._output_shape, (1, 1))
self.assertAllEqual(layer.mask(1, True), True)
def test_lambda_with_training_arg(self):
def fn(x, training=True):
return keras.backend.in_train_phase(x, 2 * x, training=training)
layer = keras.layers.Lambda(fn)
x = keras.backend.ones(())
train_out = layer(x, training=True)
eval_out = layer(x, training=False)
self.assertEqual(keras.backend.get_value(train_out), 1.0)
self.assertEqual(keras.backend.get_value(eval_out), 2.0)
def test_lambda_with_mask(self):
def add_one(inputs):
return inputs + 1.0
def mask(unused_inputs, previous_mask):
return previous_mask
layer = keras.layers.Lambda(add_one, mask=mask)
x = np.ones([5, 4, 3])
x[:, -1, :] = 0
masking = keras.layers.Masking()
out = layer(masking(x))
expected_out = np.full([5, 4, 3], 2.0)
expected_out[:, -1, :] = 1.0
expected_mask = np.ones([5, 4])
expected_mask[:, -1] = 0.0
self.assertAllClose(self.evaluate(out), expected_out)
self.assertIsNotNone(out._keras_mask)
self.assertAllClose(self.evaluate(out._keras_mask), expected_mask)
def test_lambda_with_ragged_input(self):
def add_one(inputs):
return inputs + 1.0
layer = keras.layers.Lambda(add_one)
ragged_input = tf.ragged.constant([[1.0], [2.0, 3.0]])
out = layer(ragged_input)
expected_out = tf.ragged.constant([[2.0], [3.0, 4.0]])
self.assertAllClose(out, expected_out)
def test_lambda_deserialization_does_not_pollute_core(self):
layer = keras.layers.Lambda(lambda x: x + 1)
config = layer.get_config()
keras.layers.Lambda.from_config(config)
self.assertNotIn(self.__class__.__name__, dir(core))
class TestStatefulLambda(test_combinations.TestCase):
@test_combinations.run_all_keras_modes
@test_combinations.run_with_all_model_types
def test_lambda_with_variable_in_model(self):
v = tf.Variable(1.0, trainable=True)
def lambda_fn(x, v):
return x * v
# While it is generally not advised to mix Variables with Lambda layers,
# if the variables are explicitly set as attributes then they are still
# tracked. This is consistent with the base Layer behavior.
layer = keras.layers.Lambda(lambda_fn, arguments={"v": v})
self.assertLen(layer.trainable_weights, 0)
layer.v = v
self.assertLen(layer.trainable_weights, 1)
model = test_utils.get_model_from_layers([layer], input_shape=(10,))
model.compile(
keras.optimizers.legacy.gradient_descent.SGD(0.1),
"mae",
run_eagerly=test_utils.should_run_eagerly(),
)
x, y = np.ones((10, 10), "float32"), 2 * np.ones((10, 10), "float32")
model.fit(x, y, batch_size=2, epochs=2, validation_data=(x, y))
self.assertLen(model.trainable_weights, 1)
self.assertAllClose(
keras.backend.get_value(model.trainable_weights[0]), 2.0
)
@test_combinations.run_all_keras_modes
@test_combinations.run_with_all_model_types
def test_creation_inside_lambda(self):
def lambda_fn(x):
scale = tf.Variable(1.0, trainable=True, name="scale")
shift = tf.Variable(1.0, trainable=True, name="shift")
return x * scale + shift
expected_error = textwrap.dedent(
r"""
( )?The following Variables were created within a Lambda layer \(shift_and_scale\)""" # noqa: E501
r"""
( )?but are not tracked by said layer:
( )? <tf.Variable \'.*shift_and_scale/scale:0\'.+
( )? <tf.Variable \'.*shift_and_scale/shift:0\'.+
( )?The layer cannot safely ensure proper Variable reuse.+"""
)
with self.assertRaisesRegex(ValueError, expected_error):
layer = keras.layers.Lambda(lambda_fn, name="shift_and_scale")
model = test_utils.get_model_from_layers([layer], input_shape=(1,))
model(tf.ones((4, 1)))
@test_combinations.run_all_keras_modes
@test_combinations.run_with_all_model_types
def test_transitive_variable_creation(self):
dense = keras.layers.Dense(1, use_bias=False, kernel_initializer="ones")
def bad_lambda_fn(x):
return dense(x + 1) # Dense layer is built on first call
expected_error = textwrap.dedent(
r"""
( )?The following Variables were created within a Lambda layer \(bias_dense\)
( )?but are not tracked by said layer:
( )? <tf.Variable \'.*bias_dense/dense/kernel:0\'.+
( )?The layer cannot safely ensure proper Variable reuse.+"""
)
with self.assertRaisesRegex(ValueError, expected_error):
layer = keras.layers.Lambda(bad_lambda_fn, name="bias_dense")
model = test_utils.get_model_from_layers([layer], input_shape=(1,))
model(tf.ones((4, 1)))
@test_combinations.run_all_keras_modes
@test_combinations.run_with_all_model_types
def test_warns_on_variable_capture(self):
v = tf.Variable(1.0, trainable=True)
def lambda_fn(x):
return x * v
expected_warning = textwrap.dedent(
r"""
( )?The following Variables were used a Lambda layer\'s call \(lambda\), but
( )?are not present in its tracked objects:
( )? <tf.Variable \'.*Variable:0\'.+
( )?It is possible that this is intended behavior.+"""
)
layer = keras.layers.Lambda(lambda_fn)
def patched_warn(msg):
raise ValueError(msg)
layer._warn = patched_warn
with self.assertRaisesRegex(ValueError, expected_warning):
model = test_utils.get_model_from_layers([layer], input_shape=(1,))
model(tf.ones((4, 1)))
@test_combinations.run_all_keras_modes
@test_combinations.run_with_all_model_types
def test_lambda_skip_state_variable_from_initializer(self):
# Force the initializers to use the tf.random.Generator, which will
# contain the state variable.
kernel_initializer = initializers.RandomNormalV2()
kernel_initializer._random_generator._rng_type = (
kernel_initializer._random_generator.RNG_STATEFUL
)
dense = keras.layers.Dense(
1, use_bias=False, kernel_initializer=kernel_initializer
)
def lambda_fn(x):
return dense(x + 1) # Dense layer is built on first call
# While it is generally not advised to mix Variables with Lambda layers,
# if the variables are explicitly set as attributes then they are still
# tracked. This is consistent with the base Layer behavior.
layer = keras.layers.Lambda(lambda_fn)
layer.dense = dense
model = test_utils.get_model_from_layers([layer], input_shape=(10,))
model.compile(
keras.optimizers.legacy.gradient_descent.SGD(0.1),
"mae",
run_eagerly=test_utils.should_run_eagerly(),
)
x, y = np.ones((10, 10), "float32"), 2 * np.ones((10, 10), "float32")
model.fit(x, y, batch_size=2, epochs=2, validation_data=(x, y))
self.assertLen(model.trainable_weights, 1)
@test_combinations.run_all_keras_modes
class CoreLayersTest(test_combinations.TestCase):
def test_masking(self):
test_utils.layer_test(
keras.layers.Masking, kwargs={}, input_shape=(3, 2, 3)
)
def test_keras_mask(self):
x = np.ones((10, 10))
y = keras.layers.Masking(1.0)(x)
self.assertTrue(hasattr(y, "_keras_mask"))
self.assertIsNotNone(y._keras_mask)
self.assertAllClose(self.evaluate(y._keras_mask), np.zeros((10,)))
def test_compute_mask_with_positional_mask_arg(self):
class MyLayer(keras.layers.Layer):
def call(self, inputs, mask=None):
return inputs
def compute_mask(self, inputs, mask=None):
if mask is not None:
return tf.ones(())
else:
return tf.zeros(())
x, mask = tf.ones((1, 1)), tf.ones((1, 1))
layer = MyLayer()
y = layer(x, mask)
# Check that `mask` was correctly sent to `compute_mask`.
self.assertEqual(keras.backend.get_value(y._keras_mask), 1)
def test_activation(self):
# with string argument
test_utils.layer_test(
keras.layers.Activation,
kwargs={"activation": "relu"},
input_shape=(3, 2),
)
# with function argument
test_utils.layer_test(
keras.layers.Activation,
kwargs={"activation": keras.backend.relu},
input_shape=(3, 2),
)
def test_dense(self):
test_utils.layer_test(
keras.layers.Dense, kwargs={"units": 3}, input_shape=(3, 2)
)
test_utils.layer_test(
keras.layers.Dense, kwargs={"units": 3}, input_shape=(3, 4, 2)
)
test_utils.layer_test(
keras.layers.Dense, kwargs={"units": 3}, input_shape=(None, None, 2)
)
test_utils.layer_test(
keras.layers.Dense, kwargs={"units": 3}, input_shape=(3, 4, 5, 2)
)
def test_dense_output(self):
dense_inputs = tf.convert_to_tensor(
np.random.uniform(size=(10, 10)).astype("f")
)
# Create some sparse data where multiple rows and columns are missing.
sparse_inputs = tf.SparseTensor(
indices=np.random.randint(low=0, high=10, size=(5, 2)),
values=np.random.uniform(size=(5,)).astype("f"),
dense_shape=[10, 10],
)
sparse_inputs = tf.sparse.reorder(sparse_inputs)
# Create some ragged data.
ragged_inputs = tf.RaggedTensor.from_row_splits(
np.random.uniform(size=(10, 10)).astype("f"),
row_splits=[0, 4, 6, 6, 9, 10],
)
layer = keras.layers.Dense(
5,
kernel_initializer=keras.initializers.RandomUniform(),
bias_initializer=keras.initializers.RandomUniform(),
dtype="float32",
)
dense_outputs = layer(dense_inputs)
        sparse_outputs = layer(sparse_inputs)
ragged_outputs = layer(ragged_inputs)
expected_dense = tf.add(
tf.matmul(dense_inputs, keras.backend.get_value(layer.kernel)),
keras.backend.get_value(layer.bias),
)
expected_sparse = tf.add(
tf.matmul(
tf.sparse.to_dense(sparse_inputs),
keras.backend.get_value(layer.kernel),
),
keras.backend.get_value(layer.bias),
)
expected_ragged_values = tf.add(
tf.matmul(
ragged_inputs.flat_values, keras.backend.get_value(layer.kernel)
),
keras.backend.get_value(layer.bias),
)
expected_ragged = tf.RaggedTensor.from_row_splits(
expected_ragged_values, row_splits=[0, 4, 6, 6, 9, 10]
)
self.assertAllClose(dense_outputs, expected_dense)
        self.assertAllClose(sparse_outputs, expected_sparse)
self.assertAllClose(ragged_outputs, expected_ragged)
def test_dense_dtype(self):
inputs = tf.convert_to_tensor(
np.random.randint(low=0, high=7, size=(2, 2))
)
layer = keras.layers.Dense(5, dtype="float32")
outputs = layer(inputs)
self.assertEqual(outputs.dtype, "float32")
def test_dense_with_policy(self):
inputs = tf.convert_to_tensor(
np.random.randint(low=0, high=7, size=(2, 2))
)
layer = keras.layers.Dense(5, dtype=policy.Policy("mixed_float16"))
outputs = layer(inputs)
output_signature = layer.compute_output_signature(
tf.TensorSpec(dtype="float16", shape=(2, 2))
)
self.assertEqual(output_signature.dtype, tf.float16)
self.assertEqual(output_signature.shape, (2, 5))
self.assertEqual(outputs.dtype, "float16")
self.assertEqual(layer.kernel.dtype, "float32")
def test_dense_regularization(self):
layer = keras.layers.Dense(
3,
kernel_regularizer=keras.regularizers.l1(0.01),
bias_regularizer="l1",
activity_regularizer="l2",
name="dense_reg",
)
layer(keras.backend.variable(np.ones((2, 4))))
self.assertEqual(3, len(layer.losses))
def test_dense_constraints(self):
k_constraint = keras.constraints.max_norm(0.01)
b_constraint = keras.constraints.max_norm(0.01)
layer = keras.layers.Dense(
3, kernel_constraint=k_constraint, bias_constraint=b_constraint
)
layer(keras.backend.variable(np.ones((2, 4))))
self.assertEqual(layer.kernel.constraint, k_constraint)
self.assertEqual(layer.bias.constraint, b_constraint)
def test_dense_layer_ragged_tensor(self):
layer = keras.layers.Dense(2, kernel_initializer="ones", use_bias=False)
# a.shape = [2, None, 2]; a.ragged_rank=1
a = tf.ragged.constant(
[[[1.0, 2], [3, 4], [5, 6]], [[7, 8]]], ragged_rank=1
)
a_out = layer(a)
keras.backend.get_value(layer.kernel) # ensures var is built in TF 1.x.
self.assertAllEqual(a_out, [[[3.0, 3], [7, 7], [11, 11]], [[15, 15]]])
# b.shape = [4, 2]; b.ragged_rank=1
b = tf.RaggedTensor.from_uniform_row_length(
[1.0, 2, 3, 4, 5, 6, 7, 8], 2
)
self.assertAllEqual(layer(b), [[3.0, 3], [7, 7], [11, 11], [15, 15]])
# c.shape = [2, 2, 2]; c.ragged_rank=2
c = tf.RaggedTensor.from_uniform_row_length(b, 2)
self.assertAllEqual(
layer(c), [[[3.0, 3], [7, 7]], [[11, 11], [15, 15]]]
)
def test_dense_layer_ragged_tensor_savedmodel(self):
# Check that we don't get a deadlock when saving a TF-Keras model with
# a dense layer that processes RaggedTensors. (This happened because
# Dense.call() had a recursive call, which is not currently supported
# by the @tf.function decorator.)
class TestModel(keras.Model):
def __init__(self):
super().__init__()
self._layer = keras.layers.Dense(
1, kernel_initializer="ones", use_bias=False
)
def call(self, inputs):
return self._layer(inputs)
model = TestModel()
result = model(
tf.RaggedTensor.from_row_lengths([[1.0], [2], [3]], [1, 2])
)
keras.backend.get_value(model._layer.kernel) # required in TF 1.x.
self.assertAllClose(result, [[[1.0]], [[2.0], [3.0]]])
model.save(
os.path.join(self.get_temp_dir(), "savedmodel"), save_format="tf"
)
def test_dense_layer_unsupported_ragged_tensor_error(self):
layer = keras.layers.Dense(2)
with self.assertRaisesRegex(
ValueError,
"The last dimension of the inputs to a Dense layer should "
r"be defined. Found None. Full input shape received: .*",
):
layer(tf.ragged.constant([[1.0, 2], [3, 4, 5]]))
with self.assertRaisesRegex(
ValueError,
"Dense layer only supports RaggedTensors when the "
r"innermost dimension is non-ragged. Received: inputs.shape=.*",
):
layer.call(tf.ragged.constant([[1.0, 2], [3, 4, 5]]))
@test_combinations.run_all_keras_modes
class TFOpLambdaTest(test_combinations.TestCase):
def test_non_tf_symbol(self):
def dummy_func(a, b):
return a + b
layer = core.TFOpLambda(dummy_func)
self.assertIsNone(layer.symbol)
self.assertEqual(layer.name, "dummy_func")
with self.assertRaisesRegex(
ValueError, "was generated from .*dummy_func"
):
layer.get_config()
if __name__ == "__main__":
tf.test.main()
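# ---------------------------------------------------------------------------
# Editor's note (not part of the original test file): a minimal sketch of the
# `noise_shape` behavior exercised by `test_dropout_partial_noise_shape`
# above. With `noise_shape=(None, 1, None)` the dropout mask is broadcast
# across the second (timestep) axis, so all timesteps of a sample share one
# mask, e.g.:
#
#     layer = keras.layers.Dropout(0.5, noise_shape=(None, 1, None))
#     out = layer(np.ones((4, 5, 10)), training=True)
#     # every out[:, t, :] slice is identical across t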
| tf-keras/tf_keras/layers/core/core_test.py/0 | {
"file_path": "tf-keras/tf_keras/layers/core/core_test.py",
"repo_id": "tf-keras",
"token_count": 12920
} | 232 |
"""Test DynamicEmbedding with Parameter server strategy."""
import numpy as np
import tensorflow.compat.v2 as tf
from absl.testing import parameterized
import tf_keras as keras
from tf_keras.layers.experimental import dynamic_lookup
from tf_keras.testing_infra import test_utils
ds_combinations = tf.__internal__.distribute.combinations
@test_utils.run_v2_only
class DistributedDynamiclookupTest(tf.test.TestCase, parameterized.TestCase):
@ds_combinations.generate(
tf.__internal__.test.combinations.combine(
strategy=[
ds_combinations.parameter_server_strategy_3worker_2ps_cpu
],
mode="eager",
)
)
def test_dynamic_lookup_with_pss(self, strategy):
train_data = np.array(
[
["a", "j", "c", "d", "e"],
["a", "h", "i", "j", "b"],
["i", "h", "c", "j", "e"],
]
)
train_labels = np.array([0, 1, 2])
vocab = tf.constant(["a", "b", "c", "d", "e"])
vocabulary_size = 5
eviction_policy = "LFU"
with strategy.scope():
# Define the model
model = keras.models.Sequential(
[
dynamic_lookup.DynamicLookup(
vocabulary_size,
initial_vocabulary=vocab,
eviction_policy=eviction_policy,
name="dynamic_lookup",
),
keras.layers.Flatten(),
keras.layers.Dense(3, activation="softmax"),
]
)
# Compile the model
model.compile(
optimizer="adam",
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
result = model.fit(
train_data,
train_labels,
epochs=10,
batch_size=1,
steps_per_epoch=1,
)
# Assert model trains
self.assertEqual(result.history["loss"][0] > 0, True)
if __name__ == "__main__":
tf.__internal__.distribute.multi_process_runner.test_main()
| tf-keras/tf_keras/layers/experimental/dynamic_lookup_distributed_test.py/0 | {
"file_path": "tf-keras/tf_keras/layers/experimental/dynamic_lookup_distributed_test.py",
"repo_id": "tf-keras",
"token_count": 1201
} | 233 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Layer that concatenates several inputs."""
import tensorflow.compat.v2 as tf
from tf_keras import backend
from tf_keras.layers.merging.base_merge import _Merge
from tf_keras.utils import tf_utils
# isort: off
from tensorflow.python.util.tf_export import keras_export
@keras_export("keras.layers.Concatenate")
class Concatenate(_Merge):
"""Layer that concatenates a list of inputs.
It takes as input a list of tensors, all of the same shape except
for the concatenation axis, and returns a single tensor that is the
concatenation of all inputs.
>>> x = np.arange(20).reshape(2, 2, 5)
>>> print(x)
[[[ 0 1 2 3 4]
[ 5 6 7 8 9]]
[[10 11 12 13 14]
[15 16 17 18 19]]]
>>> y = np.arange(20, 30).reshape(2, 1, 5)
>>> print(y)
[[[20 21 22 23 24]]
[[25 26 27 28 29]]]
>>> tf.keras.layers.Concatenate(axis=1)([x, y])
<tf.Tensor: shape=(2, 3, 5), dtype=int64, numpy=
array([[[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[20, 21, 22, 23, 24]],
[[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[25, 26, 27, 28, 29]]])>
>>> x1 = tf.keras.layers.Dense(8)(np.arange(10).reshape(5, 2))
>>> x2 = tf.keras.layers.Dense(8)(np.arange(10, 20).reshape(5, 2))
>>> concatted = tf.keras.layers.Concatenate()([x1, x2])
>>> concatted.shape
TensorShape([5, 16])
"""
def __init__(self, axis=-1, **kwargs):
"""Instantiates a Concatenate layer.
>>> x = np.arange(20).reshape(2, 2, 5)
>>> print(x)
[[[ 0 1 2 3 4]
[ 5 6 7 8 9]]
[[10 11 12 13 14]
[15 16 17 18 19]]]
>>> y = np.arange(20, 30).reshape(2, 1, 5)
>>> print(y)
[[[20 21 22 23 24]]
[[25 26 27 28 29]]]
>>> tf.keras.layers.Concatenate(axis=1)([x, y])
<tf.Tensor: shape=(2, 3, 5), dtype=int64, numpy=
array([[[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[20, 21, 22, 23, 24]],
[[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[25, 26, 27, 28, 29]]])>
Args:
axis: Axis along which to concatenate.
**kwargs: standard layer keyword arguments.
"""
super().__init__(**kwargs)
self.axis = axis
self.supports_masking = True
self._reshape_required = False
@tf_utils.shape_type_conversion
def build(self, input_shape):
# Used purely for shape validation.
if len(input_shape) < 1 or not isinstance(input_shape[0], tuple):
raise ValueError(
"A `Concatenate` layer should be called on a list of "
f"at least 1 input. Received: input_shape={input_shape}"
)
if all(shape is None for shape in input_shape):
return
reduced_inputs_shapes = [list(shape) for shape in input_shape]
shape_set = set()
for i in range(len(reduced_inputs_shapes)):
del reduced_inputs_shapes[i][self.axis]
shape_set.add(tuple(reduced_inputs_shapes[i]))
if len(shape_set) != 1:
err_msg = (
"A `Concatenate` layer requires inputs with matching shapes "
"except for the concatenation axis. "
f"Received: input_shape={input_shape}"
)
# Make sure all the shapes have same ranks.
ranks = set(len(shape) for shape in shape_set)
if len(ranks) != 1:
raise ValueError(err_msg)
# Get the only rank for the set.
(rank,) = ranks
for axis in range(rank):
# Skip the Nones in the shape since they are dynamic, also the
# axis for concat has been removed above.
unique_dims = set(
shape[axis]
for shape in shape_set
if shape[axis] is not None
)
if len(unique_dims) > 1:
raise ValueError(err_msg)
def _merge_function(self, inputs):
return backend.concatenate(inputs, axis=self.axis)
@tf_utils.shape_type_conversion
def compute_output_shape(self, input_shape):
if (not isinstance(input_shape, (tuple, list))) or (
not isinstance(input_shape[0], (tuple, list))
):
# The tf_utils.shape_type_conversion decorator turns tensorshapes
# into tuples, so we need to verify that `input_shape` is a
# list/tuple, *and* that the individual elements are themselves
# shape tuples.
raise ValueError(
"A `Concatenate` layer should be called on a list of inputs. "
f"Received: input_shape={input_shape}"
)
input_shapes = input_shape
output_shape = list(input_shapes[0])
for shape in input_shapes[1:]:
if output_shape[self.axis] is None or shape[self.axis] is None:
output_shape[self.axis] = None
break
output_shape[self.axis] += shape[self.axis]
return tuple(output_shape)
def compute_mask(self, inputs, mask=None):
if mask is None:
return None
if not isinstance(mask, (tuple, list)):
raise ValueError(f"`mask` should be a list. Received mask={mask}")
if not isinstance(inputs, (tuple, list)):
raise ValueError(
f"`inputs` should be a list. Received: inputs={inputs}"
)
if len(mask) != len(inputs):
raise ValueError(
"The lists `inputs` and `mask` should have the same length. "
f"Received: inputs={inputs} of length {len(inputs)}, and "
f"mask={mask} of length {len(mask)}"
)
if all(m is None for m in mask):
return None
# Make a list of masks while making sure
# the dimensionality of each mask
# is the same as the corresponding input.
masks = []
for input_i, mask_i in zip(inputs, mask):
if mask_i is None:
                # Input is unmasked. Append all 1s to masks.
masks.append(tf.ones_like(input_i, dtype="bool"))
elif backend.ndim(mask_i) < backend.ndim(input_i):
# Mask is smaller than the input, expand it
masks.append(tf.expand_dims(mask_i, axis=-1))
else:
masks.append(mask_i)
concatenated = backend.concatenate(masks, axis=self.axis)
return backend.all(concatenated, axis=-1, keepdims=False)
def get_config(self):
config = {
"axis": self.axis,
}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
@keras_export("keras.layers.concatenate")
def concatenate(inputs, axis=-1, **kwargs):
"""Functional interface to the `Concatenate` layer.
>>> x = np.arange(20).reshape(2, 2, 5)
>>> print(x)
[[[ 0 1 2 3 4]
[ 5 6 7 8 9]]
[[10 11 12 13 14]
[15 16 17 18 19]]]
>>> y = np.arange(20, 30).reshape(2, 1, 5)
>>> print(y)
[[[20 21 22 23 24]]
[[25 26 27 28 29]]]
>>> tf.keras.layers.concatenate([x, y],
... axis=1)
<tf.Tensor: shape=(2, 3, 5), dtype=int64, numpy=
array([[[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[20, 21, 22, 23, 24]],
[[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19],
[25, 26, 27, 28, 29]]])>
Args:
inputs: A list of input tensors.
axis: Concatenation axis.
**kwargs: Standard layer keyword arguments.
Returns:
A tensor, the concatenation of the inputs alongside axis `axis`.
"""
return Concatenate(axis=axis, **kwargs)(inputs)
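# ---------------------------------------------------------------------------
# Illustrative usage sketch (editor's addition, not part of the original
# TF-Keras module). It shows the functional `concatenate` interface and how
# the chosen axis determines the output shape. The shapes below are arbitrary
# assumptions chosen only for illustration.
if __name__ == "__main__":
    import numpy as np

    demo_x = np.arange(20, dtype="float32").reshape(2, 2, 5)
    demo_y = np.arange(20, 30, dtype="float32").reshape(2, 1, 5)
    # Concatenating along axis=1 stacks the 2-row and 1-row inputs into 3
    # rows; all other dimensions must match.
    demo_out = concatenate([demo_x, demo_y], axis=1)
    assert demo_out.shape == (2, 3, 5)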
| tf-keras/tf_keras/layers/merging/concatenate.py/0 | {
"file_path": "tf-keras/tf_keras/layers/merging/concatenate.py",
"repo_id": "tf-keras",
"token_count": 4104
} | 234 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Layer Normalization layer."""
import tensorflow.compat.v2 as tf
from tf_keras import constraints
from tf_keras import initializers
from tf_keras import regularizers
from tf_keras.dtensor import utils
from tf_keras.engine.base_layer import Layer
from tf_keras.utils import tf_utils
# isort: off
from tensorflow.python.util.tf_export import keras_export
@keras_export("keras.layers.LayerNormalization")
class LayerNormalization(Layer):
"""Layer normalization layer (Ba et al., 2016).
Normalize the activations of the previous layer for each given example in a
batch independently, rather than across a batch like Batch Normalization.
i.e. applies a transformation that maintains the mean activation within each
example close to 0 and the activation standard deviation close to 1.
Given a tensor `inputs`, moments are calculated and normalization
is performed across the axes specified in `axis`.
Example:
>>> data = tf.constant(np.arange(10).reshape(5, 2) * 10, dtype=tf.float32)
>>> print(data)
tf.Tensor(
[[ 0. 10.]
[20. 30.]
[40. 50.]
[60. 70.]
[80. 90.]], shape=(5, 2), dtype=float32)
>>> layer = tf.keras.layers.LayerNormalization(axis=1)
>>> output = layer(data)
>>> print(output)
tf.Tensor(
[[-1. 1.]
[-1. 1.]
[-1. 1.]
[-1. 1.]
[-1. 1.]], shape=(5, 2), dtype=float32)
Notice that with Layer Normalization the normalization happens across the
axes *within* each example, rather than across different examples in the
batch.
If `scale` or `center` are enabled, the layer will scale the normalized
outputs by broadcasting them with a trainable variable `gamma`, and center
the outputs by broadcasting with a trainable variable `beta`. `gamma` will
default to a ones tensor and `beta` will default to a zeros tensor, so that
centering and scaling are no-ops before training has begun.
So, with scaling and centering enabled the normalization equations
are as follows:
Let the intermediate activations for a mini-batch to be the `inputs`.
For each sample `x_i` in `inputs` with `k` features, we compute the mean and
variance of the sample:
```python
mean_i = sum(x_i[j] for j in range(k)) / k
var_i = sum((x_i[j] - mean_i) ** 2 for j in range(k)) / k
```
and then compute a normalized `x_i_normalized`, including a small factor
`epsilon` for numerical stability.
```python
x_i_normalized = (x_i - mean_i) / sqrt(var_i + epsilon)
```
And finally `x_i_normalized ` is linearly transformed by `gamma` and `beta`,
which are learned parameters:
```python
output_i = x_i_normalized * gamma + beta
```
`gamma` and `beta` will span the axes of `inputs` specified in `axis`, and
this part of the inputs' shape must be fully defined.
For example:
>>> layer = tf.keras.layers.LayerNormalization(axis=[1, 2, 3])
>>> layer.build([5, 20, 30, 40])
>>> print(layer.beta.shape)
(20, 30, 40)
>>> print(layer.gamma.shape)
(20, 30, 40)
Note that other implementations of layer normalization may choose to define
`gamma` and `beta` over a separate set of axes from the axes being
normalized across. For example, Group Normalization
([Wu et al. 2018](https://arxiv.org/abs/1803.08494)) with group size of 1
corresponds to a Layer Normalization that normalizes across height, width,
and channel and has `gamma` and `beta` span only the channel dimension.
So, this Layer Normalization implementation will not match a Group
Normalization layer with group size set to 1.
Args:
axis: Integer or List/Tuple. The axis or axes to normalize across.
Typically, this is the features axis/axes. The left-out axes are
typically the batch axis/axes. `-1` is the last dimension in the
input. Defaults to `-1`.
epsilon: Small float added to variance to avoid dividing by zero. Defaults
      to `1e-3`.
center: If True, add offset of `beta` to normalized tensor. If False,
`beta` is ignored. Defaults to `True`.
scale: If True, multiply by `gamma`. If False, `gamma` is not used.
When the next layer is linear (also e.g. `nn.relu`), this can be
disabled since the scaling will be done by the next layer.
Defaults to `True`.
beta_initializer: Initializer for the beta weight. Defaults to zeros.
gamma_initializer: Initializer for the gamma weight. Defaults to ones.
beta_regularizer: Optional regularizer for the beta weight. None by
default.
gamma_regularizer: Optional regularizer for the gamma weight. None by
default.
beta_constraint: Optional constraint for the beta weight. None by default.
gamma_constraint: Optional constraint for the gamma weight. None by
default.
Input shape:
Arbitrary. Use the keyword argument `input_shape` (tuple of
integers, does not include the samples axis) when using this layer as the
first layer in a model.
Output shape:
Same shape as input.
Reference:
- [Lei Ba et al., 2016](https://arxiv.org/abs/1607.06450).
"""
@utils.allow_initializer_layout
def __init__(
self,
axis=-1,
epsilon=1e-3,
center=True,
scale=True,
beta_initializer="zeros",
gamma_initializer="ones",
beta_regularizer=None,
gamma_regularizer=None,
beta_constraint=None,
gamma_constraint=None,
**kwargs
):
super().__init__(**kwargs)
if isinstance(axis, (list, tuple)):
self.axis = list(axis)
elif isinstance(axis, int):
self.axis = axis
else:
raise TypeError(
"Expected an int or a list/tuple of ints for the "
"argument 'axis', but received: %r" % axis
)
self.epsilon = epsilon
self.center = center
self.scale = scale
self.beta_initializer = initializers.get(beta_initializer)
self.gamma_initializer = initializers.get(gamma_initializer)
self.beta_regularizer = regularizers.get(beta_regularizer)
self.gamma_regularizer = regularizers.get(gamma_regularizer)
self.beta_constraint = constraints.get(beta_constraint)
self.gamma_constraint = constraints.get(gamma_constraint)
self.supports_masking = True
# Indicates whether a faster fused implementation can be used. This will
        # be set to True or False in build().
self._fused = None
def _fused_can_be_used(self, ndims):
"""Returns false if fused implementation cannot be used.
Check if the axis is contiguous and can be collapsed into the last axis.
The self.axis is assumed to have no duplicates.
"""
axis = sorted(self.axis)
can_use_fused = False
if axis[-1] == ndims - 1 and axis[-1] - axis[0] == len(axis) - 1:
can_use_fused = True
# fused_batch_norm will silently raise epsilon to be at least 1.001e-5,
        # so we cannot use the fused version if epsilon is below that value.
# Also, the variable dtype must be float32, as fused_batch_norm only
# supports float32 variables.
if self.epsilon < 1.001e-5 or self.dtype != "float32":
can_use_fused = False
return can_use_fused
def build(self, input_shape):
self.axis = tf_utils.validate_axis(self.axis, input_shape)
input_shape = tf.TensorShape(input_shape)
rank = input_shape.rank
param_shape = [input_shape[dim] for dim in self.axis]
if self.scale:
self.gamma = self.add_weight(
name="gamma",
shape=param_shape,
initializer=self.gamma_initializer,
regularizer=self.gamma_regularizer,
constraint=self.gamma_constraint,
trainable=True,
experimental_autocast=False,
)
else:
self.gamma = None
if self.center:
self.beta = self.add_weight(
name="beta",
shape=param_shape,
initializer=self.beta_initializer,
regularizer=self.beta_regularizer,
constraint=self.beta_constraint,
trainable=True,
experimental_autocast=False,
)
else:
self.beta = None
self._fused = self._fused_can_be_used(rank)
self.built = True
def call(self, inputs):
# TODO(b/229545225): Remove the RaggedTensor check.
is_ragged = isinstance(inputs, tf.RaggedTensor)
if is_ragged:
inputs_lengths = inputs.nested_row_lengths()
inputs = inputs.to_tensor()
inputs = tf.cast(inputs, self.compute_dtype)
# Compute the axes along which to reduce the mean / variance
input_shape = inputs.shape
ndims = len(input_shape)
# Broadcasting only necessary for norm when the axis is not just
# the last dimension
broadcast_shape = [1] * ndims
for dim in self.axis:
broadcast_shape[dim] = input_shape.dims[dim].value
def _broadcast(v):
if (
v is not None
and len(v.shape) != ndims
and self.axis != [ndims - 1]
):
return tf.reshape(v, broadcast_shape)
return v
if not self._fused:
input_dtype = inputs.dtype
if (
input_dtype in ("float16", "bfloat16")
and self.dtype == "float32"
):
# If mixed precision is used, cast inputs to float32 so that
# this is at least as numerically stable as the fused version.
inputs = tf.cast(inputs, "float32")
# Calculate the moments on the last axis (layer activations).
mean, variance = tf.nn.moments(inputs, self.axis, keepdims=True)
scale, offset = _broadcast(self.gamma), _broadcast(self.beta)
# Compute layer normalization using the batch_normalization
# function.
outputs = tf.nn.batch_normalization(
inputs,
mean,
variance,
offset=offset,
scale=scale,
variance_epsilon=self.epsilon,
)
outputs = tf.cast(outputs, input_dtype)
else:
# Collapse dims before self.axis, and dims in self.axis
axis = sorted(self.axis)
tensor_shape = tf.shape(inputs)
pre_dim = tf.reduce_prod(tensor_shape[: axis[0]])
in_dim = tf.reduce_prod(tensor_shape[axis[0] :])
squeezed_shape = [1, pre_dim, in_dim, 1]
# This fused operation requires reshaped inputs to be NCHW.
data_format = "NCHW"
inputs = tf.reshape(inputs, squeezed_shape)
# self.gamma and self.beta have the wrong shape for
# fused_batch_norm, so we cannot pass them as the scale and offset
# parameters. Therefore, we create two constant tensors in correct
# shapes for fused_batch_norm and later construct a separate
# calculation on the scale and offset.
scale = tf.ones([pre_dim], dtype=self.dtype)
offset = tf.zeros([pre_dim], dtype=self.dtype)
# Compute layer normalization using the fused_batch_norm function.
outputs, _, _ = tf.compat.v1.nn.fused_batch_norm(
inputs,
scale=scale,
offset=offset,
epsilon=self.epsilon,
data_format=data_format,
)
outputs = tf.reshape(outputs, tensor_shape)
scale, offset = _broadcast(self.gamma), _broadcast(self.beta)
if scale is not None:
outputs = outputs * tf.cast(scale, outputs.dtype)
if offset is not None:
outputs = outputs + tf.cast(offset, outputs.dtype)
# If some components of the shape got lost due to adjustments, fix that.
outputs.set_shape(input_shape)
if is_ragged:
outputs = tf.RaggedTensor.from_tensor(outputs, inputs_lengths)
return outputs
def compute_output_shape(self, input_shape):
return input_shape
def get_config(self):
config = {
"axis": self.axis,
"epsilon": self.epsilon,
"center": self.center,
"scale": self.scale,
"beta_initializer": initializers.serialize(self.beta_initializer),
"gamma_initializer": initializers.serialize(self.gamma_initializer),
"beta_regularizer": regularizers.serialize(self.beta_regularizer),
"gamma_regularizer": regularizers.serialize(self.gamma_regularizer),
"beta_constraint": constraints.serialize(self.beta_constraint),
"gamma_constraint": constraints.serialize(self.gamma_constraint),
}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
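# ---------------------------------------------------------------------------
# Illustrative numerical sketch (editor's addition, not part of the original
# TF-Keras module). It checks the layer against the per-example equations in
# the class docstring using plain NumPy; the input data is an arbitrary
# assumption chosen only for illustration.
if __name__ == "__main__":
    import numpy as np

    demo_data = np.arange(10, dtype="float32").reshape(5, 2) * 10.0
    demo_layer = LayerNormalization(axis=1, epsilon=1e-3)
    demo_outputs = demo_layer(demo_data).numpy()
    # Reproduce the docstring equations: each row is normalized on its own,
    # and the default gamma (ones) / beta (zeros) leave the result unchanged.
    mean = demo_data.mean(axis=1, keepdims=True)
    var = demo_data.var(axis=1, keepdims=True)
    expected = (demo_data - mean) / np.sqrt(var + 1e-3)
    np.testing.assert_allclose(demo_outputs, expected, rtol=1e-4, atol=1e-4)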
| tf-keras/tf_keras/layers/normalization/layer_normalization.py/0 | {
"file_path": "tf-keras/tf_keras/layers/normalization/layer_normalization.py",
"repo_id": "tf-keras",
"token_count": 5834
} | 235 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Private base class for pooling 2D layers."""
import tensorflow.compat.v2 as tf
from tf_keras import backend
from tf_keras.engine.base_layer import Layer
from tf_keras.engine.input_spec import InputSpec
from tf_keras.utils import conv_utils
class Pooling2D(Layer):
"""Pooling layer for arbitrary pooling functions, for 2D data (e.g. images).
This class only exists for code reuse. It will never be an exposed API.
Args:
pool_function: The pooling function to apply, e.g. `tf.nn.max_pool2d`.
pool_size: An integer or tuple/list of 2 integers:
(pool_height, pool_width)
specifying the size of the pooling window.
Can be a single integer to specify the same value for
all spatial dimensions.
strides: An integer or tuple/list of 2 integers,
specifying the strides of the pooling operation.
Can be a single integer to specify the same value for
all spatial dimensions.
padding: A string. The padding method, either 'valid' or 'same'.
Case-insensitive.
data_format: A string, one of `channels_last` (default) or
`channels_first`.
The ordering of the dimensions in the inputs.
`channels_last` corresponds to inputs with shape
`(batch, height, width, channels)` while `channels_first` corresponds to
inputs with shape `(batch, channels, height, width)`.
name: A string, the name of the layer.
"""
def __init__(
self,
pool_function,
pool_size,
strides,
padding="valid",
data_format=None,
name=None,
**kwargs
):
super().__init__(name=name, **kwargs)
if data_format is None:
data_format = backend.image_data_format()
if strides is None:
strides = pool_size
self.pool_function = pool_function
self.pool_size = conv_utils.normalize_tuple(pool_size, 2, "pool_size")
self.strides = conv_utils.normalize_tuple(
strides, 2, "strides", allow_zero=True
)
self.padding = conv_utils.normalize_padding(padding)
self.data_format = conv_utils.normalize_data_format(data_format)
self.input_spec = InputSpec(ndim=4)
def call(self, inputs):
if self.data_format == "channels_last":
pool_shape = (1,) + self.pool_size + (1,)
strides = (1,) + self.strides + (1,)
else:
pool_shape = (1, 1) + self.pool_size
strides = (1, 1) + self.strides
outputs = self.pool_function(
inputs,
ksize=pool_shape,
strides=strides,
padding=self.padding.upper(),
data_format=conv_utils.convert_data_format(self.data_format, 4),
)
return outputs
def compute_output_shape(self, input_shape):
input_shape = tf.TensorShape(input_shape).as_list()
if self.data_format == "channels_first":
rows = input_shape[2]
cols = input_shape[3]
else:
rows = input_shape[1]
cols = input_shape[2]
rows = conv_utils.conv_output_length(
rows, self.pool_size[0], self.padding, self.strides[0]
)
cols = conv_utils.conv_output_length(
cols, self.pool_size[1], self.padding, self.strides[1]
)
if self.data_format == "channels_first":
return tf.TensorShape([input_shape[0], input_shape[1], rows, cols])
else:
return tf.TensorShape([input_shape[0], rows, cols, input_shape[3]])
def get_config(self):
config = {
"pool_size": self.pool_size,
"padding": self.padding,
"strides": self.strides,
"data_format": self.data_format,
}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
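# ---------------------------------------------------------------------------
# Illustrative shape sketch (editor's addition, not part of the original
# TF-Keras module). Although this private base class is not a public API, it
# can be instantiated directly to show how `compute_output_shape` applies
# `conv_utils.conv_output_length`. The numbers below are arbitrary
# assumptions chosen only for illustration.
if __name__ == "__main__":
    demo_layer = Pooling2D(
        tf.nn.max_pool2d, pool_size=2, strides=2, padding="valid"
    )
    # "valid" padding: output_length = floor((input - pool) / stride) + 1,
    # so a 32x32 feature map pooled 2x2 with stride 2 becomes 16x16.
    demo_shape = demo_layer.compute_output_shape((8, 32, 32, 3))
    assert demo_shape.as_list() == [8, 16, 16, 3]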
| tf-keras/tf_keras/layers/pooling/base_pooling2d.py/0 | {
"file_path": "tf-keras/tf_keras/layers/pooling/base_pooling2d.py",
"repo_id": "tf-keras",
"token_count": 1883
} | 236 |
# Placeholder: load unaliased py_library
# Benchmarks for TF-Keras preprocessing layers.
load("@org_keras//tf_keras:tf_keras.bzl", "cuda_py_test")
# buildifier: disable=same-origin-load
load("@org_keras//tf_keras:tf_keras.bzl", "tf_py_test")
package(
# copybara:uncomment default_applicable_licenses = ["//tf_keras:license"],
default_visibility = [
"//tf_keras:friends",
"//third_party/tensorflow/tools/pip_package:__pkg__",
],
licenses = ["notice"],
)
tf_py_test(
name = "category_encoding_benchmark",
srcs = ["category_encoding_benchmark.py"],
python_version = "PY3",
deps = [
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
"//tf_keras/layers/preprocessing:category_encoding",
],
)
tf_py_test(
name = "hashing_benchmark",
srcs = ["hashing_benchmark.py"],
python_version = "PY3",
deps = [
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
"//tf_keras/layers/preprocessing:hashing",
],
)
tf_py_test(
name = "index_lookup_adapt_benchmark",
srcs = ["index_lookup_adapt_benchmark.py"],
python_version = "PY3",
deps = [
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
"//tf_keras/layers/preprocessing:index_lookup",
],
)
tf_py_test(
name = "index_lookup_forward_benchmark",
srcs = ["index_lookup_forward_benchmark.py"],
python_version = "PY3",
deps = [
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
"//tf_keras/layers/preprocessing:index_lookup",
],
)
tf_py_test(
name = "normalization_adapt_benchmark",
srcs = ["normalization_adapt_benchmark.py"],
python_version = "PY3",
deps = [
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
"//tf_keras/layers/preprocessing:normalization",
],
)
tf_py_test(
name = "discretization_adapt_benchmark",
srcs = ["discretization_adapt_benchmark.py"],
python_version = "PY3",
deps = [
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
"//tf_keras/layers/preprocessing:discretization",
],
)
cuda_py_test(
name = "image_preproc_benchmark",
srcs = ["image_preproc_benchmark.py"],
python_version = "PY3",
deps = [
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
"//tf_keras/layers/preprocessing:image_preprocessing",
],
)
tf_py_test(
name = "bucketized_column_dense_benchmark",
srcs = ["bucketized_column_dense_benchmark.py"],
python_version = "PY3",
deps = [
":feature_column_benchmark",
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
],
)
tf_py_test(
name = "hashed_crossing_benchmark",
srcs = ["hashed_crossing_benchmark.py"],
python_version = "PY3",
deps = [
":feature_column_benchmark",
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
],
)
tf_py_test(
name = "category_hash_dense_benchmark",
srcs = ["category_hash_dense_benchmark.py"],
python_version = "PY3",
deps = [
":feature_column_benchmark",
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
],
)
tf_py_test(
name = "category_hash_varlen_benchmark",
srcs = ["category_hash_varlen_benchmark.py"],
python_version = "PY3",
deps = [
":feature_column_benchmark",
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
],
)
tf_py_test(
name = "category_vocab_file_dense_benchmark",
srcs = ["category_vocab_file_dense_benchmark.py"],
python_version = "PY3",
tags = ["no_windows"],
deps = [
":feature_column_benchmark",
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
],
)
tf_py_test(
name = "category_vocab_file_varlen_benchmark",
srcs = ["category_vocab_file_varlen_benchmark.py"],
python_version = "PY3",
tags = ["no_windows"],
deps = [
":feature_column_benchmark",
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
],
)
tf_py_test(
name = "category_vocab_list_dense_benchmark",
srcs = ["category_vocab_list_dense_benchmark.py"],
python_version = "PY3",
tags = ["no_windows"],
deps = [
":feature_column_benchmark",
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
],
)
tf_py_test(
name = "category_vocab_list_indicator_dense_benchmark",
srcs = ["category_vocab_list_indicator_dense_benchmark.py"],
python_version = "PY3",
tags = ["no_windows"],
deps = [
":feature_column_benchmark",
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
],
)
tf_py_test(
name = "category_vocab_list_indicator_varlen_benchmark",
srcs = ["category_vocab_list_indicator_varlen_benchmark.py"],
python_version = "PY3",
tags = ["no_windows"],
deps = [
":feature_column_benchmark",
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
],
)
tf_py_test(
name = "category_vocab_list_varlen_benchmark",
srcs = ["category_vocab_list_varlen_benchmark.py"],
python_version = "PY3",
tags = ["no_windows"],
deps = [
":feature_column_benchmark",
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
],
)
tf_py_test(
name = "embedding_dense_benchmark",
srcs = ["embedding_dense_benchmark.py"],
python_version = "PY3",
deps = [
":feature_column_benchmark",
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
],
)
tf_py_test(
name = "embedding_varlen_benchmark",
srcs = ["embedding_varlen_benchmark.py"],
python_version = "PY3",
deps = [
":feature_column_benchmark",
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
],
)
py_library(
name = "feature_column_benchmark",
srcs = ["feature_column_benchmark.py"],
deps = [
"//:expect_tensorflow_installed",
],
)
tf_py_test(
name = "weighted_embedding_varlen_benchmark",
srcs = ["weighted_embedding_varlen_benchmark.py"],
python_version = "PY3",
deps = [
":feature_column_benchmark",
"//:expect_tensorflow_installed",
"//tf_keras/api:tf_keras_api",
],
)
| tf-keras/tf_keras/layers/preprocessing/benchmarks/BUILD/0 | {
"file_path": "tf-keras/tf_keras/layers/preprocessing/benchmarks/BUILD",
"repo_id": "tf-keras",
"token_count": 3120
} | 237 |
# Copyright 2020 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for keras.layers.preprocessing.hashing."""
import numpy as np
import tensorflow.compat.v2 as tf
import tf_keras as keras
from tf_keras import backend
from tf_keras.distribute import strategy_combinations
from tf_keras.layers.preprocessing import hashing
from tf_keras.layers.preprocessing import preprocessing_test_utils
from tf_keras.testing_infra import test_combinations
from tf_keras.testing_infra import test_utils
# isort: off
from tensorflow.python.framework import (
test_util as tf_test_utils,
)
@test_utils.run_v2_only
@tf.__internal__.distribute.combinations.generate(
tf.__internal__.test.combinations.combine(
strategy=strategy_combinations.all_strategies
+ strategy_combinations.multi_worker_mirrored_strategies
+ strategy_combinations.parameter_server_strategies_single_worker
+ strategy_combinations.parameter_server_strategies_multi_worker,
mode=["eager"],
)
)
class HashingDistributionTest(
test_combinations.TestCase, preprocessing_test_utils.PreprocessingLayerTest
):
def test_strategy(self, strategy):
if (
backend.is_tpu_strategy(strategy)
and not tf_test_utils.is_mlir_bridge_enabled()
):
self.skipTest("TPU tests require MLIR bridge")
input_data = np.asarray([["omar"], ["stringer"], ["marlo"], ["wire"]])
input_dataset = tf.data.Dataset.from_tensor_slices(input_data).batch(
2, drop_remainder=True
)
expected_output = [[0], [0], [1], [0]]
tf.config.set_soft_device_placement(True)
with strategy.scope():
input_data = keras.Input(shape=(None,), dtype=tf.string)
layer = hashing.Hashing(num_bins=2)
int_data = layer(input_data)
model = keras.Model(inputs=input_data, outputs=int_data)
output_dataset = model.predict(input_dataset)
self.assertAllEqual(expected_output, output_dataset)
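        # For reference, a minimal single-device sketch of the same check
        # (assuming the layer's default, salt-free hashing):
        #
        #   layer = hashing.Hashing(num_bins=2)
        #   layer(np.asarray([["omar"], ["stringer"], ["marlo"], ["wire"]]))
        #   # -> [[0], [0], [1], [0]]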
if __name__ == "__main__":
tf.__internal__.distribute.multi_process_runner.test_main()
| tf-keras/tf_keras/layers/preprocessing/hashing_distribution_test.py/0 | {
"file_path": "tf-keras/tf_keras/layers/preprocessing/hashing_distribution_test.py",
"repo_id": "tf-keras",
"token_count": 999
} | 238 |
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests utils for preprocessing layers."""
import collections
import numpy as np
import tensorflow.compat.v2 as tf
class ArrayLike:
def __init__(self, values):
self.values = values
def __array__(self):
return np.array(self.values)
class PreprocessingLayerTest(tf.test.TestCase):
"""Base test class for preprocessing layer API validation."""
# TODO(b/137303934): Consider incorporating something like this Close vs All
# behavior into core tf.test.TestCase.
def assertAllCloseOrEqual(self, a, b, msg=None):
"""Asserts that elements are close (if numeric) or equal (if string)."""
if a is None or b is None:
self.assertAllEqual(a, b, msg=msg)
elif isinstance(a, (list, tuple)):
self.assertEqual(len(a), len(b))
for a_value, b_value in zip(a, b):
self.assertAllCloseOrEqual(a_value, b_value, msg=msg)
elif isinstance(a, collections.abc.Mapping):
self.assertEqual(len(a), len(b))
for key, a_value in a.items():
b_value = b[key]
error_message = f"{msg} ({key})" if msg else None
self.assertAllCloseOrEqual(a_value, b_value, error_message)
elif (
isinstance(a, float)
or hasattr(a, "dtype")
and np.issubdtype(a.dtype, np.number)
):
self.assertAllClose(a, b, msg=msg)
else:
self.assertAllEqual(a, b, msg=msg)
def assert_extracted_output_equal(self, combiner, acc1, acc2, msg=None):
data_1 = combiner.extract(acc1)
data_2 = combiner.extract(acc2)
self.assertAllCloseOrEqual(data_1, data_2, msg=msg)
# This is an injection seam so that tests like TextVectorizationTest can
# define their own methods for asserting that accumulators are equal.
compare_accumulators = assertAllCloseOrEqual
def validate_accumulator_computation(self, combiner, data, expected):
"""Validate that various combinations of compute and merge are
identical."""
if len(data) < 4:
raise AssertionError(
"Data must have at least 4 elements. Received "
f"len(data)={len(data)}."
)
data_0 = np.array([data[0]])
data_1 = np.array([data[1]])
data_2 = np.array(data[2:])
single_compute = combiner.compute(data)
all_merge = combiner.merge(
[
combiner.compute(data_0),
combiner.compute(data_1),
combiner.compute(data_2),
]
)
self.compare_accumulators(
single_compute,
all_merge,
msg="Sharding data should not change the data output.",
)
unordered_all_merge = combiner.merge(
[
combiner.compute(data_1),
combiner.compute(data_2),
combiner.compute(data_0),
]
)
self.compare_accumulators(
all_merge,
unordered_all_merge,
msg=(
"The order of merge arguments should not change the data "
"output."
),
)
hierarchical_merge = combiner.merge(
[
combiner.compute(data_1),
combiner.merge(
[combiner.compute(data_2), combiner.compute(data_0)]
),
]
)
self.compare_accumulators(
all_merge,
hierarchical_merge,
msg="Nesting merge arguments should not change the data output.",
)
nested_compute = combiner.compute(
data_0, combiner.compute(data_1, combiner.compute(data_2))
)
self.compare_accumulators(
all_merge,
nested_compute,
msg="Nesting compute arguments should not change the data output.",
)
mixed_compute = combiner.merge(
[
combiner.compute(data_0),
combiner.compute(data_1, combiner.compute(data_2)),
]
)
self.compare_accumulators(
all_merge,
mixed_compute,
msg=(
"Mixing merge and compute calls should not change the data "
"output."
),
)
single_merge = combiner.merge(
[
combiner.merge([combiner.compute(data_0)]),
combiner.compute(data_1, combiner.compute(data_2)),
]
)
self.compare_accumulators(
all_merge,
single_merge,
msg=(
"Calling merge with a data length of 1 should not change "
"the data output."
),
)
self.compare_accumulators(
expected,
all_merge,
msg="Calculated accumulators did not match expected accumulator.",
)
def validate_accumulator_extract(self, combiner, data, expected):
"""Validate that the expected results of computing and extracting."""
acc = combiner.compute(data)
extracted_data = combiner.extract(acc)
self.assertAllCloseOrEqual(expected, extracted_data)
def validate_accumulator_extract_and_restore(
self, combiner, data, expected
):
"""Validate that the extract<->restore loop loses no data."""
acc = combiner.compute(data)
extracted_data = combiner.extract(acc)
restored_acc = combiner.restore(extracted_data)
self.assert_extracted_output_equal(combiner, acc, restored_acc)
self.assertAllCloseOrEqual(expected, combiner.extract(restored_acc))
def validate_accumulator_serialize_and_deserialize(
self, combiner, data, expected
):
"""Validate that the serialize<->deserialize loop loses no data."""
acc = combiner.compute(data)
serialized_data = combiner.serialize(acc)
deserialized_data = combiner.deserialize(serialized_data)
self.compare_accumulators(acc, deserialized_data)
self.compare_accumulators(expected, deserialized_data)
def validate_accumulator_uniqueness(self, combiner, data):
"""Validate that every call to compute creates a unique accumulator."""
acc = combiner.compute(data)
acc2 = combiner.compute(data)
self.assertIsNot(acc, acc2)
self.compare_accumulators(acc, acc2)
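# Example only: a tiny, hypothetical combiner satisfying the contract that the
# validators above exercise. It is not used by any TF-Keras layer.
class _ExampleSumCombiner:
    """A minimal sketch of the compute/merge/extract combiner contract.

    The class accumulates a running sum so that round-trips through
    `extract`/`restore` and `serialize`/`deserialize` can be checked with
    the validators in `PreprocessingLayerTest`.
    """

    def compute(self, values, accumulator=None):
        # Fold a new batch of values into the (optional) running accumulator.
        total = np.sum(np.asarray(values, dtype=np.float64))
        if accumulator is not None:
            total += accumulator
        return total

    def merge(self, accumulators):
        # Combine accumulators computed on separate shards of the data.
        return np.sum(accumulators)

    def extract(self, accumulator):
        # The extracted (user-facing) output is the accumulator itself.
        return accumulator

    def restore(self, output):
        # Rebuild an accumulator from previously extracted output.
        return output

    def serialize(self, accumulator):
        # Encode the accumulator so it can be written to disk.
        return repr(float(accumulator)).encode("utf-8")

    def deserialize(self, encoded_accumulator):
        # Invert `serialize`.
        return np.float64(encoded_accumulator.decode("utf-8"))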
| tf-keras/tf_keras/layers/preprocessing/preprocessing_test_utils.py/0 | {
"file_path": "tf-keras/tf_keras/layers/preprocessing/preprocessing_test_utils.py",
"repo_id": "tf-keras",
"token_count": 3307
} | 239 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the GaussianDropout layer."""
import numpy as np
import tensorflow.compat.v2 as tf
from tf_keras import backend
from tf_keras.engine import base_layer
from tf_keras.utils import tf_utils
# isort: off
from tensorflow.python.util.tf_export import keras_export
@keras_export("keras.layers.GaussianDropout")
class GaussianDropout(base_layer.BaseRandomLayer):
"""Apply multiplicative 1-centered Gaussian noise.
As it is a regularization layer, it is only active at training time.
Args:
rate: Float, drop probability (as with `Dropout`).
The multiplicative noise will have
standard deviation `sqrt(rate / (1 - rate))`.
seed: Integer, optional random seed to enable deterministic behavior.
Call arguments:
inputs: Input tensor (of any rank).
training: Python boolean indicating whether the layer should behave in
training mode (adding dropout) or in inference mode (doing nothing).
Input shape:
Arbitrary. Use the keyword argument `input_shape`
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.
Output shape:
Same shape as input.
"""
def __init__(self, rate, seed=None, **kwargs):
super().__init__(seed=seed, **kwargs)
self.supports_masking = True
self.rate = rate
self.seed = seed
def call(self, inputs, training=None):
if 0 < self.rate < 1:
def noised():
stddev = np.sqrt(self.rate / (1.0 - self.rate))
return inputs * self._random_generator.random_normal(
shape=tf.shape(inputs),
mean=1.0,
stddev=stddev,
dtype=inputs.dtype,
)
return backend.in_train_phase(noised, inputs, training=training)
return inputs
def get_config(self):
config = {"rate": self.rate, "seed": self.seed}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
@tf_utils.shape_type_conversion
def compute_output_shape(self, input_shape):
return input_shape
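# Minimal usage sketch (illustrative values): the layer is the identity at
# inference time and multiplies activations by 1-centered Gaussian noise
# during training.
#
#   layer = GaussianDropout(0.2, seed=1)
#   x = tf.ones((2, 3))
#   layer(x, training=False)  # returns `x` unchanged
#   layer(x, training=True)   # `x` scaled elementwise by samples from
#                             # N(1, 0.5**2), since sqrt(0.2 / 0.8) = 0.5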
| tf-keras/tf_keras/layers/regularization/gaussian_dropout.py/0 | {
"file_path": "tf-keras/tf_keras/layers/regularization/gaussian_dropout.py",
"repo_id": "tf-keras",
"token_count": 1045
} | 240 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Contains the Permute layer."""
import copy
import tensorflow.compat.v2 as tf
from tf_keras.engine.base_layer import Layer
from tf_keras.engine.input_spec import InputSpec
# isort: off
from tensorflow.python.util.tf_export import keras_export
@keras_export("keras.layers.Permute")
class Permute(Layer):
"""Permutes the dimensions of the input according to a given pattern.
    Useful e.g. for connecting RNNs and convnets.
Example:
```python
model = Sequential()
model.add(Permute((2, 1), input_shape=(10, 64)))
# now: model.output_shape == (None, 64, 10)
# note: `None` is the batch dimension
```
Args:
dims: Tuple of integers. Permutation pattern does not include the
samples dimension. Indexing starts at 1.
For instance, `(2, 1)` permutes the first and second dimensions
of the input.
Input shape:
Arbitrary. Use the keyword argument `input_shape`
(tuple of integers, does not include the samples axis)
when using this layer as the first layer in a model.
Output shape:
Same as the input shape, but with the dimensions re-ordered according
to the specified pattern.
"""
def __init__(self, dims, **kwargs):
super().__init__(**kwargs)
self.dims = tuple(dims)
if sorted(dims) != list(range(1, len(dims) + 1)):
raise ValueError(
"Invalid permutation argument `dims` for Permute Layer. "
"The set of indices in `dims` must be consecutive and start "
f"from 1. Received dims={dims}"
)
self.input_spec = InputSpec(ndim=len(self.dims) + 1)
def compute_output_shape(self, input_shape):
input_shape = tf.TensorShape(input_shape).as_list()
output_shape = copy.copy(input_shape)
for i, dim in enumerate(self.dims):
target_dim = input_shape[dim]
output_shape[i + 1] = target_dim
return tf.TensorShape(output_shape)
def call(self, inputs):
return tf.transpose(inputs, perm=(0,) + self.dims)
def get_config(self):
config = {"dims": self.dims}
base_config = super().get_config()
return dict(list(base_config.items()) + list(config.items()))
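# For example (shapes as in the class docstring): with dims=(2, 1) and an
# input of shape (None, 10, 64), `call` applies
# tf.transpose(inputs, perm=(0, 2, 1)) and `compute_output_shape` returns
# TensorShape([None, 64, 10]).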
| tf-keras/tf_keras/layers/reshaping/permute.py/0 | {
"file_path": "tf-keras/tf_keras/layers/reshaping/permute.py",
"repo_id": "tf-keras",
"token_count": 1078
} | 241 |
# Copyright 2015 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Base class for RNN cells."""
from tf_keras.engine import base_layer
from tf_keras.layers.rnn import rnn_utils
# isort: off
from tensorflow.python.util.tf_export import keras_export
@keras_export("keras.layers.AbstractRNNCell")
class AbstractRNNCell(base_layer.Layer):
"""Abstract object representing an RNN cell.
See
[the TF-Keras RNN API guide](https://www.tensorflow.org/guide/keras/rnn)
for details about the usage of RNN API.
This is the base class for implementing RNN cells with custom behavior.
Every `RNNCell` must have the properties below and implement `call` with
the signature `(output, next_state) = call(input, state)`.
Examples:
```python
class MinimalRNNCell(AbstractRNNCell):
def __init__(self, units, **kwargs):
self.units = units
super(MinimalRNNCell, self).__init__(**kwargs)
@property
def state_size(self):
return self.units
def build(self, input_shape):
self.kernel = self.add_weight(shape=(input_shape[-1], self.units),
initializer='uniform',
name='kernel')
self.recurrent_kernel = self.add_weight(
shape=(self.units, self.units),
initializer='uniform',
name='recurrent_kernel')
self.built = True
def call(self, inputs, states):
prev_output = states[0]
h = backend.dot(inputs, self.kernel)
output = h + backend.dot(prev_output, self.recurrent_kernel)
return output, output
```
This definition of cell differs from the definition used in the literature.
In the literature, 'cell' refers to an object with a single scalar output.
This definition refers to a horizontal array of such units.
An RNN cell, in the most abstract setting, is anything that has
a state and performs some operation that takes a matrix of inputs.
This operation results in an output matrix with `self.output_size` columns.
If `self.state_size` is an integer, this operation also results in a new
state matrix with `self.state_size` columns. If `self.state_size` is a
(possibly nested tuple of) TensorShape object(s), then it should return a
matching structure of Tensors having shape `[batch_size].concatenate(s)`
    for each `s` in `self.state_size`.
"""
def call(self, inputs, states):
"""The function that contains the logic for one RNN step calculation.
Args:
          inputs: the input tensor, which is a slice of the overall RNN input
            along the time dimension (usually the second dimension).
          states: the state tensor from the previous step, with the same shape
            as `(batch, state_size)`. At timestep 0, it will be the initial
            state provided by the user, or a zero-filled tensor otherwise.
Returns:
A tuple of two tensors:
1. output tensor for the current timestep, with size `output_size`.
2. state tensor for next step, which has the shape of `state_size`.
"""
raise NotImplementedError
@property
def state_size(self):
"""size(s) of state(s) used by this cell.
It can be represented by an Integer, a TensorShape or a tuple of
Integers or TensorShapes.
"""
raise NotImplementedError
@property
def output_size(self):
"""Integer or TensorShape: size of outputs produced by this cell."""
raise NotImplementedError
def get_initial_state(self, inputs=None, batch_size=None, dtype=None):
return rnn_utils.generate_zero_filled_state_for_cell(
self, inputs, batch_size, dtype
)
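# Minimal usage sketch (illustrative): a concrete subclass such as the
# `MinimalRNNCell` from the class docstring is typically wrapped in
# `tf.keras.layers.RNN`, which unrolls the cell over the time dimension:
#
#   cell = MinimalRNNCell(32)
#   layer = tf.keras.layers.RNN(cell)
#   outputs = layer(tf.ones((4, 10, 8)))  # -> shape (4, 32)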
| tf-keras/tf_keras/layers/rnn/abstract_rnn_cell.py/0 | {
"file_path": "tf-keras/tf_keras/layers/rnn/abstract_rnn_cell.py",
"repo_id": "tf-keras",
"token_count": 1634
} | 242 |